
measures permit discriminations of utility for equivalent performance. For example, at the same level of technical performance, the utility of the decoy in the $2 million suite on a cruiser in the open sea scenario might be far greater than that in the $500,000 suite on a destroyer in the amphibious scenario.

The use of scenarios in evaluation, especially in MAU modeling, is necessary as an economical means of predicting and evaluating performance, and of reducing to a manageable figure the infinite number of details required to describe an infinite number of possible futures. Scenarios serve, then, as classifying abstractions; that is, sets of generalized features which characterize a class of possible futures. For purposes of modeling, even this greatly reduced number of generalized descriptions is neither economical nor manageable. Accordingly, a selection of scenarios must be made to ensure that the sampling is representative of situations in which the suites are likely to be employed on the one hand and that the selection discriminates among the design proposals on the other. The procedures for selecting an economical and manageable number of scenarios that are at once representative and discriminating raise difficult methodological questions. Nevertheless, the procedures have been sufficiently developed and refined to enable the selection of scenarios that can serve as a basis for dependable evaluation.8

Quantifying the Model

The remaining four steps (defining the element relationships, establishing element boundaries, developing effectiveness curves, and determining element weights) involve a highly iterative process. It goes on continuously and leads to refinements of initial quantifications until the model is satisfactory to those who must use it. Since the model must generate figures indicative of the relative TSU of design proposals, the relationships among the elements, as well as their measures of utility and effectiveness, must be expressed mathematically.

"The cost-per-unit ceiling and the general absence of performance requirements make evaluating diverse design proposals more difficult than it has been in the past, since all contractors are likely to propose designs meeting the cost-per-unit ceiling."

One of the most important steps in developing the MAU model involves a determination of the rules, or formulae, for representing element relationships within the model. The elements subordinate to the same immediate mediating factor combine to determine its worth (a term used to encompass either utility or effectiveness). The worth of the decoy in the open sea scenario (see Figure 3) is determined by the aggregated worths of probability of availability, sensor effectiveness, effector effectiveness, and system reaction time. The rule for representing the relationship among them must validly capture the manner in which they affect the performance of the decoy in the open sea scenario. In short, the combination rules define the principles of aggregating the worths of elements throughout the MAU model; they capture the value independence of the worths of the elements or the nature of their value dependency. The relative simplicity of the rules themselves belies their sophistication; accordingly, this article avoids the long and technical discussion necessary to explain either the rules or the concepts of value dependence or independence.9

Once the combination rules are established (remember that they are amended as necessary as the next two steps are taken), it is necessary to establish element boundaries on ranges of levels of performance of controllable parameters. The purpose of such boundaries is twofold: they limit the range of performance to be measured to plausible dimensions, and they permit the importance of that performance to be gauged realistically. Too often the importance attached to a specific kind of performance is inflated because it is considered universally, without regard to the range of technically and economically feasible levels of performance likely to be suggested by design proposals. This is one point at which the impact of a design-to-cost procurement policy is felt. Thus, whereas an expert may wish the sensor of an EW suite to have a large capacity to detect discrete targets and avoid saturation, he may set a range of targets prior to saturation from zero to 25 and make other judgments with respect to that range, like those for effectiveness curves and weights. However, if design proposals drawn up in light of the cost-per-unit ceiling are likely to have capacities from four to ten targets prior to saturation, it is unlikely that the expert will find much difference among the proposals in this respect or attach the proper degree of importance to it. Accordingly, experts must set boundaries based on their best estimates of what range is technically and economically feasible.

8 The subject is discussed theoretically in Michael F. O'Connor and Ward Edwards, "On Using Scenarios in the Evaluation of Complex Alternatives," Technical Report 76-17, Decisions and Designs, Inc., McLean, VA, December 1976. In this case, the Navy reduced an original selection of eight scenarios to four and tested the results obtained from the MAU model by comparing them with the results of a much-reduced number of simulation runs.

9 See Hays, O'Connor, and Peterson, "Application of Multi-Attribute Utility Theory," pp. 21-25.

Completing the quantification of the model involves two more iterative steps: developing effectiveness curves and determining element weights. It is necessary first to translate measures of performance of controllable parameters into levels of effectiveness expressed as percentages, then to assess the importance of the bounded ranges of performance. The translation is achieved by having the appropriate experts assess the levels of effectiveness (on a scale of zero to 100) for varying levels of performance within previously defined boundaries. The result is an effectiveness curve, a graphic representation of the relationship between levels of performance with respect to a mediating factor and effectiveness (see Figure 4). The significance of an effectiveness curve is that performance is immediately related to, and cannot be considered apart from, the context in which it occurs. The curve itself means that performance is seen as a continuum over a range of levels of performance; a particular level of performance is but one possibility within the range of plausible levels, not an immutable point. Similarly, effectiveness varies with the range of levels of performance; many levels of performance within the previously specified boundaries have some, though less than optimum, effectiveness with respect to the mediating factor. Effectiveness, then, is an intraparametric measure of worth throughout the plausible range of levels of performance.

"One of the most important steps in developing the MAU model involves a determination of the rules, or formulae, for representing element relationships within the model."

The use of effectiveness curves has two benefits. First, an effectiveness curve avoids categorizing a particular level of performance as either acceptable or unacceptable, as the stipulation of system requirements demands. Specific requirements can, in fact, be represented by effectiveness curves in which effectiveness is at a minimum when the requirement is not satisfied and at a maximum when it is. Second, the particular shape of an effectiveness curve reveals those ranges of performance in which a slight change in performance may result in a significant change in effectiveness, for better or worse, or those ranges in which a significant change in performance may result in little or no change in effectiveness. The effect of changing levels of performance in terms of TSU can be readily displayed, and the shape of an effectiveness curve may often be diagnostic of that range in which a trade-off between performance and cost is desirable.
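The translation an effectiveness curve performs can be sketched as a small piecewise-linear interpolator over expert-assessed points, with performance outside the expert-set boundaries clamped to the endpoints. The parameter (sensor capacity) and every numeric value below are illustrative assumptions, not figures from the article's Figure 4.

```python
def effectiveness_curve(points):
    """Return a function mapping a performance level to effectiveness (0-100).

    `points` is a list of (performance, effectiveness) pairs spanning the
    expert-defined boundaries.  Performance outside the boundaries is
    clamped to the nearest endpoint, reflecting the bounded ranges the
    experts established earlier.
    """
    pts = sorted(points)

    def curve(x):
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                # linear interpolation between adjacent assessed points
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    return curve

# Illustrative: targets detected before saturation, bounded four to ten
# as in the article's saturation example; effectiveness values invented.
sensor_capacity = effectiveness_curve([(4, 10.0), (6, 55.0), (8, 85.0), (10, 100.0)])
```

The shape of the assessed points does exactly what the text describes: between six and eight targets a small capacity gain buys a large effectiveness gain, a range where a performance-cost trade-off might be attractive.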

The final step in this highly iterative process in developing an MAU model is to determine the relative importance to be assigned to subelements of the same component. As part of the quantification of the model, relative importance (determined by a process called weighting) is expressed as a decimal between 0.0 and 1.0. The purpose of weighting is to recognize the different contributions that subelement effectiveness makes to element effectiveness. Otherwise, the assumption is that the effectiveness of each subelement contributes equally to the effectiveness of the element, an assumption that is usually false.

The problem in weighting, then, is a methodological one. Since weights impinge upon the figures for effectiveness and the formula for aggregating them, the difficulty is twofold: how to assess weights, and how to incorporate them into existing but as yet incomplete combination rules for aggregating measures of effectiveness. Assessing weights depends upon the relative influence a change in the effectiveness of a subelement has upon the effectiveness of an element, an influence that may be affected by the relationship among the subelements. That is, weights assigned to subelements are a function of their effect upon the element as the effectiveness of each changes from minimum to maximum. Since the assessment of weights depends upon the nature of the combination rules, this technical discussion is also avoided.10
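Under the simplest case, a purely additive combination rule, weighting reduces to a weighted sum of subelement effectiveness. The article notes that real combination rules may encode value dependencies, so this is only a minimal sketch; the subelement names echo the decoy example, but the weights and effectiveness figures are invented.

```python
def weighted_effectiveness(weights, effectiveness):
    """Aggregate subelement effectiveness (0-100) into element effectiveness.

    `weights` and `effectiveness` are dicts keyed by subelement name;
    weights are decimals between 0.0 and 1.0 and must sum to 1.0, so a
    subelement's weight states its share of the element's worth.
    """
    total = sum(weights.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"weights must sum to 1.0, got {total}")
    return sum(weights[k] * effectiveness[k] for k in weights)

# Hypothetical figures for the decoy's four subelements in one scenario.
decoy = weighted_effectiveness(
    weights={"availability": 0.2, "sensor": 0.35, "effector": 0.3, "reaction_time": 0.15},
    effectiveness={"availability": 90.0, "sensor": 70.0, "effector": 80.0, "reaction_time": 60.0},
)
```

With unequal weights, the sensor's 70 counts for more than the reaction time's 60, which is precisely the discrimination that equal weighting would lose.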

Validating the Model

Once the MAU model is developed, two types of validation analyses are conducted. The first, internal validation, consists of examining the effects of variations in weights and effectiveness curves for purposes of identifying critical aspects of the model. This process, known as a sensitivity analysis, has shown that MAU models are usually dependable despite small errors in utility functions or weights. It is also typical that MAU models are sensitive to structural changes resulting from different interrelationships among model factors or to associated changes in combination rules. It is the structure of the model that either does or does not capture the true nature of the problem. Internal sensitivity analyses cannot, however, guarantee this type of validity.
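An internal sensitivity analysis of the kind described can be sketched as perturbing each weight slightly, renormalizing, and checking whether the top-ranked proposal changes. Everything numeric here (weights, proposals, the perturbation size) is a hypothetical illustration, not data from the Navy model.

```python
def score(weights, effectiveness):
    # additive aggregation of weighted subelement effectiveness
    return sum(w * effectiveness[k] for k, w in weights.items())

def ranking_is_stable(weights, proposals, delta=0.05):
    """Return True if nudging each weight by +/-delta (then renormalizing)
    never changes which proposal ranks first."""
    base = max(proposals, key=lambda name: score(weights, proposals[name]))
    for key in weights:
        for sign in (+1, -1):
            w = dict(weights)
            w[key] = max(0.0, w[key] + sign * delta)
            norm = sum(w.values())
            w = {k: v / norm for k, v in w.items()}
            best = max(proposals, key=lambda name: score(w, proposals[name]))
            if best != base:
                return False
    return True

weights = {"sensor": 0.4, "effector": 0.35, "reaction_time": 0.25}
proposals = {
    "design_A": {"sensor": 80.0, "effector": 70.0, "reaction_time": 60.0},
    "design_B": {"sensor": 60.0, "effector": 65.0, "reaction_time": 75.0},
}
stable = ranking_is_stable(weights, proposals)
```

A stable result here matches the article's observation that MAU rankings usually survive small errors in weights; what this check cannot detect is a structural error in the model itself.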

The second type of validation, external validation, provides evidence of the validity of the model structure by using known systems to calibrate the model. The model should assign to these known systems, whose record of performance has been well documented, values that are consistent with the generally accepted assessments of the systems based upon that record. If it does so, it helps to ensure that the model behaves as it should. If not, structural changes are required.11

10 Ibid., pp. 27-29.

11 In the actual external validation of the MAU model, it was found that not enough credit had been given the sensor for its ability to detect false alarms. The model was altered to account for this fact.


The implementation of such models on interactive computers also allows "what-if" type analyses that can accommodate proposed design changes. Similarly, the model can actually be used to search for those feasible configurations that have maximum utility.12
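The "search for feasible configurations that have maximum utility" can be sketched as a brute-force sweep over a small grid of candidate configurations, scoring each with the model. The stand-in utility function and parameter ranges below are invented for illustration and are far simpler than a full MAU model.

```python
import itertools

def utility(config):
    # Stand-in for evaluating one configuration with the full MAU model.
    # Illustrative assumptions: capacity bounded 4-10 targets, reaction
    # time in milliseconds, weights 0.6/0.4.
    capacity, reaction_ms = config
    capacity_eff = min(1.0, (capacity - 4) / 6.0)
    reaction_eff = max(0.0, 1.0 - reaction_ms / 500.0)
    return 0.6 * capacity_eff + 0.4 * reaction_eff

# Enumerate a tiny grid of feasible configurations and pick the best.
feasible = itertools.product(range(4, 11), [100, 250, 400])
best = max(feasible, key=utility)
```

The same loop supports "what-if" analysis: re-score a single proposed configuration change and compare utilities before and after.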

An example of what the completed model is like is suggested by that part of it for the effectiveness of the sensor of the decoy in the open sea scenario, as shown in Figure 5. Effectiveness curves would assess the percentages of effectiveness for the estimated levels of performance of the relevant parameters. These percentages would be multiplied by their weights, and related products would be combined according to the rule implied by the signs (here, by addition in all cases). The resulting figures for the effectiveness of the subelements or parameters relating them are, in turn, multiplied by their weights, and, again, the products are combined according to the rule implied by the signs. Note in this instance that the figures for three subelements and an additive constant are summed before multiplication by the subelement for detection. (The additive constant in this case says that the sensor has some, potentially 70 percent, effectiveness if it merely detects a target.) By a repetition of these steps throughout and up the model, figures are aggregated until they yield a single figure representing the military worth of an EW system design proposal. Theoretically, if an MAU model completely encompasses those considerations entering into a particular stage of the procurement cycle, a ranking of the figures should select those proposals for engineering development for prototype testing.

12 The MAU model was implemented on computers at the Naval Research Laboratories, Washington, DC, and was used both for numerous "what-if" type analyses in the procurement stages and for other design problems on related projects.
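The multiplicative pattern described for the sensor (a weighted sum of three subelements plus an additive constant, scaled by the detection subelement) might be sketched as follows. The subelement names, the weights, and the exact placement of the 70-percent constant are assumptions inferred from the text, not the actual Figure 5 formula.

```python
DETECTION_FLOOR = 0.7  # assumed: sensor keeps ~70% effectiveness on bare detection

def sensor_effectiveness(detection, subelements, weights):
    """Effectiveness (0-1 scale) of the sensor in one scenario.

    `detection` is the detection subelement's effectiveness (0-1);
    `subelements` and `weights` cover the three remaining subelements.
    Their weighted sum is scaled into the remaining 30% headroom, so a
    sensor that only detects still scores DETECTION_FLOOR * detection.
    """
    weighted_sum = sum(weights[k] * subelements[k] for k in weights)
    return detection * (DETECTION_FLOOR + (1.0 - DETECTION_FLOOR) * weighted_sum)
```

The multiplication by `detection` captures the value dependency the combination rules must express: whatever the other subelements contribute is worth nothing if the sensor detects nothing.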

Ultimate Benefit

The use of MAU models is not likely to be confined solely to the evaluation and selection of design proposals and prototypes in the procurement cycle. It should be clear that an MAU model can also be used to design a system and diagnose areas in which technological development is worthwhile or not. It is quite likely, therefore, that MAU models will be employed by contractors to assist in the design of other sophisticated and complex major weapon systems. When the use of MAU models becomes commonplace, the military services and contractors will be better able to discuss the performance of major military systems in a systematic manner. For one of the most significant advantages of MAU models is their potential for promoting effective communication. DMJ

DR. MICHAEL L. HAYS was Publications Manager of Decisions and Designs, Inc., at the time this article was written. In that position he edited technical reports on decision-analytical research and applications.

Dr. Hays holds B.A. and M.Ed. degrees from Cornell University and M.A. and Ph.D. degrees from the University of Michigan.

DR. MICHAEL F. O'CONNOR is a systems evaluation manager at Decisions and Designs, Inc., where he applies decision-analytic techniques to the evaluation of major decisions on military procurement, social programs, and water quality studies.

He holds a Ph.D. from the University of Michigan.
