Let us calculate.


In the previous chapter, I laid out the three pillars upon which the utility framework rests. In this chapter, I want to closely examine the role the Principle of Optimality plays in the utility framework. This principle expresses the notion that choices must bear some relationship to a potential optimum: we choose optimally (under microeconomic theory); should choose optimally when practicing ethics (if we accept ethical utilitarianism); or we should choose optimally to make our life go as well as possible (if we are interested in the dictates of decision theory).

I examine the role optimality plays in choice and explore the difficulties that arise for the utility framework from the regrettable fact that decisions must be made in a stream of time with uncertain future consequences. My key argument will be that the Principle of Optimality relies upon a theory of information gathering that does not reflect the structure of the problem of choice facing individuals. Making decisions in a world with an uncertain future requires reaching a conclusion about possible consequences; the utility framework has never given sufficient consideration to the difficulties this project entails. As a result, the Principle of Optimality emphasizes the act of choosing rather than setting the stage for a subsequent choice. In building the argument, I will address both the uncertainty of preferences and the uncertainty of events in the external world, though it is the latter uncertainty with which the economics literature has generally dealt.

My argument will begin with the difficulties that arise from uncertainty in preferences.

2.1 The Unexperienced Utility Problem

First to be considered is something I will term the unexperienced utility problem.1 Roughly stated, the core of the argument is this: decisions are always undertaken under conditions of uncertainty. This uncertainty stems from the regrettable necessity of taking decisions now whose outcomes will only be known later. One manifestation of this uncertainty is that an individual typically does not know the utility that will result from the vast majority of options in the feasible set. I am not referring to the difficulty that arises because one cannot predict the future state of the world; that problem will be addressed in the following pages. What I am referring to is the difficulty of not knowing what utility will be derived from a contemplated action. Some simple examples should make my meaning clearer. I might like playing tennis, and know this, but I might not know that I would enjoy playing golf even more unless I tried golf. Similarly, I might like reading Tolstoy, but not know that I would also enjoy reading Dostoevsky. In both cases, because I cannot know the potential utilities of untried choices, my utility is lower than it would be were I to know the utility all states would provide. A further example is that of a friend who always orders the pepper steak at his favorite restaurant, for fear that any other dish will disappoint him. He does not know for a fact that this will happen, however, since he has never tried anything else on the menu. An unlimited number of such examples can be imagined. We repeatedly spend our vacations in the sun because we have little experience with touring. Regret may follow a career decision of investment banking because it is difficult to know ex ante that hospital administration would have been preferred. Lessons in squash are not sought, since one does not know how much enjoyment can be derived from an improved game. These are situations of choice that are common to daily life.

The difficulty here arises from a not unwelcome source: the number and diversity of activities in which we can potentially engage is very much greater than our capacity for doing so. At a given moment of decision, then, one must be unaware of the utility payoff that many feasible decisions would bring. The results, I suggest, are manifold: regret, unnecessary restriction of the alternatives chosen, reliance upon spurious factors in making decisions, and “status quo bias,” to use Samuelson and Zeckhauser’s term [102].

For the most part, the bodies of thought relying upon the utility framework have had little to say regarding the unexperienced utility problem. If we look to microeconomics, practitioners generally have been happy to assume that preferences are perfectly known.2 Ethical utilitarianism and decision theory fare no better. Utilitarianism, strongly beleaguered by other ethical theories, does not need to provide attackers with the additional ammunition of the unexperienced utility problem. And decision theory, already wrestling with sufficiently complex issues, generally ignores the further troubles introduced above.

Nonetheless, the unexperienced utility problem is a real difficulty, and some thinkers have attempted to address the additional complications it raises. The claim on the part of these defenders of the utility framework is that the individual gathers information in some sort of optimal manner in order to determine actual preferences. The “some sort of optimal manner” condition is required to satisfy the Principle of Optimality; otherwise, we slide into the morass of possible satisficing models, and the key benefit of the utility framework (its ability to yield a solution) disintegrates.

Becker [4:6-7] recognizes the necessity of this optimal gathering of information, and makes this point succinctly in putting forth his case for the “economic approach to human behavior”:

The economic approach does not assume that all participants in any market necessarily have complete information . . . Incomplete information . . . should not be . . . confused with irrational or volatile behavior. The economic approach has developed a theory of the optimal or rational accumulation of costly information that implies, for example, greater investment in information when undertaking major than minor decisions . . .

He does not explicitly consider the problem of uncertainty of preferences in this work; uncertainty of preferences certainly falls, however, under the rubric of “incomplete information.”3

The solution that this tack suggests, that of the “optimal or rational accumulation of costly information,” is, unhappily for the defenders of the utility framework, no solution at all. As pointed out at the beginning of this section, the individual cannot know the utility that will result from the activity in question until the completion of the activity. No amount of information gathering, short of actually experiencing the activity in its entirety, will suffice to determine the utility of the unknown activity. As elsewhere, gathering a sample would be useful in this regard; the difficulty lies in specifying what form such a sample could take without actually experiencing the activity for which the sample is to provide information. If one has not experienced the activity, then one cannot know how to properly design the sample on which the subsequent decision will be based. The bottom line, then, is this: one cannot weigh the cost of information against the benefit of information, and thereby gather the optimal amount of information, since one neither knows the benefit nor has any technique for appropriately estimating it.4

We can conclude that the rules by which one decides to undertake unexperienced activities and to gather information regarding potential activities can never be optimal, at least if optimal is taken to mean more than the rules currently being followed. Any set of rules can only be a heuristic. This is not to say that such a heuristic cannot be rational; the difficulty is rather to specify what is meant by rationality in such a situation. The simplistic rationality criterion, equating expected marginal benefit with marginal cost (the Principle of Optimality), does little to illuminate the decision-maker’s predicament and the claim of optimality is empty.5
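The stopping rule that this criterion prescribes can be written out explicitly (the notation below is mine, added for illustration, not taken from the literature the text cites):

```latex
% Optimal information gathering: after n observations, acquire one more
% only while its expected marginal benefit exceeds its marginal cost.
\[
  \text{continue while}\quad \mathbb{E}\,[\,MB(n+1)\,] > MC(n+1),
  \qquad
  \text{stop when}\quad \mathbb{E}\,[\,MB(n)\,] = MC(n).
\]
```

The point pressed in this section is that, for unexperienced activities, the expectation on the left-hand side cannot be formed at all; the rule is tidy but has nothing to operate on.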

It is worth noting that the assertion of some form of unspecified evolutionary pressure cannot eliminate the unexperienced utility problem. An evolutionary learning process with a stable utility function might conceivably uncover most of the underlying preferences, but there is no guarantee, nor even any presumption, that this would be the case. The problem is simply that with respect to consumer behavior (such as choosing whether to vacation in Maui or fish in Wisconsin), there seems to be no compelling pressure for evolution to converge on the optimal rule. There is no reason why one cannot go through life without realizing that one would have preferred to become an expert wine-taster rather than a mediocre gardener. If we are going to make the claim of optimality, it cannot arise from the force of some type of evolutionary pressure. Indeed, it often seems that the word “evolutionary” is invoked in economics as a catchall to describe processes we do not yet understand.

I now wish to draw a distinction between two ways in which this phenomenon can be understood. The first is that preferences are already in existence and the information-gathering activity is a search to uncover them. In sampling different goods or experiences, my knowledge of my preferences slowly converges to the actual, but not completely known, preferences. An adaptive learning process might be powerful enough to converge on actual preferences. This is the idea upon which Cyert and DeGroot [12] expand by taking the rational updating of probabilities that is the hallmark of Bayesian analysis and applying it to the utility function.6

The second interpretation of the future utility problem is a much richer one, and presents some additional complications for the proponents of the utility theory. The issue here is not one of uncovering preferences that are already in existence, but one of gathering information with respect to preferences that are themselves evolving. This case seems to be far more common than that of discovering pre-existing preferences.7

I have in mind those activities that require a considerable investment in the learning of a specialized skill before one derives much utility from the activity; that is, activities for which a heavy investment in consumption skills (borrowing a term from Scitovsky [75]) is necessary. These would include various artistic endeavors (learning an instrument, voice training, dancing, and painting), sports (particularly those that require a high degree of skill before they are enjoyable, such as tennis, golf, squash, and wind surfing), various capacities for appreciation (ballet, opera, music), and aesthetic appreciation in general. All these activities must be learned before pleasure can be derived. The act of learning shapes future preferences (in addition to changing the feasible set).

Playing tennis and playing tennis well are obviously two different things. Moreover, knowing how to play bad tennis may be little indication of the utility that will be derived from playing well. Similarly, appreciating fine wines is a pleasure that is difficult to savor from the vantage point of quaffing Chateau le Grape. Predictions about the evolution of preferences are, for this reason, likely to be inaccurate and possibly biased in a systematic fashion. The relationship between the amount of effort invested and the payoff obviously turns on some factors that can be gauged in advance: the disposition of the individual to pick up the skill, the number of similar skills already acquired (as when learning a new language), and the effort applied. Yet if the skill or talent or capacity for appreciation is truly unlike other activities that have been previously encountered, then estimating precisely the future utility of the activity is both a conceptual impossibility and a practical stumbling block to maximizing utility.8

We need not draw all examples from the realm of aesthetics. The acceptance of a certain career choice, for instance, is going to bring with it changes in preferences that cannot be predicted in advance: business success may bring one to despise the futility of the rat race or to increasingly savor the excitement of the big deal; the decision to forgo children may be regretted too late, as the joy that contemporaries’ offspring bring becomes visible and the relative emptiness of affluence becomes painfully apparent.9

This discussion should also partially restrain what is usually termed “rational character planning”: the deliberate aiming at certain preferences to improve overall utility. We can certainly plan our character, but we cannot do so optimally. Some things we may surmise we will dislike or like: being an alcoholic or drug addict, or being healthy, or a grandmother. Nonetheless, we can never know for certain, and such mistakes can be irreversible.10

This section can be summarized by noting that the unexperienced utility problem (the difficulty of not knowing in advance the utility that a particular outcome will yield) creates a predicament for the Principle of Optimality underlying the utility framework. For the most part, bodies of thought relying upon the utility framework have been content to ignore this quagmire and assume that future preferences are fully known. Those theorists who have tried to wrestle with the problem have appealed to some version of the optimal gathering of information argument.

This tack is unsuccessful: the issue has simply been pushed back one stage, and the decision-maker must still find a mechanism for deciding when an optimal information search should end. This may be possible for activities with which one has had a great deal of experience, but it cannot apply to novel activities. And since the world presents one with many more options than one could ever take advantage of, our decision-maker is trapped. The notion of optimality disintegrates.11

We have thus reached an impasse. The Principle of Optimality and the unexperienced utility problem are fundamentally incompatible. And since the structure of the decision-maker’s problem of choice logically implies the unexperienced utility problem, the Principle of Optimality is going to have to give some ground. In the final chapter, I will have more to say about precisely what ground will need to be relinquished. In the following section, I examine a second issue, one that is structurally identical to the unexperienced utility problem: the future events problem. In addition to not knowing their preferences for unexperienced activities, decision-makers also face the difficulty of not knowing future events. This latter problem has generally been of more interest to academics (despite the parallels between the two problems) and hence should be more familiar.

2.2 The Future Events Problem

Previously, I considered the decision problem from the vantage point of an individual trying to determine what levels of utility would be received when the outcome could be predicted but the resulting utilities were less clear. Now, I turn the question around to the more familiar issue of how the individual makes decisions in an uncertain world. As this problem has traditionally been of more concern to those relying upon the utility framework, there is less need to motivate interest in the issue by starting from first principles.12 In the previous section, I began by describing situations that presented difficulties for the decision-maker because the utility of possible outcomes was not fully known. In this section, the reader needs little convincing that a decision-maker does not always know what the future outcome will be, for experience soon demonstrates that the world does not always unfold as previously believed.

First, a description of the decision situation. Unlike the case in the previous section, there is no uncertainty about the utilities that would be obtained from various outcomes in the world. The only issue, then, is the uncertainty that arises from not knowing precisely what the future will bring. Further, in order to fix our minds on something tangible, let us suppose that the individual consciously strives to maximize his expected utility (that is, we are essentially assuming that our individual has taken the dictates of decision theory to heart and is trying to do as well as he can). Traditionally, the task for our decision-maker has been viewed as one of arriving at the relevant probabilities of future events for the purposes of decision making. Given these probabilities, and the required preference ranking of outcomes (assumed ex hypothesi), the expected utility theorem will enable the individual to make the optimal choice.
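The decision situation just described can be made concrete in a brief sketch. Everything below is illustrative: the states, probabilities, and utility numbers are my own assumptions, chosen only to show how, once the probabilities and the preference ranking are given, the choice reduces to arithmetic.

```python
def expected_utility(utilities, probabilities):
    """Expected utility of one action: sum over states of p(state) * u."""
    return sum(p * u for p, u in zip(probabilities, utilities))

def optimal_choice(actions, probabilities):
    """Return the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a], probabilities))

# Two hypothetical future states ("boom", "bust") with subjective
# probabilities 0.6 and 0.4, and a utility for each action in each state.
state_probs = [0.6, 0.4]
actions = {
    "invest": [100, -50],  # high payoff in a boom, painful in a bust
    "save":   [30, 30],    # the same modest payoff either way
}
best = optimal_choice(actions, state_probs)  # expected utilities: 40 vs. 30
```

The difficulty this chapter presses is that the inputs to this little calculation, the probabilities and the utilities, are precisely what the decision-maker does not have.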

The term probability is fraught with an unusual amount of ambiguity, so specifying that the decision-maker must settle upon probabilities still leaves him with a fair amount of difficulty. At least three views of probability (empirical, objective, and subjective) have been developed in the relevant literature. Proponents of theories based on the utility framework, at least in the past twenty years, have generally depended upon the subjective view, and I will confine my remarks, at least in the body of the text, to this view.13

Subjective probabilities are interpreted as the degree of belief an individual holds in a proposition. To give this notion content, there must be a procedure for obtaining these subjective probabilities. To begin, any procedure that does not rely upon an action in the real world as evidence of a subjective probability can be ruled out. That is, ephemeral degrees of conviction or feelings on the part of the decision-maker are not of interest to us; such mental entities are far too slender a basis upon which to build a theory of choice. Thus, subjective probabilities are entities whose efficacy can be shown in the world. The method by which this efficacy is demonstrated is a series of hypothetical wagers, as first properly worked out by Ramsey [61] and later formalized by Savage [67].14 An example from Elster15 [17:129] illustrates the notion:

Have our decision-maker consider the following two options:

A: If event E happens, reward R is received. If E does not happen, nothing is received.

B: If a red ball is drawn at random from an urn containing P percent red balls and 100-P percent black balls, R is received. If a black ball is drawn, nothing is received.

Assume that for an initial value of P the individual prefers option A; that is, expected utility is maximized by option A at this initial value of P. Then we can say that a subjective probability of greater than P is assigned to event E. By varying the mix of red and black balls, that is, by varying P, we can use this procedure to elicit the subjective probability to any desired accuracy.
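The varying-the-urn procedure amounts to a bisection search over P. The sketch below simulates it under an obvious assumption: the agent answers every hypothetical wager consistently with a fixed (but hidden) degree of belief. The function names and the 0.37 figure are mine, for illustration only.

```python
def elicit(prefers_A, tol=1e-6):
    """Locate the agent's subjective probability of E by bisection on P.

    prefers_A(P) reports whether the agent prefers option A (the bet on E)
    to option B (the urn with fraction P of red balls)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        P = (lo + hi) / 2
        if prefers_A(P):
            lo = P   # A preferred: subjective probability of E exceeds P
        else:
            hi = P   # B preferred: subjective probability of E is below P
    return (lo + hi) / 2

# Simulated agent whose hidden degree of belief in E is 0.37.
agent = lambda P: 0.37 > P
estimate = elicit(agent)  # converges to roughly 0.37
```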

So far, no structure has been imposed on the subjective probabilities. The decision-maker, however, has such subjective probabilities over a great number of possible events, and a requirement of internal coherence is usually deemed desirable. By coherence, it is meant that one cannot make a sure (or “Dutch”) book against the decision-maker by using the latter’s subjective probabilities.16
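A minimal illustration of what incoherence costs, with stakes and prices of my own choosing: if the agent's probabilities for an event and its complement sum to more than one, a bookmaker who sells him both bets at his own prices collects more than any possible payout.

```python
def dutch_book_profit(p_E, p_not_E, stake=1.0):
    """Bookmaker's guaranteed profit from selling the agent a bet on E and a
    bet on not-E, each priced at the agent's probability times the stake.

    Exactly one of the two bets pays out `stake`, whichever way E turns out,
    so the profit below is certain."""
    prices_collected = (p_E + p_not_E) * stake
    payout = stake
    return prices_collected - payout

# Incoherent beliefs: P(E) = 0.7 and P(not E) = 0.5 sum to 1.2, so the
# bookmaker locks in about 0.2 per unit staked no matter what happens.
sure_profit = dutch_book_profit(0.7, 0.5)

# Coherent beliefs (summing to 1) leave no sure profit.
no_profit = dutch_book_profit(0.6, 0.4)
```

If the probabilities instead sum to less than one, the bookmaker simply takes the other side, buying the bets at the agent's prices rather than selling them.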

The import of the coherence requirement turns upon the role which the utility framework is being asked to play. If our starting point is microeconomics, coherence implies that decision-makers generally construct the subjective probabilities in such a way as to satisfy this constraint. If our starting point is decision or game theory, the exercise of eliciting these subjective probabilities can reveal inconsistencies which the decision-maker should endeavour to patch up. As long as the inconsistency vanishes, how such a patching up is done is up to the individual.

The difficulty faced by this solution to the future events problem, however, is that the conditions governing the choice of subjective probabilities are far too easily satisfied to ensure an optimal, or even an intelligent, decision. I can believe that Canada will win the America’s Cup, the Toronto Maple Leafs the Stanley Cup, the Brazilian economy will strongly rebound, Ivan Boesky will become the next President of the United States, and a Hopi dance will bring a rain of egg-beaters, all with equal impunity as far as the subjective probability version of the expected utility theory goes.

I must adjust my subjective probabilities as my interlocutor shows that my belief that the yen will rise is incompatible with my belief that mice are conducting a widespread experiment with people, but the adjustments to be made are up to me and are a second-order issue. It’s difficult not to acknowledge that actual decision-makers use probabilities that satisfy conditions stronger than those imposed by a theory of subjective probability. We all use probabilities that are not only appropriate by the standards of consistency of hypothetical wagers, but that also satisfy stronger conditions of rationality. A belief that Polynesia is going to take over Canada certainly conforms to the formal constraints imposed on subjective probabilities, but I do not entertain this belief for many reasons that, in my judgment, appear to be decisive. Thus, to say that my probabilities satisfy the formal axioms is too weak a description of the situation in which decision-makers find themselves.

The requirement of optimality, therefore, remains unfulfilled (unless we merely redefine optimal to fit the theory at hand). And the spirit of the Principle of Optimality underlying the utility framework is grotesquely violated by the weakness of the formal constraints imposed on the choice of subjective probabilities. Paraphrasing Wittgenstein, to think that one is following a rule is not the same as to follow a rule [100:paragraphs 202-216].

Most proponents of the utility framework have been content to rely upon a theory of subjective probabilities, a reliance, as I have just demonstrated, that is incompatible with the Principle of Optimality. A move to a model of optimal information gathering is an improvement on using subjective probabilities as a solution to the future events problem. Indeed, any type of information gathering is an improvement on subjective probabilities from this standpoint.

Nevertheless, the considerations of Section 2.1 still apply. The optimal gathering of information to determine how the world will unfold presumably runs into the same difficulties as information gathering to determine unexperienced preferences. I can go to business school thinking that I will be one of the lucky ones who joins a consulting firm; instead, I could end up as a purchasing manager for a tire factory, wishing that I had gone into dentistry.

The difficulties encountered by the Principle of Optimality can now be precisely pinpointed. Again, it should be noted that both problems, unexperienced preferences and future events, arise from a lack of full information. To make decisions, the puzzled decision-maker must carry out two separate activities: (i) determine preferences and probabilities and (ii) make decisions. Imagine that the process of gathering the relevant information has been completed, and our decision-maker is presented with these facts and asked to choose. There is little doubt that he would do a pretty good job, good enough to label his choice “optimal.” Deviations can be explained by appealing to that old standby, the psychic costs of thinking, or to time constraints, or to a whole host of factors that militate against finding the single optimum.

What is the role of the Principle of Optimality and the utility framework in this decision sequence, then? It is the following: given the relevant probabilities, and given the relevant preferences, the decision-maker makes the appropriate choice (under microeconomic theory), can choose the correct ethical path (under ethical utilitarianism), or can be taught how to make better decisions (using the lessons from decision theory). The utility framework, thus, only starts from the point where a stand has been taken on unexperienced utilities and future events.

The difficulty for the Principle of Optimality, however, is that one is given neither the relevant probabilities nor the relevant preferences. These are entities about which the decision-maker is uncertain and for which the relevant information must be gathered; and, on the argument of this chapter, the concept of optimality is inapplicable to that gathering. The utility framework has cut into the problem halfway. The prior activity of building the model defining unexperienced utility and future events is equally relevant to the eventual quality of decision making and, in my view, much more difficult. The issue, therefore, is not whether the decision-maker has made the appropriate decision given his view of unexperienced utility and future events. Rather, the issue is to determine the basis necessary to actually generate the predictions of utility and events required by the decision-maker. Whether the individual actually chooses optimally or not is secondary. The utility framework has been conspicuously silent on this point.17 In the subsequent section, I elaborate on this notion of a two-part decision process and begin to describe the distinctions in approach between the activity prior to choosing probabilities and preferences and the one that occurs afterward.

2.3 Judgment and Calculation

This section begins by reviewing Knight’s18 [38:233] crucial distinction between risk and uncertainty:

The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.

A risky situation is one in which the probabilities of the possible outcomes are not themselves subject to much doubt. Thus, while the outcome itself is unknown, it can be asserted with confidence that the ex ante probabilities of the various outcomes are known. Uncertainty refers to the situation in which probabilities can be assigned, but little confidence can be placed in them.

Some examples will clarify this distinction. Rolling a fair die is a risky situation: each of the six faces has an equal probability of turning up. So while one cannot know which number will turn up, one is confident that the number 3 will turn up one-sixth of the time. A less artificial situation might be predicting the outcome of a tennis match between two players who have played many times and have split their matches evenly. With justification, this situation could be considered risky, with probabilities of one-half of winning for each player.

As an example of an uncertain situation, consider trying to compute probabilities that the American Olympic ice-hockey team will win the Gold Medal in Albertville in 1992. What would be the procedure that could produce such a probability? Would one look at past Olympics, assuming that these would somehow be “like” the upcoming Games? It is this “likeness” that one relies upon, of course, to give confidence in an estimate. How far back does one go? 1980? 1972? 1964? What could be grounds for determining which extrapolation would be the most accurate?

As another example, consider the argument that the recent economic events in the developed world are similar to those of the period leading up to the great depression of the 1930s, with the October 1987 crash simply serving as the definitive evidence that we are rushing toward global economic disaster. The implication is that, as a result of this similarity, we should be wary of the grave economic consequences that will result from the current activity.

In both of these examples, a number that expresses the probability of possible events can certainly be computed, but the fact that one can produce such probabilities does not ensure confidence in them.19

Knight puts his distinction to use by explaining the existence of profits in a capitalist economy. In the first chapter of Risk, Uncertainty, and Profit, Knight demonstrates how the inexorable logic of the businessman’s challenge leads to the insight that economic profits (that is, profits above the risk-adjusted time-value of money) arise only in situations in which the future is truly uncertain [38:46]:

If risk were exclusively of the nature of a known chance or mathematical probability, there could be no reward for risk-taking. For if the actuarial chance of a gain or loss in any transaction is ascertainable, either by calculation a priori or by the application of statistical methods to past experience, the burden of bearing the risk can be avoided. . .

Known risks can always be converted into certainties through the judicious design of economic institutions.20

Nor do changes in the economic environment necessarily lead to opportunities for extraordinary profits. If the change is anticipated, this will be reflected in the underlying price of the asset – as all speculators should know. Nor can change that is unknown, but can be assigned probabilities, lead to profits, since such risks can be diversified away.21

In the second category of decisions – those in which such an objective frequency does not exist – can be placed most of the true determinants of business success: whether it is worthwhile to invest in innovation in the railroad industry, whether Argentina exhibits enough political stability to place a factory there, whether a cost-cutting plan can succeed without destroying corporate morale, whether business property values will rise because a state might pass right-to-work legislation, whether entering the home video market will be a good long-term strategy. It is these types of decisions, in which the required probabilities are not known, that are key for business success, since it is only from sound judgment here that economic profits arise.

Using this distinction between risky and uncertain situations, some remarks should be made on how a decision-maker must approach each category of decisions.

Decisions in risky situations, with known probabilities, can be made exclusively with the use of purely analytical methods. These types of decisions will be called calculated. Decisions in uncertain situations, however, generally cannot be made with analytical tools alone, precisely because the probabilities upon which such techniques rest are unknown; this follows, ex hypothesi, from the existence of uncertainty in these cases. Moreover, the decision-making difficulty in such cases is more acute than in those of calculated decisions simply because such a technique is not available. These types of decisions will be called judged.

The claim is not that decisions requiring judgment are merely shots in the dark, but that objective standards of measurement, which can be used for calculated decisions, are not available. With future events known, or with the objective probabilities of future events known, the issue facing a decision-maker is reduced to the relatively mundane technical problem of choosing the correct alternative. Calculated decisions are, thus, completely deductive. To be sure, this task might require a certain degree of expertise, and undoubtedly many decision-makers will fall short of the optimum. An example is given in a recent paper by Johnson et al. [33]. In this paper, individuals were presented with a known scenario that included incomes, life span, and utility functions, and were asked to choose the most preferred consumption path. The right answer in this situation is the consumption plan that maximizes lifetime utility. Not surprisingly, many of the surveyed individuals were unable to perform the calculation correctly – though perhaps, had the effort of the computation been weighed explicitly against the payoffs, the respondents’ answers would have appeared closer to the true optimum.

The well-known survey edited by Kahneman, Slovic and Tversky [34] is also an illustration of problems that creep into the deductive stage of decision making – despite its misleading title. In another Tversky and Kahneman article [87], subjects were asked to update the probabilities of an individual being an engineer or a businessman given a certain description of the individual. As Kahneman and Tversky describe this problem, it is deductive: the information provided was of no value, and the prior probabilities should not have been updated. Thus, a failure to arrive at the correct solution is a failure of calculation and not, as the authors imply, a failure of judgment.
A similar ambiguity can be noted in their use of the term “uncertainty.” Undoubtedly, they mean risk, since the relevant probabilities can be precisely determined. By applying the terms “deductive” and “calculated” to such risky decisions, I imply that there is relatively little action occurring here. Were decisions under risk the rule, the specifically entrepreneurial function would disappear, and the world could be given over to the technocrats.
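The purely deductive character of such a calculated decision can be conveyed by a toy consumption-path problem. The figures, the log utility function, and the absence of discounting below are my own illustrative assumptions, not features of the cited study:

```python
import math

def lifetime_utility(plan):
    """Total utility of a consumption plan under log utility, no discounting."""
    return sum(math.log(c) for c in plan)

def optimal_plan(wealth, periods):
    """With log utility and no discounting, lifetime utility is maximized
    by perfect smoothing: equal consumption in every period."""
    return [wealth / periods] * periods

wealth, periods = 120.0, 4
best = optimal_plan(wealth, periods)     # consume 30 in each of 4 periods
lopsided = [60.0, 40.0, 15.0, 5.0]       # same total wealth, unsmoothed

assert abs(sum(lopsided) - wealth) < 1e-9
assert lifetime_utility(best) > lifetime_utility(lopsided)
```

Even in this deliberately simple setting, arriving at the smoothed plan is a matter of computation alone; the right answer is fixed once the data are given, which is what makes the decision calculated rather than judged.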

In the case of those decisions in which the probabilities are not available – that is, in which genuine uncertainty prevails – the decision-maker cannot simply compute the optimal decision from the given probabilities. Something further is required here. And the precise form of this “something further” is what we need to explore. Applying the term “judged” to those decisions in which uncertainty prevails implies that the activity of judgment plays the key role here. And, indeed, this is the case. Delineating precisely how such judgment takes place will obviously bring us that much closer to a clear distinction between these two types of decision making – calculated and judged.22

According to Knight’s sketch of epistemology,23 physics suggests to us that the world is made up of “ultimate things” whose behavior is governed by unvarying laws. Unfortunately, workable knowledge of the world requires that we consider not these ultimate things, but those of everyday experiences that are complexes of the ultimate things. However, there are still far too many things to be dealt with by our finite intelligence. Thus, we require not only that the same thing is going to behave the same way, but also that the same kind of thing is going to behave in the same kind of way. This is why classification has always played the key role in thought and in the theory of knowledge. It is also why we, as creatures endowed only with finite intelligence, require a certain degree of regularity in our everyday life. Lacking such regularity, we could not rely upon the same kind of thing behaving in the same kind of way and our limited intelligence could not provide the necessary coherence for thought to be possible.24

The issue has now been pushed back to delineating the same kind of thing. The difficulty here is that no two things are identical in all aspects: everything is in some ways alike and in some ways different. To say that two objects are alike, then, means that the two objects are alike in a number of specified aspects.

Classification (of objects, persons, situations, etc.) now requires two steps: (i) deciding which aspects are deemed to be relevant; and (ii) determining the degree of resemblance sufficient for these aspects to be alike. The idea here can be captured by a logic of analogy, a logic which, at a first pass, might look something like this:

(A) Objects that are alike in the relevant (to be specified) respects behave similarly.
(B) Objects X and Y are sufficiently similar in these respects.
(C) Therefore, we can conclude that X and Y will behave similarly.

Thus, if we have previously encountered object A, we now have a guide to the probable behavior of object B.
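A rough mechanization of this logic of analogy can make the point vivid. The features, threshold, and example objects below are my own illustrative assumptions; note that the verdict turns entirely on which aspects are declared relevant and what degree of resemblance is deemed sufficient:

```python
def similarity(x, y, relevant):
    """Fraction of the chosen relevant aspects on which x and y agree."""
    matches = sum(1 for aspect in relevant if x[aspect] == y[aspect])
    return matches / len(relevant)

def same_kind(x, y, relevant, threshold):
    """Premises (A) and (B): alike in the relevant respects, to a
    sufficient degree, licenses the conclusion (C)."""
    return similarity(x, y, relevant) >= threshold

orange = {"color": "orange", "texture": "dimpled", "size": "small", "origin": "Spain"}
candidate = {"color": "orange", "texture": "dimpled", "size": "small", "origin": "Florida"}

# Both the relevant aspects and the threshold are supplied by the
# classifier; this is where judgment enters, not calculation.
assert same_kind(orange, candidate, ["color", "texture", "size"], 1.0)
assert not same_kind(orange, candidate, ["color", "origin"], 1.0)
```

The rules of inference are fully specified, yet two different classifiers, each applying them correctly, can legitimately reach different verdicts about the same pair of objects.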

Unfortunately, there is a drawback to such a logic when compared with typical systems of deductive logic (the same drawback, incidentally, that applies to systems of so-called relevance logic): despite the specification of rules of analogical inference, there still remain two crucial sources of ambiguity. Whereas a deductive system takes us from beginning to end without asking anything of the syllogizer, reasoning by analogy cannot completely eliminate personal judgment. This act of judgment is manifested in the choice of the relevant aspects of objects and in the degree of similarity deemed sufficient. This is not to say that the choice of analogies cannot be discussed, nor to deny that some inferences are better than others, but simply that no amount of argument will completely eliminate judgment. As with moral reasoning, discussion of how this classification is to occur can proceed, but it promises no ultimate resolution. In the end, the same object (person, situation, etc.) can be legitimately placed in more than one category.25

What ramifications does this observation have for decision making under uncertainty? Simply that there is no unique maximal decision – even if we agree completely on the objective – that arises from the nature of the decision to be made. It is precisely this lack of determinateness that puts judgment out of the bounds of the utility framework; the tidy linear thinking of the Principle of Sufficiency of Preferences has disintegrated and the decision-maker can no longer be assured that a unique action will be identified in all situations.

An example will make this epistemology more concrete. First, a simple one. Consider an orange. I can identify this object as an orange because I have a category “orange” with which I am well acquainted, and as this object is sufficiently similar to my category “orange” in the relevant aspects, I feel confident in calling it an “orange.” The relevant aspects here probably correspond to color, texture, and size. I may, of course, be wrong (it might be a tiny grapefruit injected with orange dye), but most of the time I will not be too surprised.

In a more complex situation, classification under these categories might not be so simple. Is IBM a good buy at the present time? Well, it depends. We need to answer two prior questions.

First, what are the relevant criteria we must consider to form a judgment as to whether the stock will move up or down? The relevant criteria in this case might be the general state of the economy, the likelihood of certain technological changes in the industry, future exchange rates, etc. Second, given the models of IBM stock movement constructed, to which model does the current situation correspond in the relevant criteria? Both of these stages introduce crucial questions as to how classification of certain phenomena is to proceed, questions that cannot be settled by any amount of calculation. It is easy to multiply examples of these more complex situations: Will a Canadian free trade agreement with the United States widen the income disparity in Canada?26 Who should the New York Giants draft in the first round in 1989? Will there be a major bank crisis in the United States in the early 1990s?

The activity of judgment, thus, consists of (i) choosing the categories that are to be used to parse the “things” in the world; (ii) selecting the relevant aspects of “things” to perform this classification; and (iii) applying an analogical type of reasoning to this classification to decide how “things” are likely to act. There is no analytic technique that eliminates a degree of personal judgment in deciding the relevant categories and necessary degrees of similarity. Moreover, there does not seem to be any relationship between calculation and the process that decides upon the relevant classifications.

Decision making can thus be described as a sequential process: a stage in which the model of the world is chosen – and here judgment plays an exclusive role – and a stage in which, given the model of the world, the decision-maker calculates to arrive at the preferred decision. In this model, all the interesting work has been done before the calculation stage has commenced. As with a deductive argument, in which the conclusion is contained in the premise, the decision has essentially been made once a model of the world has been accepted. The calculation can be done more or less well, but this is, as I have stressed, a second-order issue.
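This two-stage picture can be sketched minimally; the models, actions, and payoffs below are hypothetical assumptions of my own:

```python
# Stage 1 (judgment): choose a model of the world. Here a model is
# simply a probability distribution over states; its selection is
# not computed but judged.
models = {
    "boom":  {"expand": 0.9, "contract": 0.1},
    "slump": {"expand": 0.2, "contract": 0.8},
}

payoffs = {
    ("invest", "expand"): 100, ("invest", "contract"): -50,
    ("hold",   "expand"):  10, ("hold",   "contract"):  10,
}

def calculated_choice(model, actions=("invest", "hold")):
    """Stage 2 (calculation): given the accepted model, pick the action
    with the highest expected payoff, a purely deductive step."""
    def expected(action):
        return sum(p * payoffs[(action, state)] for state, p in model.items())
    return max(actions, key=expected)

# The "decision" is effectively made once a model is accepted:
assert calculated_choice(models["boom"]) == "invest"
assert calculated_choice(models["slump"]) == "hold"
```

Stage 1 is not represented by any computation at all: the choice between the “boom” and “slump” models is exactly the judgment that the subsequent calculation presupposes.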

Judgment, therefore, is the necessary response to uncertainty in the world. If risks were all of the known variety, then judgment would have no role to play and Leibniz’s prophecy (or wish?) of “Let us calculate” would be realized.27

Returning to Knight’s theory regarding profits, it is judgment, and not calculation, that is the source of economic profits. Arbitragers may be able to calculate in a world with transaction costs, but businessmen make money only by having better judgment than average.

We can finally connect these insights regarding the role of uncertainty to the future events and unexperienced utility problems that originally motivated this chapter. If economic profits can only be explained through the application of judgment rather than calculation, then judgment is of even greater import for the individual facing the future events and unexperienced utility problems. Whereas the businessman has several methods of countering uncertainty – a number of which are listed by Knight in Chapter VIII of Risk, uncertainty, and profit – the individual has no mechanism for converting his uncertainty into risk through the judicious design of social institutions. This is because the individual plays the game only once and, hence, has no way of pooling instances. Thus, the aspect of judgment that gives rise to economic profits plays an even greater role in individual decision making. If the decision-maker faced a future where both events and utility were known, decision making would be easy and utility theory would illuminate the situation. However, those decisions that are the most important are precisely those in which these conditions do not exist. Decisions, and the more critical the decision the more this holds true, are made under uncertain circumstances. The success of such decisions will be determined, therefore, by this elusive aspect of judgment. One who displays better judgment – using a scheme of classification and a logic of analogy in a better way – will make better decisions.

Given the arguments of this chapter, why has the utility framework provided so much less illumination of decision making than its proponents claim? By assimilating Knight’s uncertainty to known risks through the device of subjective probability, the crucial distinction between calculation and judgment has been obliterated. By equating objective probabilities with subjective probabilities (as elicited through the device of bets described earlier in this section), economists have delegated the problem of judgment to the formation of these probabilities. Since we take these probabilities as given, there is no room to question their origin, and the whole issue of judgment is discreetly ignored.

2.4 Conclusion

This chapter has examined the role of the future utilities and future events problems and argued that the Principle of Optimality and the underlying utility framework are not relevant to the problem posed by these uncertainties. By focusing only on the second stage of a two-stage process – that of making decisions once the relevant data have been assembled – the utility framework has ignored the more pressing issues raised by the prior activity of gathering and interpreting the information required. When the entire decision process is examined more closely, it is obvious that the Principle of Optimality can play only a subordinate role. The utility framework is of no assistance during the crucial stage of information gathering and model building.

Further, the issues raised in this chapter apply equally to microeconomics, ethical utilitarianism, and decision theory. All three bodies of thought must rely upon some mechanism to decide what will happen in the future – both with events and preferences – and must grapple with the element of judgment which this chapter argues is essential to decision making.

Neither will profit maximizers avoid the difficulties raised above. Though profit maximizing does not need to concern itself with the unexperienced preferences problem – money is already a wonderfully fungible commodity and, ex hypothesi, the profit maximizer is simply seeking more of it – the future events problem is no more tractable for this group than for utility maximizers.

In the next chapter, I examine the principles of Sufficiency of Preferences and Commensurability in more detail and argue that the capacity to directly compare alternatives is imposed by the decision maker and is not external to the decision process.


1.The unexperienced utility problem is related to the argument for satisficing given by Winter [98,99] in a slightly different context. The original argument for satisficing is due, of course, to Simon [80] and reiterated in a later work by Elster [17:III.5]. The term “future utility problem” is used by March [43] in his excellent study of the “engineering of preferences.”

2.I know of no standard microeconomics treatment with much to say about the unexperienced utility problem.

3.The other source of incomplete information, that of not knowing what the future will bring, is the topic of Section 2.2.

4.Simon [80:11] makes much the same point in exploring the difficulties that arise from incomplete preference orderings: If the payoff were measurable in money or utility terms, and if the cost of discovering alternatives were similarly measurable, we could replace the partial ordering of alternatives … by a complete ordering . . . Then we could speak of the optimal degree of persistence in behavior . . . But the central argument of this paper is that the behaving organism does not in general know these costs . . . Not knowing unexperienced utilities is another manifestation of the problem of bounded rationality that is the thrust of Simon’s paper.

5.The preceding discussion should also clarify why simplistic approaches are not going to solve this particular problem. I have in mind those models (for example, von Weizsacker [97] or Pollak [58]) in which an ad hoc dynamic of preference change is introduced. For example, in von Weizsacker’s two-good model, demand in the current period is a function of prices, income, and consumption in the previous period. The appropriate difference equations and stability conditions are easily derived. Long-run demands are given by the stationary point of the short-term demand functions. Von Weizsacker does derive an important result that accords with intuition: long-run demand is likely to be more elastic than short-run demand as individuals adapt. This insight, however, does not answer the central concern: how the individual is to choose those demand functions that such models assume in advance. The problem to be addressed is how such a demand function is formed. It is precisely when the consumer does not have a model of future preferences that a problem arises.

6. According to Cyert and DeGroot [12:223-4]:

It is our belief that the concept of learning can be applied to utilities as well as to probabilities . . . The common use of utility functions assumes that an individual can calculate accurately the utility he will receive from any specified value of the variables. We are proposing instead the concept of an adaptive or dynamic utility function in which the utility that will be received by the individual from specified values of the variables is to some extent uncertain, and the expected utility from these values will change as a result of learning through experience.

The idea of deliberately learning about one’s utility function becomes explicit, as it can clearly be optimal to expend some resources to find out if a particular commodity yields utility [12:228]:

By exploring the actual utilities of various consequences in the early stage of the process, the individual can learn about his utility function. This learning will result in the elimination of some or all of the uncertainty that is present in his utility function.

For reasons that will be explained later in this section, it is difficult to provide examples of this phenomenon that are not ambiguous. One suggestion might be how one can resolve the uncertainty of preferred choice when confronted by a plethora of ice-cream flavors available from the better vendors. Until one has sampled at least some of the choices – from raspberry swirl to heavenly hash to butter pecan and lemon sherbet – one will not know one’s preferred flavor.

In Cyert and DeGroot’s model, unfortunately, the same error of uncovering the cloaked utilities in an optimal fashion is committed. In their model, an unknown parameter affects the utility function in a systematic fashion. The decision-maker, in a multi-period model, has three possible strategies: (i) optimize myopically; (ii) optimize over the multiple periods, but ignore the fact that in general expected utility and actual utility will not be coincident; (iii) globally optimize by using early decisions to deliberately learn the unknown parameter, thereby increasing utility in the later periods. Not surprisingly, the correct strategy is the final one. This setup, however, ignores the key issue: a problem exists precisely because the individual does not have full information regarding his preferences. The assumption that he has the information necessary to construct the probability distributions gives rise to exactly the same difficulty that led Cyert and DeGroot to abandon the standard model in the first place. Indeed, it seems much more plausible that one would simply know the utility of any consumption basket than that one would know the distribution of these potential utility payoffs.
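The structure of the comparison between strategies (i) and (iii) can be conveyed in a toy three-period version of the restaurant example. Every number below is an illustrative assumption of mine, not Cyert and DeGroot’s:

```python
# Hypothetical parameters: a familiar dish of known utility versus a
# new dish whose utility depends on an unknown parameter.
P_GOOD = 0.4                 # prior probability the new dish is to one's taste
U_KNOWN = 1.0                # utility of the familiar pepper steak
U_GOOD, U_BAD = 3.0, -2.0    # utility of the new dish under each parameter value
PERIODS = 3

def myopic():
    """Strategy (i): each period, maximize expected utility under the
    prior alone. The new dish has expected utility 0.4*3 + 0.6*(-2) = 0,
    so the familiar dish is chosen every time."""
    return PERIODS * U_KNOWN

def deliberate_learning():
    """Strategy (iii): spend period 1 sampling the new dish to learn the
    parameter, then exploit that knowledge in the remaining periods."""
    trial = P_GOOD * U_GOOD + (1 - P_GOOD) * U_BAD
    exploit = (PERIODS - 1) * (P_GOOD * U_GOOD + (1 - P_GOOD) * U_KNOWN)
    return trial + exploit

assert myopic() == 3.0
assert deliberate_learning() > myopic()   # learning wins in expectation
```

The sketch presupposes exactly what the text objects to: the decision-maker is handed the prior distribution over his own tastes, so the “learning” is optimal only because the hardest part of the problem has been assumed away.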

7. I should emphasize that it is critical to bear in mind the distinction between uncovering extant preferences and tracking evolving ones. Simon [80:113] even falls into this muddle:

The consequences that the organism experiences may change the payoff function – it doesn’t know how well it likes cheese until it has eaten cheese.

Here, Simon confuses the changes in preferences that result from activities undertaken by the organism (“The consequences that the organism experiences may change the payoff function”) with the uncovering of an already extant payoff function that arises from trying something new (“it doesn’t know how well it likes cheese until it has eaten cheese”). The former corresponds to the second and, as I have argued, more common situation, while the latter corresponds to Cyert and DeGroot’s notion of learning one’s utility function through search behavior.

8. It is for these reasons that the exercise in which Stigler and Becker [83] engage does not quite capture all the nuances presented by preferences over time. In the Becker-Stigler model, a taste for good music is treated by taking explicit account of the future evolution of music tastes [83:78-9]:

The marginal cost is complicated for music appreciation M by the positive effect on subsequent human capital of the production of music appreciation at any moment j.

There is one insight here – tastes can change in response to exposure to an unfamiliar activity (or rather, in the explanation of Stigler and Becker, tastes remain constant and the capital invested in music appreciation grows). Further, there is the recognition that an optimal decision-maker must account for this evolution. Thus, marginal cost is “complicated.” The difficulty, however, lies in the assumption that one can predict how investment in music appreciation capital will alter the utility function. How this is to be performed is never specified by Stigler and Becker – and for good reason: it can’t be done. Unhappily for proponents of this view, owls only spread their wings at dusk.

9. A related issue here is the complication introduced by assuming an individual with both current preferences and known future preferences. Perhaps this point was made most succinctly by the jazz musician’s answer to the question “Where will jazz be in 20 years?”: “If I knew, I’d be there.” This pithy response reflects the idea that it is only by “being there” that one can judge what “being there” is like.

This idea also has some connection, incidentally, with monetary models that display the unraveling effect. If we are absolutely certain that currency will be worthless in 1,000 years then, by the process of unraveling, it must also be worthless today. A similar dilemma presents itself in relationships. If I am sure that I will have to leave the woman I love in 6 months time then, despite my current emotions, this knowledge works to unravel the relationship back to the present moment.
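The unraveling effect is simply backward induction, as a minimal sketch (with an illustrative horizon and discount factor of my own choosing) shows:

```python
def unravel(value_at_horizon, horizon, discount=1.0):
    """Backward induction: money is held today only because it will be
    accepted tomorrow, so each period's value is pinned to the next's."""
    value = value_at_horizon
    for _ in range(horizon):
        value = discount * value   # worth today what it fetches next period
    return value

# If currency is known to be worthless 1,000 periods hence,
# the zero propagates all the way back to the present:
assert unravel(0.0, 1000) == 0.0
assert unravel(1.0, 1000) == 1.0   # a positive terminal value survives
```

Any terminal value of zero, however distant, collapses the chain; this is the same logic by which certain knowledge of a relationship’s end works backward against the present.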

We might sum up this digression by saying that there are some deep conceptual issues implied by having current preferences and known future preferences. The investigation of these issues will be relegated to a later work.

10. We must distinguish this type of rational character planning, however, from the case in which two simultaneous selves have competing preferences. For example, I may wish to smoke when I crave a cigarette but there are times I may wish that I never had such cravings. The issue here is one of two selves who are present concurrently or consecutively, and the preferences of each self are known. In this case, there is no difficulty with knowing the other self’s preferences, because the other self does not exist solely in the future.

I will have more to say about competing preferences in Chapter 3.

11. How we should view the future utility problem clearly turns on the magnitude of utility losses through misunderstanding potential preferences. While this is indeed a serious problem – many individuals miss out on potentially satisfying activities because of a lack of investment in information gathering – this thesis is not going to attempt to measure the size of the loss. I will leave that task for another study.

12. This comment explains why I have presented the two issues in this order, despite the fact that the unexperienced utility and future events problems are structurally identical. By starting with the less familiar unexperienced utility problem, I was able to make some key points without poring over vast amounts of prior literature.

This disparity in attention devoted to the two problems reflects the opinion of thoughtful investigators that while one’s future utility is known with relative certainty, the future states of the world are not. From my contention that the two issues are very similar, it is obvious that I do not share this sentiment. The difficulty of knowing one’s future mind is no more tractable than that of knowing what will happen in the future. Both issues must be addressed by a theory of choice under uncertainty. By presenting the future utility problem first, I hope to have convinced the reader to view the two issues as parallel.

13. In the interests of completeness, it is worthwhile to review the concepts of objective and empirical probabilities:

     1.Logical or objective probabilities. The idea here is that probability represents the objective relationship between two sets of propositions. Given one set of propositions, a definite degree of belief exists that is rational to entertain with respect to the second set of propositions. According to Keynes [37:6], the rules by which this probability is determined are the proper subject matter of the logic of probability. This probability is, thus, a relation between two sets of propositions. If the premises are altered, then the rational degree of belief in the conclusions will also generally change. Therefore, to say that a certain proposition is “probable,” or to assign a probability to a proposition is incomplete until the initial premises have also been specified. Given the present state of the individual’s knowledge, he should seek the probabilities that are “objectively” implied by the logic of induction:

Between two sets of propositions . . . there exists a relation, in virtue of which if we know the first, we can attach to the latter some degree of rational belief.

It should be stressed that this relation, though conditional on degree of knowledge, is independent of the individual who possesses this knowledge, and, in this sense, is objective. I think we can dismiss the idea that an objective degree of rational belief exists – that is, the notion of a logical probability. My reasons here are essentially those of Ramsey [61]. The key criticism is the obvious one that [61:65-6]:

. . . there really do not seem to be any such things as the probability relationship [Keynes] describes … I do not perceive them … I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.

The point is simply that there is no empirical evidence, nor any compelling theoretical reason, to believe that such a probability relating two propositions actually exists. We can agree that broad rules constrain the way in which such probabilities can be combined – the normal rules of consistently manipulating probabilities certainly hold – but we cannot agree, even given specific evidence, on the basic probabilities of any event occurring. As Ramsey [61:66] puts it, “it is as if everyone knew the laws of geometry but no one could tell whether any given object were round or square.”

Indeed, as Ramsey points out, there is some equivocation in the treatment Keynes himself gives to this view [37:32-3]:

Probability is … relative in a sense to the principles of human reason … if we do not limit it in this way and make it …relative to human powers, we are altogether adrift in the unknown; for we cannot ever know what degree of probability would be justified by the perception of logical relations which we are, and must always be, incapable of comprehending.

Keynes’ treatment undercuts his own efforts at arriving at probabilities objectively based on the degree of evidence by arguing that probabilities are not entirely numerical or orderable, an idea that co-exists uneasily with the “objective” degree of rational belief. In Part I, Chapter III, of his Treatise on probability, Keynes uses a careful empirical argument to demonstrate that these probabilities may be only ordinal and not cardinal, or may even be unorderable. He concludes [37:30]:

Some cases . . . there certainly are in which no rational basis has been discovered for numerical comparison. It is not the case here that the method of calculation, prescribed by theory, is beyond our powers or too labourious for actual application. No method of calculation, however impracticable, has been suggested.


Consider three sets of experiments, each directed towards establishing a generalisation. The first set is more numerous; in the second set the irrelevant conditions have been more carefully varied; in the third case the generalisation in view is wider in scope than in the others. Which of these generalisations is on such evidence more valid? There is surely no answer; there is neither equality nor inequality between them.

In these passages, Keynes appears to have effectively demolished the basis upon which he erected his theory in the first chapter. If the degree of “objective” belief justified by the given evidence cannot be quantified, or even ordered, then the relation which Keynes describes loses its sense. Keynes’ acuity as a thinker and his grasp of the issues concerned make it difficult to understand the source of these apparent confusions. Perhaps a more careful study of the text can clear up these seeming inconsistencies.

     2.Empirical probabilities. These probabilities represent the limit of a relative frequency. A large number of trials under relatively similar conditions (the issue of “relatively similar” has its own difficulties, to which I allude below in mentioning the paradox of probability) yields a probability that our individual employs for decision making. In the original von Neumann-Morgenstern [52] development of the expected utility theory, the assumption that the decision-maker has access to these probabilities is glibly inserted into the theory under the guise that such a concept is “perfectly well founded” [52:19]. If the decision-maker stumbles upon the needed probabilities, then he is going to choose the most preferred action from his feasible set (remember that the future utility problem has been put aside).

The difficulty here, of course, is that the necessary conditions under which relative frequencies can be constructed do not hold for those decisions in life that are unique or, to a large measure, unrepeatable. This criticism requires proper delineation: in those situations in which the decision-maker has access to the results of a large number of relatively similar trials (where “similar” is fraught with its own difficulties of specification), the relative frequency concept is applicable. When I know that one can of tennis balls will last two matches, or that my tires are likely to last 60,000 miles on the basis of consumer tests, or that flights from Boston to New York are on average 90 minutes late during rush hours, the probabilities used have been garnered from a large number of relatively similar trials. The future state of the world is not known, but I have a reliable guide – one similar to that often assumed in the elementary decision-tree explanations in introductory books on decision theory.
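Where it does apply, the relative frequency concept is computationally trivial. A simulated illustration follows; the 0.75 lateness rate and the number of trials are assumptions for the sake of the example:

```python
import random

random.seed(7)

def relative_frequency(trials):
    """Empirical probability as the proportion of successes in a long
    run of relatively similar trials."""
    return sum(trials) / len(trials)

# A repeatable event: simulate many rush-hour flights with a true
# (but unknown to the observer) lateness rate of 0.75.
trials = [random.random() < 0.75 for _ in range(10_000)]
estimate = relative_frequency(trials)
assert abs(estimate - 0.75) < 0.02   # the long run recovers the rate

# A one-shot event, such as a particular marriage, offers no such
# sequence of trials; the "frequency" of a single case is just its outcome.
```

The computation presupposes precisely what unique decisions lack: a long series of trials accepted, by a prior act of judgment, as relevantly similar.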

The conditions that give rise to such probabilities, however, are not always met, and they are least often satisfied for precisely those crucial decisions that are made most seldom. Choosing whether to marry a particular person, between Harvard and MIT business schools, or between professional football and baseball – these are decisions for which such long-run frequencies are likely to be irrelevant or meaningless. Statistics may exist, of course: a high proportion of people who marry at a young age get divorced, MIT graduates get better jobs, baseball players are higher paid and have longer careers. The difficulty, though, is that there may be no good reason to assume that such probabilities, even if they do exist, apply to the case at hand. Though aggregate frequencies can be computed for the relevant class of events, there may be reasons to believe that such probabilities are not those I require to evaluate the situation for me. I have access to additional information: my fiancee and I have extensively discussed the issues that will arise in marriage, Harvard has better marketing professors and marketing particularly interests me, football is better suited to the skills I will likely develop in the pros.

Knight has carefully spelt out the cruel irony of this situation [38:218]:

The paradox, which carries us at once into the heart of the logical problem of probability, is that if we had absolutely homogeneous groups we should have uniformity and not probability in the result … If the idea of natural law is valid at all, it would seem that men exactly alike and identically circumstanced would all die at once; in any particular interval either all or none would succumb, and the idea of probability becomes meaningless.

Knight’s comment here calls into question the coherence of assigning a probability in such situations. Although there may be some situations, such as drawing playing cards, to which the notion of probability is applicable, there are many where it cannot be applied. Our decision-maker has bumped up against the unhappy fact of life that the requisite objective probabilities do not exist in most situations of interest.

Shackle’s Expectations in economics, an illuminating though neglected work, also lays out the argument against the frequency interpretation [78:109]:

In order to establish empirically a figure for the probability of a given outcome we must have made a “large” number of trials under conditions which are constant in a specified sense . . . Now for many important kinds of decisions which must be taken in human affairs it will be impossible to find a sufficient number of past instances.

Shackle goes on to add a further twist to the issue. Not only are long-run or empirical frequencies generally not available for the most crucial decisions, but also the individual – even if the frequencies are available – chooses only once. Shackle sees this difficulty as being even more critical [78:109-11]:

. . . this difficulty [of establishing empirical probabilities] … is a minor one compared with the fact that, even if by vicarious experience a probability is established, many kinds of decisions are for each individual virtually unique.

So much for empirical probabilities. The preceding discussion emphasizes the difficulty of discovering these probabilities for the purposes of our decision-maker. There is no reason for us to expect that these probabilities exist for a wide range (and the most important range) of decisions that interest us. Therefore, empirical probabilities cannot underpin the theory of decision making under uncertainty. To the microeconomist and decision theorist, this conclusion will hardly come as a surprise.

14. Luce and Raiffa [41] also explain this idea in their classic survey:

We shall report on the school [of thought]… which holds the view that by processing one’s partial information (as evidenced by one’s response to a series of hypothetical questions of the Yes-No variety) one can generate an a priori probability distribution over the states of nature which is appropriate for making decisions. This reduces the problem from one of uncertainty to one of risk. The a priori distribution obtained in this manner is called a subjective probability distribution.

Reading Games and decisions is still essential for students of game and decision theory.

15. As Elster [17:128] states it, subjective probability (along with cardinal utility) is one of the two “pillars of modern decision theory.” By decision theory, Elster seems to be referring to both microeconomics and decision theory.

16. This, incidentally, is how financial arbitrage works. The arbitrager finds a set of bets that guarantees a positive gain, regardless of how the market moves. It is not surprising that this can be a lucrative activity.
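The arbitrage logic can be made concrete with a toy Dutch book sketch (the figures and the `dutch_book_profit` helper are my own illustrative assumptions, not drawn from the text): an agent whose subjective probabilities for an event and its complement sum to more than one will pay more for the two bets than they can jointly return, handing the arbitrager a sure gain.

```python
def dutch_book_profit(p_event, p_complement, stake=1.0):
    """Profit to an arbitrager who sells both bets to an incoherent agent.

    The agent pays probability * stake for each bet; exactly one of the
    two bets pays out the stake, whichever way the world turns out.
    """
    premium_collected = (p_event + p_complement) * stake
    payout_owed = stake  # one and only one of the two events occurs
    return premium_collected - payout_owed

# An agent assigning 0.6 to "rain" and 0.6 to "no rain" (sum = 1.2)
# loses to the arbitrager in every state of the world.
profit = dutch_book_profit(0.6, 0.6)
assert profit > 0  # a sure gain, regardless of how events unfold
```

When the agent's probabilities cohere (sum to exactly one), the guaranteed profit vanishes, which is the standard Dutch book motivation for the axioms of subjective probability.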

17. Perhaps, as Friedman might again claim, this is just a manifestation of the necessary academic division of labor. Economists worry about the structure of decision making, and philosophers, statisticians, and psychologists must fill in the remaining pieces. This argument can occasionally be a legitimate refuge. The difficulty in the present case, however, is that, stripped of its explanatory power for the type of decisions discussed, not much seems to be left for the utility framework to answer. The important questions concerning the future utility and future events dilemmas are precisely those that demand answers. To hide behind the argument of an academic division of labor, then, is not simply to explore some issues and generously leave some research agendas for other thinkers; it is rather to admit that the utility framework has little to add to the debate.

18. Frank Knight’s Risk, uncertainty and profit is a work that falls into that most unfortunate category: often cited but seldom understood. Though not completely central to the argument at hand, it is worthwhile spending some effort unwinding Knight’s arguments concerning capitalist dynamics.

Rather than beginning with a model of statics – to get our bearings, as we are often told in graduate economics classes – Knight takes the dynamics of an economy as the prime element. To abstract from uncertainty in time is to commit the sin of poor model building: retaining the superfluous and discarding the essential. In the problem Knight considers, that of the determination of profits in a capitalist economy, the essential element is uncertainty [38:19]:

The key to the whole tangle [of profits] will be found to lie in the notion of risk or uncertainty and the ambiguities concealed therein … A satisfactory theory of profit will bring into relief the nature of the distinction between the perfect competition of theory and the remote approach which is made to it by the actual competition of, say, twentieth-century United States.

The failure to account for the proper dynamics, expressed here in the notion of uncertainty, is a failure to bridge the gap between remote reality and our static models (though, admittedly, the static models of Knight’s period are less complex than those of today).

19. This distinction seems to have largely disappeared from the modern economist’s toolkit under the pressing weight of the theory of subjective probability. It is easy to find standard microeconomics textbooks in which this issue is treated as solved. Deaton and Muellbauer [14] and Varian [88] are among those commentators who feel confident they can dispense with Knight’s work.

20. The required return on an investment, of course, varies with its systematic risk, as the Capital Asset Pricing Model (CAPM) demonstrates. Nonetheless, for a security with a given level of systematic risk, there is no additional reward for riskiness. Mining stocks are a good example: despite highly volatile profits, the magnitude of the risk is largely known. Consequently, average profits in the industry are not unduly high and the beta coefficient is approximately 1.
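The CAPM claim in this note can be sketched numerically (the rates and the `capm_required_return` helper are illustrative assumptions, not figures from the text): two securities with the same beta command the same required return, however different their total volatility.

```python
def capm_required_return(risk_free, beta, market_return):
    """CAPM: required return = risk-free rate + beta * market risk premium."""
    return risk_free + beta * (market_return - risk_free)

# A volatile mining stock and a placid utility, both with beta = 1.0,
# carry the same required return: unsystematic volatility is not rewarded.
mining = capm_required_return(risk_free=0.03, beta=1.0, market_return=0.08)
utility = capm_required_return(risk_free=0.03, beta=1.0, market_return=0.08)
assert mining == utility
```

Note that a beta of zero leaves only the risk-free rate, while beta alone scales the premium; total profit volatility never enters the formula, which is the point the note makes about mining stocks.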

21. Both of these propositions are well understood by those connected with the securities market. Risks, at least the unsystematic part, can always be diversified by market participants.

22. Although one might be tempted to appeal to some principle of induction to characterize such decisions, I will not do this, because a science of induction has developed (largely due to Rudolf Carnap [10]) that is antithetical to the point I am making here. Carnap’s work has its roots in the project of Leibniz and in the writings of Keynes (which we have already seen) and Harold Jeffreys [32]. We have already briefly touched upon the project from Keynes’ angle. It is still worthwhile, however, to review the outlines of this project as described by Hacking [25:134-5]:

The tenets of this programme can be set out tersely as follows: First, there is such a thing as non-deductive evidence. That is, there may be good reasons for believing p which do not logically entail p. Second, “being a good reason” is a relation between propositions. Third, this relation is to be characterized by a relation between sentences in a suitably formalized language. Fourth, there is an ordering of reasons from good to bad, and indeed a measure of the degree to which r is a reason for p. Fifth, this measure is autonomous and independent of anyone’s opinion: it is an objective measure . . . Sixth, this measure is global – it applies to any pair of propositions (r,p) whatsoever . . .

The point is simply that the success of such a program would bring about the prophecy of Leibniz: “Let us calculate.” Nothing could be further from the argument I am making about judged decisions. Calculation, for fundamental epistemological reasons, is incapable of replacing judgment. For this reason, decision making under Knight’s uncertainty will not be described as an application of induction, though there are obvious affinities with the way in which the term “induction” is used in everyday language.

23. The line of thought I wish to describe here has its origins in Kant’s [35] deduction of the fundamental categories, and it has become an integral part of epistemology. It also plays a key role in the thoughts of Wittgenstein [100] and Weber [96], among others. Knight reformulates the ideas in a more accessible form, particularly for economists, and it is his development that I will sketch here.

24. This is a key point in Kant’s proof of the existence of the external world [35], and Wittgenstein’s [100] discussion of rule following.

25. Although we will not have an opportunity to address this question here, it is legitimate to ask from where such categories arise. The three major hypotheses correspond to an a priori construction of the categories (from Kant, though not really applying to categories as detailed as “orange” but rather to “space” and “time”); an empirical construction from the external world (from Hume); and the hypothesis that they are a social construction imposed on the individual from the group (from Durkheim and, it can be argued, Wittgenstein).

26. It is indeed curious to have watched the free trade debate unfold in Canada in the 1986-88 period. Put starkly, nobody knows what will happen now that the deal has finally been ratified.

27. This is why, incidentally, simulations of complex processes are so often fruitless: the actual calculations of the simulation are performed flawlessly, but the assumptions – the assigning of certain phenomena to one category rather than another – determine the usefulness of the result. The simulation is often an attempt to lend a deductive character to a problem that essentially turns on judgment. The same point is made by insightful econometricians regarding econometric model building. Once we have the right model in place, running the tests is the easy part. It is finding the right model that requires talent – and judgment.