The Spin Stops Here!
Decision-Making Under Severe Uncertainty  


The Mighty Maximin and worst-case analysis

Last modified: Friday, 30-May-2014 17:38:41 MST


This is one of my more recent campaigns. Indeed, it is a mini-campaign in its own right. Still, this particular project has an affinity with my Solution-Method-Free-Modeling campaign in that both projects are concerned with the role of mathematical modeling in decision making. I might also add that this campaign inspired my Voodoo Decision-Making campaign.

The direct trigger for this project was the realization that there is something suspect in the interpretation that naive, and not so naive, users of Info-Gap Decision Theory — a purportedly new theory for decision making under severe uncertainty — give this theory.

In any case, the objective of this campaign is to clarify conceptual and technical issues associated with the modeling aspects of the classical Maximin paradigm, especially in the context of decision-making in the face of severe uncertainty.




Overview

However incomprehensible this may appear to seasoned analysts/scholars working in the area of decision-making under uncertainty, there are those who profess to be experts in the field yet are not conversant with Worst-Case Analysis and Wald's Maximin Principle. This can range from ignorance of the very existence of these two central modeling tools, to a shaky grasp of their role and place in decision theory, to a total lack of understanding of the mathematical modeling issues that arise in their application.

So here are some thoughts on this topic.

The basic idea underlying the Maximin paradigm can be summarized as follows:

Maximin Rule

Rank alternatives by their worst possible outcomes: adopt the alternative the worst outcome of which is at least as good as the worst outcome of the other alternatives.

This paradigm was conceived by the mathematician Abraham Wald (1902-1950). It is in fact an adaptation of the Maximin paradigm that John von Neumann (1903-1957) developed in the context of game theory. Wald hit on the idea of adapting this paradigm to the framework of decision-making under uncertainty, thus casting Uncertainty (call it Nature) as the player pitted against the decision maker. The idea here is that, to play it safe, the decision maker assumes that Nature is an antagonistic adversary.

So, if the decision maker maximizes her "utility", Nature will attempt to minimize the "utility", whereas if the decision maker minimizes the "utility", Nature will attempt to maximize the "utility". The first case is captured by the term Maximin and the second case by the term Minimax.

The Maximin and Minimax models are essentially equivalent (subject to multiplying the "utility" by -1) and the choice between them is a matter of convenience.
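This equivalence is easy to verify computationally. Here is a minimal Python sketch, using a toy two-decision payoff table (an assumption for illustration only), showing that the Maximin choice on utilities coincides with the Minimax choice on the negated utilities:

```python
# Toy payoff table (assumed for illustration, not taken from the text).
utility = {
    "d1": {"s1": 7, "s2": 5},
    "d2": {"s1": 4, "s2": 8},
}

# Maximin on utilities: pick the decision with the best worst-case utility.
maximin_d = max(utility, key=lambda d: min(utility[d].values()))

# Minimax on costs (utility multiplied by -1): best worst-case cost.
cost = {d: {s: -u for s, u in row.items()} for d, row in utility.items()}
minimax_d = min(cost, key=lambda d: max(cost[d].values()))

assert maximin_d == minimax_d  # both select the same decision
```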

Note that this dark view of uncertainty is not meant to be taken as a philosophical statement about the nature of reality. Rather, the point of the Maximin/Minimax model is that Nature is used as a device for expressing the decision maker's attitude towards the risks associated with uncertainty. So, like my dear wife, in this framework the decision maker takes an extremely pessimistic stance by assuming that the "worst-case-scenario" pertaining to her decision will be realized.

Not surprisingly, therefore, Maximin/Minimax has become almost synonymous with robust decision-making not only in classical decision theory but in other areas as well. For instance, here is the abstract of the entry Robust Control by Noah Williams in the New Palgrave Dictionary of Economics, Second Edition, 2008:

Robust control is an approach for confronting model uncertainty in decision making, aiming at finding decision rules which perform well across a range of alternative models. This typically leads to a minimax approach, where the robust decision rule minimizes the worst-case outcome from the possible set. This article discusses the rationale for robust decisions, the background literature in control theory, and different approaches which have been used in economics, including the most prominent approach due to Hansen and Sargent.

The following quote is from the book Robust Statistics by Huber (1981, pp. 16-17):

But as we defined robustness to mean insensitivity with regard to small deviations from assumptions, any quantitative measure of robustness must somehow be concerned with the maximum degradation of performance possible for an e-deviation from the assumptions. The optimally robust procedure minimizes this degradation and hence will be a minimax procedure of some kind.

Keeping in mind that "Minimax" signifies the reverse of Maximin, in this discussion we are concerned, for the most part, with the Maximin option.

Math formulation

The two most prevalent (equivalent) formal mathematical formulations of the Maximin paradigm are the classic formulation and the mathematical programming (MP) formulation. Here they are side by side:

    Classic:  z* = max_{d∈D} min_{s∈S(d)} f(d,s)

    MP:       z* = max_{d∈D, v∈ℝ} { v : v ≤ f(d,s), ∀ s∈S(d) }

where ℝ denotes the real line.

Note that an instance of a Maximin model is specified by a triplet (D,S,f) whose elements can be interpreted as follows: D is the decision space; S(d) is the set of states that Nature can select in response to the decision maker's choice of d in D; and f is the real-valued objective function defined on the pairs (d,s).
In short, these formulations describe a game played by two players: one player (the decision maker) tries to maximize the objective function by controlling d in D, while the other player (Nature) tries to minimize the objective function by controlling s in S(d). Observe that when Nature decides on her best s she knows the value of d selected by the decision maker. In plain language, in this game the decision maker plays first, whereupon Nature responds to her choice of a d in D.
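The order of play described above can be sketched in a few lines of Python; the decision space D, the state sets S(d), and the objective f below are toy assumptions chosen only to illustrate the structure:

```python
# The decision maker picks d first; Nature, who observes d, then picks
# the s in S(d) that is worst for the decision maker.
# D, S and f below are toy assumptions, not taken from the text.
D = ["d1", "d2"]
S = {"d1": [1, 2, 3], "d2": [2, 4]}   # Nature's options may depend on d

def f(d, s):
    return s  # toy objective: the decision maker wants a large value

def security_level(d):
    # Nature's best response to d: minimize f(d, s) over s in S(d).
    return min(f(d, s) for s in S[d])

best = max(D, key=security_level)     # d2: its worst case (2) beats d1's (1)
```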

In cases where robustness is sought with respect to constraints, rather than "utility", the Maximin model would be formulated as follows:

    max_{d∈D} { f(d) : c(d,s) ∈ C, ∀ s∈S(d) }

where — without loss of generality — the constraint requires the value of c(d,s) to be an element of some given set C. In other words, here the decision maker's goal is to maximize the utility f(d) over all d in D subject to the condition that c(d,s) is in C for all s in S(d). For her part, Nature will attempt to violate this constraint — if possible — by a proper choice of s in S(d).

And if robustness is sought with respect to both the objective function and the constraint, the Maximin model would be formulated as follows:

    max_{d∈D} min_{s∈S(d)} { f(d,s) : c(d,s) ∈ C, ∀ s∈S(d) }
The Minimax formulations are similar except that the min and max operations are interchanged. In this case the decision maker seeks to minimize the objective function and therefore antagonistic Nature attempts to maximize it.

For our purposes here, the above discussion is sufficient. We need point out, however, that Wald's Maximin model plays a key role in classical decision theory, robust optimization, economics, statistics, and control theory.

To some this paradigm is second nature, to others it is a puzzle.

Remark:

The fact that a "max" and a "min" appear in a formulation of a decision-making model does not automatically render the model a Maximin model. For example, consider the following decision-making model:

    max_{d∈D} { f(d) : K ≥ min_{s∈S(d)} c(d,s) }
In general, this is not a Maximin model because we do not have here a situation where Nature (deciding on s) "antagonizes" the decision maker (deciding on d). Or, in other words, the idea here is not to identify the "best worst-case".

This is so, because here not all s in S(d) are required to satisfy the constraint K ≥ c(d,s). So, by selecting s to minimize c(d,s) over s in S(d), Nature does not select the worst s in S(d) with regard to the constraint K ≥ c(d,s), which means that some s in S(d) are "allowed" to violate the constraint K ≥ c(d,s).

In a word, all that is required here is that "at least one" element of S(d) satisfy the constraint K ≥ c(d,s).

This, as a matter of fact, is a Maximax model, observing that

    max_{d∈D} { f(d) : K ≥ min_{s∈S(d)} c(d,s) }  =  max_{d∈D} max_{s∈S(d)} g(d,s)

where

    g(d,s) = f(d) if K ≥ c(d,s), and g(d,s) = -∞ otherwise.
Note that in this framework Nature is not antagonistic towards the decision maker. To the contrary, here Nature cooperates with her.

In contrast, the model

    max_{d∈D} { f(d) : K ≥ c(d,s), ∀ s∈S(d) }

is a Maximin model even though no "min" occurs in the formulation. Note that here all s in S(d) must satisfy the constraint K ≥ c(d,s). Thus,

    max_{d∈D} { f(d) : K ≥ c(d,s), ∀ s∈S(d) }  =  max_{d∈D} min_{s∈S(d)} g(d,s)

where

    g(d,s) = f(d) if K ≥ c(d,s), and g(d,s) = -∞ otherwise.
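The distinction drawn in this remark, feasibility for at least one s in S(d) versus feasibility for all s in S(d), can be illustrated with a toy example; K, S, c, and f below are assumed purely for illustration:

```python
# "Maximax" feasibility: d qualifies if SOME s in S(d) satisfies K >= c(d, s).
# Maximin feasibility:   d qualifies if ALL  s in S(d) satisfy  K >= c(d, s).
# All numbers here are toy assumptions.
K = 5
S = {"d1": [1, 2], "d2": [1, 9]}

def c(d, s):
    return s               # toy constraint function

f = {"d1": 3, "d2": 10}    # toy utilities f(d)

feasible_some = [d for d in S if any(K >= c(d, s) for s in S[d])]
feasible_all = [d for d in S if all(K >= c(d, s) for s in S[d])]

best_maximax = max(feasible_some, key=f.get)  # d2: one compliant state suffices
best_maximin = max(feasible_all, key=f.get)   # d1: every state must comply
```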

Variations on the same theme

Now, the pessimistic stance towards uncertainty prescribed by Wald's Maximin paradigm proves far too conservative for many applications. Not surprisingly, therefore, a number of attempts have been made to moderate the paradigm's grim approach to uncertainty with a view to mitigating its excessive conservatism.

The reader is advised that modeling the Maximin can involve a significant effort. Indeed, stating the paradigm in terms of a particular situation can often require of the modeler/analyst considerable insight, imagination and invention (see my paper The Mighty Maximin!).

Decision Tables

In cases where the decision and state spaces are finite sets, the Maximin model can be depicted as a Decision Table. The convention is that the rows of the table represent decisions, the columns represent states and the entries of the table represent the rewards r(d,s), say in AU$.

For example, consider the following simple decision table:

        s1    s2    s3    s4
  d1     7     5     6     9
  d2     4     8     6     5
  d3     9     1     7     8

We now append an additional column to the table to list the security levels of the decisions. The security level of decision dj is the smallest reward in the j-th row of the table. This means that if we select decision dj, then no matter what state is realized, the actual reward will be no smaller than the security level.

So here is our decision table with the associated security levels (AU$):

        s1    s2    s3    s4    SL(j)
  d1     7     5     6     9      5
  d2     4     8     6     5      4
  d3     9     1     7     8      1

What is left then is to find the largest security level. We indicate this in the usual way, marking the winner with an asterisk:

        s1    s2    s3    s4    SL(j)
  d1     7     5     6     9      5 *
  d2     4     8     6     5      4
  d3     9     1     7     8      1

In short, the Maximin rule selects decision d1. The optimal (maximal) security level is AU$5.
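This computation is mechanical enough to be checked in a few lines of Python, reproducing the reward table from the text:

```python
# Security levels and the Maximin choice for the decision table above.
rewards = {
    "d1": [7, 5, 6, 9],
    "d2": [4, 8, 6, 5],
    "d3": [9, 1, 7, 8],
}
security = {d: min(r) for d, r in rewards.items()}  # worst reward per decision
best = max(security, key=security.get)

assert best == "d1" and security[best] == 5         # Maximin selects d1, SL = AU$5
```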

Bayes Rule vs Maximin Rule

Recently, I was greatly surprised to learn that there is apparently a perception out there that a Maximin decision rule can be simulated by a degenerate Bayes Rule that selects (with certainty) one of the states.

It is important therefore to set the record straight on this matter by showing that this is not so.

Recall that in the framework of Bayes Rule, some probability distribution function is assumed to operate on the state space and the performance of a decision, say d, is measured by the expected value of the reward r(d,s) determined by this distribution. Thus let E[d] denote the expected value of r(d,s) generated by decision d and the assumed probability distribution function of s.

In keeping with Bayes rule, decisions are ranked by their E[d] values — the larger the better. So the optimal decision according to this rule is one that maximizes E[d] over all the feasible values of d.

For example, consider the following (familiar) decision table:

        s1    s2    s3    s4
  d1     7     5     6     1
  d2     4     8     6     5
  d3     9     1     7     8

To illustrate Bayes Rule in action we have to postulate a probability distribution over the four states. Suppose that we consider the vector p = (0.1, 0.2, 0.4, 0.3) for this purpose. The convention is to list these probabilities above the states:

p(i)   0.1   0.2   0.4   0.3
        s1    s2    s3    s4
  d1     7     5     6     1
  d2     4     8     6     5
  d3     9     1     7     8

We now append an additional column to the table where we list the values E[dj], j = 1, 2, 3:

p(i)   0.1   0.2   0.4   0.3
        s1    s2    s3    s4   E[dj]
  d1     7     5     6     1    4.4
  d2     4     8     6     5    5.9
  d3     9     1     7     8    6.3

Thus, the best decision according to Bayes Rule is d3. It yields an expected reward of 6.3.
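The expected-reward ranking above can likewise be verified in a few lines of Python, reproducing the reward table and the postulated distribution:

```python
# Bayes Rule: rank decisions by expected reward under the assumed distribution.
p = [0.1, 0.2, 0.4, 0.3]
rewards = {
    "d1": [7, 5, 6, 1],
    "d2": [4, 8, 6, 5],
    "d3": [9, 1, 7, 8],
}
E = {d: sum(pi * ri for pi, ri in zip(p, r)) for d, r in rewards.items()}
best = max(E, key=E.get)

assert best == "d3" and round(E["d3"], 1) == 6.3  # Bayes Rule selects d3
```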


The point to note then is that here, all the decisions are evaluated in accordance with the same assumed probability distribution (over the state space). But in the framework of the Maximin model, each individual decision is evaluated on the basis of the worst case (state) pertaining to this decision alone. This means that, in general, there is no guarantee that a single probability distribution over the state space will be able to simulate the Maximin Rule.

If you remain unconvinced, consider the following simple case: there are two states, say s1 and s2, and three decisions, call them d1, d2, and d3. Suppose that the reward table is as follows:

        s1    s2
  d1     7     5
  d2     4     8
  d3     9     1

Appending the security levels column we have:

        s1    s2   SL(j)
  d1     7     5     5
  d2     4     8     4
  d3     9     1     1

We conclude then that the optimal decision prescribed by Maximin is d1, yielding the maximum security level of AU$5.

In short, in this case the Maximin decision rule yields the following result:

Optimal decision: d1
Guaranteed reward: AU$5

Note that there is no degenerate distribution on S = {s1,s2} that can simulate this result under Bayes Rule. If Nature selects s1 with probability 1, then clearly decision d3 is better than decision d1. If Nature selects s2 with probability 1, then clearly d2 is better than d1.

The two decision tables are as follows:

p(i)     1     0
        s1    s2   E[dj]
  d1     7     5     7
  d2     4     8     4
  d3     9     1     9

p(i)     0     1
        s1    s2   E[dj]
  d1     7     5     5
  d2     4     8     8
  d3     9     1     1

In short, in this case there is no degenerate distribution on the state space that can force Bayes Rule to select d1.
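This exhaustive check over the two degenerate distributions takes only a few lines of Python, using the two-state reward table above:

```python
# No degenerate distribution on {s1, s2} makes Bayes Rule select the
# Maximin decision d1.
rewards = {"d1": [7, 5], "d2": [4, 8], "d3": [9, 1]}

for state in (0, 1):                 # degenerate: all mass on one state
    E = {d: r[state] for d, r in rewards.items()}
    bayes_best = max(E, key=E.get)
    assert bayes_best != "d1"        # s1 -> d3 wins; s2 -> d2 wins
```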

The conclusion is therefore that the Maximin Rule is not an instance of Bayes Rule. This, however, does not mean that the two rules are rivals. Rather, it means that they complement one another.

So the moral of the story is that both are essential. Indeed, don't leave home without either of them. It is best to have both. Even if you travel to Australia to take up the best job in the world!

That said, we must not neglect to mention in this context another pillar of classical decision theory:

Laplace's Principle of Insufficient Reason

This Principle — attributed to the French mathematician and astronomer Pierre-Simon, marquis de Laplace (1749 - 1827) — argues, by symmetry, that if there are n > 1 mutually exclusive and collectively exhaustive possibilities, then each possibility should be assigned the same probability (equal to 1/n).

To put it in more general terms, this Principle, which is also known as the Indifference Principle — so named by the British economist John Maynard Keynes, 1st Baron Keynes (1883 - 1946) — argues that under severe uncertainty all the states are equally likely. This means that often it is possible to postulate a uniform probability distribution function over the state space.

Of course, there are situations where this is impossible: for instance, if the state space is the real line, then we cannot formulate a uniform probability distribution function on the state space.

From the standpoint of Bayesian decision theory, the state-of-affairs captured by this principle can be viewed as the simplest non-informative prior.

The idea captured by this Principle is so fundamental that it is appealed to widely, often invoked implicitly, one might say automatically. For example, since the two possible outcomes of a (fair) coin toss are mutually exclusive, exhaustive, and interchangeable, we assign each of these outcomes a probability of 1/2.

Still, it is important to make sure that the Principle is applied correctly, for otherwise one faces the prospect of embarrassing, nonsensical results. The following example illustrates a typical misuse of the Principle (see WIKIPEDIA):

  • Suppose that we know that a cube inside a closed safe has a side length between 3 and 5 cm, but the true value of this length is subject to severe uncertainty.
  • We can safely conclude that the surface area of the cube is between 54 and 150 cm².
  • Similarly, we can safely conclude that the volume of the cube is between 27 and 125 cm³.
  • Applying the Principle to each of these three intervals, we may (erroneously) conclude that the following three assertions are true:
    • The side of the cube is uniformly distributed on the interval [3,5].
    • The surface area of the cube is uniformly distributed on the interval [54,150].
    • The volume of the cube is uniformly distributed on the interval [27,125].

To see clearly why this collective conclusion is wrong, let X denote the random variable representing the length of the side of the cube. Then, the surface area of the cube is represented by the random variable Y = 6X², and the volume of the cube is represented by the random variable Z = X³.

Clearly, these three random variables are not stochastically independent; in fact, they are "highly" dependent on each other, so much so that the value of one uniquely determines the values of the other two. And since the relations between them are not linear, it follows that if one of these random variables is uniformly distributed, then the other two are definitely not uniformly distributed.

In short, if we want to use the Principle to deal with the uncertainty associated with X, Y and Z, then we can apply it to one of these three variables and then use standard tools for the transformation of random variables to determine the distributions of the other two variables. The Principle does not tell us to which one of these three variables it should be applied.
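To make the point concrete, one can compare exact means: if the side X is uniform on [3,5], the mean of the surface area Y = 6X² works out to 98, whereas a uniform random variable on [54,150] must have mean 102, so Y cannot be uniform on that interval. A short sketch:

```python
# If X ~ Uniform(a, b), then E[6*X^2] = 6 * (b^3 - a^3) / (3 * (b - a)).
a, b = 3.0, 5.0
mean_Y = 6 * (b**3 - a**3) / (3 * (b - a))          # exact mean of Y = 6*X**2
mean_uniform_on_range = (6 * a**2 + 6 * b**2) / 2   # mean of Uniform(54, 150)

assert mean_Y != mean_uniform_on_range  # 98 vs 102: Y is not uniform there
```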

The moral of the story narrated in this example is that one must make sure that the Principle is not being applied simultaneously to a number of random variables unless these variables are stochastically independent, or the dependence between them is linear.

The Info-Gap saga

And to illustrate the kind of trouble one might fall into if one is not at home with the modeling aspects of the Maximin paradigm, consider the following:

Assignment 1:
Formulate a Maximin model for the following optimization problem:

This is the Info-Gap robustness model considered in Davidovitch and Ben-Haim (2008). The authors claim that this is not a Maximin/Minimax model. So your assignment is to prove them wrong.

And while you are at it, consider also the following:

Assignment 2:
Explain in detail why it is absurd to consider the Maximin model
as a Maximin/Minimax representation of the optimization problem given in Assignment 1.

The Minimax model given in Assignment 2 is the model developed by Davidovitch and Ben-Haim (2008) as a "proof" that Info-Gap's robustness model is not a Maximin/Minimax model.

Solutions to these assignments and a critique of Davidovitch and Ben-Haim's (2008) Maximin/Minimax modeling effort are available in my paper

Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model

In brief, the fatal error in Davidovitch and Ben-Haim's (2008) analysis is the uncalled for assumption that in Minimax/Maximin modeling the parameter "alpha" of the Info-Gap model must be treated as a fixed number. There are other serious Maximin/Minimax modeling errors in Davidovitch and Ben-Haim (2008).


Similar erroneous Maximin/Minimax formulations of Info-Gap's robustness model can be found in the following publications of the Norges Bank. For your convenience I include the abstracts as well. The full papers are available online:

And talking about Banks, how about the De Nederlandsche Bank?

" ... Info-gap robust-satisficing is motivated by the same perception of uncertainty which motivates the min-max class of strategies: lack of reliable probability distributions and the potential for severe and extreme events. We will see that the robust-satisficing decision will sometimes coincide with a min-max decision. On the other hand we will identify some fundamental distinctions between the min-max and the robust-satisficing strategies and we will see that they do not always lead to the same decision.

First of all, if a worst case or maximal uncertainty is unknown, then the min-max strategy cannot be implemented. That is, the min-max approach requires a specific piece of knowledge about the real world: "What is the greatest possible error of the analyst's model?". This is an ontological question: relating to the state of the real world. In contrast, the robust-satisficing strategy does not require knowledge of the greatest possible error of the analyst's model. The robust-satisficing strategy centers on the vulnerability of the analyst's knowledge by asking: "How wrong can the analyst be, and the decision still yields acceptable outcomes?" The answer to this question reveals nothing about how wrong the analyst in fact is. The answer to this question is the info-gap robustness function, while the true maximal error may or may not exceed the info-gap robust satisficing. This is an epistemic question, relating to the analyst's knowledge, positing nothing about how good that knowledge actually is. The epistemic question relates to the analyst's knowledge, while the ontological question relates to the relation between that knowledge and the state of the world. In summary, knowledge of a worst case is necessary for the min-max approach, but not necessary for the robust-satisficing approach.

The second consideration is that the min-max approaches depend on what tends to be the least reliable part of our knowledge about the uncertainty. Under Knightian uncertainty we do not know the probability distribution of the uncertain entities. We may be unsure what are typical occurrences, and the systematics of extreme events are even less clear. Nonetheless the min-max decision hinges on ameliorating what is supposed to be a worst case. This supposition may be substantially wrong, so the min-max strategy may be mis-directed.

A third point of comparison is that min-max aims to ameliorate a worst case, without worrying about whether an adequate or required outcome is achieved. This strategy is motivated by severe uncertainty which suggests that catastrophic outcomes are possible, in conjunction with a precautionary attitude which stresses preventing disaster. The robust-satisficing strategy acknowledges unbounded uncertainty, but also incorporates the outcome requirements of the analyst. The choice between the two strategies — min-max and robust-satisficing — hinges on the priorities and preferences of the analyst.

The fourth distinction between the min-max and robust-satisficing approaches is that they need not lead to the same decision, even starting with the same information. ..."

Confidence in Monetary Policy
Yakov Ben-Haim and Maria Demertzis
DNB Working Paper 192, 2008.
pp. 17-18

What a mess!

My point is then that the authors demonstrate a gross misapprehension of the modeling aspects of the Minimax/Maximin paradigm. As a result, they attribute to the Minimax paradigm "undesirable" properties that are in fact the properties of the misguided ad hoc instances of Wald's model that they constructed for this comparison.

Had they set out a proper (equivalent) Maximin formulation of their Info-Gap's robust-satisficing model, their analyses would have vanished into thin air.

Moreover, what is the point in reinventing the wheel and a faulty one at that?

This is another illustration of Info-Gap proponents discoursing at length on purported "similarities and differences" between Info-Gap's robustness model and the Maximin model while the clear proof that Info-Gap's robustness model is a simple instance of Wald's Maximin model is staring them in the face.

What a waste of time!

More details on the on-going Info-Gap/Maximin saga can be found in

WIKIPEDIA article on Info-Gap Decision Theory

The interesting material is in the associated WIKIPEDIA Discussion page.

If you wish to join the campaign or just be on my mailing list for this project, send me a note.


I am now working on a short book entitled "Worst-Case Analysis for Decision Making Under Severe Uncertainty". I plan to post it here when it is ready. But do not hold your breath, I have plenty of other more urgent tasks to complete.

However, if you are eager to read this material now, I suggest the following bits and pieces that eventually will be incorporated in the book.

Warning:

This is work in progress. I plan to update the files regularly. Make sure that you have the latest update.

More on this and related topics can be found in the pages of the Worst-Case Analysis / Maximin Campaign, Severe Uncertainty, and the Info-Gap Campaign.

Recent Articles, Working Papers, Notes

Also, see my complete list of articles
    Moshe's new book!
  • Sniedovich, M. (2012) Fooled by local robustness, Risk Analysis, Early View.

  • Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty, International Transactions in Operational Research, 19(1-2), 253-281 (Available free of charge)

  • Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis, Applications of Mathematics, 56(5), 499-509.

  • Sniedovich, M. (2011) Dynamic programming: introductory concepts, in Wiley Encyclopedia of Operations Research and Management Science (EORMS), Wiley.

  • Caserta, M., Voss, S., Sniedovich, M. (2011) Applying the corridor method to a blocks relocation problem, OR Spectrum, 33(4), 915-929.

  • Sniedovich, M. (2011) Dynamic Programming: Foundations and Principles, Second Edition, Taylor & Francis.

  • Sniedovich, M. (2010) A bird's view of Info-Gap decision theory, Journal of Risk Finance, 11(3), 268-283.

  • Sniedovich M. (2009) Modeling of robustness against severe uncertainty, pp. 33-42, Proceedings of the 10th International Symposium on Operational Research, SOR'09, Nova Gorica, Slovenia, September 23-25, 2009.

  • Sniedovich M. (2009) A Critique of Info-Gap Robustness Model. In: Martorell et al. (eds), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079, Taylor and Francis Group, London.
  • Sniedovich M. (2009) A Classical Decision Theoretic Perspective on Worst-Case Analysis, Working Paper No. MS-03-09, Department of Mathematics and Statistics, The University of Melbourne.(PDF File)

  • Caserta, M., Voss, S., Sniedovich, M. (2008) The corridor method - A general solution concept with application to the blocks relocation problem. In: A. Bruzzone, F. Longo, Y. Merkuriev, G. Mirabelli and M.A. Piera (eds.), 11th International Workshop on Harbour, Maritime and Multimodal Logistics Modeling and Simulation, DIPTEM, Genova, 89-94.

  • Sniedovich, M. (2008) FAQS about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne, (PDF File)

  • Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)

  • Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
    This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF file )

  • Sniedovich, M. (2008) A call for the reassessment of Info-Gap decision theory, Decision Point, 24, 10.

  • Sniedovich, M. (2008) From Shakespeare to Wald: modeling worst-case analysis in the face of severe uncertainty, Decision Point, 22, 8-9.

  • Sniedovich, M. (2008) Wald's Maximin model: a treasure in disguise!, Journal of Risk Finance, 9(3), 287-291.

  • Sniedovich, M. (2008) Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model (PDF File)
    In this paper I explain, again, the misconceptions that Info-Gap proponents seem to have regarding the relationship between Info-Gap's robustness model and Wald's Maximin model.

  • Sniedovich. M. (2008) The Mighty Maximin! (PDF File)
    This paper is dedicated to the modeling aspects of Maximin and robust optimization.

  • Sniedovich, M. (2007) The art and science of modeling decision-making under severe uncertainty, Decision Making in Manufacturing and Services, 1-2, 111-136. (PDF File) .

  • Sniedovich, M. (2007) Crystal-Clear Answers to Two FAQs about Info-Gap (PDF File)
    In this paper I examine the two fundamental flaws in Info-Gap decision theory, and the flawed attempts to shrug off my criticism of Info-Gap decision theory.

  • My reply (PDF File) to Ben-Haim's response to one of my papers. (April 22, 2007)

    This is an exciting development!

    • Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.

    • Ben-Haim is not familiar with the fundamental concept point estimate. He does not realize that a function can be a point estimate of another function.

      So when you read my papers make sure that you do not misinterpret the notion point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is say a probability density function and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.

      Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space and therefore a point estimate cannot be say a function. This is incredible!


  • A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
    This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
    It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).

  • A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
    This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File) .
    It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.

  • A math-free explanation of the flaw in Info-Gap. ( December 31, 2006)
    This is a very short article entitled The GAP in Info-Gap (PDF File) .
    It is a math-free version of the paper above. Read it if you are allergic to math.

  • A long essay entitled What's Wrong with Info-Gap? An Operations Research Perspective (PDF File) (December 31, 2006).
    This is a paper that I presented at the ASOR Recent Advances in Operations Research (PDF File) mini-conference (December 1, 2006, Melbourne, Australia).

Recent Lectures, Seminars, Presentations

If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.

Here is a list of relevant lectures/seminars on this topic that I gave in the last two years.


Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.

