Here we talk about games / Games

On this page I will post a few pieces on aspects of game theory. The first one is a beast, above all because of its length… in the future I plan to post shorter things.

 

Game theory: a user’s guide (from my own point of view, of course)

Life exists because there is interaction. Every living being is in contact and interacts with external reality throughout its life cycle, often in a way that is completely unconscious from the intellectual viewpoint, sometimes fully aware of the consequences of its actions. The mathematical theory that deals with interactions is commonly known by the very effective name of game theory: effective, in particular, because a game is a beautiful model of interaction. Game theory has applications in every field and is also a very valuable test of the effectiveness of many methods for solving complex problems. The reason is very simple. A game often has clear, easy-to-understand rules, but it is then terribly difficult to analyse: even in an advanced course in game theory, if students do not have specific software, they can only perform calculations for absolutely trivial games, and even then such calculations may not be too simple. For instance, computing the mixed strategies of a two-player game in which each player has three available strategies is a long and tedious process, despite the extremely limited number of actions available to the players (and if you look at Appendix 1, you will find an interesting story about the game of draughts, or checkers). Beyond this aspect, there is an extremely important question that we have to ask ourselves about this theory, which after all has very recent origins: how reliable is such a mathematical theory?

This is an important question. We are all aware that mathematics obviously introduces many simplifications into its models, but, at least in not particularly sophisticated theories, we are also used to the fact that these simplifications, if the models are well made, do not lead to unpredictable and perhaps devastating results. Errors of approximation and simplification do not preclude great results: the model that carries a space probe around the universe probably includes several approximations, to make the problem tractable, but then, at the end of the day, the results are obtained. What happens in game theory?

We are talking about a theory that, at least at its beginnings, had above all the ambition of describing scientifically the behaviour of the rational man. Now, we see right away that it is difficult even to start, because it is a matter of giving a definition, even if not necessarily a formal one, of the concept of rationality. But this is just the first, tiny step. The next, much harder one immediately awaits us: once we have this definition, or at least an idea of it, how far can the results so obtained be applied to the real world? In other words: when we make people “play”, do they behave in the way prescribed by the theory or not? Are they rational or not? At least when experimenting with simple games, you might think that making the appropriate verifications is not difficult… but this is not so.

Let us perform, or rather imagine, a quick experiment. We take two people and say, publicly, to each of them: “Do you prefer me to give 1 euro to you, or 20 euros to the other player?” Now we can observe different possibilities: for instance, both could say, “Give me 1 euro”, or, on the contrary, both could say, “Give 20 euros to the other player”. It is of course also possible to imagine the case in which one says, “I want 1 euro for me” and the other, “Please give 20 euros to the other player”. The theory, presumably, tells us what the rational outcome of a game like this is, and therefore, if one outcome is the correct one, the other ones are necessarily wrong! Unfortunately, or perhaps fortunately, it is not so simple.

How can an answer be rational, when theory signals it as wrong? The subtle and complicated issue is to be sure of what the players’ goals really are, when confronted with a game like the one we have just described. In other words, what do the two players actually want? The theory quite naturally assumes that, for every player, having more, in monetary terms, is always better than having less. But can I be sure that those who say, “Give 20 euros to the other person” are doing something irrational? To be sure of the conclusions I have obtained through the theory, I need to be sure of how the players evaluate different situations.
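Under the standard assumption that each player's utility is simply the money received, the little game above can be written out and checked mechanically. Here is a minimal sketch in Python; the strategy labels and the payoff matrix are my own reconstruction from the rules just described:

```python
from itertools import product

# "keep" = "give 1 euro to me"; "give" = "give 20 euros to the other player"
STRATEGIES = ["keep", "give"]

def money(mine, theirs):
    """Money a player ends up with, given his own choice and the other player's choice."""
    return (1 if mine == "keep" else 0) + (20 if theirs == "give" else 0)

# The full payoff table, one line per strategy profile.
for s1, s2 in product(STRATEGIES, repeat=2):
    print(f"({s1}, {s2}) -> ({money(s1, s2)}, {money(s2, s1)})")

# "keep" strictly dominates "give": whatever the other player does,
# asking for the 1 euro yields strictly more money...
for theirs in STRATEGIES:
    assert money("keep", theirs) > money("give", theirs)

# ...yet the dominated profile (give, give) pays (20, 20), far better than (keep, keep) = (1, 1).
```

Under the money-only assumption, then, the theory's verdict is clear; the whole question is whether that assumption captures what the players actually care about.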

In a game like this, which is apparently so simple but actually so interesting and challenging (it is a variant of the most famous of all the examples in game theory, the prisoner’s dilemma), there are also other factors that can affect the responses of rational players. For instance, if they are intimately convinced, even unconsciously, that the game will be played repeatedly, then answers that are not rational in the game played only once can become so if the game is repeated.

Another famous and interesting example goes by the name of the “ultimatum game” and can be briefly described as follows: the first player is told that there are 100 euros available and that he must offer a part x of them, with x ≥ 1, to the second player; if the latter accepts, x goes to the second player and the remainder to the first, while if the second player refuses, neither one gets anything.

This game was tested on people (paid to play), who had to accept or reject the offer x and who, during the game, were undergoing a brain MRI scan, to see which areas of the brain were activated. Again, if having more money is better than having less money, even an offer of x = 1 should be accepted. Yet this does not happen in practice, and perhaps it is no coincidence that when the second player was told that the offer had been generated by a computer, the average accepted bids dropped significantly. What determines the rejection of an x that is deemed too low? Apparently, other factors enter into the evaluation a person makes, factors far more imponderable than the merely economic one, and they make it very arbitrary to conclude that the player behaves irrationally.
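For reference, the textbook prediction under money-only preferences follows from backward induction. Here is a minimal sketch in Python, assuming (my own simplification) that offers are whole euros between 1 and 99:

```python
TOTAL = 100  # euros on the table

def responder_accepts(x):
    """A money-maximising responder prefers any x >= 1 euros to nothing."""
    return x >= 1

def proposer_best_offer():
    """Knowing the responder's rule, the proposer keeps TOTAL - x and maximises that."""
    acceptable = [x for x in range(1, TOTAL) if responder_accepts(x)]
    return max(acceptable, key=lambda x: TOTAL - x)

x = proposer_best_offer()
print(f"Predicted split: ({TOTAL - x}, {x})")   # (99, 1): offer the minimum, accept it
```

Real responders, as noted above, routinely reject splits this unequal, which is precisely the point.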

To sum all this up, and neglecting other aspects, important though they are, I often like to say that it is one thing to predict the movements of Jupiter, meant as a planet, and another to make predictions about Jupiter, meant as the god of the Romans: we cannot expect equally reliable results in both cases.

A truly crucial question then arises spontaneously: how useful is a mathematics whose reliability is not clear? The question is rather important and concerns more than just game theory, because other human sciences have recently started using mathematics in a massive way; just think of social choice theory or economics, not to mention medicine and psychology. So, how reliable are these theories?

Game theory – this is my opinion – is not a piece of mathematics that is beautiful but of limited practical use. On the contrary, it is a powerful tool, but it must be used with an extremely critical eye. I would be horrified if, in some international crisis, decisions were made on the basis of pre-packaged models by experts in game theory: the situations at stake are always very complex and have specific aspects that make them unique; it is easy to get some uncertain parameter of the problem wrong and to obtain answers that could in retrospect prove disastrous.

But there are also far more ordinary situations, where such answers might be acceptable. If I have to establish how to allocate certain expenses for the construction of a so-called common good, using the Shapley value is an excellent idea. Furthermore, there are situations in which a game theory expert can certainly help people make sensible decisions, for example in the case of a company wanting to participate in an important auction. Let me stress help, since no one can guarantee winning an auction. I myself was approached by a company that asked me whether I was willing to help them in such a situation – and we could very well have discussed such a collaboration – only to arrive inevitably at “Then, with your advice, are we sure to win the auction?” At that point, my answer was that, as far as I knew, the ten richest men in the world were not experts in game theory, and that was the end of it.
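To give an idea of the kind of allocation the Shapley value produces, here is a minimal sketch in Python for an invented three-party cost-sharing game; the cost figures below are made up purely for illustration:

```python
from itertools import permutations

players = ["A", "B", "C"]

# Hypothetical cost of serving each coalition on its own (e.g. building a shared facility).
cost = {
    frozenset(): 0,
    frozenset("A"): 60, frozenset("B"): 50, frozenset("C"): 40,
    frozenset("AB"): 90, frozenset("AC"): 80, frozenset("BC"): 70,
    frozenset("ABC"): 105,
}

def shapley(player):
    """Average of the marginal cost the player adds, over all orders in which players join."""
    orders = list(permutations(players))
    total = 0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += cost[before | {player}] - cost[before]
    return total / len(orders)

shares = {p: shapley(p) for p in players}
print(shares)                 # {'A': 45.0, 'B': 35.0, 'C': 25.0}
print(sum(shares.values()))   # the shares always add up to the cost of the grand coalition: 105
```

Each party pays according to what it adds, on average, to the coalitions it could join, which is what makes the rule feel fair in cost-allocation problems.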

But there is another point. This theory is not only useful, if employed judiciously, in some specific situations. By teaching game theory, by talking about it on many occasions, in short, by continuing to reflect on it, I have come to the conclusion that this mathematics has an inestimable value, less practical than problem-solving but certainly no less precious: it leads those who know it to see things in a different way than usual, and this different way can be really valuable.

Here are a couple of illuminating examples.

When a rational decision maker is faced with alternatives, having many possible choices is obviously a good thing. If I need to buy a pair of shoes and go into a shop I usually patronise, discovering that it has acquired an extra room, so that in addition to the shoes I usually find there, there are more of different brands, is a pleasant surprise: I have more possible choices, so in the end I will be more satisfied (I am talking about rational players). Mathematically, I am saying a very trivial thing: given a function u that represents the utility I associate with each type of shoe, and given two sets A and B representing the sets of shoes out of which I can choose my pair, if B ⊇ A then the maximum of u on B is certainly not less than the maximum of u on A, which means that I shall not leave the shop less happy when I have more choices.

Let us now consider a group of rational decision-makers. Will the same thing happen? Our intuition, I think, is that if they are rational, they will still be able to choose the right alternatives: how could a greater possibility of choice lead to worse results?

Let’s look at an interesting example, which deals with the traffic problem exemplified in the figure below.

 

[Figure: From Turin to Milan]

Every morning at 7 a.m. a given number of people must go from Turin to Milan, and they have two alternative routes, through two intermediate towns, one further north and one further south. Both routes consist of an urban section, in which the travel time depends on the traffic and is N/100 minutes, where N is the number of cars on that road, and a highway section, where the travel time is fixed at 50 minutes, because it is regulated by speed-limit enforcement (on the northern route the urban section comes first, from Turin to the northern town, followed by the highway to Milan; on the southern route the order is reversed). Suppose now that there are 4000 people travelling, each in a car. How long will they take to go from Turin to Milan? There is no need to know what a Nash equilibrium is to convince oneself that, given the perfect symmetry of the two routes, half of the motorists will take the northern route and the other half the southern one, with the result that each will take 2000/100 + 50 = 20 + 50 = 70 minutes for the trip.

Suppose now that the efficient mayors of the two towns, which are very close to each other, decide to build a very wide road connecting them, so that it becomes possible to get from one town to the other in five minutes. What happens now? Any rational driver would realise that taking the highway to the southern town costs 50 minutes, while passing through the northern town first takes at most 4000/100 + 5 = 45 minutes to reach the southern town. Thus, for him the only rational action is to go through the north. Of course, everyone reasons like this, so in the end everybody's travel time becomes 40 + 5 + 40 = 85 minutes!
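A quick way to convince oneself of these numbers is to write the travel times down explicitly. Here is a minimal sketch in Python of the two scenarios just described; the route layout is the one implied by the story, with the urban section first on the northern route and the highway first on the southern one:

```python
N_CARS = 4000

def urban(n):
    """Urban section: n/100 minutes when n cars use it."""
    return n / 100

HIGHWAY = 50   # fixed highway travel time, in minutes
LINK = 5       # the new road connecting the two intermediate towns

# Without the link: by symmetry, half the drivers go north, half go south.
half = N_CARS // 2
north = urban(half) + HIGHWAY      # Turin -> northern town -> Milan
south = HIGHWAY + urban(half)      # Turin -> southern town -> Milan
print(north, south)                # 70.0 minutes on both routes

# With the link, the urban/link/urban path dominates, so everyone ends up on it.
detour = urban(N_CARS) + LINK + urban(N_CARS)   # Turin -> north town -> south town -> Milan
print(detour)                      # 85.0 minutes for everyone

# No single driver gains by switching back while everyone else stays on the detour:
old_north = urban(N_CARS) + HIGHWAY    # he still shares the first urban section: 40 + 50 = 90
old_south = HIGHWAY + urban(N_CARS)    # he still shares the last urban section: 50 + 40 = 90
assert detour < old_north and detour < old_south
```

So 85 minutes for everyone is indeed an equilibrium, even though 70 minutes each was available before the new road existed.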

What happened? It is no coincidence that this example goes by the name of Braess's paradox: the paradox lies in the difficulty our mind has in accepting that, in interactive situations, even with rational agents, there may exist alternatives that eventually lead to worse results for everyone: the mayors must weigh the pros and cons carefully before building a new road!

Here is an important first lesson from the theory: giving players more choices may be detrimental to them, even if they are smart players!

There is another example, which I consider wonderful, of an unorthodox reflection that a game theorist can make and try to explain. The details are given in Appendix 2; here I only say that it is the notion of correlated equilibrium: the players, who as we know are supposed to be selfish and rational, decide to agree on a probability distribution over the outcomes of the game (this makes sense when there are no pure equilibria, or when they are not very interesting). Then they enact a random mechanism that selects an outcome. Finally, there is a mediator, possibly a suitably programmed computer, which, based on the outcome selected by the random event, tells each player what he should do. If the probability distribution is chosen wisely, the players have no interest in deviating from the recommendation they receive (whatever that recommendation is), and in addition they get a higher utility than at the Nash equilibrium! Where is the beauty of the message that this idea brings? There are at least two extraordinary aspects: first, even in a selfish world a limited, rational collaboration that is beneficial for all may be possible; second, this collaboration is possible only under certain conditions. In this case, the players have to give up some information: the mechanism only works because each player is told only what he is to do, not what the others are told. But this is important: the choice of the players is a conscious one; they are the ones who choose the probability distribution on the outcomes, and they are the ones who accept the mechanism.

To conclude, then, a lucid analysis of the situation, even starting from the (very reasonable) assumption that people basically act in their own interest, shows that it is possible to lay the foundations of a collaboration that can bring benefits to everyone.

 

Appendix 1. A game or a test?

“Checkers is solved” is the title of a paper published in Science in 2007 [Sch]. It has been computed that the game of draughts has approximately 5 × 10^20 possible positions. It is clear that such a game cannot be analysed in its entirety. Yet we know that rational players would always get the same result. In technical terms, we say that the game has an equilibrium in pure strategies, and it is also known that in the case of multiple equilibria (which the theorem does not exclude a priori) the result would still be the same: this depends on the fact that it is a zero-sum game, assuming we award +1 to the winner, −1 to the loser and 0 for a draw. But then the question becomes: which of the three possible results is the actual one? That paper shows that the correct result is a draw: just as in noughts and crosses (tic-tac-toe), two rational players always draw. How was this result proved? The proof is really a product of our times, because it uses a mixed technique: the problem is simplified mathematically, then tackled with powerful computing algorithms, based in turn on typical methods of game theory, in particular backward induction, which however can be applied only to very simplified situations (for instance, a board with a very limited number of pieces on it). The paper is the fruit of at least 20 years of work: the game has been a formidable test bed for making the tools of artificial intelligence increasingly sophisticated.
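For readers curious about backward induction, here is a minimal sketch in Python on a tiny, made-up zero-sum game tree; it has nothing to do with the actual checkers computation, which relies on enormous endgame databases, and the tree below is invented purely for illustration:

```python
# A toy zero-sum game tree: internal nodes are tuples of subtrees, leaves are payoffs
# to player 1 (+1 win, 0 draw, -1 loss). Players alternate, each optimising their own result.
TREE = (
    ((+1, -1), (0,)),
    ((0, -1), (+1, 0)),
)

def backward_induction(node, player1_to_move=True):
    """Value of the node for player 1 when both players play optimally from here on."""
    if not isinstance(node, tuple):          # leaf: the payoff is already known
        return node
    values = [backward_induction(child, not player1_to_move) for child in node]
    return max(values) if player1_to_move else min(values)

print(backward_induction(TREE))   # 0 for this made-up tree: optimal play ends in a draw
```

On a vastly larger scale, and combined with other techniques, this is the kind of reasoning the checkers proof applies to simplified positions.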

Appendix 2. Correlated equilibrium

Consider the game described by the following table:

           Left      Right
High      (7, 2)    (0, 0)
Low       (6, 6)    (2, 7)

There are two players, each with two strategies. For the sake of simplicity, let us say that the first player can choose between High and Low and the second between Left and Right. This game has three Nash equilibria: two in pure strategies and one in mixed strategies. The equilibria in pure strategies consist in playing (High, Left), with utilities (7, 2), or (Low, Right), with utilities (2, 7). The equilibrium in mixed strategies is instead given by the strategy (1/3, 2/3) played by the first player and (2/3, 1/3) by the second one. In this case, the expected utility is 14/3 for both players. In some ways this equilibrium is the most interesting, as it yields the same utility to both players; moreover, as we can see by looking at the table, the two players are in a symmetrical situation and, out of a sense of justice, the equilibria in which the result is the same for both seem better. If the equilibrium in mixed strategies is played, the probability distribution on the outcomes of the game is as follows:

           Left      Right
High       2/9       1/9
Low        4/9       2/9

That is, with probability 2/9 “High, Left” or “Low, Right” will be played, which correspond to the results (7, 2) and (2, 7); with probability 1/9 the two players will choose “High, Right”, with outcome (0, 0), and with probability 4/9 they will play “Low, Left” with result (6, 6).
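These numbers are easy to check with a few lines of code. Here is a minimal sketch in Python using exact fractions; the payoffs are those of the table above:

```python
from fractions import Fraction as F

# Payoffs (player 1, player 2) for each outcome of the table above.
U = {("High", "Left"): (7, 2), ("High", "Right"): (0, 0),
     ("Low",  "Left"): (6, 6), ("Low",  "Right"): (2, 7)}

p = F(1, 3)   # probability that player 1 plays High (makes player 2 indifferent)
q = F(2, 3)   # probability that player 2 plays Left (makes player 1 indifferent)

# Player 1's expected payoff from each pure strategy against q: both equal 14/3.
high = q * U[("High", "Left")][0] + (1 - q) * U[("High", "Right")][0]
low  = q * U[("Low",  "Left")][0] + (1 - q) * U[("Low",  "Right")][0]
print(high, low)            # 14/3, 14/3

# Player 2's expected payoff from each pure strategy against p: also 14/3.
left  = p * U[("High", "Left")][1] + (1 - p) * U[("Low", "Left")][1]
right = p * U[("High", "Right")][1] + (1 - p) * U[("Low", "Right")][1]
print(left, right)          # 14/3, 14/3

# Probability of each outcome under the mixed equilibrium.
outcomes = {(r, c): (p if r == "High" else 1 - p) * (q if c == "Left" else 1 - q)
            for (r, c) in U}
print(outcomes)             # 2/9, 1/9, 4/9, 2/9 as in the table above
```

Each pure strategy earns 14/3 against the opponent's mixture, which is exactly the indifference condition that defines the mixed equilibrium.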

Let’s then try to change the probability distribution on the outcomes. Consider, for instance, this probability distribution:

           Left      Right
High       1/3        0
Low        1/3       1/3

in which each of the three outcomes other than (0, 0) is played with probability 1/3. With this probability distribution the players would end up better off than with the equilibrium in mixed strategies: each of them would now get 1/3 · 7 + 1/3 · 6 + 1/3 · 2 = 15/3 rather than 14/3!

But how could we convince them to accept that kind of probability distribution, since it does not come from a Nash equilibrium? The idea is to invent the following mechanism: we agree that a die is cast by a third person, who observes the outcome and then privately suggests to each player what he should do. This private information forces rational players to update their probabilities on the outcomes (for instance, if the player who chooses the row is told to play the second row, he knows that the outcomes “Low, Left” and “Low, Right” now have the same probability). With this update, the player who chooses the row asks himself: is it better for me to deviate from the recommendation or not? With simple calculations, it turns out that it is never convenient for either player to deviate, whatever the suggestion received privately. This is the brilliant idea of correlated equilibrium.
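Those “simple calculations” can be spelled out explicitly. Here is a minimal sketch in Python that checks, for each private recommendation, that obeying it is at least as good as deviating; the payoffs and the probability distribution are the ones given above:

```python
from fractions import Fraction as F

U = {("High", "Left"): (7, 2), ("High", "Right"): (0, 0),
     ("Low",  "Left"): (6, 6), ("Low",  "Right"): (2, 7)}
# Correlated distribution chosen by the players: (High, Right) is never recommended.
P = {("High", "Left"): F(1, 3), ("High", "Right"): F(0),
     ("Low",  "Left"): F(1, 3), ("Low",  "Right"): F(1, 3)}
ROWS, COLS = ["High", "Low"], ["Left", "Right"]

def row_payoff_if(recommended, played):
    """Row player's expected payoff when told `recommended` but actually playing `played`."""
    total_prob = sum(P[(recommended, c)] for c in COLS)
    return sum(P[(recommended, c)] * U[(played, c)][0] for c in COLS) / total_prob

def col_payoff_if(recommended, played):
    """Column player's expected payoff when told `recommended` but actually playing `played`."""
    total_prob = sum(P[(r, recommended)] for r in ROWS)
    return sum(P[(r, recommended)] * U[(r, played)][1] for r in ROWS) / total_prob

# Obeying the recommendation is always at least as good as deviating.
for rec in ROWS:
    assert all(row_payoff_if(rec, rec) >= row_payoff_if(rec, dev) for dev in ROWS)
for rec in COLS:
    assert all(col_payoff_if(rec, rec) >= col_payoff_if(rec, dev) for dev in COLS)

# Expected utilities under the correlated distribution: 5 each, i.e. 15/3 > 14/3.
print(sum(P[o] * U[o][0] for o in U), sum(P[o] * U[o][1] for o in U))
```

The assertions pass, and the expected utilities come out to 5 each, i.e. 15/3, confirming that the recommendations are self-enforcing and better for both players than the mixed Nash equilibrium.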

  Translated from the Italian by Daniele A. Gewurz

 References

[Ber] Bernardi, G., Lucchetti, R.: È tutto un gioco. Introduzione ai giochi non cooperativi. Francesco Brioschi, Milan (2018)

[Sch] Schaeffer, J. et al.: Checkers is solved. Science 317, 1518–1522 (2007)
