If we wish to understand how people make decisions, we must understand the concept of utility. How does the brain make comparisons among many different objects? Does every object have a utility value associated with it, such that we can find the utility of a group of objects simply by adding these values? It would seem that purely linear utility does not exist intrinsically in the brain, because we can observe decreasing marginal utility. It stands to reason that simply getting more of something is not worth as much to someone who already has a lot of the item.
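Decreasing marginal utility is easy to make concrete. A minimal sketch, assuming a logarithmic utility function (one common, but by no means unique, choice of concave utility):

```python
import math

def utility(x: float) -> float:
    """A hypothetical concave utility function; log is one standard
    way to model decreasing marginal utility."""
    return math.log(x)

# The marginal utility of one more unit shrinks as holdings grow:
for holdings in (1, 10, 100):
    marginal = utility(holdings + 1) - utility(holdings)
    print(f"holdings={holdings:3d}  marginal utility of one more unit = {marginal:.3f}")
```

The point is only that the value of the next unit depends on how much you already have, which a purely linear utility cannot capture.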
Therefore, in economic theory I have often found an ordinal concept of utility in use. Cardinal concepts seem always to be mentioned but then pushed aside as unimportant, because as long as we can order objects we can apply monotone transformations and preserve all of the meaning. However, we cannot really do the math one might desire within such a utility framework. We might like to know how certain choices rank in a cardinal sense because, for example, this would make complex decisions less complicated: they could be broken down into smaller decisions.
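The ordinalist claim is that rankings survive any strictly increasing transformation, so cardinal magnitudes carry no extra information. A small sketch, with made-up utility values for illustration:

```python
# Hypothetical utility values over three bundles (the numbers are arbitrary).
bundles = {"apple": 2.0, "banana": 5.0, "cherry": 3.5}

def rank(util: dict) -> list:
    """Return options ordered from most to least preferred."""
    return sorted(util, key=util.get, reverse=True)

# Any strictly increasing (monotone) transform preserves the ranking,
# even though it destroys the cardinal spacing between values.
transformed = {k: v ** 3 + 7 for k, v in bundles.items()}

print(rank(bundles))
print(rank(transformed))  # same ordering, very different magnitudes
```

This is exactly why ordinal utility supports choice among given alternatives but not the arithmetic (adding, averaging, decomposing) one would want for breaking a complex decision into smaller ones.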
The obvious question then becomes how one assigns an objective number of utility units to a given choice. This assignment would be unique to each individual, since everyone has their own utility structure. Furthermore, we must preserve transitivity in order to preserve some semblance of rationality. I think that neuroscience will allow us to understand how the brain creates a utility map and then uses it to make decisions. It might seem that we would have to monitor people's choice behavior and construct the utility structure from observation. However, I believe that if we actually understand how the brain processes new information and updates its internal utility structure, we should be able to create a robust concept of cardinal utility, albeit one somewhat localized to individual behavior. That is, aggregate utility is most likely beyond the reach of this method; however, aggregate utility is itself flawed, so this is not a major issue.
The goal must therefore be to study the action potential sequences generated in the parts of the brain most plausibly linked to cognitive decision making, and to use the sequences recorded while a subject is asked to make choices to build a model of how such sequences are generated for a given choice. We should then be able to predict actual choice behavior if we know the choices to be made and the action potential sequences observed when the subject is presented with the separate alternatives. From this framework, we should be able to infer a utility structure that is cardinal for the individual being studied.
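As a toy version of this program, suppose (purely as an assumption for illustration) that a neural population fires at a Poisson rate proportional to the subjective utility of the option on offer. Then averaging spike counts over repeated presentations would recover not just an ordering but an estimated cardinal scale:

```python
import random

def simulate_spike_count(rate_hz: float, duration_s: float = 1.0) -> int:
    """Draw a Poisson spike count by summing exponential inter-spike intervals."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate_hz)
        if t > duration_s:
            return count
        count += 1

# Hypothetical mapping: firing rate (spikes/s) proportional to utility.
true_utility = {"A": 20.0, "B": 35.0, "C": 50.0}

random.seed(0)
estimated = {opt: sum(simulate_spike_count(r) for _ in range(200)) / 200
             for opt, r in true_utility.items()}
print(estimated)  # mean counts approximate the underlying rates
```

The rates, options, and proportionality assumption here are all invented; the sketch only shows why recorded spike sequences could in principle yield a cardinal, individual-specific utility estimate rather than a mere ranking.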
Showing posts with label economics. Show all posts
Tuesday, July 24, 2007
Thursday, June 21, 2007
Fundamentally Stochastic
I have just finished reading a 2005 paper from Games and Economic Behavior entitled "Physiological utility theory and the neuroeconomics of choice" by Glimcher et al. This is the best paper on neuroeconomics that I have read, and it follows almost exactly the lines I have been pondering for my doctoral research. An interesting thought came to me while reading it.
The paper presents much data suggesting that the precise pattern of activity in cortical neurons is stochastic: "The dynamics of exactly when an action potential is generated seems to depend on truly random physical processes." I have encountered this point of view previously in Dayan and Abbott's Theoretical Neuroscience, where action potentials are modeled essentially as Poisson processes. The fact that struck me relates to one of the common sticking points in the analysis of game theory: mixed strategy equilibrium. The (very basic) idea is that actions in a game can be in a stable equilibrium if the players impose a probabilistic structure on their choice behavior. The problem I often find when talking to people about this is that it hardly matters that an equilibrium exists in which each of two actions is chosen 50% of the time; how does one actually do that, outside of flipping a coin? Furthermore, in one-shot games this concept of equilibrium seems lacking, in that one intuitively thinks a nonrandom choice would be better than simply rolling a die. The interesting idea I gleaned from this paper is that the stochastic nature of a mixed strategy equilibrium is actually encoded naturally in our actions by the fundamental stochasticity of synaptic transmission.
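One way to see how randomness "for free" could implement a mixed strategy: let two equally driven Poisson spiking pools race, and choose whichever spikes more in a short window. The rates and window below are invented for illustration; by symmetry the result is a roughly 50/50 mixture without the decision maker ever flipping an explicit coin.

```python
import random

def spike_count(rate_hz: float, window_s: float = 0.2) -> int:
    """Poisson spike count in a short decision window."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate_hz)
        if t > window_s:
            return count
        count += 1

def choose(rate_a: float = 40.0, rate_b: float = 40.0) -> str:
    """Pick whichever of two equally driven pools fires more spikes;
    ties are broken at random. Equal drive yields a ~50/50 mixed strategy
    purely from spiking noise."""
    a, b = spike_count(rate_a), spike_count(rate_b)
    if a == b:
        return random.choice("HT")
    return "H" if a > b else "T"

random.seed(1)
freq_heads = sum(choose() == "H" for _ in range(10_000)) / 10_000
print(f"P(heads) over 10,000 trials: {freq_heads:.3f}")
```

Biasing the drive to one pool would shift the mixture away from 50/50, which is how an unequal mixed strategy could be realized by the same mechanism.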
I am essentially saying that the stochastic analysis of game theory is not only elegant, in that it provides equilibrium conditions under very general settings as described by Nash, but is essential to understanding real human behavior. The physiology of decision making is stochastic in nature. This in itself is enough to reject Descartes' dualism. The paper explicates this rejection, along with a rejection of the view held by many current economists that decision making is divided into two parts: a rational component controlled by the cerebral cortex and an irrational component controlled by a separate part of the brain more associated with emotions. I believe I have even stated this view in a previous post; however, that was from my perspective as an amateur in the field of neuroscience.
The current view in biology seems to be that the brain makes decisions in a more unitary fashion, with rational and irrational outcomes arising from the same process. Hence, a predictive model must be able to unify both rational and irrational choice behavior and provide a means of determining when each will occur. It is this goal that I think will be most fruitful going forward, and it is the one I intend to pursue.
Friday, June 15, 2007
Ultimatum Game
I have recently begun collecting and reading journal articles in the various fields related to my planned research. I have been looking for articles on the relatively new field of neuroeconomics, and the search has not proved as easy as I thought. The real problem is that the majority of the material I have found is in psychology journals and is lacking in mathematical content. However, these papers do provide some insight into that area of work and definitely help me understand some of the current problems being pursued. The problem before me is to find a direction for research that involves my interest in neuroscience and economics but is also significantly mathematical in nature. This should not be incredibly difficult to find, and I believe that some of the papers I have been reading, although nonmathematical, will help me to this end.
A 2003 paper in Science titled "The neural basis of economic decision-making in the Ultimatum Game" by Sanfey et al. was quite fascinating to me. They essentially seem to provide empirical evidence for irrational behavior; if so, the rationality assumptions in economic models may not accurately predict behavior in the real world. The traditional joke is that economic models tend to make assumptions that never hold in reality. The real difficulty, to my mind, is how best to implement the relaxation of rationality.
A specific example from this paper might be the best way to explain. The Ultimatum Game is one in which a fixed amount of money is to be split between two players. Player A makes an offer to Player B about how to split the money. Player B may accept the offer, in which case each receives the respective payoff, or Player B may reject it, in which case both receive nothing. The game is played only once, and after Player B's decision the game is over. We would traditionally expect rational players to play such that Player A offers as little as possible and Player B always accepts any positive quantity of money, because we assume that rational people prefer any amount of money to none. But the experimenters found that the second player rejected offers viewed as "too unfair". Neurologically, this correlated with increased activity in the bilateral anterior insula, an area of the brain often associated with negative emotional responses. In essence, they found that negative emotional responses in the brain can outweigh the cognitive decision-making areas and cause one to act irrationally: Player B consciously turns down money in order to spite Player A for making an unfair offer. How do we reconcile an economic model with such a result?
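The rules above are simple enough to state as code. A minimal sketch, with the responder's acceptance rule passed in as a function (the particular rules shown are illustrative assumptions, not data from the paper):

```python
def ultimatum_round(total: int, offer_to_b: int, b_accepts) -> tuple[int, int]:
    """One round of the Ultimatum Game: A proposes a split of `total`;
    B either accepts (payoffs as proposed) or rejects (both get zero)."""
    keep_a = total - offer_to_b
    if b_accepts(offer_to_b):
        return keep_a, offer_to_b
    return 0, 0

# A purely rational responder accepts any positive offer...
rational = lambda offer: offer > 0
print(ultimatum_round(10, 1, rational))   # A keeps 9, B takes 1

# ...while a fairness-sensitive responder rejects lowball offers.
spiteful = lambda offer: offer >= 3
print(ultimatum_round(10, 1, spiteful))   # both walk away with nothing
```

Swapping in different acceptance rules is exactly where the experimental findings enter: the "rational" rule is what standard theory predicts, and the fairness-sensitive rule is what subjects actually do.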
It seems that the area needing research is a better definition of game-theoretic equilibrium. We need to somehow incorporate into models of decision behavior the fact that emotions can pull decisions away from pure rationality. At a glance this seems almost impossible, but that is the point of doing something original, and it will take time and effort. Possibly one could introduce a rationality function that is specific to the decision at hand and responsive to how "fair" or "unfair" a situation might be. Further, the situation is completely different if a game like the Ultimatum Game is played repeatedly.
My simple yet original idea can be illustrated by examining the payoff structure of the Ultimatum Game with irrationality. Let the total amount of money to be split be denoted T. Then the payoff to Player A is defined as P(A) = f(x)·x, where x is the amount that Player A chooses to keep for herself in the deal. We then define y = T − x, the amount offered to Player B in the proposed deal; therefore P(B) = f(x)·y. We are left to define f(x). We want a function equal to 1 when Player B accepts the deal and 0 when Player B rejects it; this is the rationality function. If we assume pure rationality, then f(x) = 1 for all x. However, we can introduce an infinite number of possible irrational behaviors by Player B that will define the ultimate payout. For example, with T = 10, let f(x) = 1 for x ≤ 7 and f(x) = 0 for x > 7. This follows the experiment in the aforementioned paper, where the majority of deals splitting the money 8/2 were rejected and the 7/3 deals were accepted. We could further introduce a stochastic component, using a Dirac delta function multiplied by a separate continuous function, in order to create an f(x) that more accurately models experimental behavior.
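The threshold version of the rationality function is directly computable. A sketch with T = 10, plus a smoothed stochastic variant at the end whose acceptance curve is purely my assumption:

```python
import random

T = 10  # total to be split

def f(x: int) -> int:
    """Deterministic rationality function from the text:
    B accepts whenever A keeps x <= 7 (so B gets at least 3)."""
    return 1 if x <= 7 else 0

def payoffs(x: int) -> tuple[int, int]:
    """P(A) = f(x)*x and P(B) = f(x)*y, with y = T - x."""
    y = T - x
    return f(x) * x, f(x) * y

print(payoffs(7))   # 7/3 split accepted
print(payoffs(8))   # 8/2 split rejected, both get zero

def f_stochastic(x: int) -> int:
    """A hypothetical stochastic variant: acceptance probability falls
    off linearly as the offer to B shrinks below 3 units of 'slack'."""
    p_accept = max(0.0, min(1.0, (T - x) / 3))
    return 1 if random.random() < p_accept else 0
```

Note that under pure rationality (f ≡ 1) the payoffs reduce to the standard split, so the rational model is nested inside this framework as a special case.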
I believe that something along these lines, more fully developed, could be used along with different notions of equilibrium to create more accurate economic models. We could, for example, attempt to find robust equilibrium conditions that hold even when irrationality is assumed. This would be a giant step forward for macroeconomic theory, where we tend to have to assume a rational representative consumer in order to carry over many of the strong results from microeconomics.