If a priori our bodies and minds are designed, i.e. have evolved and continue to evolve, to solve the problem of promoting inclusive fitness, then how do we reconcile behaviors that seem detrimental to health, i.e. that run counter to inclusive fitness? For example, humans can go on hunger strikes and in extreme cases literally kill themselves for ideas (examples from Montague's book). The same can be seen in the actions of those with certain eating disorders. I think the solution to this seeming paradox becomes one for psychology insofar as these individuals actually believe such actions are in the best interest of their genetic material. Neurologically we could explain this as a problem of plasticity and of working memory. The brain must be able to conform, and patterns of spikes must encode the probabilities that represent how good an action death may be relative to the survival instinct represented by hunger. The idea must displace survival drives like gathering food and reproducing. This is almost obvious if one looks at drug addicts.
As humans trying to solve a problem, we often use an experimental method that heuristically determines the optimal behavior for that problem. Once this has been determined we often do not want to "fix what isn't broken"; we believe it must be inefficient to keep making mistakes. However, we often find ourselves repeating trials of behaviors for which we had previously found optimality (or nearly so). How can we reconcile this with a point of view that regards humans as rational (or nearly rational), optimizing economic agents? The answer is two-fold: first, we are ourselves dynamic, constantly evolving through space, time, and hence experience; second, the world separate from our persons is dynamically changing. So our optimization strategy cannot admit a static equilibrium. A book I am currently reading by Don Ross, which I am only halfway through, suggests that these features actually break the classical model of whole humans as persisting economic agents. In Ross's words, "This amounts to the flat denial of anthropocentric neoclassicism" (Ross 2005).
The idea of a static equilibrium as an optimum is thus plainly false; the equilibrium concept itself is suboptimal. We must use an algorithm that reconciles our dynamic experience with the world's dynamic evolution to create strategies that are always (read: often) testing for new possible optima. Thus, we have a maximization problem in which the constraint set changes over time but the goal (the value function) changes over time as well. Our system of equilibrium is then not equilibrium in the colloquial sense, or even in the Nash sense. Our equilibrium state is not "equilibrium" because it is nearly always suboptimal and continuously changing. But this itself is the truly optimal "steady state".
An agent who sticks to a strategy that is optimal today given the current constraints will lose evolutionarily to an agent who optimizes relative to flexible constraints and a dynamic utility structure. This would seem to require a new equilibrium definition as well as a new optimization strategy. This is a concept of stochastic dynamic optimization over a dynamic continuous game. I do not know right now whether such concepts are well known or even studied.
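As a toy illustration of the flavor of such a strategy (a minimal sketch only; the agents, parameters, and drift model are all my own illustrative assumptions, not a serious proposal): an agent that locks in yesterday's optimum is beaten by one that keeps testing, once the environment itself drifts.

```python
import random

def drifting_bandit_demo(steps=5000, epsilon=0.1, alpha=0.1, seed=0):
    """Two agents face a two-armed bandit whose payoff probabilities
    drift over time. The 'static' agent optimizes once and locks in;
    the 'adaptive' agent keeps exploring and discounts old experience."""
    rng = random.Random(seed)
    true_means = [0.5, 0.5]
    est_static, n_static = [0.0, 0.0], [0, 0]
    est_adapt = [0.0, 0.0]
    reward_static = reward_adapt = 0

    for t in range(steps):
        # The world itself evolves: payoff probabilities do a random walk.
        for i in range(2):
            true_means[i] = min(1.0, max(0.0, true_means[i] + rng.gauss(0, 0.01)))

        # Static agent: explore for 100 steps, then lock in the "optimum".
        a = rng.randrange(2) if t < 100 else (0 if est_static[0] >= est_static[1] else 1)
        r = 1 if rng.random() < true_means[a] else 0
        if t < 100:
            n_static[a] += 1
            est_static[a] += (r - est_static[a]) / n_static[a]
        reward_static += r

        # Adaptive agent: epsilon-greedy with a constant step size, so
        # old experience decays and new optima can still be discovered.
        b = rng.randrange(2) if rng.random() < epsilon else \
            (0 if est_adapt[0] >= est_adapt[1] else 1)
        r = 1 if rng.random() < true_means[b] else 0
        est_adapt[b] += alpha * (r - est_adapt[b])
        reward_adapt += r

    print(f"static agent mean reward:   {reward_static / steps:.3f}")
    print(f"adaptive agent mean reward: {reward_adapt / steps:.3f}")

drifting_bandit_demo()
```

The design point is the constant step size: it deliberately never converges, which is "suboptimal" at any instant but is exactly the always-testing steady state described above.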
Thursday, September 13, 2007
Efficient Competition
Efficient competition among intelligent opponents requires irreducibly uncertain behaviors. John Maynard Smith argued that biologically we should see this uncertainty both within and between individuals. It is interesting, however, that game theory provides us only with a theory of static equilibrium in games among two or more players. What about decisions we make (games we play) that seem to involve only ourselves? Do they always involve other parties, or do we model "the world" or "nature" as the other player? These questions seem unanswered under my current understanding; that may change as I gain a better grasp of game-theoretic research, but they are interesting ideas nonetheless.
The implementation of a mixed strategy in decisions we make by ourselves seems a difficult concept to grasp. We must find a neuronal basis for randomness, a basis that shows the fundamentally stochastic nature of our brains and that allows us to implement algorithms requiring mixed strategies and other probabilistic behaviors. I have written on this before, but I believe more work needs to be done to demonstrate this principle unequivocally. Intuition suggests that we are not making decisions randomly even when we might actually be doing just that.
As intelligent agents we must develop algorithms to exploit structure in the environment. If any pattern underlies the world around us, then those who can exploit it most efficiently will be more likely to promote their inclusive fitness. Hence, evolution is driving our brains and behaviors toward optimal pattern recognition and causal-linking abilities. Neuronal predictions of the future and decision-making abilities are directly tied to our ability to link cause to effect and to store this information in our cognitive map of the probabilistic structure of the world. The only problem is that I do not believe people are that good at understanding causal relationships. From a general and unscientific point of view, people seem to see what they want or expect rather than the true relationships. This must be tested experimentally and, if confirmed, explained analytically.
Working Memory
I wrote a few things while I was on vacation this summer, but as I was without a computer these thoughts have been sitting on a notepad for a while now. Thus, this post, along with the others I plan to post today and tomorrow, was not written in one sitting; these are transcriptions from my notepad.
First, from reading Paul Glimcher's book, I reached the hypothesis (along with him, I believe, though I do not speak for him or his book) that computational efficiency of the nervous system is the asymptotic solution toward which evolution drives. Form follows the environmental function and is constantly improving. Thus, our evolution is, in an asymptotic sense, closing in on the limits of computability. "Evolution acts on the algorithmic techniques by which the brain generates behavior" (Glimcher 2004).
Working memory is such that our "changing states of memory, as well as our estimates of the values of any course of action, reflect this dynamic property of our environment" (Glimcher 2004). The dynamic property referred to here is that the prior probabilities in the real world that govern decision making are constantly changing. How the brain maps experience through a computation to a prior probability must be understood. Essentially, we can think of this as: sensory event -> black box -> p_new = f(p_old, new information), where in turn p_old = f(experience). It is the "black box" inside the neuronal structure of the brain that will explain how a decision is really made. We can posit some threshold probability Q such that once P(event A is good) > Q, we will be willing to engage in that event, since we are willing to take the chance that it will be "good" for our inclusive fitness. Similarly, P(event B is bad) > Q would imply we make decisions that prevent engaging in event B. As we build a complex map of nearly all possible behaviors, we can see how complex, long-term goals are driven by probabilities stored in working memory. But these, like the world, are dynamic.
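To make the black box concrete, here is one minimal sketch under the (strong and entirely assumed) hypothesis that it performs something like a Bayesian odds update with a slow decay toward ignorance. The functional form, the parameter values, and the threshold rule are my own illustrative choices, not Glimcher's.

```python
def update_belief(p_old, evidence, likelihood_ratio=4.0, forgetting=0.02):
    """One pass through the 'black box': p_new = f(p_old, new information).

    Assumed form: a Bayesian odds update when evidence arrives, plus a
    small decay toward ignorance (0.5) so that stale priors fade as the
    world changes. Both choices are illustrative, not from the source.
    """
    odds = p_old / (1.0 - p_old)
    odds *= likelihood_ratio if evidence else 1.0 / likelihood_ratio
    p_new = odds / (1.0 + odds)
    return p_new + forgetting * (0.5 - p_new)

def willing_to_engage(p_good, Q=0.8):
    """Engage in the event once P(event is good) exceeds the threshold Q."""
    return p_good > Q

# A run of favorable experiences pushes the stored probability past Q.
p = 0.5
for outcome in [True, True, False, True, True]:
    p = update_belief(p, outcome)
    print(f"p = {p:.3f}, engage: {willing_to_engage(p)}")
```

The forgetting term is the part that captures the dynamic property: without it the stored probability would converge and freeze, which is exactly what a changing world forbids.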
Tuesday, July 24, 2007
Ordinal vs. Cardinal Utility
If we wish to understand how people make decisions, we must understand the concept of utility. How does the brain make comparisons among many different objects? Does every object have a utility value associated with it, such that we can find the utility of a group of objects simply by adding these values? It would seem that purely linear utility does not exist intrinsically in the brain, because we observe decreasing marginal utility. It also seems logical that an additional unit of something is not worth as much to someone who already has a lot of it.
Therefore, in economic theory I have often found an ordinal concept of utility in use. Cardinal concepts always seem to be mentioned but then pushed aside as unimportant, because as long as we can order objects we can apply monotone transformations and preserve all of the meaning. However, we cannot really do the math one might desire with such a utility framework. We might like to know how certain choices rank in a cardinal sense because, for example, this would make complex decisions less complicated: they could be broken down into smaller decisions.
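A quick computational illustration of this limitation (the bundle names and numbers are arbitrary): monotone transformations preserve the ranking of options, but the differences one would want for breaking a complex decision into smaller ones do not survive.

```python
import math

u = {"apple": 2.0, "banana": 5.0, "cherry": 9.0}    # one utility assignment
v = {k: math.log(x) for k, x in u.items()}          # a monotone transform
w = {k: x ** 3 for k, x in u.items()}               # another monotone transform

order = lambda d: sorted(d, key=d.get)
assert order(u) == order(v) == order(w)              # the ordering is preserved

# ...but cardinal comparisons are not. Under u, the step from apple to
# banana (+3) is smaller than banana to cherry (+4); under v the first
# step is the larger one. Ordinal utility cannot adjudicate.
print(u["banana"] - u["apple"], u["cherry"] - u["banana"])   # 3.0 4.0
print(v["banana"] - v["apple"], v["cherry"] - v["banana"])   # ~0.92 ~0.59
```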
The obvious question then becomes how one assigns an objective number of utility units to a given choice. This would be unique to each individual, as everyone has their own utility structure. Furthermore, we must preserve transitivity in order to preserve some semblance of rationality. I think that neuroscience will allow us to understand how the brain creates a utility map and then uses it to make decisions. It might seem that we would have to monitor people's choice behavior and construct the utility structure from observation. However, I believe that if we actually understand how the brain processes new information and updates its internal utility structure, we should be able to create a robust concept of cardinal utility, albeit one somewhat localized to individual behavior. That is, aggregate utility is most likely beyond the reach of this method; however, aggregate utility is itself flawed, so this is not a major issue.
The goal must therefore be to study the action potential sequences generated in the parts of the brain most plausibly linked to cognitive decision making, and to use the sequences recorded while a subject makes choices to build a model of how such sequences are generated for a given choice. We should then be able to predict actual choice behavior if we know the choices to be made and the action potential sequences observed when the subject is presented with the alternatives. From this framework, we should be able to infer a utility structure that is cardinal for the individual being studied.
Thursday, June 21, 2007
Fundamentally Stochastic
I have just finished reading a 2005 paper from Games and Economic Behavior entitled "Physiological utility theory and the neuroeconomics of choice" by Glimcher et al. This is the best paper on neuroeconomics that I have read, and it follows almost exactly the lines I have been pondering for my doctoral research. A thought came to me while reading it that I found quite interesting.
Much data is presented in the paper suggesting that the precise pattern of activity in cortical neurons is stochastic. "The dynamics of exactly when an action potential is generated seems to depend on truly random physical processes." I had encountered this point of view previously in Dayan and Abbott's Theoretical Neuroscience, where action potentials are modeled essentially as Poisson processes. The fact that struck me relates to one of the common sticking points in the analysis of game theory: mixed strategy equilibrium. The (very basic) idea is that actions in a game are in a stable equilibrium if the players impose a probabilistic structure on their choice behavior. The problem I often find when talking to people about this is: it does not matter that an equilibrium exists in which each of two actions is chosen 50% of the time; how does one actually do that, outside of flipping a coin? Furthermore, in one-shot games this concept of equilibrium seems lacking, in that one intuitively thinks a nonrandom choice would be better than simply rolling a die. The interesting idea I gleaned from this paper is that the stochastic nature of a mixed strategy equilibrium may actually be naturally encoded in our actions by the fundamental stochasticity of synaptic transmission.
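To make this concrete, here is a small sketch of my own construction (not from the paper) of how Poisson-like spiking could implement a mixed strategy with no explicit coin flip: two populations race to fire first, with firing rates proportional to the strategy weights.

```python
import random

def first_spike_time(rate_hz, rng):
    """Time of the first spike of a homogeneous Poisson neuron:
    exponentially distributed with the neuron's firing rate."""
    return rng.expovariate(rate_hz)

def choose_action(p_left=0.7, base_rate=50.0, rng=None):
    """Mixed strategy via a race between two noisy populations.

    For exponential first-spike times, the left population fires first
    with probability rate_left / (rate_left + rate_right), which equals
    p_left exactly. The 'randomizer' is just the spiking physiology.
    """
    rng = rng or random.Random()
    t_left = first_spike_time(base_rate * p_left, rng)
    t_right = first_spike_time(base_rate * (1.0 - p_left), rng)
    return "left" if t_left < t_right else "right"

rng = random.Random(1)
picks = [choose_action(p_left=0.7, rng=rng) for _ in range(10000)]
print(picks.count("left") / len(picks))   # approximately 0.7
```

The pleasant property of this particular (assumed) mechanism is that the realized choice frequencies match the intended mixture exactly, so nothing resembling a die roll ever has to be computed explicitly.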
I am essentially saying that the stochastic analysis of game theory is not only elegant, in that it provides equilibrium conditions under the very general settings described by Nash, but is also essential to understanding real human behavior. The physiology of decision making is stochastic in nature. This in itself is enough to reject Descartes' dualism. The paper explicates this rejection, along with a rejection of the view held by many current economists that decision making is divided into two parts: a rational component controlled by the cerebral cortex and an irrational component controlled by a separate part of the brain more associated with emotions. I believe I have even stated this point of view in a previous post; however, that was from my point of view as an amateur in the field of neuroscience.
The current view in biology seems to hold that the brain makes decisions in a more unitary way, and that rational and irrational outcomes come from the same process. Hence, a predictive model must be able to unify both rational and irrational choice behavior and have the means of determining when each will occur. It is this goal that I think will be most fruitful going forward and which I intend to pursue.
Friday, June 15, 2007
Ultimatum Game
I recently began collecting and reading journal articles in various fields related to what I plan to research. I have been looking for articles relating to the relatively new field of neuroeconomics, and the search has not proved as easy as I thought. The real problem is that the majority of the material I have been finding is in psychology journals and is lacking in mathematical content. However, these papers do provide some insight into that area of work and definitely help me understand some of the current problems being pursued. The problem before me is to find a direction for research that involves my interest in neuroscience and economics, but that is also significantly mathematical in nature. This should not be incredibly difficult, and I believe that some of the papers I have been reading, although nonmathematical, will help me to this end.
A 2003 paper in Science titled "The neural basis of economic decision-making in the Ultimatum Game" by Sanfey et al. was quite fascinating to me. They essentially provide empirical evidence for irrational behavior. Therefore, rationality assumptions in economic models may not be accurate in predicting behavior in the real world. Traditionally, the joke is that economic models always make assumptions that never hold in reality. The real difficulty, in my mind, is how best to implement the relaxation of rationality.
A specific example from this paper might be the best way to explain. The Ultimatum Game is one in which a fixed amount of money is to be split between two players. Player A makes an offer to Player B about how to split the money. Player B may accept the offer, in which case each receives their respective payoff, or Player B may reject it, in which case both receive nothing. The game is played only once, and after Player B's decision the game is over. Traditionally we would expect rational players to play such that Player A offers as little as possible and Player B always accepts as long as the offer is a positive quantity of money, because we assume that rational people prefer any amount of money to none. But when a deal was viewed as "too unfair", the experimenters found that the second player rejected offers. Neurologically this was correlated with increased activity in the bilateral anterior insula, an area of the brain often associated with negative emotional responses. They essentially found that negative emotions in the brain can outweigh cognitive decision-making areas and cause one to act irrationally: Player B consciously turns money down in order to spite Player A, whom they believe made an unfair offer. How do we reconcile an economic model with such a result?
It seems that the area that needs to be researched is a better definition of game-theoretic equilibrium. We need to somehow add to models of decision behavior the fact that emotions can pull decisions away from pure rationality. At first glance this seems almost impossible; but that is the point of doing something original, and it will take time and effort. Possibly one can introduce a rationality function that is specific to the decision at hand and responsive to how "fair" or "unfair" a situation might be. Further, the situation is completely different if a game like the Ultimatum Game is played repeatedly.
My simple yet original idea can be explicated by examining the payoff structure of the Ultimatum Game with irrationality. Let the total amount of money to be split be denoted by T. Then the payoff to Player A is defined as P(A) = f(x)·x, where x is the amount that Player A chooses to keep for herself in the deal. We then define y = T - x, the amount offered to Player B. Therefore, P(B) = f(x)·y. We are left to define f(x). We want a function that equals 1 when the deal is accepted by Player B and 0 when it is rejected; this is the rationality function. If we assume pure rationality, then f(x) = 1 for all x. However, we can introduce an infinite family of possible irrational behaviors by Player B that determine the ultimate payout. For example, with T = 10, let f(x) = 1 for x <= 7 and f(x) = 0 for x > 7. This follows the lines of the experiment in the aforementioned paper, in which the majority of 8/2 splits were rejected and the 7/3 splits were accepted. We could further introduce a stochastic component, multiplying the step function by a separate continuous acceptance-probability function, to create an f(x) that more accurately models experimental behavior.
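As a sanity check on this payoff structure, here is a minimal sketch in Python with T = 10. The step threshold matches the example above; the logistic acceptance curve at the end is purely my own illustrative choice for the stochastic component, not a fit to the Sanfey et al. data.

```python
import math
import random

T = 10  # total money to be split

def f_step(x, threshold=7):
    """Deterministic rationality function: Player B accepts any deal
    in which Player A keeps at most `threshold` of the T units."""
    return 1 if x <= threshold else 0

def payoffs(x, f):
    """Payoffs when Player A keeps x and offers y = T - x to Player B."""
    y = T - x
    accept = f(x)
    return accept * x, accept * y    # (P(A), P(B))

for x in range(5, 10):
    print(f"A keeps {x}: payoffs = {payoffs(x, f_step)}")  # 8/2 and worse pay (0, 0)

# One illustrative stochastic variant: acceptance probability falls off
# smoothly as the split becomes more unfair (a logistic curve in x).
def f_logistic(x, midpoint=7.5, steepness=2.0, rng=random.Random(0)):
    p_accept = 1.0 / (1.0 + math.exp(steepness * (x - midpoint)))
    return 1 if rng.random() < p_accept else 0
```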
I believe that something along these lines, more fully developed, could be used along with different notions of equilibrium to create more accurate economic models. We could, for example, attempt to find robust equilibrium conditions that hold even when irrationality is assumed. This would be a giant step forward for macroeconomic theory, where we tend to assume a rational representative consumer in order to aggregate many of the strong results from microeconomics.
Thursday, June 14, 2007
Topology and Analysis
My first exposure to abstract mathematics was in an undergraduate course in topology. The textbook we used was Topology by James Munkres. I did well in the class and perceived my understanding to be rather high. However, I have been going back through the book to solidify my knowledge of the material, and I have come to realize that it is actually very in-depth and that my course did not fully cover what I am now finding on this second pass.
My real analysis course at CU was an undergraduate course that was, to my mind, very much lacking, and I have thus set the goal of teaching myself, in a rigorous manner, the entirety of the analysis necessary for higher-level mathematics. But the question then arose: how best to accomplish this? There seems to be somewhat of a divide among mathematicians over the best book for studying analysis: Royden vs. Rudin. I have chosen to use Real Analysis by H.L. Royden and Real Analysis: Measure Theory, Integration, and Hilbert Spaces (Princeton Lectures in Analysis) by Elias M. Stein and Rami Shakarchi. The latter I chose after reading the first couple of chapters and finding that I rather enjoy these authors' exposition over Royden's.
I am curious how this will turn out relative to the knowledge I might gain from retaking analysis at the graduate level, or from working with someone so that I concentrate on the important parts and get help where I get stuck. Either way, I think I learn best by working on my own, and this self-study should ideally also help prepare me for the independent nature of doctoral research.
The real question that keeps cropping up in my mind is whether expertise in analysis or in topology is more beneficial. Analysis seems more useful to the applied mathematician, of which I count myself one. Hence, why study topology? But in my reading of books on analysis and other areas, I keep returning to Munkres for a topological point of view. The proofs generally seem more intuitive, which helps my understanding of the more analytic proofs. At the same time, however, much of topology seems very abstract and hard to relate to the areas of applied mathematics that interest me.
In any case, I have set goals to work my way through much of my topology book as well as through the entirety of both analysis books. In so doing, I think I will definitely be adequately prepared to start working at much higher levels of mathematics with a more substantial degree of confidence in my abilities.