Thursday, September 13, 2007

Problems and Concerns with Current Theory

If a priori our bodies and minds are designed, i.e. have evolved, and continue to evolve to solve the problem of promoting inclusive fitness, then how do we reconcile behaviors that seem detrimental to health, i.e. anti-inclusive-fitness behaviors? For example, humans can go on hunger strikes and in some extremes literally kill themselves for ideas (examples from Montague's book). The same can be seen in the actions of those with certain eating disorders. I think the solution to this seeming paradox becomes one for psychology, in so much as these individuals actually believe such actions are in their best interest for the promotion of their genetic material. Neurologically, we could explain this as a problem of plasticity and of working memory. The brain must be able to conform: patterns of spikes must encode the probabilities that imply how good an action death may be relative to the survival instinct represented by hunger. The idea must displace survival drives like gathering food and reproduction. This is almost obvious if one looks at drug addicts.

As humans trying to solve a problem, we often use an experimental method that heuristically determines the optimal behavior for a given problem. Once this has been determined, we often do not want to "fix what isn't broken." We believe it must be inefficient to keep making mistakes. However, we often find that we repeat certain trials of behavior over time in situations where we had previously found optimality (or nearly so). How can we reconcile this with a point of view that regards humans as rational (or nearly rational), optimizing economic agents? The answer is twofold: first, we are ourselves dynamic, constantly evolving through space, time, and hence experience; second, the world separate from our persons is dynamically changing. So our optimization strategy cannot admit a static equilibrium. A book I am currently reading by Don Ross, which I am only halfway through, tends to imply that these features actually break the classical model of whole humans as persisting economic agents. In Ross's words, "This amounts to the flat denial of anthropocentric neoclassicism" (Ross 2005).

The idea of a static equilibrium as an optimum is thus plainly false; the equilibrium concept itself is suboptimal. We must use an algorithm that reconciles our dynamic experience with the world's dynamic evolution to create dynamic strategies that are always (read: often) testing for new possible optima. Thus, we have a maximization problem in which not only the constraint set changes over time but also the goal (the value function). Our system of equilibrium is then not equilibrium in the colloquial sense, or even in the Nash sense. Our equilibrium state is not "equilibrium" because it is nearly always suboptimal and continuously changing. But this itself is the truly optimal "steady state".

An agent who sticks to a strategy that is optimal today given the current constraints will lose evolutionarily to an agent who optimizes relative to flexible constraints and a dynamic utility structure. This would seem to require a new equilibrium definition as well as a new optimization strategy: a concept of stochastic dynamic optimization over a dynamic continuous game. I do not know right now whether such concepts are well known or even studied.
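
The contrast between these two agents can be made concrete with a toy example. The sketch below is my own illustration, not drawn from any of the books cited here: a two-armed bandit whose payoff probabilities switch halfway through (a stand-in for a "dynamic constraint set"), on which a purely greedy agent locks in to yesterday's optimum while an agent that keeps a small testing rate tracks the change. The specific numbers (payoff rates, the 0.1 exploration rate and step size) are arbitrary assumptions for illustration.

```python
import random

def run_agent(epsilon, steps=10000, seed=0):
    """Epsilon-greedy agent on a two-armed bandit whose best arm
    switches halfway through -- a toy 'dynamic constraint set'."""
    rng = random.Random(seed)
    estimates = [0.0, 0.0]  # running value estimates for each arm
    total = 0.0
    for t in range(steps):
        # The true payoff probabilities change at the halfway point.
        probs = [0.8, 0.4] if t < steps // 2 else [0.2, 0.9]
        # With probability epsilon, keep "testing for new optima".
        if rng.random() < epsilon:
            arm = rng.randrange(2)
        else:
            arm = 0 if estimates[0] >= estimates[1] else 1
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        # A constant step size weights recent experience more heavily,
        # which suits a nonstationary world.
        estimates[arm] += 0.1 * (reward - estimates[arm])
        total += reward
    return total / steps

greedy = run_agent(epsilon=0.0)    # commits to the early optimum
explorer = run_agent(epsilon=0.1)  # keeps re-testing; adapts to the switch
```

Running this, the exploring agent earns more per step than the greedy one, precisely because the greedy agent's "equilibrium" strategy was only optimal under the old constraints.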

Efficient Competition

Efficient competition among intelligent opponents requires irreducibly uncertain behaviors. John Maynard Smith said that biologically we should see this uncertainty both within and between individuals. However, it seems interesting that game theory provides us only with a theory of static equilibrium in games among two or more players. What about decisions we make (games we play) that seem to involve only ourselves? Do they always involve other parties? Do we model "the world" or "nature" as the other player? These questions seem unanswered under my current understanding; that may change as I gain a better grasp of game-theoretic research, but they are interesting ideas nonetheless.

The implementation of a mixed strategy in decisions we make ourselves seems a difficult concept to grasp. We must find a neuronal basis for randomness, one that shows the fundamentally stochastic nature of our brains and allows us to implement algorithms that require mixed strategies and other probabilistic behaviors. I have written on this before, but I believe more work needs to be done to demonstrate this principle unequivocally. Intuition would lead one to believe that we do not make decisions randomly even when we might actually be doing just that.
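
Computationally, at least, a mixed strategy is simple: it is just a probability distribution over pure actions from which each play is sampled. The sketch below (my own, for illustration) uses matching pennies, the textbook game whose unique Nash equilibrium is the 50/50 mix, since any bias could be exploited by a predicting opponent. Whether neurons implement something like this sampling step is exactly the open question above.

```python
import random

def play_mixed_strategy(probs, rng):
    """Draw one action from a mixed strategy: a probability
    distribution over pure actions."""
    r = rng.random()
    cumulative = 0.0
    for action, p in probs.items():
        cumulative += p
        if r < cumulative:
            return action
    return action  # guard against floating-point round-off

# Matching pennies: the unique Nash equilibrium mixes 50/50, so an
# opponent gains nothing by trying to predict the next move.
rng = random.Random(42)
strategy = {"heads": 0.5, "tails": 0.5}
plays = [play_mixed_strategy(strategy, rng) for _ in range(10000)]
freq = plays.count("heads") / len(plays)  # close to 0.5 over many plays
```

Note that the individual plays are unpredictable even though the strategy itself (the distribution) is fixed, which is the sense in which a deterministic description of behavior can coexist with irreducibly uncertain actions.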

As intelligent agents we must develop algorithms to exploit structure in the environment. If any pattern underlies the world around us, then those who can exploit it most efficiently will be more likely to promote their inclusive fitness. Hence, evolution is driving our brains and behaviors toward optimal pattern-recognition and causal-linking abilities. Neuronal predictions of the future and decision-making abilities are directly tied to our ability to link cause to effect and store this information in our cognitive map of the probabilistic nature of the world. The only problem is that I do not believe people are that good at understanding causal relationships. From a general and unscientific point of view, it seems that people tend to see what they want or expect, not necessarily the true relationships. This must be tested experimentally and, if confirmed, explained analytically.
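
A minimal version of "exploiting structure" can be sketched as follows (my own toy example, not from the cited sources): an agent that tallies which event tends to follow which builds a first-order transition model, a crude cognitive map, and can then predict a structured world far better than chance. The weather states and their transition probabilities here are invented for illustration.

```python
import random
from collections import defaultdict

def learn_transitions(sequence):
    """Estimate first-order transition probabilities from experience:
    a minimal 'causal map' of which event tends to follow which."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        model[prev] = {k: v / total for k, v in nexts.items()}
    return model

# A world with hidden structure: 'rain' usually follows 'clouds',
# while 'sun' and 'rain' are each followed by sun or clouds at random.
rng = random.Random(1)
seq, state = [], "sun"
for _ in range(5000):
    seq.append(state)
    if state == "clouds":
        state = "rain" if rng.random() < 0.9 else "sun"
    else:
        state = rng.choice(["sun", "clouds"])

model = learn_transitions(seq)  # model["clouds"]["rain"] recovers ~0.9
```

An agent armed with this model predicts rain after clouds and prepares accordingly; an agent that ignores the structure cannot do better than the base rates. The failure mode in the paragraph above would correspond to the tallies being distorted by what the agent expects rather than what it observes.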

Working Memory

I wrote a few things while I was on vacation this summer, but as I was without a computer these thoughts have been sitting on a notepad for a while now. Thus, this post along with the others that I plan to post today/tomorrow were not all from the same sitting, but rather are transcriptions from my notepad.

First, from reading Paul Glimcher's book, I reached the hypothesis (along with him, I believe, though I do not speak for him or his book) that computational efficiency of the nervous system is the asymptotic solution of evolution. Form follows environmental function and is constantly improving. Thus, our evolution is, in an asymptotic sense, closing in on the limits of computability. "Evolution acts on the algorithmic techniques by which the brain generates behavior" (Glimcher 2004).

Working memory is such that our "changing states of memory, as well as our estimates of the values of any course of action, reflect this dynamic property of our environment" (Glimcher 2004). The dynamic property referred to here is that the prior probabilities in the real world that govern decision making are constantly changing. How the brain maps experience through a computation to a prior probability must be understood. Essentially, we can think of this as: sensory event -> Black Box -> p_new = f(p_old, new information), where p_old is in turn a function of experience. It is the "Black Box" inside the neuronal structure of the brain that will explain how a decision is really made. We can posit some threshold probability, denoted Q, such that once p(event A is good) > Q, we are willing to engage in event A, since we are willing to take the chance that it will be "good" for our inclusive fitness. Similarly, p(event B is bad) > Q would imply that decisions are made to ensure we avoid engaging in event B. As we build a complex map of nearly all possible behaviors, we can see how complex, long-term goals are driven by probabilities stored in working memory. But these, like the world, are dynamic.
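
One candidate form for the Black Box's update rule f is Bayes' rule, sketched below. This is my own illustrative guess at f, not Glimcher's proposal; the likelihoods (0.8 vs. 0.4, i.e. favorable observations are twice as likely if the action really is good) and the threshold Q = 0.7 are arbitrary assumptions.

```python
def update_belief(p_old, likelihood_if_good, likelihood_if_bad):
    """Bayes' rule as one candidate for the 'Black Box':
    p_new = f(p_old, new information)."""
    numerator = likelihood_if_good * p_old
    denominator = numerator + likelihood_if_bad * (1.0 - p_old)
    return numerator / denominator

def decide(p_good, Q=0.7):
    """Engage only once the belief that the action is 'good'
    clears the threshold Q."""
    return "engage" if p_good > Q else "withhold"

# Start uncertain; each favorable observation pushes the belief
# toward, and eventually past, the threshold.
p = 0.5
history = []
for _ in range(3):
    p = update_belief(p, likelihood_if_good=0.8, likelihood_if_bad=0.4)
    history.append((round(p, 3), decide(p)))
```

The belief climbs 0.5 -> 0.667 -> 0.8 -> 0.889, so the decision flips from "withhold" to "engage" after the second observation. The dynamic character the post emphasizes enters because p_old itself keeps moving as the world changes, so the same threshold Q can license different actions at different times.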