Lessons from the Edge
“Cellular Automata, Collective Intelligence, and Phase Transition”
In which Bill Sulis discussed brain functioning from a complexity perspective
William Sulis, MD, PhD, McMaster University, Departments of Psychiatry and Psychology
The search for patterns
- Cellular Automata help us examine how the mind operates. “The bulk of my problems are in treating demented elderly,” Sulis explained. “Those problems are not just for my patients but for the systems they impact. They create a new social milieu. The hallmark of dementia is an erosion of conscious processes. People suffering dementia operate mostly unconsciously. I look for dynamics that would facilitate more controlled behavior.”
- Classical Cellular Automata (CA) are stupid. They look at their own state and the state of their neighbors, then follow directions. From such simple rules, CA generate many patterns: some are boringly linear, some periodic, some complex, some apparently random. (A minimal code sketch of such a rule follows this list.)
- Sulis wanted to know what we can say about the dynamics behind the pattern. “On the surface, behavior seems regular. Yet we don’t know what is driving that behavior. What’s important is understanding the patterns as a whole. You need to understand the right set of patterns to solve your problems.”
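To make the CA idea concrete, here is a minimal sketch in Python (my illustration, not code from the talk) of an elementary one-dimensional cellular automaton: each cell looks only at its own state and its two neighbors and applies a fixed rule table. The rule number, width, and starting configuration below are arbitrary choices for demonstration.

```python
# Minimal 1-D elementary cellular automaton (illustrative sketch, not from the talk).
# Each cell updates from its own state and its two neighbors via a fixed rule table.

def step(cells, rule):
    """Apply one synchronous update with wrap-around edges."""
    n = len(cells)
    new = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (me << 1) | right   # neighborhood encoded as a 3-bit number
        new.append((rule >> index) & 1)           # look up that bit of the rule table
    return new

def run(rule=110, width=64, steps=32):
    """Start from a single live cell and print the evolving pattern."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run(rule=110)   # try rule=250 (simple), rule=90 (nested), rule=30 (apparently random)
```

Changing the rule number or the initial row is enough to move among the boringly linear, periodic, complex, and apparently random regimes Sulis describes, which is also why the same rule can produce different patterns from different initial states.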
Patterns in the brain
- How do our brains solve problems? We don’t have a single cell that recognizes “grandma.” Are there assemblies of cells that enable us to recognize things? There’s been some evidence of this.
- Present a monkey with the same stimulus a couple of hundred times, and the neurons record the experience differently from trial to trial. Only the average firing rate offers clear information.
- The brain is extremely context dependent. A rat’s hippocampus fires in a similar pattern each time as long as the rat stays in the same environment. But after the rat has been in a new environment, the pattern of firing changes.
- How, Sulis asked, does the brain understand change has occurred so it can interpret these differences in firing patterns?
- Living things must respond stably in their environments. A deer in the woods must be able to respond to a predator. What’s important is not what the neurons are doing, but what the body as a whole does. CA reflect this ability to produce different patterns from different initial states.
Collective intelligence
- Collective intelligence is adaptive behavior generated by many quasi-independent agents, interacting locally.
- Social insect colonies, for example, operate under two kinds of causal influence: stochastic determinism and interactive determinism. Their decisions are very democratic. Movement to a new location appears random, but it’s not.
- Without hierarchical structures, you should begin to see collective behaviors emerge. The big problem is that agents can only access local information, which can leave them trapped in blind alleys (as sketched below). The environment must be shaped to avoid dead ends and ineffective behavior. Given novel environments, collective intelligence enables people to build remarkable structures.
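As a rough illustration of the local-information problem (my sketch, not Sulis’s model), the Python below puts agents on a one-dimensional “quality” landscape; each agent can only compare its own cell with its two neighbors and moves greedily toward the best value it can see. The landscape values and agent counts are made up for the demonstration.

```python
# Illustrative sketch (not from the talk): agents with only local information get trapped.
# Each agent sees just its own cell and the two adjacent cells of a 1-D "quality" landscape
# and greedily moves toward the best value it can see.

import random

def local_best_move(landscape, pos):
    """Neighboring position (or the current one) with the highest visible value."""
    candidates = [max(pos - 1, 0), pos, min(pos + 1, len(landscape) - 1)]
    return max(candidates, key=lambda p: landscape[p])

def simulate(landscape, n_agents=30, steps=30, seed=1):
    rng = random.Random(seed)
    agents = [rng.randrange(len(landscape)) for _ in range(n_agents)]
    for _ in range(steps):
        agents = [local_best_move(landscape, pos) for pos in agents]
    return agents

if __name__ == "__main__":
    # A landscape with a small local peak (index 3) and a higher global peak (index 12).
    landscape = [0, 1, 2, 3, 2, 1, 0, 1, 2, 4, 6, 8, 9, 8, 6, 4]
    final = simulate(landscape)
    local_peak, global_peak = 3, landscape.index(max(landscape))
    print(sum(p == local_peak for p in final), "agents stuck on the local peak;",
          sum(p == global_peak for p in final), "reached the global peak")
```

Every agent that happens to start near the smaller peak ends up stranded there no matter how long the simulation runs; that is the sense in which purely local information creates blind alleys, and why the environment (or some added randomness) has to do part of the work.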
Clarification & other thoughts
- Consider the phenomenon of emergence. Low-level systems, whether ants or CA, don’t demonstrate much order when examined locally. Only when viewed as a system do significant patterns emerge. We can witness cells forming patterns, and those patterns forming meta-patterns depending on their environments.
- To understand a system, you must understand the patterns it values. Each system decides which patterns are important to it.
- For organizations, the implications are striking. On top of the dynamics we acknowledge, another set of dynamics arises from the way the organization’s human parts interact. Those interactions create a group dynamic that itself interacts with the environment. Imposing demands on that system can be self-destructive if they work counter to the inherent evolved dynamics.
- This is reflected in CAs, where you can’t understand the behavior of a cell except in the context of its game, and you can’t understand the behavior of the game except in the larger context of which it is a part.
- Without recognizing these dynamics, it’s easy to ask a system to do things it simply is not capable of doing.
- With too much control, an organization reduces the opportunities for adaptive action; with too little, it will make foolish mistakes.
Q&A with Ary Goldberger and Bill Sulis
Q: Would it be correct to say that systems with built-in randomness are more stable than systems without?
Ary: Systems with the ability to respond to random signals have an intrinsic ability to remain more stable. To maintain stability, the system has innate variability. Equilibrium is the brink of extinction.
Bill: There are situations in which low levels of noise facilitate better performance. In CA, the presence of noise allows for a high level of recognition. Without noise, systems can lock into a response. Noise keeps that from happening.
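One standard way to see this effect (a stochastic-resonance-style illustration of my own, not an example given in the session) is a threshold detector watching a weak periodic signal. Without noise the detector is locked into a single response and never fires; a low level of noise lets the signal’s rhythm show through; too much noise swamps it. The threshold, amplitude, and noise levels below are arbitrary.

```python
# Illustrative sketch (not from the talk): noise unlocking a threshold detector.
# A sub-threshold sine wave alone never crosses the detection threshold; adding a little
# Gaussian noise produces detections clustered around the signal's peaks.

import math
import random

def detections(noise_level, threshold=1.0, amplitude=0.8, steps=2000, seed=3):
    """Count threshold crossings, and how many of them occur near the signal's peaks."""
    rng = random.Random(seed)
    total = near_peak = 0
    for t in range(steps):
        signal = amplitude * math.sin(2 * math.pi * t / 100)   # weak periodic input
        if signal + rng.gauss(0.0, noise_level) > threshold:
            total += 1
            near_peak += int(signal > 0.5 * amplitude)
    return total, near_peak

if __name__ == "__main__":
    for noise in (0.0, 0.3, 1.5):
        total, near_peak = detections(noise)
        print(f"noise={noise}: {total} detections, {near_peak} of them near signal peaks")
```

With no noise the output is frozen; with low noise most detections line up with the signal (the “high level of recognition”); with heavy noise detections happen everywhere and the signal is lost again.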
Q: Can you sketch out guidelines for a healthy level of noise? Or is it context dependent?
Ary: That’s difficult to say. Until recently, no one recognized the value of noise. How to use noise is experimental. Modeling it is extremely difficult. You can experiment to find out. Epilepsy control might include the introduction of some noise before the brain gets locked into that response. This is an example of the kind of question that would not have been asked before.