Wednesday, November 01, 2006

Conference Schedule and Abstracts

Schedule for INNS Texas/MIND/UTA Conference on

Goal-Directed Neural Systems

Friday, November 3, 2006, Nedderman Hall 105

8:30-9:00 Registration

9:00-9:30 Opening remarks

Paul Paulus, Dean of Science, UTA

Robert Kozma, SIG Coordinator, INNS

Daniel S. Levine, Program Chair, MIND

9:30-10:30 Jose Principe, University of Florida, ARRI Distinguished Lecturer

Nonlinear System Identification Using Kernels and Information Theoretic Learning

10:30-11:15 Frank Lewis, University of Texas at Arlington

Optimal Adaptive Control Using Nonlinear Network Learning Structures

11:15-11:30 COFFEE BREAK

11:30-12:30 Donald Wunsch, University of Missouri-Rolla

Combinatorial Complexity, Robustness in Learning, the Game of Go, and Related Applications

12:45-2:00 LUNCH AT THE UNIVERSITY CLUB

2:00-3:00 Leonid Perlovsky, Harvard University

Emotions, Language, and the Knowledge Instinct

3:00-3:45 Yoonsuck Choe, Texas A&M University

Prediction, a Prerequisite to Goal-directed Behavior, and Its Possible Origin in Delay Compensation

3:45-4:00 COFFEE BREAK

4:00-5:00 Paul Werbos, National Science Foundation

From Optimality to Brains and Behavior: The Road Ahead

5:00-5:30 General discussion

Saturday, November 4, 2006, Nedderman Hall 100

9:15-10:00 Gerhard Werner, University of Texas at Austin

Brain Dynamics Across Levels of Organization

10:00-10:45 Derek Harter, Texas A&M University–Commerce

Catalytic Self-Organization of Hierarchies: A Dynamical Systems View of Cognition

10:45-11:00 COFFEE BREAK

11:00-11:45 Risto Miikkulainen, University of Texas at Austin

Constructing Intelligent Agents in Games

12:00-1:45 LUNCH AT AN INDIAN OR MEXICAN RESTAURANT

1:45-2:45 Robert Kozma, University of Memphis

Phase Transition Models of Intentional Cortical Dynamics

2:45-3:30 Ricardo Gutierrez-Osuna, Texas A&M University

Pattern Recognition for Optical Microbead Arrays with a Neuromorphic Model of the Olfactory Bulb

3:30-3:45 COFFEE BREAK

3:45-4:30 Horatiu Voicu, University of Texas Medical Center at Houston

Timing and Homeostatic Plasticity in Cerebellar Information Processing

4:30-5:15 Daniel Levine, University of Texas at Arlington

Brain Pathways for Rule Evaluation and Evolution

5:15-5:45 General discussion

Prediction, a Prerequisite to Goal-directed Behavior, and Its Possible Origin in Delay Compensation

Yoonsuck Choe, Jaerock Kwon, and Heejin Lim

Texas A&M University

choe@cs.tamu.edu

Goal-directed behavior is the hallmark of cognition. An important prerequisite to goal-directed behavior is prediction: in order to establish a goal and devise a plan, one needs to look into the future and predict possible future events. In this presentation, we will consider the possible origin of such a predictive ability in the biological nervous system. The first observation is that there is inevitable delay in the nervous system: axonal conduction delay, membrane integration time, and other forms of delay associated with the electrochemical processes in the cell. These delays, if not compensated for, would leave the organism living hundreds of milliseconds in the past. However, there are indications that the nervous system may be compensating for such delay. Perceptual illusions such as the Flash-Lag Effect (FLE) suggest that extrapolative processes are involved in delay compensation, allowing organisms to live in the immediate present rather than in the past. In the FLE, a moving object is perceived to overshoot an abruptly flashed object even though the two objects are in fact precisely aligned. The overshoot suggests that a motion-extrapolation mechanism is operating. Our idea is that such a delay compensation mechanism, if pushed slightly beyond its normal use, can result in predictive capabilities.

We also propose that single-neuron dynamics, such as facilitating synapses, can provide the neural basis for delay compensation, and we will present early computational results supporting these claims. In sum, prediction may have its humble origin in delay compensation, and, taking a leap of argument, a possibly adverse condition of delay may have contributed to the emergence of goal-directed behavior, which is quite a positive conclusion.
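
To make the mechanism concrete, here is a minimal sketch of a facilitating synapse in the spirit of the Tsodyks-Markram short-term plasticity model. The parameter values, the update rule, and the regular spike train are illustrative assumptions, not the authors' simulation, but they show how efficacy ramps up across a predictable input sequence.

```python
# Minimal sketch of a facilitating synapse (Tsodyks-Markram style).  The
# facilitation variable u grows with each spike and decays between spikes,
# so inputs arriving in quick succession are transmitted ever more strongly,
# amplifying the leading edge of a predictable input stream.  Parameter
# values and the regular spike train are illustrative assumptions.
dt = 1.0                             # time step (ms)
tau_f = 200.0                        # facilitation decay time constant (ms)
U = 0.2                              # baseline utilization per spike
u = U

spike_times = [10, 30, 50, 70, 90]   # presynaptic spikes (ms)
efficacies = []

t, T = 0.0, 120.0
while t < T:
    u += dt * (U - u) / tau_f                        # relax toward baseline
    if any(abs(t - s) < dt / 2 for s in spike_times):
        u += U * (1.0 - u)                           # each spike facilitates
        efficacies.append(round(u, 3))               # efficacy for this spike
    t += dt

print(efficacies)   # grows across the train: later inputs are anticipated
```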

Pattern Recognition for Optical Microbead Arrays with a Neuromorphic Model of the Olfactory Bulb

B. Raman, T. Kotseroglou, M. Lebl, L. Clark, and R. Gutierrez-Osuna

Texas A&M University

rgutier@cs.tamu.edu

We present a biologically inspired approach for sensor-based machine olfaction that combines a prototype chemical detection system based on microbead array technology with a computational model of signal processing in the olfactory bulb. The sensor array contains hundreds of microbeads coated with solvatochromic dyes adsorbed in, or covalently attached to, the matrix of various microspheres. When exposed to odors, each bead sensor responds with intensity changes, spectral shifts, and time-dependent variations associated with the fluorescent sensors. The microbead array responses are subsequently processed using a computational model that captures two key functions in the early olfactory pathway: chemotopic convergence of receptor neurons onto glomeruli, and on-center, off-surround lateral interactions mediated by granule cells. The first circuit, based on Kohonen self-organizing maps, performs dimensionality reduction, transforming the high-dimensional microbead array response into an organized spatial pattern (i.e., an odor image). The second circuit, based on Grossberg's additive model, enhances the contrast of these spatial patterns, improving the separability of odors. The model is validated on an experimental dataset containing the responses of a large array of microbead sensors to five different analytes. Our results indicate that the model is able to improve the separability between odor patterns compared to that available at the receptor or glomerular levels.
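
As a rough sketch of this two-stage architecture, the toy pipeline below trains a small one-dimensional Kohonen map on stand-in "bead" data and sharpens the resulting pattern with a difference-of-Gaussians lateral kernel. The array sizes, the random data, and the convolution shortcut (in place of Grossberg's full additive dynamics) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: chemotopic convergence via a 1-D Kohonen self-organizing map.
# Hypothetical sizes: 200 bead channels converging onto 20 "glomeruli".
n_beads, n_glomeruli, n_samples = 200, 20, 100
X = rng.random((n_samples, n_beads))         # stand-in for bead array responses

W = rng.random((n_glomeruli, n_beads))       # SOM codebook, one row per unit
epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                # decaying learning rate
    sigma = max(1.0, 5.0 * (1 - epoch / epochs))   # shrinking neighborhood
    for x in X:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))
        d = np.abs(np.arange(n_glomeruli) - bmu)
        h = np.exp(-d**2 / (2 * sigma**2))         # Gaussian neighborhood
        W += lr * h[:, None] * (x - W)

# "Odor image": each unit's activation for one sample (distance-based).
odor_image = np.exp(-np.linalg.norm(W - X[0], axis=1))

# Stage 2: on-center, off-surround contrast enhancement.  A difference-of-
# Gaussians kernel stands in for the granule-cell lateral interactions that
# the talk models with Grossberg's additive equations.
idx = np.arange(n_glomeruli)
dog = (np.exp(-(idx - idx[:, None]) ** 2 / 2.0)
       - 0.5 * np.exp(-(idx - idx[:, None]) ** 2 / 18.0))
sharpened = np.maximum(dog @ odor_image, 0.0)   # rectified odor image
print(sharpened.round(3))
```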

Catalytic Self-Organization of Hierarchies: A Dynamical Systems View of Cognition

Derek Harter

Texas A&M University–Commerce

Derek_Harter@tamu-commerce.edu

The mechanisms involved in goal formation, selection, and prioritization have received little attention compared with other cognitive processes. It can be argued that the fundamental properties of intelligent action are in the service of goals. Goal mechanisms play a fundamental role in intentional behavior, but how people manage multiple conflicting goals, drives, and needs remains obscure.

Complex systems theories of the organization of large, heterogeneous networks may be applicable to the processes of behavior production and goal formation in biological organisms. We now think of behavior generation neither as a simple hierarchical organization nor as a single meshwork. Goal formation and organization appear, as in many other complex systems, to combine hierarchies and heterarchies of elements. They therefore share more with the formation of catalytic networks than with systems that are strictly or mostly of one type or the other.

In this talk we discuss the insights that complex systems theories bring to understanding emergent properties of goal mechanisms in biological organisms. We will look at some new ideas, such as viewing cognition as a catalytic process, that might help explain biological organisms' goal formation and management mechanisms. We will also discuss some of the possible neural correlates of the dynamics of such mechanisms, including the K-IV model as a realization of this view of cognition as a catalytic process.

Phase Transition Models of Intentional Cortical Dynamics

Robert Kozma

University of Memphis

rkozma@memphis.edu

Nonlinear dynamic models of intelligent behavior in biological systems are studied. Intentionality includes goal-oriented behavior, but it goes beyond sophisticated manipulation of representations to achieve given goals. Intentionality is a dynamical state endogenously rooted in the agent; it cannot be implanted from outside by any external agency. We propose phase transitions to describe intentional dynamical systems embedded in their environment.

Two types of models will be introduced to describe cognitive phase transitions. One model uses K (Katchalsky) sets, which are based on the solution of ODEs with distributed parameters introduced in the 1970s by W. J. Freeman; we elaborate on a novel higher-order K set. In joint work with B. Bollobás, we address discrete models rooted in random graph theory, called neuropercolation. Scale-free behavior and small-world effects in cortical tissues are important properties exhibited by neuropercolation. Unlike phase transitions in physical systems, neural phase transitions have an intermittent and mesoscopic character; they represent a new class of phenomena observed in living substances. Potential applications in building artificially intelligent agents are outlined.
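
By way of illustration only, the toy model below runs noisy majority-vote dynamics on an Erdős-Rényi random graph, in the spirit of neuropercolation: sweeping the noise level moves the system between ordered (consensus) and disordered regimes. The graph size, noise values, and update rule are assumptions for the sketch, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy noisy-majority dynamics on an Erdos-Renyi random graph.  Each node
# copies its neighborhood majority but errs with probability eps; eps acts
# as a control parameter for an order/disorder transition.  All values
# here are illustrative assumptions.
n, p = 200, 0.05
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1)
A = (adj | adj.T).astype(int)                # undirected graph, no self-loops
degree = A.sum(axis=1)

def order_parameter(eps, steps=200):
    state = rng.integers(0, 2, n)
    for _ in range(steps):
        votes = A @ state                    # count of active neighbors
        majority = (2 * votes > degree).astype(int)
        flip = rng.random(n) < eps           # noisy deviations from majority
        state = np.where(flip, 1 - majority, majority)
    return abs(state.mean() - 0.5) * 2.0     # 1 = full consensus, 0 = disorder

for eps in (0.05, 0.15, 0.30):
    print(f"eps={eps:.2f}  order={order_parameter(eps):.2f}")
```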

Brain Pathways for Rule Evaluation and Evolution

Daniel S. Levine

University of Texas at Arlington

levine@uta.edu

As roles for different brain regions become clearer, a picture emerges of how primate prefrontal cortex executive circuitry influences subcortical decision-making pathways inherited from other mammals. Specifically, the ventral striatum is a “gate” for activating or inhibiting motor behavior patterns under the influence of the amygdala (emotional salience) and hippocampus (context relevance). This gating system is further modulated by a prefrontal network that organizes behavioral actions and avoidances into rules. That network includes at least three prefrontal regions: orbitofrontal as socio-emotional “censor”; anterior cingulate as detector of possible conflicts; and dorsolateral as conflict resolver based on plans and working memory linkages.

Yet an autonomous system also needs to evaluate its own rules and change them if they are no longer satisfactory. In work in progress, Leon Hardy, Nilendu Jani, and I treat behavioral rules as attractors of a Cohen-Grossberg-style dynamical system. We consider a continuous-time simulated annealing algorithm for moving between attractors under the influence of noise that represents “discontent” combined with “initiative.”
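
A minimal sketch of the idea of noise-driven movement between rule attractors: two mutually inhibiting units form a bistable system, and annealed "discontent" noise lets the state hop between attractors before settling. The network size, weights, and annealing schedule here are illustrative assumptions, not the Cohen-Grossberg system used in the work described.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two mutually inhibiting units form a bistable "rule" system: each stable
# attractor corresponds to one rule.  Additive "discontent" noise, slowly
# annealed toward zero, lets the state hop between attractors before
# settling.  All parameter values are illustrative assumptions.
def f(x):                                   # steep sigmoid activation
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))

w_inh, dt, steps = 2.0, 0.05, 4000
x = np.array([0.6, 0.4])                    # near the attractor favoring unit 0
visits = []
for k in range(steps):
    temp = 0.5 * (1.0 - k / steps)          # annealed noise amplitude
    drive = f(np.array([1.0 - w_inh * x[1], 1.0 - w_inh * x[0]]))
    x = x + dt * (-x + drive) + np.sqrt(dt) * temp * rng.normal(size=2)
    x = np.clip(x, 0.0, 1.0)
    visits.append(int(x[0] > x[1]))         # which rule currently dominates

print("final rule:", visits[-1], "| switches:", int(np.abs(np.diff(visits)).sum()))
```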

Optimal Adaptive Control Using Nonlinear Network Learning Structures

F.L. Lewis, M. Abu-Khalaf, A. Al-Tamimi, D. Vrabie

Automation & Robotics Research Institute, University of Texas at Arlington

http://arri.uta.edu/acs

Adaptive control systems use on-line tuning methods to produce feedback controllers that stabilize systems without knowing the system dynamics. Traditionally, some severe assumptions on the system structure are needed, such as linearity in the parameters. It is by now known how to relax such assumptions using neural networks as nonlinear approximators.

Most developments in intelligent control with rigorously verifiable performance, including neural network and fuzzy logic approaches, have centered on using the approximation properties of these nonlinear network structures in feedback-linearization-type control system topologies, possibly involving extensions using backstepping, singular perturbations, dynamic inversion, etc.

However, naturally occurring and biological systems are optimal, for they have limited resources in terms of fuel, energy, or time. Likewise, many man-made systems, including electric power systems and aerospace systems, must be optimal due to cost and limited-resource factors.

Unfortunately, feedback linearization, backstepping, and standard adaptive control approaches do not provide optimal controllers.

On-line methods are known in the computational intelligence community for dynamic programming using neural networks, which solve for the optimal cost in Bellman's relation by on-line tuning with numerically efficient techniques rooted in "approximate dynamic programming" or "neurodynamic programming." It is also known that if the neural network takes the control signal, as well as the state, as an input, then the system dynamics can in fact be unknown; only the performance measure need be known.
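
As a concrete instance of this idea, the sketch below runs Q-learning-style policy iteration on a small discrete-time linear-quadratic problem: the learner fits a quadratic Q-function from (state, control, cost, next-state) data and never uses the plant matrices, which appear only to generate the data. The particular system, the batch least-squares solver, and all parameter values are assumptions for illustration, not the designs presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Q-learning-style policy iteration for a small discrete-time LQR problem.
# The learner only sees (x, u, cost, x_next) tuples; the plant matrices
# A, B exist solely to generate data.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)                  # stage cost x'Qc x + u'Rc u

def feats(x, u):
    z = np.concatenate([x, u])                 # z = [x; u]
    return np.outer(z, z)[np.triu_indices(3)]  # quadratic basis for Q(x, u)

K = np.zeros((1, 2))                           # initial stabilizing policy u = -Kx
for it in range(10):
    Phi, targets = [], []
    for _ in range(60):
        x = rng.normal(size=2)
        u = -K @ x + rng.normal(scale=0.5, size=1)    # probing noise
        xn = A @ x + B @ u
        cost = x @ Qc @ x + u @ Rc @ u
        # Bellman identity Q(x,u) = cost + Q(xn, -K xn), linear in Q's params.
        Phi.append(feats(x, u) - feats(xn, -K @ xn))
        targets.append(cost)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = np.zeros((3, 3))
    H[np.triu_indices(3)] = theta
    H = (H + H.T) / 2.0                        # recover symmetric Q kernel
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])  # policy improvement step

print("learned gain K:", K.round(3))           # approaches the LQR-optimal gain
```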

In this talk, a framework is laid for rigorous mathematical control systems design with performance guarantees using such nearly optimal control methods for on-line tuning. Both discrete-time and continuous-time systems are considered. Connections are drawn between the computational intelligence and feedback control theory approaches.

Some effective recent design methods are given for solving Hamilton-Jacobi equations on-line using learning methods to obtain nearly optimal H2 and H-infinity robust controllers. The result is a class of adaptive controllers that converge to optimal controllers.

Constructing Intelligent Agents in Games

Risto Miikkulainen

University of Texas at Austin

risto@utexas.edu

Whereas early research on game playing focused on utilizing search and logic in board games, machine-learning-based techniques have recently become a viable alternative. In many games, intelligent behavior can be naturally captured through interaction with the environment, and techniques such as evolutionary computation, neural networks, and reinforcement learning are well suited for this task. In particular, neuroevolution, i.e., constructing neural network agents through evolutionary methods, has shown much promise in many game domains.

Based on sparse feedback, complex behaviors can be discovered for single agents and for teams of agents, even in real time. In this talk, I will review recent advances in neuroevolution, and demonstrate how it can be used to construct intelligent agents in a new genre of video games.
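
For readers new to the technique, here is neuroevolution in its simplest form: evolving the weights of a fixed two-hidden-unit network on XOR with an elitist strategy. The task, topology, and hyperparameters are assumptions for illustration; methods such as NEAT, developed in Miikkulainen's group, also evolve the network topology.

```python
import numpy as np

rng = np.random.default_rng(4)

# Neuroevolution in its simplest form: evolve the 9 weights of a fixed
# 2-2-1 network to solve XOR with an elitist (mu + lambda) strategy.
# Task, topology, and hyperparameters are illustrative assumptions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(genome, x):
    W1 = genome[:6].reshape(2, 3)            # hidden layer: 2 inputs + bias
    W2 = genome[6:].reshape(1, 3)            # output: 2 hidden units + bias
    h = np.tanh(W1 @ np.append(x, 1.0))
    return 1.0 / (1.0 + np.exp(-(W2 @ np.append(h, 1.0))[0]))

def fitness(genome):
    preds = np.array([forward(genome, x) for x in X])
    return -np.mean((preds - y) ** 2)        # negative MSE: higher is better

pop = rng.normal(size=(50, 9))
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]    # keep the 10 fittest genomes
    children = elite[rng.integers(0, 10, 40)] + rng.normal(scale=0.3, size=(40, 9))
    pop = np.vstack([elite, children])       # (mu + lambda) replacement

best = max(pop, key=fitness)
print([round(forward(best, x), 2) for x in X])   # should approach [0, 1, 1, 0]
```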

Emotions, Language, and the Knowledge Instinct

Leonid Perlovsky

Harvard University

leonid@deas.harvard.edu

Language is widely considered a mechanism for communicating conceptual information. The emotional content of language is less appreciated within the scientific community; its role in intelligence, its evolutionary significance, and the related mechanisms are less well known. The talk will discuss the primordial, undifferentiated conceptual-emotional mechanisms of animal cries and the evolution of language toward richer conceptual content and diminished emotional content. The brain mechanisms involved will be described and related to a mathematical description of the knowledge instinct's mechanisms of differentiation and synthesis, which are responsible for adaptation and evolution of cognition. The emotional content of language will be connected to language grammar. Effects on individual adaptation and cultural evolution will be analyzed, including cooperation and opposition between differentiation and synthesis. The cultural advantages of "conceptual" pragmatic cultures, in which differentiation overtakes synthesis, resulting in fast evolution at the price of self-doubt and internal crises, will be compared to those of "emotional" cultures, with differentiation lagging behind synthesis, resulting in cultural stability at the price of stagnation.

The talk will also briefly describe mathematical approaches to cognition and language; difficulties related to the combinatorial complexity of algorithms and neural networks used in the past; and new approaches overcoming these difficulties using neural modeling fields and dynamic logic. Mathematical results are related to cognitive science, linguistics, and psychology.

Nonlinear System Identification Using Kernels and Information Theoretic Learning

Jose C. Principe

University of Florida

principe@cnel.ufl.edu

This talk will introduce a new methodology for nonlinear system identification based on reproducing kernel Hilbert spaces (RKHS). Recent work in Information Theoretic Learning at the Computational NeuroEngineering Laboratory led to the definition of a new similarity function called correntropy, which can be used like correlation but includes higher-order statistics about the data. This function defines a new reproducing kernel Hilbert space in which optimal nonlinear solutions in the input space can be obtained in closed form, hence extending the Hammerstein and Wiener models.

The RKHS defined by correntropy is different from the RKHS proposed by Parzen to study Gaussian processes and also from the RKHS normally utilized in kernel methods. The talk will present the properties of correntropy and the relationships among these different RKHSs. Applications to the correntropy Wiener filter and correntropy PCA, along with other interesting uses of the new methodology, will be presented.
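
For reference, the sample estimator of correntropy with a Gaussian kernel is V_hat(X, Y) = (1/N) * sum_i kappa_sigma(x_i - y_i). The sketch below computes it for a toy signal pair and contrasts its outlier robustness with ordinary correlation; the signals and kernel size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample estimator of correntropy with a Gaussian kernel:
#   V_hat(X, Y) = (1/N) * sum_i kappa_sigma(x_i - y_i)
# The kernel expansion brings in higher-order (even) moments of the error
# x - y, which plain correlation ignores.
def correntropy(x, y, sigma=1.0):
    e = np.asarray(x) - np.asarray(y)
    k = np.exp(-e**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return k.mean()

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * rng.normal(size=t.size)
outliers = noisy.copy()
outliers[::50] += 10.0                       # heavy-tailed contamination

# The Gaussian kernel saturates for large errors, so correntropy is far
# less perturbed by outliers than ordinary correlation.
print("corr  noisy/outliers:", np.corrcoef(clean, noisy)[0, 1].round(3),
      np.corrcoef(clean, outliers)[0, 1].round(3))
print("V     noisy/outliers:", round(correntropy(clean, noisy), 3),
      round(correntropy(clean, outliers), 3))
```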

Timing and Homeostatic Plasticity in Cerebellar Information Processing

Horatiu Voicu

University of Texas Medical Center at Houston

horatiu@voicu.us

The organized and repetitive cerebellar architecture is amenable to an input/output description that interests both neuroscientists and roboticists. In this talk I will present a series of experimental and computational studies that address the input/output function of the cerebellum by investigating timing and homeostatic processes at the cellular and circuit levels. The computational setup consists of a detailed simulation of the cerebellum that includes more than 10,000 integrate-and-fire neurons.
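
For readers unfamiliar with the model class, below is a single leaky integrate-and-fire unit of the kind such a simulation composes by the thousands. The parameter values are generic textbook choices, not those of the cerebellar model.

```python
# A single leaky integrate-and-fire unit: the membrane voltage decays
# toward rest, integrates input current, and emits a spike (then resets)
# when it crosses threshold.  Parameter values are generic textbook
# choices, not those of the cerebellar model.
dt, T = 0.1, 200.0                       # time step and duration (ms)
tau_m = 10.0                             # membrane time constant (ms)
v_rest, v_th, v_reset = -65.0, -50.0, -70.0   # potentials (mV)
r_m = 10.0                               # membrane resistance (MOhm)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    t = step * dt
    i_in = 2.0 if 50.0 <= t < 150.0 else 0.0    # 100 ms current pulse (nA)
    v += dt * (-(v - v_rest) + r_m * i_in) / tau_m
    if v >= v_th:                        # threshold crossing: spike and reset
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes else "no spikes")
```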

The experimental setup is the rabbit eyelid conditioning paradigm which is both simple enough to produce robust behavioral responses and sophisticated enough to unravel the intricacies of cerebellar information processing.

Since an increasing number of experimental studies suggest that the cerebellum contributes to cognitive processing and emotional control in addition to its role in motor coordination, such detailed information about cerebellar processing could serve as a starting point for a deeper understanding of the neural basis of cognition.

From Optimality to Brains and Behavior: The Road Ahead

Paul Werbos, National Science Foundation

pwerbos@nsf.gov

The principle of optimality has been of tremendous value in unifying and making coherent our causal understanding of basic physics. There is growing reason to believe it can do the same for brains and behavior, though of course the brain can only learn to improve its approximation to generating optimal behavior; it is not as exact as the universe. Optimality is particularly important in helping us understand how organisms can actually become more and more successful in achieving their goals, which is especially important when we want to understand and enhance the success of our own intelligence. See www.werbos.com.

Previous speakers have summarized important practical progress by engineers in advancing optimality in machine intelligence far beyond the "Q learning" level. The immediate challenge there is to make the new methods more accessible to more users, through more dissemination and better, more reliable toolkits. Formal mathematical analysis can be important in making tools more robust and in helping users understand how to teach these tools and use them better. Only if enough people join this enterprise can neuroscience and psychology derive full benefits from them. The larger challenge is to upgrade the best adaptive critic designs for even greater spatial complexity, temporal complexity, and creativity/imagination; principles have been described for doing so with all three. Temporal complexity is crucial to new emerging insights into prefrontal cortex. But for now, spatial complexity may be the easiest place to move ahead, given testbeds in image and video processing, in electric power grids, and in games like Go, and new tools for new types of recurrent network. Ultimate exploitation of symmetry to handle spatial complexity in brains must exploit new principles of reverberatory generalization, multiplexed or "metaphorical" prediction, and multimodular parallel use: network topology properties far beyond today's neuroscience models, though the human brain seems to include only the first two.

Brain Dynamics Across Levels of Organization

Gerhard Werner

University of Texas at Austin

gwer1@mail.utexas.edu

After presenting evidence that the electrical activity recorded from the brain surface can reflect metastable state transitions of neuronal configurations at the mesoscopic level, I will suggest that their patterns correspond to the distinctive spatio-temporal activity of the Dynamic Core (DC) and the Global Neuronal Workspace (GNW), respectively, in the models of the Edelman group on the one hand and of Dehaene-Changeux on the other. In both cases, the recursively reentrant activity flow in intra-cortical and cortical-subcortical neuron loops plays an essential and distinct role. I will then give reasons for viewing the temporal characteristics of this activity flow as a signature of self-organized criticality. This has two implications: 1) it enables the use of statistical mechanics approaches for exploring phase transitions, scaling, and universality properties of the DC and GNW, with relevance to the macroscopic electrical activity in EEG and EMG; 2) it suggests critical dynamics in percolation and cellular automata of the type recently studied by Kozma et al. as appropriate models of the neurodynamics in the neuropil of cortical-subcortical reentry circuitry.

Combinatorial Complexity, Robustness in Learning, the Game of Go, and Related Applications

Donald Wunsch

University of Missouri-Rolla

dwunsch@umr.edu

True computational complexity is not really present in most games, because evaluation is easily accomplished. Even the chess game tree, with about 10^120 leaves, yields to brute-force approaches. In real life, this is not the case; nor is it the case in Go. This makes Go too difficult for existing search techniques to solve, even when combined with expert knowledge bases, and it motivates the investigation of alternatives. This talk will discuss the game of Go and its subtleties, approaches to dealing with its challenges, and related lines of research.
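
For a back-of-the-envelope sense of the gap, the snippet below turns commonly cited branching factors and game lengths into rough tree sizes; the b and d figures are textbook estimates used only to show scale.

```python
import math

# Rough game-tree sizes from branching factor b and typical game length d:
# size ~ b**d.  The b and d figures are common textbook estimates, used
# here only to show the scale of the gap between chess and Go.
games = {"chess": (35, 80), "go (19x19)": (250, 150)}
for name, (b, d) in games.items():
    print(f"{name:>11}: ~10^{d * math.log10(b):.0f} positions")
```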
