As the author admits, these are only preliminary and somewhat incoherent notes (they are for him, not so much for us as readers) - but I find his work interesting in that it creates a neuronal link between subjective experience and external reality.

From "Simple Animal":

The nervous system of a simple animal is connected both to the external world and to the animal’s interior milieu.
Its neural net is divided into at least four partitions. Each partition has two classes of neurons. One is linked to the world outside the nervous system while the other is linked only to other neurons. The latter neurons may be linked to neurons within the partition, neurons in other partitions, or both. These partitions are as follows, identified by their external links:
External Effectors: Produces effects in the external world through coupling to the motor system.
External Sensors: Senses the state of the external world through sensors of various types.
Internal Effectors: Produces effects in the internal milieu by secreting chemicals or affecting the states of muscle fibers.
Internal Sensors: Senses the internal milieu through appropriate sensors.
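For readers who think in code, here is a minimal sketch of that four-partition scheme as a data structure; the class and all the names are mine, not the author's:

```python
from dataclasses import dataclass, field

# Hypothetical encoding (names mine) of the four-partition scheme above.
# Each partition holds two classes of neurons: "linked" neurons couple
# to the external world or the internal milieu; "interior" neurons
# connect only to other neurons, in this partition or in others.

@dataclass
class Partition:
    name: str
    linked: set = field(default_factory=set)    # neurons with external links
    interior: set = field(default_factory=set)  # neuron-to-neuron links only

nervous_system = [
    Partition("external_effectors"),  # couples to the motor system
    Partition("external_sensors"),    # senses the outside world
    Partition("internal_effectors"),  # secretion, muscle-fiber states
    Partition("internal_sensors"),    # senses the internal milieu
]
```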
Who knows if he is right - but it's cool that he is sharing his thoughts with the public as he is developing them.
Here is the brief introduction and then the section he recommends one read first to get the gist of his project - the whole article is 52 pages, so hopefully the author will not mind these brief quotes, offered to generate some interest.
William Benzon
affiliation not provided to SSRN
March 6, 2011
Abstract:
These notes explore the use of Sydney Lamb’s relational network notation for linguistics to represent the logical structure of a complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. A mind is a fluid attractor net of fractional dimensionality over a neural net whose behavior displays complex dynamics in a state space of unbounded dimensionality. The attractor net moves from one discrete state (frame) to another while the underlying neural net moves continuously through its state space.
I. Introduction
This document consists of working notes on a new approach to thinking about minds, both animal and human. As such these notes are primarily for the purpose of reminding me of what I have been thinking on these matters. They are not written, alas, in a way designed to convey these ideas to others.
Thus, they are variously rambling, inconsistent, vague, incomplete, and, no doubt, wrong-headed in places.
I would recommend that interested readers go through the notes in this order:
1. “Simple Animal” (the third section of notes, III)
2. “Lamb Notation” (the second section of notes, II)
3. “Minds in Nets” (VII)
While that is not the order in which I wrote the notes, the “Simple Animal” notes do a bit better job on the basic issues than the “Lamb Notation” section, which I had written first. Perhaps one should then read the rather long final section “Minds in Nets” (VII), which suggests some of the larger implications of this conception. If energy holds, I suggest the section on “Consciousness and Control” (VI). The section “Brief Notes” (V) is just that, and is dispensable.
The section on “Assignment” (IV) discusses one particular construction in some (not entirely satisfactory) detail.
Read the whole article as a free PDF.

III. Attractor Nets: Toward a Simple Animal
By attractor net (A-net) I mean the relationships among the attractors of a neural net. A net is said to be stressed if it is not at “equilibrium” - or perhaps that should be “at frame.” In general, stress is applied to an NFA from outside. When a net is at frame, it is in one of its attractor basins.
[Note: Recall the note at the bottom of page 4 about the term "equilibrium" being a misleading one.]
Homogeneous Attractor Nets
An attractor net is said to be homogeneous if all of its attractors can be related to one another through logical ‘or’. Thus:
In this diagram the rectangle is some neural net while the superimposed graph is a homogeneous attractor net. The zero node indicates “equilibrium” or “at frame.”[2] We can think of nodes s1, s2, and s3 as different patterns of stress on the network. Each pattern of stress is associated with a different path to frame, where the paths terminate at different points in the system’s state space. Those points are attractors, a1, a2, a3, with which I have labeled the arcs. The stressors are linked to the frame state through an ‘or’ connector (in Lamb’s notation).
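As a concrete gloss (the encoding is mine, not the author's or Lamb's), the diagram's homogeneous A-net can be written as a small lookup: each stress pattern resolves through the single 'or' to the frame node along its own attractor path.

```python
# The homogeneous A-net of the diagram as a plain mapping (encoding mine):
# each stress pattern s1..s3 relaxes to the frame node 0 along an arc
# labeled with its attractor, and the arcs meet at a single 'or'.

FRAME = 0  # "at frame" / "equilibrium" node

or_nection = {
    "s1": "a1",  # stress s1 follows the trajectory to attractor a1
    "s2": "a2",
    "s3": "a3",
}

def relax(stress):
    """Resolve a stress pattern to (attractor reached, frame node)."""
    return or_nection[stress], FRAME

print(relax("s2"))  # ('a2', 0)
```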
What is stress in this context? Where the neurons are regulating the state of muscle fibers, I am inclined to think of stress as the difference between the current state of the muscle fibers and the desired state. That is, it is error, in the sense that Powers uses the term in his control theory account of behavior.
If I do that on the motor side, then I would like to do it on the sensory side as well. In effect, sensory systems are designed to respond to the difference between expected sensations and actual sensations. The general role of preafference in the regulation of behavior argues in favor of this view. It is not clear to me how far such preafference extends toward the periphery. I believe there is some evidence in the auditory system that preafference extends to the inner ear. In the visual system I believe there are efferent fibers in the optic nerves, though I’m not sure whether anyone has good ideas about what those optic efferents are doing. I don’t have any notions about the other sensory systems.
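A toy version of that error signal, in the spirit of Powers' control loops (the gain, target, and step count are arbitrary illustration values of mine):

```python
# Toy Powers-style control loop: "stress" is the difference between the
# desired and current state of a muscle fiber, and the effector acts to
# reduce it. Gain and iteration count are arbitrary illustration values.

def control_step(current, desired, gain=0.5):
    stress = desired - current        # error, in Powers' sense
    return current + gain * stress    # effector output reduces the error

state, target = 0.0, 1.0
for _ in range(6):
    state = control_step(state, target)
    print(round(state, 3))            # converges toward the target: 0.5, 0.75, ...
```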
[Notice how this notation transparently expresses Lamb’s observation that the only real “content” for such a network is at the periphery. Arc labels in the net are just a notational convenience that makes it easier to read the diagram.]
Having said all that, I propose to use a slightly different notation in these notes, as follows:
As a convenience we use nodes a1, a2, and a3 to represent attractors. The arcs will be left unlabeled.
The danger of this notation is that it tempts one to reify attractors into physical things, like neurons, or collections of neurons, or collections of synapses scattered about in some population of neurons. This temptation must be avoided.
Neural Interpretation, Mine & Lamb’s
Now let us think about this at the neural level and compare this with what I take to be Lamb’s neural interpretation of his notation. We’ve got a meshwork of tightly interconnected neurons. The net can be said to be at frame when, in Hays’ formulation, inputs have been “accounted for.” The net will have arrived at one of its attractor states and so aroused a gestalt that “absorbs” the externally induced stress, the input. This, of course, is some particular pattern of activity in the net. Each attractor state corresponds to a different pattern of neural activity.
An attractor node, then, represents some state, or set of states (the so-called attractor basin), of the neural net. The arcs connecting an A-node to the 0-node through the logical operator thus correspond to trajectories through the state space. Neither the attractors, the operators, nor the trajectories are physical things that one could discover through dissection and visual inspection. Rather, they are the salient aspects of the topology of the net’s phase space.
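As one concrete illustration (mine, not the author's), a tiny Hopfield-style net shows these notions in miniature: stored patterns act as attractors, and repeated updates trace a trajectory through state space until the net settles, i.e., is "at frame".

```python
import numpy as np

# A tiny Hopfield-style net (my illustration, not the author's model):
# stored patterns are attractor states, and repeated updates move the
# net along a trajectory in state space until it settles ("at frame").

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

state = np.array([1, -1, -1, -1, 1, -1])  # a "stressed" starting state
for _ in range(10):
    new = np.sign(W @ state)
    new[new == 0] = 1
    if np.array_equal(new, state):        # at frame: inside an attractor basin
        break
    state = new

print(state)  # settles onto the nearest stored pattern
```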
My interpretation is somewhat different from Lamb’s. As I’ve indicated above, Lamb uses labels on his arcs where I use attractor nodes. He clearly thinks of his nodes, the logical connectors (which he calls nections, from connection), as collections of neurons, with the arcs being collections of axons. He offers thumbnail calculations of the number of neurons per nection (based on Mountcastle’s work on cortical micro-anatomy), and so forth. Thus he uses his notation in a more concrete way than I am proposing.
If one thinks about my proposal, however, one might wonder what the neurons in a homogeneous A-net are connected to, other than one another. For, as I have defined it, a homogeneous A-net corresponds to a single Lamb or-nection, nothing more. These patterns of neural activity don’t seem to go anywhere, except to frame. I thus introduce the notion of a partitioned net, where each partition has an or-nection A-net. Partitions whose neurons are interconnected thus influence one another’s states; there are dependencies among their attractors.
Partitioned Nets
An attractor network is said to be partitioned if its attractors fall into two or more sets such that an attractor from each set is required for the network as a whole to be at frame. Given the way the notion of an attractor is defined, this would seem to be an odd thing to happen; indeed, it would seem to be impossible by definition. We must remember, however, that real neural nets almost certainly have a small-world topology.
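One minimal way to state that frame condition (the encoding is mine): the net as a whole is at frame only when every partition has settled into one of its own attractors.

```python
# Partitioned A-net (my encoding): each partition contributes a set of
# attractors, and the whole net is at frame only if the current state
# assigns one attractor from every partition.

partitions = {
    "P1": {"a1", "a2", "a3"},   # or-nection over P1's attractors
    "P2": {"b1", "b2"},         # or-nection over P2's attractors
}

def at_frame(state):
    """state maps each partition name to its currently active attractor."""
    return all(state.get(p) in attrs for p, attrs in partitions.items())

print(at_frame({"P1": "a2", "P2": "b1"}))  # True: one attractor per partition
print(at_frame({"P1": "a2"}))              # False: P2 has not settled
```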
Every neuron is connected to every other neuron by at most a small number of links. Some neurons are connected to one another directly (order 1); obviously, these neurons will have a strong influence on one another’s states. Other neurons will be connected through a single intermediary, making two links between them (order 2); still others through two intermediaries (order 3); and so on. Partitioning might arise in situations where two or more sets of neurons are strongly connected within each set through connections of order N or less, while almost all connections between neurons in different sets are of order greater than N.
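A quick sketch of that order-N idea (the toy graph and the threshold are my invention): connection order is just shortest-path length, and two internally dense clusters joined by a single bridge keep intra-cluster orders low while cross-cluster orders run higher.

```python
from collections import deque

# Connection "order" = number of links on the shortest path between two
# neurons. Two clusters are densely wired internally and joined by one
# long-range link (2-3), so intra-cluster orders stay small while
# cross-cluster orders exceed them. Graph is a toy example of mine.

edges = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},   # cluster A
         3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}   # cluster B, bridged via 2-3

def order(a, b):
    """Shortest-path length (connection order) between neurons a and b."""
    seen, frontier, n = {a}, deque([a]), 0
    while frontier:
        n += 1
        for _ in range(len(frontier)):
            for nxt in edges[frontier.popleft()]:
                if nxt == b:
                    return n
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return None  # unreachable

print(order(0, 1), order(0, 5))  # 1 within cluster A; 3 across the bridge
```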