
Where the Turtles End: Materiality in a Historic Cybernetic Experiment

 

Kevin Hamilton

Associate Professor, School of Art and Design, University of Illinois, Urbana-Champaign

Email: [email protected]

Web: http://www.complexfields.org

 

Abstract

This paper addresses the question of materiality’s role in historical cybernetic approaches to consciousness and being through a close reading of the Adaptive Reorganizing Automaton (ARA), an experimental device created at the University of Illinois’s Biological Computer Lab in 1960. After examining the project’s aims in the context of peer research and influences, I describe how the ARA functioned within a specific scientific process or ‘experimental system.’ Based on both a survey of the field and a close reading of the experiment, I conclude that only a synthesis of competing approaches to materiality within the history of computation can account for the work’s life in the world. The paper also addresses various ethical, political, and social implications of rendering materiality more or less important to processes of being, knowing, and relating to others.

 

Keywords

Systems, science, epistemology, ontology, cybernetics, cognition, perception, order, memory, sensation, history, theory

 

Introduction

The value and function of materiality has long been a central point of contention for cybernetics, with many disciplinary turns hanging on new conceptions of physiology’s role in an organism’s processes of learning, remembering, and adapting.

Early work by McCulloch and Wiener launched the field by marrying a mathematical approach to signal analysis with an understanding of memory and perception as materially constituted in neural physiology. Later, the field’s first significant split—in which artificial intelligence emerged as its own field—saw a move away from the physical to the procedural by those who would also shape the future of cognitive science. After the split, those who remained within cybernetics would lean harder into materiality, celebrating the centrality of embodiment in ‘second order’ systems.

The question of materiality persists even in contemporary scholarship on cybernetics. Much recent scholarship and theory takes up the question of materiality’s role by posing ontology and epistemology as rivals for primacy in historical cybernetics. These same theoretical and historical efforts also endeavor to distinguish between the “dehumanizing” cybernetic processes of feedback and control at work in contemporary surveillance systems and finance, and the more “empowering” application of the same processes in cybernetic art, activism, or interactive media. The former, we’re often told, typically leave the physical and material behind, while the latter keep sensation in play as a physiological and informational process.

At stake throughout is a common question—is approaching perception, memory, and action in terms of information, feedback, and control fundamentally a support or a threat to organic life? If an organism can be described solely in terms of processes without regard to its material form, what is the significance of flesh?

The answer has proved elusive and divisive, not least because of the question’s limited scope. To describe an organism in terms of information flows is to bring a particular epistemological frame to the world. Discerning the social, political, or even moral implications of such a frame requires study of a broad range of a technology’s life in society, including not only the design processes behind such technologies, but their actual performance in unique social contexts. Such studies of cybernetic systems in recent years have yielded rich results, rescuing decades-long debates from fruitless abstraction.

By attending carefully to the material instantiation of cybernetic thought in spaces of science, manufacturing, art, and government, recent historians have shown the importance of accounting for the material in even the most utopian or Manichean pursuits of cybernetic ‘autopoiesis.’ [1] Likewise, epistemologists of science have in recent years emphasized the crucial role of actual experimental apparatuses, sketches, photographs, and models in the formation of new knowledge.

This ‘material turn’ in science studies thus offers a great deal to the sometimes circular question of whether a cybernetic understanding of consciousness accounts for the role of bodies in being and knowing. As a new literacy in feedback and control seems to be on the rise under the guise of algorithm studies, a new look at some old experiments seems all the more timely.

 

A Machine at the Heart of a Cybernetic Hub

For an example, let’s take one of the experimental cybernetic machines produced at the Biological Computer Laboratory (BCL) at the University of Illinois. Founded by Heinz von Foerster, and described by many as the birthplace of second-order cybernetics, the lab was home at one time to such central cybernetic figures as Gordon Pask, Ross Ashby, Humberto Maturana, and Francisco Varela. The Adaptive Reorganizing Automaton (ARA), constructed around 1959, was among a handful of efforts at the BCL to manifest the adaptive behavior of organisms through a functioning network of “artificial neurons.” Murray Babcock, who created the machine under the guidance of von Foerster, [2] described the experiment as “an attempt to construct an adaptive automaton whose internal structure has a similarity to living tissue.” [3] In his influential 1961 book An Approach to Cybernetics, Gordon Pask described the ARA as “by far the most advanced automaton” of its day—an example of a learning and evolutionary system based on emerging understanding of brain function. [4]

Though Babcock uses words like “model” and “illustration” to describe the aims and work of the ARA, the material mechanisms of his experiment are clearly not a mere instrumental means to an illustrative end. Throughout his report on the project, Babcock emphasizes that the system possesses both structural and functional aspects of nervous tissue. He spends a great deal of time discussing the electrical functions of neurons as plastic, contingent matter. His survey of other automata, such as Grey Walter’s Tortoise or Ashby’s Homeostat, focuses primarily on their ‘physiological’ systems, which he seems to view as importantly material, and not merely mathematical. In his survey of possible approaches to designing such a system, Babcock discards those that depend on storage, retrieval, and comparison of stimuli, opting instead for those that see adaptation emerge in the very structures through which information flows. He is also particularly fond of Hayek’s account of memory and the brain.

Though there is very little mention of the device or apparent use of it elsewhere in research (the machine itself was likely lost in a flood), [5] Babcock presented the project as a success in his doctoral thesis, which also served as a final technical report to the sponsoring agency, the Office of Naval Research. Because of this experiment, Babcock writes in his introduction, “a form of ‘biological computer’ is now available which is extremely flexible in its operation and potential connectivities.” [6] He intended the device to be used as a “predictive device” in the study of adaptive or self-organizing systems, so that “knowledge of various systems will be gained and eventually used to construct more elaborate systems.” [7]

Though the Biological Computer Lab created very few actual ‘biological computers’ (arguably only two in addition to the ARA over its nearly twenty-year history), it is quite likely that knowledge gained through constructing the ARA was important to the success of the lab’s next and largest project, an audio signal analyzer or ‘artificial ear.’ [8] The ARA has received very little scholarly attention, and deserves it as one of the very few cybernetic computers devised and presented in the context of scientific experiment, rather than in an applied setting such as art, commerce, or management. What follows is a somewhat simplified description of the machine’s function.

 

Adaptive Reorganizing Automaton: A Description

The ARA was a modular electrical system composed of three basic components, each repeated throughout the device. Two of these components are most important to a brief description of this complex machine.

The first of these was based on an understanding of neurons at that time. Each of these ‘artificial neurons’ emitted an electrical pulse at a regular interval, sent through an output to other components. Each component had multiple inputs, and a neuron’s pulse interval increased or decreased based on which input first received a pulse from another unit during a single firing cycle. So each time a component fired a pulse, that pulse raced out against other pulses to try to beat them to some other component within the network.

A second component that recurred throughout the modular system determined the winner of this race—a component that Babcock dubbed the “facilitator.” Any pulse that passed from one neuron component to another first passed through a facilitator component. Facilitators functioned somewhat like variable resistors—they simply let pulses pass through at varying speeds. The key here is that unlike the neuron components, rates of flow weren’t “reset” for each firing cycle, but rather remained at whatever rate they last acquired. Each facilitator acquired its rate either manually through the scientist’s adjustments or, more often, through an internal mathematical function created through circuitry. This function depended on two factors—the length of operation time for the component (since initial power-up), and the current rate at which pulses were entering the facilitator.

In this way, Babcock imagined the facilitators as a kind of memory for the unit, conveying pulses through the system based on the intervals of past pulses and past rates of facilitation. It also appears that, true to cybernetic form, Babcock at least sometimes configured the components so that one collection of “neurons” and “facilitators” functioned as the primary system while a second collection functioned as the “observing system.” The latter took its cues from the current state of the facilitators in the former, and in some way fed its state back into the primary system.
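To make the interaction of these two components concrete, the following is a minimal, speculative sketch in Python. Nothing here reproduces Babcock’s analog circuitry; the ring topology, update rules, and every constant are illustrative assumptions drawn only from the qualitative description above.

```python
"""A toy rendering of the ARA's two components: pulse-emitting 'neurons'
and 'sticky' facilitators. Purely illustrative; all rules and constants
are assumptions, not Babcock's circuit."""


class Facilitator:
    """Passes pulses between neurons at a persistent rate.

    The transit delay is never reset between firing cycles; it drifts
    according to the two factors Babcock names: total elapsed run time
    and the current incoming pulse rate. The update rule is invented.
    """

    def __init__(self, delay=4.0):
        self.delay = delay
        self.arrivals = []  # recent arrival times, for a rate estimate

    def accept(self, t):
        self.arrivals = [a for a in self.arrivals if t - a <= 20] + [t]
        rate = len(self.arrivals) / 20.0
        # Assumed rule: heavier recent traffic 'facilitates' (shrinks the
        # delay), with the effect scaled by how long the run has lasted.
        self.delay = max(1.0, self.delay - rate * (t / 200.0))
        return t + int(self.delay)  # arrival time at the far side


class Neuron:
    """Fires at a regular interval; the first input pulse to arrive in a
    cycle wins the race and nudges the interval up or down."""

    def __init__(self, idx, interval):
        self.idx = idx
        self.interval = interval
        self.next_fire = interval
        self.cycle_won = False

    def receive(self, src):
        if self.cycle_won:  # only the race winner counts this cycle
            return
        self.cycle_won = True
        # Assumed convention: a win by the next neuron in the ring speeds
        # this one up; a win by the other slows it down.
        delta = -1 if (src - self.idx) % 3 == 1 else 1
        self.interval = max(2, self.interval + delta)


neurons = [Neuron(0, 7), Neuron(1, 11), Neuron(2, 13)]
# Each ordered pair of neurons is linked through its own facilitator.
links = {(s, d): Facilitator(3.0 + s + d)
         for s in range(3) for d in range(3) if s != d}
in_flight = []  # pulses in transit: (arrival_time, dst, src)

for t in range(1, 1000):  # roughly the 10^3-second horizon of the reports
    for arrival, dst, src in [p for p in in_flight if p[0] == t]:
        neurons[dst].receive(src)
    in_flight = [p for p in in_flight if p[0] > t]
    for n in neurons:
        if t >= n.next_fire:
            for (s, d), fac in links.items():
                if s == n.idx:
                    in_flight.append((fac.accept(t), d, s))
            n.next_fire = t + n.interval
            n.cycle_won = False

print([n.interval for n in neurons])  # intervals after adaptation
```

Even in this toy form, the sticky facilitator delays mean that the network’s present behavior carries its past within its structure, rather than retrieving it from a separate store.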

As with so many cybernetic systems, such a description of an informational process can quickly become too theoretical, not grounded enough in the matter that Babcock so carefully describes. This mistake is best avoided by describing the experiment as performed in situ. Such an examination will follow after a brief look at Babcock’s contemporary sources and competitors.

 

Friedrich Hayek and the Mind

Babcock’s approach to memory, learning, and adaptation closely follows that of Friedrich Hayek, the influential economist and political theorist. Hayek’s The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology receives mention by Babcock as an influence in the design of the ARA. Though this book is arguably central to the economist’s theories, it is less well known, so some explanation is required.

In The Sensory Order, Hayek describes a theory of learning and consciousness in which external stimuli have a physiological impact on the brain; such outside events create new material “linkages” between the brain’s ganglion cells. In Hayek’s view, these linkages, rather than the ganglion cells themselves, make sensation and experience possible, facilitating impulses from cell to cell. [9] The linkages form in an adaptive manner and so the world begins to appear—and to appear sensibly—to an individual over time. A brain acquires more “memory” in the form of new linkages.

The process of learning and even of consciousness is to Hayek a kind of classification process in which the brain takes in new stimuli through the effects of old stimuli. If a “linkage” does not occur in the brain for at least some part of a new stimulus, the event cannot be perceived.
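Read as a mechanism, Hayek’s classification process lends itself to a simple illustration. The toy below is an interpretation rather than anything Hayek specified: linkages are treated as a growing map of co-occurrence among ‘ganglion cells,’ with perception gated on partial overlap with that map.

```python
"""A toy reading of Hayek's 'linkages': an interpretive sketch, not his
model. Linkages are co-occurrence bonds among cells; a stimulus is
perceived only if it partly overlaps the existing map."""
from collections import defaultdict
from itertools import combinations

linkages = defaultdict(int)  # (cell_a, cell_b) -> linkage strength


def perceive(stimulus):
    """Register a stimulus (the set of cells it excites). It registers
    only if some part of it already has a linkage; registering it then
    lays down new linkages, so old stimuli shape how new ones land."""
    pairs = set(combinations(sorted(stimulus), 2))
    if linkages and not any(p in linkages for p in pairs):
        return False  # nothing to receive it: the event is not perceived
    for p in pairs:
        linkages[p] += 1
    return True


print(perceive({"a", "b"}))       # True: the first event seeds the map
print(perceive({"c", "d"}))       # False: no existing linkage touches it
print(perceive({"a", "b", "c"}))  # True: overlap with (a, b) admits it
```

The gate captures Hayek’s claim in miniature: a wholly novel stimulus, touching no existing linkage, simply fails to register.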

Many portions of Hayek’s book are perfect descriptions of Babcock’s machine, as designed and described. Hayek’s “linkages” have their correlates in the ARA’s facilitators, even down to the extent to which Hayek imagines that such linkages take their physiological shape from both past and present stimuli—partly to reconcile the paradox of seeing the present through the past. Over time, the brain’s linkages form a distinctive “map,” while new stimuli traveling across this map create a “model” of the present external world.

Hayek is adamant about the material nature of mind, describing the order that constitutes consciousness as a sort of “sub-order” of the physical universe. “This order which we call mind,” Hayek writes in the book’s final chapter, “is thus the order prevailing in a particular part of the physical universe—that part of it which is ourselves.” [10] Yet ultimately, Hayek does not see this order as knowable:

What could be regarded as the ‘physical aspect’ of this […] entity would not be the individual neural processes but only the complete order of all these processes; but this order, if we knew it in full, would then not be another aspect of what we know as mind, but would be mind itself. [11]

Hayek thus sees something of a challenge to representation in the task of understanding the mind, and ultimately he sees no potential for ‘knowing’ the mind, but only for creating a model of it—which is just what Babcock did. The ARA models the mind much as Hayek’s mind models the world—not through representation, but through analogy.

 

From One Mind to Many

Hayek’s foil here is behaviorism, and most likely physicalism as well. [12] The Sensory Order, which takes on behaviorism, is based on an early article that argued with physicalism. The two schools shared a belief that the more scientific path to understanding an ordered world lies in identifying the indisputable connections between cause and effect, stimulus and response.

Given the influence of Hayek’s work on twentieth-century economics, it is worth noting that the physicalists saw reason to apply their belief to governance and social organization as well. Where Hayek hung his economic theories on a belief in the emergent and unpredictable abstractions of price and value, the physicalists sought to build economies on statistically demonstrable need. Otto Neurath, a leading proponent of physicalism and a rival of Hayek’s mentor, Ludwig von Mises, [13] even designed a universal visual language called Isotype to help reduce such statistical questions to visually indisputable conclusions.

While the physicalists sought to understand and shape material and political life in terms of rationalist cause and effect, their antecedent behaviorists sought to explain individual human action in similar terms. Both Hayek and the behaviorists shared an interest in how external stimuli affected the brain, but Hayek felt their approach naïve for how it took the physical world for granted. [14] Behaviorist psychologists, in Hayek’s view, overlooked the basic problem of subjectivity in their quest to identify which parts of the brain responded to which objects in the world. Hayek’s model of mind and learning not only foregrounded adaptation and process over objective correlation, but sought to include interpretation as a vital part of consciousness—through his theory of the process of “linkage” formation.

So in Hayek we have an approach that is, like that of behaviorism and physicalism, adamant about the basis of thought in material processes. Unlike the behaviorists and physicalists, however, Hayek pursues interpretation as central to the process of consciousness. Somewhat unexpectedly, this emphasis on interpretation does not translate to a confidence in representation. Hayek largely eschews representation, whereas in a figure like Neurath we see the possibility of images playing a central role.

From Hayek’s perspective, no explanation can offer full knowledge of a mind’s process without essentially becoming that process. Instead, a model of that process will inevitably and solely be an analog to that process, just as the brain models the external world without reducing it to a perfect correlation. The relationships between the parts of a brain correlate to the relationships between objects in the world, but more because both realms hold to the same principles of movement and emergent order. One can find no direct correlate between some relationship in the world and a relationship between two parts of the brain.

For Hayek, mind is to the world as representation is to thought. In both cases, correlations fail and analogies hold the day, even as materiality is still the predominant order of the day (as opposed to spiritualist or metaphysical accounts). In this way, he shares much in common with his contemporaries in the arts who sought to manifest in material form the inner workings of the world without claiming to represent that world. As a fairly direct interpretation of Hayek’s work, the ARA thus holds as much or more in common with the abstract paintings of Mondrian, Malevich, or Kandinsky as with precedents in the history of technological automata.

 

Another Way in Artificial Intelligence

The ARA resonates even more with Hayek’s theories when viewed in contrast to approaches by peers. In a 1958 article that for many helped initiate the field of Artificial Intelligence as distinct from cybernetics, authors Newell, Shaw, and Simon seemed to share more confidence than Hayek in the possibility of describing the mind through explanation or representation.

In that article, “Elements of a Theory of Human Problem Solving,” they ascribe great explanatory potential to something called “the program” in their conception of information processing by organisms. Such organisms, they argue, depend on three basic ingredients: memories; “primitive information processes” to act on those memories in predictable ways; and a “program” or set of rules for combining such processes into a larger process of learning or consciousness. [15]

“An explanation of an observed behavior of the organism,” they wrote, “is provided by a program of primitive information processes that generates this behavior.” [16] In their view, a program—something with no inherent physical basis—was an explanation of thought. Hayek believed, in contrast, that “to provide a full explanation of even one particular mental process, such an explanation would have to run entirely in physical terms.” [17]
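The contrast is easiest to see schematically. The sketch below is a cartoon of the three ingredients as summarized here, not a reconstruction of Newell, Shaw, and Simon’s actual systems (which were symbolic programs such as the Logic Theorist); every name in it is illustrative.

```python
# Memories: the symbol structures that the processes act upon.
memories = {"goal": 7, "candidates": [3, 4, 9, 7]}


def compare(a, b):
    """A 'primitive information process': acts on memories predictably."""
    return a == b


def program(mem):
    """The 'program': a rule for combining primitive processes. On the
    view under discussion, this specification, with no inherent physical
    basis, is itself the explanation of the behavior it generates."""
    for candidate in mem["candidates"]:
        if compare(candidate, mem["goal"]):
            return candidate
    return None


print(program(memories))  # 7, on any substrate that can run the program
```

Where the ARA’s behavior lives in the drifting state of its facilitators, everything here lives in the rule; the substrate that executes it is, on this account, beside the point.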

Newell and company even went so far as to dismiss such contemporary efforts as Babcock’s in one passage of their influential article:

We wish to emphasize that we are not using the computer as a crude analogy to human behavior—we are not comparing computer structures with brains, nor electrical relays with synapses. Our position is that the appropriate way to describe a piece of problem-solving behavior is in terms of a program: a specification of what the organism will do under varying environmental circumstances in terms of certain elementary information processes it is capable of performing. This assertion has nothing to do—directly—with computers. Such programs could be written […] if computers had never existed. A program is no more, and no less, an analogy to the behavior of an organism than is a differential equation to the behavior of the electrical circuit it describes. [18]

The authors go on to differentiate their approach from those that rely on “traces” from past stimuli to create order, opting instead for the concept of “storage,” which they see as less passive, less molded by the world. By building and storing possible procedures with which to handle new stimuli, the system in Newell and Simon’s view enacts a more active response to stimuli. Here they have, like Hayek, the behaviorists in view, with a critique of their over-reliance on objective stimulus-response relationships. But for these budding cognitive scientists, a neuronal view in which the physiology of the brain is altered in any way by the outside world is still too behaviorist, too bound to causation. They sought instead to render the processes of thought independent of physiology, as formulae and mathematics, or as software to the hardware of biology.

Representation is thus possible for this strain of research into thought and mind, if only as a form of reduction or equivalency that Hayek eschewed. If Hayek’s approach to mind and representation, like that of Babcock’s ARA, saw close cousins in modernist abstraction, representation for Newell and Simon anticipates the role of representation in the later conceptual art of Sol LeWitt, Mel Bochner, or Joseph Kosuth. In such work, translation and equivalency figure more prominently than materiality, reduction is more important than abstraction, and comprehension reigns over interpretation or even perception. As in conceptual art, Newell and Simon seem more ambivalent about materiality, even as they see no cause for explanation in any other source.

 

Troubling Legacies

In light of the work of Babcock and Hayek, the work of these early cognitive scientists seems more instrumental, less attentive to the significance of bodies in knowledge. Certainly their approach to understanding consciousness as software left their legacy open to ample criticism as dehumanizing, and their early preoccupation with active rather than passive responses to stimuli bears the marks of a masculinist epistemology.

Yet Hayek’s reticence about representation provides some cause for concern as well, and remains in tension with his emphasis on interpretation. As with abstraction in early modern art, there lies in his work a shade of iconoclasm, or even a fatalism about the possibility of models and worlds reflecting any sort of objective correlation to their referents. A radical subjectivity—indeed, a radical constructivist epistemology—hangs close by, wherein the individual human reigns supreme, and the collective struggles to find a hold. If the connections that constitute consciousness reflect in their order the same connections that exist among objects in the world, then we might ask whether everything is conscious or nothing is at all.

More significantly, Hayek’s history of applying his approach to cognition and emergent order to social and economic organization looms large for many. The effects of free market economics seem far from humane, and of a piece with his view of emergent order in consciousness.

In conception and application, both approaches seem in tension with some aspect of life. Historians of technology have shown that platforms don’t dictate politics, and both approaches have also found some application in the flourishing of life. But in light of such seemingly key moments of disagreement and departure as that of artificial intelligence’s split from cybernetics in the early sixties, how are we to understand the stakes and implications, the connection of conception to application? A closer look at the ARA in light of recent scholarship on cybernetics and scientific method can help us through the mire of potentially essentialist arguments about the political nature of scientific frames.

 

Reading Epistemology Out of the Picture

In his 2010 book The Cybernetic Brain, Andrew Pickering makes a case for understanding the lineage of machines to which the ARA belongs in terms of ontology, rather than epistemology. [19] In comparing cybernetics to artificial intelligence, he describes Babcock’s branch of cybernetics as primarily preoccupied with performance, where approaches such as that of Newell and Simon were more concerned with representation.

Pickering doesn’t write about the ARA, but does write at length about two devices that Babcock describes as major influences and precedents—Grey Walter’s Tortoise and Ross Ashby’s Homeostat. Ontology and performance surface in the book’s analysis based in part on careful observation of how these devices function temporally. As outlined above for the ARA, the Tortoise and the Homeostat are very much focused on the present. Unlike Newell and Simon’s conception of deliberative action based on stored routines, cybernetic machines change physiologically as they receive stimuli. The “facilitator” components in the ARA do this both through retaining their last “known” level of resistance, and through monitoring rates of new input. They construct their present not based on retrieval of past states; rather, as in Hayek’s model, their past state is still present in their physiological condition.

As Pickering puts it, “cybernetics stages for us a vision not of a world characterized by graspable causes, but rather one in which reality is always ‘in the making.’” [20] Performance does indeed play a strong role here, and even against representation, which as we have seen, Hayek eschewed. Pickering describes performance in this branch of cybernetics as “prior to representation,” in that the machines take no “cognitive detour” through an externalized description as part of action. To borrow from Lucy Suchman’s terminology, Pickering looks at devices such as the Tortoise or the ARA as possessing no plan distinguishable from action. [21]

Pickering sees these devices as going against the grain of much modern science—especially as manifest in the likes of Simon and Newell—which he sees as typically focused on representation rather than performance. As such, “modern science” in Pickering’s view works to create difference, separation from the world—and ultimately, control.

It is true that research after Newell and Simon in cognitive science eventually turned to the study of intention as a marker of consciousness, where intent is understood as preceding action. Intention or purpose in such cases begins to assume the quality of an atemporal or a priori representation. Against this line of work, Pickering describes the quintessentially performative machines of cybernetics as “goal-seeking” rather than goal-forming.

But why does epistemology need to fall so far out of this picture—so much so that when it comes to the work of Ashby, Pickering draws an explicit line about which work he will support? Here Pickering seems to bring a slant so strong that he takes a failed experiment of Ashby’s as evidence of his argument, when Babcock’s more successful project might have helped him tell a different story. Pickering seems to take this route as a way to navigate the problematic legacies and stakes of cybernetics—namely, how cybernetic approaches to knowledge and self-organization seem to lend themselves so well to the organization of social control.

 

Seizing on a Failed Experiment

The Ashby project in question here is one that anticipated the ARA by a few years—a device known as the Dispersive and Multistable System (DAMS). Much like the ARA, this device consisted of a network of possible pathways for electrical pulses. Instead of “facilitator circuits,” the DAMS possessed neon lamps that would conduct electricity only when certain voltages were applied. Theoretically, current might then flow through the network in different stochastic patterns that would adapt over time.

According to Ashby’s records—at least in Pickering’s account—the device never worked. Pickering explains this failure in terms of Ashby’s struggles with finding useful inputs for the device—a problem that Babcock seems to have addressed through his “artificial neurons.” The DAMS seems to have functioned properly as a manifestation of how connections develop, a demonstration of stochastic order, but Ashby could not establish any ground rules for the process in the service of science.

Pickering seizes on a journal entry in which Ashby explains the problem in terms of an engineering project that has no blueprint, where perhaps the design must evolve through as emergent and evolutionary a process as the functioning system itself. Pickering represents this as Ashby’s honest confrontation with cybernetics’ “nonmodern” or pre-representational nature, in which a living process not only cannot be described, but cannot even be anticipated as a plan or blueprint, a set of knowable conditions.

In his conclusion to The Cybernetic Brain, Pickering differentiates between a “classical” view of control, in which power “flows in just one direction in the form of instruction for action,” and control as he sees it at work in cybernetics’ “ontology of unknowability and becoming.” Cybernetics, he argues, “never imagined that the classical mode of control was in fact possible.” [22]

Pickering is likely looking to distance cybernetics from some ready legacies in contemporary technologies of subjugation, from finance to surveillance or autonomous warfare. In his view, to accept such devices as the ARA in terms of epistemology as much as ontology is to be content not only with a “black box” approach to being—which Pickering accepts as reasonable—but also a “black box” approach to knowing, in which intention and representation are wholly separable from action and knowing. Such an approach would render materiality second to knowledge, and knowledge as a form of subjugation.

Pickering’s articulation of the unique temporalities at work in such devices as the Tortoise or the Homeostat is of great help to understanding how they model being and consciousness. However, in order to keep these processes in the service of the material world, as opposed to metaphysically denying or transcending it, he has to take the scientist out of science. To be sure, Pickering bases his story of the DAMS failure on Ashby’s own assessment, and his description of the performative, ontologically-inclined nature of these machines is apt. But to claim the DAMS failure as the fullest realization of cybernetics’ approach to science is to discredit work that is disciplinary, scientific, and, importantly, material even in its life as science.

A closer look at Babcock’s ARA might have yielded a different conclusion, but also would have required that Pickering accept more of cybernetics’ legacy in technologies of control. To wrestle with the latter is to struggle with the question of how a system that celebrates materiality as a basis for life might ultimately lead to approaches that deny it.

 

A Broader View of the ARA

Babcock’s approach to memory as a physiological phenomenon resulted in a storage and retrieval process very much “performed” in the present, without the clear separation of past and present enacted by Newell and Simon’s model. That said, other significant time scales figure in the functioning of the ARA. As an experiment, the machine’s “performance” of memory took place within a larger context of scientific process, routines of lab and labor that also bear examination.

As described above, the key element in ARA’s approach to memory is the “facilitator” component, which allows pulses to pass from ‘neuron’ to ‘neuron’ according to a calculated rate. This rate is sticky, in that change always happens in relation to its current state, rather than being set back to a base state each time.

Just as important to the facilitator’s function as memory is that the component determines new changes to ‘rate of flow’ based on both short-term and long-term pasts. For the short-term, the component checks the rate at which pulses are currently reaching the device. For the long-term reference, the component checks how long the whole experiment has been running. This latter aspect sets a periodicity for the whole experiment—the ARA can only run for as long a duration as the components are able to measure. Babcock states that the maximum duration for the machine’s operation is likely around 20 minutes; charts in his final report are based on a maximum of 10³ seconds, or around 16 minutes. [23]

Considered in this light, the operation of the machine actually depends on at least two nested approaches to memory: that of the machine’s internal system, and that of the machine as an experiment embedded in a larger memory project, the scientific process. In this latter frame, the device’s inputs and outputs come more clearly into view, as do Babcock’s successes where the DAMS project failed. The ‘blueprint’ version of the ARA does predictably embody an approach to memory as imagined by Hayek, a performed ontology of becoming. It does so, however, only because it is situated within a larger process that brings its own distinct approach to materiality and knowledge.

Conceived as a system, the ARA functioned as follows: Babcock would create some initial settings and configurations for the device and then run the unit, making notes or taking photographs based on output in the form of indicator lights or meters. From these artifacts, he would create graphs to mark changes in the internal workings of the device over time. These graphs captured some aspect of the network’s functions or potential functions, and also informed each new configuration of the device’s settings for subsequent tests.

Babcock’s goals for the project revolved around the prediction of unpredictability. He sought to design a device that could be used to study the emergence and organization of life, a condition that inevitably involves stimuli. It was not consequential for the experiment where the stimuli came from, or what connection they had to the world. Rather, the experiment’s success hung on whether he was able to say with some specificity which blueprint of initial components might be most conducive to the emergence of order in material form. The blueprint existed in the design of the machine’s components, and the machine’s configurations for each experiment cycle represented something of a performance of this blueprint. The scientist’s records of each configuration and outcome functioned as his short-term memory, and his final publication in thesis form functioned as the long-term memory and a verification of the design’s viability.
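Viewed as a loop, this process can be set out schematically. The sketch below is an assumed rendering of the cycle just described, not Babcock’s procedure; the stand-in machine and every function name are hypothetical.

```python
import random


class ToyARA:
    """A stand-in for the machine: one facilitator-like state variable
    that drifts as the unit runs. No relation to the real circuit."""

    def __init__(self, setting):
        self.rate = setting

    def step(self):
        self.rate += random.uniform(-0.1, 0.1)  # drift stands in for adaptation

    def read_indicator(self):
        return self.rate  # the lights and meters, reduced to one number


def run_cycle(setting, duration=100):
    """One experimental run: configure, let the unit perform, record."""
    machine = ToyARA(setting)
    readings = []
    for _ in range(duration):
        machine.step()
        readings.append(machine.read_indicator())
    return readings


records = []   # notes, photographs, graphs: the scientist's short-term memory
setting = 1.0  # a hypothetical initial configuration
for trial in range(5):
    readings = run_cycle(setting)
    chart = sum(readings) / len(readings)  # a 'graph' reduced to a summary
    records.append((setting, chart))
    setting = chart  # each chart informs the next configuration
print(records)  # the published thesis: long-term memory, usable by others
```

The point of the sketch is the feedback through the records: no single run demonstrates anything, but each run’s chart becomes the input to the next configuration, and the accumulated records become the repeatable, shareable product.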

Let’s look at this process within the frames of Hayek and of Newell and Simon. Considering the ARA’s individual outputs—via indicator lights or meters—as the “pulses” moving through the system of Babcock’s experiment, how does order or knowledge emerge in the lab? Is it ultimately a matter of matter, the physiology of the lab? Or is there an immaterial program at work in this process?

Babcock’s notes, oscilloscope photographs, and interstitial graphs created as part of each new round of tests in some ways function as Hayek’s “linkages.” Each new chart is meaningful only in relation to previous charts, and provides Babcock with the basis for new configurations for the machine. New outputs from the ARA find their meaning and context in these charts; however, the charts themselves are not the whole material basis of the experiment’s emergent order. They facilitate flow of information between the device and the experimenter, who then reconfigures the machine to enact a new process. In this way, the ARA and Babcock himself function as two different sorts of ‘neurons’ within the system, firing in response to stimuli.

No single configuration of the ARA, no one setting of initial values or linking of sub-circuits, seems to demonstrate the viability of the device as an environment for the emergence of living order. Rather, it is only through multiple cycles and observations that the experiment’s blueprint proves true, because the blueprint is for a device in which multiple different sorts of processes could develop—much like the brain (at least in Hayek’s view).

Babcock writes near the conclusion of his report:

…it is seen that the total number of systems which can be studied with even this limited automaton […] is extremely large. Thus it is fairly obvious that no exhaustive treatment of all the systems is possible nor is it the purpose here to present such a study. [24]

Babcock does distinguish between the functions of these potential systems (filtering, counting, switching, etc.) and the systems themselves, but ultimately doesn’t see one as reducible to the other:

…it is apparent that when the circuit interconnections become great enough and the environment complex enough, then the circuits can only be analyzed and predictively studied by means of this or a similar automaton. [25]

Such a conclusion is much more in the spirit of Hayek or other cybernetic work than that of Newell and Simon, and it affirms the role of materiality in adaptation and living order. That said, the experiment as a whole conforms to Newell and Simon’s approach to knowledge, in that once complete, the entire work does exist as a sort of program to be repeated. Working from Babcock’s thesis, one could in fact reconstruct the machine to create the conditions for emergent material order. [26] In terms of science and record, the whole of Babcock’s experiment could be, and has been, stored as a collection of routines, ready to be called upon again in response to future stimuli.

We could not describe the actual ARA in Newell and Simon’s terms even if we tried, because they called for a computer that serves as a site for the observation of processes and deduction of properties, resulting in an externalized program. To do so for any one of the ARA’s possible configurations is either to merely reconstruct that configuration or to reduce it to its outcomes. (The latter would be irrelevant, given that the ARA’s outcomes were often less unique than the process of reaching them.)

However, Newell and Simon’s description of a computer’s function within science applies fairly well to the actual scientific process within which the ARA performed. If by “program” we do not refer to the ARA’s specific computational processes, but rather to the construction of an experiment within which organism-like behavior can emerge in material form, then Newell and Simon’s language is more apt. From the conclusion to Newell and Simon’s article, making some substitutions for the word “computer”:

First, [Babcock’s experiment] provides us with a [process] capable of realizing programs, and hence of actually determining what behavior is implied by a program under various environmental conditions. Second, a program is a very concrete specification of the processes, and permits us to see whether they are sufficient to produce the phenomena. [27]

The difference between describing the ARA and describing the process within which the ARA operated is more than a difference of scale from micro to macro. Such a move represents a shift in emphasis from ontology—how a being establishes itself as a discrete event or process—to epistemology, or how knowledge results from this process.

Crucially though, the knowledge that results in this case is knowledge not solely for the process or entity in question—but for others. In at least this case, the key questions of epistemology regard not what the machine can know, nor even what the scientist can know, but what others can know as a result of this process.

In moving from asking questions about order and organization in a single body to the same questions for a group of bodies, different accounts or theories may in fact serve. In at least the case of Babcock’s experiment, Hayek and cybernetics are best suited for describing a single body’s process of adaptation and order, where Newell and Simon’s less materialist approach is well suited for describing how order emerges for a group—for others who might enter into a conversation with Babcock’s experiment.

These two perspectives on the ARA are of course inextricably linked, as the machine’s process and the broader process of the lab require one another. Materiality is essential to the entire process, but plays different roles within it. Without emergence of order at the material level, nothing would exist to pass between bodies. But to pass between finite bodies, order needs to be conveyable without matter as a form of program, or discrete collection of routines, actions, and performances.

This is where Ashby’s DAMS experiment failed. The failure to achieve a blueprint-based design in that case did not represent a threat to being, but rather a threat to being with others.

 

Figure 1 - From BCL/IGB, Kevin Hamilton and Skot Wiedmann, 2011. Document of an in-process re-enactment of Babcock’s experiment as part of BCL/IGB, a larger artwork on the legacy of the Biological Computer Laboratory. © Kevin Hamilton and Skot Wiedmann, 2011. Used with permission.

Conclusion

As alluded to briefly above, both Hayek’s approach and the more common programmatic approach represented by Newell and Simon have led to some troubling applications in the organization of social and political order. Though they both contend with behaviorism, and by extension totalitarianism, both risk dehumanization when applied across an analysis of the ontological and epistemological spheres. The programmatic approach risks the sacrifice of materiality to an assertion of fast finitude, while the cybernetic approach risks the loss of finitude in its assertion of materiality.

By removing the material from its central role in being, the programmatic approach on its own leaves only a nested series of reducible programs, multiplying like endless matryoshka dolls. The resemblance here of Newell and Simon’s program approach to the behavior of inheritance in object-oriented programming is no accident, given the history of computation. And as Galloway points out in his critique of object-oriented ontology, “object-oriented computer languages not only structure business but also influence the logic of identifying, capturing, and mediating bodies and objects more generally.” [28] A world of production based on not just transferable but reducible properties, in Galloway’s view, stands in danger of ignoring not only materiality’s role in being, but also the painfully material actions that have ordered injustice throughout history.

On the other hand, as an account not only of being, but of being with others, the cybernetic approach condemns subjects to a confusing vacillation between isolation and immanence. The self-organized subject, as seen in Hayek and as manifested in Babcock’s experiment, reflects her world only through analogy. She is dependent on the outside world for her own becoming, even as she bears only an analogical relationship to that world, with no way to verify the connection between internal and external events. Though this view facilitates a mode of becoming that is vitally performed and unrepeatable, and therefore distinct and valuable, it falters as an account of relationality.

The relational cybernetic subject is either stranded in subjectivity, as in Gordon Pask’s famous diagram of the “Man in the Bowler Hat,” or inescapably bound up in a web of others’ becoming. In the latter, as in many instances of aesthetic modernism, particularities eventually fade away in service of grand abstractions, the principles and processes that surpass all difference. Such a picture is equally descriptive of the most spiritualist dreams of modern art and the most utopian aspirations of Hayekian economics, in which collective, adaptive processes such as price and value surpass all other concerns in the facilitation of a free market.

This paper has argued that the best way to understand the contributions of Murray Babcock’s Adaptive Reorganizing Automaton, one of the few machines completed by the Biological Computer Lab, [29] is to draw only in part from cybernetic theory. Babcock’s device may have been founded on a cybernetic understanding of emergent order, but it ultimately called for interaction and expansion on more programmatic terms.

Within the history and philosophy of science, this argument is not so novel. Hans-Jörg Rheinberger has described science in very much this way, through his account of the “experimental system” as science’s basic unit. [30] In his view, instances of such a system must ultimately be capable of generating differences through reproduction—as demonstrated in the repeatability of the ARA as an experiment, or in the failure of Ashby’s DAMS. At the heart of this process, in Rheinberger’s view, is an essentially material, epistemic process in which the experimenter produces representations or “graphemes,” material interpretations of the world that also serve as traces of action. Out of these, the scientist forms his model, not first based on comparison of the model to the world, but based on comparison of representations within the space of scientific process. The model emerges as real in a crucially material, and representational, manner.

In the context of debates over the social and political legacies of cybernetics and its sibling science, programmatic computation, this approach offers some new perspectives. Besides allowing for—and indeed insisting upon—a heterogeneous variety of approaches to materiality’s role within the emergence of order, life, and knowledge, this approach also productively inverts some time-worn questions. Instead of framing science as a quest for the right blueprints, Rheinberger understands science as in dialogue with—and through—material representations. These dialogues, when successful, produce not blueprints but records of exchange for use in creating new dialogues. Quoting François Jacob, Rheinberger describes this process as a “machine for making the future.” [31] Much as Geoff Bowker describes the material dimension of scientific labor, [32] here we see material work leading to new knowledge, rather than disembodied knowledge in plan form leading to new material labor.

Finally, this approach expands some nagging questions surrounding epistemology in first and second-order cybernetics. It asks “what can be known” from an experiment not only by the experimenter or the experimental apparatus, but by others who enter into conversation with the scientist’s material products. As a hybrid analysis, it does this while avoiding the endless recursivity offered by either radical constructivism’s immanentism or object-oriented ontology’s Manichean chains of inheritance.

There is, to borrow from a popular phrase, a bottom to the turtles. At the bottom of the question of order and life in this analysis is the material act, the trace made in relation to other traces not based on an inherent, non-hierarchical network of relationships, but on a move toward understanding, sharing, and starting anew.

 

References and Notes

[1] Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (Cambridge, MA: MIT Press, 2011).

[2] Though von Foerster was Babcock’s supervisor on this project, and likely developed the idea in an earlier project, the sole document of the work is listed in Babcock’s name, and so I will refer to the experiment as Babcock’s.

[3] Murray Lewis Babcock, Reorganization by Adaptive Automation (Urbana, IL: Electrical Engineering Research Laboratory, Engineering Experiment Station, University of Illinois, 1960), i.

[4] Gordon Pask, An Approach to Cybernetics (London: Hutchinson, 1961), 82.

[5] According to conversations with Paul Weston and James Hutchinson.

[6] Murray Lewis Babcock, Reorganization by Adaptive Automation, 124.

[7] Ibid., 125.

[8] Peter Asaro, “Heinz von Foerster and the Bio-Computing Movements of the 1960s,” in An Unfinished Revolution? Heinz von Foerster and the Biological Computer Laboratory | BCL 1958-1976, eds. Albert Müller and Karl H. Müller (Vienna: Edition Echoraum, 2007).

[9] Bruce Caldwell, “Some Reflections on F.A. Hayek’s The Sensory Order,” Journal of Bioeconomics 6, no. 3 (2004): 239-254.

[10] Friedrich A. von Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (Chicago: University of Chicago Press, 1952), 178.

[11] Ibid., 178.

[12] Bruce Caldwell, “Some Reflections on F.A. Hayek’s The Sensory Order.”

[13] Ibid.

[14] Friedrich A. von Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, 25.

[15] Allen Newell, J. C. Shaw, and Herbert A. Simon, “Elements of a Theory of Human Problem Solving,” Psychological Review 65, no. 3 (1958): 151-166.

[16] Allen Newell, J. C. Shaw and Herbert A. Simon, “Elements of a Theory of Human Problem Solving.”

[17] Friedrich A. von Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, 190.

[18] Allen Newell, J. C. Shaw, and Herbert A. Simon, “Elements of a Theory of Human Problem Solving,” 153.

[19] Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (Chicago and London: University of Chicago Press, 2010).

[20] Ibid., 19.

[21] Lucy Suchman, Human-Machine Reconfigurations: Plans and Situated Actions, 2nd ed. (Cambridge and New York: Cambridge University Press, 2007).

[22] Andrew Pickering, The Cybernetic Brain: Sketches of Another Future, 383.

[23] Murray Lewis Babcock, Reorganization by Adaptive Automation, 109.

[24] Ibid., 110.

[25] Ibid., 113.

[26] In fact, the author is currently engaged in just this activity, with great success, in collaboration with Skot Wiedmann, whose expertise also made this paper possible.

[27] Allen Newell, J. C. Shaw, and Herbert A. Simon, “Elements of a Theory of Human Problem Solving,” 166.

[28] Alexander R. Galloway, “The Poverty of Philosophy: Realism and Post-Fordism,” Critical Inquiry 39, no. 2 (2013): 347-366.

[29] Peter Asaro, “Heinz von Foerster and the Bio-Computing Movements of the 1960s.”

[30] Hans-Jörg Rheinberger, “Experimental Systems, Graphematic Spaces,” in Inscribing Science: Scientific Texts and the Materiality of Communication, ed. Timothy Lenoir (Stanford: Stanford University Press, 1998), 285-303.

[31] Ibid., 288.

[32] Geoffrey C. Bowker, Memory Practices in the Sciences (Cambridge, MA: MIT Press, 2005).

 

Author Biography

Kevin Hamilton is a Professor at the University of Illinois, Urbana-Champaign, where he holds appointments in the School of Art and Design and the program in Media and Cinema Studies. His work as an artist and scholar has earned support from the National Science Foundation, the National Endowment for the Humanities, and the Illinois Arts Council. Kevin works in a variety of disciplinary settings, with publications on interdisciplinary research methodologies and bias in algorithmic systems, and commissioned or exhibited artworks on the histories of cybernetics, race, and landscape. His book on the role of film in American nuclear weapons testing and policy, co-authored with Ned O’Gorman, will be published by Dartmouth College Press in Fall 2018.
