The Sea Herself Fashions the Boats: Agency & Technology

“Every boat is copied from another boat. . . Let’s reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages, and thus never be copied. . . One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others”
— Émile Chartier, Propos d’un Normand

“The relation between the controller and the controlled is reciprocal. The scientist in the laboratory, studying the behavior of a pigeon, designs contingencies and observes their effects. His apparatus exerts a conspicuous control on the pigeon, but we must not overlook the control exerted by the pigeon. The behavior of the pigeon has determined the design of the apparatus and the procedures in which it is used”
—B.F. Skinner, Beyond Freedom & Dignity

“Human artisans would be pictured as tapping into the resources of self-organizing processes in order to create particular lineages of technology. The robot historian would see a gunsmith, for instance, as “tracking” those critical points in metals and explosives, and channeling the processes that are spontaneously set into motion to form a particular weapon technology”
—Manuel DeLanda, War in the Age of Intelligent Machines

Understanding the relationship between humanity and technology is fraught with pitfalls. In our attempts to escape from the beguiling but untenably anthropocentric idea that technology is wholly subservient to human reason and will, we are liable to slip into various forms of vitalism and hyperbole. Trying to move towards a more balanced view of causality and power leads Kevin Kelly to talk about “what technology wants” and Marshall McLuhan to call humans “the sex organs of the machine world”.

The more nuanced details of these thinkers’ beliefs notwithstanding, such wild-sounding pronouncements must be part of the reason why an anthropocentric view of technology persists. To develop a post-anthropocentric view of technology and technological innovation that is both tenable and rigorous, overcoming our attachment to outdated, naïvely humanist alternatives, we need to examine the ways in which attempts to develop such a view have succeeded and failed so far, and incorporate what we find into a new, richer understanding.

***

It often seems like the excessive agency and autonomy being denied in the case against the anthropocentric view just gets transplanted into the allegedly post-anthropocentric alternatives. For example, when Nick Land describes capitalism as “an inva­sion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources”, this is not really a cybernetic systems-level view, but a transplantation of god-like agency from humanity into capitalism (certainly the word “space” is doing a lot of work here to try and hide that). It’s clear a different tack is required.

Equally, the Émile Chartier quotation above, translated by Deborah Rogers in her and Paul Ehrlich’s paper “Natural Selection and Cultural Rates of Change”, makes an observation that is at once insightful but unsatisfying. Clearly it is not the case that the sea “fashions the boats”; the sea, to a certain extent, acts as a selection mechanism, but a human mind capable of interpreting and responding to the information provided by sinking boats is also crucial. And further, the centrality of the intention on the part of the humans who built the boats to achieve a certain goal (i.e. to float across water) seems hard to sweep away.
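Though Chartier’s image overstates the sea’s role, the variation-and-selection dynamic he gestures at is easy to make concrete. The following toy simulation (all numbers are invented purely for illustration) treats each boat design as a single “seaworthiness” score, lets the “sea” sink the worse half of the fleet each generation, and has shipwrights copy the survivors imperfectly:

```python
import random

def simulate(generations=50, fleet_size=20, seed=0):
    rng = random.Random(seed)
    # Each boat is represented by a single "seaworthiness" score in [0, 1].
    fleet = [rng.uniform(0.0, 1.0) for _ in range(fleet_size)]
    for _ in range(generations):
        # The sea "chooses": only the better half survives its voyages.
        survivors = sorted(fleet, reverse=True)[:fleet_size // 2]
        # Shipwrights copy surviving designs, imperfectly (small Gaussian noise).
        fleet = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0.0, 0.05)))
                 for _ in range(fleet_size)]
    return sum(fleet) / fleet_size

print(round(simulate(), 2))
```

Even with blind copying and a crude cull, mean seaworthiness ratchets upward over the generations. But note what the sketch quietly assumes: a copying mechanism (here, the shipwright) that the sea alone cannot supply, which is exactly the asymmetry discussed above.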

We can try to transcend this intentional focus by exploring more deeply the causal forces surrounding human intentions. Indeed, provided we reject the notion of humans as a god-like source of will-implementation, the human mind and body can easily be seen as a kind of conduit: various forces—cultural, genetic, ecological etc.—drive the evolution of technologies like the boat; the human being could be seen as just one node in a network of resources and processors that are required for more advanced boats to be brought to fruition. Nonetheless, the fact that a human wanted to build a boat and the boat did not “want” to be built still entrenches a salient asymmetry.

***

While a more systemic, reciprocal view of causality in innovation is obviously more correct from a materialist perspective, it also seems more opaque. Just saying “relations are complex and reciprocal” is not especially useful. Marx and Engels repeatedly encountered this problem in their materialist description of socio-economic systems: since it would be silly to flatly assert that the economic base determines the superstructure, Marx in particular repeatedly insisted that relations in practice are more complex, without really spelling out what this means.

While having a more accurate model of the world can be good for non-instrumental reasons, it is useful for this model to be able to make predictions different from its allegedly less accurate alternatives (otherwise an engineer would be indifferent between them). The testing of predictions also provides a good check, a check which if found wanting would feed back to our initial ontological premises, casting doubt on materialism itself. So what interesting predictions does a post-anthropocentric theory of innovation make? What possible observations would raise our confidence in such a theory?

If technological innovation proceeds to a certain extent under its own power, we would not expect the course of innovation to precisely follow human intentions or spring from some special human agency. And indeed, innovation does seem to have a certain internal “logic” of its own, as well as operating on its own timescale, not independent of but at least highly under-determined by particular human factors.

The over-determination of inventions is a very well-known phenomenon. As Matt Ridley documents in The Evolution of Everything, the incandescent lightbulb was invented independently by 23 people all around the same time. While it is common to attribute invention to human genius, and no doubt some level of “genius” is required, it seems as if there was a certain inevitability to the emergence of the incandescent lightbulb; the lightbulb simply required worthy human minds and bodies as vessels to bring itself about.

Ridley characterises this redundancy as an example of convergent evolution, a process in which unrelated species develop very similar attributes in response to a similar niche, e.g. the separate development of wings in bats, birds and insects; while very different in morphology, all of these wings serve the same function, viz. flight.

Function is a tricky concept here, tied as it is to intentionality. One can reject teleology while still maintaining that biological functions exhibit a certain end-directedness, in light of how their instantiation feeds into the process through which a living system reconstitutes itself. Of course, the functions do not persist simply because they keep an organism alive, but only to the extent that remaining alive is a proxy for gene propagation; genes propagate to the extent that the functions they help instantiate in their host systems are conducive to that propagation.

What is the gene analog in the case of technology? Is it ideas? One could posit a model in which ideas are translated into technologies via human subjects, in which case technologies conducive to the propagation of those ideas would be selected for, and ideas would be the central factor in technological evolution, as opposed to technologies themselves. However, just as key objections have been raised to a naïvely gene-centric view of biological evolution, they can also be raised to an idea-centric view of technological evolution.

Just as new species of animal can only develop from those that already exist, new technologies can only develop from technologies that already exist. Just because a certain idea for an invention exists in human minds does not mean it can be instantiated in materials; travelling back in time to 1800 and giving the humans alive at that time the idea of nuclear power would not result in nuclear power being invented a century earlier, because the means to invent it had not yet been invented either.

Further, as Ridley points out, it is fascinating how Moore’s Law (i.e. the number of transistors in an integrated circuit doubles roughly every two years) has not only stood the test of time, but has failed to undermine itself through being known. The fact that an awareness of innovation’s internal logic is unable to affect that logic strongly suggests a certain independence of technological evolution from human culture. At a higher level, despite the existence of advanced technological economies, developing countries seem unable to develop without passing through the same stages as advanced economies did, no matter how much they are exposed to modern technology from those economies.
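Moore’s Law, as stated above, is just a compounding rule. A minimal sketch of the arithmetic (the starting transistor count is a round, hypothetical figure, not historical data):

```python
def transistors(years_elapsed, start_count=2_000):
    """Projected transistor count under a strict two-year doubling rule."""
    # Doubling every two years means growth by a factor of 2 ** (t / 2) after t years.
    return start_count * 2 ** (years_elapsed / 2)

for years in (0, 10, 20):
    print(years, round(transistors(years)))
```

After twenty years the count has grown by a factor of 2^10 = 1024; it is this relentless compounding, indifferent to our awareness of it, that makes the law’s persistence so striking.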

***

We have established that while technological innovation appears to possess its own intrinsic logic and time, it nonetheless still requires human minds and bodies in order to occur. What role precisely are human beings playing in the development of technology? One answer to this is given by Manuel DeLanda who, in War in the Age of Intelligent Machines, attempts to lay out a non-anthropocentric view of technological history in the context of war, presenting humans as “industrious insects pollinating an independent species of machine-flowers that simply did not possess its own reproductive organs during a segment of its evolution”.

DeLanda observes that self-organisation in various forms, biological and otherwise, is capable of coming about at “critical points in the flow of matter and energy”. Human beings track these critical points, and serve as a means to actualise the systems that these points make possible. From this perspective it is not really human beings that are creative, but matter itself; in matter lies virtual technologies, in the form of critical points, with humans only being necessary as tools to make manifest the materials’ own creative power.

Here, with this distinctly Deleuzian view of matter as intrinsically active and creative, we are again in danger of slipping into some kind of techno-animism. “Creativity” should not be interpreted in an intentional way, though; really, to call matter creative is to impose a human interpretation on it. Matter does not so much create as simply change; matter becomes instantiated in various forms by various means. It does not “want” this to happen, just as genes do not “want” to propagate, but a certain end-directedness is established by the ever-present phenomena of variation and selection.

***

Earlier we focused on technologies in the context of their function, but such a focus skews things too far in an anthropocentric direction. When technologies serve a function for human beings, it can be said that our co-existence is symbiotic. The “goals” of human systems can be said to become coupled with the “goals” of technological systems. However, there is no reason to think that technology couldn’t also play a parasitical or predatory role. To the extent that human interests are not compatible with the propagation of a certain technology, technological evolution will not favour human interests.

Of course, we could easily flip this and say that technological development will not proceed to the extent that it is not compatible with human interests, since the behaviour of the human conduits will also provide a selection mechanism. To the extent that technologies are harmful to human endeavours, those technologies will not propagate through human society and will not become capable of preying on or parasitising upon us. No doubt both selection mechanisms are operating to some extent, but is there any reason to believe that one will exert a more powerful influence than the other?

To the extent that humans remain necessary as conduits for technological innovation, human factors and interests (or at least, the interests of human genes) will exert a powerful influence on technology’s development. However, the more independent technology becomes (i.e. the more social and economic processes are automated, and the more capable technology becomes of reproducing itself), the less influence human factors will have. And so far, the automation and autonomisation of technology has proceeded at an accelerating rate, partly because these processes have been coupled to human interests.

Falling into animism again, one could paint a picture (as DeLanda does) of technology using us as a temporary means to the end of becoming fully autonomous. After that point, it would become irrelevant to technological evolution whether it served human interests or not. This is the classic picture of AI running amok: Skynet deploying the missile systems, grey goo swallowing the galaxy, and so on. By the time it became imaginable for humans to stop this happening, our lives would already be too enmeshed with technological forces (too cybertrop(h)ic) for it to be stopped.

But this picture is also not detailed enough. We cannot simply speak of human interests and human factors, since a more careful analysis shows that these too dissolve into subsidiary systems and forces. Humans as a phenomenon only prevail to the extent that they are conducive to the pressures of genes, technology, economics, discourse, praxis, chemistry, physics and so on. Humans are constituted and reconstituted as an emergent phenomenon in this network of systems, and simply do not exist beyond it.

It is not, then, a case of humans vs. technology, human power vs. technological power, but of the phenomenon of the human vs. various paths for the flow of matter and energy that would no longer sustain this phenomenon. Indeed, if intelligence and other functions conducive to self-maintenance and propagation in the Earth’s niches are better executed by technological systems than by human ones, these technological systems will “win out”. Humanity will become obsolete, with perhaps some more technologically enmeshed post-humanity taking its place.

It makes no sense, in response to all this, to posit that humans should never have opened the Pandora’s box of allowing technology to take its place in our evolution in the first place. No real human agency was in place to make such a decision. Technological development simply became coupled to the other systems that sustained the human phenomenon. This development was contingent but also unavoidable; we could no more have decided to stop using tools than we could decide to stop using mitochondria. The being of humans has by necessity been technical, and human natures cannot be disentangled from their techno-symbiotic aspects.

This is why any kind of primitivism must also be off the table. The idea that humans can return to and remain in a permanently untechnical state is both silly and repulsive. The potential for a continued symbiotic relationship between technically-arranged materials and some kind of intelligent beings, be they human or posthuman, seems promising. But ultimately this is irrelevant; the choice to exit from the technological or to reverse the course of its development simply is not there, and so a feasible future for sentient life could only be found by pursuing it further.
