PERSPECTIVE article

Front. Ecol. Evol., 18 October 2021

Sec. Models in Ecology and Evolution

Volume 9 - 2021 | https://doi.org/10.3389/fevo.2021.755981

Intelligence as Information Processing: Brains, Swarms, and Computers

Carlos Gershenson

  • 1. Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico

  • 2. Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City, Mexico

  • 3. Lakeside Labs GmbH, Klagenfurt am Wörthersee, Austria


Abstract

There is no agreed definition of intelligence, so it is problematic to simply ask whether brains, swarms, computers, or other systems are intelligent or not. To compare the potential intelligence exhibited by different cognitive systems, I take the approach common to artificial intelligence and artificial life: instead of studying the substrate of systems, let us focus on their organization. This organization can be measured with information. Thus, I apply an informationist epistemology to describe cognitive systems, including brains and computers. This allows me to frame the usefulness and limitations of the brain-computer analogy in different contexts. I also use this perspective to discuss the evolution and ecology of intelligence.

1. Introduction

In the 1850s, an English newspaper described the growing global telegraph network as a “nervous system of the planet” (Gleick, 2011). Notice that this was half a century before Ramón y Cajal (1899) first published his studies on neurons. Still, metaphors have been used since antiquity to describe and try to understand our bodies and our minds (Zarkadakis, 2015; Epstein, 2016): humans have been described as made of clay (Middle East) or corn (Americas), with flowing humors, like clockwork automata, similar to industrial factories, etc. The most common metaphor in cognitive sciences has been that of describing brains as computers (von Neumann, 1958; Davis, 2021).

Metaphors have been used in a broad range of disciplines. For example, in urbanism, there are arguments in favor of changing the dominant narrative of “cities as machines” to “cities as organisms” (Batty, 2012; Gershenson, 2013b).

We could debate at length which metaphors are best. Being pragmatic, though, we can judge metaphors by their usefulness: if they help us understand phenomena or build systems, then they are valuable. Thus, depending on the context, different metaphors can serve different purposes (Gershenson, 2004). For example, in the 1980s, the debate between symbolists/representationists (brain as processing symbols) (Fodor and Pylyshyn, 1988) and connectionists (brain as a network of simple units) (Smolensky, 1988) did not end with a "winner" and a "loser," as both metaphors (computational, by the way) are useful in different contexts.

There have been several other metaphors used to describe cognition, minds, and brains, each with its advantages and disadvantages (Varela et al., 1991; Steels and Brooks, 1995; Clark and Chalmers, 1998; Beer, 2000; Gärdenfors, 2000; Garnier et al., 2007; Chemero, 2009; Froese and Ziemke, 2009; Kiverstein and Clark, 2009; Froese and Stewart, 2010; Stewart et al., 2010; Downing, 2015; Harvey, 2019). It is not my purpose to discuss these here, but to note that there is a rich variety of flavors when it comes to studying cognition. Nevertheless, all of these metaphors can be described in terms of information processing. Since computation can be understood as the transformation of information (Gershenson, 2012), "computers," broadly understood as machines that process information, can be a useful metaphor that contains and compares other metaphors. Note that the concept of "machine" (and thus of computer) could also be updated (Bongard and Levin, 2021).

Computation was formally defined by Turing (1937): a computable function is one that can be calculated by a Universal Turing Machine (UTM). Still, there are two main limitations of UTMs when it comes to modeling minds (Gershenson, 2011a):

  • UTMs are closed. Once a computation begins, neither the program nor the data changes, so adaptation during computation is limited.

  • UTMs produce outputs only once they halt. In other words, outputs depend on a UTM "finishing its computation." Still, minds seem to be more continuous than halting. The question then arises: what function would a mind be computing?

As many have noted, the continuous nature of cognition seems to be closely related to that of the living (Maturana and Varela, 1980; Hopfield, 1994; Stewart, 1995; Walker, 2014). We have previously studied the “living as information processing” (Farnsworth et al., 2013), not only at the organism level, but at all relevant scales. Thus, it is natural to use a similar approach to describe intelligence.

Note that these limitations of UTMs apply only to theoretical computation. In practice, many artificial computing systems are continuous, such as reactive systems. An example is an operating system, which does not really halt, but is always waiting for events (internal or external) and responding to them.
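The contrast between a halting UTM and a reactive system can be sketched as a minimal event loop (the event names and handler here are hypothetical illustration, not a model of any particular operating system):

```python
from collections import deque

def reactive_system(events):
    """A minimal reactive loop: it never computes a final answer,
    it only responds to incoming events one at a time."""
    queue = deque(events)
    responses = []
    while queue:  # in a real reactive system, this loop never ends
        event = queue.popleft()
        # the "program" is a mapping from events to responses,
        # applied continuously rather than once before halting
        responses.append(f"handled:{event}")
    return responses

print(reactive_system(["keypress", "timer", "network"]))
```

Unlike a UTM, nothing here depends on the loop terminating; the "output" is the ongoing stream of responses.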

In the next section, I present a general notion of information and its limits to study intelligence. Then, I present the advantages of studying intelligence in terms of information processing. Intelligence is not restricted to brains, and swarms are a classic example of this, which can also be described as information processing systems. Before concluding, I exploit the metaphor of “intelligence as information processing” to understand its evolution and ecology.

2. Information

Shannon (1948) proposed a measure of information in the context of telecommunications, which is equivalent to Boltzmann-Gibbs entropy. This measure characterizes how much a receiver "learns" from incoming symbols (usually bits) of a string, based on the probability distribution of previously known/received symbols: if new bits can be completely determined from the past (as in a string with only one repeating symbol), then they carry zero information (because we know that the new symbols will be the same as previous ones). If previous information is useless for predicting the next bit (as in a random coin toss), then the bit carries maximum information. Elaborating on this, Shannon calculated how much redundancy is required to reliably transmit a message over an unreliable (noisy) channel. Even though Shannon's purpose was very specific, the use of information in various disciplines has exploded in recent decades (Haken, 1988; Lehn, 1990; Wheeler, 1990; Gell-Mann and Lloyd, 1996; Atlan and Cohen, 1998; DeCanio and Watkins, 1998; Roederer, 2005; von Baeyer, 2005; Cover and Thomas, 2006; Prokopenko et al., 2009, 2011; Batty et al., 2012; Escalona-Morán et al., 2012; Gershenson, 2012, 2020, 2021b; Fernández et al., 2014, 2017; Zubillaga et al., 2014; Haken and Portugali, 2015; Hidalgo, 2015; Murcio et al., 2015; Amoretti and Gershenson, 2016; Roli et al., 2018; Equihua et al., 2020; Krakauer et al., 2020; Scharf, 2021).
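Shannon's measure can be computed directly from symbol frequencies. A short sketch recovers the two extreme cases just mentioned: a string with one repeating symbol carries zero bits per symbol, while a fair-coin sequence carries one bit per symbol:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Average information per symbol, in bits, from empirical frequencies."""
    counts = Counter(s)
    n = len(s)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))  # 0.0: fully predictable, zero information
print(shannon_entropy("HTHTTHHT"))  # 1.0: like a fair coin, one bit per symbol
```

This uses the observed frequencies of a finite string as probabilities; Shannon's definition applies to the underlying probability distribution.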

We can say that electronic computers process information explicitly, as we can analyze each change of state, and information is encoded in a precise physical location. However, humans and other animals process information implicitly. For example, we say we have memories, but these are not physically stored at a specific location. And it seems unfeasible to represent precisely how information changes in our brains. Still, we do process information, as we can describe "inputs" (perceptions) and "outputs" (actions).

Shannon assumed that the meaning of a message was agreed previously between emitter and receiver. This was no major problem for telecommunications. However, in other contexts, meaning is not a trivial matter. Following Wittgenstein (1999), we can say that the meaning of information is given by the use agents make of it. This has several implications. One is that we can change meaning without changing information [passive information transformation; (Gershenson, 2012)]. Another concerns the limits of artificial intelligence (Searle, 1980; Mitchell, 2019), since the use of information in artificial systems tends to be predefined. Algorithms can "recognize" traffic lights or cats in an image, as they are trained for this specific purpose. But the "meaning" for computer programs is predefined, i.e., it is whatever we want the program to do. The quest for an "artificial general intelligence" that would go beyond this limit has so far produced little more than speculation.

Even if we could simulate in a digital computer all the neurons, molecules, or even elementary particles of a brain, such a simulation would not yield something akin to a mind. On the one hand, interactions generate novel information at multiple scales, so we would need to include not only the brain, but also the body and the world that interact with the brain (Clark, 1997). Moreover, such a simulation would require modeling not just one scale, but all scales relevant to minds (see below). On the other hand, as mentioned above, observers can give different meanings to the same information. In other words, the same "brain state" in different people could refer to different "mental states." For example, we could use the same simple "neural" architecture of a Braitenberg vehicle (Braitenberg, 1986) that exhibits phototaxis, but connect the inputs to different sensors (e.g., sound or odor, instead of light), and the "meaning" of the information processed by the same neural architecture would be very different. In a sense, this is related to the failure of Laplace's demon: even with full information about the states of the components of a system, prediction is limited because interactions generate novel information (Gershenson, 2013a). This novel information can determine the future production of information at different scales through upward or downward causation (Campbell, 1974; Bitbol, 2012; Farnsworth et al., 2017; Flack, 2017), so all relevant scales should be considered (Gershenson, 2021a). An example of downward causation can be given with money: it is a social contract, yet it has a causal effect on matter and energy (physics), e.g., when we extract minerals from a mountain. This action does not violate the laws of physics, but the laws of physics are not enough to predict that the matter in the mountain will be extracted by humans for their own purposes.
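The Braitenberg example can be made concrete in a few lines (a minimal sketch; the sensor readings are hypothetical). The architecture is just two crossed excitatory connections, and it is identical whatever the sensory modality:

```python
def braitenberg_vehicle(left_sensor, right_sensor):
    """Crossed excitatory connections: each sensor drives the opposite
    wheel, so the vehicle turns toward the stronger stimulus."""
    left_motor = right_sensor
    right_motor = left_sensor
    return left_motor, right_motor

# The same architecture processes the same numbers; only the observer's
# description of the behavior changes with the sensor modality:
light = braitenberg_vehicle(0.2, 0.9)  # light sensors: "phototaxis"
sound = braitenberg_vehicle(0.2, 0.9)  # microphones: "phonotaxis"
assert light == sound == (0.9, 0.2)    # identical information processing
```

The information transformation is the same in both cases; "phototaxis" versus "phonotaxis" is a meaning assigned by the observer, not by the architecture.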

In spite of all its limitations, the computer metaphor can be useful in a particular way. First, the limits on prediction by interactions are related to computational irreducibility (Wolfram, 2002). Second, describing brains and minds in terms of information allows us to avoid dualisms. Thus, it becomes natural to use information processing to describe intelligence and its evolution. Finally, information can contain other metaphors and formalisms, so it can be used to compare them and also to exploit their benefits.

3. Intelligence

There are several definitions of intelligence, but not a single one that is agreed upon. We have similar situations with the definitions of life (De Duve, 2003; Aguilar et al., 2014), consciousness (Michel et al., 2019), complexity (Lloyd, 2001; Heylighen et al., 2007), emergence (Bedau and Humphreys, 2008), and more. These concepts could be said to be of the type “I know it when I see it,” to quote Potter Stewart.

Still, having no agreed definition is neither a motive nor an excuse for not studying a phenomenon. Moreover, having different definitions of the same phenomenon can give us broader insights than sticking to a single, narrow, inflexible definition.

Thus, we could define intelligence as "the art of getting away with it" (Arturo Frappé), or "the ability to hold two opposed ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise" (F. Scott Fitzgerald). Turing (1950) proposed his famous test to decide whether a machine is intelligent. Generalizing Turing's test, Mario Lagunez suggested that to decide whether a system is intelligent, first the system has to perform an action; then an observer has to judge whether the action was intelligent or not, according to some criteria. In this sense, there is no intrinsically intelligent behavior. All actions and decisions are contextual (Gershenson, 2002). As with meaning, the same action can be intelligent or not, depending on the context, the judge, and their expectations.

Generalizing, we can define intelligence in terms of information processing: An agent a can be described as intelligent if it transforms information [individual (internal) or environmental (external)] to increase its “satisfaction” σ.

I have previously defined satisfaction σ ∈ [0, 1] as the degree to which the goals of an agent have been fulfilled (Gershenson, 2007, 2011b). Certainly, we still require an observer, since we are the ones who define the goals of an agent, its boundaries, its scale, and thus, its satisfaction. Examples of goals are sustainability, survival, happiness, power, control, and understanding. All of these can be described as information propagation (Gershenson, 2012): In this context, an intelligent agent will propagate its own information.
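Following the definition above, σ can be sketched as the average degree to which an observer-defined set of goals is fulfilled (the goals and their fulfillment values here are hypothetical illustration, not part of the cited formalism):

```python
def satisfaction(goals):
    """sigma in [0, 1]: degree to which an agent's goals are fulfilled.
    `goals` maps each observer-defined goal to a fulfillment in [0, 1]."""
    if not goals:
        return 0.0
    return sum(goals.values()) / len(goals)

# Hypothetical goals for an animal, as judged by an external observer:
sigma = satisfaction({"survival": 1.0, "reproduction": 0.5, "foraging": 0.75})
print(sigma)  # 0.75
```

Everything here depends on the observer: choosing different goals, weights, or boundaries for the agent yields a different σ for the same behavior.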

Brains by themselves cannot propagate. But species of animals with brains tend to propagate. In this context, brains are parts of agents that help process information in order to propagate those agents. From this abstract perspective, we can see that such ability is not restricted to brains (Levin and Dennett, 2020). Thus, there are other mechanisms capable of producing intelligent behavior.

4. Swarms

There has been much work related to collective intelligence and cognition (Hutchins, 1995; Heylighen, 1999; Reznikova, 2007; Couzin, 2009; Malone and Bernstein, 2015; Solé et al., 2016). Interestingly, groups of humans, animals or machines do not have a single brain. Thus, information processing is distributed.

A particular case is that of insect swarms (Chialvo and Millonas, 1995; Garnier et al., 2007; Passino et al., 2008; Marshall et al., 2009; Trianni and Tuci, 2009; Martin and Reggia, 2010), where not only information processing is distributed, but also reproduction and selection occur at the colony level (Hölldobler and Wilson, 2008).

To compare the cognitive architectures of brains and swarms, I previously proposed computing networks (Gershenson, 2010). With this formalism, it can be shown that differences in substrate do not necessarily imply a theoretical difference in cognitive abilities. Nevertheless, in practice, the speed and scalability of information processing in brains are far superior to those of swarms: neurons can interact on the scale of milliseconds, and mammal brains can have on the order of 10^11 neurons with 10^14 synapses (several species have more neurons than humans, including elephants and some whales; orcas have the most, more than twice as many as humans). The largest insect swarms registered (locusts) are also on the order of 10^11 individuals (covering 200 km^2). However, insects interact on the scale of seconds, and only with their local neighbors. In theory, this might not matter much. But in practice, it considerably limits the information-processing capacities of swarms compared to brains.
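A rough back-of-envelope calculation makes this practical gap concrete. The counts come from the figures above; the interaction rates and the neighborhood size are order-of-magnitude assumptions, not measurements:

```python
# Figures from the text; rates and neighborhood size are assumptions.
synapses = 1e14     # synapses in a large mammal brain
neuron_rate = 1e3   # ~one interaction per millisecond, in Hz (assumed)
locusts = 1e11      # individuals in the largest recorded swarms
neighbors = 1e1     # local contacts per insect (assumed)
insect_rate = 1e0   # ~one interaction per second, in Hz (assumed)

brain_events = synapses * neuron_rate             # ~1e17 interactions/s
swarm_events = locusts * neighbors * insect_rate  # ~1e12 interactions/s
print(f"brain/swarm ratio ~ {brain_events / swarm_events:.0e}")
```

Under these assumptions, a brain performs roughly five orders of magnitude more interaction events per second than a comparably sized swarm, despite similar unit counts.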

Thus, the brain as computer metaphor is not appropriate for studying collective intelligence in general, nor swarm intelligence in particular. However, the intelligence of brains and swarms can be described in terms of information processing, as an agent a can be an organism or a colony, with its own satisfaction σ defined by an external observer.

Another advantage of studying intelligence as information processing is that we can use the same formalism to study intelligence at multiple scales: cellular, multicellular, collective/social, and cultural. Curiously, at the global scale (where we might reach 10^10 humans later this century), the brain metaphor has also been used (Mayer-Kress and Barczys, 1995; Börner et al., 2005; Bernstein et al., 2012), although its usefulness remains to be demonstrated.

5. Evolution and Ecology

If we want to have a better understanding of intelligence, we must study how it came to evolve. Intelligence as information-processing can also be useful in this context, as different substrates and mechanisms can be used to exhibit intelligent behavior.

What could be the ecological pressures that promote the evolution of intelligence? Since environments and ecosystems can also be described in terms of information, we can say that more complex environments will promote—through natural selection—more complex organisms and species, which will require a more complex intelligence to process the information of their environment and of the other organisms and species they interact with (Gershenson, 2012). In this way, the complexity of ecosystems can also be expected to increase through evolution. It should be noted that complexity is understood here as a balance between order and chaos, stability and change (Packard, 1988; Langton, 1990; Lopez-Ruiz et al., 1995; Fernández et al., 2014; Roli et al., 2018). Thus, species cannot be too robust nor too adaptable if they are to thrive in a complex ecosystem. This will certainly depend on how stable or volatile ecosystems are (Equihua et al., 2020), but it is clear that organisms need to match the variety that their environment poses (Ashby, 1956; Gershenson, 2015) (see below).
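The cited information measures (Fernández et al., 2014) formalize this balance as C = 4E(1 − E), where E is Shannon entropy normalized to [0, 1]: complexity vanishes for fully ordered (E = 0) and fully chaotic (E = 1) systems, and peaks at the balance point. A minimal sketch:

```python
def complexity(E):
    """C = 4 * E * (1 - E): zero for fully ordered (E=0) or fully
    chaotic (E=1) systems, maximal (C=1) at the balance point E=0.5."""
    return 4 * E * (1 - E)

print(complexity(0.0), complexity(0.5), complexity(1.0))  # 0.0 1.0 0.0
```

The factor 4 simply normalizes C to [0, 1]; the substance of the measure is that complexity is a product of order (1 − E) and change (E).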

These ideas generalize Dunbar's (1993, 2003) "social brain hypothesis": larger and more complex social groups exert a selective pressure for more complex information processing (measured as the neocortex-to-body-mass ratio), which gives individuals more cognitive capacity to recognize different individuals, remember whom they can trust, handle multiple levels of intentionality (Dennett, 1989), and so on. In turn, increased cognitive abilities lead to more complex groups, so this cycle reinforces the selection for more intelligent individuals.

One can make a similar argument with environments instead of social groups: more complex ecosystems exert a selective pressure for more intelligent organisms, social groups, and species, as these require greater information-processing capabilities to survive in and exploit their environments. This also creates a feedback loop, where more complex information processing by organisms, groups, and species produces more complex ecosystems.

However, individuals can “offload” their information processing to their group or environment, leading to a decrease in their individual information processing abilities (Reséndiz-Benhumea et al., 2021). This is to say that intelligence does not always increase. Although there is a selective pressure for intelligence, its cost imposes limits that depend as well on the usefulness of increased cognitive abilities.

Generalizing, we can say that information evolves to have greater control over its own production (Gershenson, 2012). This leads to more complex information processing, and thus we can expect intelligence to increase at multiple scales through evolution, independently of the substrates that actually do the information processing.

Another way of describing the same idea: information is transformed by different causes, which generates a variety of complexity (Ashby, 1956; Gershenson, 2015). More complex information requires more complex agents to propagate it, leading to an increase in complexity and intelligence through evolution.

At different scales, since the Big Bang, we have seen an increase of information processing through evolution. In recent decades, this increase has been supraexponential in computers (Schaller, 1997). Although there are limits to sustaining this rate of increase (Shalf, 2020), we can say that the increase of intelligence is a natural tendency of evolution, be it in brains, swarms, or machines. This will not lead to a "singularity," but to an increase in the intelligence and complexity of humans, machines, and the ecosystems we create.

6. Conclusion

Brains are not essential for intelligence. Plants, swarms, bacterial colonies, robots, societies, and more exhibit intelligence without brains. An understanding of intelligence (and life, Gershenson et al., 2020) independently of its substrate, in terms of information processing, will be more illuminating than focusing only on the mechanisms used by vertebrates and other animals. In this sense, the metaphor of the brain as a computer is limited more on the side of the brain than on the side of the computer. Brains do process information to exhibit intelligence, but several other mechanisms also process information to exhibit intelligence. Brains are just a particular case; we can learn a lot from them, but we will learn more if we do not limit our studies to their particular type of cognition.

Funding

This work was supported by UNAM's PAPIIT IN107919 and IV100120 grants.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

Author contributions

CG conceived and wrote the paper.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • 1

    AguilarW.Santamaría BonfilG.FroeseT.GershensonC. (2014). The past, present, and future of artificial life. Front. Robot. AI1:8. 10.3389/frobt.2014.00008

  • 2

    AmorettiM.GershensonC. (2016). Measuring the complexity of adaptive peer-to-peer systems. Peer-to-Peer Netw. Appl. 9, 10311046. 10.1007/s12083-015-0385-4

  • 3

    AshbyW. R. (1956). An Introduction to Cybernetics. London: Chapman & Hall. 10.5962/bhl.title.5851

  • 4

    AtlanH.CohenI. R. (1998). Immune information, self-organization and meaning. Int. Immunol. 10, 711717. 10.1093/intimm/10.6.711

  • 5

    BattyM. (2012). Building a science of cities. Cities29, S9S16. 10.1016/j.cities.2011.11.008

  • 6

    BattyM.MorphetR.MassuciP.StanilovK. (2012). Entropy, complexity and spatial information, in CASA Working Paper, 185. London, UK.

  • 7

    BedauM. A.HumphreysP. (eds.). (2008). Emergence: Contemporary Readings in Philosophy and Science. Cambridge, MA: MIT Press. 10.7551/mitpress/9780262026215.001.0001

  • 8

    BeerR. D. (2000). Dynamical approaches to cognitive science. Trends Cogn. Sci. 4, 9199. 10.1016/S1364-6613(99)01440-0

  • 9

    BernsteinA.KleinM.MaloneT. W. (2012). Programming the global brain. Commun. ACM55, 4143. 10.1145/2160718.2160731

  • 10

    BitbolM. (2012). Downward causation without foundations. Synthese185, 233255. 10.1007/s11229-010-9723-5

  • 11

    BongardJ.LevinM. (2021). Living things are not (20th century) machines: updating mechanism metaphors in light of the modern science of machine behavior. Front. Ecol. Evol. 9:147. 10.3389/fevo.2021.650726

  • 12

    BörnerK.Dall'AstaL.KeW.VespignaniA. (2005). Studying the emerging global brain: analyzing and visualizing the impact of co-authorship teams. Complexity10, 5767. 10.1002/cplx.20078

  • 13

    BraitenbergV. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.

  • 14

    CampbellD. T. (1974). ‘Downward causation’ in hierarchically organized biological systems, in Studies in the Philosophy of Biology, eds AyalaF. J.DobzhanskyT. (New York City, NY: Macmillan), 179186. 10.1007/978-1-349-01892-5_11

  • 15

    ChemeroA. (2009). Radical Embodied Cognitive Science. Cambridge, MA: The MIT Press. 10.7551/mitpress/8367.001.0001

  • 16

    ChialvoD.MillonasM. (1995). How swarms build cognitive maps, in The Biology and Technology of Intelligent Autonomous Agents, Vol. 144, ed SteelsL. (Berlin; Heidelberg: Springer), 439450. 10.1007/978-3-642-79629-6_20

  • 17

    ClarkA. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press. 10.7551/mitpress/1552.001.0001

  • 18

    ClarkA.ChalmersD. (1998). The extended mind. Analysis58, 719. 10.1093/analys/58.1.7

  • 19

    CouzinI. D. (2009). Collective cognition in animal groups. Trends Cogn. Sci. 13, 3643. 10.1016/j.tics.2008.10.002

  • 20

    CoverT. M.ThomasJ. A. (2006). Elements of Information Theory. Hoboken, NJ: Wiley-Interscience.

  • 21

    DavisM. (2021). The brain-as-computer metaphor. Front. Comput. Sci. 3:41. 10.3389/fcomp.2021.681416

  • 22

    De DuveC. (2003). Live Evolving: Molecules, Mind, and Meaning. Oxford: Oxford University Press.

  • 23

    DeCanioS. J.WatkinsW. E. (1998). Information processing and organizational structure. J. Econ. Behav. Organ. 36, 275294. 10.1016/S0167-2681(98)00096-1

  • 24

    DennettD. C. (1989). The Intentional Stance. Cambridge, MA: MIT Press. 10.1017/S0140525X00058611

  • 25

    DowningK. L. (2015). Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems. Cambridge, MA: MIT Press. 10.7551/mitpress/9898.001.0001

  • 26

    DunbarR. I. M. (1993). Coevolution of neocortical size, group size and language in humans. Behav. Brain Sci. 16, 681735. 10.1017/S0140525X00032325

  • 27

    DunbarR. I. M. (2003). The social brain: mind, language and society in evolutionary perspective. Ann. Rev. Anthrop. 32, 163181. 10.1146/annurev.anthro.32.061002.093158

  • 28

    EpsteinR. (2016). The empty brain. Aeon.

  • 29

    EquihuaM.Espinosa AldamaM.GershensonC.López-CoronaO.MunguíaM.Pérez-MaqueoO.Ramírez-CarrilloE. (2020). Ecosystem antifragility: beyond integrity and resilience. PeerJ8:e8533. 10.7717/peerj.8533

  • 30

    Escalona-MoránM.ParedesG.CosenzaM. G. (2012). Complexity, information transfer and collective behavior in chaotic dynamical networks. Int. J. Appl. Math. Stat. 26, 5866. Available online at: https://arxiv.org/abs/1010.4810

  • 31

    FarnsworthK. D.EllisG. F. R.JaegerL. (2017). Living through downward causation: from molecules to ecosystems, in From Matter to Life: Information and Causality, eds WalkerS. I.DaviesP. C. W.EllisG. F. R. (Cambridge, UK: Cambridge University Press), 303333.

  • 32

    FarnsworthK. D.NelsonJ.GershensonC. (2013). Living is information processing: from molecules to global systems. Acta Biotheor. 61, 203222. 10.1007/s10441-013-9179-3

  • 33

    FernándezN.AguilarJ.Pi na-GarcíaC. A.GershensonC. (2017). Complexity of lakes in a latitudinal gradient. Ecol. Complex. 31, 120. 10.1016/j.ecocom.2017.02.002

  • 34

    FernándezN.MaldonadoC.GershensonC. (2014). Information measures of complexity, emergence, self-organization, homeostasis, and autopoiesis, in Guided Self-Organization: Inception, Vol. 9 of Emergence, Complexity and Computation, ed ProkopenkoM. (Berlin; Heidelberg: Springer), 1951. 10.1007/978-3-642-53734-9_2

  • 35

    FlackJ. C. (2017). Coarse-graining as a downward causation mechanism. Philos. Trans. R. Soc. A375:20160338. 10.1098/rsta.2016.0338

  • 36

    FodorJ. A.PylyshynZ. W. (1988). Connectionism and cognitive architecture: a critical analysis. Cognition28, 371. 10.1016/0010-0277(88)90031-5

  • 37

    FroeseT.StewartJ. (2010). Life after ashby: ultrastability and the autopoietic foundations of biological autonomy. Cybern. Hum. Know. 17, 750. 10.1007/s10699-010-9222-7

  • 38

    FroeseT.ZiemkeT. (2009). Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif. Intell. 173, 366500. 10.1016/j.artint.2008.12.001

  • 39

    GärdenforsP. (2000). Conceptual Spaces: The Geometry of Thought. Cambridge, MA: MIT Press; Bradford Books. 10.7551/mitpress/2076.001.0001

  • 40

    GarnierS.GautraisJ.TheraulazG. (2007). The biological principles of swarm intelligence. Swarm Intell. 1, 331. 10.1007/s11721-007-0004-y

  • 41

    Gell-MannM.LloydS. (1996). Information measures, effective complexity, and total information. Complexity2, 4452. 10.1002/(SICI)1099-0526(199609/10)2:1<44::AID-CPLX10>3.0.CO;2-X

  • 42

    GershensonC. (2002). Contextuality: A Philosophical Paradigm, With Applications to Philosophy of Cognitive Science. POCS Essay, COGS, University of Sussex.

  • 43

    GershensonC. (2004). Cognitive paradigms: which one is the best?Cogn. Syst. Res. 5, 135156. 10.1016/j.cogsys.2003.10.002

  • 44

    GershensonC. (2007). Design and Control of Self-organizing Systems. Mexico: CopIt Arxives. Available online at: http://tinyurl.com/DCSOS2007

  • 45

    GershensonC. (2010). Computing networks: a general framework to contrast neural and swarm cognitions. Paladyn J. Behav. Robot. 1, 147153. 10.2478/s13230-010-0015-z

  • 46

    GershensonC. (2011a). Are Minds Computable? Technical Report 2011.08, Centro de Ciencias de la Complejidad. https://arxiv.org/abs/1110.3002

  • 47

    GershensonC. (2011b). The sigma profile: a formal tool to study organization and its evolution at multiple scales. Complexity16, 3744. 10.1002/cplx.20350

  • 48

    GershensonC. (2012). The world as evolving information, in Unifying Themes in Complex Systems, Vol. VII, eds MinaiA.BrahaD.Bar-YamY. (Berlin; Heidelberg: Springer), 100115. 10.1007/978-3-642-18003-3_10

  • 49

    GershensonC. (2013a). The implications of interactions for science and philosophy. Found. Sci. 18, 781790. 10.1007/s10699-012-9305-8

  • 50

    GershensonC. (2013b). Living in living cities. Artif. Life19, 401420. 10.1162/ARTL_a_00112

  • 51

    GershensonC. (2015). Requisite variety, autopoiesis, and self-organization. Kybernetes44, 866873. 10.1108/K-01-2015-0001

  • 52

    GershensonC. (2020). Information in science and Buddhist philosophy: towards a non-materialistic worldview, in Vajrayana Buddhism in Russia: Topical Issues of History and Sociocultural Analytics, eds Alekseyev-ApraksinA. M.DronovaV. M. (Moscow: Almazny Put), 210218.

  • 53

    GershensonC. (2021a). Emergence in artificial life. arXiv:2105.03216.

  • 54

    GershensonC. (2021b). On the scales of selves: information, life, and buddhist philosophy, in ALIFE 2021: The 2021 Conference on Artificial Life, eds ČejkováJ.HollerS.SorosL.WitkowskiO. (Prague: MIT Press), 2. 10.1162/isal_a_00402

  • 55

    GershensonC.TrianniV.WerfelJ.SayamaH. (2020). Self-organization and artificial life. Artif. Life26, 391408. 10.1162/artl_a_00324

  • 56

    GleickJ. (2011). The Information: A History, A Theory, A Flood. New York, NY: Pantheon.

  • 57

    HakenH. (1988). Information and Self-organization: A Macroscopic Approach to Complex Systems. Berlin: Springer-Verlag. 10.1007/978-3-662-07893-8

  • 58

    HakenH.PortugaliJ. (2015). Information Adaptation: The Interplay Between Shannon Information and Semantic Information in Cognition, Volume XII of SpringerBriefs in Complexity. Cham; Heidelberg; New York, NY; Dordrecht; London: Springer. 10.1007/978-3-319-11170-4

  • 59

    HarveyI. (2019). Neurath's boat and the sally-anne test: life, cognition, matter and stuff. Adapt. Behav. 1059712319856882. 10.1177/1059712319856882

  • 60

    HeylighenF. (1999). Collective intelligence and its implementation on the web. Comput. Math. Theory Organ. 5, 253280. 10.1023/A:1009690407292

  • 61

    HeylighenF.CilliersP.GershensonC. (2007). Complexity and philosophy, in Complexity, Science and Society, eds BoggJ.GeyerR. (Oxford: Radcliffe Publishing), 117134.

  • 62

    HidalgoC. A. (2015). Why Information Grows: The Evolution of Order, From Atoms to Economies. New York, NY: Basic Books.

  • 63

    Hölldobler, B., and Wilson, E. O. (2008). The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York, NY: W. W. Norton & Company.

  • 64

    Hopfield, J. J. (1994). Physics, computation, and why biology looks so different. J. Theor. Biol. 171, 53–60. doi: 10.1006/jtbi.1994.1211

  • 65

    Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/1881.001.0001

  • 66

    Kiverstein, J., and Clark, A. (2009). Introduction: mind embodied, embedded, enacted: one church or many? Topoi 28, 1–7. doi: 10.1007/s11245-008-9041-4

  • 67

    Krakauer, D., Bertschinger, N., Olbrich, E., Flack, J. C., and Ay, N. (2020). The information theory of individuality. Theory Biosci. 139, 209–223. doi: 10.1007/s12064-020-00313-7

  • 68

    Langton, C. G. (1990). Computation at the edge of chaos: phase transitions and emergent computation. Phys. D 42, 12–37. doi: 10.1016/0167-2789(90)90064-V

  • 69

    Lehn, J.-M. (1990). Perspectives in supramolecular chemistry – from molecular recognition towards molecular information processing and self-organization. Angew. Chem. Int. Edn. Engl. 29, 1304–1319. doi: 10.1002/anie.199013041

  • 70

    Levin, M., and Dennett, D. C. (2020). Cognition all the way down. Aeon.

  • 71

    Lloyd, S. (2001). Measures of Complexity: A Non-Exhaustive List. Department of Mechanical Engineering, Massachusetts Institute of Technology.

  • 72

    Lopez-Ruiz, R., Mancini, H. L., and Calbet, X. (1995). A statistical measure of complexity. Phys. Lett. A 209, 321–326. doi: 10.1016/0375-9601(95)00867-5

  • 73

    Malone, T. W., and Bernstein, M. S. (eds.). (2015). Handbook of Collective Intelligence. Cambridge, MA: MIT Press.

  • 74

    Marshall, J. A., Bogacz, R., Dornhaus, A., Planqué, R., Kovacs, T., and Franks, N. R. (2009). On optimal decision-making in brains and social insect colonies. J. R. Soc. Interface 6, 1065–1074. doi: 10.1098/rsif.2008.0511

  • 75

    Martin, C., and Reggia, J. (2010). Self-assembly of neural networks viewed as swarm intelligence. Swarm Intell. 4, 1–36. doi: 10.1007/s11721-009-0035-7

  • 76

    Maturana, H., and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living. Dordrecht: Reidel Publishing Company. doi: 10.1007/978-94-009-8947-4

  • 77

    Mayer-Kress, G., and Barczys, C. (1995). The global brain as an emergent structure from the worldwide computing network, and its implications for modeling. Inform. Soc. 11, 1–27. doi: 10.1080/01972243.1995.9960177

  • 78

    Michel, M., Beck, D., Block, N., Blumenfeld, H., Brown, R., Carmel, D., et al. (2019). Opportunities and challenges for a maturing science of consciousness. Nat. Hum. Behav. 3, 104–107. doi: 10.1038/s41562-019-0531-8

  • 79

    Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. London, UK: Penguin.

  • 80

    Murcio, R., Morphet, R., Gershenson, C., and Batty, M. (2015). Urban transfer entropy across scales. PLoS ONE 10:e0133780. doi: 10.1371/journal.pone.0133780

  • 81

    Packard, N. H. (1988). "Adaptation toward the edge of chaos," in Dynamic Patterns in Complex Systems, eds J. A. S. Kelso, A. J. Mandell, and M. F. Shlesinger (Singapore: World Scientific), 293–301.

  • 82

    Passino, K. M., Seeley, T. D., and Visscher, P. K. (2008). Swarm cognition in honey bees. Behav. Ecol. Sociobiol. 62, 401–414. doi: 10.1007/s00265-007-0468-1

  • 83

    Prokopenko, M., Boschetti, F., and Ryan, A. J. (2009). An information-theoretic primer on complexity, self-organisation and emergence. Complexity 15, 11–28. doi: 10.1002/cplx.20249

  • 84

    Prokopenko, M., Lizier, J. T., Obst, O., and Wang, X. R. (2011). Relating Fisher information to order parameters. Phys. Rev. E 84:041116. doi: 10.1103/PhysRevE.84.041116

  • 85

    Ramón y Cajal, S. (1899). Textura del Sistema Nervioso del Hombre y de los Vertebrados: Estudios Sobre el Plan Estructural y Composición Histológica de los Centros Nerviosos Adicionados de Consideraciones Fisiológicas Fundadas en los Nuevos Descubrimientos, Vol. 1. Madrid: Moya.

  • 86

    Reséndiz-Benhumea, G. M., Sangati, E., Sangati, F., Keshmiri, S., and Froese, T. (2021). Shrunken social brains? A minimal model of the role of social interaction in neural complexity. Front. Neurorobot. 15:72. doi: 10.3389/fnbot.2021.634085

  • 87

    Reznikova, Z. (2007). Animal Intelligence: From Individual to Social Cognition. Cambridge, UK: Cambridge University Press.

  • 88

    Roederer, J. G. (2005). Information and Its Role in Nature. Heidelberg: Springer-Verlag. doi: 10.1007/3-540-27698-X

  • 89

    Roli, A., Villani, M., Filisetti, A., and Serra, R. (2018). Dynamical criticality: overview and open questions. J. Syst. Sci. Complex. 31, 647–663. doi: 10.1007/s11424-017-6117-5

  • 90

    Schaller, R. (1997). Moore's law: past, present and future. IEEE Spectr. 34, 52–59. doi: 10.1109/6.591665

  • 91

    Scharf, C. (2021). The Ascent of Information: Books, Bits, Genes, Machines, and Life's Unending Algorithm. New York, NY: Riverhead Books.

  • 92

    Searle, J. R. (1980). Minds, brains, and programs. Behav. Brain Sci. 3, 417–424. doi: 10.1017/S0140525X00005756

  • 93

    Shalf, J. (2020). The future of computing beyond Moore's law. Philos. Trans. R. Soc. A 378:20190061. doi: 10.1098/rsta.2019.0061

  • 94

    Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Techn. J. 27, 379–423; 623–656. doi: 10.1002/j.1538-7305.1948.tb00917.x

  • 95

    Smolensky, P. (1988). On the proper treatment of connectionism. Behav. Brain Sci. 11, 1–23. doi: 10.1017/S0140525X00052432

  • 96

    Solé, R., Amor, D. R., Duran-Nebreda, S., Conde-Pueyo, N., Carbonell-Ballestero, M., and Montañez, R. (2016). Synthetic collective intelligence. Biosystems 148, 47–61. doi: 10.1016/j.biosystems.2016.01.002

  • 97

    Steels, L., and Brooks, R. (1995). The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents. New York City, NY: Lawrence Erlbaum Associates.

  • 98

    Stewart, J. (1995). Cognition = life: implications for higher-level cognition. Behav. Process. 35, 311–326. doi: 10.1016/0376-6357(95)00046-1

  • 99

    Stewart, J., Gapenne, O., and Di Paolo, E. A. (eds.). (2010). Enaction: Toward a New Paradigm for Cognitive Science. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/9780262014601.001.0001

  • 100

    Trianni, V., and Tuci, E. (2009). "Swarm cognition and artificial life," in Advances in Artificial Life. Proceedings of the 10th European Conference on Artificial Life (ECAL 2009) (Hungary).

  • 101

    Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. s2-42, 230–265. doi: 10.1112/plms/s2-42.1.230

  • 102

    Turing, A. M. (1950). Computing machinery and intelligence. Mind 59, 433–460. doi: 10.1093/mind/LIX.236.433

  • 103

    Varela, F. J., Thompson, E., and Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/6730.001.0001

  • 104

    von Baeyer, H. C. (2005). Information: The New Language of Science. Cambridge, MA: Harvard University Press.

  • 105

    von Neumann, J. (1958). The Computer and the Brain. New Haven, CT: Yale University Press.

  • 106

    Walker, S. I. (2014). Top-down causation and the rise of information in the emergence of life. Information 5, 424–439. doi: 10.3390/info5030424

  • 107

    Wheeler, J. A. (1990). "Information, physics, quantum: the search for links," in Complexity, Entropy, and the Physics of Information, Volume VIII of Santa Fe Institute Studies in the Sciences of Complexity, ed W. H. Zurek (Reading, MA: Perseus Books), 309–336.

  • 108

    Wittgenstein, L. (1999). Philosophical Investigations, 3rd Edn. Upper Saddle River, NJ: Prentice Hall.

  • 109

    Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media.

  • 110

    Zarkadakis, G. (2015). In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence. Pegasus Books.

  • 111

    Zubillaga, D., Cruz, G., Aguilar, L. D., Zapotécatl, J., Fernández, N., Aguilar, J., et al. (2014). Measuring the complexity of self-organizing traffic lights. Entropy 16, 2384–2407. doi: 10.3390/e16052384

Keywords

mind, cognition, intelligence, information, brain, computer, swarm

Citation

Gershenson C (2021) Intelligence as Information Processing: Brains, Swarms, and Computers. Front. Ecol. Evol. 9:755981. doi: 10.3389/fevo.2021.755981

Received

09 August 2021

Accepted

22 September 2021

Published

18 October 2021

Edited by

Giorgio Matassi, FRE3498 Ecologie et Dynamique des Systèmes Anthropisés (EDYSAN), France

Reviewed by

Thilo Gross, Helmholtz Institute for Functional Marine Biodiversity (HIFMB), Germany; Alberto Policriti, University of Udine, Italy

Copyright

*Correspondence: Carlos Gershenson

This article was submitted to Models in Ecology and Evolution, a section of the journal Frontiers in Ecology and Evolution

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
