
This article is part of the Research Topic

Physiology and Network Science

Opinion Article

Front. Physiol., 19 March 2015 | https://doi.org/10.3389/fphys.2015.00088

Demystifying cognitive science: explaining cognition through network-based modeling

Emma K. Soberano and Damian G. Kelty-Stephen*
  • Grinnell College, Grinnell, IA, USA

Network science has made what once seemed a miracle into practical science: we can set up conditions for self-building systems, and we can use their patterns of variability to identify and explain processes underlying prospective control in biological systems.

Behavior lifts itself from a sprawling mass of random events and directs itself prospectively toward events in the future not existing presently except as possibilities (Turvey, 1992). Possible effects become causes for current action. This control balks ordinary cause-and-effect sequences: organisms move themselves to seek stimuli which may never exist. Other sciences must wait for outside forces to stimulate a system into action, but cognitive science has begged your patience for a central article of faith: if you accept that a system can steer itself toward the future, we might find important insights into behavior.

Biology has not discouraged this article of faith. Anticipatory, context-sensitive pragmatism is a defining feature of life, even in brainless forms (Saigusa et al., 2008; Latty and Beekman, 2011). Anticipation suffuses popular neo-Darwinist understandings of evolution as life tailoring itself according to variable transmission of genotypes (see Brooks, 2005; Fodor and Piattelli-Palmarini, 2010; Griffiths and Gray, 2005; Lewontin, 1982; Smith, 2008 for widely ranging views for/against this interpretation). Life seems different, straddling a boundary between thermodynamics and information: thermodynamics provides the fuel, and information encodes biological particulars into signals guiding phenotypes away from dangerous future unknowns (Rosen, 1985). We never think of non-biological stuff interpreting anything, but if we lend biology powers of interpretation (Dennett, 1971), then systems steering themselves toward the future might seem less mysterious.

Cognitive science leaves biology to its own mysteries, but it extends the line of credit to the brain. Cartesian metaphors of the brain as seat of a soul-as-observer have been much maligned as question-begging. Everyone knows to swear off this ghostly homunculus (or perhaps, for a more modern era, “personcule”), but cognitive science still has had trouble explaining its faith in steering toward the future. We have learned from Lashley's (1950) futile search for specific cortical structures storing memories and Hebb's (1949) rephrasing of “representations” from contents of neural lockboxes to less offensive “fire-together/wire-together” patterns. Then again, we steer clear of vague holism blandly allowing that “Everything might do everything.” Cognitive science has grown confident carving brains and/or minds into modules, simple parts with domain-specific functions and evolutionary roots (Pinker, 1997). The trouble with this bounty of inherited modules is threefold: evolution does not know what purpose is (Lewontin, 1982; Fodor, 2005); the genome carries much more genetic material than necessary to code for proteins while also leaving much of the hard work of building tissues to codeless, lifeless physics (Denton et al., 2003; Pearson, 2006); and brains do not support a stable anatomical organization respecting different cognitive, perceptual or behavioral domains (Graziano et al., 2002; Anderson, 2010; Hickok, 2014). This latter point requires spirited and repeated restatement against all-too-easy intuitions to the contrary.

Network theory was born of a hope that, one day, systems steering themselves toward future states might not require so much faith. Computer science, cybernetics and complex-systems theories raised fascinating new questions about the capacity of material systems to absorb sensory inputs and develop these sensory inputs into goal-directed behaviors (McCulloch and Pitts, 1943; Ashby, 1956). Turing (1936, 1937, 1950) led the vanguard in asking how intelligent machines could be. Some very simple machines did literally build themselves from dead material parts to choose sensors and learn rudimentary things about their surroundings (Pask, 1958). They were notoriously difficult to control and useless for our everyday purposes (Cariani, 1993). Some engineers suggested that orderly natural systems were built of modular parts and that only modular systems were controllable (Simon, 1969)—and greater financial interest went toward building docile machines (e.g., for typing messages upon) than toward breathing life into autonomous ones. Ultimately, these pioneering minds began to ponder whether intelligent behavior might build itself out of the busy connections among many non-intelligent component agents (Rosenblatt, 1958; Selfridge, 1959). This now-immense program of research began in fits and starts, always looking forward to the future possibility that the right interactive architecture might allow autonomous behavior steering toward the future (Minsky and Papert, 1969).

Like any newborn organisms learning to toddle around, some of these networks behaved more obediently than others. Some behaved well according to a supervisor's “delta rule” (Widrow and Hoff, 1960), dutifully computing upon sensory inputs and comparing their behavior with the engineer's feedback. Other networks were moodier and acted up on their own. Without any supervision, they began to amplify small fluctuations and share them among local parts, producing large, coherent behaviors that their designers had not planned. These unsupervised networks would not stop when they had an accurate or adaptive response. Instead, they kept generating new structures building on what they had done before or breaking it down. It is this latter, unruly sort of network that might be more interesting, provided it ever learned anything. The former sort only follows rules, but the latter shows something close to creativity in its coordination, and both obedience and creativity are attributes we find intriguing in biological systems.
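The supervised case can be sketched in a few lines: under the delta rule, a single linear unit adjusts its weight in proportion to the error between its output and the supervisor's target. The toy task here (learning y = 2x), the learning rate, and the epoch count are illustrative assumptions, not details from Widrow and Hoff's original circuits.

```python
# Minimal sketch of the Widrow-Hoff "delta rule" for one linear unit.
# Task and parameters are illustrative, not from the original paper.

def delta_rule(samples, lr=0.1, epochs=100):
    """Train one linear unit (output = w * x) to match targets."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - w * x   # supervisor's feedback signal
            w += lr * error * x      # weight change proportional to error
    return w

# Targets follow y = 2x, so the learned weight should approach 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = delta_rule(data)
```

The unit is obedient in exactly the sense described above: it stops changing once its responses match the supervisor's targets, and it never generates structure the engineer did not ask for.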

These unruly networks clamber to their feet and generate their own large-scale behaviors using familiar methods: in the sand- or rice-pile network, single grains dropped at regular intervals pile up into dunes, and gradually, pile instability yields an avalanche, and one avalanche begets another (Jensen, 1998). We find similar avalanches in artificial neural networks built on similar principles. Neuronal avalanches appear as local field potentials in explicitly neural networks (Beggs, 2008). Both types of avalanches follow an inverse power-law structure consistent with “fractal” statistics: both the probability density functions of avalanche sizes and the power spectra of avalanche time series decay slowly with increasing size or frequency, respectively. This slow decay entails that the power laws have no characteristic scale, suggesting that interactions among avalanches have a rich creativity bounded only by the size of the network (Bak et al., 1987). Of course, rice-piles are not brains and exhibit none of the intelligence that cognitive science seeks to explain (Wagenmakers et al., 2005). Novel work in a sort of neural network exhibiting what is called “critical branching” links the strength of power-law scaling in neural-spike trains to improved memory and computing capacity (Kello, 2013; Rodny and Kello, 2014). The self-organizing network is growing up and developing into an intelligent animal.
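The sand-pile dynamics above admit a very short sketch: grains drop one at a time onto a grid, any site holding four grains topples one grain to each neighbor (grains at the boundary fall off), and the chains of topplings are the avalanches whose sizes range across many scales. Grid size, drop site, and grain count are illustrative choices, not parameters from Bak et al. (1987).

```python
# Minimal sketch of a Bak-Tang-Wiesenfeld-style sandpile.
# Grid size and drop site are illustrative assumptions.

def drop_and_relax(grid, n, r, c):
    """Drop one grain at (r, c); return the avalanche size (topplings)."""
    grid[r][c] += 1
    topplings = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4           # site topples...
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:  # boundary grains are lost
                grid[ni][nj] += 1            # ...one grain per neighbor
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topplings

n = 11
grid = [[0] * n for _ in range(n)]
sizes = [drop_and_relax(grid, n, n // 2, n // 2) for _ in range(2000)]
```

After the early transient, `sizes` mixes many null events with occasional system-spanning avalanches, the heavy-tailed mixture that the power-law statistics describe.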

Power-law structure in self-organizing networks suggests that empirical evidence of power-law structure indicates interaction-driven, self-organizing processes (Bak, 1996; Van Orden et al., 2003; Friston et al., 2012). These proposals are more tantalizing in cognitive sciences than in “natural” ones, the latter being inured to the world's self-organization. Only in cognitive sciences does the glow of consciousness cast shadows on “natural-ness.” In the cognitive sciences, the rise of choice, goals, or forward-looking pragmatism makes everything seem suddenly “not so natural” (Lewontin, 2010). Whatever “natural” is, cognitive science deals in behaviors driven by “agents,” “selves,” or whatever we call “personcular” ghosts. Agents stand apart from the self-organizing world beyond in their interpretive stance, browsing through reams of sensory codes to plot their leap forward into an imagined future, balancing nimbly between the energy-burning process of collecting information and energy-conserving computations over that information (Brooks, 2005; Smith, 2008; Friston et al., 2012).

To iron out the ghosts lurking in this information-thermodynamics divide, we disregard any primitive distinction between conservable “information” and dissipated energy. Self-organizing networks give cognitive science two points of leverage. First, fractal statistics in empirical data from cognitive performance implicate similarly interactive architecture (Holden and Rajaraman, 2012; Abney et al., 2014). Fractal statistics are not simply concurrent with but outright predictive of cognitive performance (Stephen et al., 2009; Stephen and Hajnal, 2011). Second, given the overwhelming full-body evidence of fractal statistics beyond the confines of the skull (Hausdorff, 2007), network modeling allows us to clothe fractal systems with new theoretical elaborations.

A crucial point for theoretical development of networks is that our infant networks needed the “right” mix of constraints and unsupervised randomness to exhibit interesting self-organization (Beggs, 2008). Different interactive architectures leave networks free to discover different response patterns—just as task constraints can strengthen or diminish evidence of fractal structure (Kuznetsov and Wallot, 2011), it is possible to build a network that will not produce power-law-structured responses (Csányi and Szendröi, 2004). We might gradually tune control parameters to generate networks with gradually more fractal or less fractal architectures (Sporns, 2006), and we can use these parameters to describe the “connectome” of functional interactions spanning the brain (Zuo et al., 2012). Subtle changes in network topology in terms of connection strengths and connection patterns offer a rich testbed for creating novel hypotheses about development of perceiving-acting systems (Gorochowski et al., 2012; Ma et al., 2013). For instance, recent pioneering work in robotics has attempted to flesh out such neural networks, embedding networks into musculoskeletal architectures in order to model the dramatic effects of subtle factors such as uterine pressures and twitches during sleep on sensorimotor development (Blumberg et al., 2013; Mori and Kuniyoshi, 2013). In this way, network science has gradually blurred the lines between infant models and actual human infants.

Even without human-infant morphology, network modeling has brought its share of surprises. For instance, the sand/rice-pile model can fail to produce straightforward fractal fluctuations (Jensen et al., 1989). Even more stunning has been the evidence that the sand/rice-pile model may not just be fractal but, in fact, multifractal: it may exhibit several power-law forms at once (Tebaldi et al., 1999; Cernak, 2006; Bonachela and Muñoz, 2009), making it more complex. This multifractal wrinkle in the self-organization narrative may be exactly what's needed to help cognitive science play by more ordinary scientific rules. Observation of multifractal fluctuations offers the possibility that fractal fluctuations might interweave and spread into one another (Halsey et al., 1986). Where we might once have envisioned anatomical parts each with their own mysterious capacities, there may be less rigidly defined regions engaging in ongoing exchange of fractal and multifractal fluctuations.

The sharing of multifractal fluctuations has empirical anchoring in behaviors extending beyond the brain. Network analyses such as vector autoregression (VAR; Sims, 1980) allow us to depict the flow of information across nodes in a full-body network, even from measurements of living, breathing organisms. For instance, infants' spontaneous leg kicking has been intuitively understood as an exploratory process, bringing “external” or “peripheral” information about gravity and leg kinematics “inward” to central nervous structures. However, VAR elevates this intuition to empirical tractability: the flow of multifractal fluctuations along infants' legs from ankle to knee to hip becomes a rigorously testable hypothesis (Stephen et al., 2012). Additionally, recent work in perceptual learning showed that use of visual feedback for a manual wielding task depends on time-varying fractal structure of head sway. Multifractality of head sway supports the pickup of visual information. VAR showed further that simply receiving visual feedback lets multifractality at the head spread down to the hand, thereby changing subsequent manual wielding (Kelty-Stephen and Dixon, 2014). The sharing of multifractal fluctuations may underwrite body-wide coordinations in ways that only network analyses have revealed.
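The logic of VAR is ordinary least squares applied lag-wise: each measured series is regressed on the previous values of every series, and the fitted cross-coefficients depict directed flow between nodes. The sketch below, on synthetic zero-mean data in which x drives y but not vice versa, is a minimal illustration of that logic, not the analysis pipeline of the cited studies.

```python
# Minimal VAR(1) sketch on synthetic data; coupling values are illustrative.
import random

def ols2(target, x1, x2):
    """Least-squares coefficients for target ~ a*x1 + b*x2 (zero-mean series)."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * v for u, v in zip(target, x1))
    t2 = sum(u * v for u, v in zip(target, x2))
    det = s11 * s22 - s12 * s12        # solve 2x2 normal equations
    return (t1 * s22 - t2 * s12) / det, (t2 * s11 - t1 * s12) / det

random.seed(1)
x, y = [0.0], [0.0]
for _ in range(5000):
    x.append(0.5 * x[-1] + random.gauss(0, 1))                 # x evolves alone
    y.append(0.8 * x[-2] + 0.2 * y[-1] + random.gauss(0, 1))   # x drives y

# VAR(1): regress each series on the lag-1 values of both series.
x_to_y, y_to_y = ols2(y[1:], x[:-1], y[:-1])   # coefficients predicting y
x_to_x, y_to_x = ols2(x[1:], x[:-1], y[:-1])   # coefficients predicting x
```

The fit recovers a strong x-to-y coefficient near the planted 0.8 while the y-to-x coefficient stays near zero: the coefficient matrix itself depicts the direction of flow, which is exactly the information the full-body analyses extract from joint-angle or posture series.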

Exciting as simulations may be, we see more promise in this latter attempt to draw from fractal statistics and matrix algebra to help us probe the full-body network. Distributing cognition across the body is still not repaying the loans of intelligence, but it may diminish the borrowed principal. Network modeling thus allows us to envision behavior—real, observed behavior—as the time-varying mixture of an extended field of multifractal fluctuations. Through this lens, behavior begins to require much less faith and to look much more like a generic physical process. Cognitive science need not ask to play by different rules or to start with different assumptions. On the contrary, network science might allow cognitive science to operate on the same playing field as other sciences, whether sciences of living systems or otherwise.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Abney, D. H., Paxton, A., Dale, R., and Kello, C. T. (2014). Complexity matching in dyadic conversation. J. Exp. Psychol. Gen. 143, 2304–2315. doi: 10.1037/xge0000021

Anderson, M. L. (2010). Neural reuse: a fundamental organizational principle of the brain. Behav. Brain Sci. 33, 245–313. doi: 10.1017/S0140525X10000853

Ashby, W. R. (1956). An Introduction to Cybernetics. London: Chapman and Hall.

Bak, P. (1996). How Nature Works. New York, NY: Springer.

Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organized criticality: an explanation of 1/f noise. Phys. Rev. Lett. 59, 381–384. doi: 10.1103/PhysRevLett.59.381

Beggs, J. M. (2008). The criticality hypothesis: how local cortical networks might optimize information processing. Philos. Trans. R. Soc. A 366, 329–343. doi: 10.1098/rsta.2007.2092

Blumberg, M. S., Marques, H. G., and Iida, F. (2013). Twitching in sensorimotor development from sleeping rats to robots. Curr. Biol. 23, R532–R537. doi: 10.1016/j.cub.2013.04.075

Bonachela, J. A., and Muñoz, M. A. (2009). Self-organization without conservation: true or just apparent self-similarity? J. Stat. Mech. P09009. doi: 10.1088/1742-5468/2009/09/P09009

Brooks, D. R. (2005). The nature of the organism: life has a life of its own. Ann. N.Y. Acad. Sci. 901, 257–265. doi: 10.1111/j.1749-6632.2000.tb06284.x

Cariani, P. (1993). To evolve an ear: epistemological implications of Gordon Pask's electrochemical devices. Syst. Res. 10, 19–33.

Cernak, J. (2006). Inhomogeneous sandpile model: crossover from multifractal scaling to finite-size scaling. Phys. Rev. E 73:066125. doi: 10.1103/PhysRevE.73.066125

Csányi, G., and Szendröi, B. (2004). Fractal—small-world dichotomy in real-world networks. Phys. Rev. E 70:016122. doi: 10.1103/PhysRevE.70.016122

Dennett, D. (1971). Intentional systems. J. Philos. 68, 87–106. doi: 10.2307/2025382

Denton, M. J., Dearden, P. K., and Sowerby, S. J. (2003). Physical law not natural selection as the major determinant of biological complexity in the subcellular realm: new support for the pre-Darwinian conception of evolution by natural law. Biosystems 71, 297–303. doi: 10.1016/S0303-2647(03)00100-X

Fodor, J. (2005). The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press.

Fodor, J., and Piattelli-Palmarini, M. (2010). What Darwin Got Wrong. Cambridge, MA: MIT Press.

Friston, K., Breakspear, M., and Deco, G. (2012). Perception and self-organized instability. Front. Comput. Neurosci. 6:44. doi: 10.3389/fncom.2012.00044

Gorochowski, T. E., di Bernardo, M., and Grierson, C. S. (2012). Evolving dynamical networks: a formalism for describing complex systems. Complexity 17, 18–25. doi: 10.1002/cplx.20386

Graziano, M. S. A., Taylor, C. S. R., Moore, T., and Cooke, D. F. (2002). The cortical control of movement revisited. Neuron 36, 1–20. doi: 10.1016/S0896-6273(02)01003-6

Griffiths, P. E., and Gray, R. D. (2005). “Darwinism and developmental systems,” in Cycles of Contingency: Developmental Systems and Evolution, eds S. Oyama, P. E. Griffiths, and R. D. Gray (Cambridge, MA: MIT Press), 195–218.

Halsey, T. C., Jensen, M. H., Kadanoff, L. P., Procaccia, I., and Shraiman, B. I. (1986). Fractal measures and their singularities: the characterization of strange sets. Phys. Rev. A 33, 1141–1151. doi: 10.1103/PhysRevA.33.1141

Hausdorff, J. M. (2007). Gait dynamics, fractals, and falls: finding meaning in the stride-to-stride fluctuations of human walking. Hum. Mov. Sci. 26, 555–589. doi: 10.1016/j.humov.2007.05.003

Hebb, D. O. (1949). The Organization of Behavior. New York, NY: Wiley.

Hickok, G. (2014). The Myth of Mirror Neurons. New York, NY: Norton.

Holden, J. G., and Rajaraman, S. (2012). The self-organization of a spoken word. Front. Psychol. 3:209. doi: 10.3389/fpsyg.2012.00209

Jensen, H. J. (1998). Self-Organized Criticality. Cambridge: Cambridge University Press.

Jensen, H. J., Christensen, K., and Fogedby, H. C. (1989). 1/f noise, distribution of lifetimes, and a pile of sand. Phys. Rev. B 40, 7425–7427. doi: 10.1103/PhysRevB.40.7425

Kello, C. T. (2013). Critical branching neural networks. Psychol. Rev. 120, 230–254. doi: 10.1037/a0030970

Kelty-Stephen, D. G., and Dixon, J. A. (2014). Interwoven fluctuations during intermodal perception: fractality in head sway supports the use of visual feedback in haptic perceptual judgments by manual wielding. J. Exp. Psychol. Hum. Percept. Perform. 40, 2289–2309. doi: 10.1037/a0038159

Kuznetsov, N. A., and Wallot, S. (2011). Effects of accuracy feedback on fractal characteristics of time estimation. Front. Integr. Neurosci. 5:62. doi: 10.3389/fnint.2011.00062

Lashley, K. (1950). In search of the engram. Soc. Exp. Biol. Symp. 4, 454–482.

Latty, T., and Beekman, M. (2011). Irrational decision-making in an amoeboid organism: transitivity and context-dependent preference. Proc. R. Soc. B 278, 307–312. doi: 10.1098/rspb.2010.1045

Lewontin, R. C. (1982). “Organism and environment,” in Learning, Development and Culture, ed H. C. Plotkin (New York, NY: Wiley), 151–170.

Lewontin, R. C. (2010). Not-so-natural selection. N.Y. Rev. Books 57. Available online at: http://www.nybooks.com/articles/archives/2010/may/27/not-so-natural-selection/

Ma, T., Holden, J. G., and Serota, R. A. (2013). Distribution of wealth in a network model of the economy. Phys. A 392, 2434–2441. doi: 10.1016/j.physa.2013.01.045

McCulloch, W. S., and Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133.

Minsky, M., and Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.

Mori, H., and Kuniyoshi, Y. (2013). Infant's primitive walking reflex from the perspective of learning in the uterus. Adv. Cogn. Neurodyn. 3, 243–250. doi: 10.1007/978-94-007-4792-0_33

Pask, G. (1958). “The growth process inside the cybernetic machine,” in Second International Conference on Cybernetics (Namur), 765–794.

Pearson, H. (2006). What is a gene? Nature 441, 398–401. doi: 10.1038/441398a

Pinker, S. (1997). How the Mind Works. New York, NY: Norton.

Rodny, J., and Kello, C. T. (2014). “Learning and variability in spiking neural networks,” in Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 1305–1310.

Rosen, R. (1985). Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. New York, NY: Springer.

Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408. doi: 10.1037/h0042519

Saigusa, T., Tero, A., Nakagaki, T., and Kuramoto, Y. (2008). Amoebae anticipate periodic events. Phys. Rev. Lett. 100:018101. doi: 10.1103/PhysRevLett.100.018101

Selfridge, O. G. (1959). “Pandemonium: a paradigm for learning,” in Proceedings on the Symposium on Mechanisation of Thought Processes, eds D. V. Blake and A. M. Uttley (London: National Physical Laboratory), 511–529.

Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Sims, C. A. (1980). Macroeconomics and reality. Econometrica 48, 1–48. doi: 10.2307/1912017

Smith, E. (2008). Thermodynamics of natural selection I: energy flow and the limits on organization. J. Theor. Biol. 252, 185–197. doi: 10.1016/j.jtbi.2008.02.010

Sporns, O. (2006). Small-world connectivity, motif composition, and complexity of fractal neuronal connections. Biosystems 85, 55–64. doi: 10.1016/j.biosystems.2006.02.008

Stephen, D. G., Boncoddo, R. A., Magnuson, J. S., and Dixon, J. A. (2009). The dynamics of insight: mathematical discovery as a phase transition. Mem. Cogn. 37, 1132–1149. doi: 10.3758/MC.37.8.1132

Stephen, D. G., and Hajnal, A. (2011). Transfer of calibration between hand and foot: functional equivalence and fractal fluctuations. Atten. Percept. Psychophys. 73, 1302–1328. doi: 10.3758/s13414-011-0142-6

Stephen, D. G., Hsu, W.-H., Young, D., Saltzman, E., Holt, K. G., Newman, D. J., et al. (2012). Multifractal fluctuations in joint angles during infant spontaneous kicking reveal multiplicativity-driven coordination. Chaos Solitons Fractals 45, 1201–1219. doi: 10.1016/j.chaos.2012.06.005

Tebaldi, C., De Menech, M., and Stella, A. L. (1999). Multifractal scaling in the Bak-Tang-Wiesenfeld sandpile and edge events. Phys. Rev. Lett. 83, 3952–3955. doi: 10.1103/PhysRevLett.83.3952

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc. 42, 230–265.

Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem: a correction. Proc. London Math. Soc. 43, 544–546.

Turing, A. M. (1950). Computing machinery and intelligence. Mind 59, 433–460. doi: 10.1093/mind/LIX.236.433

Turvey, M. T. (1992). Affordances and prospective control: an outline of the ontology. Ecol. Psychol. 4, 173–187. doi: 10.1207/s15326969eco0403_3

Van Orden, G., Holden, J. G., and Turvey, M. T. (2003). Self-organization of cognitive performance. J. Exp. Psychol. Gen. 132, 331–350. doi: 10.1037/0096-3445.132.3.331

Wagenmakers, E.-J., Farrell, S., and Ratcliff, R. (2005). Human cognition and a pile of sand: a discussion on serial correlations and self-organized criticality. J. Exp. Psychol. Gen. 134, 108–116. doi: 10.1037/0096-3445.134.1.108

Widrow, B., and Hoff, M. E. (1960). Adaptive switching circuits. IRE WESCON Conv. Rec. 4, 96–104.

Zuo, X.-N., Ehmke, R., Mennes, M., Imperati, D., Castellanos, X., Sporns, O., et al. (2012). Network centrality in the human functional connectome. Cereb. Cortex 22, 1862–1875. doi: 10.1093/cercor/bhr269

Keywords: cognitive science, network analysis, vector autoregression, fractal, multifractal, perception

Citation: Soberano EK and Kelty-Stephen DG (2015) Demystifying cognitive science: explaining cognition through network-based modeling. Front. Physiol. 6:88. doi: 10.3389/fphys.2015.00088

Received: 29 December 2014; Accepted: 04 March 2015;
Published: 19 March 2015.

Edited by:

Bruce J. West, United States Army Research Laboratory, USA

Reviewed by:

Christopher Kello, University of California, Merced, USA
John G. Holden, University of Cincinnati, USA

Copyright © 2015 Soberano and Kelty-Stephen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Damian G. Kelty-Stephen, foovian@gmail.com