Complexity

Reading time: 5 minutes

📝 Original Info

  • Title: Complexity
  • ArXiv ID: 1109.0214
  • Date: 2011-09-02
  • Authors: Carlos Gershenson

📝 Abstract

The term complexity derives etymologically from the Latin plexus, which means interwoven. Intuitively, this implies that something complex is composed of elements that are difficult to separate. This difficulty arises from the relevant interactions that take place between components. This lack of separability is at odds with the classical scientific method - which has been used since the times of Galileo, Newton, Descartes, and Laplace - and has also influenced philosophy and engineering. In recent decades, the scientific study of complexity and complex systems has called for a paradigm shift in science and philosophy, proposing novel methods that take relevant interactions into account.

📄 Full Content

Classical science and engineering have successfully used a reductionist methodology, i.e. separating and simplifying phenomena in order to predict their future. This approach has been applied in a variety of domains. Nevertheless, in recent decades the limits of reductionism have become evident in phenomena where interactions are relevant. Since reductionism separates, it has to ignore interactions; and if interactions are relevant, reductionism is not suitable for studying complex phenomena.

There are plenty of phenomena that are better described from a non-reductionist or ‘complex’ perspective. For example, insect swarms, flocks of birds, schools of fish, herds of animals, and human crowds exhibit behavior at the group level that cannot be determined or predicted from individual behaviors or rules. Each animal makes local decisions depending on the behavior of its neighbors, thus interacting with them. Without interactions, i.e. under reductionism, the collective behavior cannot be described; through interactions, the group behavior can be well understood. This also applies to cells, brains, markets, cities, ecosystems, biospheres, etc.
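A minimal sketch of this idea, assuming a Vicsek-style alignment model (the model choice, the ring geometry, and every parameter value below are illustrative assumptions, not taken from the paper): each agent repeatedly adopts the average heading of its neighbors plus some noise, and a global alignment emerges that is written into no individual rule.

```python
import math
import random

# Toy Vicsek-style flocking on a ring: agents align with nearby agents.
# All parameter values are illustrative assumptions.
N, RADIUS, NOISE, STEPS, SPEED = 50, 0.2, 0.1, 100, 0.01

pos = [random.random() for _ in range(N)]                    # positions on [0, 1)
ang = [random.uniform(-math.pi, math.pi) for _ in range(N)]  # headings

def neighbors(i):
    """Agents within RADIUS of agent i along the ring (including i itself)."""
    return [j for j in range(N)
            if min(abs(pos[i] - pos[j]), 1 - abs(pos[i] - pos[j])) < RADIUS]

for _ in range(STEPS):
    new_ang = []
    for i in range(N):
        nb = neighbors(i)
        # Average the neighbors' headings via their vector components,
        # then perturb with bounded noise.
        mx = sum(math.cos(ang[j]) for j in nb)
        my = sum(math.sin(ang[j]) for j in nb)
        new_ang.append(math.atan2(my, mx) + random.uniform(-NOISE, NOISE))
    ang = new_ang
    pos = [(pos[i] + SPEED * math.cos(ang[i])) % 1.0 for i in range(N)]

# Order parameter: mean heading vector length; 0 = disordered, 1 = fully aligned.
order = math.hypot(sum(map(math.cos, ang)), sum(map(math.sin, ang))) / N
print(f"alignment after {STEPS} steps: {order:.2f}")
```

Shrinking RADIUS until each agent sees only itself turns the headings into independent random walks that never order, which is exactly the reductionist limit described above.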

In complex systems, having the ‘laws’ of a system, plus initial and boundary conditions, is not enough to make a priori predictions. Since interactions generate novel information that is not present in the initial or boundary conditions, predictability is limited. This is also known as ‘computational irreducibility’: there is no shortcut to determine the future state of a system other than actually computing it.
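As a toy illustration of this point (the choice of an elementary cellular automaton, Rule 110, and all parameters below are assumptions made here, not the paper's): the rule and the initial condition are fully known, yet, as far as is known, the configuration at step t can only be obtained by computing all t intermediate steps.

```python
# Elementary cellular automaton (Rule 110) on a ring of cells.
RULE = 110
WIDTH, STEPS = 64, 32

def step(cells):
    """Update every cell from its (left, self, right) neighborhood:
    the 3-bit neighborhood indexes one bit of the rule number."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single 'on' cell as the initial condition
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```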

Since classical science and philosophy assume that the world is predictable in principle, yet relevant interactions limit predictability, many people have argued that a paradigm shift is required, and several novel proposals have been put forward in recent years.

There is a broad variety of definitions of complexity, depending on the context in which they are used. For example, the complexity of a string of bits, i.e. a sequence of zeroes and ones, can be described in terms of how easy it is to produce or compress that string. In this view, a simple string (e.g. ‘010101010101’) would be easily produced or compressed, as opposed to a more ‘random’ one (e.g. ‘011010010000’). However, some people make a distinction between complexity and randomness, placing complexity as a balance between ordered and chaotic dynamics.
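A quick sketch of this compressibility view (Kolmogorov complexity itself is uncomputable, so a general-purpose compressor such as zlib only gives a rough upper bound; the strings below are examples chosen here, not the paper's):

```python
import random
import zlib

periodic = "01" * 500  # '0101...' is highly regular
random.seed(1)
noisy = "".join(random.choice("01") for _ in range(1000))  # 'random' bits

for name, s in (("periodic", periodic), ("random", noisy)):
    compressed = zlib.compress(s.encode())
    print(f"{name}: {len(s)} chars -> {len(compressed)} bytes compressed")
```

The periodic string shrinks to a handful of bytes, while the random one stays close to its information content (about one bit per character, since it uses only two symbols).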

A well-accepted measure of complexity is the amount of information required to describe a phenomenon at a given scale. In this view, more complex phenomena require more information to be described at a particular scale than simpler ones. It is important to note that the scale is relevant in determining the amount of information: for example, a gas requires much more information to be described at the atomic scale (with all the details of the positions and momenta of its molecules) than at the human scale (where the molecular details are averaged into temperature, pressure, volume, etc.).
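The scale dependence can be sketched with Shannon entropy (a stand-in measure assumed here; the paper's point is more general): coarse-graining a detailed ‘microscopic’ sequence discards detail, so fewer bits per observation, and fewer observations, are needed, much as the gas needs less information at the human scale than at the atomic one.

```python
import math
import random
from collections import Counter

random.seed(0)
micro = [random.randint(0, 9) for _ in range(10_000)]  # 'molecular' detail

def entropy(seq):
    """Shannon entropy of a sequence, in bits per symbol."""
    total = len(seq)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(seq).values())

for block in (1, 10, 100):
    # Coarse-grain: average non-overlapping blocks of `block` values.
    coarse = [round(sum(micro[i:i + block]) / block)
              for i in range(0, len(micro), block)]
    print(f"scale {block:>3}: {entropy(coarse):.2f} bits/symbol "
          f"over {len(coarse)} observations")
```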

Complexity has also been used to describe phenomena where properties at a higher scale cannot be reduced to properties at a lower scale, i.e. when the whole is more than the sum of its parts (see Emergence). For example, a piece of gold has color, conductivity, malleability, and other ‘emergent’ properties that cannot be reduced to the properties of gold atoms. In other words, there is a potential for novel behaviors and properties: a system of coordinated, interacting elements can perform more complex functions than an independent aggregation of the same elements. Emergent properties cannot be reduced to the components of a system because they depend on interactions. Thus, an approach to studying complex systems requires observing phenomena at multiple scales, without ignoring interactions. Formalisms such as multi-agent systems and network theory have proven useful for this purpose.
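One small example of the network-theoretic approach (the use of a standard Erdős–Rényi random graph here is an assumption made for illustration, not taken from the paper): whether a ‘giant’ connected component exists is a property of the web of interactions as a whole, not of any individual node, and it appears rather abruptly once the average degree passes 1.

```python
import random
from collections import Counter

def giant_fraction(n, avg_degree, rng):
    """Fraction of nodes in the largest component of a random graph."""
    parent = list(range(n))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = int(avg_degree * n / 2)  # edge count for the target average degree
    for _ in range(edges):
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)    # merge the two components

    sizes = Counter(find(v) for v in range(n))
    return max(sizes.values()) / n

rng = random.Random(42)
for k in (0.5, 1.0, 1.5, 2.0):
    print(f"avg degree {k}: largest component = "
          f"{giant_fraction(10_000, k, rng):.2%} of nodes")
```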

The scientific study of complexity, under that label, started in the 1980s. Some people argue that it is a science in its infancy, since only a few decades have passed since its inception and it has yet to reveal its full potential. Others argue that complexity will never be a science in itself, because of its pervasiveness: since complexity can be described in every phenomenon, e.g. as the amount of information needed to describe it, a science of complexity would be too broad to be useful. A third camp holds that complexity is already a science in its own right. This debate certainly depends on the notion of what a science is. Moreover, one can argue that all three viewpoints are correct to a certain degree. A scientific study of complex phenomena exists; this is not debated. People also agree that this study is offering new insights in all disciplines, has great potential, and is already yielding fruit. The pervasiveness of complexity is also agreed upon. A scientific approach where interactions are considered, i.e. a non-reductionist one, has been propagating across disciplines.

Reference

This content is AI-processed, based on open-access arXiv data.
