American Civilization as a Dynamical System:

The Mathematical and Scientific Foundation ©

by

Gerald L. Atkinson

Copyright 1 February 2004

Introduction

We humans live in a simple world. We experience the world around us primarily through our senses. Primitive man lived in a world ‘governed’ by cause and effect – as his life’s experiences unfolded and became recognized through these senses. All of us start out in the world in this same state. As babies, toddlers, and small children, we soon learn that, if we hit our head with a block, it hurts. We learn that if we touch an open flame, it burns. We learn that if we go outside in the winter without a jacket, we get cold. An extension of this concept guides most of us through the early stages of learning about the world around us by ‘searching’ for the ‘cause’ of a given ‘effect.’ Philosophers from Plato to Kant to present-day deconstructionists have argued this matter in terms of the existence of a reality outside the realm of our human senses and the constructs of the human mind.

From the days of Aristotle to the present, science has depended on the collection of ‘evidence’ made known to us through our senses to enable the construction of ‘theories’ and methods of testing them in the real world to ascertain scientific ‘truth’ – an objective reality of nature’s existence – independent of the limitations of our senses. Medicine is a field in which this concept is carried to its fullest reward. If you have certain symptoms, you are judged to have a certain ailment. Except for the most intractable diseases, this method works very well – even though many ordinary diseases manifest some of the same ‘symptoms’ as others.

This concept has been extended through the invention and use of ‘instruments’ that can ‘measure’ certain things that produce certain ‘effects’ in the universe that do not register on our human senses. But the use of this extension of our senses still follows the same paradigm – look for the ‘cause’ of the ‘effects’ that are observed. And man has learned to take the ‘simplest’ explanation of these measurements to ascertain the difference between and the connection among ‘causes’ and ‘effects.’ This is known as Occam’s Razor – when choosing among alternative ‘explanations,’ prefer the simplest one. In a word, simplicity is ‘beauty’ – and beauty is truth.

Scientists use this method by taking ‘measurements’ that reveal the fact that the world is not perfectly ‘measurable.’ There are ‘gray areas’ in our measurements. That is, there are uncertainties in the values of the measurements taken on certain ‘effects.’ The existence of these ‘uncertainties’ has led to the development of a measurement philosophy which uses statistics to build a model of the behavior of a system that is so ‘measured.’ Out of the Law of Large Numbers, early scientists discovered that many human attributes, and nearly all measurable quantities observed in our everyday lives, obey a Gaussian distribution law – to some, the dreaded ‘Bell Curve’ – which is also called the Normal Distribution. Our height, weight, IQ, and many other entities in our universe of daily observations that are measurable obey this ‘law.’

Modern science has come to understand that the universe outside the realm of our ability to sense or to ‘measure’ is not as simple as mankind has heretofore assumed. There are systems which do not exhibit behavior that is governed by a strict and discernible ‘cause’ and ‘effect’ relationship. Those systems that do behave in a direct cause and effect way are the ‘linear’ systems that we observe and measure as discussed above. Some dynamical systems that do not behave this way are classified as complex, non-linear iterative feedback systems, which have a property of self-organization and often of self-similarity. It is these systems, the behavior of which we wish to understand, that are not ‘explainable’ by the simple notions of a direct ‘cause’ and ‘effect’ and the application of Occam’s Razor. Rather, the ‘explanations’ of the behavior of such systems require an approach with roots in the study of dynamical systems – one with Chaos Theory, Network Theory, and Fuzzy Logic at its core. Our American civilization is one such system which can be ‘explained’ using this approach.

This journal presents essays which address the condition of American civilization based on the realization that it is a dynamical system — a special kind of dynamical system. It is a complex, non-linear iterative feedback system. As described at the links: ‘Generations’ and ‘The Fourth Turning,’ the four generations which comprise a saeculum — a nominal longest human life span (88-100 years) — each have different ‘peer personalities’ which are repeated in the same order in each successive saeculum. In the language of science, these four generations are cyclical with period four. Within each saeculum, two major Social Moments occur, one a Secular Crisis (worldwide economic depression or war of major scale, e.g. a world war) and the other a Spiritual Awakening. Social Moments are also cyclical with period two and occur within the saeculum at about forty-year intervals.

There is a historical connection between the generations and the social moments, observed by those who claim a regularity between these two entities. That is, each generation is affected by the Secular Crisis in such a way that a ‘peer personality’ develops for each of the four generations in the saeculum, and those ‘peer personalities,’ in turn, determine whether or not America succeeds in ‘weathering the storm’ and whether or not the Spiritual Awakening is carried out in such a manner that the two cycles produce a system in stable equilibrium.

Such a system has a complex, non-linear iterative behavior because the ‘peer personalities’ affect the Social Moments and the latter determine the ‘peer personalities’ of the four generations in the next saeculum. Such an iterative feedback system can be described in the same terms as the behavior of physical systems in science. The behavior of such physical systems is well-known and is described by mathematical formulations (models) which can ‘predict’ the behavior of such systems.

If the behavior of such a social system produces a periodic stable equilibrium, there is ‘predictive’ power of a general nature for future ‘states’ of the system. If the system is self-organized and the underlying ‘glue’ (the parameter of complexity — a mathematical entity associated here with the simple mathematical dynamical system, the quadratic iterator) is not disturbed or changed beyond some (unknown and probably unknowable) critical point, prediction of the future is possible. When such systems reach a critical point, a transition (phase change) occurs which produces a chaotic state. Within such a state, prediction is impossible and chaos prevails. The system may reach states of great richness or precipitously disappear to nothingness — all in the blink of an eye in millennial time.

It was nearly ten years ago, when I developed and taught a master’s-degree-level course in Chaos Theory — as part of the Artificial Intelligence curriculum I designed and taught at Florida Tech’s Naval Air Test Center branch at Patuxent River, MD — that I began to understand American civilization as a complex, non-linear, iterative feedback system: a dynamical system. In fact, during one quarter I used the ‘Generations’ book by Strauss and Howe as a ‘text’ in this course after recognizing that the authors had described an American history that is iterative, non-linear, and has been in a state of stable (periodic) equilibrium over the past 400 years. Such systems can, however, be expected to become chaotic under certain conditions. It is just this state that essays in this journal explore — as applied to American civilization.

Chaos Theory and Dynamical Systems

The study of chaos theory has its roots in the late 19th century, in initial ideas, concepts, and results from the monumental French mathematician Henri Poincaré. More recently, the theory has traced the route from order into chaos through Mitchell Feigenbaum’s demonstration of the ‘universality’ of chaos in many physical systems — e.g., the hydrodynamic properties of water, helium, and mercury; the electronic properties of diodes, transistors, and Josephson junctions; the properties of lasers and laser feedback; and the acoustic properties of helium — all connected by the same behavior described below.

The Telegrapher’s Equation: The Quadratic Iterator

The iterative equation,

Xn+1 = a Xn (1 – Xn)

where:

1) a is a constant parameter — one that can change value, with one value for each set of ‘solution’ points in the plane of a plotted against XConverge, the value (point) or points to which the iteration converges; and

2) Xn+1 is the (n+1)st iterate, that is, the ‘next’ value of the iterative equation. It represents the state of a simple system that exhibits one of only three possible modes of behavior: a stable fixed point, a stable periodic equilibrium, or a completely random, unpredictable, and unstable chaotic state.

This equation, the Quadratic Iterator, is a simplified form of the Telegrapher’s Equation [1]. It is a normalized equation for which Xn takes on values between 0 and 1 (in discrete time steps characterizing a ‘generation’ of some entity or population) and the parameter, a, takes on values between 0 and 4 (on a continuum of points on the scale of the rational numbers). One starts the iteration with some initial value, X0, and runs the iteration (for convenience on a computer — and, of necessity, when the iterates carry a large number of digits after the decimal place) until the iteration settles down to the convergent point or points, XConverge.
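To make the iteration concrete, the short Python sketch below (an illustration added here, not code from the original text; the starting value and the three sample values of a are arbitrary choices) runs the Quadratic Iterator and prints the last few iterates for each case, showing the three modes of behavior just described.

# A minimal sketch of the Quadratic Iterator X(n+1) = a * X(n) * (1 - X(n)).
# The parameter values below (a = 2.8, 3.2, 3.9) are illustrative choices only.

def iterate(a, x0=0.2, n_transient=1000, n_keep=6):
    """Run the iteration, discard the transient, and return the last n_keep iterates."""
    x = x0
    tail = []
    for i in range(n_transient + n_keep):
        x = a * x * (1.0 - x)
        if i >= n_transient:
            tail.append(round(x, 6))
    return tail

for a in (2.8, 3.2, 3.9):
    print(f"a = {a}: last iterates -> {iterate(a)}")

# Typical behavior:
#   a = 2.8 settles onto a single fixed point (XConverge is about 0.642857),
#   a = 3.2 alternates between two values (a stable period-2 equilibrium),
#   a = 3.9 never settles down (the chaotic regime).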


A plot of the ‘bifurcation curve’ resulting from a computer run of all such points is presented in the accompanying figure. The Quadratic Iterator has been studied exhaustively over the past three decades. The graph of the convergent points, XConverge (on the vertical axis), plotted against a, called here the ‘parameter of complexity’ (on the horizontal axis), is a so-called bifurcation curve that represents the behavior of this equation in three regions. The normalized values of XConverge range over [0, 1] as a takes values in the interval [1, 4]; XConverge is zero (0) for all values of a less than 1. (For a detailed mathematical description of this bifurcation curve, refer to ‘Chaos and Fractals: New Frontiers of Science,’ by Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe [1].)

In summary, the bifurcation curve comprises three regions. The first is populated by single stable points for values of the ‘complexity parameter’ from a = 1 up to the boundary at a = 3, where the curve ‘splits’ into two periodic solutions. In the periodic region between a = 3 and a = 3.5699456… there is a period-doubling phenomenon [2, 4, 8, 16, … solution points]. The ‘doubling’ region ends at the threshold a = 3.5699456… [named the Feigenbaum point, after its discoverer].

At the Feigenbaum point, the final-state diagram (the bifurcation curve) for the Quadratic Iterator splits into two very distinct parts: the period-doubling tree on the left, and the area governed by chaos on the right. In the latter region, the solution points are completely unpredictable — with some surprising exceptions.

In the ‘chaotic’ region one would imagine that utter chaos reigns. That is not the case. Hidden in this region (in the minute details of smaller and smaller intervals of the ‘complexity parameter,’ a) is a variety of beautiful structures — ‘windows’ of ‘self-similarity’ [within vertical white spaces] that repeat the pattern displayed from the beginning of the ‘doubling’ region to its end, but in reverse order, from right to left within each ‘window.’ This amazing result reveals that structure exists even within the chaotic region. That structure is a miniature microcosm of the overall behavior of the system in the periodic region. Thus, the system is not only self-organized, it is self-similar.

At the point where a = 4, utter chaos reigns. That is, chaos governs XConverge over the entire interval [0, 1] on the vertical axis. At that point, all structure disappears and the solution points are completely unpredictable.
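The whole final-state diagram can be reproduced numerically. The sketch below (added here as an illustration; the grid resolution and transient lengths are arbitrary choices) sweeps the complexity parameter a across [1, 4], discards transient iterates, and records the values each orbit settles onto. A scatter plot of the result traces out the single-branch, period-doubling, and chaotic regions described above.

# Sketch: generate the points of the bifurcation (final-state) diagram.
# Resolution and transient lengths are arbitrary illustrative choices.

def bifurcation_points(a_min=1.0, a_max=4.0, n_a=1200,
                       n_transient=500, n_keep=100, x0=0.2):
    """Return (a, XConverge) pairs that trace out the final-state diagram."""
    points = []
    for i in range(n_a):
        a = a_min + (a_max - a_min) * i / (n_a - 1)
        x = x0
        for _ in range(n_transient):      # let the orbit settle
            x = a * x * (1.0 - x)
        for _ in range(n_keep):           # record the final states
            x = a * x * (1.0 - x)
            points.append((a, x))
    return points

pts = bifurcation_points()
# Plotting pts (a on the horizontal axis, XConverge on the vertical axis) shows
# one branch for a < 3, period doubling between 3 and roughly 3.5699, and the
# chaotic region, with its self-similar 'windows,' beyond the Feigenbaum point.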

Networks as Dynamical Systems

During the last half of the 1990s, multi-disciplinary research teams composed of physicists, mathematicians, computer scientists, and social scientists made startling advances in the modeling of dynamical systems by ‘growing’ networks in such a way that they model physical and social systems. The Santa Fe Institute, 30 miles south of the Los Alamos National Laboratory, has been the center of such research. This research is an extension of the institute’s primacy in studying self-organizing dynamical systems, which often exhibit chaotic behavior.

This research leads one to believe that Network Theory and experimental evidence in the physical world, which validates models based on this branch of science, will lead us to apply it to the social fabric of American civilization. This application will analyze the various ‘communities’ of people in the United States, each of which is composed of individuals who are ‘linked’ in a direct way to each other via a common ‘best interest.’ The ‘best interest’ may be a belief in a common idea or a vested interest in an outcome which rewards each individual personally (political reward, economic reward, career and professional growth, a paternal or maternal interest in the ‘best interest’ of a son or daughter, and many other ‘interests’ which are a part of our human existence in a complex society).

Usually such ‘communities’ are sufficiently disparate in character and makeup that they seldom ACT in concert with one another in support of a common goal. In fact, many such ‘communities’ oppose each other or are, at best, only tolerant of the other ‘communities’ which make up American society. But when the circumstances are just right, they can come together in an onrushing cascade to support a cause, a concept, a goal, or an idea that sweeps the nation by storm. This, of course, is the idea of a ‘Tipping Point’ [2] in a Network Theory explanation of the result of social interactions in American civilization.

Of course, the outcome of such a cascade may well be the same as that which conspiracy theorists invent to ‘explain’ the phenomenon. Those who grasp such theories and continue to hold on to them dearly, even in the face of evidence that is contradictory to a particular conspiracy, are captives of primitive man’s early understandings of the direct linkages between ‘cause’ and ‘effect’ in our daily lives. They do not wish to consider alternative explanations of the phenomenon at hand. What they do not understand is that there need not be a ‘spider’ at the center of the web. There need not be an all-knowing, all-powerful, dark central individual or small group of sinister conspirators who secretly bring about the cascade by nearly super-human powers of intellect, knowledge, organization and power.

In contrast, a ‘self-organizing’ network is one which can result in chaos when the system reaches a certain state of complexity — just as if it were planned and carried out by a super-human person or small group. Many examples of such ‘explanations’ abound in contemporary American affairs. Such ‘explanations’ are based on the ‘science of surprise,’ Chaos Theory. Scientific research during the 1990s has expanded this field to Network Theory [3] [4] as another tool to understand the nature of failures, disasters, and catastrophes — the ‘surprises’ that abound in our daily lives as well as in our nation’s dealing with other countries in the world.

Current research in the field of Network Theory has provided support for these explanations in terms of dynamically growing networks. These networks display the same self-organizing mechanisms that are described in Chaos Theory texts. The popular book, ‘The Tipping Point,’ contains a less esoteric ‘explanation’ of this ‘science of surprise’ in terms that the ‘man in the street’ can understand — epidemics, fads, and other cascading events observed in our normal lives.

Albert-Laszlo Barabasi, a physicist at the University of Notre Dame, is a leading researcher in the field of Network Theory. He and his colleagues have found that [5] “Nature normally hates power laws. In ordinary systems, all quantities follow bell curves, and correlations decay rapidly, obeying exponential laws.”

“But all of that changes if the system is forced to undergo a phase transition [akin to water molecules freezing to snowflakes in the clouds or iron atoms all aligning themselves in the same direction in a sufficiently strong magnetic field or a material becoming a superconductor under certain conditions]. Then power laws emerge — nature’s unmistakable sign that chaos is departing in favor of order. The theory of phase transitions told us loud and clear that the road from disorder to order is maintained by the powerful forces of self-organization and is paved by power laws. It told us that power laws are not just another way of characterizing a system’s behavior. They are the patent signatures of self-organization in complex systems.”

The tutorial below lays a solid foundation for these claims. And, of course, American civilization is a fabric whose warp and weft are made up of individual threads, each of which exhibits complex behavior of its own. Indeed, American civilization is a self-organizing system. It is composed of choices made by independent people, free to choose at every level of their existence — within the context of a regulatory system (again freely chosen) governing the morality of the individual, Christianity.


Gaussian or Normal Distributions

The figure at the right represents a Gaussian Distribution. The vertical axis is the probability, p(k), that an attribute of some system (for example, the height of a large number of men, or women, or horses, or cows, etc.) has a certain value, k. The average value of the population of things being measured is at the peak of the curve, taking the value <k>. The values of p(k) range from 0 to some value less than 1.0, with a possible maximum of 1.0 for a ‘spiked’ curve.

Observe that at each ‘tail’ of the curve, the distribution goes to zero. That is due to the fact that there are no super-extreme values for these attributes in nature. There are no men who are 20 ft. tall. There are no men who are 12 inches tall. This fact is important to the discussion of power laws which follows.
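To make the vanishing tails concrete, here is a small numerical sketch (added for illustration; the mean of 69 inches and standard deviation of 3 inches for adult male height are assumed round figures, not values from the text) showing how quickly a Gaussian tail dies off.

# Sketch: how fast the Gaussian tail vanishes.
# Mean 69 in. and standard deviation 3 in. are assumed round figures.
import math

def normal_tail(x, mu, sigma):
    """P(X > x) for a normal distribution, via the complementary error function."""
    z = (x - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

mu, sigma = 69.0, 3.0          # inches
for height_ft in (7, 9, 12):
    p = normal_tail(height_ft * 12.0, mu, sigma)
    print(f"P(taller than {height_ft} ft) = {p:.3e}")

# The probabilities collapse toward zero: a 7-footer is already roughly a
# one-in-a-few-million event under these figures, and a 12-foot man is, for
# all practical purposes, impossible. The 'tails go to zero.'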


So many ‘effects’ that we measure in our everyday world – and in the world that is ‘invisible’ to our naked senses – are distributed according to a Normal Curve (if you will, a Gaussian Law) that almost all of us, including our leading scientists, simply assume this distribution for nearly every measurable entity. It was only recently discovered (in the late 1990s) that certain systems do not obey such a ‘law.’ Instead, they obey a power law. And this leads to a very different understanding of the universe in which such ‘laws’ prevail. They are especially applicable to dynamical, self-organizing systems such as networks (electric power grids, the Internet, the World Wide Web, and many other systems which have dynamic growth and decay – including civilizations).


What is a Power Law?

According to Duncan Watts [6], a mathematician who now specializes in applying Network Theory to social systems at Columbia University (and a former associate at the Santa Fe Institute which specializes in research in Dynamical Systems and Chaos Theory), “Power laws are another very widespread kind of distribution in natural systems, although their origin is a good deal murkier than the origins of normal-type distributions.”


A power law distribution is shown in the figure to the left. Observe that this distribution decreases rapidly with k, but it does so much more slowly at the extremities than does the Normal Distribution above. Here, p(k) and k have the same definitions as for the Normal Distribution. The values on the vertical axis represent the probability that some attribute of a system has a value, k.

According to Watts [7], “Power laws have two features that make them strikingly different from normal distributions.” First, unlike a normal distribution, a power law doesn’t have a peak at its average value. Rather, as in the figure to the left, it starts at a maximum value and then decreases relentlessly all the way to infinity. Second, the rate at which the power law decays [at the extremities] is much slower than the decay rate for a normal distribution, implying a much greater likelihood of extreme events. “Compare, for example, the distribution of sizes of people in a large population to the distribution of sizes of cities. The average height for an American adult male is roughly five feet nine inches, and although there are plenty of men who are shorter or taller than this, no one is even close to being twice as tall (almost twelve feet!) or half as tall (less than three feet). By contrast, the population of New York City [which follows a power law], at just over eight million people, is almost 300 times the size of a town like Ithaca. Extreme differences like this would be inconceivable in a normal distribution but are entirely routine for power laws.”

A key characteristic of a power-law distribution is a quantity called the exponent, which in essence describes how the distribution changes as a function of the underlying variable. For example, in its simplest form, the probability density function for a power law could be described by the equation:

p(k) = k^(–alpha)

where: the exponent, alpha, determines the slope of the line on a log-log plot of p(k) vs. k (the line falls with slope –alpha).

That is, if we take the logarithm of both sides of this equation, we get

log p(k) = –alpha log k

A log-log plot of the data then appears as the straight line plotted below.


Observe that for a power-law distribution on a log-log plot, the exponent, alpha, is the magnitude of the slope of the line (the line drops by an amount alpha for each unit on the horizontal axis).

Consequently, once we have enough data, all we need to do is plot it on a log-log scale and measure the slope of the resulting line. For example [8], if the number of cities of a given size decreases in inverse proportion to the size, then we say the distribution has an exponent of 1.0. In that case, we would expect to see cities the size of Ithaca, NY roughly three times as frequently as cities like Albany, NY that are three times as large and ten times more often than cities like Buffalo that are ten times as large.


But if instead the distribution decreases inversely with the square of the size, then we would say it has an exponent of 2.0 and would expect that towns like Ithaca would arise nine times (3 times 3) as often as towns like Albany and one-hundred times (10 times 10) as often as cities like Buffalo.
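The arithmetic behind these frequency comparisons, and the slope-measuring procedure just described, can be sketched in a few lines of Python (an illustration added here; the size ratios and the synthetic data are made-up round numbers, not figures from Watts).

# Sketch: frequencies under a power law p(k) ~ k^(-alpha), and recovering
# alpha as the slope of a log-log plot. All numbers are illustrative.
import math

def relative_frequency(size_ratio, alpha):
    """Relative frequency of a city 'size_ratio' times larger, under exponent alpha."""
    return size_ratio ** (-alpha)

for alpha in (1.0, 2.0):
    r3 = 1.0 / relative_frequency(3.0, alpha)    # 3x larger (Ithaca vs. Albany scale)
    r10 = 1.0 / relative_frequency(10.0, alpha)  # 10x larger (Ithaca vs. Buffalo scale)
    print(f"alpha = {alpha}: 3x larger is {r3:.0f}x rarer, 10x larger is {r10:.0f}x rarer")

# Measuring the exponent: on a log-log scale a power law is a straight line,
# so the magnitude of the slope of a least-squares fit estimates alpha.
sizes = [1, 2, 3, 5, 10, 20, 50]
counts = [s ** -2.0 for s in sizes]              # synthetic data with alpha = 2
log_k = [math.log10(s) for s in sizes]
log_p = [math.log10(c) for c in counts]
n = len(sizes)
mean_x, mean_y = sum(log_k) / n, sum(log_p) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_k, log_p))
den = sum((x - mean_x) ** 2 for x in log_k)
slope = num / den
print(f"estimated alpha from the log-log slope = {-slope:.2f}")   # 2.00 for this clean data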

Such power-law distributions have been known for some time to hold for the distribution of wealth. In the nineteenth century, a Parisian engineer, Vilfredo Pareto, was the first person to note this phenomenon, subsequently called Pareto’s law, and demonstrated that it held true in every European country for which the relevant statistics existed. Pareto showed [9] that regardless of which country he looked at, the wealth distribution was a power law with a slope somewhere between two and three. The law’s main consequence is that very many people possess relatively little wealth, while a very small minority are extremely wealthy.
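A quick simulation illustrates the law’s main consequence (a sketch added here for illustration; the exponent of 2.0 is simply one value inside the two-to-three range cited, and the sample is synthetic, not Pareto’s data).

# Sketch: sample wealths from a Pareto (power-law) distribution and measure
# how much of the total the richest 1 percent hold. Exponent 2.0 is an
# illustrative value inside the two-to-three range cited above.
import random

random.seed(1)
alpha = 2.0
wealth = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)
top_1_percent = wealth[: len(wealth) // 100]
share = sum(top_1_percent) / sum(wealth)
print(f"richest 1% hold about {share:.0%} of total wealth")
# With alpha = 2.0 the top 1 percent typically hold on the order of 10-20 percent
# of the total in a sample like this: many with little, a few with a great deal.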

What Makes a Power Law Distribution ‘Scale-Free?’

By contrast to the properties of a power-law distribution, if we plot a normal distribution on the same log-log scale, we see, as in the figure below, that at some point it starts to curve down rapidly, displaying what is called a cutoff. It occurs where the curve disappears into the horizontal axis.


In general, the cutoff sets an upper bound on whatever quantity the distribution represents [10]. When it is applied specifically to the degree distribution [the probability that a new node in a dynamically growing network will be connected to the nodes which are themselves well-connected] of a network, the significance of the cutoff is that it limits how well connected any member of the population can be. If an average person can be connected to only a small fraction of the entire population, then the same will be true even of the best-connected person.

Watts tells us that “Another way to think about the cutoff is that it defines an intrinsic scale for the distribution. And because a power law stretches on and on without ever encountering a cutoff, we say that it is scale-free. Scale-free networks, therefore, have the property … that most nodes will be relatively poorly connected, while a select minority of hubs [defined herein] will be very highly connected.”
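The difference the cutoff makes can be seen numerically. In the sketch below (added for illustration; the ‘typical degree’ of 10 and the exponent of 2.5 are assumed values, not figures from Watts), a tail that decays exponentially is compared with a power-law tail.

# Sketch: an exponential-type tail has a cutoff, a power-law tail does not.
# The typical degree (10) and exponent (2.5) are illustrative assumptions.
import math

typical = 10.0
alpha = 2.5

def exp_tail(k):
    """P(degree > k) when the tail decays exponentially with scale 'typical'."""
    return math.exp(-k / typical)

def power_tail(k, k_min=1.0):
    """P(degree > k) for a pure power law p(k) ~ k^(-alpha), k >= k_min."""
    return (k / k_min) ** (1.0 - alpha)

for k in (10, 100, 1000):
    print(f"k = {k:5d}: exponential tail {exp_tail(k):.2e}, power-law tail {power_tail(k):.2e}")

# The exponential tail collapses (a node with 100 times the typical degree is
# essentially impossible), while the power-law tail shrinks only polynomially,
# so very highly connected hubs remain entirely plausible. That absence of an
# intrinsic scale is what 'scale-free' means here.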


Examples of Power Laws at Work in the Real World

The World Wide Web, which operates on an interconnected set of routers [maintained by many independent institutions] called the Internet, is a dynamically growing (new web sites appear every day) network which is characterized by a property called preferential attachment. That is, we do not link randomly to every site on the Web — we do not link to ordinary nodes. We most often choose to link to the most popular web sites. When choosing between two pages, one with twice as many links as the other, about twice as many people link to the more connected page [11]. While our individual choices are highly unpredictable, as a group we follow strict patterns. These two properties, growth and preferential attachment, guarantee that the network, any network, will be scale-free. That is, it obeys a power law. This will also guarantee that the network will be self-organizing. It grows dynamically without the invisible hand of an all-controlling spider at the center of the web.
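A minimal sketch of this growth-plus-preferential-attachment mechanism follows (added here for illustration; it is a toy model in the spirit of the research described, not anyone’s published code, and the network size and random seed are arbitrary choices).

# Sketch: grow a network one node at a time, attaching each new node to an
# existing node chosen with probability proportional to its current degree.
# Network size and random seed are arbitrary illustrative choices.
import random
from collections import Counter

def grow_preferential(n_nodes=10_000, seed=42):
    random.seed(seed)
    degree = {0: 1, 1: 1}          # start from two linked nodes
    targets = [0, 1]               # each entry is one 'end' of a link, so sampling
                                   # uniformly from this list is proportional to degree
    for new in range(2, n_nodes):
        old = random.choice(targets)
        degree[new] = 1
        degree[old] += 1
        targets.extend([new, old])
    return degree

degree = grow_preferential()
histogram = Counter(degree.values())
for k in (1, 2, 4, 8, 16, 32):
    print(f"nodes with degree {k:3d}: {histogram.get(k, 0)}")
print("largest degree (the biggest hub):", max(degree.values()))

# Most nodes end up with only one or two links while a handful become large
# hubs: the heavy-tailed, power-law-like degree distribution that growth plus
# preferential attachment produces.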

The spread of the AIDS virus followed the pattern of networks which are scale-free, with a few highly connected nodes (hubs) and which have the property of diffusion. According to Barabasi [12], “Gaetan Dugas, once a French Canadian flight attendant, is often called Patient Zero of the AIDS epidemic. This is not because he was the first to be diagnosed with the disease but rather because at least 40 of the 248 people diagnosed with AIDS by April 1982 had either had sex with him or with someone who had. He was at the center of an emerging complex sexual network among gay men, a web anchored between the East and West Coasts of North America, spanning San Francisco, New York, Florida, and Los Angeles.”

“[Dugas] figured that he had about 250 sexual partners a year. While some estimates put the total number of his partners as high as 20,000, his decade of promiscuity in gay clubs and bathhouses clearly put him in sexual contact with at least 2,500 people...Dugas played an important role in turning the AIDS epidemic in a few short years from an obscure and rare ‘gay cancer’ (Kaposi’s sarcoma) to a North American health care crisis. He is a terrifying example of the failure of classical epidemic models and evidence of the power of hubs in our highly mobile and connected society. Indeed, when it comes to viruses and epidemics, hubs make a deadly difference.”

Barabasi observes that recent research on sexual behavior shows that the appetites of Gaetan Dugas and Wilt Chamberlain (who claimed 20,000 heterosexual encounters over his lifetime) are not unique [13]. “The scale-free topology [of the network of sexual associations] implies that, though most people have only a few sexual links, the web of sexual contacts is held together by a hierarchy of highly connected hubs. They are the Wilt Chamberlains and the Gaetan Dugases, collecting an astounding number of sexual partners.”

“The deadly [AIDS] virus must have followed the route already spotted [by network researchers] in the spread of innovation and computer viruses: Hubs are among the first infected thanks to their numerous sexual contacts. Once infected, they quickly infect hundreds of others. If our sex web formed a homogeneous, random network, AIDS might have died out long ago. The scale-free topology at the AIDS virus’s disposal allowed the virus to spread and persist.”

Much has been made recently of the huge scientific accomplishment of mapping the human genome – that vast chain of DNA which encodes every gene in our bodies. Barabasi brings this accomplishment down to earth [14]. “To be sure, the sequencing of the human genome is a triumph, the result of modern molecular biology’s ability to reduce complex living systems to their smallest parts. It is undoubtedly a catalyst of a new era in both medicine and biology. But the genome project has brought along a new realization: The behavior of living systems can seldom be reduced to their molecular components.”

Barabasi continues. “Our inability to find a single gene responsible for manic depression is the best illustration. A list of suspected genes is not sufficient. To cure most illnesses, we need to understand living systems in their integrity. We need to decipher how and when different genes work together, how messages travel within the cell, which reactions are taking place or not in any given moment, and how the effects of a reaction spread along this complex cellular network. To achieve this we must map out the network within the cell. This web of life determines whether a cell develops into skin or labors constantly in the heart, decides the cell’s response to external disturbances, holds the key to survival in constantly changing environments, tells the cell when to divide or die, and is responsible for illnesses ranging from cancer to psychiatric disorders. As the historic Science article that reported the decoding of the human genome concluded, ‘there are no ‘good’ genes or ‘bad’ genes, but only networks that exist at various levels.’”

Then Barabasi goes on to describe [15] the research conducted on understanding the molecular metabolic and regulatory networks that govern this ‘map of life.’ “[The research suggests that] the scale-free nature of the protein interaction network is a generic feature of all organisms … Taken together, the similar large-scale topology of the metabolic and the protein interactions networks [in cells] indicate the existence of a high degree of harmony in the cell’s architecture: Whichever organizational level we examine, a scale-free topology greets us. These journeys within the cell indicate that Hollywood [the Kevin Bacon game describing the ‘connectedness’ of the 500,000 actors in the 250,000 movie database] and the Web have only rediscovered the topology that life had already developed 3 billion years earlier. Cells are really small worlds, [that is, have only a few ‘degrees of separation’ between nodes] that share the topology of many other non-biological networks, as if the architect of life could design only these.”

“How did life arrive at this architecture? Almost as soon as we asked the question, we had the answer … Each of three independent research groups offered the same simple elegant explanation, claiming that the cell’s scale-free topology is a result of a common mistake cells make while reproducing.”

Barabasi explains how this and other such research on the cellular networks of living things resulted in describing the neural network [16] of C. elegans, a minuscule worm. “In 1996 the decoding of the yeast genome gave the scientific community a shock: It contained as many as 6,300 genes. Only about a quarter of these were expected and could be assigned vague functions. To be on the safe side, and boosted by humans’ perceived importance as the pinnacle of evolution, biologists estimated that the human genome would have at least 100,000 genes. This number was believed to be sufficient to account for the high complexity of Homo sapiens. Then came February 2001 and the publication of the human genome. It turned out that we have less than a third of the anticipated genes – only about 30,000. Therefore, a mere one-third increase in genes must explain the difference between us and the unsophisticated Caenorhabditis elegans worm – quite a provocative idea when we consider that the 20,000 genes of C. elegans need to encode only three hundred neurons, whereas our extra 10,000 genes have to account for the billion nerve cells present in our brain.”

“In short, it is now clear that the number of genes is not proportional to our perceived complexity. Then what does complexity mean? Networks point to the answer. Framed in terms of networks, our question becomes: How many different potentially distinct behaviors can a generic network display with the same number of genes? In principle, two cells that are identical except that a specific gene is on in the first cell and off in the second could behave differently. Assuming that each gene can be turned on or off independently, a cell with N genes could display 2^N (2 to the power N) distinct states. If we adopt as a measure of complexity the potential number of distinct behaviors displayed by a typical cell, the difference between a worm and humans is staggering: Humans could be viewed as 10^3,000 (10 to the power 3,000) times more complex than our wormy relatives!”
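The arithmetic behind that figure can be checked directly, as the tiny sketch below shows (an added illustration; the gene counts are the round figures quoted above).

# Sketch: if a cell with N on/off genes can display 2^N states, compare a
# 20,000-gene worm with a 30,000-gene human (round figures from the text).
import math

worm_genes, human_genes = 20_000, 30_000
extra_genes = human_genes - worm_genes        # ratio of state counts = 2^extra_genes
orders_of_magnitude = extra_genes * math.log10(2.0)
print(f"humans have roughly 10^{orders_of_magnitude:.0f} times as many potential states")
# Prints about 10^3010, the '10 to the power 3,000' figure quoted above.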

Much of the research on dynamic networks in the late 1990s was carried out modeling such physical systems, borrowing techniques from physicists and the science of biology. But a great deal of the research was carried out on the Internet and the World Wide Web. Barabasi tells us that we shouldn’t underestimate the enormous services the search engines [Google, Alta Vista, Inktomi, etc.] and their robots [software that enters a web site and copies the text on it for indexing and making it available to others] offer us [as Web users] [17]. “We often sigh in desperation, calling the Web a ‘jungle.’ The truth is, without robots it would be a black hole. Space would curve around it such that anything falling in would never get out. Robots keep the World Wide Web from collapsing under its increasing complexity. They fold the space out, maintaining order in the chaos of nodes and links.”

In the late 1990s, researchers found that [18] “...connectivity distribution of the Internet routers follows a power law...They showed that [this] collection of routers linked by various physical lines, is a scale-free network.” It is not a random network. Consequently, it is a self-organizing system. According to Barabasi, “Routers are added where there is a demand for them, and demand depends on the number of people wanting to use the Internet. Thus there is a strong correlation between population density and the density of Internet nodes.”

“The distribution of routers on the map of North America forms a fractal set, a self-similar mathematical object discovered in the 1970s by Benoit Mandelbrot. [On the Internet there is] an interplay of growth, preferential attachment, distance dependence and an underlying fractal structure. Each of these forces alone, if taken to the extreme, could destroy the [Internet’s] scale-free topology...But the amazing thing is that these coexisting mechanisms delicately balance each other, maintaining a scale-free Internet. This very balance of power is the Internet’s own Achilles heel.” It is susceptible to cascading failures and/or attack by those with malicious intent (crackers).

The World Wide Web, comprising over a billion individual web sites around the world, has no central design. Like the Internet itself, it is self-organized. It evolves from the individual actions of millions of users. As a result, its architecture is much richer than the sum of its parts. It cannot be shaped by any single user or institution. Most of the Web’s truly important features and emerging properties derive from its large-scale self-organized topology.

An example of this property is given by Barabasi [19] — democracy on the Web. “A scale-free topology means that the vast majority of [Web Sites] are hardly visible, since a highly popular minority has all the links. Yes, we do have free speech on the Web. Chances are, however, that our voices are too weak to be heard. [Sites] with only a few incoming links are impossible to find by casual browsing. Instead, over and over we are steered toward the hubs. It is tempting to believe that robots (software that is designed to enter a web site and copy all of the text on it) — primarily used by search engines (e.g. Google, Alta Vista, Inktomi, etc.), can avoid this popularity-driven trap. They could, but they don’t. Instead, the likelihood that a [Web Site] will be indexed by a search engine depends strongly on the number of its incoming links. Documents with only one incoming link have less than a 10 percent chance of being noticed by any search engine. In contrast, robots find and index close to 90 percent of pages that have twenty-one to one hundred incoming links.”

As a gauge of this concept, every Web Site is ranked on a scale of 0 to 10 by the search engine, Google, in terms of its popularity and its ‘importance’ — the number of instances that it is at a link on the more popular hubs on the Web. For example, Google itself has a rank of 10 (most important). The Yahoo and Microsoft search engines both have a Google-rank of 10. The New York Times and Washington Post web sites have Google-ranks of 7 and 8 respectively. The PBS and Washington Times web sites have a Google-rank of 5. This web site [www.newtotalitarians.com] has a Google-rank of 4. This shows the power-law nature of the World Wide Web. A quality site can be rated right up there with the ‘big boys’ if it is increasingly ‘linked to’ by other web sites and especially those web sites with high Google-rank. Thus, quality is an important aspect of a web site that determines its rank. Indeed, in the concept of a ‘Tipping Point,’ small beginnings can produce large results. A web site that continues to grow on the basis of its quality can rise to the rank of a major hub within its sphere of interest. Google itself, among search engines, has proven this maxim as it quickly rose to the top over those which were previously established.

Barabasi explains this phenomenon. “The architecture of the Web controls just about everything, from access to consumers to the probability of being visited by surfing along the links. But the science of the Web increasingly proves that this architecture represents a higher level of organization than the [regulatory] code. Your ability to find my Webpage is determined by one factor only: its position [ranking] on the Web. If many people find my page interesting and they link to me, my node will slowly turn into a minor hub, and search engines will inevitably notice. If everybody ignores my Webpage, so will the search engines. I will join the ranks of invisible Websites, which are the majority anyway. Thus the Web’s large-scale topology — that is, its architecture — enforces more severe limitations on our behavior and visibility on the Web than government or industry could ever achieve by tinkering with the [regulatory] code. Regulations come and go, but the topology and the fundamental natural laws governing it are time invariant. As long as we continue to delegate to the individual the choice of where to link, we will not be able to significantly alter the Web’s large-scale topology, and we will have to live with the consequences.” Freedom of choice is the key here.

The same can be said for our civilization. American civilization has been in stable equilibrium over the past 400 years and is likely to remain in such a state as long as it remains scale-free in the same sense that networks are scale-free. It is always possible that the mysterious ‘parameter of complexity’ will change, rendering our civilization a fixed-point dictatorship or causing it to disappear suddenly in the confusion of chaos.

Some would argue that either of the latter two possibilities, if they arise, would result from some huge, dark conspiracy directed from some central core of control. This is entirely possible, but highly unlikely. If such a state evolves, it will most likely be the result of the dynamics of the ‘network connections’ of the entities which comprise our social fabric. Scientists have made the same mistake during their research in Network Theory.

Duncan Watts informs us that [20] “An important example of how a purely structural approach to networks has led many analysts into a reassuring but ultimately misleading view of the world is the case of centrality. One of the great mysteries of large distributed systems — from communities and organizations to brains and ecosystems — is how globally coherent activity can emerge in the absence of centralized authority and control … [Nevertheless], notions of centrality have been enormously popular in the literature.”

“But what if there just isn’t any center? Or what if there are many ‘centers’ that are not necessarily coordinated or even on the same side? What if important innovations originate not in the core of a network but in its peripheries, where [the central authorities] are too busy to watch? What if small events percolate through obscure places by happenstance and random encounters, triggering a multitude of individual decisions, each made in the absence of any grand plan, yet aggregating somehow into a momentous event unanticipated by anyone, including the [participants] themselves?”

“In a multitude of systems from economics to biology, events are not driven by any preexisting center but by the interaction of many near-equals and a few hubs that exist in scale-free networks.” Barabasi strengthens this argument by informing us that research carried out on dynamic networks which model competition in complex systems indicates that systems incorporating both growth and fitness produce two possible outcomes — one a winner-takes-all result (based on the physics of the Bose-Einstein condensate) and the other a rich-get-richer result where there are hubs and other nodes, all with some degree of contribution to the outcome.
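A toy sketch of this growth-with-fitness competition follows (an added illustration, not the published model in detail: here each new node attaches to an existing node with probability proportional to that node’s fitness times its degree, and all sizes and seeds are arbitrary choices).

# Sketch: growth with fitness. New nodes attach to node i with probability
# proportional to fitness[i] * degree[i]. Comparable fitnesses yield a hierarchy
# of hubs (rich-get-richer); one overwhelmingly fit node grabs nearly every link
# (winner-takes-all). All numbers are illustrative.
import random

def grow_with_fitness(fitness_of_node, n_nodes=3000, seed=7):
    random.seed(seed)
    fitness = [fitness_of_node(0), fitness_of_node(1)]
    degree = [1, 1]                      # two seed nodes joined by one link
    for new in range(2, n_nodes):
        weights = [f * d for f, d in zip(fitness, degree)]
        old = random.choices(range(len(degree)), weights=weights)[0]
        degree[old] += 1
        degree.append(1)
        fitness.append(fitness_of_node(new))
    return degree

# Case 1: comparable fitnesses -> a hierarchy of hubs, none dominant.
even = grow_with_fitness(lambda i: random.uniform(0.5, 1.5))
# Case 2: node 0 is vastly fitter -> it collects the lion's share of links.
skewed = grow_with_fitness(lambda i: 100.0 if i == 0 else 1.0)

n_links = len(even) - 1                  # this growth rule adds one link per new node
print(f"comparable fitnesses: biggest hub touches {max(even) / n_links:.1%} of all links")
n_links = len(skewed) - 1
print(f"one super-fit node:   node 0 touches {skewed[0] / n_links:.1%} of all links")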

For the winner-takes-all outcome, there is a star topology wherein there is a central hub which has all of the connections. This represents a stable fixed point in the Chaos Theory bifurcation curve of the Quadratic Iterator. Here the fittest node grabs up all the links — raw power of a dictator — and the scale-free nature of the network is destroyed [there are no other hubs].

For the rich-get-richer outcome, there is a hierarchy of hubs whose size distribution follows a power law. Google, the Web’s best and most popular search engine, is an example of a hub in such a network. It is very large in relation to other hubs but does not have complete dominion over the network. Such networks are in stable equilibrium. They balance order and chaos in such a way that the system is stable. American civilization is, and has been for the past 400 years, in such a state. As long as it remains scale-free, it will continue to enjoy the prosperity of its past. The attribute that guarantees its scale-free nature is the freedom of the individual to choose and the limitations placed on the raw exercise of power by either individuals or groups over those who lack such power. The separation of powers derived from our Founding Documents is the key to the scale-free nature of our civilization. It will remain so as long as we do not take actions that destroy the scale-free nature of our important connections with each other.

The third region of the bifurcation curve of the Chaos Theory paradigm — between stable periodic equilibrium and the complete chaos at a = 4 — represents a region which, in terms of a dynamic network system, fluctuates randomly from one state to another without converging to a stable state. This region represents the process through which a stable America can drift from socialism, to communism, to anarchy, to dissolution as a civilization. In this region, the scale-free nature of our ‘connectivity’ disappears, leaving a stable fixed point — the decay, dissolution and death of American civilization.

Barabasi reveals how much he and his colleagues have learned about the natural laws governing other complex systems by studying the properties of the Internet and the World Wide Web. “One of the most exciting aspects of this exploration has been uncovering laws whose validity does not stop at the gates of cyberspace. These laws, applying equally well to the cell and the ecosystem, demonstrate how unavoidable nature’s laws are and how deeply self-organization shapes the world around us. By virtue of its digital nature and enormous size, the World Wide Web offers a model system whose every detail can be uncovered. We have never gotten this close to any network before. It will continue to be a source of inspiration and ideas to anybody aiming to grasp the properties of our web-like universe.”

Barabasi concludes this discussion with the observation, “Whereas the twentieth century was seen as the century of physics, the twenty-first is often predicted to be the century of biology. A decade ago it would have been tempting to call it the century of the gene. Few people would dare say that any longer about the century we have just entered. It will most likely be a century of complexity. It must be a century of biological networks as well. If there is any area in which network thinking could trigger a revolution, I believe that biology is it.”

Or could it be the century of network theory and experiment applied to the preservation of American civilization? Which is more important as a national policy goal – finding ways to prolong life (a lifelong pursuit of the ‘me, me’ Boomer generation) or finding ways to assure that our freedoms, based on our Founding Documents, are passed on to our grandchildren and their grandchildren?

________________________________________________________________________________________________

Footnotes:

1) This material on Chaos Theory is taken from the book, “Chaos and Fractals: New Frontiers of Science,” by Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe, Springer-Verlag, 1992.

2) Gladwell, Malcolm, “The Tipping Point: How Little Things Can Make a Big Difference,” Little, Brown and Company, 2002.

3) Barabasi, Albert-Laszlo, “Linked: The New Science of Networks,” Perseus Publishing, 2002.

4) Watts, Duncan J., “Six Degrees: The Science of a Connected Age,” W.W. Norton & Company, 2003.

5) Ibid, Barabasi, Albert-Laszlo, pp. 77.

6) Ibid, Watts, pp. 104.

7) Ibid.

8) Ibid, pp. 106.

9) Ibid.

10) Ibid, pp. 107.

11) Ibid, Barabasi, pp. 85.

12) Ibid, Barabasi, pp. 123.

13) Ibid, Barabasi, pp. 138.

14) Ibid, Barabasi, pp. 181.

15) Ibid, Barabasi, pp. 182-189.

16) Ibid, Barabasi, pp. 196-197.

17) Ibid, Barabasi, pp. 178.

18) Ibid, Barabasi, pp. 150-152.

19) Ibid, Barabasi, pp.174-175.

20) Ibid, Watts, pp. 51.

________________________________________________________________________________________________


