
On the entropy of social systems: a revision of the concepts of entropy and energy in the social context.

INTRODUCTION

The term entropy is very often found in works about societal complexity. The theoretical framework that tackles the concept of complexity as an emergent phenomenon is of course that of systems theory (Wiener, 1961; Bertalanffy, 1968). But the meaning of entropy in the social context is often not clear, and this results in obscure arguments about the factors that continuously reproduce complexity. In particular, references to entropy usually imply disorganization and quite often leave aside the fact that disorganization (or disorder) can justifiably be considered as constraints imposed on an observer by his language, that is, his system of distinctions (Foerster, 2003, p. 280). Theoretical frameworks such as social entropy theory (SET) (Bailey, 1990, 1997a, 1997b, 2006a, 2006b) have been developed and refined to measure entropy as an indicator of the internal state of social systems, namely, their disorder as a temporal variable.

More specifically, entropy is generally considered as a measure of the ability to predict the next state of a system. If the next state is highly predictable, then entropy is considered to be low and vice versa; consequently, a system that presents low entropy is considered to be organized and, by deduction, desirable. Therefore, predictability seems to be the keyword when it comes to organization and when references to entropy appear (Wiener, 1961; Arnopoulos, 2001). If this is the case, then the univocal use of the thermodynamic meaning of entropy in the social sciences context could contingently lead to all kinds of misunderstandings (Bateson, 2000, pp. 458-459). Entropy has at least two distinct scientific meanings and also has its relevant counterparts: energy and certainty.

In this paper, we try to shed light on the two different meanings of entropy and draw clear distinctions as to the contexts that those meanings pertain to. That may supply contemporary systems theory with new perspectives, which could bring forth a new conception of the importance of otherness for social systems.

ENTROPY IN THE THERMODYNAMICS CONTEXT

First, let us try to clarify the older (historically speaking) meaning of entropy, that is, the entropy of thermodynamics. There are two ways to consider and measure entropy: (i) a measure of the unavailable energy in a closed thermodynamic system; and (ii) a measure of disorder of a closed thermodynamic system. The first measure is associated with the conversion of heat energy to mechanical energy. The second is associated with the probabilities of the occurrence of a particular molecular arrangement in a gas.
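For reference, the two formulations can be written compactly as follows (standard textbook expressions, added here only as a reminder and not quoted from the works cited):

```latex
% Clausius (macroscopic) form: entropy change for a reversible heat exchange
% \delta Q_{rev} at absolute temperature T
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}

% Boltzmann (statistical) form: W is the number of microstates compatible
% with the observed macrostate, k_B the Boltzmann constant
S = k_{B} \ln W
```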

To recall that concept, let us use an example. Suppose we have an adiabatic envelope (a completely insulated chamber). We have a source of energy, say a lighter, inside that envelope, and the envelope itself is full of gas. If, by any means, we use the energy contained in our source to heat the gas (i.e. we simply light the lighter), after some time, we will end up in a state where the gas has a uniform temperature: every molecule will have absorbed (more or less) the same amount of energy. That will result in an unpredictable (and faster than before) movement of the molecules of the gas in every possible direction; no prediction about their movement can be made, and therefore, no certainty is possible at the micro level. Our energy source will be exhausted, and so will our ability to probabilistically predict the molecules' next position. We usually call this situation chaotic. But at this point, keep in mind that there is no such thing as a perfectly isolated envelope (Popper, 1957, p. 151).

Without any further investigation, we can note some interesting aspects of our experiment:

(1) The procedure took time to complete. The dissipated energy rose and our energy source was exhausted. That is, the amount of available energy dropped to zero and the amount of entropy rose to its maximum. We can say that, as soon as the convection started, the gradual increase of entropy could be used as a timer, ticking off the moments to the end; inversely, we could use the decrease of energy in our energy source as a measure of time.

(2) During our experiment, we were able to predict the course of the molecules of the gas, with a certain degree of statistical certainty. The colder molecules were going down, and the warmer ones were going up, forming a current of hot gas. That was a work in progress, an intentional change. We conceive of work as the process that ensures intentional changes in a context: so, there was no work before the experiment, and there cannot be any work after the end of it. There was no certainty before we started using our energy source, and there is no certainty after it was exhausted.

(3) The amount of energy contained in our adiabatic envelope is constant; no energy is lost (because of the first law, the conservation of energy). But we no longer have a form of energy that we can use within that envelope to produce work (because of the second law). That is, whenever we talk of work, we refer implicitly to the available (i.e. useful or organized) energy. If we now try to collect the energy from the molecules back into our original source, that would mean a production of work, (1) and there is no energy--at least not in an appropriate form--to use so as to complete that task.

(4) Consequently, our original source of energy was in an appropriate form (so as to produce work).

So, we used our source until it was exhausted, and we ended up with a total inability to do anything else. Before our experiment, there was potential; during the experiment, there was statistical certainty; and at the end, we have concurrently total certainty (for we are sure there is nothing more we can do) and total uncertainty (as to the trajectories of the molecules of the gas). And we are stuck. We can make no decisions because there are no options to select from. We have reached a dead end.

But before we go on, what was it that we called a 'dead end'? Clearly, it is the state at which there is no potential, that is, no alternatives to select from. To put it differently, there are no distinctions to draw; the state (the final conditions) is given, and there is nothing we can do to select another. Up to this point, we can conclude that entropy in the domain of thermodynamics measures useless energy (and not lack of energy) and indirectly reflects uncertainty, and also that maximum entropy signifies a complete inability to select a successive state.

ENTROPY IN THE COMMUNICATIONAL CONTEXT

Let us try now to examine the meaning of entropy in the context of information-exchanging systems. (2) Based on the work published by Shannon (1948), we wish to concentrate on the properties and characteristics of discrete channels. This preference stems from the fact that communication is triggered by 'a sequence of choices from a finite set of elementary symbols' (Shannon, 1948, p. 3), that is, a natural language, spoken or otherwise, (3) and, more generally, a sequence of discrete selections or states, as is the case in the domain of cybernetics (Ashby, 1957). Also, following Luhmann (1986, 1995), we consider social systems to be communication systems, that is, systems that are constituted through communications (communicative selections), so examining entropy in the communicational context is more appropriate and, as we intend to show, more plausible.

Shannon points out that each symbol in the sequence of a message depends on certain probabilities, varying according to the previous symbols already transmitted, that is, what he calls a 'residue of influence' (Shannon, 1948, p. 8). Therefore, he suggests that we could conceive of a discrete source as a stochastic process, a process where each successive selection is dependent on the previous ones. (4) Thus, entropy in Shannon's approach is defined as a measure of the probability of the next symbol to appear in the message sequence, and therefore, entropy in the communicational context refers to a generalization of Boltzmann's statistical entropy. Consequently, entropy refers to the variation of uncertainty during the transmission of a message; in Shannon's own words, 'Quantities of the form $H = -\sum_i p_i \log p_i$ ... play a central role in information theory as measures of information, choice and uncertainty' (Shannon, 1948, p. 11). It is of utmost importance for our analysis that Shannon refers to informational entropy as a measure of 'information, choice and uncertainty', for it is exactly the communicative selections (choices) of systems that produce and reproduce those specific concepts ('information, choice and uncertainty'). (5)
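To make the quantity in Shannon's formula concrete, here is a minimal numerical sketch (our own illustration in Python; the probability values are arbitrary and not drawn from any of the cited works):

```python
import math

def shannon_entropy(probabilities, base=2):
    # H = -sum(p_i * log(p_i)); outcomes with p_i == 0 contribute nothing
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

# A source emitting four symbols with unequal probabilities is more predictable
# (lower entropy) than one emitting them equiprobably (maximum entropy).
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits per symbol
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits per symbol
```

The uniform case gives the maximum entropy for a given alphabet; any deviation from uniformity makes the next symbol more predictable and lowers H.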

Shneider (2010, p. 3), following M. Tribus, suggests that uncertainty could also be called 'surprisal'. That is, if there is a set of M available symbols, (6) and, at a certain point of the message sequence, a symbol $u_i$ whose probability of appearance $P_i$ approaches 0 eventually appears, the receiver will be surprised. That means that the receiver has expectations, and those are constituted during and because of the communication and are defined (or bounded) by the receiver's conception of the communicational content and context (e.g. a language or, more generally, a culture). This leads eventually to a circular procedure: communication produces expectations, which in turn reproduce communication. And this recursive process stabilizes certain bilateral expectations that, so to speak, define an intersubjective space that makes communication possible (Luhmann, 1995). This is exactly what Shannon defines as 'redundancy'. Redundancy is defined as 'One minus the relative entropy' (Shannon, 1948, p. 14). But how can we conceive of the notion of redundancy? Shannon's definition is strictly mathematical. To exemplify the notions of entropy and redundancy, let us try a simple example. Suppose you toss a (supposedly) fair die. You can say, 'I know that the outcome will be in the sample space [1, 2, 3, 4, 5, 6], and additionally, I know that the possible outcomes are equiprobable with a probability equal to 1/6'. How can you know that? The answer is, because of prior experience: you know what a fair die is, you know what tossing is and there are no extra variables in your experiment, and therefore, your argument will always be valid. Your knowledge constructs a context that we call redundancy, that is, an informational framework about what is going to happen next or is happening already, built from what you already know. In this example, entropy (the measure of uncertainty) drops to zero and of course (according to Shannon) redundancy equals 1 (100%). (7) Again, you have expectations, and these arise because of redundancy. But suppose now you toss the die 100 times and, surprisingly, 90 of those times you get a '6' and different outcomes the rest of the times; redundancy starts to drop (or certainty starts to collapse), surprisal steps in and entropy rises accordingly as you go beyond 100 tosses trying to verify that the outcomes are equiprobable, but every next set of tosses confirms that something is wrong. Eventually, you realize that this is not a fair die, and that using it gives a probability of 0.9 to '6' and perhaps 0.02 to each of the rest of the numbers. A new variable steps into your environment, namely, the notion of a 'crooked die'. Now, you have new information, albeit about that particular die, and that happened because, for a while, the entropy had risen. Your knowledge has changed, that is, you experienced a change (Maturana and Varela, 1980, pp. 11-12) of your own state. 'One can speak of change only in relation to structures. Events cannot change, because there is no duration between their emergence and their passing away ... Only structures keep what can be continued (and therefore changed) relatively constant. Despite the irreversibility of events, structures guarantee a certain reversibility of relationships. On the level of expectations.... a system can learn, can dissolve what has been established, and can adapt to external or internal changes' (Luhmann, 1995, p. 345). And also, 'By information we mean an event that selects system states' (Luhmann, 1995, p. 67). The event that Luhmann refers to was, in our case, the abnormal 'behavior' of the die. Of course, after a while, once certainty about this specific die has risen high again, you would stop tossing it, exactly because entropy will start to drop again: the event '... retains its meaning in repetition but loses its value as information ... The information is not lost, although it disappears as an event. It has changed the state of the system and has thereby left behind a structural effect; the system then reacts to and with these structural effects'. Redundancy, then, can be conceived as '[a] surplus of informational possibilities ...' (Shannon, 1948, p. 172), that is, a way to reduce entropy, which of course is in the same vein as Shannon's original idea.
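To give the dice illustration a numerical face, the sketch below (our own, in Python) measures the surprisal of the observed series of tosses, first under the receiver's initial expectation (a fair die) and then under the revised expectation (the crooked die with probability 0.9 for '6', as assumed above):

```python
import math

def surprisal(p, base=2):
    # Surprisal (in bits) of an outcome to which the observer's model assigns probability p
    return -math.log(p, base)

def total_surprisal(counts, model, base=2):
    # Sum of surprisals of an observed frequency profile under a given model
    return sum(n * surprisal(model[outcome], base) for outcome, n in counts.items())

observed = {'6': 90, 'other': 10}           # 90 sixes in 100 tosses
fair_model = {'6': 1 / 6, 'other': 5 / 6}   # initial expectation: a fair die
crooked_model = {'6': 0.9, 'other': 0.1}    # revised expectation: 0.9 for '6', 0.02 per other face

print(total_surprisal(observed, fair_model))     # ~235 bits: wildly unexpected
print(total_surprisal(observed, crooked_model))  # ~47 bits: far less surprising
```

The gap between the two figures is the informational value of the episode: once the expectation (the redundancy context) has been revised, the same behaviour of the die no longer surprises, which is why the tosses eventually stop being informative.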

So entropy measures the 'surprisal' or--perhaps more specifically--the importance of an event (8) for the receiver; that is to say, it is the measure of deviations from his expectations, and the measure of surprisal of the receiver is correlated to the selections of the source. But what is the relation of surprisal to information and information to energy or entropy?

THE RELATION OF INFORMATION TO UNCERTAINTY

As we saw, entropy measures the uncertainty, albeit indirectly in the thermodynamic context and in a straightforward manner in the theory of information. Therefore, entropy is also a reverse (9) measure for information. But what is information?

From Shannon's viewpoint, information is the measure of the reduction of statistical entropy. Before a transmission (or an event), there is an uncertainty H(x) on the receiver's side as to the next symbol to be transmitted. After the transmission (of symbol y), the uncertainty is reduced by R (10):

$R = H(x) - H_y(x)$ (1)

To be sure, the symbol R stands for the rate of actual transmission (Shannon, 1948, p. 20) and clearly shows that information is transmitted if and only if $H(x) > H_y(x) \geq 0$. If $H(x) = H_y(x)$, then y is independent of x, and no information is transmitted at all (R = 0). Shannon states clearly that $H_y(x)$ is the conditional entropy, that is, the uncertainty about x that remains after the event y has occurred. It follows that if the receiver is 100% sure that y will occur and it actually occurs, then the information produced is 0 [for $-p_i \log p_i = 0$ when $p_i = 1$]. Therefore, in the latter case, it is irrelevant from an informational point of view whether y is transmitted at all. Certain conclusions can be drawn from these remarks (a numerical sketch follows the list below):

(1) The rate of transmission of a signal is distinct from the rate of transmission of information. That is, the transmission of a signal alone does not necessarily entail transmission of information, and consequently, transmission does not guarantee communication. That is to say, the rate of information transmission can never exceed the rate of transmission of the signal.

(2) If a state (or symbol) that is absolutely expected eventually presents itself, then the uncertainty is not reduced.

(3) States that have low probability of appearance produce a large quantity of information. The lower the probability, the higher the communicational value of the symbol transmitted (or of the next state presented).

(4) Only unpredictable sequences of states produce information and therefore constitute communication. If certainty is not disrupted, information is not produced.

(5) Because the receiver defines the informational value of the symbol y through the conditional probability of y occurring after x, it is the receiver who connects the states together as successive instances of the same phenomenon: namely, communication. To put it differently, the receiver attributes meaning to y according to x, perceiving them as successive phenomena in a deterministic fashion. It must be clear here that the deterministic aspect of that procedure is also constructed by the receiver. If the phenomena turn out to seem independent, then no information is produced; that is to say, no meaning can connect them [and $H_y(x)$ is indeterminate because $P(y \mid x) = 0$].
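The numerical sketch promised above: a toy computation (our own; the two joint distributions are hypothetical and only meant to exhibit the limiting cases of equation (1)) of H(x), H_y(x) and R for a noiseless and for a useless channel.

```python
import math

def entropy(probs, base=2):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def transmission_rate(joint, base=2):
    # joint[x][y] holds P(x, y); returns H(x), H_y(x) and R = H(x) - H_y(x)
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    hx = entropy(px.values(), base)
    # H_y(x) = -sum over x, y of P(x, y) * log P(x | y)
    hxy = -sum(p * math.log(p / py[y], base)
               for row in joint.values() for y, p in row.items() if p > 0)
    return hx, hxy, hx - hxy

# Noiseless channel: the received symbol fully determines the sent one, so H_y(x) = 0 and R = H(x).
noiseless = {'a': {'A': 0.5}, 'b': {'B': 0.5}}
# Useless channel: x and y are independent, so H_y(x) = H(x) and R = 0.
useless = {'a': {'A': 0.25, 'B': 0.25}, 'b': {'A': 0.25, 'B': 0.25}}

print(transmission_rate(noiseless))  # (1.0, 0.0, 1.0)
print(transmission_rate(useless))    # (1.0, 1.0, 0.0)
```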

It is of utmost importance to be clear at this point that what we are examining is an observing system, to be sure a self-referential system that observes its environment, and that our discussion revolves around the uncertainty the observing system experiences when interacting with an entropic environment.

Now, at the core of the theoretical framework of systems theory lies Bateson's famous definition of information: 'Of this infinitude [of differences], we select a very limited number, which become information. In fact, what we mean by information--the elementary unit of information--is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.' (Bateson, 2000, p. 459). The whole paragraph is included here because, as we will see, it says somewhat more than an abstract definition of the meaning of information. This reference, when examined within its original systemic theoretical context, describes the relation of a system to its environment. Let us try to elicit certain conclusions from it:

(1) Only differences (i.e. changes through time, or different aspects of concurrent phenomena) produce information. Continua do not produce information; therefore, a steady environment does not contribute to information.

(2) Not every difference accounts as information.

(3) The types of sensors a system has (i.e. inputs) along with their pathways predefine what the system can conceive as information, and as a consequence, the system is inevitably bounded into a specific abstraction of its environment. That is, the environment of the system is an abstraction of its contingent environment. We deduce that if the system finds a way to develop new sensors, its environment expands.

(4) The system consumes energy to collect information, for the 'pathways are ... provided with energy' (Bateson, 2000, p. 459); that is, the system uses energy to conceive of its environment. The system not only needs energy to trigger its outputs but also needs it to read its inputs. It follows that the system must be open to energy sources from its environment, or else the second law of thermodynamics applies and entropy tends to its maximum.

(5) The system is an observer.

(6) The observer conceives of his environment by posing questions, and only those questions he is able to form.

(7) The transformation of surprisal to information takes place within the observing system only. The environment cannot define the production of meaning, because the environment cannot define the observing system's inputs (let alone its internal organization).

A PRIMER ON AUTOPOIESIS

In order to achieve an integration of the aforementioned concepts, we need to recall the theory of autopoiesis (Maturana and Varela, 1980) and use it as a bridge, so to speak, which can help bring those views (theories) together.

The term autopoiesis (Greek, auto: self; poiesis: creation) was coined by the Chilean biologists Humberto Maturana and Francisco Varela in an endeavour to define rigorously the characteristics of living systems. They reached the conclusion that a system can be considered as living if and only if that system continuously recreates itself; they called such a system an 'autopoietic system'. By definition, 'An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produce the components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network' (Maturana and Varela, 1980, p. 72). The consequences of this definition are paramount. In short, 'autopoietic machines are autonomous' and 'subordinate all changes to the maintenance of their own organization' (Maturana and Varela, 1980, p. 80); they have individuality, they always function as unities, and their autopoietic network is strictly internal and circular: the living machine regenerates itself, and therefore, autopoiesis triggers autopoiesis in a circular process.

That is not to say that living systems remain unchangeable; on the contrary, it is exactly change--a continuous process of becoming--that guarantees the continuation of the living system as such in a nontrivial environment. Therefore, autopoiesis is a continuous process of becoming that conserves being. We need to emphasize here that autopoiesis is in no way governed by the living system's environment; autopoiesis remains autonomous, or else there is no autopoiesis and the living system disintegrates. Put another way, we can say that the system changes in a circular homeostatic procedure, which is triggered but in no way defined by the environment.

This is surely more than adaptation; the system reacts to its environment by reconstitution of its boundaries. Thus, it continuously manifests its difference from its environment, and doing so, the system internalizes this distinction as its basic mode of existence (Luhmann, 1995).

Another important aspect of the theory of autopoiesis is that of the system's autocatalysis; the living system, because of its autopoietic procedure, destroys its very own components and creates new ones; at any given moment, the only thing that is really important is the ability of the components to participate supportively in the autopoietic cycle, despite other characteristics they may hold (which of course could signify their own autonomy). Autopoietic machines--or living systems--are in fact relation-static systems rather than homeostatic (Maturana and Varela, 1980). And it is only through continuous catalysis/recreation that the system achieves the continuous reconstitution of itself; namely, the confirmation and manifestation of its individuality, continuously identifying itself as such, achieves a unity--in an ontological sense--in time and space: 'Pure ostension plus identification conveys, with the help of some induction, spatiotemporal spread' (Quine, 1980, p. 68).

But those remarks bring forth a great problem if one intends to use the autopoietic paradigm in the sociological context; the idea of a 'super-process' that destroys the components of social systems is obviously not appealing--in fact, it is grotesque. For if one conceives of social systems as networks of people (e.g. actors), then one would boldly refuse an autopoietic procedure that--for instance--would 'work for the public interest', disregarding the individuals (or worse).

Thankfully, Niklas Luhmann paved the way to a new theoretical apparatus that brings the notion of autopoiesis into sociology and evades the problem of systemic autocatalysis; he came up with the notion that humans (psychic systems) are not in fact part of social systems but of their environment. At first glance, that poses a paradox: how can there be a society without people? But there never were psychic systems in the society, replies Luhmann. Social systems are constituted from communications and function by the production of meaning. That is, psychic systems do not communicate with each other directly (as if their nervous systems were interacting directly) but through the social system, and in doing so, they reproduce it; every communicative action is inherently social and vice versa: there cannot be any communication outside the social system. The 'components' of social systems are exactly those communicative actions. It is now understandable how social systems can catalyse their components; a stable repetitive communicative action does not contribute to communication (for in that case, R = 0). Consequently, in an ongoing communicative context, the content consists of communicative actions that must be catalysed in order to give way to the next ones so that communication can produce communication, and every previous step paves the way for the next one. And communicative actions must also be connected to each other by meaning (for $H_y(x)$ must be greater than zero). Therefore, Luhmann concludes that social systems owe their coherence to meaning (Luhmann, 1986, 1995).

THE IMPORTANCE OF ENTROPY FOR SOCIAL SYSTEMS

At this point, we wish to clarify the meaning of certain terms together with their counterparts, namely, certainty/uncertainty, organization/disorganization and action/inaction. Usually, for reasons that are widely considered obvious, certainty correlates with organization and uncertainty with disorganization. Consequently, predictability (i.e. a clear answer to 'what if ...') is considered a sine qua non for any work, that is, for taking action. This is so trivial that it has led to several misunderstandings and given birth to a paradox: if--as we saw--only uncertainty can contribute to information, are we to deduce that information and action belong to competitive--and perhaps mutually exclusive--realms? This is definitely absurd because it leads to the conclusion that only uninformed systems take action. Another path we could take is to suppose that information and action are independent of each other, but everyone knows that this is never the case, for an action is an endeavour to change a state of affairs, and so, we must accept that an action is an informed selection.

Let us try to inspect that matter more closely. Following Bateson (2002, p. 95), we can theorize two systems that function in interdependence: a system that opens a gate or relay, that is, a network of triggers, and a system whose energy flows through that gate when it is opened. It follows that only uncertainty triggers action, that is, only a difference can bring forth an endeavour for change. But living systems--as autopoietic systems--need to take continuous action to avoid disintegration. And the only source of differences that can account as information must lie outside the boundaries of the system itself; that is, the only reason for the living system to act lies in its environment--and this includes any operation and especially autopoiesis. Furthermore, that environment must be unpredictable, at least to a certain degree. The autopoietic system is in need of information, and therefore surprisal, to continuously trigger its own self-creation: 'Every event, every action appears with a minimal feature of surprise, namely, as different from what preceded it ... Uncertainty is and remains a condition of structure. Structure would cease were all uncertainty to be eradicated, because structure's function is to make autopoietic reproduction possible despite unpredictability' (Luhmann, 1995, p. 288). Again, this is not adaptation, for it is the system that transforms the event into information--not the environment. The autopoietic system is closed with respect to meaning production; therefore, the autopoietic system 'adapts' to its environment only by (and because it can retain) its individuality as autopoietic organization. And that organization (autopoiesis) initially emerges because its environment is unpredictable.

This leads to a spectacular inference: uncertainty need not entail disorganization. On the contrary, uncertainty fires up a living system's self-reconstructing processes (i.e. autopoiesis) and even more: if we take into account the notion of internal differentiation (Luhmann, 1992, 2002; Willke, 1996), we conclude that uncertainty actually triggers the emergence of new systems, for as a living system tries to compensate for every new experience, it can--contingently--create subsystems as solutions to its environment's unpredictability, according to Ashby's Law of Requisite Variety (Ashby, 1957, pp. 206-207); or, because of structural coupling (Maturana and Varela, 1980, pp. 107-111), a number of systems can form a new entity of autopoietic nature just because of their repetitive (and continuously changing) interactions, as each one of them beholds the others as its own environment.
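As a reminder, Ashby's law can be stated in entropy form (a standard restatement, not a quotation from Ashby, 1957): the uncertainty remaining in the outcomes cannot be pushed below the uncertainty injected by the disturbances minus the variety available to the regulator.

```latex
% Entropy form of the Law of Requisite Variety:
% O = outcomes, D = disturbances, R = regulator ("only variety can absorb variety")
H(O) \;\geq\; H(D) - H(R)
```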

Therefore, uncertainty can attract a living system to higher complexity: the system absorbs uncertainty from its environment and transforms it into organization, increasing its internal complexity. It follows that the living system evolves because of its environment's entropy. To put it another way, systems in general are solutions to the problem of complexity, because a surplus of complexity fires up the basal operation of distinction that precedes and entails organization--environmental complexity motivates the self-referential systems: 'There can be no distinction without motive, and there can be no motive unless contents are seen to differ in value' (Spencer Brown, 2008, p. 1). And the self-referential systems are both solutions and problems, for their continuation (the preservation of their individuality) becomes their main problem, and also, being autopoietic, they become complex for the other self-referential systems in their environment.

If, on the other hand, the environment becomes highly predictable (i.e. statistical entropy tends to zero), the living system, as observer, faces a twofold problem. First, a number of its subsystems become obsolete and probably dispensable. Those subsystems may be autopoietic themselves, and that could lead to a contradiction between the interests of a suprasystem and its subsystems and therefore to an operational conflict. This phenomenon is not uncommon--one has only to remember the problems that many military industries in the USA faced when the former USSR eventually collapsed. Second, the whole system itself may become obsolete. This could force the system to seek a new role, a new niche in its wider environment, thus attracting the system to high instability and jeopardizing its own existence.

As Karl Popper and Konrad Lorenz point out, 'the real moment of freedom relies on uncertainty' (Popper, 2003, p. 31), and 'our willingness to take risks is connected to the pursuance of the best possible lifeworld (...). The absence of problems can cause a deadlock' (Popper, 2003, p. 35). Those matters may be considered trivial, but the conception of uncertainty that has unfolded here is not, which of course calls for a total reconsideration of the role of uncertainty, entropy and disorganization.

But before we get to that, we face another problem: are we to assume that uncertainty is an attractive condition? We know that a lot of theoretical concepts try to tackle this particular problem, theories like double contingency, management theories, governance problems, etc., just to name a few. On the other hand, we have already argued here that uncertainty is not 'bad'; in fact, it is uncertainty that triggers creativity as well as creation. However, it is evident that at the end of the day, everyone expects a stabilized environment and tries to avoid surprises, and in general, everyone tries to trivialize his lifeworld. We use caller IDs on our cell phones, alarm systems in our homes and GPS navigation systems in our cars so as to avoid a wrong turn. And people before us, in the Middle Ages or antiquity, used to build great walls around their cities so as to keep the unexpected out. So is it not a problem when our experience (let alone common sense and the history of humanity) contradicts the theoretical framework we just presented? The answer is that the reason we developed such artefacts (alarms, GPS devices, architecture and even theories in general) is the presence of uncertainty. Furthermore, the reason we developed systems that develop other systems is to deal with uncertainty. And the same holds true for structured symbolic systems such as science, theories, technology, metatheories and even the very language we are using, that is, the systems that we develop through communication and that exist only in it, as networks of regulative norms trying to reduce complexity.

Of course, statistical entropy, uncertainty and complexity are not substantial entities like, say, a human or a chair. They are just explanatory principles, but nothing more than that. Those are terms that describe certain aspects of a relation; in particular, they are used to express the inability of an observer to establish a causal explanation for the phenomena he or she observes or to conceive of them as a unity. To assume that 'this is a complicated situation' is in fact to express our own inability (which could be temporary but is nevertheless our own) to classify our own experience, that is, to produce meaning. So the role of the observer and that of the context is crucial here. For instance, a network of interrelated behaviors in the social domain could be identified as 'disorganized' when, in fact, it could be considered as a different pattern of organization (and thus organization) beyond the organizational patterns that the observer is able to comprehend; or a financial crisis could be signified as a destructive phenomenon for a country or a company, whereas in a wider context, one could speak of a 'wider economic system self-regulation'. The importance of the level of observation and the different conceptions it entails has been extensively described elsewhere and in different scientific contexts (Joslyn, 1990; Luhmann, 1992, 1995; Foerster, 2003; Spencer Brown, 2008), and we do not need to go into details here.

As early as 1980, Humberto Maturana and Francisco Varela concluded that '... compensation of deformation keeps the autopoietic system in the autopoietic space' (Maturana and Varela, 1980, p. 93), thus underlining the importance of deformation for living systems, a deformation that can be attributed to a nontrivial environment. It follows that in life and its matters, we face a twofold situation: systems formation and emergent forms of organization try to oppose complexity, and, if successful in a temporal dimension, their own operation as autopoietic entities triggers the emergence of complexity again, which in turn signifies the need for new complexity-absorbing complex formations, that is, systems. We conclude then that

(1) Systems are solutions. No solution can have a substantial meaning unless there is a problem (or a class of problems) defined.

(2) Systems are problems for their environments. Every time a system emerges, its environment faces a rise in complexity and reacts by upgrading its own complexity.

(3) Autopoietic systems are problems for themselves. Their continuation demands their continuous circular self-destruction and reestablishment; but in order to do so, they need to conceive of their environment as a problem pool using it as a guide so to select their next state.

(4) Autopoietic systems are self-organizing systems, and self-organization is the manifestation of their autopoiesis.

(5) Therefore, complexity creates complexity, and precisely this is evolution. On the other hand, trivialization stops the evolutionary process, forcing systems into disintegration. Systems that manage to trivialize their environment by narrowing it or controlling it excessively (e.g. the case of dictatorships) trigger their own collapse.

In a nutshell, the raison d'etre of the systemic phenomenon (i.e. organization) is statistical entropy. That is why ideas such as double contingency, regulation (as opposed to deregulation) or even organized knowledge have proved fruitful and effective throughout history. Those are manifestations of autopoiesis (and not concrete unchangeable structures) that emerge and constantly change as a result of the continuous transformation of living systems. We should note, though, that we cannot overemphasize here the importance of time: systems take time to compensate for the events they perceive, although it is reasonable to assume that this time is inversely related to their internal complexity, for higher complexity implies higher redundancy and therefore more available internal solutions to external or internal problems, conflicts and contradictions.

Thus, statistical entropy in social systems theory is the reason for creation rather than destruction; quite aptly, Niklas Luhmann notes: 'If such a system [i.e. autopoietic] did not have an environment, it would have to invent it as a horizon of its hetero-referentiality' (Luhmann, 1986, p. 176). We could say that a system that conceives of its environment as an entropic one reacts (if it can) with higher self-organization. And so it evolves--otherwise, evolution halts.

WHAT ABOUT ENERGY?

Of course, as we already saw, entropy has another meaning, the one that pertains to thermodynamics. In order to use the notion of thermodynamic entropy, though, it is necessary to include the notion of energy in social systems. It is obvious that if one wants to speak legitimately about 'thermodynamic entropy' pertaining to social systems, then one cannot avoid the need to refer to the 'energy' of social systems; otherwise, any use of thermodynamic entropy is meaningless. But what could be considered as energy in this context? Clearly, we can no longer confuse energy with information, as is sometimes the case in the humanities. Information is correlated with energy, and energy is needed to gather information, but it does not follow that energy and information are the same variable.

Therefore, we are in need of a parallel theoretical framework, interrelated and coherent with the one we have already unfolded, to see how we are to conceive of the energy of social systems. Admittedly, this is not going to be easy; to detach a notion from a whole theoretical apparatus (as we have tried to do so far) and then rehabilitate it in a different manner is a demanding task that we expect to take time and, hopefully, a lot of debate. So here, we will only present some preliminary thoughts, hoping to initiate a wider discussion on the topic of social energy.

Georgescu-Roegen (1986) has already introduced an approach to the problem of thermodynamic entropy in the economic system, but he firmly denied that his theory implied the conception of economic capital as a form of energy (Gowdy and Mesner, 1998, p. 140). This of course poses an obvious paradox: how can one talk of thermodynamics and entropy and leave the notion of energy out of the discussion? This is to make a distinction, indicate one side of it and disregard the other.

In order to talk about energy, we need to recall that in thermodynamic terms, it denotes potentiality; to be sure, we are talking about what we already denoted as an 'appropriate form' of energy, that is, energy in a form fit to produce work. The nature of that work is contingent, and (the measure of) energy symbolizes the potentiality that is available to whoever controls the energy resource(s). Self-referential systems temporalize their experience, living always in the present and designing a future so as to know how to act 'here-and-now' (Foerster, 1971). That is, those systems develop expectations of the (their) future, conceiving of here-and-now as subject to change, that is, a temporal situation (Luhmann, 1995). It follows that self-referential systems make decisions (i.e. they make selections) based on the knowledge they have gathered and organized (i.e. in their past) in an endless endeavour to trivialize their future. And, for all those operations, systems are triggered by information and they consume energy. Thus, the work we are talking about (i.e. the meaning of energy) is change; self-referential systems, being coupled to their environment, trigger mutual and recursive changes between the 'other' and the 'self', with 'self' (the internal process of meaning construction) being their only criterion.

Now, it is very common to say that some social systems 'exert power and authority', or sometimes 'apply force' over other social systems or on their environment in a more general sense, while attaining certain ends. In the social context, the term power is often used interchangeably with the term force (which of course is not the case in physics), and this is how we will use it here. But what is the meaning of the word 'force' that we so easily use? According to Arnopoulos (2001, p. 20), '... force is a central concept because it serves to produce a change of state ...' But, in order to use force, one needs energy at one's disposal; thus, we deduce that in order to gain the ability to change a state of affairs, one needs energy to apply force (or 'exert power'): 'As Parson's social action theory emphasizes, energy plays a crucial role in society ... Since social action requires energy, active societies can only be those with an excess of energy' (Arnopoulos, 2005, p. 31).

But where does energy stem from, or put differently, which are the energy resources of social systems? Pierre Bourdieu (1989, p. 17) writes, '... these fundamental powers are economic capital (in its different forms), cultural capital, social capital, and symbolic capital, which is the form that the various species of capital assume when they are perceived and recognized as legitimate'. What exactly are those forms of capital? Bourdieu notes, 'Depending on the field in which it functions, and at the cost of the more or less expensive transformations which are the precondition for its efficacy in the field in question, capital can present itself in three fundamental guises: as economic capital, which is immediately and directly convertible into money and may be institutionalized in the form of property rights; as cultural capital, which is convertible, in certain conditions, into economic capital and may be institutionalized in the form of educational qualifications; and as social capital, made up of social obligations ("connections"), which is convertible, in certain conditions, into economic capital and may be institutionalized in the form of nobility' (Bourdieu, 1986, p. 242).

How could we bring those views together in terms of systems theory? We could recall that autopoietic systems try to 'increase the number of choices' (Foerster, 2003, p. 227) at their disposal, conceiving the multiplicity of this contingency as the extent of their freedom (Willke, 1996). Thus, in order for those systems to gain and retain freedom, energy is needed. And that which secures the ability of social (or psychic) systems to perform an act of change could be capital, in its widest sense: social influence for instance, or political power, or money and property (economic capital), or knowledge (cultural capital) and numerous other notions, all of which seem to converge on one thing, exactly that which Pierre Bourdieu defines as symbolic capital. Therefore, we propose to conceive of systemic energy as the symbolic capital that a self-referential system possesses. This assumption, though, needs further clarification.

SYMBOLIC CAPITAL AS SOCIAL ENERGY

Bourdieu suggests we conceive of capital as a 'vis insita, a force inscribed in objective or subjective structures, but (...) also a lex insita, the principle underlying the immanent regularities of the social world' (Bourdieu, 1986, p. 241). Here we encounter the first point of deviation from Bourdieu's theory, because vis insita implies an immanent potentiality in the structures, which remains unexplained as to its causal nature (in a structuralist context). Of course, one could ascribe vis insita to the structure's own efforts to accumulate symbolic capital in whatever form this may take, but still, one premise holds true: energy is not work per se; it always has to be manifested in a wider context, that is, in the environment of any structure. One could object to that claim, pointing out that--in certain cases--the structure may accumulate energy to compensate for its own internal problems and that this has nothing to do with its externalities, but such an assumption would overlook the importance of the environment as a pool of potential energy sources and a horizon of potential events, let alone that it might lead to the chimaera of a perpetual motion machine. Moreover, that would theorize the structure as a system, attracting us back to systems theory.

On the contrary, considering symbolic capital strictly as lex insita brings it right to the centre of our theoretical framework. Energy is contingency, that is to say, potential to produce work. And because a social system is manifested purely by communication, what we mean by work is communicative actions. Those actions, whatever they might be and however they are interpreted by the environment (psychic systems or other social systems), can only be asserted as externalities of the communicating system. It goes without saying that a wide horizon of available systemic externalities signifies a higher ability of the system to reconstitute its boundaries in reaction to its environment's uncertainty. And this is what activates a system to accumulate energy: the fact that double contingency is always present as the knowledge of the never-ending existence of the 'other' and of his (her or 'its') systemic autopoietic nature, and thus as a potential threat to systemic identity. This is to say that the self-referential system is constantly aware of the contingency of its environment, and so, it constantly accumulates and expends energy so as to reform it, aiming at the 'best possible lifeworld' (Popper, 2003).

Thus, entropy may have one more meaning, analogous to that of thermodynamics: the lack of symbolic capital--the lack of an appropriate form of energy. It follows that high thermodynamic entropy signifies the inability of a system to compensate for disturbances, and so (i) the system might disintegrate, or (ii) the system might select (if that is feasible) a scenario of isolation, trying to reduce the information it receives and thus narrowing its horizon of meaning.

ECONOMIC CAPITAL OR MEANING?

Bourdieu, as we already saw, defines three distinct types of symbolic capital, namely, economic capital, cultural capital and social capital. Although the potentiality of economic capital needs no further explanation, we encounter here a second point of deviation from his conception; Bourdieu (1986, p. 265) claims that '... every type of capital is reducible in the last analysis to economic capital ...', a conception that he fails to recognize as such, characterizing it as a '... brutal fact ...'. This factualization of a conception, an objectification of a theoretical apparatus, disregards the problems posed by the contingency of meaning, reduces meaning to a trivial variable and is therefore totally incompatible with contemporary systems theory, especially in the field of self-referential systems. It is the blind spot of structuralism that has been exposed numerous times in the works of systems theorists (e.g. Foerster, 1984, 2002, 2003; Luhmann, 1990, 1995, 2002; Checkland, 1999; Bateson, 2000, 2002; Heylighen and Joslyn, 2001) and by many scholars from a wider context (e.g. Wittgenstein, 1978; Popper, 2003; Heidegger, 2006; Spencer Brown, 2008), and there is no need to go into detail here about the problems and contradictions it entails. In fact, as analysed by Niklas Luhmann (2002, pp. 187-193), it is the blind spot of modernism; interestingly enough, a spot indicated also by Bourdieu himself in his later works (Bourdieu, 2005a, 2005b).

Of course, the problem of survival could be introduced here as an opposing argument; should the system not protect its own existence before everything else? And if so, is that not a brutal problem that can only be solved through the economy? What about people who are starving to death? Such questions disregard the whole concept of social systems altogether; what we are dealing with here is distinct systems, namely, psychic and social systems that emerge together through and because of communication, and the integral operation of communication is meaning construction. Undoubtedly, biological organisms are a prerequisite, and their survival is important. But their mere existence in no way guarantees the formation of highly complex autopoietic structures such as psychic or social systems. And furthermore, the biological death of individuals does not entail the disintegration of social systems (not even in biology, let alone in human societies). It is meaning that stands out as a sine qua non in social systems, not survival; the reconstruction of their identity as sameness, that is, a condensation of the temporal manifestations of the system as a self-referencing, always present, unity.

This deviation from survival to meaning as the main criterion of systems' self-governance has profound consequences. Only by detaching the notion of symbolic capital from the economic deduction can we explain social phenomena that otherwise pose paradoxes. The Christian martyrs, for instance, or the political prisoners in brutal authoritarian regimes who suffer torture and face capital punishment are typical examples of psychic systems that choose meaning over survival; in a strict economic sense, that selection is irrational, and to choose not-being over being cannot be explained unless we consider meaning, rather than survival, as the basic variable of systemic reconstruction.

One can present numerous similar examples (cf. Willke, 1996) offering empirical support for the importance of meaning rather than economic profit or survival. At this point, one may also recall that Max Weber (2006, pp. 55-68) actually suggests conceiving of the production processes in the capitalist context as circular meaning-reconstitution processes; economic profit takes the place of meaning in the Weberian analysis rather than that of a financial telos: the meaning of profit is to reproduce the ability to produce profit--a clear case of an autopoietic system.

Thus, we suggest conceiving of the economic capital as a distinct form of symbolic capital rather than an underlying nexus connecting the other forms of capital. Therefore, we can assume that the 'connecting pattern' between the different forms of capital is meaning.

From this point on, several options are opened to our investigation--and of course the field is so extended that it is impossible to cover it here. But we can try to examine the other two forms of capital suggested by Bourdieu, namely, cultural and social capital.

We already mentioned that information per se is useless--unless it leads to an informed selection, that is, an act of change. But the same holds true for energy (or just power, for that matter). Accumulated energy is meaningless; only a horizon of contingent selections can turn a surplus of energy into something meaningful. And thus, although information and energy are distinct variables, only their coexistence can grant them substance--a mutual influence between information and energy.

The advantage of economic capital lies in its flexibility; as a symbol of credit, it can be easily exchanged, and in different contexts, providing its owner with many different options. However, it is self-evident that there are still numerous possibilities that cannot be brought into existence just by exchanging money for them, namely, those situations that depend on nonrational conceptions, such as 'fatherland' or 'love'. Even knowledge is not something that can be acquired simply by paying for it--it requires personal effort (and no one can do it for you, even if you pay them). For instance, it would be nice to pay the price and instantly learn German, but this cannot be done. You have to pay and also invest your own time and personal effort. To put it clearly, learning German can give you many options, some of which could grant you access to new sources of economic capital. But economic capital alone cannot help you with that. Thus, the flexibility of economic capital is bounded.

And this is also the case with cultural and social capital. They can be transformed into other forms, but they are flexible only in certain contexts. For instance, consider the head of the Catholic Church (or any other religious leader for that matter). He stands in a position that is charged with a surplus of authority ('Divine authority' that supposedly stems right from God). That is, he can legitimately exert his power directly onto the Catholic Church system, in any aspect; but just there, his unconditional authority ends. He can change any state of affairs directly, but outside the boundaries of the system he belongs to, his power diminishes dramatically; it is obvious that if it were otherwise, then, for instance, abortion would be banned (in fact, unthinkable) throughout the world. So the Pope has social capital, a surplus of energy, but even he can transform it into action only within a certain context.

To use a Bourdieu-ish expression, the 'predisposition' of the environment is what guarantees the ability to transform capital into other forms. Furthermore, the same predisposition is what turns capital into energy. The point is that every form of capital can offer potentiality if, and only if, it is manifested in a proper environment. It follows that context is the crucial factor that gives substance to capital: only in a proper context can capital be enfolded with meaning and thus be turned into energy. A proper context is precisely the type of environment (i.e. another system) that has the proper sensors so as to be able to conceive of the capital as such. This is not to say that there are social systems where the notion of symbolic capital does not apply. It merely means that not every form of capital is useful in every context, not even economic capital, although it is designed with flexibility in mind.

CAN SYSTEMIC ENERGY BE CALCULATED?

This is a very wide topic indeed. The problem of finding a way to calculate a system's energy can be roughly analysed as the problem of calculating the impact, that is, the degree of change that a system's selection will have on its own environment. Already, the first problem with this approach should be clear: to attribute a change in the environment to a system's selection would mean to attribute the phenomena to one cause, and thus, one gets back to a strict causal explanation and, inevitably, to an oversimplification of the world. In the case of autopoietic systems, the problem of meaning slips in, creating a situation that renders quantitative measurements problematic. Consider a network of autopoietic systems A, B, C, ... etc. It is nearly impossible to predict the reaction of C if A selects a new state--for example, a new mode of operation. System A may select a small change that could attract C to a profound one, because the latter might want to compensate for an event in its relation to B, caused by A's change; in that case, one could credit system A with high energy levels, but someone else could attribute that energy to B or to the relation between B and C. It is self-evident, then, that one cannot have an 'objective'--so to speak--method to measure the potentiality of an autopoietic system.

On the other hand, it is logical to assume that very big and influential systems, such as powerful national states, 'alpha' cities in the globalized world (cf. Sassen, 2007; Mavrofides and Papageorgiou, 2009; GaWC, 2010) or some NGOs (Willke, 2007), have greater symbolic capital than others. So it might be possible to have a loose indication of those systems' potentiality (i.e. energy) with methods similar to those proposed by SET (Bailey, 1990), but nothing more than that. In general, very big (and ultracomplex) systems could be candidates for such measurements because of their systemic inertia, which is caused by the excessive control they exert over their environment and by their own sophisticated internal differentiation, which is a result of a prolonged interaction with their environment. But those measurements are in fact an effort to predict the influence of the system measured, rather than a measure of a change caused by that system per se. And it is widely accepted that such predictions always rely on statistical data and result in probabilities. For instance, what is the amount of energy that one would have attributed to Lehman Brothers Holdings Inc. in 2005? And what would it be now (2011)? The point is that a system's symbolic capital is a variable that cannot be accurately calculated (in fact, far from it); it cannot be predicted, and it is always accounted for (if at all) in a temporal dimension.

In a few words, to search for a way to accurately calculate the energy of a social system is to search for a way to predict the future. From an epistemological point of view, this could be considered meaningless (Popper, 2003).

CONCLUSIONS AND FUTURE WORK

In this paper, we tried to draw clear distinctions between the different meanings of entropy, namely, between thermodynamic and statistical entropy. We argued that when one refers to entropy as uncertainty in a societal context, one refers specifically to statistical entropy, and that this should be made explicit so as to avoid confusion with the notion of energy. On the other hand, there is a place for the concept of energy in social systems theory, because those systems need to undertake action every time they make a selection. Therefore, the notion of thermodynamic entropy also has a place in the social systems theoretical apparatus, although in a somewhat different theoretical framework, one which brings Pierre Bourdieu's theory of symbolic capital into the theoretical context of meaning developed by Niklas Luhmann, with all due qualifications of course.

Many questions still remain open, but one is prominent among them: does the second law of thermodynamics apply to social systems or not? We ask this because we can observe that a small amount of money, together with a clue (information) about a company's new product, can multiply that amount in the stock market. The increase of capital in that case pertains to investors who spend their money in the hope of obtaining greater symbolic capital (stocks) in return, and sometimes they prove to be right. But where can this end? Does it mean that, in the long run, the total amount of symbolic capital available is growing irreversibly? Our final remark would be that although the co-existence of psychic systems does not guarantee the emergence of social systems, human beings are an a priori condition of every social phenomenon. Therefore, in the last analysis, we come down to the problem of natural resources and their growing scarcity, which simply means that the second law remains valid and therefore raises new questions for the governing institutions.

DOI: 10.1002/sres.1084

ACKNOWLEDGEMENT

Thanks to Evangelia Dimaraki Ed.D., Department of Cultural Technology and Communication, University of the Aegean, for her useful remarks during the revision of this paper.

Received 23 March 2010

Accepted 28 January 2011

REFERENCES

Arnopoulos P. 2001. Sociophysics and sociocybernetics: an essay on the natural roots and limits of political control. In Sociocybernetics: Complexity, Autopoiesis and Observation of Social Systems, Geyer F, van der Zouwen J. (eds.). Greenwood Press: Westport, CT; 17-40.

Arnopoulos P. 2005. Sociophysics: Cosmos and Chaos in Nature and Culture. Nova Science Publishers Inc: Hauppauge, New York.

Ashby RW. 1957. An Introduction to Cybernetics. Chapman & Hall Ltd: London.

Bailey KD. 1990. Social entropy theory: an overview. Systems Practice 3(4): 365-382.

Bailey KD. 1997a. The autopoiesis of social systems: assessing Luhmann's theory of self-reference. Systems Research and Behavioral Science 14(2): 83-100.

Bailey KD. 1997b. System entropy analysis. Kybernetes 26(6/7): 674-688.

Bailey KD. 2006a. Sociocybernetics and social entropy theory. Kybernetes 38(3/4): 375-384.

Bailey KD. 2006b. Living systems theory and social entropy theory. Systems Research and Behavioral Science 23: 291-300.

Bateson G. 2000. Steps to an Ecology of Mind. University of Chicago Press: Chicago.

Bateson G. 2002. Mind and Nature--A necessary unity. Hampton Press: Cresskill, NJ.

Bourdieu P. 1986. The forms of capital. In Handbook of Theory and Research for the Sociology of Education, Richardson JG. (ed.). Greenwood Press: New York/ Westport/London; 241-258.

Bourdieu P. 1989. Social space and symbolic power. Sociological Theory 7(1): 14-25.

Bourdieu P. 2005a (2001). Science de la science et reflexivite, Cours du College de France 2000-2001 (Greek translation). Patakis: Athens.

Bourdieu P. 2005b (2004). Esquisse pour une autoanalyse (Greek translation). Patakis: Athens. Giannopoulou Efi (Trans).

Checkland P. 1999. Systems Thinking, Systems Practice. John Wiley and Sons Ltd: Chichester, West Sussex.

GaWC. 2010. Globalization and World Cities (GaWC) Study Group and Network, Access: 16/03/2010, link: http://www.lboro.ac.uk/gawc/citylist.html

Georgescu-Roegen N. 1986. The entropy law and the economic process in retrospect. Eastern Economic Journal XII(1): 3-25.

Gowdy J, Mesner S. 1998. The evolution of Georgescu-Roegen's bioeconomics. Review of Social Economy LVI(2): 136-156.

Heidegger M. 2006 (1926), Being and Time. John Macquarrie, Edward Robinson, (trans). Blackwell Publishing: London.

Heylighen F, Joslyn C. 2001. Cybernetics and second-order cybernetics. In Encyclopedia of Physical Science & Technology, Meyers RA. (ed.). Academic Press: New York; 155-170.

Ho M-W. 2010. What is (Schrodinger's) Negentropy?, link: http://www.ratical.org/co-globalize/MaeWanHo/negentr.pdf

Joslyn C. 1990. On the semantics of entropy measures of emergent phenomena. Cybernetics and Systems 22(6): 631-640.

Luhmann N. 1986. The autopoiesis of social systems. In Sociocybernetic Paradoxes. Geyer F, van der Zouwen J. (eds.). Sage: London; 172-192.

Luhmann N. 1990. Essays on Self-Reference. Columbia University Press: New York.

Luhmann N. 1992, Operational closure and structural coupling: the differentiation of the legal system. Cardozo Law Review 13: 1419-1441.

Luhmann N. 1995. Social Systems. Stanford University Press: CA.

Luhmann N. 2002. Theories of Distinction. Stanford University Press: CA.

Maturana HR, Varela FJ. 1980. Autopoiesis and Cognition: The Realization of the Living. Reidel Publishing Company: Boston.

Mavrofides T, Papageorgiou D. 2009. The participation of a region in the global network: integration or exclusion as consequences of the use of ICT, conference proceedings: 2nd Greco-Russian Social & Scientific Forum, 14-18 June, St. Petersburg, Russia.

Popper K. 1957. Irreversibility; or entropy since 1905. The British Journal for the Philosophy of Science 8(30): 151-155.

Popper K. 2003 (2002). Alle Menschen sind Philosophen (Greek edition). Melani Editions: Athens. Michalis Papanikolaou (Trans).

Quine WvO. 1980. From a logical point of view. Harvard University Press: Cambridge.

Sassen S. 2007. A Sociology of Globalization. W. W. Norton & Company, Inc: New York.

Schrodinger E. 1944. What is Life?, access: 10-10-2010, link: http://witsend.cc/stuff/books/Schrodinger-What-is-Life.pdf

Shannon CE. 1948. A mathematical theory of communication. Bell System Technical Journal 27(July and October): 379-423 and 623-656.

Schneider TD. 2010. Information Theory Primer, access: 08/02/2010, link: http://www.ccrnp.ncifcrf.gov/~toms/papers/primer/primer.pdf

Spencer Brown G. 2008. Laws of Form. Bohmeier Verlag: Leipzig.

Weber M. 2006. The Protestant Ethic and the Spirit of Capitalism (Greek edition). Gutenberg: Athens.

Wiener N. 1961. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press--John Wiley & Sons, Inc.: New York, London.

Willke H. 1996 (1993). Systemtheorie: Eine Einführung in die Grundprobleme der Theorie sozialer Systeme (Greek edition). Kritiki. Livos Nikolaos (Trans).

Willke H. 2007. Smart Governance--Governing the Global Knowledge Society. Campus Verlag: Frankfurt.

Wittgenstein L. 1978. Tractatus Logico-Philosophicus (Greek translation). Papazisis Editions: Athens. Kitsopoulos Thanassis (Trans).

von Bertalanffy L. 1968. General System Theory--Foundations, Development, Applications. Braziller: New York.

von Foerster H. 1971. Perception of the Future and the Future of Perception, access: 10/04/2006, link: http://grace.evergreen.edu/~arunc/texts/cybernetics/heinz/perception/perception.html

von Foerster H. 1984. Observing Systems. Intersystems Publications: Seaside California.

von Foerster H. 2003. Understanding Understanding: Essays on Cybernetics and Cognition. Springer-Verlag: New York.

von Foerster H, Poerksen B. 2002. Understanding Systems--Conversation on Epistemology and Ethics. Carl-Auer-Systeme Verlag: Heidelberg.

(1) And would also imply the existence of a perpetual motion machine (Popper, 1957, p. 152).

(2) The reader at this point should keep in mind that the term data is often preferred over the term information, because usually, the latter is understood to carry interpretative connotations.

(3) We chose to say that communication is 'triggered' by, rather than 'constituted' by, a set of symbols, for reasons that will hopefully become clear in the following pages of this article.

(4) Those processes are also known as Markov processes.

(5) Informational entropy is often referred to as negentropy or negative entropy. One possible explanation for this, adopted by numerous authors (Ho, 2010), is the minus sign that Shannon put before his formula. The only reason for that sign seems to be that the logarithms of probabilities always yield a non-positive number, because a probability is either 1 (i.e. certainty, and log 1 = 0) or less than 1. Schrodinger (1944, p. 26) also used the term negative entropy, but stated explicitly that he was referring to free energy. On the same page, Schrodinger rewrites Boltzmann's equation as -(entropy) = k log (1/D) [correction made here after initial online publication], thus introducing a negative sign; apparently for the same reasons, Shannon did the same four years later. The term is also used by Bertalanffy (1968, p. 42) as 'information' and by Wiener (1961, p. 11), for whom information is the 'negative' of entropy. In our discussion here, we chose not to use the term negentropy in order to stay close to Shannon's original approach and avoid introducing possible ambiguities.
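For the reader's convenience, the two expressions discussed in this note can be written out (a standard rendering in LaTeX notation; K and k are positive constants and D is the measure of disorder):

    H = -K \sum_{i=1}^{n} p_i \log p_i, \qquad 0 < p_i \le 1, \quad \log p_i \le 0 \;\Rightarrow\; H \ge 0
    -(\mathrm{entropy}) = k \log \tfrac{1}{D} \quad\Longleftrightarrow\quad \mathrm{entropy} = k \log D

The leading minus sign therefore only keeps H non-negative; it does not make entropy a 'negative' quantity.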

(6) Those symbols could also represent the states of a system, in this case, those of the source of the message.

(7) If you try to predict the outcome of a single toss, the entropy is at its maximum. In Shannon's own words, 'Thus, only when we are certain of the outcome does H ... "(entropy)" ... vanish. Otherwise H is positive ... H is a maximum and equal to log n when all pi are equal (i.e. 1/n). This is also intuitively the most uncertain situation' (Shannon, 1948, p. 11). So Shannon explicitly equates maximum uncertainty with maximum entropy.
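A small numerical check of Shannon's remark (our own illustration, using base-2 logarithms):

    import math

    def H(probs):
        """Shannon entropy in bits; zero-probability terms contribute nothing."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(H([0.5, 0.5]))   # fair coin: 1.0 = log2(2), the maximum for n = 2
    print(H([0.9, 0.1]))   # biased coin: ~0.47, the outcome is easier to guess
    print(H([1.0]))        # certain outcome: 0.0, H 'vanishes'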

(8) Maturana and Varela use the term perturbation to denote an event that causes a structural change: '... we can view these perturbing independent events as inputs, and the changes of the machine that compensate these perturbations as outputs' (1980, p. 82). Thus, the term perturbation could be considered an approximate synonym of the term event as used by Niklas Luhmann when he refers to structural changes, that is, changes to the perception of the environment (see above Luhmann's reference on the correlation of event to information and structural change). Luhmann himself, though, seemed to prefer the term event.

(9) Thus the term 'negative entropy'.

(10) Shannon denotes the conditional entropy as H_y(x), but in other texts, it is denoted as H(x|y), so formula (1) could equivalently be written as R = H(x) - H(x|y).
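To illustrate the notation with a toy example (the joint distribution below is invented and does not describe any actual channel):

    import math

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical joint distribution p(x, y) for a sent symbol x and a received symbol y.
    joint = {('0', '0'): 0.4, ('0', '1'): 0.1,
             ('1', '0'): 0.1, ('1', '1'): 0.4}

    p_x = {'0': 0.5, '1': 0.5}   # marginal distribution of the source
    p_y = {'0': 0.5, '1': 0.5}   # marginal distribution of the receiver

    H_x = H(p_x.values())
    # Conditional entropy H(x|y) = sum over y of p(y) * H(x given that y)
    H_x_given_y = sum(p_y[y] * H([joint[(x, y)] / p_y[y] for x in p_x]) for y in p_y)

    R = H_x - H_x_given_y        # rate of actual transmission
    print(H_x, H_x_given_y, R)   # 1.0, ~0.72, ~0.28 bits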

Thomas Mavrofides (1) *, Achilleas Kameas (2), Dimitris Papageorgiou (1) and Antonios Los (1)

(1) University of the Aegean, Mytilene, Greece

(2) Hellenic Open University, Patras, Greece

* Correspondence to: Thomas Mavrofides, University of the Aegean, Terpandrou 5 Str., 81100 Mytilene, Greece.

E-mail: blacktom@aegean.gr

([dagger]) This article was published online on 2 March 2011. An error was subsequently identified in the ordering of author first and surnames and some additional minor amendments have been made to the text. This notice is included in the online and print versions to indicate that both have been corrected 11 March 2011.