

"Mathematics is what begins, like the Nile, in humbleness and ends in magnificence."

In the preface to his book Dynamic Programming, R. Bellman offers a highly interesting interpretation of the role of mathematical models in scientific knowledge: "The researcher's goal is to understand the phenomena around him, which he analyses in order to show that he is thoroughly familiar with them; he must be capable of foreseeing their evolution and, to do so, he needs quantitative measures. To make satisfactory quantitative estimates, a device is needed which leads to figures, and for this a mathematical model is necessary. It is only natural to assume that the more accurately the model mirrors the real world, the more precise its indications. But the real world is highly complex, and if we try to include too many facets of reality in our mathematical model, we end up with unmanageably sophisticated equations. Therefore, since no mathematical model can provide a comprehensive description of reality, we have to content ourselves with successive attempts to know the real world through increasingly complete series of models."

The famous physicist A.S. Eddington gave a very poetic image of this dialectic of models: "In short, the physicist draws a detailed plan of the atom and then gradually deletes each detail. What is left is the very atom of modern physics!" This image illustrates the ongoing dynamics of models, rooted in the fundamental contradiction of modelling: the model does not coincide with the object, yet it serves to make the object known, a knowledge impossible to grasp in a single vision, attained only through the permanent improvement of models, through the dialectic negation of one model by a more complete one.

Drafting mathematical models in various sciences leads to the idea of approximating an object by a series of models (instead of a single approximation comprising the entire complexity of the object), and each such model serves as an intermediary which shows how objective reality is reflected by human thinking. The fact that any such model is short-lived reflects precisely the dialectic nature of knowledge, which proceeds in stages, each stage representing a step forward compared to the previous one.

Any model which is replaced by another transfers many of its features to its successor, a process which characterizes the dialectic progress of knowledge, thus taking us a step closer to an absolute truth that is impossible to grasp in its entirety. On the other hand, the fact that some mathematical models have a short life is somewhat sad, but encouraging at the same time, because the need to constantly design new models boosts mathematical progress, in the sense that such new models generate new theories which break away from the mathematical content from which they originated and become independent mathematical disciplines. Suffice it to mention information theory and game theory, which are general and abstract and may be widely used in various fields of activity.

The role of mathematics in scientific knowledge is illustrated by mathematical models which, due to their content, belong to other sciences and not mathematics per se.

Failure to understand the dialectics of mathematical progress starting from mathematical models in various fields leads to conclusions such as those formulated by the physicist E.P. Wigner: "The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve". This conclusion is natural if mathematics is seen as a game, as an arbitrary construction driven by aesthetic purposes. Here is how the above-mentioned physicist continues in the same context: "One may say that mathematics is the science of interesting operations with concepts and rules invented precisely for this purpose... The largest part of the more advanced mathematical concepts... was invented to provide suitable subjects on which the mathematician can exercise his ingenuity and sense of formal beauty. The mathematician can formulate only a few interesting theorems without defining concepts beyond those contained in the axioms, concepts defined precisely to permit ingenious logical operations leading to highly general results".

These observations reflect a complete disregard for the permanent relationship between mathematics and objective reality, for the emergence of mathematical theories from the transfer of mathematics to various other fields, and for the dialectics of mathematical progress based on its own internal laws, which themselves stem from reality.

Mathematics "applies" to the outer world precisely because it derives from it, because it expresses a part of the relationships existing in the material world.

As for the applications of mathematics in economics, one may claim that they have a long tradition, since the mathematical modelling of economic laws based on quantitative features is by now well known. Mathematical modelling offers numerous opportunities for solving both broad problems of economic synthesis and the ordinary tasks of production organization.

Economic laws, which express the essential internal relations of economic phenomena, the main features and trends of economic development, require a quantitative establishment of all the processes and correlations which they mirror.

In setting the quantitative aspects of economic laws, a significant part is played by mathematical models whose value is given by the hypotheses generating them, by the principles according to which essential links are selected during the drafting process, by the extent to which hypotheses and simplifications reflect reality. For these reasons, it is necessary, before an economic law is modelled, to perfect the system of collection of data necessary for the economic and mathematical analysis, so that any proposed methods be used with maximum efficiency. Well-organised, oriented and real economic information, the accurate qualitative analysis of economic categories, are the factors which fully dictate the selection of various methods of mathematical processing of initial data. Therefore, in order to sketch theoretical models, the starting point is the adequate reflection of real and complex data referring to the phenomenon which is to be modelled.

Models take the form of one or several relationships between two or more variables; the selection of the variables resulting from the analysis of the phenomenon in question is of utmost importance. This selection is usually difficult, since the behaviour of a phenomenon is influenced by a number of variables, both dependent and random. We have to opt for one or several quantities which, in connection with the variable under analysis, are deemed determining for the level and behaviour of that variable.

Since the probability field is a global measure of the determinism inherent in a given phenomenon, probability theory became the foundation on which one of the most revolutionary scientific models, that of information transfer, was built.

We will illustrate how the intuitive concept of information about reality is turned into mathematics; this concept underlies all modelling processes that rely on the mathematical theory of information.

Let us consider a real phenomenon which can be modelled by a finite probability field $X$ whose elementary events have probabilities $p_1, p_2, \dots, p_n$. We start from the assumption that everything we know about the phenomenon is this probabilistic model; during an experiment we expect an event with higher probability to occur, but we do not know in advance which elementary event will take place. By learning which elementary event actually occurred, we obtain certain additional information about the phenomenon under study, and this information can be characterized quantitatively. Denote by $I_k$ the amount of additional information obtained when, in an experiment with the phenomenon under study, elementary event $k$ takes place. The average value $\sum_{k=1}^{n} p_k I_k$ of the random variable $(I_k)_{k \in \{1, 2, \dots, n\}}$, which we do not yet know and whose very existence (that is, whether the concept of average information can be given a mathematical form) is not yet established, is called the entropy.

Assuming that such an entropy exists, it should naturally satisfy the following hypotheses:

1) entropy is a function of $n$ variables, $H(p_1, p_2, \dots, p_n)$, defined for every system of positive numbers $(p_1, p_2, \dots, p_n)$ with $\sum_{k=1}^{n} p_k = 1$, $n \in \{2, 3, \dots\}$;

2) the function $H(p_1, p_2, \dots, p_n)$ is continuous and symmetric in its arguments;

3) if $n \geq 3$ and $q = p_1 + p_2 > 0$, then:

$H(p_1, p_2, \dots, p_n) = H(q, p_3, p_4, \dots, p_n) + q\, H\!\left(\tfrac{p_1}{q}, \tfrac{p_2}{q}\right)$

While the justification of hypotheses 1) and 2) is obvious, hypothesis 3) requires some explanation. Suppose an experiment is conducted and the information about the elementary event that actually occurred reaches us in two steps. First, we learn whether an event with index $k \in \{3, 4, \dots, n\}$ occurred or not. On average, this step yields the amount of information $H(q, p_3, \dots, p_n)$, because at this stage we are dealing with a probability field whose elementary events have probabilities $q, p_3, \dots, p_n$.

Second, if no event with index $3, 4, \dots, n$ took place, we find out which $k \in \{1, 2\}$ occurred. Here we are dealing with a probability field whose elementary events have probabilities $\tfrac{p_1}{q}, \tfrac{p_2}{q}$, so on average we obtain the additional information $H\!\left(\tfrac{p_1}{q}, \tfrac{p_2}{q}\right)$. In this latter case, the total average amount of information is therefore:

$H(q, p_3, \dots, p_n) + H\!\left(\tfrac{p_1}{q}, \tfrac{p_2}{q}\right).$

The former case occurs with probability $1 - q$ and the latter with probability $q$, so on the whole the average amount of information, i.e. $H(p_1, p_2, \dots, p_n)$, is

$(1-q)\, H(q, p_3, \dots, p_n) + q \left[ H(q, p_3, \dots, p_n) + H\!\left(\tfrac{p_1}{q}, \tfrac{p_2}{q}\right) \right] = H(q, p_3, \dots, p_n) + q\, H\!\left(\tfrac{p_1}{q}, \tfrac{p_2}{q}\right),$

which is exactly hypothesis 3.
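The grouping identity derived above can be checked numerically once the explicit form of the entropy (formula (*) below, with $c = 1$, $a = 2$) is available. This is only a sanity check; the probability values are arbitrary:

```python
import math

def H(*p):
    """Shannon entropy (base 2) of a probability vector."""
    return -sum(pk * math.log2(pk) for pk in p if pk > 0)

# An arbitrary probability field with n = 4 elementary events.
p1, p2, p3, p4 = 0.1, 0.3, 0.4, 0.2
q = p1 + p2  # group the first two outcomes, as in hypothesis 3

lhs = H(p1, p2, p3, p4)
rhs = H(q, p3, p4) + q * H(p1 / q, p2 / q)
print(abs(lhs - rhs) < 1e-9)  # the two sides agree up to rounding
```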

Hypotheses 1, 2 and 3 constitute a mathematical model of the intuitive concept of information, from which the following form is deduced mathematically:

(*) $H(p_1, p_2, \dots, p_n) = -c \sum_{k=1}^{n} p_k \log_a p_k$

(usually $c = 1$ and $a = 2$).
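Formula (*) with the usual constants $c = 1$, $a = 2$ can be written as a minimal Python sketch (the function name is illustrative):

```python
import math

def entropy(probs, a=2, c=1):
    """Entropy via formula (*): H = -c * sum_k p_k * log_a(p_k)."""
    return -c * sum(p * math.log(p, a) for p in probs if p > 0)

fair_coin = entropy([0.5, 0.5])   # one bit per toss
certain   = entropy([1.0])        # a certain event carries no information
uniform4  = entropy([0.25] * 4)   # maximal for 4 outcomes: two bits
print(fair_coin, certain, uniform4)
```

Note that the entropy is largest for the uniform distribution, matching the intuition that maximal uncertainty yields maximal average information.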

That this determination of the amount of information faithfully reflects intuition is readily confirmed: by purely mathematical means, formula (*) yields a series of entropy properties with natural intuitive interpretations. The relevance of entropy, however, lies mainly in the essential role it plays in many other mathematical models. One example is self-decoding codes, in which entropy plays a vital part. Here is the idea: let $X$ be a finite set, and to each $x \in X$ assign a word $c_x$ made up of two letters' symbols (we encode $X$ with an alphabet of two letters). The code deciphers itself if $c_x \neq c_y$ for any distinct $x, y \in X$, and if no $c_x$ is the initial segment of another word $c_y$. This means that any concatenation of codewords determines the sequence of words uniquely; in a "text", the words separate themselves. For instance, if $X = \{1, 2, 3\}$ and $c_1 = (1)$, $c_2 = (01)$, $c_3 = (001)$, then the sequence 10101111010010101 must decompose as $c_1 c_2 c_2 c_1 c_1 c_1 c_2 c_3 c_2 c_2$.
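The self-decoding property can be illustrated with a short sketch: a hypothetical greedy decoder which works for the example code precisely because no codeword is the initial of another, so words "come by themselves" while scanning the text:

```python
# The example code: c_1 = 1, c_2 = 01, c_3 = 001 (no codeword is a
# prefix of another, so the code deciphers itself).
code = {"1": 1, "01": 2, "001": 3}

def decode(bits, code):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in code:          # a complete codeword has been read
            out.append(code[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a codeword")
    return out

print(decode("10101111010010101", code))
# → [1, 2, 2, 1, 1, 1, 2, 3, 2, 2], i.e. c1 c2 c2 c1 c1 c1 c2 c3 c2 c2
```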

If we denote by $n_x$ the length of the word $c_x$, i.e. the number of letters it contains, then the necessary and sufficient condition for the existence of a self-decoding (two-letter) code for $X$ is Kraft's inequality:

$\sum_{x \in X} 2^{-n_x} \leq 1$
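For a two-letter alphabet this condition reads $\sum_{x} 2^{-n_x} \leq 1$ (Kraft's inequality), and it is easily checked for the lengths of the example code:

```python
# Codeword lengths n_x of c_1 = 1, c_2 = 01, c_3 = 001.
lengths = [1, 2, 3]

# Kraft's inequality: sum of 2^(-n_x) must not exceed 1 for a
# self-decoding (prefix-free) binary code to exist with these lengths.
kraft_sum = sum(2 ** -n for n in lengths)
print(kraft_sum, kraft_sum <= 1)   # 0.875 True
```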

If $(p(x),\ x \in X)$ is a probability field on $X$, the average length $L$ of the (self-decoding) code is: $L = \sum_{x \in X} n_x\, p(x)$

If the Cartesian product $X \times X \times \dots \times X$ ($n$ times) is encoded with a self-decoding code and $L_n$ is the average length of this code, then $\frac{L_n}{n}$ is the average length of the code per symbol of $X$.

If $L(X) = \inf \frac{L_n}{n}$, the infimum being taken over $n \in \{1, 2, \dots\}$ and over all self-decoding codes for $X$, then $L(X)$ is the minimum average length with which the probability field $X$ can be encoded (through repetition), and Fano's fundamental theorem establishes the equality: $L(X) = H(X) = -\sum_{x \in X} p(x) \log_2 p(x)$
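The bound $L \geq H(X)$ underlying the fundamental theorem above can be observed directly on the example code; the probabilities below are illustrative:

```python
import math

# Illustrative probability field on X = {1, 2, 3} and the codeword
# lengths of the example code c_1 = 1, c_2 = 01, c_3 = 001.
p = {1: 0.5, 2: 0.25, 3: 0.25}
n = {1: 1, 2: 2, 3: 3}

# Average code length L = sum n_x p(x) never falls below the entropy H(X);
# repetition coding can approach H(X) in the limit, per the theorem.
L = sum(n[x] * p[x] for x in p)
H = -sum(px * math.log2(px) for px in p.values())
print(L, H)   # L = 1.75 >= H = 1.5 for this (non-optimal) code
```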

Therefore, the minimum average length with which the probability field $X$ can be encoded is precisely the entropy of $X$.

The conclusion to be drawn from the large class of real processes and phenomena that can be modelled within the mathematical theory of information is that synthesizing mathematical models are not arbitrary constructs of the human mind, but genuine Roentgen machines for studying reality.

Radu Despa, Ion Otarasanu, Catalina Visan (*)

(*) Radu Despa is Associate Professor at URA.

Ion Otarasanu is Teacher at A. D. Xenopol College.

Catalina Visan is Assistant Professor at URA.
COPYRIGHT 2017 Romanian-American University
Publication: Romanian Economic and Business Review, Sep 22, 2017