Fractal dimension versus process complexity.

1. Part I: Theoretical Setting

In the first part of the paper, we will define the basic notions we work with. In particular, we will fix a computational model: small Turing machines with a one-way infinite tape. For these machines, we will define the so-called space-time diagrams, which are a representation of the memory state throughout time. For these diagrams, we will define a notion of fractal dimension. Next, we prove some theoretical results about this dimension.

1.1. Complexity Measures. Complexity measures are designed to capture complex behavior and to quantify how complex, according to that measure, a particular behavior is. It is to be expected that different complexity measures, possibly from entirely different fields, are related to each other in a nontrivial fashion. This paper explores the relation between two rather different but widely studied concepts and measures of complexity. On the one hand, there is a geometrical framework in which the complexity of spatiotemporal objects is measured by their fractal dimension. On the other hand, there is the standard framework of computational (resource) complexity, where the complexity of algorithms is measured by the amount of time and memory they take to be executed.

The relation between the two frameworks is as follows. We start in the framework of computations and algorithms and, for simplicity, assume that they can be modeled as using discrete time steps. Now, suppose we have some computer τ that performs a certain task τ(x) on input x. We can assign a spatiotemporal object to the computation corresponding to τ(x) as follows.

We look at the spatial representation σ_0 of the memory when τ starts on input x. Next, we look at σ_1: the spatial representation of the memory after one step in the computation, and so forth for σ_2, σ_3, .... Then, we "glue" these spatial objects together into one object Σ(τ, x) by putting each output in time next to the other: <σ_0, σ_1, σ_2, ...>. Each σ_i can be seen as a slice of Σ(τ, x): the memory at one particular time i in the computation. This is why we call Σ(τ, x) the space-time diagram of τ(x). It is for these spatiotemporal objects Σ(τ, x), and in particular in the limit of x going to infinity, that we can sometimes compute or estimate the fractal dimension, which we shall denote by d(τ).

One can set this up in such a way that d(τ) becomes a well-defined quantity. Thus, we have a translation from the computational framework to the geometrical framework. Next, one can investigate the relation between these two frameworks and, in particular, whether complex algorithms (in terms of time and space complexity) get translated to complex (in the sense of fractal dimension) space-time diagrams.

It is this main question that is being investigated in this paper. The computational model that we choose is that of Turing machines. In particular, we look at small one-way infinite Turing machines (TMs) with just two or three states and a binary tape alphabet.

For these particular machines, we define a notion of dimension along the lines sketched above. In exhaustive computer experiments, we compute the dimensions of all machines with at most three states. Among the various relations that we uncover is the fact that such a TM runs in at most linear time if the corresponding dimension is 2. Likewise, if a TM (in general) runs in superpolynomial time and uses polynomial space, we see that the corresponding dimension is 1.

Admittedly, the way in which fractal geometry measures complexity is not entirely clear, and one could even sustain the view that fractal geometry measures something else entirely. Nonetheless, dimension is clearly related to degrees of freedom and as such related to an amount of information storage.

In [1], space-time diagrams of Turing machines and one-dimensional cellular automata were investigated in the context of algorithmic information theory. Notably, an incompressibility test on the space-time diagrams led to a classification of the behavior of CAs and TMs, thereby identifying nontrivial behavior [2]. The same type of space-time diagrams was also investigated in connection with two other seminal measures of complexity [3-5] related to Kolmogorov complexity, namely, Solomonoff's algorithmic probability [2, 6] and Bennett's logical depth [7, 8]. Interesting connections between fractal dimension and spatiotemporal parameters have also been explored in the past [9-11], delivering a range of applications in landscape analysis and even medicine in the study of time series.

The results presented in this paper were found by computer experiments and proven in part. To the best of our knowledge, this is the first time that a relation of this nature between computational complexity and fractal geometry has been studied.

Outline. The current paper is naturally divided into three parts. In the first part (Sections 1.2-1.4), we define the ideas and concepts and prove various theoretical results. In the second part (Sections 2.1-2.2), we describe our experiment and its results, investigating those cases where none of our theoretical results apply. Finally, in the third part, we present a literature study in which we mention various results that link fractal dimension to other complexity notions.

In more detail, in Section 1.2, we describe the kind of TMs we will work with. This paper can be seen as part of a larger project where the authors mine and study the space of small TMs. As such, various previous results and data sets could be reused in this paper, and in Section 1.2 we give an adequate description of the data sets and results used.

In Section 1.3, we revisit the box-counting dimension and define a suitable similar notion of fractal dimension d(τ) for TMs τ. We prove that d(τ) = 2 in case τ runs in at most linear time in the size of the input. Next, in Section 1.4, we prove an upper and a lower bound for the dimension of Turing machines. The Upper Bound Conjecture is formulated to the effect that the proven upper bound is actually always attained. For special cases, this can be proved. Moreover, under some additional assumptions, this can also be proven in general. In our experiment, we tested whether, in our test space, the sufficient additional assumptions were also necessary ones, and they turned out to be so.

Section 2.1 describes how we performed the experiment, what difficulties we encountered, and how they were overcome; some preliminary findings are also given. The main findings are presented in Section 2.2.

We conclude the paper with Section 3.1, where we present various results from the literature that link different notions of complexity, so as to put our results within this panorama.

1.2. The Space of Small Turing Machines. As mentioned before, this paper forms part of a larger project where the authors exhaustively mine and investigate a set of small Turing machines. In this section, we will briefly describe the raw data that was used for the experiments in this paper and refer for details to the relevant sources.

1.2.1. The Model. A TM can be conceived as both a computational device and a dynamical system. In our studies, a TM is represented by a head moving over a tape consisting of discrete tape cells where the tape extends infinitely in one direction. In our pictures and diagrams, we will mostly depict the tape as extending infinitely to the left. Each tape cell can contain a symbol from an alphabet. Instead of symbols, we speak of colors and in the current paper we will work with just two colors: black and white.

The head of a TM can be in various states as it moves over the cells of the tape. We will refer to the collection of TMs that use n states and k symbols/colors as the (n, k)-space of TMs. We will always enumerate the states from 1 to n and the colors from 0 to k − 1. In this paper, we work with just two symbols, so that we represent a cell containing a 0 with a white cell and a cell containing a 1 with a black cell.

A computation of a TM proceeds in discrete time steps. The tape content at the start of the computation is called the input. By definition, our TMs will always start with the head at the position of the first tape cell, that is, the tape cell next to the edge of the tape; in our pictures, this is normally the rightmost tape cell. Moreover, by definition, our TMs will always commence their computation in the default start State 1.

A TM τ in (n, k) space is completely specified by its transition table. This table tells what action the head should perform when it is in State j (1 ≤ j ≤ n) at some tape cell c and reads there some symbol i (0 ≤ i < k). Such an action in turn consists of three aspects: changing to some state (possibly the same one); moving the head either one cell left or one cell right, never staying still; and writing some symbol at c (possibly the same symbol as before). Consequently, each (n, k)-space consists of (2·n·k)^{n·k} many different TMs. We number these machines according to Wolfram's enumeration scheme [12, 13], which is similar to the lexicographical enumeration.

Clearly, each TM in (n, k) space is also present in (m, k) space for m ≥ n, by just not using the extra states, since they are "inaccessible" from State 1. Many rules in an (n, k) space are trivially equivalent in the computational sense, up to a simple transformation of the underlying geometry, for example, by relabeling states or by reflection or complementation, and hence are identical for all practical purposes. In the literature, machines that have such equivalents are sometimes called amphicheiral; we will sometimes refer to them as machine twins.

We say that a TM halts when the head "falls off" the tape on the right-hand side, in other words, when the head is at the rightmost position and receives an instruction to move right. The tape configuration upon termination of a computation is called the output.

We will refer to the input consisting of the first m tape cells being black on an otherwise white tape as the input m (this is in slight discrepancy with the convention in [14]). In this context, a function is a map sending an input m to some output tape configuration. We call the function whose output is always identical to the input the tape identity function.
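To make the conventions of this section concrete (the one-way tape, the halting convention, and the unary input), the following is a minimal Python sketch of such a machine. The dictionary encoding of the transition table, the helper names, and the example machine are our own illustration and do not follow Wolfram's enumeration scheme of [12, 13].

```python
# Minimal sketch of a one-way-tape TM: states 1..n, colors {0, 1}, cells
# indexed by their distance from the tape edge (cell 0 is at the edge),
# head starting on cell 0 in State 1, input m = first m cells black, and
# halting when the head "falls off" the edge.
def run_tm(table, m, max_steps=10**6):
    """table maps (state, color) -> (new_state, write, move), where
    move = +1 is away from the edge and -1 is towards the edge.
    Returns (diagram, t, s, N) on halting, or None within max_steps."""
    tape = {i: 1 for i in range(m)}          # unary input: m black cells
    state, pos, furthest = 1, 0, 0
    diagram = []                             # rows of the space-time diagram
    for step in range(max_steps):
        furthest = max(furthest, pos)
        diagram.append(dict(tape))
        new_state, write, move = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        state, pos = new_state, pos + move
        if pos < 0:                          # fell off the edge: halt
            t = step + 1                     # runtime t(tau, m)
            s = furthest + 1                 # cells used s(tau, m)
            N = sum(sum(r.values()) for r in diagram)   # black cells
            return diagram, t, s, N
    return None                              # no halt observed

# A hypothetical "sweep and return" machine: erase the input moving away
# from the edge, then write black cells all the way back and fall off.
sweep = {(1, 1): (1, 0, +1), (1, 0): (2, 1, -1),
         (2, 0): (2, 1, -1), (2, 1): (2, 1, -1)}
print(run_tm(sweep, 5)[1:])   # (t, s, N) for input 5
```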

By Rice's theorem, it is in principle undecidable whether two TMs compute the same function. Nonetheless, for spaces (n, 2) with n small, no universal computation is yet present [15, 16]. In [14], the authors completely classify the TMs in (3, 2) space according to the functions they compute, taking pragmatic approaches, which may introduce small errors, to deal with undecidability and unfeasibility issues.

1.2.2. Space-Time Diagrams. As previously mentioned, a central role in this paper is played by the so-called space-time diagrams. A space-time diagram for some computation is nothing more than the joint collection of consecutive memory configurations. We have included a picture of the space-time diagrams of a particular TM for inputs 1 to 14 in Figure 1.

Since these space-time diagrams are such a central notion in this paper, let us briefly comment on Figure 1. The top row of each of these fourteen diagrams always represents the input tape configuration of the TM. We have chosen to depict the space-time diagrams of our TM on inputs 1 to 14. The rightmost cell in each diagram does not actually represent a tape cell. Rather, it represents the end of the tape, so we depict it with a different color/grey-tone.

Remember that the computation starts with the head of the TM in State 1 in the rightmost cell. Each lower row represents the tape configuration after a next step in the computation. So, there can be at most one cell of different color between two adjacent rows in a space-time diagram. We see that this particular (2, 2) TM with number 346 first moves over the tape input, erasing it. Then, it gradually moves back to the edge of the tape, writing alternatingly black and white cells, to eventually fall off the tape, whence it terminates.

Clearly, these space-time diagrams define spatiotemporal objects by focusing on the black cells. We wish to measure the geometrical complexity of these spatiotemporal objects. Subsequently, we wish to see if there is a relation between this geometrical complexity and the computational complexity (space or time usage) of the TM in question.

In Section 1.3.2, we will see how to assign a measure of geometrical complexity to these space-time diagrams and call this measure the dimension of the TM. Various relations between computational complexity of a TM on the one hand and its dimension on the other hand can be proven. Other relations will be investigated via experiments.

1.2.3. On Our Coding Convention. Note that for this paper it is entirely irrelevant how to numerically interpret the output tape configuration, whence we will refrain from giving such an interpretation. However, it has been a restrictive choice to represent our input in a unary way. That is to say, the notion of a function in our context only looks at a very restricted class of possible inputs: blocks of n consecutive black cells for n > 0. The main reason for this restriction is that, without it, our functions would all behave in a very awkward and highly undesirable way. In [14], this undesirable behavior is explained in the so-called Strips Theorem.

Basically, the Strips Theorem boils down to the following. Let us consider a TM τ in (n, 2) space on input x and suppose τ(x) is a terminating computation. If we number the cells on the tape by their distance to the edge, let i be the largest cell number that is visited in the computation of τ(x). Clearly, on any tape input that is equal to x on the first i cells but possibly different on the cells beyond i, the machine will perform exactly the same computation; in this sense, the computation is independent of the input beyond cell i.

We have chosen our input-output convention in such a way as to avoid the behavior described by the Strips Theorem. There are two undesirable side effects of our coding. Firstly, it is clear that any TM that runs in less than linear time actually runs in constant time. Secondly, the functions thus defined are very fast growing if we were to represent the output in binary. In particular, the tape identity represents an exponentially fast growing numerical function in this way.

A positive feature of our input convention is that the amount of symmetry present in the input coding facilitates various types of analysis; in particular, automated function-completion seems to run more smoothly. Warning. A clear drawback of our convention is that one tends to think of the nth input as the number n. In the context of this paper, this would not be good practice, since it would, for example, yield a linear primality test and factorization algorithm.

We will here describe an alternative way of representing the input a, and denote the representation by ρ(a). This representation ρ(a) will be such that it avoids the Strips Theorem, yet does not intrinsically entail exponential growth of the tape identity and similar functions in case we interpret the output configuration in binary. Although we neither use nor explore the alternative input coding, we find it worth mentioning here and hope that future investigations can take up the new coding.

In order to represent the input a according to ρ, we first write a in binary as Σ_{n=0}^∞ a_n·2^n, with all but finitely many a_n = 0. Let us denote the cells on the tape by c_0, c_1, c_2, .... Here, c_0 is the cell at the edge, c_1 is the cell immediately next to it, and so forth. For a ≠ 0, let k = ⌊log_2(a)⌋, and let k = 0 otherwise. That is, k + 1 is the number of digits in the binary expansion of a.

For each 0 ≤ i ≤ k, we represent a_i in c_{2i} in the canonical way: we set c_{2i} to be one/black whenever a_i = 1, and we set c_{2i} to be zero/white otherwise. Moreover, we set all odd-labeled cells c_{2i+1} to be zero, with the sole exception of cell c_{2k+1}, which we define to be one/black.

It is clear that ρ avoids the Strips Theorem. Moreover, if we were to interpret the output as binary, the tape identity would define a function whose growth rate is only of the order x ↦ x².
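A short sketch of ρ in Python, under our reading of the definition above (with k = ⌊log₂ a⌋ for a ≠ 0); the function name and the list output format are illustrative only.

```python
def rho(a):
    """Alternative input coding: binary digit a_i of a goes to the even
    cell c_{2i}; odd cells stay white except for a single black marker at
    c_{2k+1}, where k + 1 is the number of binary digits of a.
    Returns the finite initial tape segment as a list of 0s and 1s."""
    k = a.bit_length() - 1 if a > 0 else 0
    tape = [0] * (2 * k + 2)
    for i in range(k + 1):
        tape[2 * i] = (a >> i) & 1        # digit a_i at cell c_{2i}
    tape[2 * k + 1] = 1                   # end-of-input marker at c_{2k+1}
    return tape

print(rho(6))   # 6 = 110 in binary -> [0, 0, 1, 0, 1, 1]
```

The coded input occupies only about 2·log₂(a) cells, which is why the tape identity, read back in binary, grows only quadratically under this coding.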

1.3. Fractal Dimensions. In this section, we will briefly recall the definition of, and the ideas behind, the box-counting dimension, a particular fractal dimension with the better computational properties, whence better suited for applications. In Section 3.1, we relate the box dimension to various other notions of fractal dimension, in particular to the well-known Hausdorff dimension.

After revisiting the notion of box-counting dimension, we see how to apply it to Turing machines and their space-time diagrams.

1.3.1. Box Dimension. We will use the notion of box dimension. This notion of fractal dimension can be seen as a simplification of the well-known Hausdorff dimension (see [17] and our survey section, Section 3.1). The Hausdorff dimension is mathematically speaking more robust than the box dimension. However, the box dimension is easier to compute and is known to coincide with the Hausdorff dimension in various situations.

Let us briefly recall the definition of the box dimension and the main ideas behind it. The intuition is as follows. Suppose we have a mathematical object S of bounded size whose "volume" V(S) we wish to estimate. For example, let us work with a space R^n whose dimension n is large enough to embed our object S. The idea now is to cover the object S by boxes in R^n and estimate the "volume" V(S) of S as a function of the total number of boxes N(S) needed to cover S. Clearly, the number of boxes N(S) needed to cover S depends on the size of the boxes used. Therefore, in the analysis, we will take along the parameter r, which denotes the edge length of the boxes used, and we will write the number of boxes needed to cover S as N(S, r).

If S is a line segment, which is a one-dimensional object, the corresponding notion of "volume" V(S) is just the length of the segment. To estimate the length V(S), we clearly have

$$V(S) = \lim_{r \downarrow 0} N(S, r) \cdot r \tag{1}$$

if this is well defined.

If S is a plane or, more generally, a two-dimensional manifold, the corresponding notion of "volume" V(S) is just the surface area of the plane/manifold segment, for which we have

$$V(S) = \lim_{r \downarrow 0} N(S, r) \cdot r^{2} \tag{2}$$

if this is well defined.

Likewise, for a three-dimensional object, to estimate its volume, we would have

$$V(S) = \lim_{r \downarrow 0} N(S, r) \cdot r^{3} \tag{3}$$

if this is well defined, and in general, for a d-dimensional object, we would obtain

$$V(S) = \lim_{r \downarrow 0} N(S, r) \cdot r^{d}. \tag{4}$$

The idea behind the definition of the box dimension is to take (4) as a defining equation of dimension, if this makes sense mathematically speaking. Thus, solving for d in (4), we obtain

$$d = \lim_{r \downarrow 0} \frac{\log V(S) - \log N(S, r)}{\log r} = \lim_{r \downarrow 0} \frac{\log N(S, r)}{-\log r}. \tag{5}$$

The last equality is justified since S is bounded by assumption and V(S) is finite, so that lim_{r↓0} (log V(S)/log r) = 0.

The reflections above form the main ideas behind the definition of box dimension that we will use in this paper.

Definition 1 (box dimension). Let S be some spatiotemporal object that can be embedded in some R^n, and let N(S, r) denote the minimal number of boxes of edge length r needed to fully cover S. The box dimension of S is denoted by δ(S) and is defined by

$$\delta(S) = \lim_{r \downarrow 0} \frac{\log N(S, r)}{-\log r}, \tag{6}$$

in case this limit is well defined. In all other cases, one says that δ(S) is undefined.

1.3.2. Box Dimension for Space-Time Diagrams. Let us see how we can adapt the notion of box dimension to our space-time diagrams. The spatiotemporal figure S that we wish to measure will be defined by the black cells in the space-time diagram. Clearly, for each particular input on which the TM halts, the corresponding space-time diagram is finite and has dimension δ(S) = 2: each black cell defines a piece of surface. It gets interesting when we consider the limiting behavior of the TM on larger and larger inputs.

A First Attempt. Let τ be some TM and let S(τ, x) denote the space-time diagram corresponding to TM τ on input x if this is well defined, that is, if τ eventually halts on input x, which we will denote by τ(x)↓. The question is as follows: what is a sensible way to define the dimension d(τ) of our TM τ? It does not make much sense to define d(τ) = lim_{x→∞} δ(S(τ, x)), for a couple of reasons.

Firstly, τ might diverge on various inputs. We can easily bypass this by tacitly understanding lim_{x→∞} as x getting larger and larger among those x for which τ(x)↓, demanding that there are infinitely many such x. In case there are just finitely many x on which τ converges, we could say that d(τ) is undefined.

The second objection is more serious. As for each x with τ(x)↓ we have that δ(S(τ, x)) = 2, we see that all limits converge to the value 2 if they are well defined. This of course is highly undesirable. We can overcome this objection by scaling each S(τ, x) to some figure scale(S(τ, x)) whose length has unit size. Thus, the black areas in scale(S(τ, x)) become more and more fine-grained, so that it seems to make sense to define d(τ) = δ(lim_{x→∞} scale(S(τ, x))).

A Second Attempt and Formal Definition. The new candidate d(τ) = δ(lim_{x→∞} scale(S(τ, x))) has many good properties. However, for this new candidate, we again see two main objections.

The first objection is that lim_{x→∞} scale(S(τ, x)) need not exist at all and, even worse, is likely not to be well defined in most cases. We could try to remedy this by working with subsequences for which the limit is defined, but it all seems very hairy.

The second objection is that this new definition seems hard to numerically approximate at first glance. We will see how to overcome the second objection, which will automatically yield a solution to the first objection as well.

As we mentioned before, we cannot first approximate lim_{x→∞} scale(S(τ, x)) and then compute the corresponding δ, as this would always yield the answer 2. However, what we can do is simultaneously approximate both δ and lim_{x→∞} scale(S(τ, x)). There are a lot of choices in how we approximate: in particular, how fast we approximate lim_{x→∞} scale(S(τ, x)) relative to the approximation of δ.

There seems to be a canonical choice, though. The approximation of the dimension δ depends on the size r of the boxes. It seems very natural to take the size of our boxes to be exactly the size of one tape cell. The size of one tape cell is naturally determined by scale(S(τ, x)). Let us determine r as dictated by scale(S(τ, x)). In order to facilitate our discussion, we first fix some notation.

Definition 2. For τ a TM and x an input so that τ(x)↓, we denote by t(τ, x) the number of time steps τ needs on input x to terminate. Likewise, s(τ, x) denotes the number of tape cells used by the computation of τ on input x. Thus, s(τ, x) measures the distance between the edge of the tape and the furthest tape cell visited by the head during the computation.

We will sometimes write t_τ(x), or even just t(x) if the context allows it; similarly for the space-usage function s_τ(x).

By the nature of our input-output protocol, there exist no TMs whose runtime is sublinear but not constant. Let us first concentrate on the TMs that run in at least linear time and deal with the constant-time TMs later. If a TM halts in nonconstant time, the least it should do is read all the input, do some calculations, and then go back to the beginning of the tape. Thus, clearly t(τ, x) > s(τ, x), whence the scaling of the figure S(τ, x) is best done by rescaling the runtime to length 1. Consequently, the box size scales to r = 1/t(τ, x).

Recall that N(Σ, r) denotes the minimal number of boxes of size r needed to cover the spatiotemporal object Σ. Now that we have determined the size r, we can write N(τ, x) instead of N(S(τ, x), 1/t(τ, x)), and it is clear that N(τ, x) is just the number of black cells in the space-time diagram of τ on input x. The second attempt at defining d(τ) then translates to

$$d(\tau) = \lim_{x \to \infty} \frac{\log N(\tau, x)}{\log t(\tau, x)}. \tag{7}$$

In this definition, we could address the issue of undefinedness by replacing lim by lim sup or lim inf. Notwithstanding the theoretical correctness of this move, it seems hardly possible to sensibly compute lim sup or lim inf in the general setting.

In the current paper, however, we have only considered TMs with either two or three states and just two colors. It turned out that in this setting we could determine both lim inf and lim sup. In all cases we witnessed where the limit (7) was not well defined outright, we were able to identify different subsequences on which the limit (7) did converge, so that we could choose to go with either lim sup or lim inf. It turns out that, in general, the lower the dimension, the more interesting the corresponding TM, so we decided to work with lim inf.

For TMs with constant runtime, we know that only a constant number of cells will be visited and possibly change color. For these TMs, the figure S(τ, x) can only be sensibly scaled by using the input size. By doing so, we see that in the limit we just get a black line whose dimension is clearly equal to one. However, as we consider constant-runtime TMs a degenerate case, so to say, we will for convenience define the dimension of such machines to be equal to 2. We do so in order to treat them like linear-time TMs (see Lemma 4). All these considerations and reflections lead us to the following definition.

Definition 3 (box dimension of a Turing machine). Let τ be a TM that converges on infinitely many input values x. In the case of τ(x)↓, let N(τ, x) denote the number of black cells in the space-time diagram of τ on input x, and let t(τ, x) denote the number of steps needed for τ to halt on x.

We will define the box dimension of a TM τ and denote it by d(τ). In case t(τ, x) is constant from some x onwards, we define d(τ) := 2. Otherwise, we define

$$d(\tau) := \liminf_{x \to \infty} \frac{\log N(\tau, x)}{\log t(\tau, x)}. \tag{8}$$

Note that our definition of dimension can readily be generalized to nonterminating computations. Also, restricting to computational models with discrete time steps is not strictly necessary.
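Continuing the Python sketch from Section 1.2.1, the lim inf in (8) can be approximated directly from finite runs. The machine below is the hypothetical "sweep and return" example from that sketch; it runs in linear time, so the ratio should approach the value 2 (see Lemma 4 below), and the run also illustrates how slowly it does so (cf. Section 2.1.1).

```python
import math

# Approximating d(tau) = liminf log N(tau, x) / log t(tau, x) from finite
# runs, using run_tm and the hypothetical `sweep` machine defined in the
# sketch of Section 1.2.1.
for x in (10, 100, 500):
    _, t, s, N = run_tm(sweep, x)
    print(x, math.log(N) / math.log(t))   # slowly approaches 2
```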

1.3.3. Linear Time Turing Machines. For certain TMs τ, we can actually compute their box dimension. Let us reconsider TM 346, whose space-time diagrams were displayed in Figure 1. Due to the extreme regularity in the space-time diagrams, we can see that TM 346 runs in linear time, that is to say, linear in the length of the representation of the input.

Thus, after scaling each space-time diagram so that the vertical time axis is rescaled to 1, we will always have a little surface in the shape of a black triangle in the scaled space-time diagram. The box dimension of a triangle is of course 2. We may conclude that d((2, 2)-TM 346) = 2. Of course, the only important feature used here is the linear-time performance of (2, 2)-TM 346. We can summarize this observation in a lemma.

Lemma 4. Let τ be a TM that runs in at most linear time. Then, d(τ) = 2.

Proof. We fix some TM τ that runs in at most linear time. Our input/output convention is such that either τ is constant time or τ is linear time. In case τ runs in constant time, we have that d(τ) = 2 by definition.

Let us consider the case that τ runs in linear time. It must be the case that the head goes all the way to the end of the tape input; if not, τ would run in constant time from some input y onwards. Input x is represented by x + 1 consecutive black cells. In the worst case (the fewest number of black cells), each of the first steps erases an input cell and replaces it by a white cell, as is the case in Figure 1.

However, τ runs in linear time; say, for any x, the machine τ runs at most a·(x + 1) many steps with 2 ≤ a < ∞. After scaling, the input will get size (x + 1)/(a·(x + 1)) = 1/a. Thus, in the worst case, the upper triangle has area 1/(2a²), which is independent of x and thus nonvanishing. Clearly, the box dimension of a triangle of whatever size equals 2. Thus, d(τ) = 2, as we wanted to see.

1.4. The Space-Time Theorem and Applications. Above, we saw that for linear-time TMs we can actually compute the corresponding dimension. However, for TMs that do not run in linear time, we can only prove an upper bound on the box dimension.

1.4.1. The Space-Time Theorem: An Upper Bound

Theorem 5 (Space-Time Theorem). For a given TM τ, let s(x) denote the number of cells visited by τ on input x, and let t(x) denote the number of computation steps it takes τ to terminate on input x. Then

$$d(\tau) \leq 1 + \liminf_{x \to \infty} \frac{\log s(x)}{\log t(x)}. \tag{9}$$

Proof. The box dimension is maximal in case all cells under consideration are black. The number of black cells is bounded above by s(x)·t(x). Plugging this into the definition of d(τ) gives us our result:

$$d(\tau) \leq \liminf_{x \to \infty} \frac{\log \big( s(x) \, t(x) \big)}{\log t(x)} = \liminf_{x \to \infty} \left( \frac{\log s(x)}{\log t(x)} + 1 \right) = 1 + \liminf_{x \to \infty} \frac{\log s(x)}{\log t(x)}. \tag{10}$$

As we will see, the upper bound given by the Space-Time Theorem is actually attained in all cases in our experiment. It is unknown, however, whether it is attained in general.
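As a concrete instance (anticipating the findings of Section 2.2.1), consider a TM whose runtime grows quadratically while its space usage grows only linearly, say t(x) of order x² and s(x) of order x. The theorem then gives

$$d(\tau) \leq 1 + \liminf_{x \to \infty} \frac{\log s(x)}{\log t(x)} = 1 + \frac{1}{2} = \frac{3}{2},$$

which is exactly the dimension measured for the quadratic-time, linear-space machines in (2, 2) space.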

1.4.2. A Lower Bound. We first observe that for any Turing machine τ we have that d(τ) ≥ 1. The main idea is that if there are too many white cells, then the Turing machine enters a loop and either finishes straightaway or never finishes (we would like to thank an anonymous referee for suggesting this simple argument to us).

Theorem 6. The dimension d(τ) ≥ 1 for any Turing machine τ.

Proof. Let σ be the number of states of some fixed TM τ. It is clear that if we have a sequence of σ consecutive steps in a computation of τ during which the tape is entirely white, then τ will enter a loop. Thus, for τ to exhibit nontrivial behavior, we should have, modulo an additive constant, that

$$\frac{N(\tau, x)}{t(\tau, x)} > \frac{1}{\sigma}. \tag{11}$$

For linear- or constant-time TMs τ′, we had already observed in Lemma 4 that d(τ′) = 2 ≥ 1, so we may assume that τ has superlinear asymptotic runtime behavior. But then, from (11), we get log N(τ, x) ≥ log t(τ, x) − log σ modulo an additive constant, so that in the limit log N(τ, x)/log t(τ, x) ≥ 1, as was to be shown.

The method used in proving the lower bound seems very crude: no blocks of σ consecutive entirely white tapes may occur. It seems that a more in-depth analysis could yield sharper lower bounds.

1.4.3. The Asymptotic Conjectures. In Theorem 6, we proved d(τ) ≥ 1. However, we conjecture that something stronger actually holds.

Conjecture 7 (space-time ratio conjecture). For each TM τ which runs in more than linear time, we have that lim_{x→∞} (s(τ, x)/t(τ, x)) = 0.

In certain cases, the upper bound of the Space-Time Theorem (Theorem 5) and the lower bound proved in Theorem 6 coincide.

Lemma 8. In case a TM τ uses polynomial space and runs in superpolynomial time, one has that d(τ) = 1.

More in general, if for a TM τ we have that lim_{x→∞} (log s(τ, x)/log t(τ, x)) = 0, then

$$d(\tau) = 1. \tag{12}$$

Proof. By combining our general lower and upper bounds as proven in Theorems 6 and 5, respectively, we see that

$$1 \leq d(\tau) \leq 1 + \liminf_{x \to \infty} \frac{\log s_\tau(x)}{\log t_\tau(x)} = 1 + 0 = 1. \tag{13}$$

Lemma 8 shows us that, in certain cases, the upper bound given in the Space-Time Theorem is actually attained. We will empirically verify that this is always the case in (3, 2) space and conjecture that it holds more in general.

Conjecture 9 (Upper Bound Conjecture). We conjecture that for each n ∈ ω and each TM τ in (n, 2) space we have d(τ) = 1 + liminf_{x→∞} (log s_τ(x)/log t_τ(x)).

Thus, Lemma 8 provides a proof of the Upper Bound Conjecture in certain situations. For any TM that performs in at most linear time, we have also proven the Upper Bound Conjecture, in Lemma 4. In Lemma 11, we will prove the Upper Bound Conjecture for some other situations too. In order to do so, we first need an additional insight.

Lemma 10. For each TM τ, there is a constant c_τ ∈ [0, 1] with

$$c_\tau = \liminf_{x \to \infty} \frac{N_\tau(x)}{s_\tau(x) \, t_\tau(x)}. \tag{14}$$

Proof. Since N_τ(x) is bounded above by s_τ(x)·t_τ(x) (we observed this before, in the proof of Theorem 5), we get that for each TM τ and each x we have N_τ(x)/(s_τ(x)·t_τ(x)) ∈ [0, 1]. But then clearly the lim inf is well defined and lies within the closed interval [0, 1].

Lemma 11. In case lim_{x→∞} (N_τ(x)/(s_τ(x)·t_τ(x))) ≠ 0, one can prove the Upper Bound Conjecture; that is,

$$d(\tau) = 1 + \liminf_{x \to \infty} \frac{\log s_\tau(x)}{\log t_\tau(x)}. \tag{15}$$

Proof. We may assume that τ runs in at least linear time for, otherwise, the claim is proved by Lemma 4. Thus, lim_{x→∞} (1/t_τ(x)) = 0. Our assumption gives us lim_{x→∞} (N_τ(x)/(s_τ(x)·t_τ(x))) = c_τ for some c_τ ≠ 0. Note that in this assumption we have a limit and not a lim inf, so that any subsequence converges to the same limit. Consequently, we have

$$d(\tau) = \liminf_{x \to \infty} \frac{\log N_\tau(x)}{\log t_\tau(x)} = \liminf_{x \to \infty} \frac{\log \big( c_\tau \, s_\tau(x) \, t_\tau(x) \big)}{\log t_\tau(x)} = 1 + \liminf_{x \to \infty} \frac{\log s_\tau(x)}{\log t_\tau(x)}, \tag{16}$$

which implies the Upper Bound Conjecture, provided c_τ ≠ 0.

The following proposition provides an almost equivalent formulation of the Upper Bound Conjecture.

Proposition 12. For each TM τ, one has that if

$$\liminf_{x \to \infty} \frac{\log N_\tau(x)}{\log \big( s_\tau(x) \, t_\tau(x) \big)} = 1, \tag{17}$$

then the Upper Bound Conjecture holds.

Moreover, if the Upper Bound Conjecture holds uniformly for some TM τ, that is, d(τ) = 1 + lim_{x→∞} (log s_τ(x)/log t_τ(x)), then liminf_{x→∞} (log N_τ(x)/log(s_τ(x) t_τ(x))) = 1.

Proof. If liminf_{x→∞} (log N_τ(x)/log(s_τ(x) t_τ(x))) = 1, we also have lim_{x→∞} (log N_τ(x)/log(s_τ(x) t_τ(x))) = 1, since log N_τ(x) ≤ log(s_τ(x) t_τ(x)) for each x. Consequently,

$$d(\tau) = \liminf_{x \to \infty} \frac{\log N_\tau(x)}{\log t_\tau(x)} = \liminf_{x \to \infty} \frac{\log \big( s_\tau(x) \, t_\tau(x) \big)}{\log t_\tau(x)} = 1 + \liminf_{x \to \infty} \frac{\log s_\tau(x)}{\log t_\tau(x)}. \tag{18}$$

For the other direction, we assume d(τ) = 1 + lim_{x→∞} (log s_τ(x)/log t_τ(x)). Consequently,

$$\liminf_{x \to \infty} \frac{\log N_\tau(x)}{\log t_\tau(x)} = 1 + \lim_{x \to \infty} \frac{\log s_\tau(x)}{\log t_\tau(x)} = \lim_{x \to \infty} \frac{\log \big( s_\tau(x) \, t_\tau(x) \big)}{\log t_\tau(x)}. \tag{19}$$

Using this identity liminf_{x→∞} (log N_τ(x)/log t_τ(x)) = lim_{x→∞} (log(s_τ(x) t_τ(x))/log t_τ(x)), we see that for any subsequence x_n → ∞ we have that

$$\liminf_{n \to \infty} \frac{\log N_\tau(x_n)}{\log \big( s_\tau(x_n) \, t_\tau(x_n) \big)} = \liminf_{n \to \infty} \frac{\log N_\tau(x_n) / \log t_\tau(x_n)}{\log \big( s_\tau(x_n) \, t_\tau(x_n) \big) / \log t_\tau(x_n)} \geq 1. \tag{20}$$

Consequently,

$$\liminf_{x \to \infty} \frac{\log N_\tau(x)}{\log \big( s_\tau(x) \, t_\tau(x) \big)} \geq 1. \tag{21}$$

But since N_τ(x) ≤ s_τ(x) t_τ(x), the possibility liminf_{x→∞} (log N_τ(x)/log(s_τ(x) t_τ(x))) > 1 cannot occur, and we are done.

1.4.4. The Space-Time Theorem and P versus NP. As usual, we denote by P the class of problems that can be solved by a TM which uses an amount of time that is bounded by some polynomial applied to the size of the input (representing an instantiation of the particular problem).

Likewise, we denote by NP the class of problems such that any candidate solution can be checked to be indeed a solution by a TM which uses an amount of time that is bounded by some polynomial applied to the size of the input. Here, the N in NP stands for nondeterministic. That is to say, a nondeterministic TM could run in polynomial time by just guessing the right solution and then checking that it is indeed a solution. It is one of the major open questions in (theoretical) computer science whether P = NP or not.

By PSPACE, we denote the class of problems that can be solved by a TM which uses an amount of memory space that is bounded by some polynomial applied to the size of the input. It is well known that NP ⊆ PSPACE. Thus, by Lemma 8, we can state a separation of P and NP in terms of dimensions.

Let Π be some NP-complete problem. If for each PSPACE Turing machine τ that decides Π we have that d(τ) = 1, then P ≠ NP.

Clearly, this does not constitute a real strategy since, for one, it is in general undecidable whether d(τ) = 1 [18].

2. Part II: Experimental Setting

In this second part of the paper, we describe the experiment we have performed to empirically test whether the theoretical results also hold in cases that do not satisfy the necessary requirements for the theoretical results to be applied.

2.1. The Experiment. We have already proven on purely theoretical grounds that there is a relation between runtimes and fractal dimension of the space-time diagrams. However, our theoretical results only apply to a restricted class of TMs.

In the experiment, we also wanted to study the fractal dimension of the space-time diagrams in cases where our theoretical results do not apply. Moreover, guided by the first outcomes of our experiment, we formulated the Upper Bound Conjecture (Conjecture 9) and gathered data to investigate whether the conjecture holds in (3, 2) space.

2.1.1. Slow Convergence. For TMs τ that run in at most linear time, we have proven in Lemma 4 that d(τ) = 2. Our aim is to use computer experiments to compute the box dimension of all TMs τ where d(τ) is not predicted by any theoretical result.

A substantial complication in this project is caused by the occurrence of logarithms in the definition of d(τ). As a consequence, an increase in the precision of d(τ) requires exponentially larger inputs. This makes direct brute-force computation unfeasible. As an example, let us consider (2, 2)-TM 346 again, whose space-time diagrams we saw in Figure 1. By Lemma 4, we know that the box dimension of this Turing machine equals two. However, Figure 2 shows us how slow the rate of convergence is.

Our way out here is to apply numerical and mathematical analysis to the functions involved so that we can retrieve their limit behavior. In particular, we were interested in three different functions.

As before, for τ a TM, we denote by t_τ(x) the number of time steps needed for τ to halt on input x; by N_τ(x) we denote the number of black cells in the space-time diagram of τ on input x; and by s_τ(x) the distance between the edge of the tape and the furthest cell visited by τ on input x.

With these functions and knowledge of their asymptotic behavior, we can compute the corresponding dimension d(τ) and the upper bound 1 + liminf_{x→∞} (log s_τ(x)/log t_τ(x)). The functions are guessed by looking, in a mechanized fashion, at large enough initial segments of their value sequences. The few cases that could not be handled mechanically were analyzed by hand.

It is important to bear this process in mind, and with it the fact that we work with guesses that can in principle be wrong. For example, if we speak of a TM τ that performs in time of order n², this means in this paper that, by definition, after applying our particular analyzing process, τ was classified as an O(n²) time performer. It may well be that in reality τ needs exponential time. However, there are strong indications that our guessing process is rather accurate [14, 19].
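As a rough illustration of this kind of guessing (a toy stand-in, not the actual Mathematica pipeline of [14, 19]), the order of a suspected polynomial runtime can be estimated by a log-log fit on early inputs and then validated on later, held-out inputs; the data below are made up.

```python
import numpy as np

# Hypothetical runtime data; suppose the (unknown) true law is
# t(x) = 3x^2 + 5x.
xs = np.arange(1, 41)
ts = 3 * xs**2 + 5 * xs

# Fit log t ~ d * log x + c on inputs 10..30 only ...
fit = slice(9, 30)
d, c = np.polyfit(np.log(xs[fit]), np.log(ts[fit]), 1)

# ... and validate the guessed law t(x) ~ e^c * x^d on the held-out tail.
pred = np.exp(c) * xs[30:] ** d
rel_err = np.max(np.abs(pred - ts[30:]) / ts[30:])
print(d, rel_err)   # exponent close to 2, small relative error
```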

2.1.2. Methodology. In this section, we will describe the steps that were performed in obtaining our results. Basically, the methodology consists of the following steps:

(1) Each TM that lives in (2, 2) space also occurs in (3, 2) space, so for the final results it suffices to focus on the latter data set. The TMs that diverge on all inputs were removed from the initial list of 2 985 984 TMs in (3, 2) space, since for them the dimension is simply not defined. For the remaining TMs, we erased all diverging inputs from the sequences to which we were to apply our analysis. Since we are only interested in the limit behavior of subsequences, this does not alter our final results.

We isolated the TMs for which there is no theorem that predicts the corresponding dimension. By Lemmas 4 and 8, this means that we only needed to pay attention to those TMs which use more than linear time. Moreover, we also removed all simultaneous EXP-time and PSPACE performers to finally end up with a collection of TMs. The distribution of the resulting collection is summarized in Table 1.

In addition, there are 1792 TMs that perform in exponential time and linear space, but clearly they needed no further analysis since we know on theoretical grounds that their corresponding dimension is 1. All other machines in (3, 2) space were very simple in terms of time computational complexity; that is, they perform at most in linear time.

(2) Per TM τ, we determined/guessed the function s_τ(x) corresponding to the space usage of τ on input x. Although this guessing was already performed in [14], we decided to redo the process. The main reasons for doing so were a new release of our analysis tool Mathematica, together with the fact that the authors had obtained new insights into how best to perform the analysis. Our results coincided in large part with the ones obtained in [14] but also showed minor discrepancies.

(3) Per TM τ, we determined its function t_τ(x) corresponding to the time usage of τ on input x.

(4) Per TM τ, we determined its function N_τ(x) corresponding to the number of black cells in the space-time diagram of τ on input x.

(5) Per TM τ, we computed liminf_{x→∞} (s_τ(x)/t_τ(x)).

(6) Per TM τ, we computed its dimension d(τ) as d(τ) = liminf_{x→∞} (log N_τ(x)/log t_τ(x)) (see the sketch after this list).

(7) Per TM τ, we compared its dimension d(τ) to its theoretical upper bound 1 + liminf_{x→∞} (log s_τ(x)/log t_τ(x)), which we computed separately.

(8) Per TM τ, we computed liminf_{x→∞} (log N_τ(x)/log(s_τ(x) t_τ(x))) and liminf_{x→∞} (N_τ(x)/(s_τ(x)·t_τ(x))).
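A sketch of steps (5)-(7) for a single machine with hypothetical guessed closed forms (the functions below are made up for illustration); for monotone guesses like these, the lim inf coincides with the ordinary limit, which sympy can evaluate symbolically.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Hypothetical guessed closed forms for one machine:
s = 2 * x + 2          # space usage s_tau(x)
t = x**2 + 3 * x       # runtime t_tau(x)
N = x**3 / 2           # black cells N_tau(x); note N <= s * t eventually

ratio = sp.limit(s / t, x, sp.oo)                        # step (5): 0
dim = sp.limit(sp.log(N) / sp.log(t), x, sp.oo)          # step (6): 3/2
bound = 1 + sp.limit(sp.log(s) / sp.log(t), x, sp.oo)    # step (7): 3/2
print(ratio, dim, bound)   # the upper bound is attained for this guess
```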

2.1.3. Alternating Convergent Behavior. Some of the Turing machines exhibited alternating asymptotic behavior. This was already observed in [14]. Typically, the alternation reflects modular properties of the input, like being odd or even, or of the number of states.

The differences between the alternating subsequences can be rather drastic though. The most extreme example we found is reflected in Figure 3.

Figure 3 shows the space-time diagrams of the TM τ with number 1 728 529 for inputs 1 to 7. For convenience, we have changed the orientation of the diagrams so that time "goes from left to right" instead of "top to bottom." This machine runs in linear time for even inputs and exponential time for odd inputs. The runtime is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (22)

The number of black cells N_τ(x) in the space-time diagram exhibits the same behavior. Note, however, that the space that τ uses is linear in the size of the input and, in particular, the number of tape cells used is equal to the size of the output.

Moreover, we note that the sequence of outputs is of a very simple and regular nature. The outputs can be grouped in series of two, where the output on input 2n + 1 consecutive black cells is equal to the output on input 2n + 2 consecutive black cells. So, in a sense, this TM incorporates two different algorithms to compute this output: one in linear time and the other in exponential time.
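When a sequence alternates with period p, the lim inf in (8) must be computed per residue class of the input. A small Python sketch of this splitting, on made-up period-2 data mimicking the linear/exponential alternation just described:

```python
import math

def liminf_estimate(N_seq, t_seq, period):
    """Estimate liminf log N / log t by splitting the sequences (indexed
    by input x = 1, 2, ...) into residue classes mod `period` and taking
    the smallest tail estimate over the classes."""
    tails = []
    for r in range(period):
        xs = [x for x in range(1, len(t_seq) + 1) if x % period == r]
        x = xs[-1]                      # largest input in this class
        tails.append(math.log(N_seq[x - 1]) / math.log(t_seq[x - 1]))
    return min(tails)                   # lim inf follows the lowest class

# Made-up data: linear time on even inputs, exponential time on odd ones,
# with N roughly s * t / 2 for a linear space usage s(x) = x + 2.
t_seq = [2 * x + 3 if x % 2 == 0 else 2**x + x for x in range(1, 31)]
N_seq = [t * (x + 2) // 2 for x, t in zip(range(1, 31), t_seq)]
print(liminf_estimate(N_seq, t_seq, 2))   # tends to 1 (exponential class)
```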

We have found alternating sequences of periodicities 2, 3, and 6. As we noted in [14], the periodicity typically reflects either the number of states, the number of colors, or a divisor of their product. Figure 4 shows an example, TM number 1 159 345, whose corresponding box-counting sequence N_τ(x) has periodicity six.

In Figure 4(a), we show the points N_τ(x) on the vertical axis plotted against the input x. In Figure 4(b), we estimated a fit from below.

This alternating behavior reflects the richness of what we sometimes refer to as the microcosmos of small Turing machines. It is this alternating behavior that complicated analyzing the data set in a straightforward automated fashion.

2.1.4. Determining the Important Functions. In this section, we would mainly like to stress that most of the computational effort for this paper has actually been put into determining/guessing the functions s_τ(x), t_τ(x), and N_τ(x) and computing the corresponding limits.

As may have become manifest from the previous section, it is hard to automatically guess perfect matches for these functions when alternating behavior is present. In the end, we could deal with all functions in a satisfactory way. Notwithstanding our confidence, it is good to bear in mind that all classifications provided in this paper are relative to the current methodology.

We will here briefly describe how we proceeded to guess our functions. The methodology is fairly similar to that performed in [14]. However, for this project, we used newer tools and a slightly more sophisticated methodology which accounts for possible differences from [14]. Schematically, the guessing process can be split into the following steps:

(1) We collected the sequences for time usage t_τ(x) and space usage s_τ(x) from the TM data set as described in Section 1.2 of this paper.

(2) These sequences t_τ(x) and s_τ(x) are only given for the first 21 different inputs. We used an initial segment of 15 elements of these sequences to guess, in an automated fashion, the corresponding function that allegedly generates each sequence. In some cases, the beginning of the sequence (up to three elements) was removed because it did not match the general pattern that only occurred later in the sequence. Had we left in the first values, Mathematica would no longer have been able to find the general pattern. The guessing process was done in Mathematica versions 8 and 9 using the built-in FindSequenceFunction. In some cases, FindSequenceFunction came up with a solution; in other cases, it did not. FindSequenceFunction performs various standard numerical and algebraic analyses on the sequences and also checks for obvious recurrence patterns. This function, built into the computer algebra system Mathematica, takes a sequence of integer values {a_1, a_2, ..., a_m} and tries to define a function that yields a sequence {a_n}_{n∈ω} which coincides with it on the first m values. FindSequenceFunction finds results in terms of a wide range of integer functions, such as sums and series coefficients, as well as implicit solutions to difference equations, using early elements in the list to find candidate functions and then validating the predicted function against later elements (a toy version of this guess-and-validate scheme is sketched after this list).

(3) Thus, we obtained two lists: a list L_1 of TMs where we found a guess and a list L_2 where we did not find any guess. From the initial list of 528 runtime sequences, we could not guess 11, and from the 167 space sequences, we could not guess 15. Note that these numbers need not be related, since TMs computing various different functions had the same space sequence. Moreover, 288 runtime sequences and 85 space sequences in L_1 were alternators. Mathematica guessed the right function using terms like (−1)^x. However, in computing lim inf, we manually split those sequences into, for example, an even and an odd part, to obtain the corresponding limits.

(4) We performed a check on our guesses as collected in L_1 by applying each guessed function to inputs 16-21. In almost all cases, our guess turned out to be predictive and coincided with the real values. For the few cases where there was a discrepancy between the guess and the actual values, we made a new guess based on a larger initial segment, now consisting of the first 18 elements, and then tested it once more on new real values. Finally, we were able to guess and successfully check all of the sequences, both space and time usage, in L_1.

(5) From the list L_1, we deleted all sequences for which we knew the dimension on theoretical grounds, so as to obtain a list L_3.

(6) For the TMs in L_3, we used the supercomputing resources of CICA (Andalusian Center for Scientific Computing) to compute the corresponding sequences N_τ(x) with a C++ TM simulator. To reduce the computational effort, for each set of equivalent TMs (up to a geometrical transformation, such as state mirroring), only one representative was run. We applied the guessing process described above for t_τ(x) and s_τ(x) also to N_τ(x), to come up with corresponding functions.

(7) For the sequences in L_2, we applied a semimanual process. Basically, there were three different procedures that we applied so as to also find solutions in L_2 for the sequences s_τ(x), t_τ(x), and N_τ(x).

(a) In most of the cases, alternating behavior was present. We could read off the periodicity from graphs as, for example, in Figure 4. Sometimes, looking directly at the space-time diagrams was more informative. In all of these cases but one, we finally did find functions for the subsequences using our methodology as described above. As splitting the sequences into 2, 3, or 6 alternating ones reduces the length of the input sequence for FindSequenceFunction, in some cases we ran 40 or 60 more inputs with the C++ simulator to end up with a sufficiently large data set.

One alternating TM did not succumb to this methodology. This was TM 582 263, whose treatment is included in Section 2.2.3. We ran this TM for 35 inputs with the simulator and observed that N_τ(x)/(s_τ(x) t_τ(x)) clearly converges to a constant, one for each subsequence, so we approximated N_τ(x) by c·s_τ(x) t_τ(x), which was enough for the log-limit without knowing the exact value of c.

(b) In some cases, the regularity was not obvious to Mathematica but was evident when looking at the space-time diagrams and/or the binary expansion of the output. In these cases, we could manage by feeding our insight into Mathematica, for example by letting it work on the binary expansions of the sequences.

(c) In some cases, the recurrences were just too complicated for Mathematica version 8. In these cases, we carefully studied the space-time diagrams, analyzing what kind of recurrences were present. Then, the observed recurrences were fed into FindSequenceFunction, and we let FindSequenceFunction find out the exact nature and coefficients of the corresponding recurrences. One such example concerns the TM that produces the largest possible outputs in (3, 2) space: the so-called Busy Beaver, detailed in Section 2.2.2.

(8) After having (allegedly) successfully found the functions s_τ(x), t_τ(x), and N_τ(x), we could compute the values d(τ) = liminf_{x→∞} (log N_τ(x)/log t_τ(x)) and liminf_{x→∞} (s_τ(x)/t_τ(x)). In most cases, a simple limit sufficed. For alternating behavior, we most of the time had to select the subsequences by hand so as to end up with the lim inf value. For some alternating sequences, the lim inf value could be obtained by combining, on the one hand, the lim inf of N_τ(x) (as depicted in Figure 4) or of s_τ(x), respectively, and, on the other hand, t_τ(x).
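The following toy Python analogue of the guess-and-validate scheme of step (2) (a stand-in for FindSequenceFunction, on made-up data) fits a fixed-depth linear recurrence with constant term on early terms and checks the guess against the held-out remainder.

```python
import numpy as np

def guess_linear_recurrence(seq, depth=2, train=10):
    """Fit a_n = c_1*a_{n-1} + ... + c_depth*a_{n-depth} + c_0 on the
    first `train` terms; return the coefficients if the fitted recurrence
    also reproduces all remaining terms, and None otherwise."""
    A = np.array([[*seq[n - depth:n][::-1], 1.0]
                  for n in range(depth, train)])
    b = np.array(seq[depth:train], dtype=float)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    for n in range(train, len(seq)):           # validate on later terms
        pred = coef[:-1] @ seq[n - depth:n][::-1] + coef[-1]
        if abs(pred - seq[n]) > 1e-6 * max(1.0, abs(seq[n])):
            return None
    return coef

# Example: a(x) = 2^x + 3^x obeys a(x) = 5*a(x-1) - 6*a(x-2).
seq = [2**x + 3**x for x in range(1, 21)]
print(guess_linear_recurrence(seq))   # approximately [5, -6, 0]
```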

2.2. Most Salient Results of the Experiment. In this section, we will present the main results of our investigations. The space of TMs which employ only 2 colors and 2 states is clearly contained in (3, 2) space. However, we find it instructive to dedicate first a section to the findings in (2, 2) space. Apart from the first section, all other results in this section refer to our findings in (3, 2) space.

2.2.1. Findings in (2, 2) Space. In (2, 2) space, there is a total of 74 different functions. Of these functions, only 5 were computed by some superlinear-time TMs. Note that this does not mean that all TMs computing such a function performed in superlinear time. For example, the tape identity has many constant-time TMs that compute it, but also some exponential-time TMs that compute it.

In total, in (2, 2) space, there are only 7 TMs that run in superpolynomial time. Three of them run in EXP-time, all computing the tape identity. The other four TMs compute different functions. These functions roughly double the tape input (see Figure 5).

All four of these TMs perform in quadratic time and linear space. We computed the dimension for these machines, and all turned out to have dimension 3/2. We observe that this is exactly the upper bound predicted by the Space-Time Theorem. We saw this phenomenon in (3, 2) space as well.

The only three exponential-time performers used linear space, so by Lemma 8 we already know that the dimension of those TMs should be 1. This was also checked in Mathematica. The check was not really performed to verify our theoretical results; rather, it was used as a test case for our analysis software.

We saw that a TM in (2, 2) space runs in superpolynomial time if and only if its dimension equals 1. This observation is no longer valid in (3, 2) space though.

2.2.2. Exponential Space and the Busy Beaver. In the remainder of this section, we will focus on the TMs in (3, 2) space. That space contains 2 985 984 different TMs, which compute 3 886 different functions. Almost all TMs used at most linear space for their computations. The only exception was TMs that used exponential space. Curiously enough, in (3, 2) space, there was no space usage in between linear and exponential space.

In [20], one can find an overview of the EXP-space performing TMs. For most of these TMs, it was not too hard to find an explicit formula for the space usage. An example is the TM with number 683 863, whose corresponding space usage is

$$s_{683863}(x) = 2 \left( \left\lfloor \frac{x + 1}{2} \right\rfloor + 2^{\lfloor (x + 1)/2 \rfloor} - 1 \right). \tag{23}$$

The space-time diagrams for TM 683 863 contained sufficiently much regularity that Mathematica could guess the corresponding functions. For various other EXP-space performers, we had to help Mathematica by suggesting to it what kind of recursion it should look for. This was also the case for the so-called Busy Beaver.

Classically speaking, the Busy Beaver function outputs, on input n, the longest time that any TM with n states runs when executed on a two-way infinite tape with empty input [21]. By analogy, in the context of this paper, we will call a TM β a Busy Beaver whenever, for each TM τ, there is some value x_0 so that for all x ≥ x_0 we have t_β(x) ≥ t_τ(x). The equivalent machines 599 063 and 666 364 are the Busy Beavers in (3, 2) space. They attain the largest runtime, space, and box-count sequences. They also produce the longest output strings. For the remainder of this section, we will denote the Busy Beaver TM by β. As mentioned, there are of course two actual TMs that compute the Busy Beaver, but they exhibit exactly the same behavior, and we will not distinguish between them.

Figure 6 shows the execution of machine 666 364 on inputs 1 to 3. The diagrams have been rotated to save space. As one can see, the series of outputs is very regular, and so is the sequence of cells used by the computation. Nonetheless, Mathematica did not find a recurrence between the consecutive values; this was due to a minor error term.

That is, if one looks at the number of used cells for consecutive inputs and at their differences, then, modulo a small error term, there is a clear tendency. Let x denote the number of consecutive black input cells. Looking at the ratio between consecutive differences helped us to isolate the disturbing error term (Table 2).

So, ignoring the exact nature of the error term, the recurrence equation for the space is as follows:

\[ s_\beta(x) = s_\beta(x-1) + \frac{3}{2}\left(s_\beta(x-1) - s_\beta(x-2)\right) + \frac{g(x)}{2}, \tag{24} \]

where g(x) is a function that takes values in {-1, 0, 1}. When we forced Mathematica to focus on the error term, it came up with the exact recurrence relation, where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], after defining $s_\beta(-1) = s_\beta(0) = 0$.
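The following small Python sketch (ours, purely illustrative) checks this recurrence against the space values of machine 666 364 listed in Table 2:

# Space usage s_beta(x) of machine 666 364 for x = 1, ..., 10 (Table 2).
s = [3, 7, 13, 22, 36, 57, 88, 135, 205, 310]

# Check s(x) = s(x-1) + (3/2)(s(x-1) - s(x-2)) + g(x)/2 with g(x) in {-1, 0, 1}.
for i in range(2, len(s)):
    error = s[i] - s[i - 1] - 1.5 * (s[i - 1] - s[i - 2])
    assert 2 * error in (-1, 0, 1), (i + 1, error)
    print(f"x = {i + 1}: g(x) = {int(2 * error)}")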

The runtime depends on the space and we found the following recurrence relation for it:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (25)

Finally, by close inspection of the space-time diagrams, we could guide Mathematica to look for a specific kind of recurrence and finally come up with

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (26)

Using these recurrence equations, we could finally compute the limits. We did so both by standard methods for limits of recurrence relations and by employing Mathematica; both methods gave the same answers, to the effect that all simultaneously EXP-time and EXP-space TMs in (3, 2) space have fractal dimension 3/2.

2.2.3. The Space-Time Theorem Revisited. One of our most important empirical findings is that the upper bound given by the Space-Time Theorem is actually always attained in (3, 2) space. Moreover, we found several related empirical facts for (3, 2) space. We mention them in this section.

Finding 0. For all TMs $\tau$ in (3, 2) space, we found that $d(\tau) \ge 1$. More particularly, we found that for each TM $\tau$ in (3, 2) space which performed in superlinear time we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (27)

And we conjecture that this holds in general for TMs with a larger number of states.

Finding 1. For all TMs [tau] in (3, 2) space, we found that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (28)

and we conjectured in the Upper Bound Conjecture 9 that this holds in general for TMs with a larger number of states.

In Proposition 12, we saw that a sufficient condition for the Upper Bound Conjecture to hold is that $\liminf_{x\to\infty} \frac{\log N_\tau(x)}{\log\left(s_\tau(x)\, t_\tau(x)\right)} = 1$, but it is not known if this is also a necessary condition. The following finding is related to this.

Finding 2. For all TMs [tau] in (3, 2) space, we found that

\[ \liminf_{x\to\infty} \frac{\log N_\tau(x)}{\log\left(s_\tau(x)\, t_\tau(x)\right)} = 1. \tag{29} \]

In Lemma 11, it was shown that $\lim_{x\to\infty} \frac{N_\tau(x)}{s_\tau(x)\, t_\tau(x)} \neq 0$ is a sufficient condition for the Upper Bound Conjecture to hold, but it is not known if it is also necessary. The following finding is related to this.

Finding 3. For all TMs [tau] in (3, 2) space, we found that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (30)

if this limit was well defined. Thus, in particular, we found that $\lim_{x\to\infty} N_\tau(x)/(s_\tau(x)\, t_\tau(x)) \neq 0$ whenever it existed. Moreover, we found that only a limited set of values was attained as limits of this quotient. The values found in (3, 2) space for $\lim_{x\to\infty} N_\tau(x)/(s_\tau(x)\, t_\tau(x))$ are

1/9, 1/6, 7/30, 1/4, 5/18, 5/16, 1/3, 3/8, 8/21, 7/18, 5/12, 3/7, 4/9, 7/15, 1/2, 5/9, 9/16, 2/3, 3/4, 7/9, 1.

It is possible that a few other limit values exist but were not found due to the way we computed the functions generating $N_\tau(x)$.

For two of the EXP-space performers, we could not find the boxes function. These were TM 582 263 and its twin machine; the execution of 582 263 on inputs 1 to 6 is shown in Figure 7. This TM exhibits alternating behavior with periodicity two. For these two machines, we used Lemma 10 to settle the computation of $d(\tau)$.

For even numbers of consecutive black input cells, we found that the fraction $N_\tau(x)/(s_\tau(x)\, t_\tau(x))$ tended to 0.31, whereas for odd numbers of consecutive black input cells it tended to 0.11. The exact values of these fractions are of course irrelevant in the computation of the limit that determines $d(\tau)$.

Finding 4. For all TMs $\tau$ in (3, 2) space, we found that $d(\tau) = 1$ if and only if the TM $\tau$ ran in superpolynomial time using polynomial space. We suspect that this equivalence no longer holds in larger spaces, that is, in spaces (n, 2) for n > 3.

Finding 5. For all TMs $\tau$ in (3, 2) space, we found that $d(\tau) = 2$ if and only if the TM $\tau$ ran in at most linear time. It is unknown whether this equivalence holds in larger spaces. Part of it holds in general (Lemma 4): if $\tau$ runs in at most linear time, then $d(\tau) = 2$.

2.2.4. Richness in the Microcosmos of Small Turing Machines. The authors have explored the space of small Turing machines before. On occasion, they have been so impressed by the rich structures present there that they came to speak of the microcosmos of small Turing machines. For this paper, we had to mine (3, 2) space even further, and at some point we were surprised to find ourselves surprised once more.

In particular, Figure 8 shows a very curious phenomenon that we call symmetric performers. There turned out to be a pair of different TMs such that, on every even input, the space-time diagram of the one machine is the exact mirror image of the space-time diagram of the other machine on the same input.

Of course, this can only happen in case the TMs compute the tape identity, since the input must equal the output in order to yield a symmetric image. At first, one might be tempted to think that this phenomenon is bound to occur, since we can define for each TM $\tau$ its reversed machine $\widetilde{\tau}$: replace each instruction $\langle \text{color}, \text{state} \rangle \rightarrow \langle \text{color}', \text{state}', \text{direction} \rangle$ by its canonical reversal:

\[ \langle \text{color}', \text{state}' \rangle \rightarrow \langle \text{color}, \text{state}, \overline{\text{direction}} \rangle, \tag{31} \]

where $\overline{\text{direction}}$ changes right to left and vice versa. However, note that both machines start in State 1, so this already imposes a strong condition on possible pairs of symmetric performers.
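To make the reversal construction concrete, here is a small Python sketch (ours; the encoding of the transition table as a dictionary is a hypothetical choice, not the paper's):

# A TM is encoded as a dict mapping (color, state) to (new_color, new_state, direction),
# where direction is +1 for a move to the right and -1 for a move to the left.
def reverse_tm(tm):
    rev = {}
    for (color, state), (new_color, new_state, direction) in tm.items():
        # <color, state> -> <color', state', d>  becomes  <color', state'> -> <color, state, -d>
        rev[(new_color, new_state)] = (color, state, -direction)
    return rev

# Example with a hypothetical two-rule table, not one of the machines discussed here:
tm = {(0, 1): (1, 2, +1), (1, 2): (0, 1, -1)}
print(reverse_tm(tm))

Note that rev silently overwrites entries when two rules share the same (new_color, new_state) pair, so the canonical reversal is a genuine TM only when that pair determines the original rule uniquely.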

Let us denote by $\tau$ and $\widetilde{\tau}$ a pair of symmetric performers. It is clear that if such a TM $\tau$ terminates on input x, it does so in an even number of steps: for each step in which the head moves one cell to the left (the end of the tape is on the right by our convention), there must be a step in which it moves one cell to the right. In particular, for symmetric performers that terminate in 2n steps on input x, we have that if the tape configuration at step m differs from the tape configuration at step m + 1 in the run $\tau(x)$, then the head position at step m in $\tau(x)$ is the same as the head position at step 2n - (m + 1) in $\widetilde{\tau}(x)$.

Indeed, it comes as a surprise that all these constraints can be met in (3, 2) space, even if only for the even inputs.

3. Part III: A Brief Literature Survey

In the third and final part of the paper, we will try to locate our results within the landscape of known theoretical results that link fractal dimensions to other notions of complexity.

3.1. Relations between Fractal Dimensions and Other Notions of Complexity: An Incomplete Survey of the Literature. In this paper, we have worked with a variant of box-counting dimension and with space and time complexity for processes implemented on Turing machines. These are just some out of a myriad of different complexity measures in the literature. Since, eventually, the notion of being complex or not is relative to a framework, and the ultimate framework in which all these complexity notions can be embedded is our own cognitive system, one can on philosophical grounds expect relations between the various a priori unrelated complexity notions (see [22, 23]). And, indeed, in the literature we find various relations between different notions of complexity.

In this final section, we wish to place our results in the context of other results in the literature that link different complexity notions. Our point of departure will be fractal dimensions and possible relations to complexity notions of a computational nature.

This section is not self-contained, nor do we pretend to give an exhaustive overview of the literature. Rather, we try to provide sufficient pointers so that this section can at least serve as a point of departure for a more exhaustive and self-contained study.

3.1.1. Box-Counting Dimension within the Landscape of Topological and Fractal Dimensions. In this paper, we decided to work with a variant of box-counting dimension since this has many desirable computational properties and applications. Let us first see where box-counting dimension fits into the landscape of various versions of fractal and other dimensions.

Edgar divides geometrical dimensions in two main groups, topological and fractal dimensions (see [24]). Topological dimensions are invariants of topological spaces in that they are invariant under homeomorphisms. Moreover, topological dimensions have integer values although some versions allow transfinite (ordinal) values too.

The most basic of all topological dimensions is the so-called cover dimension, also called Lebesgue dimension. In order to describe this dimension, we need some additional notions.

The order of a family $\mathcal{A}$ of sets is $\le n$, by definition, when any n + 2 of its sets have empty intersection. We denote this by $o(\mathcal{A}) \le n$. We say that $o(\mathcal{A}) = n$ when $o(\mathcal{A}) \le n$ but not $o(\mathcal{A}) \le n - 1$. Thus, for example, if any two sets in $\mathcal{A}$ have empty intersection, the order of $\mathcal{A}$ is 0.

The cover dimension of a set S is n, written Cov(S) = n, whenever each open covering of S has a refinement of order n. Thus, for example, a collection of two separate points in $\mathbb{R}^n$ has cover dimension 0, since we can separate the points by two disjoint open sets. Likewise, any line-like space admits an open cover of order 1; that is, any intersection of three different open sets is empty. Similarly, we can cover a planar set by open tiles where each row of tiles is shifted, for example, with respect to the adjacent rows of tiles. This collection of tiles has order 2, since any intersection of four different such open tiles is empty.

Fractal dimensions, on the other hand, can have noninteger values. In a sense, the fractal dimension of some object S is an indication of how close S is to some integer-valued dimensional space. Dimension in integer-valued dimensional spaces in a sense expresses degrees of freedom, and as such it provides an information-theoretic perspective on dimension. More common is the geometrical perspective on (fractal) dimension as expressed, for example, by Falconer [25]: "Roughly, dimension indicates how much space a set occupies near to each of its points."

The most fundamental and most common notion of fractal dimension is that of Hausdorff dimension [17], which was introduced as early as 1919, building on ideas of Carathéodory from 1914 [26].

In order to relate our box-counting dimension to the more common Hausdorff dimension, we will outline the definition and some basic properties of Hausdorff dimension.

For S a subset of some metric space, we can consider countable open coverings $\mathcal{A}$ of S and define

\[ H^s_\epsilon(S) := \inf \sum_{A \in \mathcal{A}} (\operatorname{diam} A)^s. \tag{32} \]

Here, $\operatorname{diam} A$ denotes the usual diameter of A: the supremum of distances between any two points in A. The infimum is taken over all $\mathcal{A}$ that are countable open $\epsilon$-covers of S; this means that the diameters of the open sets in our cover do not exceed $\epsilon$. It is essential that we may take the diameters of the open sets in our cover to vary, and in particular we can choose them as small as convenient. Next, we define

\[ H^s(S) := \lim_{\epsilon \to 0} H^s_\epsilon(S). \tag{33} \]

The main theorem about these $H^s(S)$ is that there is a unique s so that

(i) $H^t(S) = \infty$ for $t < s$;

(ii) $H^t(S) = 0$ for $t > s$.

This unique s is called the Hausdorff dimension of S: $\dim_H(S)$. As mentioned, this dimension was introduced in 1919 by Hausdorff [17], and the main theory was later developed mainly by Besicovitch and his students [27-31], so that Mandelbrot often speaks of Hausdorff-Besicovitch dimension.

The Hausdorff dimension comes with a natural dual dimension called packing dimension. Although the notion of packing dimension is natural and related to the Hausdorff dimension, it was only introduced about sixty years later by Tricot Jr. [32] and Sullivan [33].

The main idea behind the packing dimension of some spatiotemporal object F is to measure the total size of disjoint balls whose centers lie within F. As in the case of Hausdorff dimension, one parametrizes this concept with the target dimension s:

\[ P^s_\delta(F) := \sup \left\{ \sum_i (\operatorname{diam} B_i)^s \;\middle|\; \{B_i\} \text{ a countable collection of disjoint balls of radii at most } \delta \text{ with centers in } F \right\}. \tag{34} \]

Since $P^s_0(F) := \lim_{\delta \to 0} P^s_\delta(F)$ is not a measure (this is easy to see by considering countable dense subsets of some F with positive dimension), one applies a standard trick which transforms it into a measure by defining

\[ P^s(F) := \inf \left\{ \sum_{i=1}^{\infty} P^s_0(F_i) \;\middle|\; F \subseteq \bigcup_{i=1}^{\infty} F_i \right\}. \tag{35} \]

Here, the infimum is taken over countable collections of sets $F_i$ so that $F \subseteq \bigcup_{i=1}^{\infty} F_i$. The main theorem about this notion $P^s(F)$ shows that packing dimension is in a sense dual to Hausdorff dimension: there is a unique s so that

(i) $P^t(F) = 0$ for $t < s$;

(ii) $P^t(F) = \infty$ for $t > s$.

This unique s is called the packing dimension of F, and we write $\dim_P(F)$. It is not hard to see that packing dimension is an upper bound for Hausdorff dimension; that is, $\dim_H(F) \le \dim_P(F)$.

A fundamental property of the dimensions we have seen so far, and one that is not hard to prove, is that $\mathrm{Cov}(F) \le \dim_H(F)$. Mandelbrot defines a fractal to be any set F with $\mathrm{Cov}(F) < \dim_H(F)$. However, this notion of fractal is often considered (also by Mandelbrot himself) too broad, since it admits "true geometric chaos." Taylor proposes (see [34]) to call fractals only those Borel sets F for which $\dim_H(F) = \dim_P(F)$.

We can now see how box-counting dimensions (or box dimensions for short) naturally fit into the scheme of fractal dimensions sketched above. In particular, the box dimension is like Hausdorff dimension except that we now cover the spatial object by balls/boxes of a fixed size rather than by balls of flexible size not exceeding some maximum value $\epsilon$.

Alternatively and equivalently, in order to define the box dimension, we can divide space into a regular mesh with mesh size $\delta$ and count how many cells $N_\delta(F)$ are hit by a set F. Then, we define $B^s_\delta(F) := N_\delta(F)\,\delta^s$ and $B^s(F) := \liminf_{\delta \to 0} N_\delta(F)\,\delta^s$.

Again, there is a cut-off value $s_0$ so that $B^s(F) = \infty$ for $s < s_0$ and $B^s(F) = 0$ for $s > s_0$. This cut-off value is given by

\[ s_0 = \liminf_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}, \tag{36} \]

which is close to the notion we started out with in this paper in Definition 1. Inspired by this cut-off value, we define

\[ \underline{\dim}_B(F) := \liminf_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}, \qquad \overline{\dim}_B(F) := \limsup_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}. \tag{37} \]

In case $\underline{\dim}_B(F) = \overline{\dim}_B(F)$, we call the common value the box-counting dimension $\dim_B(F)$, which now exactly coincides with Definition 1. One can easily show that box dimension always provides an upper bound for Hausdorff dimension. Moreover, box dimension has many desirable computational properties, which makes it amenable to computer applications.
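As an illustration of this computational amenability, the following Python sketch (ours, purely illustrative) estimates the box dimension of a depth-12 approximation of the middle-thirds Cantor set; the estimates agree with the known value $\log 2/\log 3 \approx 0.6309$:

import math

def cantor_indices(depth):
    # Integers m such that m * 3**(-depth) is the left endpoint of one of the
    # 2**depth intervals in the middle-thirds construction of the Cantor set.
    idx = [0]
    for _ in range(depth):
        idx = [3 * m for m in idx] + [3 * m + 2 for m in idx]
    return idx

depth = 12
idx = cantor_indices(depth)                       # 4096 surviving intervals
for k in range(1, depth + 1):
    boxes = {m // 3 ** (depth - k) for m in idx}  # mesh cells of size 3**(-k) that are hit
    print(k, math.log(len(boxes)) / (k * math.log(3)))

Every estimate here equals $\log 2/\log 3$ exactly because the mesh is aligned with the construction; for generic sets, the quotient merely converges as the mesh is refined.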

Notwithstanding the good computational behavior, box dimension has various undesirable mathematical properties: in particular, a countable union of sets of box dimension zero can have positive box dimension. For example, one can show that in $\mathbb{R}$ with the standard topology we have $\dim_B\{0, 1/2, 1/3, 1/4, \ldots\} = 1/2$, which is of course highly undesirable for a countable set.
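For completeness, here is our rendering of the standard counting argument behind this value (cf. Falconer [25]). Let $F = \{0\} \cup \{1/n \mid n \ge 2\}$ and fix a mesh size $\delta$. The gaps $1/n - 1/(n+1) = 1/(n(n+1))$ exceed $\delta$ roughly for $n \le \delta^{-1/2}$, so about $\delta^{-1/2}$ of the points occupy pairwise distinct mesh cells, while the remaining points all lie in the interval $[0, \delta^{1/2}]$, which meets only about $\delta^{-1/2}$ cells. Hence $N_\delta(F) \asymp \delta^{-1/2}$ and $\log N_\delta(F)/(-\log \delta) \to 1/2$.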

Mathematically, these undesirable properties can be remedied with the same trick that was applied to the packing dimension, by defining the modified box dimensions as

\[ \underline{\dim}_{MB}(F) := \inf \left\{ \sup_i \underline{\dim}_B(F_i) \;\middle|\; F \subseteq \bigcup_{i=1}^{\infty} F_i \right\}, \qquad \overline{\dim}_{MB}(F) := \inf \left\{ \sup_i \overline{\dim}_B(F_i) \;\middle|\; F \subseteq \bigcup_{i=1}^{\infty} F_i \right\}. \tag{38} \]

But, of course, by doing so, we would lose all the good computational properties. In general, we have that

\[ \dim_H(F) \le \underline{\dim}_{MB}(F) \le \overline{\dim}_{MB}(F) = \dim_P(F) \le \overline{\dim}_B(F), \tag{39} \]

and it is known that none of the inequalities can in general be replaced by an equality. However, we note that, under Taylor's definition of a fractal, the first four dimensions collapse, so that the modified box dimension becomes an equivalent of Hausdorff dimension; indeed, the modified box-counting dimension is a natural quantity to consider.

Moreover, if F has a lot of self-similarity, then the modified box-counting dimension is actually equal to the plain box-counting dimension.

Proposition 13. Let $F \subseteq \mathbb{R}$ be compact and such that $\overline{\dim}_B(F) = \overline{\dim}_B(F \cap V)$ for every open set V that intersects F; then $\overline{\dim}_B(F) = \overline{\dim}_{MB}(F)$.

So, in various situations, box counting coincides with Hausdorff dimension. The most famous example is probably that this equality holds for the Mandelbrot set. In addition, there are various other situations where box-counting and Hausdorff dimension coincide [35, 36].

3.1.2. Computability Properties of Fractals. As a first link between fractals and computability properties, we want to mention that the computational complexity of various fractal objects has been studied.

Probably the most famous examples of fractals are Julia sets and the corresponding "roadmap", the Mandelbrot set. Let us briefly recall some basic definitions. By FJ(f) we denote the filled Julia set of a function f defined on the complex numbers. This set FJ(f) is defined as the set of values z in the domain of f on which iterating f on z does not diverge. That is,

\[ FJ(f) := \left\{ z \in \mathbb{C} \;\middle|\; f^n(z) \not\to \infty \text{ as } n \to \infty \right\}. \tag{40} \]

By J(f), the Julia set of f, we denote the boundary of FJ(f). Following Chong [37], we can consider $f_\theta(z) = z^2 + \lambda z$ with $\lambda = e^{2\pi i \theta}$ and $\theta \notin \mathbb{Q}$. Using this notation, the corresponding Julia sets are denoted by $J_\theta$.

One can express that [J.sub.[theta]] is well behaved by saying that it has a Siegel disk at z = 0. Basically, this says that f is locally linearizable at z = 0 by a rotation and we refer the reader to, for example, [38] for further details.

The Turing degree of a set is an indication of how complicated that set is. A set of natural numbers A is of Turing degree at most that of B, written $A \le_T B$, if the question whether $x \in A$ can be decided on an idealized computer using various queries of the form $y \in B$. We say that two sets A and B have the same Turing degree, written $A \equiv_T B$, whenever both $A \le_T B$ and $B \le_T A$.

Likewise, we say that a set B is computably enumerable, or c.e. for short, in A if we can, using an idealized computer, enumerate all the elements of B using queries about A. Note that enumerability of B does not give a procedure to decide membership; it only guarantees that if some element belongs to B, then at some stage it will appear in the enumeration.

We call a set A recursive, computable, or simply decidable if we can decide with an idealized computer, without any oracles, whether $x \in A$ or not for any $x \in \mathbb{N}$. Likewise, we call a set A simply c.e. when it is c.e. in the empty set $\emptyset$.

The Turing degree of a set, the equivalence class under [[equivalent to].sub.T] so to say, is a robust notion in various ways. For example, it makes sense to speak of "being c.e. in the degree of A" whence we will often refrain from distinguishing A from its corresponding degree.

We can conceive a real number as a set of natural numbers. Let us restrict ourselves to the real interval [0, 1]. Then, we can conceive any real number a in this interval as a set by looking at the binary expansion $0.a_0a_1a_2\ldots$ of a and using it to define a set A where $i \in A$ iff $a_i = 1$. By this identification, it makes sense to speak of the Turing degree of a real number.
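In code, the identification reads as follows (a finite-precision sketch of ours, purely for illustration):

def real_to_set(a, n):
    # First n elements of the set coded by the binary expansion 0.a_0 a_1 a_2 ... of a in [0, 1].
    coded = set()
    for i in range(n):
        a *= 2
        if a >= 1:
            coded.add(i)   # digit a_i = 1
            a -= 1
    return coded

print(real_to_set(0.625, 5))   # 0.625 = 0.10100... in binary, so the set is {0, 2}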

We will shortly discuss that one can set up real analysis in such a way that it also makes sense to speak about the Turing degree of nondiscrete objects like $J_\theta$. Braverman and Yampolsky follow in [39] an approach called constructive analysis, as initiated by Banach and Mazur [40], with influences of Markov [41]. The main idea behind this constructive analysis is that we can conceive continuous objects as entities that we can computably approximate to any required precision (see, e.g., [42] for an overview).

Braverman and Yampolsky have studied (see [39]) the relations between the Turing degree of [theta] and that of [J.sub.[theta]]. In particular, they prove that b is a c.e. Turing degree if and only if it is the degree of [J.sub.[theta]] with [theta] recursive so that [J.sub.[theta]] has a Siegel disk.

Chong has generalized this result [37]: Let c be a Turing degree. For every d [greater than or equal to] c, we have that d is c.e. in c if and only if it is the degree of a Julia set [J.sub.[theta]] with Siegel disk and deg([theta]) = c.

It is good to stress that all these results are sensitive to the underlying model of computation and real analysis and the results would change drastically if one were to switch to other models like the so-called Blum-Schub-Smale model (see [43]).

The results presented in this section relate the Turing complexity of the fractal to the complexity of the parameter generating it. However, there are no links from the Turing degrees of the Julia sets to the corresponding dimensions. In the next section, we will discuss various results of this sort.

3.1.3. Effective Dimension and Computations. In this section, we will present certain results that relate Hausdorff dimension to other notions of complexity. In order to do so, we will first rephrase the notion of Hausdorff dimension in the setting of binary strings. Next, we will define the so-called effectivizations of Hausdorff dimension. It is these effectivizations that can be related to other notions of complexity. Again, this section is far from self-contained; we refer the reader to [44] for further details. The presentation here is largely based on that treatise (mainly Chapter 13), and we closely follow it in structure.

Thus, let us reformulate the definition of Hausdorff dimension in the realm of binary sequences, that is, in the realm of Cantor space, which we will denote by $2^\omega$. We will interchangeably speak of sequences or of reals when we refer to elements of Cantor space. We will denote by $2^{<\omega}$ the collection of finite binary strings.

For $\sigma \in 2^{<\omega}$, we denote the length of $\sigma$ by $|\sigma|$. For $\sigma \in 2^{<\omega}$, we define $[\sigma] := \{\sigma\tau \mid \tau \in 2^\omega\}$, where $\sigma\tau$ denotes string concatenation. Whenever we consider Cantor space as a topological space, we will consider the topology generated by the basic open sets of the form $[\sigma]$. For $\Sigma \subseteq 2^{<\omega}$, we define $[\Sigma] := \bigcup_{\sigma \in \Sigma} [\sigma]$.

Thus, for any $R \subseteq 2^\omega$, we define an n-cover of R to be a set $\Sigma \subseteq 2^{\ge n}$ so that $R \subseteq [\Sigma]$. Cantor space can be endowed with a measure in the standard way by defining $\mu([\sigma]) = 2^{-|\sigma|}$. Thus, in analogy to Section 3.1.1, we now define

\[ H^s_n(R) := \inf \left\{ \sum_{\sigma \in \Sigma} 2^{-s|\sigma|} \;\middle|\; \Sigma \text{ an } n\text{-cover of } R \right\} \tag{41} \]

and $H^s(R) := \lim_{n \to \infty} H^s_n(R)$. So, as before, we define $\dim_H(R) := \inf\{s \mid H^s(R) = 0\}$. It is easy to see that for every $r \in [0, 1]$ there is $R \subseteq 2^\omega$ with $\dim_H(R) = r$.
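For instance (an example of ours, not taken from the literature we survey): let R consist of all sequences whose odd positions carry the bit 0. Covering R by the $2^n$ strings of length $2n$ that respect this constraint gives

\[ H^s_{2n}(R) \le 2^n \cdot 2^{-2ns} = 2^{n(1-2s)} \longrightarrow 0 \quad \text{for } s > \frac{1}{2}, \]

so $\dim_H(R) \le 1/2$; a standard mass distribution argument provides the matching lower bound, whence $\dim_H(R) = 1/2$.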

Within the context of Cantor space, we will now give a definition of what is called effective Hausdorff dimension. The effective counterpart is defined via

\[ EH^s_n(R) := \inf \left\{ \sum_{\sigma \in \Sigma} 2^{-s|\sigma|} \;\middle|\; \Sigma \text{ a c.e. } n\text{-cover of } R \right\} \tag{42} \]

and $EH^s(R) := \lim_{n \to \infty} EH^s_n(R)$, so that the effective Hausdorff dimension is defined as $\dim_{EH}(R) := \inf\{s \mid EH^s(R) = 0\}$.

One can now show [45] that, for every computable real $r \in [0, 1]$, there is a set $R \subseteq 2^\omega$ with $\dim_{EH}(R) = r$. By a theorem of Hitchcock, we have that for important subsets F of Cantor space it holds that $\dim_H(F) = \dim_{EH}(F)$:

Theorem 14 (Hitchcock [46]). Let F be a countable union of $\Pi^0_1$ classes of Cantor space; then $\dim_H(F) = \dim_{EH}(F)$. (A subset A of Cantor space is a $\Pi^0_1$ class if it is the collection of paths through some computable tree. An alternative definition requires that for some computable relation R one has $A := \{\sigma \in 2^\omega \mid \forall n\, R(\sigma \upharpoonright n)\}$.)

In the same paper, Hitchcock also proves an equality for [[SIGMA].sup.0.sub.2] classes and computable Hausdorff dimension (covers are required to be computable rather than c.e.). So, for some objects, Hausdorff dimension and effective Hausdorff dimension coincide.

However, for other important classes, they differ. In particular, the Hausdorff dimension of any single sequence in Cantor space equals zero. However, there may be no simple effective covers around, so that a single sequence can have positive effective Hausdorff dimension.

There is a link between Turing degrees and effective Hausdorff dimension, although this link is not very straightforward. Recall that for $A \in 2^\omega$ we have $\dim_H(A) = 0$ but that we can have $\dim_{EH}(A) > 0$ when no simple effective covers are around.

Thus, in a sense, having nonzero effective Hausdorff dimension is an indication of containing complexity. And, in fact, it can be shown that if $A \in 2^\omega$ with $\dim_{EH}(A) > 0$, then A can compute a nonrecursive function. To be more precise, A can compute a fixed-point free function f (i.e., a function f so that $W_{f(e)} \neq W_e$ for all numbers e) by results of Terwijn et al. [47, 48].

This result establishes a relation between effective Hausdorff dimension and computational complexity in the guise of degrees of undecidability. However, the relation between effective dimension and computable content is neither monotone nor simple. In particular, one can show that if $\dim_{EH}(A) = \alpha$, then there exist sets B of arbitrarily high Turing degree with $\dim_{EH}(B) = \alpha$. However, locally, effective Hausdorff dimension can provide an upper bound on Turing degrees.

Theorem 15 (Miller [49]). Let r be a left-c.e. real (we omit the technical details here and refer to [44]; one can think of a left-c.e. real as the limit of a converging computable sequence of rationals for which one cannot computably estimate the rate of convergence). There is a $\Delta^0_2$-definable set $R \in 2^\omega$ with $\dim_{EH}(R) = r$ so that, moreover,

\[ A \le_T R \Longrightarrow \dim_{EH}(A) \le r. \tag{43} \]

It is exactly this kind of result that we are interested in in this section: theorems that relate different notions of complexity. Another classical result links Kolmogorov complexity to effective Hausdorff dimension. Let us briefly and loosely define Kolmogorov complexity, referring to, for example, [50] for further details.

For a string $s \in 2^{<\omega}$, the Kolmogorov complexity K(s) is roughly the length of the shortest program that outputs s when run on a particular universal Turing machine. Of course, this depends on the particular choice of universal Turing machine, but different choices manifest themselves only in an additive constant in K. The relation between Kolmogorov complexity and effective Hausdorff dimension is given by a theorem of Mayordomo.

Theorem 16 (Mayordomo [51]). Let A be a sequence in Cantor space and let $A \upharpoonright n$ denote the first n bits of this sequence; then

\[ \dim_{EH}(A) = \liminf_{n \to \infty} \frac{K(A \upharpoonright n)}{n}. \tag{44} \]

Moreover, there is a link from effective Hausdorff dimension to a notion that is central to probability theory: martingales. Martingales indicate the expected outcomes of betting strategies. Lutz introduced in [52] an adaptation of this notion that can be linked to effective Hausdorff dimension (or constitute an alternative definition, for that matter).

Definition 17. An s-gale is a function $d : 2^{<\omega} \to \mathbb{R}^{\ge 0}$ such that $d(\sigma) = \bigl(d(\sigma 0) + d(\sigma 1)\bigr)/2^s$.

This is a generalization of "gales" (as introduced/simplified by [53]), where $d(\sigma) = (d(\sigma 0) + d(\sigma 1))/2$ expresses a certain fairness condition of the betting strategy. In particular, one can see d as a payoff function, where the equality expresses that the expectation is to neither lose nor gain money.

We say that a certain gale d succeeds on A whenever $\limsup_{n \to \infty} d(A \upharpoonright n) = \infty$.

The success set of d is the collection of all A on which d succeeds and is denoted by S[d]. The link from effective Hausdorff dimension to these gales is given by a theorem of Lutz.

Theorem 18 (Lutz [52]). Consider

\[ \dim_{EH}(X) = \inf \left\{ q \in \mathbb{Q} \;\middle|\; X \subseteq S[d] \text{ for some } q\text{-gale } d \right\}. \tag{45} \]
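As a small illustration of Theorem 18 (our sketch, not from the paper), here is the s-gale that stakes all capital on the next bit being 0; it satisfies the fairness condition of Definition 17 and succeeds on the all-zeros sequence for every s > 0:

def all_on_zero_gale(s):
    # s-gale staking all capital on the next bit being 0; the initial capital is 1
    def d(sigma):
        return 0.0 if '1' in sigma else (2.0 ** s) ** len(sigma)
    return d

d = all_on_zero_gale(0.5)
sigma = '000'
# fairness condition of an s-gale: d(sigma) == (d(sigma + '0') + d(sigma + '1')) / 2**s
assert abs(d(sigma) - (d(sigma + '0') + d(sigma + '1')) / 2 ** 0.5) < 1e-9
print([d('0' * n) for n in range(6)])   # grows like 2**(s*n): d succeeds on 000...

Since for every rational q > 0 there is such a q-gale succeeding on the all-zeros sequence, Theorem 18 assigns that sequence effective Hausdorff dimension 0, as one would expect for a computable sequence.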

In the context of this paper, it is good to mention that other notions of dimension also have their effective counterparts. In particular, Reimann studied an effectivization of box-counting dimension in [54] and the corresponding relations to the other complexity notions are similar to the ones mentioned here.

Also, for various dimensions, the computable versions have been studied, where the open covers are no longer required to be c.e. but rather computable. We refer the reader to [44] for further details.

3.1.4. Our Results. In this section, we have tried to present a selection of readily accessible results in the literature that relate fractal dimension to other notions of complexity.

We mentioned results of Braverman and Yampolsky and the generalization thereof by Chong in Section 3.1.2. These results related computational properties (Turing degrees) of Julia sets to the same computational properties of the parameter that generates the Julia set. In a sense, this is not a result that links Hausdorff dimension to a different notion of complexity. However, since it is one of the few results on computational properties of fractals, we decided to include it in the overview.

Next, in Section 3.1.3, we only worked in the realm of Cantor space. There we considered an effectivization of Hausdorff dimension and this notion was linked to Turing degrees as in Theorem 15, to Kolmogorov complexity as in Theorem 16, and to martingales as in Theorem 18.

Effective Hausdorff dimension, however, works with highly idealized notions relatively high up in the computational hierarchy. Our results involve complexity classes which are more down-to-earth, like PTIME and PSPACE. Moreover, in our theorems, the two complexity notions that are related, computational complexity versus fractal dimension, are not applied to exactly the same object, as is the case in the results mentioned above. Rather, they link two attributes of a small Turing machine: the computational (runtime) complexity on the one hand and the fractal dimension of the corresponding space-time diagrams on the other. It is in these respects that, to the best of our knowledge, our results are new in their kind.

http://dx.doi.org/10.1155/2016/5030593

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Joost J. Joosten wishes to thank José María Amigó García for his kind support and an invited research stay at the Miguel Hernández University in Elche. Joost J. Joosten received support from the Generalitat de Catalunya under Grant no. 2009SGR-1433 and from the Spanish Ministry of Science and Education under Grant nos. MTM2011-26840 and MTM2011-25747. The authors acknowledge support from the project FFI2011-15945-E (Ministerio de Economía y Competitividad, Spain).

References

[1] H. Zenil, "Compression-based investigation of the dynamical properties of cellular automata and other systems," Complex Systems, vol. 19, no. 1, pp. 1-28, 2010.

[2] H. Zenil and E. Villarreal, "Asymptotic behaviour and ratios of complexity in cellular automata rule spaces," International Journal of Bifurcation and Chaos, vol. 23, no. 9, pp. 13-19, 2013.

[3] R. J. Solomonoff, "A formal theory of inductive inference. II," Information and Control, vol. 7, no. 2, pp. 224-254, 1964.

[4] A. K. Zvonkin and L. A. Levin, "The complexity of finite objects and the basing of the concepts of information and randomness on the theory of algorithms," Russian Mathematical Surveys, vol. 25, no. 6, pp. 85-127, 1970.

[5] C. H. Bennett, "Logical depth and physical complexity," in The Universal Turing Machine: A Half-Century Survey, pp. 227-257, Oxford University Press, 1988.

[6] H. Zenil, F. Soler-Toscano, J. Delahaye, and N. Gauvrit, "Two-dimensional Kolmogorov complexity and an empirical validation of the coding theorem method by compressibility," PeerJ Computer Science, vol. 1, article e23, 2015.

[7] G. J. Chaitin, "Computing the busy beaver function," in Open Problems in Communication and Computation, T. M. Cover and B. Gopinath, Eds., pp. 108-112, Springer, New York, NY, USA, 1987.

[8] H. Zenil, "On the dynamic qualitative behaviour of universal computation," Complex Systems, vol. 20, no. 3, pp. 265-277, 2012.

[9] S. Krakover, "Spatio-temporal structure of population growth in urban regions: the cases of Tel-Aviv and Haifa, Israel," Urban Studies, vol. 22, no. 4, pp. 317-328, 1985.

[10] T. Vicsek, Fractal Growth Phenomena, World Scientific, Singapore, 1989.

[11] J. Feng and Y. G. Chen, "Spatiotemporal evolution of urban form and land-use structure in Hangzhou, China: evidence from fractals," Environment and Planning B: Planning and Design, vol. 37, no. 5, pp. 838-856, 2010.

[12] S. Wolfram, A New Kind of Science, Wolfram Media, 2002.

[13] J. J. Joosten, "Turing machine enumeration: NKS versus lexicographical," Wolfram Demonstrations Project, 2010.

[14] J. J. Joosten, F. Soler-Toscano, and H. Zenil, "Program-size versus time complexity slowdown and speed-up phenomena in the micro-cosmos of small turing machines," International Journal of Unconventional Computing, vol. 7, no. 5, pp. 353-387, 2011.

[15] T. Neary and D. Woods, "Small fast universal Turing machines," Theoretical Computer Science, vol. 362, no. 1-3, pp. 171-195, 2006.

[16] M. Margenstern, "Frontier between decidability and undecidability: a survey," Theoretical Computer Science, vol. 231, no. 2, pp. 217-251, 2000.

[17] F. Hausdorff, "Dimension und äußeres Maß," Mathematische Annalen, vol. 79, pp. 157-179, 1919.

[18] K. Tadaki, "The Hausdorff dimension of the halting self-similar sets of T-universal prefix-free machines," in Proceedings of the 2010 IEEE International Symposium on Information Theory (ISIT '10), pp. 1287-1291, IEEE, Austin, Tex, USA, June 2010.

[19] H. Zenil, F. Soler-Toscano, and J. J. Joosten, "Empirical encounters with computational irreducibility and unpredictability," Minds and Machines, vol. 22, no. 3, pp. 149-165, 2012.

[20] J. J. Joosten, F. Soler-Toscano, and H. Zenil, Fractal Dimension versus Time Complexity in Turing Machines, Wolfram Demonstrations Project, 2014.

[21] T. Rado, "On non-computable functions," The Bell System Technical Journal, vol. 41, no. 3, pp. 877-884, 1962.

[22] J. J. Joosten, "On the necessity of complexity," in Irreducibility and Computational Equivalence: 10 Years After the Publication of Wolfram's A New Kind of Science, H. Zenil, Ed., pp. 11-24, Springer, New York, NY, USA, 2013.

[23] J. J. Joosten, "Complexity fits the fittest," in How Nature Works, I. Zelinka, A. Sanayei, H. Zenil, and O. E. Rossler, Eds., vol. 5 of Emergence, Complexity and Computation, pp. 51-63, Springer, Berlin, Germany, 2014.

[24] G. A. Edgar, Measure, Topology, and Fractal Geometry, Springer, New York, NY, USA, 1990.

[25] K. J. Falconer, Fractal Geometry: Mathematical Foundations and Applications, John Wiley & Sons, Chichester, UK, 2003.

[26] C. Carathéodory, "Über das lineare Maß von Punktmengen, eine Verallgemeinerung des Längenbegriffs," Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, pp. 404-426, 1914.

[27] A. S. Besicovitch, "On linear sets of points of fractional dimension," Mathematische Annalen, vol. 101, no. 1, pp. 161-193, 1929.

[28] A. S. Besicovitch, "On the sum of digits of real numbers represented in the dyadic system. On sets of fractional dimensions II," Mathematische Annalen, vol. 110, no. 1, pp. 321-330, 1935.

[29] A. S. Besicovitch, "Sets of fractional dimensions. Part III," London Mathematical Society, vol. s2-47, pp. 436-454, 1942.

[30] A. S. Besicovitch, "Sets of fractional dimensions (IV): on rational approximation to real numbers," Journal of the London Mathematical Society, vol. s1-9, no. 2, pp. 126-131, 1934.

[31] A. S. Besicovitch and H. D. Ursell, "Sets of fractional dimensions (V): on dimensional numbers of some continuous curves," Journal of the London Mathematical Society, vol. s1-s12, no. 1, pp. 18-25, 1937.

[32] C. Tricot Jr., "Two definitions of fractional dimension," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 91, no. 1, pp. 57-74, 1982.

[33] D. Sullivan, "Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups," Acta Mathematica, vol. 153, no. 3-4, pp. 259-277, 1984.

[34] S. J. Taylor, "The measure theory of random fractals," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 100, no. 3, pp. 383-406, 1986.

[35] L. Staiger, "A tight upper bound on Kolmogorov complexity and uniformly optimal prediction," Theory of Computing Systems, vol. 31, no. 3, pp. 215-229, 1998.

[36] L. Staiger, "Constructive dimension equals Kolmogorov complexity," Information Processing Letters, vol. 93, no. 3, pp. 149-153, 2005.

[37] C. T. Chong, "Complex dynamics and Turing degrees," 2009.

[38] J. Milnor, Dynamics in One Complex Variable. Introductory Lectures, Princeton University Press, Princeton, NJ, USA, 2006.

[39] M. Braverman and M. Yampolsky, Computability of Julia Sets, vol. 23 of Algorithms and Computation in Mathematics, Springer, Berlin, Germany, 2009.

[40] S. Banach and S. Mazur, "Sur les fonctions calculables," Annales Polonici Mathematici, vol. 16, p. 223, 1937.

[41] A. A. Markov, "On constructive mathematics," Akademiya Nauk SSSR. Trudy Matematicheskogo Instituta imeni V. A. Steklova, vol. 67, pp. 8-14, 1962.

[42] K. Weihrauch, Computable Analysis, Springer, Berlin, Germany, 2000.

[43] L. Blum, F. Cucker, M. Shub, and S. Smale, Complexity and Real Computation, Springer, New York, NY, USA, 1998.

[44] R. G. Downey and D. R. Hirschfeldt, Algorithmic Randomness and Complexity, Theory and Applications of Computability, Springer, New York, NY, USA, 2010.

[45] J. H. Lutz, "The dimensions of individual strings and sequences," Information and Computation, vol. 187, no. 1, pp. 49-79, 2003.

[46] J. M. Hitchcock, "Correspondence principles for effective dimensions," Theory of Computing Systems, vol. 38, no. 5, pp. 559-571, 2005.

[47] S. A. Terwijn, "Complexity and randomness," Rendiconti del Seminario Matematico di Torino, vol. 62, pp. 57-74, 2004.

[48] C. G. Jockusch Jr., M. Lerman, R. I. Soare, and R. M. Solovay, "Recursively enumerable sets modulo iterated jumps and extensions of Arslanov's completeness criterion," The Journal of Symbolic Logic, vol. 54, no. 4, pp. 1288-1323, 1989.

[49] J. S. Miller, "Extracting information is hard: a Turing degree of non-integral effective Hausdorff dimension," Advances in Mathematics, vol. 226, no. 1, pp. 373-384, 2011.

[50] M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, New York, NY, USA, 3rd edition, 2008.

[51] E. Mayordomo, "A Kolmogorov complexity characterization of constructive Hausdorff dimension," Information Processing Letters, vol. 84, no. 1, pp. 1-3, 2002.

[52] J. H. Lutz, "Gales and the constructive dimension of individual sequences," in Automata, Languages and Programming: 27th International Colloquium, ICALP 2000 Geneva, Switzerland, July 9-15, 2000 Proceedings, vol. 1853 of Lecture Notes in Computer Science, pp. 902-913, Springer, Berlin, Germany, 2000.

[53] P. Lévy, Théorie de l'Addition des Variables Aléatoires, Gauthier-Villars, Paris, France, 1937.

[54] J. Reimann, Computability and fractal dimension [Ph.D. thesis], Universität Heidelberg, 2004.

Joost J. Joosten,(1,2) Fernando Soler-Toscano,(2,3) and Hector Zenil(2,4,5)

(1) Departamento de Logica, Historia i Filosofia de la Ciencia, Universitat de Barcelona, 08001 Barcelona, Spain

(2) Algorithmic Nature Group, LABORES, 75006 Paris, France

(3) Departamento de Filosofía, Lógica y Filosofía de la Ciencia, Universidad de Sevilla, 41018 Seville, Spain

(4) Unit of Computational Medicine, SciLifeLab, Department of Medicine Solna, Center for Molecular Medicine, Karolinska Institute, 171 76 Stockholm, Sweden

(5) Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK

Correspondence should be addressed to Hector Zenil; hzenilc@gmail.com

Received 1 May 2016; Accepted 29 June 2016

Academic Editor: Joao Florindo

Caption: Figure 1: The figure shows a sequence of space-time diagrams corresponding to the TM in (2, 2) space with number 346 (according to Wolfram's enumeration scheme [12, 13] for (2, 2) space) on inputs 1 up to 14.

Caption: Figure 2: The figure shows an estimate of the box dimension of the (2, 2) TM with number 346. The horizontal axis shows the input and the vertical axis shows the corresponding approximation of the box dimension. Note that we know that the function converges to 2 when the input tends to infinity.

Caption: Figure 3: Alternating linear and exponential runtime behavior for TM 1728 529.

Caption: Figure 4: Alternating values for [N.sub.[tau]](x) with periodicity 6. The depicted values are for TM number 1159 345. The diagram in (a) shows the data set and in (b) we included a fit from below.

Caption: Figure 5: The figure shows the four different functions that are computed by the four TMs that have quadratic runtime in (2, 2) space. The diagrams show the outputs on increasing inputs. So, for example, in the leftmost diagram, we see that the TM with number 1383 (recall this is the code in (2, 2) space) outputs two black consecutive cells on input 1 and, more in general, outputs 2n black consecutive cells on input n.

Caption: Figure 6: Execution of the Busy Beaver on the first three inputs.

Caption: Figure 7: Execution of machine 582 263 on the first six inputs and their corresponding space-time diagrams.

Caption: Figure 8: Symmetric performers.
Table 1: Distribution, over their complexity classes, of those TMs in (3, 2) space for which we had to compute the corresponding dimension. By ω(P) we denote the little-ω notation applied to the class of polynomials, hereby collecting all superpolynomial behavior in one bucket.

Boxes      Runtime    Space    Machines

O(n^3)     O(n^2)     O(n)     3358
O(n^4)     O(n^3)     O(n)     6
ω(P)       ω(P)       ω(P)     14

Table 2: The structure of the space sequence of machine 666 364 in (3, 2) space.

x     s_β(x)    s_β(x) - s_β(x-1)    (3/2)(s_β(x-1) - s_β(x-2))    Difference

1     3         --                   --                            --
2     7         4                    --                            --
3     13        6                    6                             0
4     22        9                    9                             0
5     36        14                   13 + 1/2                      1/2
6     57        21                   21                            0
7     88        31                   31 + 1/2                      -1/2
8     135       47                   46 + 1/2                      1/2
9     205       70                   70 + 1/2                      -1/2
10    310       105                  105                           0