
A Markov chain algorithm for determining crossing times through nested graphs.

According to the by now established theory developed in order to define a Laplacian or, equivalently, a Brownian motion on a nested fractal, one has to solve certain renormalization problems. In this paper, we present a Markov chain algorithm solving this problem for certain classes of simple fractals K, provided that there exists a unique Brownian motion and hence a unique Laplacian on K.

Keywords: (nested) fractal, self similarity, walk dimension, crossing time

1 Introduction

In the last 20 years, there has been a powerful 'hand in hand' development of approaches to define a 'natural' Laplacian or a 'natural' Brownian motion on fractals. The connection between the two problems is obvious: in [R.sup.n], the Laplacian is the infinitesimal generator of the standard Brownian motion, which can be obtained as the limit of renormalized random walks. This correspondence can be used in order to define a Laplacian on fractal sets, which are often used to model 'wild', 'irregular' and 'rough' things. On fractals, the classical notion of a 'tangent space' is not available, so a Laplacian cannot be defined explicitly as a differential operator; however, the construction of a random walk does not require a differentiable structure. Hence, the first steps in this direction were made by defining a Laplacian on so-called nested fractals (we will not define here what a nested fractal is, see Lindstrom (Lin90)) as the infinitesimal generator of a Brownian motion. (The classical reference on this topic is (Lin90); as a first introduction we recommend the very nice survey by Barlow (Bar89).) The main challenge in following this stochastic approach is to find the right time-space scaling of a random walk on the fractal graph implied by the iterated function system describing the fractal. By self similarity, the same scaling property holds on every magnification level, leading to a sequence of appropriately scaled random walks converging to Brownian motion on the fractal.

Let us illustrate the problem with the help of the model case of the Sierpinski gasket. Set A := (0, 0), B := (1, 0) and C := (1/2, [square root of (3)]/2), so that A, B and C are the vertices of an equilateral triangle of unit side length. The Sierpinski gasket SG(2) (this notation is chosen in view of Section 3.1) is defined to be the unique nonempty compact set which is self similar with respect to the family of similarity contractions F := {[[psi].sub.1], [[psi].sub.2], [[psi].sub.3]} (i.e. K = [[union].sup.3.sub.i=1] [[psi].sub.i](K)), where the mappings [[psi].sub.i] : [R.sup.2] [right arrow] [R.sup.2] are the unique contractive similitudes with contraction ratio 1/2 and fixed points A, B and C respectively. It is easy to check that the famous open set condition (OSC) is satisfied by choosing the open set O to be the interior of the triangle [DELTA]ABC. Hence, the Hausdorff dimension of the Sierpinski gasket equals [d.sub.H] = ln 3/ln 2 (see for example (Fal85)). Obviously, the Sierpinski gasket is finitely ramified: removing the middle points a, b and c of the line segments BC, AC and AB makes it a disconnected set.


We introduce the vertex set [V.sub.0] := {A, B, C} and the set of first-order approximating points

[V.sub.1] := [[union].sup.3.sub.i=1] [[psi].sub.i]([V.sub.0]) = {A, B, C, a, b, c}

(see Figure 2, and also Figure 3).

In order to get the natural time-space scaling of a random walk on the corresponding fractal graph, we will use the notion of walk dimension. We recall here the definition of walk dimension for the convenience of the reader, but in a heuristic way only. While the Hausdorff dimension somehow relates the volume of small balls with their radii, the walk dimension relates mean exit times (of the 'canonical Brownian motion') from balls with the radii of these balls. Hence, the walk dimension describes the time-space-scaling of a random walk or a stochastic process.

Let [tau](B(x, R)) denote the exit time of a stochastic process X starting at time 0 in x from a ball with radius R. Then the walk dimension is defined by

[d.sub.W] := ln E[[tau].sup.x](B(x, R)) / ln R. (1)

In graph theory, the limit as R [right arrow] [infinity] of the latter term is taken, but we will see that in the case of fractals (due to the self similarity) we obtain a reasonable value for [d.sub.W] independently of R.

The local symmetry and the self similarity of the fractal (as well as of the approximating sets [V.sub.n], see Figure 3 below) in connection with the arising (natural) strong reflection principle of the corresponding random walks lead to the helpful observation that exit times from balls equal crossing times through subgraphs of [V.sub.n]. Thus, we can reformulate our leaving-a-ball-problem as follows: Supposing that the random walk starts in a vertex of [V.sub.0], we ask for the expected time of the first hitting of another vertex of [V.sub.0] provided that we move along the edges of [V.sub.1]. To be more concrete, we assume that we start in A and want to pass through the graph of [V.sub.1] until we reach B or C. In the following considerations, E[[tau].sup.P] denotes the expected time moving from a point P to the set {B, C}. Starting in point A and making one step, we can reach either point b or point c, both of them with probability 1/2 (all notations of this paragraph refer to Figure 2). Hence, we have

E[[tau].sup.A] = 1/2 (E[[tau].sup.b] + E[[tau].sup.c]) + 1 = E[[tau].sup.c] + 1.


Note that our hitting problem is symmetric with respect to the symmetry axis of the triangle mapping b to c. This implies E[[tau].sup.b] = E[[tau].sup.c]. Similar observations lead to

E[[tau].sup.c] = 1/4 (E[[tau].sup.A] + E[[tau].sup.b] + E[[tau].sup.a] + E[[tau].sup.B]) + 1 = 1/4 (E[[tau].sup.A] + E[[tau].sup.c] + E[[tau].sup.a]) + 1


E[[tau].sup.a] = 1/4 (E[[tau].sup.C] + E[[tau].sup.b] + E[[tau].sup.c] + E[[tau].sup.B]) + 1 = 1/2 E[[tau].sup.c] + 1,

taking into account that E[[tau].sup.B] = E[[tau].sup.C] = 0. From the last three equations one easily calculates that the mean graph crossing time T equals T = E[[tau].sup.A] = 5. Hence, it takes in expectation five steps (of length one) to leave a ball of radius two. By the self similarity of the Sierpinski gasket and the Markov property of the random walks one readily verifies that this time-space-scaling occurs on all magnification scales, and also in the limit approaching continuous time (see also (Lin90)). Thus, the walk dimension of the Sierpinski gasket equals [d.sub.W] = ln 5/ln 2.
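The three equations above form a small linear system that can also be solved mechanically. As a sanity check, here is a minimal sketch in Python with exact rational arithmetic; the helper solve is a plain Gauss-Jordan elimination written only for this illustration:

```python
from fractions import Fraction as F

def solve(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n] for row in M]

# unknowns (E tau^A, E tau^c, E tau^a), using E tau^b = E tau^c;
# the three equations of the text, rearranged as a system A x = 1
A = [[F(1), F(-1), F(0)],            # E tau^A - E tau^c              = 1
     [F(-1, 4), F(3, 4), F(-1, 4)],  # E tau^c - (tau^A+tau^c+tau^a)/4 = 1
     [F(0), F(-1, 2), F(1)]]         # E tau^a - E tau^c / 2          = 1
tA, tc, ta = solve(A, [F(1)] * 3)
print(tA, tc, ta)  # 5 4 3
```

The solution reproduces E[[tau].sup.A] = 5, E[[tau].sup.b] = E[[tau].sup.c] = 4 and E[[tau].sup.a] = 3.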

In order to explain the corresponding analytic counterpart, i.e. to find the analytic renormalization factor, let us first emphasize that the construction of the Laplacian on a nested fractal deeply relies on its finite ramification. Such sets can be approximated by an increasing sequence of finite sets [([V.sub.n]).sub.n[greater than or equal to]0], see Figure 3. As done above, set [V.sub.0] := {A, B, C} and define inductively [V.sub.n] := [[union].sup.3.sub.i=1] [[psi].sub.i]([V.sub.n-1]), n [greater than or equal to] 1.






We say that p, q [member of] [V.sub.n] are n-neighbours if there exists an n-tuple of indices ([j.sub.1], ..., [j.sub.n]) [member of] [{1, 2, 3}.sup.n] such that p, q [member of] [[psi].sub.[j.sub.1]] o ... o [[psi].sub.[j.sub.n]]([V.sub.0]). Every point p in [V.sub.n]\[V.sub.0] has four n-neighbours q [member of] [V.sub.n], denoted in the following by q [~.sub.n] p. We say that any two points from [V.sub.0] form a pair of zero-neighbours. Further, we set

[V.sub.*] := [[union].sub.n[greater than or equal to]0] [V.sub.n].
It holds that K = [[bar.V].sub.*] (the bar denotes the closure with respect to Euclidean norm).

As in the stochastic approach explained above, analytic quantities on a fractal are also approximated by discrete structures. In particular, the energy of a function on the Sierpinski gasket, which is the 'fractal analogue' of the Euclidean standard energy form [epsilon][u] = [integral] [[absolute value of [nabla]u].sup.2] dx, is obtained as the limit of certain discrete 'pre-energies' defined on the finite sets ([V.sub.n]) approximating the fractal. For a general outline of the theory see for example the monograph (Kig01); for a short survey we refer to the first author's paper (F05). For any function u : [V.sub.*] [right arrow] R, these pre-energies are defined by

[[epsilon].sub.n][u] := [[??].sup.n] [summation over (p [~.sub.n] q)] [(u(p) - u(q)).sup.2]. (2)
The number [??] = 5/3 is the energy scaling factor determined by the Gaussian principle as follows: Suppose we are given the values of a function u on the set [V.sub.0], say u(A) = [u.sub.A], u(B) = [u.sub.B] and u(C) = [u.sub.C].

According to (2), we have that

[[epsilon].sub.0][u] = [([u.sub.A] - [u.sub.B]).sup.2] + [([u.sub.A] - [u.sub.C]).sup.2] + [([u.sub.B] - [u.sub.C]).sup.2] (3)

and

[[epsilon].sub.1][u] = 5/3 [[([u.sub.A] - u(b)).sup.2] + [([u.sub.A] - u(c)).sup.2] + [(u(b) - u(c)).sup.2] + [([u.sub.B] - u(a)).sup.2] + [([u.sub.B] - u(c)).sup.2] + [(u(a) - u(c)).sup.2] + [([u.sub.C] - u(a)).sup.2] + [([u.sub.C] - u(b)).sup.2] + [(u(a) - u(b)).sup.2]] (4)

(see also Figure 2 for a better illustration). We minimize [[epsilon].sub.1][u] with respect to the values u(a), u(b), u(c), i.e. we seek the harmonic extension of the function u from [V.sub.0] to [V.sub.1] (see (BF07) for a recently developed algorithm calculating such harmonic extensions with the help of a chaos game algorithm). Here, a simple calculation leads to the minimizers

u(a) = ([u.sub.A] + 2[u.sub.B] + 2[u.sub.C])/5, u(b) = (2[u.sub.A] + [u.sub.B] + 2[u.sub.C])/5 and u(c) = (2[u.sub.A] + 2[u.sub.B] + [u.sub.C])/5,

and in view of (3) and (4) we obtain that [[epsilon].sub.1][u] = [[epsilon].sub.0][u] for this harmonic extension. In other words, 5/3 is the unique number [??] satisfying min {[[epsilon].sub.1][v] : v|[V.sub.0] = u} = [[epsilon].sub.0][u] if we use the general ansatz

[[epsilon].sub.1][u] = [??] [summation over (p [~.sub.1] q)] [(u(p) - u(q)).sup.2].
Note that by self similarity and finite ramification it holds that min {[[epsilon].sub.n][v] : v|[V.sub.0] = u} = [[epsilon].sub.0][u] for all n [greater than or equal to] 1. Using the appropriate energy scaling factor, the sequence of pre-energies converges to a Dirichlet form, leading to the notion of a Laplacian by the Gauss-Green formula (see (Kig01) and the references therein). In (KigLap93), the authors prove that for a nested fractal the exponent [d.sub.S] of the leading term in the eigenvalue counting function of this Laplacian satisfies [d.sub.S]/2 = ln M/ln(M[??]), where M denotes the number of similitudes in the iterated function system and [??] the energy scaling factor introduced above. The number [d.sub.S] is usually called the spectral dimension of the corresponding set.
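The minimization that produces the factor 5/3 is easy to verify numerically. The following sketch (with arbitrarily chosen boundary values; a hypothetical check, not taken from the paper) plugs the stated minimizers into the pre-energies and confirms [[epsilon].sub.1][u] = [[epsilon].sub.0][u]:

```python
from fractions import Fraction as F

rho = F(5, 3)   # the energy scaling factor from the text

def e0(uA, uB, uC):
    return (uA - uB)**2 + (uA - uC)**2 + (uB - uC)**2

def e1(uA, uB, uC, ua, ub, uc):
    # one summand per 1-neighbour pair, i.e. per edge of the three small triangles
    cells = [(uA, ub, uc), (uB, ua, uc), (uC, ua, ub)]
    return rho * sum((p - q)**2 + (p - r)**2 + (q - r)**2 for p, q, r in cells)

uA, uB, uC = F(1), F(7), F(2)      # arbitrarily chosen boundary values
ua = (uA + 2*uB + 2*uC) / 5        # the harmonic extension from the text
ub = (2*uA + uB + 2*uC) / 5
uc = (2*uA + 2*uB + uC) / 5
print(e0(uA, uB, uC), e1(uA, uB, uC, ua, ub, uc))  # 62 62
```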

It is reasonable to expect that the geometry of a body influences the spectral asymptotics of its 'natural' Laplacian as well as the behavior of its 'natural' Brownian motion. In fact, such an interaction can be expressed by a so-called Einstein relation linking Hausdorff, spectral and walk dimension, which express geometric, analytic and stochastic aspects of a set (see (GM83), (Tel06), and (Zho93) as related references). Einstein's relation in its 'dimensional form' reads

2[d.sub.H] = [d.sub.S][d.sub.W], (5)

where [d.sub.H], [d.sub.S] and [d.sub.W] denote Hausdorff, spectral and walk dimension respectively. Taking into account that [d.sub.H] = ln M/(-ln r) (where r < 1 denotes the contraction ratio of the similitudes, see (Fal85) or the original paper (Hut81)), it can be transformed into a relationship between the energy scaling factor [??] and the mean crossing time T introduced above:

M[??] = T (6)

where M denotes the number of similitudes in the iterated function system. Hence, it is sufficient to know only one of the numbers [??] and T in order to give a full quantitative analytic and stochastic description of the set K. However, the identification of either number requires solving a system of linear equations. These equations can be found by analyzing graphs as shown in Figure 2, in particular by evaluating the ramification properties of the figure.

The aim of this paper is to present an algorithm that enables us to determine such crossing times T (and hence, by relation (6), the corresponding energy scaling factor [??]) without relying on a sketch, as we did in the calculations belonging to Figure 2. We present a purely graph-theoretical approach using only the connection matrix of the underlying graph. The developed algorithm is founded on Markov chain theory and can be carried out by a computer. An implementation using Maple[R] is demonstrated in Section 4 below. Another advantage is that the calculations are independent of the ambient space, in particular independent of its dimension (see the example of Sierpinski spaces, discussed in Subsection 3.2). In the present paper, we restrict ourselves to the case of isotropic fractals, i.e. in a metric sense we assume that each edge in the graph has the same length and that the graph, beside its self similarity, carries a high number of symmetries. In the forthcoming paper (FTa) we will weaken these assumptions.

2 A first example: The Sierpinski gasket

The aim of this section is a demonstration of our algorithm in the case of the classical Sierpinski gasket SG(2). This can be considered as a counterpart to the introduction, where the crossing time was calculated by a method which relies more or less on the concrete vision of the set. Define now [v.sub.1] := A = (0, 0), [v.sub.2] := B = (1, 0) and [v.sub.3] := C = (1/2, [square root of (3)]/2). Then the Sierpinski gasket SG(2) is the unique nonempty compact set which is self similar with respect to the family of similarities F := {[[psi].sub.1], [[psi].sub.2], [[psi].sub.3]} (cf. Section 1). An illustration of this set is provided by Figure 1. As before, let [V.sub.0] denote the set of vertices [V.sub.0] = {A, B, C} and [V.sub.1] := [[union].sup.3.sub.i=1] [[psi].sub.i]([V.sub.0]) = {A, B, C, a, b, c}, see Figure 2. The set [V.sub.0] can be considered as a graph with adjacency matrix

[A.sub.0] =
( 0 1 1 )
( 1 0 1 )
( 1 1 0 ).
We also need to introduce the (9 x 9)-block matrix

A =
( [A.sub.0]  0          0          )
( 0          [A.sub.0]  0          )
( 0          0          [A.sub.0]  ).
The so-called connection matrix of the graph [V.sub.1] is the following symmetric (9 x 9)-matrix

C =
( 0 0 0 0 0 0 0 0 0 )
( 0 0 0 1 0 0 0 0 0 )
( 0 0 0 0 0 0 1 0 0 )
( 0 1 0 0 0 0 0 0 0 )
( 0 0 0 0 0 0 0 0 0 )
( 0 0 0 0 0 0 0 1 0 )
( 0 0 1 0 0 0 0 0 0 )
( 0 0 0 0 0 1 0 0 0 )
( 0 0 0 0 0 0 0 0 0 ).
Its interpretation is the following: each row i and each column j represents a vertex of [V.sub.1]. The numbers i and j have the unique representations i = 3([i.sub.1] - 1) + [i.sub.2] and j = 3([j.sub.1] - 1) + [j.sub.2] with [i.sub.2], [j.sub.2] [member of] {1, 2, 3}, and row i (column j) corresponds to the vertex [[psi].sub.[i.sub.1]]([v.sub.[i.sub.2]]) (respectively [[psi].sub.[j.sub.1]]([v.sub.[j.sub.2]])). The entry 1 in row (column) i and column (row) j means that the vertices corresponding to i and j are glued together in the graph [V.sub.1]. For example [[psi].sub.2]([v.sub.1]) = [[psi].sub.1]([v.sub.2]), which means that the entry in row 2 and column 4 equals 1. By the symmetry of C we also have a 1 in row 4 and column 2. The entry 0 in the matrix C means that the corresponding vertices are not glued together. For the input of the algorithm it is sufficient to enter the codes of the pairs of vertices which we want to glue together. In our special case these are the three pairs

12 = 21,

13 = 31,

23 = 32.
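These vertex codes translate directly into connection-matrix positions. A minimal sketch in Python (the helper idx and the 0-based storage of C are illustrative assumptions, not part of the paper's Maple code):

```python
def idx(code):
    """Map a vertex code i1 i2 (the vertex psi_{i1}(v_{i2})) to a matrix
    index in 1..9 via i = 3*(i1 - 1) + i2."""
    i1, i2 = int(code[0]), int(code[1])
    return 3 * (i1 - 1) + i2

# the three glueing pairs of SG(2) given in the text
pairs = [("12", "21"), ("13", "31"), ("23", "32")]

# build the symmetric 9 x 9 connection matrix C (0-based storage)
C = [[0] * 9 for _ in range(9)]
for a, b in pairs:
    i, j = idx(a) - 1, idx(b) - 1
    C[i][j] = C[j][i] = 1

print([(idx(a), idx(b)) for a, b in pairs])  # [(2, 4), (3, 7), (6, 8)]
```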

It is worth pointing out here that the latter fact can be expressed in a mathematically precise manner by using the notion of code spaces (see (B08) for the very latest insight into that topic as well as the references listed therein). According to this approach, every point in the Sierpinski gasket SG(2) has an address in the space [{1, 2, 3}.sup.N]. The address function [pi] : [{1, 2, 3}.sup.N] [right arrow] SG(2) given by

[pi]([i.sub.1][i.sub.2][i.sub.3] ...) := [lim.sub.n[right arrow][infinity]] [[psi].sub.[i.sub.1]] o [[psi].sub.[i.sub.2]] o ... o [[psi].sub.[i.sub.n]]([x.sub.0]) (7)
is well defined, as the limit in (7) is a singleton which does not depend on [x.sub.0] [member of] [R.sup.2]. Note that [pi] is onto, but in general not one-to-one (unless the fractal under consideration is totally disconnected, as, for example, the famous Cantor set). In terms of the address function, the connectivity property above reads

[pi](1[bar.2]) = [pi](2[bar.1]), [pi](1[bar.3]) = [pi](3[bar.1]) and [pi](2[bar.3]) = [pi](3[bar.2]).

It is now an easy task to construct the matrix C from this data using the facts from above. We remark: if one transforms the matrix A into a stochastic matrix in the obvious way, this leads to a transition matrix of a Markov chain whose states form three disconnected triangles. It is now our aim to couple these triangles with the help of the connection matrix C = [([c.sub.ij]).sup.9.sub.i,j=1]. The next step of the algorithm is the following loop: put first i := 1 and j := 1. If [c.sub.ij] = 1, then add row i to row j and column i to column j. If i < j then increase i by 1, i.e. i := i + 1; otherwise increase j by one and reset i, i.e. j := j + 1, i := 1, until j = 9. The resulting (9 x 9)-matrix will be denoted by A'. This procedure will be explicitly demonstrated on this example in Section 4.


In the language of graphs, the matrix A' is the adjacency matrix of the graph shown in Figure 4. For our purpose we have to identify the vertices 2 and 4, 3 and 7, and 6 and 8. For doing so, the next step of our algorithm is the following: if in the connection matrix [c.sub.ij] = 1 for some 1 [less than or equal to] i < j [less than or equal to] 9, then delete row and column number i (this removes the merged-away copy of each glued pair). Do this for all entries of C and turn the result into the (6 x 6) stochastic matrix A'', i.e. divide each entry by the number of non-zero elements in its row. In our special case (the remaining rows and columns correspond, in this order, to the vertices A, c, B, b, a, C) this yields:

A'' =
( 0    1/2  0    1/2  0    0   )
( 1/4  0    1/4  1/4  1/4  0   )
( 0    1/2  0    0    1/2  0   )
( 1/4  1/4  0    0    1/4  1/4 )
( 0    1/4  1/4  1/4  0    1/4 )
( 0    0    0    1/2  1/2  0   ).
If we delete in a further step the rows and columns corresponding to the vertices B and C (two of the essential fixed points of the iterated function system {[[psi].sub.1], [[psi].sub.2], [[psi].sub.3]}), this leads to a Markov chain with two cemeteries (in this example the points B and C), which is shown in Figure 5. The resulting (substochastic) matrix is called A''' and is in our special case (rows and columns ordered A, c, b, a) given by

A''' =
( 0    1/2  1/2  0   )
( 1/4  0    1/4  1/4 )
( 1/4  1/4  0    1/4 )
( 0    1/4  1/4  0   ).
To calculate the crossing time of the graph [V.sub.1], we have to solve the following linear system

(E - A''') E[tau] = 1, (8)

where E is the (4 x 4) identity matrix, 1 = [(1, 1, 1, 1).sup.T] and E[tau] = [(E[[tau].sup.A], E[[tau].sup.c], E[[tau].sup.b], E[[tau].sup.a]).sup.T] (the upper T denotes the transpose); here E[[tau].sup.v] is the mean number of steps a walker needs from vertex v to one of the vertices B or C (compare with Figure 2). The crossing time T we are looking for is given by T = E[[tau].sup.A] = 5, which is easily calculated by hand or using a computer-algebra-system such as Maple[R]. By doing so, we get that the whole vector E[tau] equals [(5, 4, 4, 3).sup.T].


As mentioned at the beginning of this section, our method provides an algorithmic approach for the calculation of mean crossing times. In the introduction we calculated the same value T = 5 for the mean crossing time of the Sierpinski graph [V.sub.1] using explicitly the ramification properties of [V.sub.1]. This leads to serious problems for other examples, for instance for the generalized Sierpinski gaskets SG(m) (see Section 3.1 below) with large m. In these cases it is a difficult and tedious task to write down the linear system (8) by hand using the ramification properties of the graph. In our approach only the adjacency matrix [A.sub.0] of [V.sub.0] and the upper triangle of the connection matrix C (or the identification pairs) are needed. The establishment of the linear system and its solution is then reduced to matrix manipulations which can easily be implemented on a PC. Since we are interested in exact values (i.e. fractions) for the crossing time T instead of numerical values, we decided to implement the algorithm with the computer-algebra-system Maple[R]. This will be explained in detail in Section 4.

3 Generalizations of the Sierpinski gasket

3.1 Generalized planar Sierpinski gaskets

By a generalized Sierpinski gasket we mean the following construction: divide a (regular) triangle [DELTA] into [m.sup.2], m [greater than or equal to] 2, smaller triangles, such that they form a replicating dissection of [DELTA] (this is a self-similar dissection where all the pieces are congruent); compare with Figure 6 for the case m = 4. This dissection consists of two types of triangles: those which are similar to [DELTA] without a rotation (these are the gray colored triangles in Figure 6) and those similar copies of [DELTA] which are rotated (for the example m = 4 this is the white piece of Figure 6). We have M := m(m + 1)/2 triangles of the first and m(m - 1)/2 of the second type. Removing all rotated small triangles leads to a generator of a self-similar set, which we call the generalized Sierpinski triangle SG(m). It is straightforward to compute the Hausdorff dimension of SG(m) as

[d.sub.H]SG(m) = (ln m + ln(m + 1) - ln 2) / ln m, (9)

by using the well-known dimension formula for self-similar sets which can be found for example in (Fal85) or in the original paper (Hut81). It is also possible to describe SG(m) as the attractor of an iterated function system with appropriate mappings [[psi].sub.1], ..., [[psi].sub.M], all having contraction ratio 1/m. Take the three vertices of [DELTA] as a starting set [V.sub.0] and define [V.sub.1] to be the set of points given by

[V.sub.1] := [[union].sup.M.sub.i=1] [[psi].sub.i]([V.sub.0]).
It is our aim to study some Markov chains defined on [V.sub.1]. For the graph [V.sub.0] we have the following adjacency matrix

[A.sub.0] =
( 0 1 1 )
( 1 0 1 )
( 1 1 0 ).
Obviously, [A.sub.0] has rank 3. We now define the (3M x 3M)-block matrix A := diag([A.sub.0], ..., [A.sub.0]), consisting of M copies of [A.sub.0] along the diagonal and zeros elsewhere.
One can also compute A as follows: let [E.sub.k] = ([e.sup.k.sub.ij]) be the (3 x 3M)-matrix with entries

[e.sup.k.sub.ij] = 1 if j = 3(k - 1) + i, and [e.sup.k.sub.ij] = 0 otherwise,

where k = 1, ..., M. Then A is representable as

A = [M.summation over (k=1)] [([E.sub.k]).sup.T] x [A.sub.0] x [E.sub.k]. (10)

This formula will be useful for the implementation of the algorithm in Section 4.
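Formula (10) is straightforward to mirror in code. A small sketch in pure Python (matmul, transpose and block_A are illustrative helpers, not the paper's Maple routines):

```python
def matmul(X, Y):
    """Naive matrix product of two nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(r) for r in zip(*X)]

A0 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # adjacency matrix of V_0

def block_A(M):
    """A = sum_{k=1..M} E_k^T A_0 E_k with E_k the 3 x 3M block-selection matrix."""
    n = 3 * M
    A = [[0] * n for _ in range(n)]
    for k in range(M):                    # k is 0-based here
        E = [[1 if j == 3 * k + i else 0 for j in range(n)] for i in range(3)]
        T = matmul(transpose(E), matmul(A0, E))
        A = [[a + t for a, t in zip(ra, rt)] for ra, rt in zip(A, T)]
    return A

A = block_A(3)                            # M = 3, the SG(2) case
print(A[0][1], A[3][4], A[0][4])  # 1 1 0 -- blocks are disconnected triangles
```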


Each block of A is the adjacency matrix of a subtriangle of [DELTA]. If one transforms the matrix A into a stochastic matrix, in the obvious way, then this matrix corresponds to the transition matrix of a Markov chain whose states form the vertices of M disconnected triangles. These are M Markov chains, each of them acting independently of the others on one subtriangle. For the construction of a Markov chain on [V.sub.1], some further steps are necessary.

Let the vertices of [DELTA] be labelled by A, B and C. We can now assign to each vertex v of a subtriangle the address ij, where v = [[psi].sub.i]([v.sub.j]). To obtain a Markov chain on [V.sub.1] we must identify some of the points ij. In [V.sub.1] we have 3 vertices of degree 2, 3(m - 1) of degree 4 and 1/2 (m - 1)(m - 2) of degree 6. Thus, we need 3(m - 1) + 2 x 1/2 (m - 1)(m - 2) = [m.sup.2] - 1 identifications. These are


The relations [I.sub.1], [I.sub.2] and [I.sub.3] are the identifications of the points on the boundary (they have degree 4), while [I.sub.4] contains those of all the other points (having degree 6). As in the last section, these identifications can be coded in the upper triangle of the so-called connection matrix C of dimension 3M x 3M. With these identifications in form of the matrix C = [([c.sub.ij]).sup.3M.sub.i,j=1] it is now possible to obtain a Markov chain on [V.sub.1] by a coupling of the Markov chains of the subtriangles. The algorithmic construction of the transition matrix runs as follows:

1. Put i := 1 and j := 1 and put in the matrices A and C.

2. If [c.sub.ij] = 0 then go to the next step. If [c.sub.ij] = 1 then add row i to row j and column i to column j, and go to the next step.

3. Put i := i + 1 as long as i < j; if i = j, then put i := 1 and j := j + 1. Repeat step 2 until j = 3M.

4. Call the resulting matrix A'.

5. Put again i := 1 and j := 1.

6. If [c.sub.ij] = 0 then go to the next step. If [c.sub.ij] = 1 then delete row and column with number i in A'. Go to the next step.

7. Put i := i + 1 as long as i < j; if i = j, then put i := 1 and j := j + 1. Repeat step 6 until j = 3M.

8. Transform the actual matrix into a stochastic matrix A'' = ([a''.sub.ij]) by putting [a''.sub.ij] := [a'.sub.ij] / [n.sub.i], where [n.sub.i] is the number of non-zero elements of row i.

9. Delete the two rows and columns corresponding to the vertices B and C and call the result A'''.
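For SG(2), the nine steps above plus the final linear solve can be condensed into a short script. The following is a sketch in Python of steps 1-9 (crossing_time and solve are helpers invented for this illustration, not the authors' Maple implementation; exact fractions stand in for Maple's symbolic arithmetic):

```python
from fractions import Fraction as F

def solve(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n] for row in M]

def crossing_time(copies, pairs, start, cemeteries):
    """Steps 1-9 for a graph glued from complete triangles.
    pairs: 0-based glueing pairs (i, j) with i < j; start and cemeteries are
    0-based vertex indices of the block matrix before any deletion."""
    n = 3 * copies
    # block-diagonal matrix built from 'copies' copies of the triangle
    A = [[F(1) if r != c and r // 3 == c // 3 else F(0) for c in range(n)]
         for r in range(n)]
    for i, j in pairs:                  # steps 1-4: merge vertex i into vertex j
        A[j] = [x + y for x, y in zip(A[j], A[i])]
        for r in range(n):
            A[r][j] += A[r][i]
    merged = {i for i, _ in pairs}
    keep = [k for k in range(n) if k not in merged]                      # steps 5-7
    P = [[A[r][c] for c in keep] for r in keep]
    P = [[x / sum(1 for y in row if y != 0) for x in row] for row in P]  # step 8
    alive = [k for k in range(len(keep)) if keep[k] not in set(cemeteries)]  # step 9
    Q = [[P[r][c] for c in alive] for r in alive]
    m = len(Q)
    t = solve([[(F(1) if r == c else F(0)) - Q[r][c] for c in range(m)]
               for r in range(m)], [F(1)] * m)
    return t[[keep[k] for k in alive].index(start)]

# SG(2): glueing pairs 12=21, 13=31, 23=32 are, 0-based, (1, 3), (2, 6), (5, 7);
# start A = 0, cemeteries B = 4 and C = 8.
print(crossing_time(3, [(1, 3), (2, 6), (5, 7)], 0, [4, 8]))  # 5
```

For SG(m) with m > 2 the same routine applies once the [m.sup.2] - 1 identification pairs are supplied.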

A detailed calculation was provided in the last section for the case of SG(2). For SG(3), we only quote the resulting matrix A''':


The sub-stochastic matrix A''' is the transition matrix of a Markov chain with two cemeteries defined on [V.sub.1]. We are interested in the crossing time of this chain starting in A, that is, the mean number of steps one needs from A to one of the other vertices B or C following the transition rules given by A'''. Therefore we have to solve the equation

(E - A''') E[tau] = 1,

where E[tau] = [(E[[tau].sup.1], ..., E[[tau].sup.k]).sup.T] (as in the last section, the first component refers to the state A), 1 = [(1, ..., 1).sup.T] and k = #[V.sub.1] - 2 = 1/2 [m.sup.2] + 3/2 m - 1 is the dimension (or size) of A'''. The crossing time we are looking for is E[[tau].sup.A] := E[[tau].sup.1]. In the case of SG(3) we get

E[[tau].sup.A] = 90 / 7.

For m = 2 our algorithm yields the value E[[tau].sup.A] = 5, which is well known from the last section. We summarize here the values of the crossing time T for the generalized Sierpinski gaskets SG(2), ..., SG(7):

m    T(m) = E[[tau].sup.A]    ln T(m)/ln m
2    5                        2.3219
3    90/7                     2.3247
4    1030/41                  2.3254
5    8315/197                 2.3255
6    452739/7025              2.3249
7    904836/9823              2.3244

First, we note that for m = 2, ..., 5 our values agree with those obtained in (GM83). Secondly, we want to point out that an explicit formula for the crossing time T(m) is still unknown for arbitrary m. From (9) one sees that [d.sub.H](SG(m)) [right arrow] 2 as m [right arrow] [infinity], which is clear because SG(m) approaches a filled triangle as m [right arrow] [infinity]. Hence, the spectral dimension [d.sub.S] will also converge to 2, which yields in view of (5) the conclusion that [d.sub.W](SG(m)) [right arrow] 2 as m [right arrow] [infinity] as well. So, by (1) it should hold that

[lim.sub.m[right arrow][infinity]] ln T(m) / ln m = 2. (11)
However, the above table documents some abnormalities in the convergence behavior of ln T(m)/ln m for small m. This might come from the fact that the number of vertices having degree 2 or 4 is dominant compared with the number of degree-6 vertices. This effect disappears with growing m, and in the limit (11) holds.

3.2 Sierpinski spaces

In the last section we considered planar generalizations of the classical Sierpinski gasket SG(2). Another way of generalizing SG(2) is a lifting of the construction to higher dimensional Euclidean spaces [R.sup.D], D [member of] N. This section is devoted to the calculation of their mean crossing times.

By the canonical Sierpinski space SS(D) in [R.sup.D] we understand the set obtained from the D-dimensional unit simplex with vertex set [S.sub.D] = {[z.sub.1], ..., [z.sub.D+1]} and the iterated function system made from the D + 1 contractive similitudes (all with ratio 1/2) given by

[[psi].sub.i](x):= [z.sub.i] + 1/2 (x - [z.sub.i]), x[member of][R.sup.D], i = 1, ..., D + 1.

The construction is shown for D = 3 in Figure 7. Note that we obtain the usual Sierpinski gasket SG(2) in D = 2, while the construction in D = 1 leads to the unit interval. In this way, one obtains a full scale of models with increasing ramification complexity.

As SS(D) consists of D + 1 smaller copies of itself (each scaled by the factor 1/2), it holds that [d.sub.H]SS(D) = ln(D + 1)/ln 2 (see (Fal85) or (Hut81) again). It is an interesting observation that the Hausdorff dimension of the fractal SS(D) in [R.sup.3], [R.sup.7], [R.sup.15], ..., [R.sup.[2.sup.n]-1], ... is therefore an integer! In fact, the Hausdorff dimension of the Sierpinski tetrahedron shown in Figure 7 equals 2. An analysis on the Sierpinski spaces SS(D) has been developed in (Kig89), and from the results in (KigLap93) one easily verifies that the spectral dimension is [d.sub.S]SS(D) = 2 ln(D + 1)/ln(D + 3).


Thus, by Einstein's relation (5) it should hold for the walk dimension that [d.sub.W]SS(D) = ln(D+3) / ln 2. In fact, our algorithm becomes rather simple for these models and yields this result. The result can be obtained as well by an elementary proof:
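The three dimension formulas quoted above can be cross-checked against the Einstein relation (5) in a few lines; a small numerical sketch (floating point, so the identity is checked up to rounding):

```python
import math

for D in range(1, 16):
    dH = math.log(D + 1) / math.log(2)              # Hausdorff dimension
    dS = 2 * math.log(D + 1) / math.log(D + 3)      # spectral dimension
    dW = math.log(D + 3) / math.log(2)              # walk dimension
    assert abs(2 * dH - dS * dW) < 1e-12            # Einstein relation (5)
print(round(math.log(3 + 1) / math.log(2), 9))  # 2.0 -- d_H of SS(3) is an integer
```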

Proposition 3.1 The mean crossing time through the graph belonging to SS(D) equals D + 3, D [member of] N.

Proof. Denote [V.sub.0] := [S.sub.D] = {[z.sub.1], ..., [z.sub.D+1]} and define [GAMMA] to be the graph with vertex set [V.sub.0] and edges joining each pair of different points [z.sub.i] and [z.sub.j] from the set [V.sub.0]. Set [V.sub.1] := [[union].sup.D+1.sub.i=1] [[psi].sub.i]([V.sub.0]). Fix a starting point A [member of] [V.sub.0] and start a random walk on the graph obtained by applying the mappings [[psi].sub.1], ..., [[psi].sub.D+1] to the graph [GAMMA]. We have to find out the mean number of steps the walker needs to reach one of the other vertices [v.sub.1], ..., [v.sub.D] in [V.sub.0].

Firstly, we observe that according to this problem we may distinguish four different kinds of points in [V.sub.1], namely: the point A itself; the other vertices [v.sub.1], ..., [v.sub.D] in [V.sub.0] ('kind z'); those points one can reach in one step from A ('kind x'; these are the midpoints of the edges joining A with one of the points of kind z); and finally those points which are midpoints of the edges joining two different points of kind z ('kind y'). Note that we have D points of kind x and D(D - 1)/2 points of kind y. In the following considerations, E[[tau].sup.P], P [member of] {A, x, y, z}, denotes the expected number of steps moving from the point A (or, from a point of kind x, y or z respectively) to the set {[v.sub.1], ..., [v.sub.D]}. Of course, E[[tau].sup.z] = 0. Starting from A, we certainly move to a point of kind x, thus we have

E[[tau].sup.A] = E[[tau].sup.x] + 1. (12)

Sitting at a point of kind x, we can reach 2D neighbours (all of them with the same probability). D - 1 of these neighbours are the other points of kind x, one neighbour is A, D - 1 neighbours are of kind y, and one neighbour of kind z. This leads to

E[[tau].sup.x] = (D - 1)/(2D) E[[tau].sup.x] + 1/(2D) E[[tau].sup.A] + (D - 1)/(2D) E[[tau].sup.y] + 1. (13)

Finally, similar considerations yield

E[[tau].sup.y] = 2/(2D) E[[tau].sup.x] + 2(D - 2)/(2D) E[[tau].sup.y] + 1. (14)

From (12), (13), and (14) we get E[[tau].sup.A] = D + 3.
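Proposition 3.1 is easy to double-check by solving (12)-(14) exactly for concrete D. A sketch in Python with rational arithmetic (solve3 and crossing_time_simplex are helpers invented for this check):

```python
from fractions import Fraction as F

def solve3(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n] for row in M]

def crossing_time_simplex(D):
    """Equations (12)-(14) for the unknowns (E tau^A, E tau^x, E tau^y)."""
    D = F(D)
    A = [[F(1), F(-1), F(0)],                                       # (12)
         [-1 / (2 * D), 1 - (D - 1) / (2 * D), -(D - 1) / (2 * D)], # (13)
         [F(0), -1 / D, 1 - (D - 2) / D]]                           # (14)
    return solve3(A, [F(1)] * 3)[0]

for D in range(1, 8):
    assert crossing_time_simplex(D) == D + 3
print(crossing_time_simplex(2))  # 5, the Sierpinski gasket case D = 2
```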

4 A Note on the Implementation using Maple[R]

In this section we briefly explain the implementation of our algorithm using the computer-algebra-system Maple[R]. We decided to use Maple[R] because it allows symbolic calculations and not only numerical ones. This advantage yields exact values for the crossing times we are looking for (compare with Section 3.1). The implementation can also be done in any other computer-algebra-system such as Mathematica[R] or MATLAB[R]. In Maple[R] we used only standard commands and the linalg package. It is now an easy task to implement steps 1-9 of the algorithm described in Section 3.1 using loops and the commands addrow, addcol, delrows, delcols and mulrow. We remark that the results of all these operations can also be obtained by matrix multiplications and are therefore independent of Maple[R].

At this point we want to indicate the complexity of our algorithm. As mentioned above, the procedure to get A''' can be carried out using only loops of certain matrix multiplications. It is well known (see for example (J04) or (Knu90)) that schoolbook matrix multiplication has cubic complexity, i.e. O([n.sup.3]). For the solution of the resulting linear system, we need to invert the matrix E - A''' and to multiply this inverse with a vector. The latter task has O([n.sup.2]) complexity. Using the Coppersmith-Winograd algorithm (CW90), the complexity of matrix multiplications and inversions can be reduced to O([n.sup.2.376]), which is at the moment the best known bound. Hence, we can conclude the following:

Theorem 4.1 The algorithmic complexity of our algorithm is of order O([n.sup.2.376]).

5 Outlook

In the forthcoming paper (FTb), we will extend our approach to self similar graphs obtained from regular n-gons. In Figure 8, the first two nontrivial examples are given (note that the self similar set obtained from a square would be the square itself). The main difference is that now a fixed point P from the vertex set [V.sub.0] has zero-neighbours of different types, depending on how far the other vertex is from P. So, in the Pentagasket graph we have two types of neighbours, while we have three types of neighbours in the Hexagasket case. In (FTb), we will discuss two possible approaches: the first one allows the random walk to visit only nearest neighbours of a point in the graph, while the second approach allows the particle also to move through cells, where the transition probabilities reflect the distances the particle has to overcome. Hence, one has to deal with weighted adjacency matrices instead. Hereby, the main challenge is to find out for which vectors of transition probabilities the arising linear system (cf. (8)) has a (unique) solution.



[Bar89] M. Barlow, Diffusions on fractals, Lecture Notes in Mathematics, Vol. 1690, Springer (1998).

[B08] M.F. Barnsley, Transformations between self referential sets, to appear in: Amer. Math. Monthly (2008).

[BF07] M.F. Barnsley and U.R. Freiberg, Fractal Transformations of Harmonic Functions, Proc. SPIE Vol. 6417, 64170C, Complexity and Nonlinear Dynamics; Axel Bender; Ed. (2007).

[CW90] D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symb. Comp. 9 pp. 251-280 (1990).

[Fal85] K.J. Falconer, The geometry of fractal sets. Cambridge Univ. Press. (1985).

[F05] U.R. Freiberg, Analysis on fractal objects, Meccanica 40 pp. 419-436 (2005).

[FTa] U.R. Freiberg, C. Thäle, Markov chain algorithms on nested fractal graphs with anisotropic weights. In preparation.

[FTb] U.R. Freiberg, C. Thäle, Crossing times through self similar graphs obtained from regular n-gons. In preparation.

[GM83] J.A. Given and B.B. Mandelbrot, Diffusion on fractal lattices and the fractal Einstein relation, J. Phys. A 16 no. 15 pp. L565-L569 (1983).

[Hut81] J.E. Hutchinson, Fractals and self similarity, Indiana Univ. Math. J. 30 pp. 713-747 (1981).

[J04] D. Jungnickel, Graphs, Networks and Algorithms (Algorithms and Computation in Mathematics), 2nd Edition, Springer (2004).

[Kig89] J. Kigami, A harmonic calculus on the Sierpinski spaces, Japan J. Appl. Math. 6, no.2, pp. 259-290 (1989).

[Kig01] J. Kigami, Analysis on fractals, Cambridge Univ. Press (2001).

[KigLap93] J. Kigami and M.L. Lapidus, Weyl's problem for the spectral distribution of Laplacians on p.c.f. self-similar fractals, Commun. Math. Phys. 158 pp. 93-125 (1993).

[Knu90] D. Knuth, The Art of Computer Programming, Vol. II, 3rd Edition, Addison-Wesley (1990).

[Lin90] T. Lindstrom, Brownian Motion on Nested Fractals, Memoirs Amer. Math. Soc. 420 (1990).

[Tel06] A. Telcs, The Einstein relation for random walks on graphs, J. Stat. Phys. 122, no. 4 pp. 617-645 (2006).

[Zho93] X.Y. Zhou, The resistance dimension, random walk dimension and fractal dimension, J. Theo. Prob. 6, no. 4 pp. 635-652 (1993).

Uta Freiberg (1) and Christoph Thale (2 [dagger])

(1) Mathematisches Institut, Friedrich-Schiller-Universitat Jena, Ernst-Abbe-Platz 2, D-07743 Jena, Germany,

(2) Departement de Mathematiques, Universite de Fribourg, Perolles, Chemin du musee 23, CH-1700 Fribourg, Switzerland, christoph.thaele@unifr.ch

([dagger]) This work was supported by the Schweizerischer Nationalfonds, grant SNF PP002-114715/1.
Publication: DMTCS Proceedings, Jan 1, 2008.