# Non-Euclidean elastica.

0. Introduction. Let G denote the isometry group of a simply connected surface S of constant curvature, and let [Mathematical Expression Omitted], [Mathematical Expression Omitted] and [Mathematical Expression Omitted] denote a frame of left-invariant vector fields in G. This paper considers the problem of minimizing 1/2 [integral of] [k.sup.2](s) ds between limits 0 and T over all integral curves g(t), satisfying the prescribed boundary conditions g(0) = [g.sub.0], g(T) = [g.sub.1], of the following differential system:

[Mathematical Expression Omitted]

as k(t) varies over all square summable functions on each interval [0, T]. We shall be mainly interested in the situation where (1) is the Serret-Frenet differential system associated with the curves in S. Then, the projection to S of each integral curve of (1), associated with a particular function k(t), is a curve whose geodesic curvature is equal to k(t).

We shall refer to this variational problem as the elastic problem, and we shall call the projections of the solution curves to S the elastica. The elastic problem has a rich classical heritage inspired by the following physical situation: a thin elastic rod, when subjected to bending only, assumes the shape of an elastica in its equilibrium position. In this context Euler made the initial study of the planar elastica in 1744. His solutions can be found in A. E. Love's book on elasticity, along with an extensive bibliography on this subject [11]. (The reproductions of the sketches of elastic curves by Euler can also be found in an interesting historical survey by Truesdell [14]). According to Love, much of the development in the theory of elastic rods is based on a discovery of Kirchhoff, known as the kinetic analogue of the elastic problem, that the equations for the equilibrium configurations of an elastic rod are the same as the equations for Lagrange's spinning top.

The geometric significance of minimizing 1/2 [integral of] [k.sup.2] ds, or more generally any functional of k, was recognized by W. Blaschke in his book on Differential Geometry ([3]) under the name of Radon's problem, although according to F. Klein, investigations of motion of the rigid body (the kinetic analogue of the elastic problem) in non-Euclidean spaces were done by Clifford as early as 1874 ([9]; p. 53).

More recently, P. Griffiths has drawn attention to the elastic problem as an important example of non-holonomic variational problems, and obtained the Euler-Lagrange equation for the geodesic curvature k ([5], p. 73). In a subsequent paper with R. Bryant, Griffiths exploits the natural symmetries of the problem to obtain interesting results concerning the existence of closed, free (no constraints on the arc length), elastica in the hyperbolic plane [H.sup.2] ([4]). At about the same time, J. Langer and D. Singer, in a study with somewhat different objectives from [4], reported related results, particularly concerning the existence of closed elastica on the sphere [S.sup.2]. Their findings were largely based on an analysis of the nature of the elliptic functions associated with the solutions of the Euler-Lagrange equation for k ([10]). Even though the above papers contain important results, they all stop short of providing a complete description of the elastica.

This paper arrives at its solutions through the basic geometry of the problem, described by two conservation laws on the Lie algebra of G. The conservation laws are analogous to the conservation of the total energy and of the total angular momentum for the equations of a rigid body. However, unlike the conservation laws for the rigid body, the conservation laws for the elastic problem imply several geometric types of solutions on the Lie algebra of G, which account for the bifurcations in the elastic curves and explain their diversity. The geometry elucidates the non-holonomic nature of the elastic problem, and sets up easy comparisons with Euler's equations for the rigid body. In particular, this geometry provides a natural perspective on Kirchhoff's kinetic analogue for the elastic curves ([11]). More importantly, the geometry leads to natural integration procedures which yield easy descriptions of the elastic curves.

Unlike the studies of [5], and [4], which proceed to their results through Griffiths' formalism based on the theory of exterior differential systems, the present paper treats the problem as an optimal control problem, and obtains its results through Pontryagin's Maximum Principle, and the associated Hamiltonian formalism. The Maximum Principle, here, as well as in other variational problems on Lie groups, leads directly to the appropriate equations on L*, the dual of the Lie algebra of G, from which the geometric properties of the solutions can be easily deduced.

We shall show that the Maximum Principle yields a single Hamiltonian H; the elastica are the projections (down to S) of the integral curves of the corresponding Hamiltonian vector field [Mathematical Expression Omitted] on the cotangent bundle T*G of G. Due to the left-invariant symmetries of the problem, there is another integral of motion M (in addition to H), which allows the problem to be integrated by quadratures.

The two integrals of motion H and M are the non-holonomic analogues of the total energy and the total angular momentum for the equations of a rigid body. In contrast to the rigid body, H is a cylindrical paraboloid rather than an ellipsoid, and M, instead of a sphere, changes its type according to the geometric setting: it is a cylinder in the Euclidean case; a sphere in the elliptic case; and, in the case of the hyperbolic plane, a hyperboloid of one or two sheets, or a double cone. The intersection of the energy paraboloid with these surfaces accounts for all the geometric types of the solutions. The kinetic analogue of Kirchhoff, the mathematical pendulum, appears as a natural parametrization of the intersection of the surfaces H = constant with M = constant. This geometric foundation naturally leads to a unified integration procedure which provides a complete classification of the elastica.

The paper is organized as follows:

Section 1 contains some explanatory material concerning the basic problem of the paper mentioned in the introduction, along with the mathematical preliminaries required for its further analysis.

Section 2 contains a statement of the Maximum Principle, and its use for the optimal problem described in Section 1. This section also contains the basic general facts from symplectic geometry relevant for the derivation of a single Hamiltonian H associated with the problem.

Section 3 deals with the structure of extremals, that is, the integral curves of the Hamiltonian vector fields [Mathematical Expression Omitted] associated with H, based on the decomposition of the cotangent bundle T*G of G as the product G x L*. Each extremal of H, when expressed as a pair (g, p), reveals the conservation laws of H related to the left-invariance of the problem. This section contains the explicit equations of the extremals, their integrals of motion and the geometric analysis of the solutions.

Finally section 4 contains the integration procedures which lead to a complete description of elastic curves. These procedures are based on the particular choices of coordinates each adapted to its own geometric case.

1. Notations, definitions and basic concepts. Throughout the paper G denotes a Lie group, which is the isometry group of a two dimensional space form S. L(G), or simply L, denotes the Lie algebra of G. For any element L in L(G), [Mathematical Expression Omitted] denotes the left invariant vector field whose value at the group identity is equal to L.

Our convention is that the Lie bracket [X, Y] of vector fields on a manifold M is given by [X, Y]f = Y(Xf) - X(Yf) for f [element of] [C.sup.[infinity]](M). Then the Lie bracket [L, M] in L(G) is equal to the value of [[Mathematical Expression Omitted], [Mathematical Expression Omitted]] at the identity. We shall be specifically interested in the case where L(G) is a 3-dimensional Lie algebra with a basis [L.sub.1], [L.sub.2] and [L.sub.3] which satisfies the following Lie bracket table (Table 1).

[TABULAR DATA FOR TABLE 1 OMITTED]

The particular cases of interest are the following:

[Epsilon] = 0, the Euclidean plane [E.sup.2],

[Epsilon] = -1, the Hyperbolic plane [H.sup.2], and

[Epsilon] = 1, the sphere [S.sup.2].

Each of these three cases will be modelled by a closed subgroup G of G[L.sub.3](R) as follows:

The Euclidean plane [E.sup.2]. G is the group of motions of the plane [R.sup.2] realized as the group of all matrices

[Mathematical Expression Omitted]

with R an element of S[O.sub.2](R), and

[Mathematical Expression Omitted].

That is,

[Mathematical Expression Omitted]

with [[Alpha].sup.2] + [[Beta].sup.2] = 1. Then,

[Mathematical Expression Omitted].

The hyperbolic plane [H.sup.2]. G = SO(2, 1), the connected component of the group which leaves the quadratic form [x.sub.1][y.sub.1] + [x.sub.2][y.sub.2] - [x.sub.3][y.sub.3] invariant. Then, L(G) consists of all 3 x 3 matrices A which satisfy [A.sup.tr]Q + QA = 0, for Q = diag(1, 1, -1).

Alternatively,

[Mathematical Expression Omitted]

for some ([a.sub.1], [a.sub.2], [a.sub.3]) in [R.sup.3].

In this situation,

[Mathematical Expression Omitted], [Mathematical Expression Omitted], and [Mathematical Expression Omitted].

We recall that [H.sup.2] is the upper sheet (z [greater than] 0) of the hyperboloid [x.sup.2] + [y.sup.2] - [z.sup.2] = -1.

The sphere [S.sup.2]. G = S[O.sub.3](R), L(G) is the space of all 3 x 3 antisymmetric matrices. Here,

[Mathematical Expression Omitted], [Mathematical Expression Omitted], and [Mathematical Expression Omitted]

Note. It follows, from our convention of the Lie bracket, that for any matrices A and B, [A, B] = BA - AB, which accounts for the signs in Table 1.
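This bracket convention can be checked directly on the three matrix models. The sketch below (Python with NumPy) verifies that [A, B] = BA - AB reproduces the relations [[L.sub.1], [L.sub.2]] = -[Epsilon][L.sub.3], [[L.sub.1], [L.sub.3]] = [L.sub.2], [[L.sub.2], [L.sub.3]] = -[L.sub.1] implied by Table 2 via the isomorphism with Table 1; the explicit basis matrices are our own choices, consistent with the realizations above but not taken verbatim from the text.

```python
import numpy as np

def bracket(A, B):
    # The paper's sign convention: [A, B] = BA - AB.
    return B @ A - A @ B

def basis(eps):
    # One concrete basis per case; the explicit matrices are our choice,
    # consistent with the matrix groups described above.
    L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])  # generator of K
    if eps == 0:        # se(2): two translations and the rotation L3
        L1 = np.zeros((3, 3)); L1[0, 2] = 1.
        L2 = np.zeros((3, 3)); L2[1, 2] = 1.
    elif eps == -1:     # so(2,1): A^tr Q + Q A = 0 with Q = diag(1, 1, -1)
        L1 = np.zeros((3, 3)); L1[0, 2] = L1[2, 0] = 1.
        L2 = np.zeros((3, 3)); L2[1, 2] = L2[2, 1] = 1.
    else:               # so(3): 3 x 3 antisymmetric matrices
        L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
        L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
    return L1, L2, L3

for eps in (0, -1, 1):
    L1, L2, L3 = basis(eps)
    assert np.allclose(bracket(L1, L2), -eps * L3)
    assert np.allclose(bracket(L1, L3), L2)
    assert np.allclose(bracket(L2, L3), -L1)
```

The same three relations hold in all three cases, with [Epsilon] entering only through the bracket of [L.sub.1] with [L.sub.2].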

In each of the preceding cases, let K denote the compact subgroup of G of all matrices

[Mathematical Expression Omitted],

with [[Alpha].sup.2] + [[Beta].sup.2] = 1. G acts on itself by left multiplications. We will regard S as the homogeneous space of all left cosets {gK, g [element of] G}. G also acts on [R.sup.3] by the matrix multiplications (from the left) on the points of [R.sup.3], regarded as column vectors. In particular, K[e.sub.3] = [e.sub.3], and thus the space form S may be identified with a surface in [R.sup.3] given by {g[e.sub.3], g [element of] G}.

As we already stated, our basic control system is the Serret-Frenet differential system in G:

[Mathematical Expression Omitted],

with k(t) regarded as a control function. For each square summable k(t) on an interval [0, T], and each absolutely continuous curve g(t) in G which satisfies equation (1) almost everywhere in [0, T], the pair (g(t), k(t)) will be called a trajectory in G. Each trajectory in G projects onto a curve x(t) in S defined by x(t) = g(t)K = g(t)[e.sub.3].
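As an illustration of the differential system (1) and of its projection to S, the following sketch integrates (1) in the Euclidean case with a constant control k; the matrices and the RK4 stepper are our own assumptions, and the classical fact that constant geodesic curvature k in [E.sup.2] gives a circle of radius 1/k is used only as a check.

```python
import numpy as np

# Left-invariant Serret-Frenet system (1) in the Euclidean case eps = 0:
# dg/dt = g (L1 + k(t) L3), with x(t) = g(t) e3 the projected curve in S.
L1 = np.zeros((3, 3)); L1[0, 2] = 1.0
L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
e3 = np.array([0., 0., 1.])

def integrate(k, g0, T, n=4000):
    """RK4 for the matrix ODE dg/dt = g (L1 + k(t) L3); samples x(t) = g(t) e3."""
    h, g = T / n, g0.copy()
    f = lambda g, t: g @ (L1 + k(t) * L3)
    xs = [g @ e3]
    for i in range(n):
        t = i * h
        k1 = f(g, t); k2 = f(g + h/2*k1, t + h/2)
        k3 = f(g + h/2*k2, t + h/2); k4 = f(g + h*k3, t + h)
        g = g + h/6*(k1 + 2*k2 + 2*k3 + k4)
        xs.append(g @ e3)
    return np.array(xs)

# Constant geodesic curvature k = 2: the projected curve must be a circle
# of radius 1/2, here centred at (0, 1/2) for g(0) = identity.
xs = integrate(lambda t: 2.0, np.eye(3), T=np.pi)
radii = np.hypot(xs[:, 0] - 0.0, xs[:, 1] - 0.5)
assert np.allclose(radii, 0.5, atol=1e-6)
```

A nonconstant k(t) is handled the same way; only the projection, not the integration scheme, depends on the choice of model for S.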

The optimal problem which we will consider in this paper is the following: Let [g.sub.0] and [g.sub.1] be arbitrary but fixed points in G. We shall be interested in finding a trajectory (g(t), k(t)) in G which satisfies g(0) = [g.sub.0], g(T) = [g.sub.1], and which in addition minimizes 1/2 [integral of] [k.sup.2](s) ds between limits 0 and T among all trajectories of (1) which satisfy the same boundary conditions. If the terminal time T is also fixed in advance, then we will call the preceding problem the fixed time problem. Otherwise, it will be called the free time problem.

In the language of optimal control theory, the functional 1/2 [integral of] [k.sup.2](s) ds between limits 0 and T is the total cost of the trajectory (g(t), k(t)). In this context, the function 1/2[k.sup.2] is also called the cost function. The pair [Mathematical Expression Omitted] which solves the above problem will be called optimal relative to the boundary conditions which define it. Of course, in general, there may not be any trajectories which satisfy the given boundary conditions, in which case the optimal problem is not well posed. In the above problem, T is equal to the arc length of the projected curve x(t) in S; evidently, for a fixed time problem, T has to be large enough in order that the problem be well posed for given boundary conditions [g.sub.0] and [g.sub.1]. More precisely, let A([g.sub.0], T) be the set of all points [g.sub.1] of G for which there exists a trajectory (g, k) with the property that g(0) = [g.sub.0] and g(T) = [g.sub.1]. Such a set is called the reachable set from [g.sub.0] in T units of time. The set of points reachable from [g.sub.0] is denoted by A([g.sub.0]) and is equal to [[union].sub.T[greater than]0]A([g.sub.0], T).

It follows that for each T [greater than] 0, A([g.sub.0], T) = [g.sub.0]A(e, T) since the system is left-invariant. Thus, A([g.sub.0], T) = G (resp. A([g.sub.0]) = G) if and only if A(e, T) = G (resp., A(e) = G). In this notation e is equal to the group identity in G. In the language of control theory, systems which satisfy A(e) = G are called controllable (strongly controllable, if A(e, T) = G for each T [greater than] 0). Evidently our optimality problems are well posed if and only if the boundary conditions [g.sub.0], [g.sub.1] and T satisfy [Mathematical Expression Omitted] for the fixed time interval, or [Mathematical Expression Omitted] for the free time interval.

For the problem considered above, it is fairly easy to see that, for each [g.sub.0] and [g.sub.1] in G there exists T [greater than] 0 such that [Mathematical Expression Omitted]. However, in a more general situation controllability properties are not so evident, and there is a considerable literature devoted to finding conditions on two elements [L.sub.1] and [L.sub.3] in an arbitrary Lie algebra such that the system (1) is controllable in G (see for instance, [8]).

2. The Maximum Principle. The Maximum Principle is a far reaching generalization of the work of Weierstrass concerning the existence of strong solutions for the variational problems in the calculus of variations.

In its original publication [13], the Maximum Principle was largely focused on optimal control problems with bounds on the size of control functions, as an important class of variational problems with inequality constraints. The formulation was done for problems in [R.sup.n], with particular emphasis on the bang-bang solutions which arise from the bounds on the controls. This slant of the original manuscript may have obscured its geometric content, and may have been a reason why this important theorem was ignored by the mathematical community outside of the control community.

The geometric significance of the Maximum Principle is particularly clear for the problems of Lagrange in the calculus of variations, such as the elastic problem, and similar problems in mechanics with non-holonomic equality constraints. In order not to drift too far from our problem of interest we shall not pursue the discussion of the Maximum Principle in its most general form, although it will still be necessary to "geometrize" its content. For that reason we will first recall the basic facts concerning symplectic geometry.

If M is an arbitrary manifold, let [T.sup.*]M and TM denote respectively the cotangent bundle and the tangent bundle of M. Let [Omega] denote the natural symplectic form on [T.sup.*]M (for its definition, see Arnold [2]). [Omega] sets up a correspondence between functions on [T.sup.*]M and their Hamiltonian vector fields. For each function F on [T.sup.*]M, [Mathematical Expression Omitted] denotes its Hamiltonian vector field. F and [Mathematical Expression Omitted] are related through the formula: [Mathematical Expression Omitted] for each tangent vector v in [T.sub.x]([T.sup.*]M), and all x in [T.sup.*]M.

Each vector field X on M can be canonically lifted to its Hamiltonian vector field [Mathematical Expression Omitted] through the formula: [H.sub.X]([Xi]) = [Xi](X(x)) for all x in M and all [Mathematical Expression Omitted]. [H.sub.X] is called the Hamiltonian of X. Evidently, this lifting extends to time varying vector fields on M. In particular, each admissible control k(t) defines a time varying vector field [Mathematical Expression Omitted] on R x G. We shall identify [T.sup.*](R x G) with [R.sup.2] x [T.sup.*]G. Then, the Hamiltonian [H.sub.k] of this time varying field is given by

[Mathematical Expression Omitted]

for all [Lambda] [element of] R and all [Mathematical Expression Omitted], g [element of] G. For simplicity of notation we shall denote by [H.sub.1], [H.sub.2], and [H.sub.3] the Hamiltonians of [Mathematical Expression Omitted], [Mathematical Expression Omitted] and [Mathematical Expression Omitted] in [T.sup.*]G. In terms of this notation,

[H.sub.k]([Lambda],[Xi]) = [Lambda]/2[k.sup.2](t) + [H.sub.1]([Xi]) + k(t)[H.sub.3]([Xi])

for all [Lambda] [element of] R, [Xi] [element of] [T.sup.*]G. We shall now restrict [H.sub.k] to [Lambda] = 0, and [Lambda] = -1. These restrictions define two time varying Hamiltonians on [T.sup.*]G:

[Mathematical Expression Omitted]

The Maximum Principle can now be stated, in terms of the above Hamiltonians, as follows:

The Maximum Principle. Suppose that (g(t), k(t)) is an optimal trajectory of the elastic problem on an interval [0, T]. Then, g(t) is the projection of an integral curve [Xi](t) of the Hamiltonian vector field [Mathematical Expression Omitted], with [Alpha] = 0, or [Alpha] = 1, such that:

(i) if [Alpha] = 0, then [Xi](t) is not identically zero on [0, T].

(ii) [Mathematical Expression Omitted] for any u [element of] R, and almost all t in [0, T].

(iii) [Mathematical Expression Omitted] is constant for almost all t in [0, T]. For the free time problem, [Mathematical Expression Omitted].

This formulation of the Maximum Principle is a coordinate invariant paraphrase of the original formulation in [13]. A pair of curves ([Xi](t), k(t)) on an interval [0, T] is said to be an extremal pair if [Xi](t) is an integral curve of [Mathematical Expression Omitted] for either [Alpha] = 0 or [Alpha] = 1, such that (i) and (ii) of the Maximum Principle hold. The curve [Xi](t) of an extremal pair is called an extremal. The extremals which correspond to [Alpha] = 0 are called abnormal, and the extremals which correspond to [Alpha] = 1 are called regular. If there is no danger of ambiguity, a regular extremal will be simply called an extremal.

Suppose now that ([Xi](t), k(t)) is an extremal pair on an interval [0, T]. If [Xi](t) is a regular extremal, then the maximality condition (ii) of the Maximum Principle implies that

[Delta][H.sup.1]/[Delta]k([Xi](t)) = 0, or that - k(t) + [H.sub.3]([Xi](t)) = 0.

Therefore, k(t) = [H.sub.3]([Xi](t)), and hence, every regular extremal [Xi](t) is an integral curve of the Hamiltonian vector field [Mathematical Expression Omitted] which corresponds to [Mathematical Expression Omitted].

The converse is also true; that is, every integral curve [Xi](t) of [Mathematical Expression Omitted] is also a regular extremal.

Suppose now that ([Xi](t), k(t)) is an abnormal extremal. It follows from the maximality condition (ii) of the Maximum Principle that [Xi](t) is an integral curve of the Hamiltonian vector field [Mathematical Expression Omitted] subject to the constraint that [H.sub.3]([Xi](t)) = 0. In general, it may happen that an optimal trajectory (g(t), k(t)) is a projection of an abnormal extremal, which may cause additional theoretical difficulties (see, for instance, [12]). However, it turns out that for the elastic problem, each abnormal extremal is also a regular extremal (this fact will be shown in the next section). Therefore, there is no loss of generality in assuming that each extremal is an integral curve of a single Hamiltonian vector field [Mathematical Expression Omitted] which corresponds to [Mathematical Expression Omitted].

In order to show the structure of extremals it is useful to recall the Poisson bracket. If F and H are any functions on the cotangent bundle [T.sup.*]M of an arbitrary manifold M, then {F, H} denotes their Poisson bracket. The definition and the relevant properties of the Poisson bracket, which are needed in the subsequent developments, are contained in the following proposition.

PROPOSITION 1. Suppose that H and F are smooth functions on [T.sup.*]M. Denote by [Mathematical Expression Omitted] the one-parameter group of diffeomorphisms generated by the Hamiltonian field [Mathematical Expression Omitted] of H. Then,

(i) [Mathematical Expression Omitted] for each x [element of] [T.sup.*]M.

(ii) The Lie bracket [Mathematical Expression Omitted] of two Hamiltonian vector fields [Mathematical Expression Omitted] and [Mathematical Expression Omitted] is a Hamiltonian vector field, and [Mathematical Expression Omitted].

(iii) If [H.sub.X] and [H.sub.Y] are the Hamiltonian functions on [T.sup.*]M which correspond to smooth vector fields X and Y on M then {[H.sub.X], [H.sub.Y]} = [H.sub.[X,Y]].

Remark. All of these statements along with their proofs can be found in Arnold's book ([2], Ch. 8) with an unfortunate transposition of the terminology: Poisson bracket of vector fields in Arnold is equal to the Lie bracket of this paper, while the Poisson bracket of this paper is the same as the Lie bracket of Arnold.

It follows immediately from (iii) that the Hamiltonians of left invariant vector fields on a Lie group G form an algebra under the Poisson bracket. In particular, the Poisson algebra of the Hamiltonians [H.sub.1], [H.sub.2] and [H.sub.3] associated with the left invariant vector fields [Mathematical Expression Omitted], [Mathematical Expression Omitted] and [Mathematical Expression Omitted] satisfies a table isomorphic to Table 1. For the convenience of the reader we list this table separately as Table 2:

Table 2.

{ , }        [H.sub.1]           [H.sub.2]            [H.sub.3]
[H.sub.1]    0                   -[Epsilon][H.sub.3]  [H.sub.2]
[H.sub.2]    [Epsilon][H.sub.3]  0                    -[H.sub.1]
[H.sub.3]    -[H.sub.2]          [H.sub.1]            0

PROPOSITION 2. Each abnormal extremal corresponds to k = 0; its projection on S is a geodesic.

Proof. Let [Xi](t) be an abnormal extremal. That is, [Xi](t) is an integral curve of [Mathematical Expression Omitted] for some function k(t) subject to the condition [H.sub.3] [circle] [Xi](t) = 0. Then, d/dt[H.sub.3] [circle] [Xi](t) = {[H.sub.3], H}([Xi](t)) = 0. (Proposition 1 (i)).

Since {[H.sub.3], H} = {[H.sub.3], [H.sub.1]} = -[H.sub.2] (Table 2), it follows that [H.sub.2]([Xi](t)) = 0. Differentiating this constraint along [Xi], we get that {[H.sub.2], H}([Xi](t)) = 0, or that ({[H.sub.2], [H.sub.1]} + k(t){[H.sub.2], [H.sub.3]})([Xi](t)) = 0. Since {[H.sub.2], [H.sub.1]}([Xi](t)) = [Epsilon][H.sub.3]([Xi](t)) = 0, and {[H.sub.2], [H.sub.3]} = -[H.sub.1], it follows that k(t)[H.sub.1]([Xi](t)) = 0. [H.sub.1]([Xi](t)) cannot be equal to zero, for then [Xi](t) = 0, and that is not possible by (i) of the Maximum Principle. Therefore, k(t) = 0, and our proof is finished.

From here on, we will use H to denote the function [Mathematical Expression Omitted] which generates the regular extremals, and we will simply refer to the integral curves of [Mathematical Expression Omitted] as the extremals. H will be called the Hamiltonian of the elastic problem.

3. The geometric structure of the extremals. Rather than expressing the extremals through the canonical coordinates of [T.sup.*]G, and the familiar differential equations d[x.sub.i]/dt = [Delta]H/[Delta][p.sub.i], and d[p.sub.i]/dt = -[Delta]H/[Delta][x.sub.i], it will be more appropriate to regard [T.sup.*]G as G x [L.sup.*] with [L.sup.*] equal to the dual of the Lie algebra L(G) of G, and proceed with the non-canonical coordinates (g, p) relative to this decomposition. Recall that the above decomposition is accomplished in the following way.

For each g [element of] G, let [[Lambda].sub.g] be the left-translation by g; i.e., [[Lambda].sub.g](x) = gx for all x [element of] G. The differential d[[Lambda].sub.g] at x maps [T.sub.x]G onto [T.sub.gx]G. We shall denote the dual mapping of d[[Lambda].sub.g] by [(d[[Lambda].sub.g]).sup.*]. (This notation suffers from the standard notational weakness in differential geometry, that it suppresses the explicit dependence on the base point x.) In particular then, [(d[[Lambda].sub.[g.sup.-1]]).sup.*] maps [Mathematical Expression Omitted] onto [Mathematical Expression Omitted]. The mapping (g,p) [approaches] [([[Lambda].sub.[g.sup.-1]]).sup.*](p) realizes [T.sup.*]G as G x [L.sup.*].

In this setting, T([T.sup.*]G), the tangent bundle of [T.sup.*]G, is realized as TG x T[L.sup.*]. Upon identifying TG with G x L via the identification (g, A) [approaches] d[[Lambda].sub.g](A) for each (g, A) [element of] G x L, T([T.sup.*]G) becomes G x L x [L.sup.*] x L. We will further think of T([T.sup.*]G) as G x [L.sup.*] x L x [L.sup.*] with the first two coordinates describing the base point. Each element (g, p, X, [Y.sup.*]) of T([T.sup.*]G) is to be regarded as the vector (X, [Y.sup.*]) in L x [L.sup.*] based at (g, p) of [T.sup.*]G.

It is not difficult to show that the natural symplectic form [Omega] on [T.sup.*]G in the above representation of T([T.sup.*]G) takes the following form:

[Mathematical Expression Omitted]

for any (g, p) in G x [L.sup.*], and any vectors ([X.sub.i], [Mathematical Expression Omitted]) in L x [L.sup.*], i = 1, 2.

Remark. Abraham and Marsden ([1], Proposition 4.4.1, p. 345) obtain the same expression as above. However they take [Omega] = -d[Theta], rather than [Omega] = d[Theta], which is the convention used here. [Theta] denotes the canonical 1-form on [T.sup.*]G. This discrepancy in sign disappears when discussing Hamiltonian vector fields, since Abraham and Marsden define Hamiltonian vector fields through the formula [Mathematical Expression Omitted], v), which is the negative of the one used here.

Therefore any smooth function H on G x [L.sup.*] determines its Hamiltonian vector field [Mathematical Expression Omitted] through the formula:

d[H.sub.(g,p)](V, [W.sup.*]) = [W.sup.*](X) - [Y.sup.*](V) + p[V, X],

for all (V, [W.sup.*]) in L x [L.sup.*], and all (g, p) in G x [L.sup.*]. Hence any integral curve (g(t), p(t)) of [Mathematical Expression Omitted] must satisfy:

[W.sup.*]([g.sup.-1](t)dg/dt) = [W.sup.*](X) = d[H.sub.(g(t),p(t))](0, [W.sup.*]),

and

dp/dt(V) = [Y.sup.*](V) = p(t)[V, X] - d[H.sub.(g(t),p(t))](V, 0)

for all (V, [W.sup.*]) in L x [L.sup.*].

Let us now return to the Hamiltonian [Mathematical Expression Omitted] of our elastic problem. Each [H.sub.i] is the Hamiltonian of a left-invariant vector field [Mathematical Expression Omitted], i = 1, 2, 3. Therefore, each [H.sub.i] is a linear function on [L.sup.*]. For that reason, [Mathematical Expression Omitted] is a function on [L.sup.*] only; that is, H([g.sub.1], p) = H([g.sub.2], p) for any [g.sub.1], [g.sub.2] in G, and any p in [L.sup.*]. Hence, d[H.sub.(g,p)](V, 0) = 0 for all V in L. We then have the following proposition.

PROPOSITION 3. A curve (g(t), p(t)) is an extremal if and only if

[Mathematical Expression Omitted],

and p(t)([g.sup.-1](t)Vg(t)) = [p.sub.0](V) for some [p.sub.0] in [L.sup.*] and any V in L.

Proof. A curve (g(t), p(t)) is an integral curve of [Mathematical Expression Omitted] if and only if

[W.sup.*]([g.sup.-1](t)dg/dt(t)) = d[H.sub.(g(t),p(t))](0, [W.sup.*]) for all [W.sup.*] in [L.sup.*],

and dp/dt(t)(V) = p(t)[V, [g.sup.-1]dg/dt] for all V in L.

Suppose now that q(s) is any curve in [L.sup.*] such that q(0) = p(t), and dq/ds(0) = [W.sup.*]. Then,

[Mathematical Expression Omitted].

Since each [H.sub.i] is a linear function, d/ds [H.sub.i] [circle] q(s) = [H.sub.i] [circle] dq/ds, and therefore,

[Mathematical Expression Omitted]

Then,

[W.sup.*] ([g.sup.-1](t)dg/dt) = [H.sub.3](p(t))[H.sub.3]([W.sup.*]) + [H.sub.1]([W.sup.*])

for all [W.sup.*] in [L.sup.*], and therefore,

[g.sup.-1](t)dg/dt = [H.sub.3](p(t))[L.sub.3] + [L.sub.1].

This proves that [Mathematical Expression Omitted].

The differential equation for p(t) is then:

dp/dt(t)(V) = p(t)[V, [H.sub.3](p(t))[L.sub.3] + [L.sub.1]].

The solutions of this equation are of the form p(t)(V) = [p.sub.0](g(t)V[g.sup.-1](t)), for some [p.sub.0] in [L.sup.*], as can be immediately verified by differentiation. The proof is now finished.

Remark. p(t)([g.sup.-1](t)Vg(t)) = [p.sub.0](V) expresses the conservation laws, and is valid for any left-invariant system on a Lie group G as long as the cost function is also left-invariant. This conservation law may also be viewed as a consequence of E. Noether's theorem ([2]).
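The conservation law is easy to test numerically. The sketch below (Python/NumPy; the explicit se(2) basis matrices, the initial data and the RK4 stepper are our own assumptions) integrates the extremal equations of Proposition 3 in the Euclidean case [Epsilon] = 0 and checks that p(t)([g.sup.-1](t)[L.sub.i]g(t)) stays constant:

```python
import numpy as np

# Extremal equations of Proposition 3 in the Euclidean case (eps = 0):
#   g^{-1} dg/dt = H3 L3 + L1,
#   dH1/dt = H3 H2,  dH2/dt = -H3 H1,  dH3/dt = -H2,
# where Hi(t) = p(t)(Li).  We verify the conservation law numerically:
# p(t)(g^{-1}(t) V g(t)) should be constant for each basis vector V.
L1 = np.zeros((3, 3)); L1[0, 2] = 1.0   # assumed basis of se(2)
L2 = np.zeros((3, 3)); L2[1, 2] = 1.0
L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def coords(X):
    # Coordinates of X = a1 L1 + a2 L2 + a3 L3 in this basis.
    return np.array([X[0, 2], X[1, 2], X[1, 0]])

def rhs(state):
    g, H = state[:9].reshape(3, 3), state[9:]
    dg = g @ (L1 + H[2] * L3)
    dH = np.array([H[2] * H[1], -H[2] * H[0], -H[1]])
    return np.concatenate([dg.ravel(), dH])

h = 1e-3
state = np.concatenate([np.eye(3).ravel(), [0.3, -0.7, 1.1]])  # p0 = H(0)
vals = []
for _ in range(5000):
    g, H = state[:9].reshape(3, 3), state[9:]
    ginv = np.linalg.inv(g)
    vals.append([H @ coords(ginv @ V @ g) for V in (L1, L2, L3)])
    k1 = rhs(state); k2 = rhs(state + h/2*k1)
    k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
    state = state + h/6*(k1 + 2*k2 + 2*k3 + k4)
vals = np.array(vals)
assert np.allclose(vals, vals[0], atol=1e-6)   # p(t)(g^{-1} Li g) = p0(Li)
```

The same check works verbatim in the spherical and hyperbolic cases once the basis matrices and the sign of [Epsilon] in dH2/dt are changed accordingly.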

We shall exploit this symmetry condition more systematically later on. For the moment, we single out the relevant properties of the extremals which make a connection with the equations for a rigid body.

PROPOSITION 4. Suppose that [Xi](t) = (g(t), p(t)) is a regular extremal generated by a curvature function k(t). Denote by [H.sub.i](t) each curve p(t)([L.sub.i]), i = 1, 2, 3. Then

(a) k(t) = [H.sub.3](t).

(b) d[H.sub.1]/dt = [H.sub.3](t)[H.sub.2](t), d[H.sub.2]/dt = [H.sub.3](t) ([Epsilon] - [H.sub.1](t)), and d[H.sub.3]/dt = -[H.sub.2](t).

(c) [Mathematical Expression Omitted] is constant. We shall denote this constant by M.

(d) Let u(t) = [k.sup.2](t). Then, u(t) satisfies the following elliptic differential equation

[(du/dt).sup.2] + [u.sup.3] - 4[u.sup.2](H - [Epsilon]) + 4u([H.sup.2] - M) = 0.

Proof. As remarked earlier, k(t) = [H.sub.3](t) is a consequence of the Maximum Principle. Equations (b) follow from the fact that d[H.sub.i]/dt = {[H.sub.i], H}(t) = {[H.sub.i], [H.sub.1]}(t) + [H.sub.3](t){[H.sub.i], [H.sub.3]}(t). The remaining details follow from Table 2.

To prove (c) note that

dM/dt = 2[H.sub.1]d[H.sub.1]/dt + 2[H.sub.2]d[H.sub.2]/dt + 2[Epsilon][H.sub.3]d[H.sub.3]/dt.

Upon substituting from part (b) we get that

dM/dt = 2([H.sub.1][H.sub.2][H.sub.3] + [H.sub.2][H.sub.3]([Epsilon] - [H.sub.1]) - [Epsilon][H.sub.3][H.sub.2]) = 0.

Thus M(t) is constant.

In order to prove (d) start with [Mathematical Expression Omitted]. Hence,

[Mathematical Expression Omitted].

Thus,

[(du/dt).sup.2] = 4u (M - [H.sup.2] + Hu - 1/4[u.sup.2] - [Epsilon]u), or

[(du/dt).sup.2] + [u.sup.3] - 4[u.sup.2](H - [Epsilon]) + 4u([H.sup.2] - M) = 0.

The proof is now finished.

Note. Evidently, k(t) = [H.sub.3](t) = 0 is a solution of the differential system (b) with [H.sub.1](t) = constant, and [H.sub.2](t) = constant. Recall that each abnormal extremal is of this form (Proposition 2); therefore, each abnormal extremal is also regular, as claimed in Section 2.
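Proposition 4 can be verified numerically. In the sketch below (Python/NumPy) we take H = [H.sub.1] + 1/2 [H.sub.3.sup.2] and M = [H.sub.1.sup.2] + [H.sub.2.sup.2] + [Epsilon][H.sub.3.sup.2]; these explicit forms are our reconstruction, inferred from equations (b) and (d), since the displayed formulas are omitted above. We integrate system (b) and check that H and M are constant and that u = [k.sup.2] satisfies equation (d):

```python
import numpy as np

def flow(eps, H0, T=6.0, n=6000):
    """RK4 for system (b): H1' = H3 H2, H2' = H3 (eps - H1), H3' = -H2."""
    h, Hs = T / n, [np.array(H0, float)]
    f = lambda H: np.array([H[2]*H[1], H[2]*(eps - H[0]), -H[1]])
    for _ in range(n):
        H = Hs[-1]
        k1 = f(H); k2 = f(H + h/2*k1); k3 = f(H + h/2*k2); k4 = f(H + h*k3)
        Hs.append(H + h/6*(k1 + 2*k2 + 2*k3 + k4))
    return np.array(Hs)

for eps in (0, 1, -1):
    Hs = flow(eps, [0.2, 0.4, 1.3])
    H = Hs[:, 0] + 0.5 * Hs[:, 2]**2                     # reconstructed H
    M = Hs[:, 0]**2 + Hs[:, 1]**2 + eps * Hs[:, 2]**2    # reconstructed M
    assert np.allclose(H, H[0], atol=1e-8) and np.allclose(M, M[0], atol=1e-8)
    # Equation (d), with u = k^2 = H3^2 and du/dt = 2 H3 H3' = -2 H3 H2:
    u, du = Hs[:, 2]**2, -2 * Hs[:, 2] * Hs[:, 1]
    res = du**2 + u**3 - 4*u**2*(H[0] - eps) + 4*u*(H[0]**2 - M[0])
    assert np.allclose(res, 0.0, atol=1e-7)
```

The residual of (d) vanishes identically, not merely approximately, because (d) is an algebraic consequence of the two reconstructed integrals; the numerical tolerance only absorbs the drift of the fixed-step integrator.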

The papers of Bryant-Griffiths and Langer-Singer obtain their solutions through equation (d) in the preceding proposition. We shall proceed more geometrically and analyze our solutions in terms of the intersection of the invariant surfaces [Mathematical Expression Omitted] and [Mathematical Expression Omitted]. As mentioned in the introduction, this procedure is analogous to the classical treatment of the rigid body motion, with H equal to the total energy of the body, while M is equal to the total angular momentum of the body; for the rigid body these surfaces are, respectively, an ellipsoid and a sphere. (see [2] for further details.)

In the elastic problem, [Mathematical Expression Omitted] is a cylindrical paraboloid in the ([H.sub.1], [H.sub.2], [H.sub.3]) space, while [Mathematical Expression Omitted] is a sphere only in the case of S[O.sub.3](R). In the Euclidean case, [Mathematical Expression Omitted] is a cylinder, while in the hyperbolic case the surface [Mathematical Expression Omitted] depends on the sign of M: for M [greater than] 0, it is a hyperboloid of one sheet; for M = 0, it is a double cone; and for M [less than] 0, it is a hyperboloid of two sheets.

There is a natural angle [Theta] defined in the [H.sub.1], [H.sub.2] plane which links the elastica to the mathematical pendulum. This angle is obtained as follows: Let ([H.sub.1](t), [H.sub.2](t), [H.sub.3](t)) be a curve in the intersection of the surfaces H = constant and M = constant. Then,

[Mathematical Expression Omitted]

Hence, [Mathematical Expression Omitted] is constant. We shall denote this constant by [J.sup.2]; that is,

[J.sup.2] = 1 - 2[Epsilon]H + M.
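The relation [J.sup.2] = 1 - 2[Epsilon]H + M can be checked algebraically once explicit formulas for H and M are in hand. The formulas used below, H = [H.sub.1] + (1/2)[H.sub.3.sup.2] and M = [H.sub.1.sup.2] + [H.sub.2.sup.2] + [Epsilon][H.sub.3.sup.2], are editorial reconstructions from the surrounding derivation (they are consistent with the definition of [Theta] below), not quotations from the paper:

```python
import random

def J_squared(H1, H2, H3, eps):
    # Assumed reconstructions of the Hamiltonian and the Casimir-type integral:
    H = H1 + 0.5 * H3 ** 2
    M = H1 ** 2 + H2 ** 2 + eps * H3 ** 2
    return 1 - 2 * eps * H + M

random.seed(1)
for eps in (1, -1):
    for _ in range(100):
        H1, H2, H3 = (random.uniform(-3, 3) for _ in range(3))
        lhs = J_squared(H1, H2, H3, eps)
        rhs = (eps - H1) ** 2 + H2 ** 2       # squared radius defining Theta
        assert abs(lhs - rhs) < 1e-9          # uses eps**2 == 1
print("J^2 identity verified")
```

Under these assumptions, [J.sup.2] is exactly the squared radius [([Epsilon] - [H.sub.1]).sup.2] + [H.sub.2.sup.2] used to define the angle [Theta].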

J = 0 implies that [H.sub.3] = constant, or that k(t) = constant. This situation occurs when the surfaces H = constant and M = constant intersect at isolated points. It is easy to check that any constant value of the curvature yields a solution. For the hyperbolic plane, k [less than] 1 occurs when M [greater than] 0, k = 1 occurs for M = 0, and k [greater than] 1 occurs when M [less than] 0. Assuming that [J.sup.2] [not equal to] 0, let [Theta] be the angle defined by:

[Epsilon] - [H.sub.1](t) = J cos [Theta](t), and [H.sub.2](t) = J sin[Theta](t).

Then

-J sin[Theta](t)d[Theta]/dt = -d[H.sub.1]/dt = -[H.sub.3](t)[H.sub.2](t) = -[H.sub.3](t)J sin [Theta](t).

Therefore, d[Theta]/dt = [H.sub.3](t) = k(t). Thus, [Theta] is the angle that the tangent vector to the elastic curve makes with a parallel direction along the curve. Differentiating further, [d.sup.2][Theta]/d[t.sup.2] = d[H.sub.3]/dt = -[H.sub.2](t) = -J sin[Theta](t), or [d.sup.2][Theta]/d[t.sup.2] + J sin [Theta](t) = 0. This is the equation of a mathematical pendulum. Furthermore,

[Mathematical Expression Omitted]

Hence, H - [Epsilon] is its "total energy". It follows that,

(2) [H.sub.3](t) = d[Theta]/dt = [+ or -] [square root of] 2[(H - [Epsilon]) + J cos [Theta](t)].

In the Euclidean case [Epsilon] = 0, and H is equal to the total energy of the pendulum.
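The pendulum energy can also be watched along a numerically integrated trajectory. This is an editorial sketch: the right-hand sides below are the reconstructed system [H[prime].sub.1] = [H.sub.2][H.sub.3], [H[prime].sub.2] = [H.sub.3]([Epsilon] - [H.sub.1]), [H[prime].sub.3] = -[H.sub.2], and the identification H = [H.sub.1] + (1/2)[H.sub.3.sup.2] is an assumption consistent with equation (2):

```python
def rhs(y, eps):
    H1, H2, H3 = y
    return (H2 * H3, H3 * (eps - H1), -H2)

def rk4_step(y, h, eps):
    def shift(a, b, c):                  # a + c*b, componentwise
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = rhs(y, eps)
    k2 = rhs(shift(y, k1, h / 2), eps)
    k3 = rhs(shift(y, k2, h / 2), eps)
    k4 = rhs(shift(y, k3, h), eps)
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def energy(y, eps):                      # (1/2)k^2 - J cos(theta) = H - eps,
    H1, _, H3 = y                        # since J*cos(theta) = eps - H1
    return 0.5 * H3 ** 2 + H1 - eps

eps = -1                                 # hyperbolic case, sample initial data
y = (0.4, 0.3, 1.1)
E0 = energy(y, eps)
for _ in range(1000):
    y = rk4_step(y, 0.01, eps)
assert abs(energy(y, eps) - E0) < 1e-6
print("pendulum energy H - eps is conserved along the flow")
```

Analytically dE/dt = [H.sub.3](-[H.sub.2]) + [H.sub.2][H.sub.3] = 0, so the only drift seen numerically is the integrator's own error.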

Kirchhoff was the first to notice this connection with the mathematical pendulum, and he named it the kinetic analogue of the elastic problem (see A. E. Love [11] for further details). There are two distinct cases, which A. E. Love calls inflectional and non-inflectional. The inflectional case corresponds to H [less than] J. In this case there is a cut-off angle [[Theta].sub.c], and the pendulum oscillates inside this angle. The curvature k(t) changes sign at the cut-off angle (hence the name). The remaining case, H [greater than or equal to] J, corresponds to the non-inflectional case, since the pendulum has enough energy to go over the top. The curvature does not change sign.

In the non-Euclidean case, the energy is decreased by the sectional curvature, and everything else remains the same. In their paper [10], Langer and Singer call the inflectional case wave-like, and the non-inflectional case orbit-like. I will follow the older source and refer to the two cases as inflectional and non-inflectional. The figure below illustrates some of the typical cases.

4. Integrability. The appropriate choice of coordinates for G, suitable for integrating the equation [Mathematical Expression Omitted], is determined both by the symmetry of the problem and by the structure of G. The structure of G becomes relevant when identifying L*(G) with L(G); both [SO.sub.3](R) and SO(2, 1) are simple groups, and this identification is done via the Killing form. The fact that the Killing form is degenerate in the Euclidean group separates the Euclidean case from the non-Euclidean ones. We shall elaborate on some details of the Euclidean case which were not explicitly stated in the classical literature, but which may be relevant for extensions of analogous results to higher dimensions.

The Euclidean case. Every element L of L(G) is of the form

[Mathematical Expression Omitted].

We will identify L*(G) with L(G) via the pairing

[Mathematical Expression Omitted].

Then each extremal curve p(t) is identified with a curve P(t) in L(G) via the formula

<P(t),L> = p(t)(L) for all L [element of] L(G).

If [Mathematical Expression Omitted], then

[p.sub.1](t) = p(t)([L.sub.1]) = [H.sub.1](t), [p.sub.3](t) = p(t)([L.sub.3]) = [H.sub.3](t),

and [p.sub.2](t) = p(t)([L.sub.2]) = [H.sub.2](t). Hence,

[Mathematical Expression Omitted].

We shall now examine the conservation laws of Proposition 3 in terms of the matrix P.

The relation p(t)([g.sup.-1](t)Vg(t)) = constant, means that <P(t), [g.sup.-1](t)Vg(t)> = constant. Denote g(t) and V by the following matrices

[Mathematical Expression Omitted], and [Mathematical Expression Omitted] with [Mathematical Expression Omitted], and [Mathematical Expression Omitted].

Then,

[Mathematical Expression Omitted],

and

<P(t), [g.sup.-1] Vg> = h(t) [center dot] [R.sup.tr] (Ax(t) + a) + [H.sub.3](t) [Alpha],

where we have used h(t) to denote the vector [Mathematical Expression Omitted].

It is easy to verify that <P(t), [g.sup.-1] Vg> = constant is equivalent to R(t)h(t) = constant, and

[Mathematical Expression Omitted].

Let us now consider the first conservation law R(t)h(t) = c. Since [Mathematical Expression Omitted], it follows that the squared norm of c is equal to [J.sup.2]. Let us now integrate the extremal equations in terms of this conservation law:

J = 0 implies that c = 0, which in turn implies that [H.sub.3](t) = constant. Therefore, the elastica is either a circle, or a straight line.

If we assume that c [not equal to] 0, then c can always be rotated to c = J[e.sub.1]. Then the second conservation law

[Mathematical Expression Omitted],

reduces to J[x.sub.2](t) + [H.sub.3](t) = constant.

Denote by [Phi](t) the angle defined by

[Mathematical Expression Omitted].

Since h(t) = [R.sup.-1](t)c, it follows that [H.sub.1] (t) = J cos [Phi], and [H.sub.2](t) = J sin [Phi]. Thus, [Theta](t) = [Pi] - [Phi](t). Then [Mathematical Expression Omitted], and hence [dx.sub.1]/dt = cos [Phi](t) = - cos [Theta], and [dx.sub.2]/dt = sin [Phi] = sin [Theta].

When parametrized by [Theta], these equations become

[dx.sub.1]/d[Theta] = [- or +] cos [Theta]/[square root of 2(H + J cos [Theta])], [dx.sub.2]/d[Theta] = [+ or -] sin [Theta]/[square root of 2(H + J cos [Theta])].

The second of the above equations immediately yields that,

J[x.sub.2]([Theta]) [+ or -] [square root of 2(H + J cos [Theta])] = constant,

which agrees with the information provided by the second conservation law. Therefore, the second conservation law contains no extra information. (This, however, is not the case for the elastic problem in higher dimensions.)
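The claim that the second conservation law is redundant can be checked term by term. This is an editorial check of the identity d/d[Theta][J[x.sub.2] + [square root of 2(H + J cos [Theta])]] = 0 on the + branch, with illustrative values of H and J:

```python
import math

H, J = 2.0, 0.7                          # illustrative values with H > J

def s(theta):                            # sqrt(2(H + J cos theta)); eps = 0 here
    return math.sqrt(2 * (H + J * math.cos(theta)))

def dx2_dtheta(theta):                   # + branch of the equation for x2
    return math.sin(theta) / s(theta)

def ds_dtheta(theta):                    # chain rule applied to s
    return -J * math.sin(theta) / s(theta)

# d/dtheta [ J*x2 + s ] = J*dx2/dtheta + ds/dtheta = 0:
for theta in (0.3, 1.0, 2.5, 4.0):
    assert abs(J * dx2_dtheta(theta) + ds_dtheta(theta)) < 1e-12
print("J*x2(theta) + sqrt(2(H + J cos theta)) is constant")
```

The two derivative terms cancel identically, which is exactly why the second conservation law adds no information here.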

The first equation is given by:

[x.sub.1]([Theta]) = [- or +] [integral of] cos [Phi]/[square root of 2(H + J cos [Phi])] d[Phi] between limits [Theta] and [[Theta].sub.0].

With the exception of a few values of H and J, this integral is given in terms of elliptic functions. For the convenience of the reader we include a sketch of all the geometric types of solutions. As we mentioned in the introduction, the sketches of the Euclidean elastica were known to L. Euler, and also appear in the book of A. E. Love ([11]).
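Although the closed form requires elliptic integrals, the Euclidean elastica can be traced numerically. This is an editorial sketch using composite Simpson quadrature on one branch of the equations for [x.sub.1] and [x.sub.2], with illustrative parameter values:

```python
import math

H, J = 2.0, 0.7                              # non-inflectional sample: H > J

def s(t):
    return math.sqrt(2 * (H + J * math.cos(t)))

def simpson(f, a, b, n=2000):                # composite Simpson rule, n even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * h)
    return acc * h / 3

def x1(theta):                               # minus branch, starting at theta0 = 0
    return simpson(lambda t: -math.cos(t) / s(t), 0.0, theta)

def x2(theta):
    return simpson(lambda t: math.sin(t) / s(t), 0.0, theta)

# The conservation law gives the closed form x2 = (s(0) - s(theta))/J:
for theta in (0.5, 1.5, 3.0):
    assert abs(x2(theta) - (s(0.0) - s(theta)) / J) < 1e-8
print("quadrature for (x1, x2) agrees with the conservation law")
```

Sampling (x1([Theta]), x2([Theta])) over a grid of [Theta] values reproduces the familiar sketches of the planar elastica.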

Non-Euclidean case. We will denote the elements of the Lie algebra by

[Mathematical Expression Omitted],

with [Epsilon] = 1 when G = [SO.sub.3](R), and [Epsilon] = -1, when G = SO(2, 1).

L*(G) will be identified with L(G) via the Killing form K(A, B) = [a.sub.1][b.sub.1] + [a.sub.2][b.sub.2] + [Epsilon][a.sub.3][b.sub.3] = -[Epsilon]/2 Tr(AB) for any

[Mathematical Expression Omitted], and [Mathematical Expression Omitted] in L(G).

Evidently, K(gA[g.sup.-1], gB[g.sup.-1]) = K(A, B) for any g [element of] G. As in the Euclidean case, the projection p(t) of an extremal curve (g(t),p(t)) is identified with a curve P(t) in L(G) via the above pairing. If

[Mathematical Expression Omitted],

then [p.sub.1](t) = K(P(t), [L.sub.1]) = p(t)([L.sub.1]) = [H.sub.1](t), [p.sub.2](t) = K(P(t), [L.sub.2]) = p(t)([L.sub.2]) = [H.sub.2](t), and -[Epsilon][p.sub.3](t) = K(P(t), [L.sub.3]) = [H.sub.3](t). Thus,

[Mathematical Expression Omitted].

The invariance condition p(t)([g.sup.-1] Vg) = constant becomes K(P(t), [g.sup.-1](t)Vg(t)) = constant, for any V in L(G). This means that K(g(t)P(t)[g.sup.-1](t), V) = constant for V [element of] L(G), which in turn implies that g(t)P(t)[g.sup.-1](t) = [Lambda] for some element [Lambda] of L(G). Therefore, the spectrum of P must be constant. The characteristic polynomial f([Lambda]) of P(t) is equal to:

[Mathematical Expression Omitted].

Hence, the eigenvalues of P(t) are given by [Lambda] = 0, and [[Lambda].sup.2] + [Epsilon]M = 0 with [Mathematical Expression Omitted].
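The eigenvalue statement can be tested numerically against standard matrix models of the two Lie algebras. These particular matrix bases are editorial assumptions (the paper's basis is not reproduced in this excerpt), chosen so that [Epsilon] = +1 gives so(3) and [Epsilon] = -1 gives so(2, 1), with M = [p.sub.1.sup.2] + [p.sub.2.sup.2] + [Epsilon][p.sub.3.sup.2]:

```python
import numpy as np

def P_matrix(p, eps):
    # eps = +1: skew-symmetric, so(3); eps = -1: skew with respect to
    # the form diag(1, 1, -1), i.e. an element of so(2, 1). (Assumed basis.)
    p1, p2, p3 = p
    return np.array([[0.0,       -p3,        p2],
                     [p3,         0.0, -eps * p1],
                     [-eps * p2,  p1,        0.0]])

rng = np.random.default_rng(0)
for eps in (1, -1):
    p = rng.normal(size=3)
    M = p[0] ** 2 + p[1] ** 2 + eps * p[2] ** 2
    P = P_matrix(p, eps)
    # det(lam*I - P) should equal lam^3 + eps*M*lam = lam*(lam^2 + eps*M):
    for lam in (0.7, -1.3, 2.1):
        char = np.linalg.det(lam * np.eye(3) - P)
        assert abs(char - (lam ** 3 + eps * M * lam)) < 1e-9
print("characteristic polynomial is lambda*(lambda^2 + eps*M)")
```

With these models, the eigenvalues are 0 and the two roots of [[Lambda].sup.2] + [Epsilon]M = 0, as stated above.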

Thus, the Killing form identifies M as the characteristic frequency of P(t). In the case of [SO.sub.3](R), M [greater than] 0, and hence [Lambda] = [+ or -]i[square root of M]. However, in SO(2, 1), M can be of either sign; aside from the geometric interpretations in Fig. 3, the choice of sign has a different interpretation in this context: M [greater than or equal to] 0 means that [[Lambda].sup.2] - M = 0 has real solutions, and therefore the non-zero eigenvalues of P(t) are real. Bryant and Griffiths called this case non-compact [4]. Hence, the non-compact case occurs when the surface [Mathematical Expression Omitted] is a hyperboloid of one sheet. M [less than] 0 means that the non-zero eigenvalues [Lambda] are imaginary, and [Lambda] = [+ or -]i[square root of [absolute value of M]]. Bryant and Griffiths [4] called this case compact. Thus, the compact case occurs when [Mathematical Expression Omitted] is a hyperboloid of two sheets.

The remaining case, which is not discussed in [4], is given by M = 0. Then all the eigenvalues of P are zero. On [SO.sub.3](R), M = 0 implies that k = 0, and the corresponding elastica are geodesics. But on SO(2, 1), M = 0 is the light cone, and the integration of that case requires special coordinates.

It will be convenient, for further integration, to associate a vector

[Mathematical Expression Omitted]

to each matrix element

[Mathematical Expression Omitted]

in L(G). It then follows that the matrix commutator AB - BA is associated with [Mathematical Expression Omitted]. Recalling our convention that [A, B] = BA - AB, it follows that [Mathematical Expression Omitted]. Then, it is easy to show that [Mathematical Expression Omitted] for any A in L(G), and any g [element of] G. We shall be using this fact in some of the subsequent calculations.
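For the [SO.sub.3](R) case, the correspondence between the matrix commutator and the cross product can be illustrated explicitly. This is an editorial check using the standard hat map (the analogous twisted identity for so(2, 1) is omitted here, since the paper's exact formula is not reproduced in this excerpt):

```python
import numpy as np

def hat(a):                              # the standard so(3) hat map
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [a3,  0.0, -a1],
                     [-a2,  a1, 0.0]])

rng = np.random.default_rng(2)
a, b = rng.normal(size=3), rng.normal(size=3)
# With the paper's convention [A, B] = BA - AB:
bracket = hat(b) @ hat(a) - hat(a) @ hat(b)
assert np.allclose(bracket, hat(np.cross(b, a)))
print("[A, B] corresponds to the cross product b x a under the hat map")
```

With the usual convention AB - BA the commutator corresponds to a x b; the paper's reversed bracket simply swaps the factors.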

Assume first that M [not equal to] 0. Our choice of the coordinate functions on G, which is particularly suitable for integrating the extremal equations on G, is dependent on the nature of the symmetry matrix [Lambda]. Recall that g(t)P(t)[g.sup.-1](t) = [Lambda]. We shall coordinatize G in terms of the functions [[Phi].sub.1], [[Phi].sub.2] and [[Phi].sub.3] defined by

(3) g = [e.sup.([Epsilon]/[square root of [absolute value of M]])[Lambda][[Phi].sub.1]] [e.sup.[L.sub.2][[Phi].sub.2]] [e.sup.[L.sub.3][[Phi].sub.3]].

In this representation we are implicitly assuming that [Lambda] is not linearly dependent on [L.sub.2].

PROPOSITION 5. Suppose that (g(t),P(t)) is an extremal curve such that g(t)P(t)[g.sup.-1](t) = [Lambda]. Then, [[Phi].sub.1](t), [[Phi].sub.2](t), and [[Phi].sub.3](t) are given by the following differential equations

(4) [Mathematical Expression Omitted], [Mathematical Expression Omitted], and [Mathematical Expression Omitted]

with [Delta] = [Epsilon]/[square root of [absolute value of M]] ([H.sub.1](t) cos [[Phi].sub.3](t) - [H.sub.2](t) sin [[Phi].sub.3](t)).

Proof.

[Mathematical Expression Omitted],

and therefore,

[Mathematical Expression Omitted].

Since dg/dt = g([L.sub.1] + [H.sub.3](t) [L.sub.3]), it follows that

[Mathematical Expression Omitted].

Upon substituting the symmetry relation [g.sup.-1] [Lambda]g = P we get:

[Mathematical Expression Omitted].

Upon further applying the correspondence [Mathematical Expression Omitted] into the above equation we get that:

[Mathematical Expression Omitted].

Since

[Mathematical Expression Omitted],

it follows that [e.sup.-[L.sub.3][[Phi].sub.3]] [e.sub.2] = (sin [[Phi].sub.3])[e.sub.1] + (cos [[Phi].sub.3])[e.sub.2]. Furthermore, [Mathematical Expression Omitted]. Therefore,

[Mathematical Expression Omitted],

and [Mathematical Expression Omitted].

The first two equations can be solved for d[[Phi].sub.1]/dt, and d[[Phi].sub.2]/dt to yield:

d[[Phi].sub.1]/dt = cos [[Phi].sub.3]/[Delta], and d[[Phi].sub.2]/dt = -[Epsilon][H.sub.2]/[square root of [absolute value of M]], with [Delta] = [Epsilon]/[square root of [absolute value of M]]([H.sub.1] cos [[Phi].sub.3] - [H.sub.2] sin [[Phi].sub.3]).

Then,

[Mathematical Expression Omitted],

and the proof is now finished.

Remark. When M = 0, it will be assumed that [Lambda] = [L.sub.1] - [L.sub.3], and that the coordinates are given by g(t) = [e.sup.[Lambda][[Phi].sub.1]] [e.sup.[L.sub.2][[Phi].sub.2]] [e.sup.[L.sub.3][[Phi].sub.3]]. Then the differential equations for [[Phi].sub.1], [[Phi].sub.2], and [[Phi].sub.3] can be obtained from equations (4) above by substituting [Epsilon]/[square root of M] = 1.

In either the compact hyperbolic case or on [SO.sub.3](R), there is no loss of generality in assuming that [Lambda] = [Epsilon][square root of [absolute value of M]] [L.sub.3], while in the non-compact case [Lambda] can always be taken as [Lambda] = [Epsilon][square root of [absolute value of M]] [L.sub.1]. This observation follows from the fact that SO(2, 1) acts transitively on each surface [x.sup.2] + [y.sup.2] - [z.sup.2] = M, while [SO.sub.3](R) acts transitively on each sphere [x.sup.2] + [y.sup.2] + [z.sup.2] = M. Hence, [Lambda] can always be conjugated by an element [g.sub.0] in G to one of the above choices.

We will analyze the compact case first:

Note that on [SO.sub.3](R),

[Mathematical Expression Omitted],

and that on SO(2, 1),

[Mathematical Expression Omitted].

We shall obtain the explicit relations between [H.sub.1], [H.sub.2], and [H.sub.3] in terms of [[Phi].sub.1], [[Phi].sub.2] and [[Phi].sub.3] through the relation [Mathematical Expression Omitted]. [Mathematical Expression Omitted], and therefore,

[Mathematical Expression Omitted],

[Mathematical Expression Omitted].

Therefore,

(5) [H.sub.1] = -[square root of [absolute value of M]] cos [[Phi].sub.3] sin [[Phi].sub.2], [H.sub.2] = [square root of [absolute value of M]] sin [[Phi].sub.3] sin [[Phi].sub.2], [H.sub.3] = [square root of [absolute value of M]] cos [[Phi].sub.2].

It follows from (5) that cos [[Phi].sub.2] = [H.sub.3]/[square root of [absolute value of M]] = k/[square root of [absolute value of M]], and

[Delta] = 1/[square root of [absolute value of M]]([H.sub.1] cos [[Phi].sub.3] - [H.sub.2] sin [[Phi].sub.3]) = -sin [[Phi].sub.2].

Then, it follows from (4) that

d[[Phi].sub.1]/dt = cos [[Phi].sub.3]/[Delta] = (-[H.sub.1]/[square root of [absolute value of M]] sin [[Phi].sub.2])(-1/sin [[Phi].sub.2]) = [H.sub.1]/[square root of [absolute value of M]] [sin.sup.2] [[Phi].sub.2] = [square root of [absolute value of M]][H.sub.1]/(M - [k.sup.2]).

If [Delta] [greater than] 0, then sin [[Phi].sub.2] = -1/[square root of [absolute value of M]] [square root of M - [k.sup.2]], and if [Delta] [less than] 0, then sin [[Phi].sub.2] = 1/[square root of [absolute value of M]] [square root of M - [k.sup.2]].
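As a quick consistency check (an editorial addition), the parametrization (5) keeps ([H.sub.1], [H.sub.2], [H.sub.3]) on the sphere of the compact case, using [H.sub.3] = [square root of M] cos [[Phi].sub.2] as read off from the relation cos [[Phi].sub.2] = [H.sub.3]/[square root of [absolute value of M]]:

```python
import math, random

M = 2.5                                  # compact case on SO_3(R): M > 0
r = math.sqrt(M)

random.seed(3)
for _ in range(100):
    phi2 = random.uniform(0.0, math.pi)
    phi3 = random.uniform(0.0, 2.0 * math.pi)
    H1 = -r * math.cos(phi3) * math.sin(phi2)
    H2 = r * math.sin(phi3) * math.sin(phi2)
    H3 = r * math.cos(phi2)              # from cos(phi2) = H3/sqrt(M)
    assert abs(H1 ** 2 + H2 ** 2 + H3 ** 2 - M) < 1e-9
print("(5) parametrizes the sphere of radius sqrt(M)")
```

The check is just sin^2 + cos^2 = 1 applied twice, but it confirms the signs in (5) are mutually consistent.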

It will be convenient to reparametrize each [[Phi].sub.i] in terms of the angle [Theta] defined through the formulas [Epsilon] - [H.sub.1] = J cos [Theta], and [H.sub.2] = J sin [Theta]. Then it follows from equation (2) that k = d[Theta]/dt = [H.sub.3] = [+ or -] [square root of 2(H - [Epsilon] + J cos [Theta])]. In the non-inflectional case, choosing the branch [H.sub.3] [greater than] 0 amounts to choosing the counterclockwise direction for [Theta]. For [H.sub.3] [less than] 0, the orientation reverses.

Using (5) we obtain

(6) cos [[Phi].sub.3] = -[H.sub.1]/([square root of M] sin [[Phi].sub.2]) = (1 - J cos [Theta])/([+ or -][square root of M - [k.sup.2]]), and sin [[Phi].sub.3] = J sin [Theta]/([+ or -][square root of M - [k.sup.2]]).

We will now shift attention to SO(2, 1):

[Mathematical Expression Omitted] means that

[Mathematical Expression Omitted].

Therefore,

(7) [H.sub.1] = [square root of M] cos [[Phi].sub.3]sh[[Phi].sub.2], [H.sub.2] = -[square root of M] sin [[Phi].sub.3]sh[[Phi].sub.2], and [H.sub.3] = [square root of M]ch[[Phi].sub.2].

Thus

ch[[Phi].sub.2] = [H.sub.3]/[square root of M] = k/[square root of M], and [Delta] = -1/[square root of M] ([H.sub.1] cos [[Phi].sub.3] - [H.sub.2] sin [[Phi].sub.3]) = -sh[[Phi].sub.2]. Then,

d[[Phi].sub.1]/dt = cos [[Phi].sub.3]/[Delta] = [H.sub.1]/[square root of M]sh[[Phi].sub.2] (-1/sh[[Phi].sub.2]) = -[H.sub.1]/[square root of M][sh.sup.2][[Phi].sub.2] = -[square root of M][H.sub.1]/([k.sup.2] - M) = [square root of M][H.sub.1]/(M - [k.sup.2]).

This expression is the same as in [SO.sub.3](R). As before, [Delta] [greater than] 0 holds when sh[[Phi].sub.2] = -1/[square root of M] [square root of [k.sup.2] - M], and [Delta] [less than] 0 when sh[[Phi].sub.2] = 1/[square root of M] [square root of [k.sup.2] - M]. Furthermore,

(8) [Mathematical Expression Omitted], [Mathematical Expression Omitted].

The preceding formulas can be combined, for both [SO.sub.3](R) and SO(2, 1), to yield the following information:

[Mathematical Expression Omitted]

The appropriate sign is determined by the sign of [Delta], which is determined by the initial condition g(0). Furthermore,

cos [[Phi].sub.3] = ([Epsilon] - J cos [Theta])/([+ or -][square root of [Epsilon](M - [k.sup.2])]), and sin [[Phi].sub.3] = J sin [Theta]/([+ or -][square root of [Epsilon](M - [k.sup.2])]).

The remaining parameter [[Phi].sub.1] is a solution of d[[Phi].sub.1]/dt = [square root of M][H.sub.1]/(M - [k.sup.2]). In terms of [Theta] this equation becomes

[Mathematical Expression Omitted]

since d[Theta]/dt = [square root of 2(H - [Epsilon] + J cos [Theta])].

The elastica x([Theta]) is given by x([Theta]) = g([Theta])[e.sub.3], or

[Mathematical Expression Omitted].

In the non-inflectional case, k oscillates between its maximal value [k.sub.max] = [square root of 2(H - [Epsilon] + J)], and its minimal value [k.sub.min] = [square root of 2(H - [Epsilon] - J)]. The compact hyperbolic case is always non-inflectional, since M = [J.sup.2] - 2H - 1 = [J.sup.2] + [H.sup.2] - [(H + 1).sup.2], and therefore [J.sup.2] [less than] [(H + 1).sup.2]. g([Theta]) is a periodic curve of [Theta] with period 2[Pi] if and only if

[integral of] ([Epsilon] - J cos [Theta]) d[Theta]/((M - [k.sup.2])[square root of 2(H - [Epsilon] + J cos [Theta])]) between limits 2[Pi] and 0

is a rational multiple of 2[Pi].
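The period integral above is straightforward to evaluate numerically. This is an editorial sketch for sample values H = 3, J = 0.5 in the [Epsilon] = +1 case, with M recovered from [J.sup.2] = 1 - 2[Epsilon]H + M; for these values the denominator M - [k.sup.2] = 1 + [J.sup.2] - 2J cos [Theta] never vanishes, so the quadrature is safe:

```python
import math

eps, H, J = 1, 3.0, 0.5
M = J ** 2 + 2 * eps * H - 1             # from J^2 = 1 - 2*eps*H + M

def integrand(theta):
    k2 = 2 * (H - eps + J * math.cos(theta))   # k^2 along the curve
    return (eps - J * math.cos(theta)) / ((M - k2) * math.sqrt(k2))

def simpson(f, a, b, n=4000):            # composite Simpson rule, n even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * h)
    return acc * h / 3

delta_phi1 = simpson(integrand, 0.0, 2 * math.pi)
assert delta_phi1 > 0                    # integrand is positive for these values
print("period-defect integral:", delta_phi1)
```

Testing whether this number is a rational multiple of 2[Pi] then decides the periodicity of g([Theta]) for the chosen parameters.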

In the inflectional case, the mathematical pendulum does not have sufficient energy to go over the top. Hence, there is a cut-off angle [[Theta].sub.c], and the angle [Theta] oscillates between -[[Theta].sub.c] and [[Theta].sub.c]. As [Theta] increases from -[[Theta].sub.c] to [[Theta].sub.c], the geodesic curvature k increases from its minimal value [k.sub.min], which is negative, to its positive maximal value [k.sub.max]; k changes sign at [Theta] = 0. The curvature then decreases to its minimal value as the pendulum swings back from [[Theta].sub.c] to -[[Theta].sub.c]. This pattern is clear from Figures 2, 3 and 4.

The extremal solution curves g([Theta]) will be periodic of period 2[[Theta].sub.c] if and only if

[integral of] (1 - J cos [Theta]) d[Theta]/((M - [k.sup.2])[square root of 2(H - 1 + J cos [Theta])]) between limits [[Theta].sub.c] and -[[Theta].sub.c]

is a rational multiple of 2[[Theta].sub.c].

In either the inflectional or the non-inflectional case, the elastica remains in a bounded region, which further justifies the terminology. Figures 6 and 7 below contain a sketch of some typical elastica.

The non-compact cases occur only on SO(2, 1). As already stated before, [Lambda] will be taken as [Lambda] = -[square root of M][L.sub.1]. Then

[Mathematical Expression Omitted] yields [Mathematical Expression Omitted].

Hence

(11) [H.sub.1] = - [square root of M] cos [[Phi].sub.3]ch[[Phi].sub.2], [H.sub.2] = [square root of M] sin [[Phi].sub.3]ch[[Phi].sub.2], [H.sub.3] = [square root of M]sh[[Phi].sub.2].

The rest of the calculations follow completely analogously to the previous cases. We get sh[[Phi].sub.2] = k/[square root of M], and therefore [Mathematical Expression Omitted]. The elastica are given by

[Mathematical Expression Omitted]

Hence,

[Mathematical Expression Omitted]

[Delta] = -1/[square root of M] ([H.sub.1] cos[[Phi].sub.3] - [H.sub.2] sin[[Phi].sub.3]) = ch[[Phi].sub.2], and therefore

d[[Phi].sub.1]/dt = cos[[Phi].sub.3]/[Delta] = (-[H.sub.1]/[square root of M]ch[[Phi].sub.2]) 1/ch[[Phi].sub.2] = -[square root of M][H.sub.1]/(M + [k.sup.2]).

Then,

(13) d[[Phi].sub.1]/d[Theta] = [square root of M](1 + J cos [Theta])/(M + [k.sup.2]) [square root of 2(H + 1 + J cos [Theta])].

As in the compact case, there are inflectional and non-inflectional elastica, whose analysis will be omitted. Figure 8 shows some typical solutions.

It remains to consider the boundary case M = 0. For this case we will assume that [Lambda] = [L.sub.1] - [L.sub.3]. Then the appropriate coordinates suitable for integrating g(t) are given by the formula g(t) = [e.sup.([L.sub.1] - [L.sub.3])[[Phi].sub.1]] [e.sup.[L.sub.2][[Phi].sub.2]] [e.sup.[L.sub.3][[Phi].sub.3]]. The symmetry relation P = [g.sup.-1][Lambda]g gives

p = [g.sup.-1] ([L.sub.1] - [L.sub.3])g, or [Mathematical Expression Omitted].

Hence, [Mathematical Expression Omitted], or

[Mathematical Expression Omitted].

So,

[H.sub.1] = cos [[Phi].sub.3]ch[[Phi].sub.2] + cos [[Phi].sub.3]sh[[Phi].sub.2],

[H.sub.2] = - sin [[Phi].sub.3]ch[[Phi].sub.2] - sin [[Phi].sub.3]sh[[Phi].sub.2],

and

[H.sub.3] = sh[[Phi].sub.2] + ch[[Phi].sub.2].

Then, sh[[Phi].sub.2] + ch[[Phi].sub.2] = k. Solving this equation for sh[[Phi].sub.2] and ch[[Phi].sub.2] we get:

ch[[Phi].sub.2] = ([k.sup.2] + 1)/2k, and sh[[Phi].sub.2] = ([k.sup.2] - 1)/2k.
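As a quick consistency check (an editorial addition), these substitutions satisfy the hyperbolic identity, and ch[[Phi].sub.2] + sh[[Phi].sub.2] = k recovers [e.sup.[[Phi].sub.2]] = k, i.e. [[Phi].sub.2] = log k for k [greater than] 0:

```python
import math, random

random.seed(4)
for _ in range(100):
    k = random.uniform(0.1, 10.0)
    ch = (k ** 2 + 1) / (2 * k)
    sh = (k ** 2 - 1) / (2 * k)
    assert abs(ch ** 2 - sh ** 2 - 1) < 1e-9     # cosh^2 - sinh^2 = 1
    assert abs(ch + sh - k) < 1e-9               # e^{phi2} = k
    assert abs(math.cosh(math.log(k)) - ch) < 1e-9
print("M = 0 hyperbolic substitution is consistent")
```

So the boundary case is parametrized directly by the curvature k itself.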

The elastica is given by x([Theta]) = [e.sup.([L.sub.1] - [L.sub.3])[[Phi].sub.1]][e.sup.[L.sub.2][[Phi].sub.2]][e.sub.3]. [Lambda] = [L.sub.1] - [L.sub.3] is a nilpotent matrix with [[Lambda].sup.3] = 0. Thus,

[Mathematical Expression Omitted].

Hence,

Author: V. Jurdjevic. Publication: American Journal of Mathematics, Feb. 1, 1995.