
A divergent generating function that can be summed and analysed analytically.

1 Introduction

Consider the recurrence relation

[a.sub.n+1] + [a.sub.n-1] = ([alpha]n + [beta])[a.sub.n], n[greater than or equal to] 1, (1.1)

where [alpha] and [beta] are given real (or possibly complex) numbers, and with initial values [a.sub.0] = 0, [a.sub.1] = 1, say. We assume [alpha] [not equal to] 0. (Otherwise, this is a linear recurrence with constant coefficients and the solution is well known. See e.g. Flajolet and Sedgewick (2008, Section IV.5.1).)
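The recurrence is straightforward to iterate numerically. The following Python sketch is an illustration only (it is not part of the paper; the values [alpha] = [beta] = 1 are chosen arbitrarily):

```python
# Iterate a_{n+1} + a_{n-1} = (alpha*n + beta) * a_n, i.e.
# a_{n+1} = (alpha*n + beta) * a_n - a_{n-1}, with a_0 = 0, a_1 = 1.
def recurrence_terms(alpha, beta, n_max):
    """Return [a_0, ..., a_{n_max}] for the recurrence (1.1)."""
    a = [0, 1]
    for n in range(1, n_max):
        a.append((alpha * n + beta) * a[n] - a[n - 1])
    return a

# Example with alpha = beta = 1 (chosen only for illustration):
terms = recurrence_terms(1, 1, 8)  # [0, 1, 2, 5, 18, 85, 492, 3359, 26380]
```

Note how quickly the terms grow; the ratios [a.sub.n+1]/[a.sub.n] increase roughly linearly in n, in line with the divergence of (1.2) discussed below.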

Recurrence relations of this type appear in several contexts, see e.g. Renlund (2009+), Parviainen and Renlund (2010+) and the sequences A053983, A053984, A058797, A058798, A058799 in Sloane (2008); further examples from Sloane (2008) are given in Table 5.

The recurrence (1.1) is easily solved using Bessel functions, see the references above; for completeness we give this solution in Appendix A. The present paper originates in an attempt to understand this solution using generating functions.

Define thus the generating function

A(z) := [[infinity].summation over (n=0)][a.sub.n][z.sup.n]. (1.2)

It is easily seen that (1.1) with [a.sub.0] = 0 and [a.sub.1] = 1 is equivalent to the differential equation (cf. the general formulas in Section 3)

[z.sup.-1]A(z) + zA(z) = [alpha]zA'(z) + [beta]A(z) + 1 (1.3)

or, equivalently,
[alpha][z.sup.2]A'(z) = (1 + [z.sup.2] - [beta]z)A(z) - z. (1.4)

This differential equation is singular at 0. For z [not equal to] 0 it can be written

A'(z) - ([[alpha].sup.-1][z.sup.-2] - [beta][[alpha].sup.-1][z.sup.-1] + [[alpha].sup.-1])A(z) + [[alpha].sup.-1][z.sup.-1] = 0. (1.5)

We write, for convenience, v := [beta]/[alpha] and find the integrating factor H(z) := exp([[alpha].sup.-1][z.sup.-1] + v log z - [[alpha].sup.-1]z). In any given simply connected domain not containing 0, the solutions are

A(z) = H[(z).sup.-1](C - [[alpha].sup.-1][[integral].sup.z.sub.[z.sub.0]][w.sup.-1]H(w) dw), (1.6)

where we may choose any [z.sub.0] in the domain and C is an arbitrary constant.
The standard procedure to find the coefficients from the generating function (1.2) is to find the Taylor coefficients at 0: [a.sub.n] = [D.sup.n] A(0)/n!. (See, e.g., Flajolet and Sedgewick (2008) where this is described and developed in detail.) But here this breaks down because A(z) is (in general) singular at 0. In fact, except in exceptional cases, the coefficients [a.sub.n] increase so rapidly that the sum (1.2) has radius of convergence 0; i.e., the sum (1.2) diverges for every complex z [not equal to] 0, and there is no analytic function A(z) in any neighbourhood of 0 that satisfies (1.5). (See Remark A.4.) Hence, (1.2) should be interpreted as a formal power series, and in this sense (1.3) and (1.4) hold and determine A(z).

Nevertheless, we have in (1.6) found a solution (or, more precisely, a family of solutions) A(z) to (1.3) that is a function of a real or complex variable, and it is natural to ask in what way this solution represents the sequence [([a.sub.n]).sub.n], and if one can recover [([a.sub.n]).sub.n] from A(z). We shall see that indeed there is a strong connection (at least if we choose the right solution, or restrict ourselves to a suitable domain): A(z) has an asymptotic expansion A(z) ~ [[summation].sub.n] [a.sub.n][z.sup.n] as z [right arrow] 0. (Recall that this means that A(z) = [[summation].sup.N.sub.n=0] [a.sub.n][z.sup.n] + O([z.sup.N+1]) as z [right arrow] 0 for every N [greater than or equal to] 0, but does not imply convergence of the infinite sum, see e.g. Flajolet and Sedgewick (2008, Section A.2).) Further, we will find [a.sub.n] explicitly from this expansion. Moreover, although A(z) is not analytic at 0, there is a solution A(z) of (1.3) that is infinitely differentiable on the entire real line, including 0, and the derivatives at 0 are [a.sub.n] n! as expected. We shall also see that although the sum [[summation].sub.n][a.sub.n][z.sup.n] diverges, it can be summed by (a version of) the Borel summation method (Hardy, 1949) for every z [member of] C \ (0, [infinity]). We can thus define the generating function A(z) in this domain by (1.2), interpreted as a (generalized) Borel sum, and this function A(z) satisfies (1.3)-(1.5), and thus (1.6) for some choice of [z.sub.0] and C. (In fact, we can take [z.sub.0] = 0 with some care in the path of integration in (1.6), and then C = 0.)

We shall prove these results for a somewhat larger class of recurrence relations in Sections 3 and 4. We then use this and (1.6) to find a simple formula for [a.sub.n] in Section 5. The appendices give another formula for [a.sub.n] using Bessel functions, and connect the two solutions.

As mentioned in Flajolet and Sedgewick (2008, Note A.10), the possibility to define a divergent formal power series as a function having the right asymptotic expansion goes back to Euler (1760); he considered a simpler example that we will treat in detail in Section 2 before studying the general case.

The generating function studied in this paper is an example of the class of holonomic functions, see Flajolet and Sedgewick (2008, Appendix B.4) for an introduction. Related and much more general results can be found in the books by Balser (1994, 2000); see further, e.g., Balser, Braaksma, Ramis and Sibuya (1991), Braaksma (1991), Martinet and Ramis (1991), Malgrange and Ramis (1992), and Ramis (1993).

Remark 1.1. By a theorem of Borel (1895), see also Carleman (1926, Ch. V), given any sequence ([a.sub.n]), there exists a [C.sup.[infinity]] function on R with these numbers as Taylor coefficients at 0, and thus the asymptotic expansion [[summation].sub.n] [a.sub.n] [z.sup.n] there; moreover, we may choose the function so that it is analytic in, e.g., a given sector D in the complex plane, with the given asymptotic expansion as z [right arrow] 0 in D. Hence, the existence of a function (and, indeed, infinitely many functions) representing a given sequence by an asymptotic expansion is well-known. The special feature in the problem discussed in this paper is the possibility to find such a function explicitly and to use it to find a formula for [a.sub.n].

Acknowledgement. This research was partly done during a visit to the programme "Combinatorics and Statistical Physics" at the Erwin Schrödinger International Institute for Mathematical Physics in Vienna, 2008. The paper was presented at the colloquium for Philippe Flajolet's 60th birthday in Paris, December 2008. I am grateful both to several participants at these meetings, including Philippe Flajolet, and to Henrik Renlund for interesting comments and references.

2 A simple example

Let us first study a simpler recurrence where we easily can see in detail what happens:

[b.sub.n] = n[b.sub.n-1], n [greater than or equal to] 1, with [b.sub.0] = 1. (2.1)

The solution is obviously [b.sub.n] = n!, but let us ignore this fact and try to find [b.sub.n] through the generating function B(z) = [[summation].sup.[infinity].sub.n=0] [b.sub.n][z.sup.n]. Since the series diverges for any complex number z [not equal to] 0, we regard B(z) as a formal power series. We have

[z.sup.2]B'(z) + zB(z) = [[infinity].summation over (n=1)]n[b.sub.n-1][z.sup.n],

so (2.1) is equivalent to

B(z) - 1 = [z.sup.2]B'(z) + zB(z) (2.2)

or, equivalently,
B'(z) = ([z.sup.-2] - [z.sup.-1]) B(z) - [z.sup.-2]. (2.3)

Now forget that the series defining B diverges, and let us regard (2.3) as a differential equation for a function B(z) of a real or complex variable. We find an integrating factor H(z) = [e.sup.h(z)] where h'(z) = [-z.sup.-2] + [z.sup.-1], i.e., we can choose h(z) = [z.sup.-1] + log z and H(z) = [ze.sup.1/z]; this yields (for z [not equal to] 0)

(B(z)H(z))' = B'(z)H(z) + B(z)H'(z) = [-z.sup.-2]H(z) = [-z.sup.-1][e.sup.1/z],

and thus, for any fixed [z.sub.0] [not equal to] 0 and some constant C,

B(z) = [z.sup.-1][e.sup.-1/z](C - [[integral].sup.z.sub.[z.sub.0]][w.sup.-1][e.sup.1/w] dw). (2.4)

(We can choose [z.sub.0] freely; a change in [z.sub.0] is compensated by a change in C.)

The formula (2.4) defines B(z) as an analytic function in any simply connected domain that does not contain 0, for example in the plane cut along the negative real axis or along any other ray from 0. Note that B cannot be defined as an analytic function in C \ {0}, since [contour integral] [w.sup.-1][e.sup.1/w] dw = 2[pi]i [not equal to] 0 for a circle around 0 (by the power series expansion of [e.sup.1/w]).
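The residue computation behind this observation is easy to check numerically; the Python sketch below (an illustration, not part of the paper) discretizes the contour integral over the unit circle:

```python
import cmath, math

# The contour integral of w^{-1} e^{1/w} around the unit circle equals 2*pi*i
# (only the constant term of the expansion of e^{1/w} contributes).
N = 20_000
total = 0j
for k in range(N):
    theta = 2 * math.pi * k / N
    w = cmath.exp(1j * theta)          # point on the unit circle
    dw = 1j * w * (2 * math.pi / N)    # w'(theta) * dtheta
    total += cmath.exp(1 / w) / w * dw
```

Since the integral is nonzero, no single-valued analytic B(z) can exist on all of C \ {0}, exactly as stated above.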

Let us now see what happens as z [right arrow] 0. Consider first the case when z < 0, i.e., z is real and negative. Note that [e.sup.1/z] [right arrow] 0 and [e.sup.-1/z] [right arrow] [infinity] rapidly as z [??] 0. Hence, [[integral].sup.0.sub.z] [w.sup.-1][e.sup.1/w] dw converges, so we may choose [z.sub.0] = 0 in (2.4). Let us first also choose C = 0, and denote the resulting solution by [B.sub.0](z); thus, using the change of variable t = 1/z - 1/w, and thus w = z/(1 - zt),

[B.sub.0](z) = [[integral].sup.[infinity].sub.0][[e.sup.-t]/(1 - zt)]dt. (2.5)

By dominated convergence in the last integral, [B.sub.0](z) [right arrow] 1 as z [??] 0. We made here a specific choice of the integration constant C in (2.4), but we now see that we really had no alternative; any other choice would give a solution B(z) such that [absolute value of B(z)] [right arrow] [infinity] as z [??] 0. Hence [B.sub.0] is the unique solution of (2.2), or (2.3), on (-[infinity], 0) that is continuous up to 0.

Moreover, using

[(1 - zt).sup.-1] = [N.summation over (n=0)][z.sup.n][t.sup.n] + O([z.sup.N+1][t.sup.N+1]), (2.6)

in (2.5) and integrating, we find the asymptotic expansion

[B.sub.0](z) ~ [[infinity].summation over (n=0)] n! [z.sup.n] = [[infinity].summation over (n=0)][b.sub.n][z.sup.n], z [??] 0. (2.7)
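The asymptotic (rather than convergent) nature of (2.7) can be seen numerically in the optimal-truncation behaviour of the partial sums: for small |z| they first approach [B.sub.0](z) and then blow up. The Python sketch below is an illustration only; it approximates the integral (2.5) by a trapezoid rule on a truncated range, at the arbitrarily chosen point z = -0.1.

```python
import math

def B0(z, n=100_000, t_max=40.0):
    """Trapezoid approximation of B0(z) = int_0^inf e^{-t}/(1 - z*t) dt, z < 0."""
    h = t_max / n
    s = 0.5 * (1.0 + math.exp(-t_max) / (1.0 - z * t_max))
    for i in range(1, n):
        t = i * h
        s += math.exp(-t) / (1.0 - z * t)
    return s * h

z = -0.1
target = B0(z)

def partial(N):
    # Partial sum of the divergent series sum_n n! z^n
    return sum(math.factorial(n) * z**n for n in range(N + 1))

# The error first decreases (optimal truncation near N ~ 1/|z| = 10), then blows up:
errors = {N: abs(partial(N) - target) for N in (3, 10, 30)}
```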

Alternatively, we can differentiate (2.5) repeatedly under the integral sign (using dominated convergence) and obtain, for any n [greater than or equal to] 0,

[[D.sup.n][B.sub.0](z)/n!] = [[integral].sup.[infinity].sub.0] [[t.sup.n][e.sup.-t]/[(1 - zt).sup.n+1]]dt [right arrow] [[integral].sup.[infinity].sub.0] [t.sup.n][e.sup.-t] dt = n!, as z [??] 0. (2.8)

Remark 2.1. Note that the asymptotic expansion (2.7) follows directly from (2.8) by Taylor's formula; however, an asymptotic expansion does not conversely imply convergence of the derivatives without further assumptions. (Consider [e.sup.-1/x] sin([e.sup.1/x]) as x [??] 0 as an example.) One situation where the converse holds is when the asymptotic expansion holds in a sector [[theta].sub.1] < arg z < [[theta].sub.2] in the complex plane; then Cauchy's estimates imply convergence of all derivatives in any strictly smaller sector [[theta]'.sub.1] < arg z < [[theta]'.sub.2] with [[theta].sub.1] < [[theta]'.sub.1] < [[theta]'.sub.2] < [[theta].sub.2]. (We use below this meaning of 'strictly smaller' or 'strictly contained in' for sectors.) More generally, or as a consequence, the asymptotic expansion can be formally differentiated in any strictly smaller sector. This shows one advantage of using complex variables, and will be used below.

Remark 2.2. So far, we have in this section essentially just repeated what Euler (1760) did in 1760 when he wanted to compute [[summation].sup.[infinity].sub.n=0][(-1).sup.n] n!. He took the sum to be, in our notation, [B.sub.0](-1) = [[integral].sup.[infinity].sub.0] [e.sup.-t][(1 + t).sup.-1] dt, and then evaluated this integral numerically, by several methods, finding the approximate value 0.5963473621237 (where all but the last four digits are confirmed by Maple). See Hardy (1949, [section] 2.4-2.6) for a modern presentation.
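Euler's numerical value is easy to reproduce; the Python sketch below (a trapezoid rule on a truncated range, an illustration only) evaluates [B.sub.0](-1) = [[integral].sup.[infinity].sub.0] [e.sup.-t][(1 + t).sup.-1] dt.

```python
import math

def euler_value(n=200_000, t_max=50.0):
    """Trapezoid approximation of int_0^inf e^{-t}/(1+t) dt."""
    h = t_max / n
    s = 0.5 * (1.0 + math.exp(-t_max) / (1.0 + t_max))
    for i in range(1, n):
        t = i * h
        s += math.exp(-t) / (1.0 + t)
    return s * h

value = euler_value()  # close to 0.5963473623...; equals e * E_1(1)
```

As noted in the text, Euler's last few printed digits are not confirmed; the integral itself equals e E_1(1) = 0.59634736232..., agreeing with Euler to about ten digits.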

Next consider z > 0, so z [??] 0 along the positive real axis. Then [e.sup.-1/z] decreases rapidly and is o([z.sup.N]) for every N, and the same holds for the term C[z.sup.-1][e.sup.-1/z] in (2.4). We choose [z.sub.0] = 1 and again make the change of variable t = [z.sup.-1] - [w.sup.-1], w = z/(1 - zt). This yields, for any C and N,

B(z) = C[z.sup.-1][e.sup.-1/z] + [[integral].sup.1/z-1.sub.0][[e.sup.-t]/(1 - zt)]dt. (2.9)

Assume 0 < z < 1/2. The integral from 1/(2z) to 1/z - 1 is O([z.sup.N]). For 0 [less than or equal to] t [less than or equal to] 1/(2z), (2.6) holds, and (2.9) yields the asymptotic expansion

B(z) ~ [[infinity].summation over (n=0)] n! [z.sup.n] = [[infinity].summation over (n=0)] [b.sub.n][z.sup.n], z [??] 0, (2.10)

just as (2.7) on the negative real axis, but holding for all solutions of (2.3) on the positive real axis.

Again, we can further differentiate (2.9) under the integral sign (noting that we also get a term from the varying integration limit) and conclude that [D.sup.n]B(z)/n! [right arrow] n! = [b.sub.n] as z [??] 0, for any solution B on (0, [infinity]).

Combining the solution [B.sub.0](z) on (-[infinity], 0) and any solution B(z) on (0, [infinity]), and defining B(0) = [b.sub.0] = 1, we obtain a function B(z) defined on the entire real axis, which is infinitely differentiable except possibly at 0, and with all derivatives extending continuously also to 0; this implies (see the end of the proof of Theorem 3.2 for details) that B(z) is infinitely differentiable, and that it satisfies (2.2) on the entire real axis. We have thus found a family of solutions of (2.2) on R; we see that each such solution is infinitely differentiable on R and analytic except at 0.

These results for real z are easily extended to complex z. The arguments and results for z < 0 extend to any ray {z : arg z = [theta]} or sector {z : arg z [member of] ([[theta].sub.1], [[theta].sub.2])} strictly contained in the left half-plane, and the arguments and results for z > 0 extend to any ray {z : arg z = [theta]} or sector {z : arg z [member of] ([[theta].sub.1], [[theta].sub.2])} strictly contained in the right half-plane. (Note that we then can use the argument in Remark 2.1 as an alternative to show convergence of derivatives.) Moreover, the particular solution B0 can be defined by (2.5) for all z [member of] C \ [0, [infinity]), and it is easily seen that (2.7) and (2.8) hold in this larger domain too. We omit the details, since we state and prove the more general Theorem 3.2 in the next section.

3 The general case

Consider, more generally, a linear recurrence

[a.sub.n] = [K.summation over (k=1)]([[alpha].sub.k]n + [[beta].sub.k])[a.sub.n-k], n [greater than or equal to] K, (3.1)

with given [a.sub.0], ..., [a.sub.K-1], where K [greater than or equal to] 1 and [[alpha].sub.1], [[beta].sub.1], ..., [[alpha].sub.K], [[beta].sub.K] are given complex numbers with [[alpha].sub.1] > 0. Equivalently, the recurrence can be written as

[a.sub.n] = [K.summation over (k=1)]([[alpha].sub.k](n - k) + [[??].sub.k]) [a.sub.n-k], n [greater than or equal to] K, (3.2)

with [[??].sub.k] := [[beta].sub.k] + k[[alpha].sub.k]. Clearly, (3.1) includes (1.1) and (2.1) as special cases.
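As a sketch (an illustration, not part of the paper), the general recurrence (3.1) can be iterated generically; with K = 1, [[alpha].sub.1] = 1, [[beta].sub.1] = 0 it reduces to (2.1) and produces [b.sub.n] = n!.

```python
def extend(alpha, beta, initial, n_max):
    """Iterate a_n = sum_{k=1}^K (alpha_k*n + beta_k) * a_{n-k} for n >= K,
    given initial values a_0, ..., a_{K-1} (recurrence (3.1))."""
    K = len(alpha)
    a = list(initial)
    for n in range(K, n_max + 1):
        a.append(sum((alpha[k - 1] * n + beta[k - 1]) * a[n - k]
                     for k in range(1, K + 1)))
    return a

# K = 1, alpha_1 = 1, beta_1 = 0 recovers b_n = n * b_{n-1}, i.e. b_n = n!:
factorials = extend([1], [0], [1], 6)
```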

Remark 3.1. The assumption [[alpha].sub.1] > 0 is no essential restriction, and the results are easily extended to any [[alpha].sub.1] [not equal to] 0. In fact, we may as well assume [[alpha].sub.1] = 1 when convenient; the general case [[alpha].sub.1] [not equal to] 0 is easily reduced to this by replacing [a.sub.n], [[alpha].sub.k], [[beta].sub.k] by [[alpha].sub.1.sup.-n][a.sub.n], [[alpha].sub.k][[alpha].sup.-k.sub.1], [[beta].sub.k][[alpha].sup.-k.sub.1]. Note, however, that unless [[alpha].sub.1] > 0, this entails a rotation in the complex plane of the domains in Theorem 3.2 below; for example, if [[alpha].sub.1] < 0, the intervals [I.sub.+] and [I.sub._] have to be interchanged.

Let A(z) := [[summation].sup.[infinity].sub.n=0][a.sub.n][z.sup.n], regarded as a formal power series. The recursive relation (3.2) is equivalent to the differential equation

A(z) = [K.summation over (k=1)][[alpha].sub.k][z.sup.k+1] A'(z) + [K.summation over (k=1)] [[??].sub.k][z.sup.k]A(z) + r(z), (3.3)

where r(z) is a polynomial of degree less than K determined by the initial conditions. Introduce the polynomials p(z) := [[summation].sup.K.sub.k=1] [[alpha].sub.k][z.sup.k-1] and q(z) := 1 - [[summation].sup.K.sub.k=1] [[??].sub.k][z.sup.k]; note that p(0) = [[alpha].sub.1] > 0 and q(0) = 1. Then (3.3) can also be written q(z)A(z) = [z.sup.2]p(z) A'(z) + r(z) or

A'(z) = [q(z)/([z.sup.2]p(z))]A(z) - [r(z)/([z.sup.2]p(z))] = [R.sub.1](z)A(z) - [R.sub.2](z), (3.4)

for two rational functions [R.sub.1] and [R.sub.2].

Theorem 3.2. With conditions and notations as above, let I [subset or equal to] R be a finite or infinite open interval such that 0 [member of] I and p(z) [not equal to] 0 on I. Let [I.sub.+] := I [intersection] (0, [infinity]) and [I.sub._] := I [intersection] (-[infinity], 0). Then

(i) There exists a differentiable function A(z) on I that solves (3.3) there. Any such solution is [C.sup.[infinity]] on I and analytic on I \ {0} = [I.sub._] [union] [I.sub.+], but in general no solution is analytic at 0.

(ii) The solution in (i) is unique on [I.sub._] but not on [I.sub.+].

(iii) Any solution A(z) of (3.3) on I satisfies [D.sup.n] A(0)/n! = [a.sub.n], n [greater than or equal to] 0.

(iv) Any solution A(z) of (3.3) on I satisfies the asymptotic expansion A(z)~ [[summation].sup.[infinity].sub.n=0] [a.sub.n][z.sup.n] as z [right arrow] 0,

i.e., for every N,

A(z) = [N.summation over (n=0)] [a.sub.n][z.sup.n] + O([z.sup.N+1]), z [right arrow] 0. (3.5)

(v) Every solution A(z) of (3.3) or (3.4) on [I.sub.+] satisfies the asymptotic expansion (3.5) in [I.sub.+], and [D.sup.n] A(z)/n! [right arrow] [a.sub.n] as z [??] 0.

(vi) There is a unique solution [A.sub.0](z) of (3.3) or (3.4) on [I.sub._] that is bounded as z [??] 0. This solution satisfies the asymptotic expansion (3.5), and [D.sup.n][A.sub.0](z)/n! [right arrow] [a.sub.n] as z [right arrow] 0.

For complex z, these results extend as follows, for some [delta] > 0. (If K = 1, or more generally [[alpha].sub.2] = ... = [[alpha].sub.K] = 0, then [delta] can be chosen as [infinity].)

(vii) The solution [A.sub.0](z) extends to an analytic function in the domain {z : [absolute value of z] < [delta], z [not member of] [0, [infinity])}; this solution to (3.3) and (3.4) satisfies the asymptotic expansion (3.5) and [D.sup.n][A.sub.0](z)/n! [right arrow] [a.sub.n] as z [right arrow] 0 in this domain.

(viii) For any [[theta].sub.1] and [[theta].sub.2] with -[pi]/2 < [[theta].sub.1] < [[theta].sub.2] < [pi]/2, there exist (non-unique) solutions A(z) of (3.3) and (3.4) in the domain {z : [absolute value of z] < [delta], arg(z) [member of] ([[theta].sub.1], [[theta].sub.2])}. Any such solution satisfies the asymptotic expansion (3.5) and [D.sup.n]A(z)/n! [right arrow] [a.sub.n] as z [right arrow] 0 in this domain.

(ix) For any [[theta].sub.1] and [[theta].sub.2] with [pi]/2 < [[theta].sub.1] < [[theta].sub.2] < 3[pi]/2, there exist solutions A(z) of (3.3) and (3.4) in the domain {z : [absolute value of z] < [delta], arg(z) [member of] ([[theta].sub.1], [[theta].sub.2])}. Exactly one of these solutions is bounded as z [right arrow] 0, and this solution satisfies the asymptotic expansion (3.5) and [D.sup.n]A(z)/n! [right arrow] [a.sub.n] as z [right arrow] 0 in this domain.

Proof: Suppose for notational simplicity that [[alpha].sub.1] = 1; the general case is easily reduced to this as explained in Remark 3.1. Then p(0) = q(0) = 1, and [R.sub.1](z) = [z.sup.-2] + [gamma][z.sup.-1] + g(z) for some constant [gamma] and some rational function g that has poles only at the zeros of p, and thus not on I.

In a simply connected domain D in the complex plane where [z.sup.2]p(z) [not equal to] 0, a primitive function of -[R.sub.1](z) is thus h(z) = [z.sup.-1] - [gamma] log z - G(z), where G' = g. Consequently, the equation (3.4) has an integrating factor H(z) := [e.sup.h(z)] and the solutions in D are given by, for any fixed [z.sub.0] [member of] D and with a constant C [member of] C,

A(z) = H[(z).sup.-1](C - [[integral].sup.z.sub.[z.sub.0]][R.sub.2](w)H(w) dw). (3.6)

Any choice of C gives an analytic solution in D, and all solutions in D are given by (3.6). The same is true if we consider solutions on an interval where [z.sup.2]p(z) [not equal to] 0 (since any such interval is contained in a suitable domain D). Clearly, any solution of (3.3) or (3.4) on an interval or in a domain where [z.sup.2]p(z) [not equal to] 0 is analytic.

We write [R.sub.3](z) = r(z)/p(z); thus [R.sub.2](z) = [z.sup.-2][R.sub.3](z) and [R.sub.3] is a rational function that is analytic at 0. We make in (3.6) the same change of variable t = 1/z - 1/w as in Section 2; thus w = z/(1 - zt) and [w.sup.-2] dw = dt, and we find

A(z) = C[z.sup.[gamma]][e.sup.G(z)-1/z] + [[integral].sup.1/z-1/[z.sub.0].sub.0][(1 - zt).sup.[gamma]][e.sup.-t-G(z/(1-zt))+G(z)][R.sub.3](z/(1 - zt)) dt. (3.7)

(We integrate here along any curve such that z/(1 - tz) [member of] D.)

Consider first z [member of] [I.sub._]. We choose D [contains] [I.sub._] and [z.sub.0] [member of] [I.sub._] and integrate in (3.6) and (3.7) along parts of the real axis. As in Section 2, we can then let [z.sub.0] [??] 0, i.e., take [z.sub.0] = 0 in (3.6). It then follows from (3.6) that the solution is bounded as z [??] 0 if, and only if, we choose C = 0. We denote this solution by [A.sub.0](z) and find from (3.7)

[A.sub.0](z) = [[integral].sup.[infinity].sub.0][(1 - zt).sup.[gamma]][e.sup.-t-G(z/(1 - zt))+G(z)][R.sub.3](z/(1 - zt)) dt. (3.8)

Now consider z [member of] C \ [0, [infinity]). Then, for every t [greater than or equal to] 0, [absolute value of 1 - zt] [greater than or equal to] c(arg(z)), with c([theta]) = [absolute value of sin([theta])] if [absolute value of [theta]] [less than or equal to] [pi]/2, and c([theta]) = 1 if [absolute value of [theta]] [greater than or equal to] [pi]/2. Let [rho] := min{[absolute value of [xi]] : p([xi]) = 0} (with [rho] = [infinity] if [[alpha].sub.2] = ... = [[alpha].sub.K] = 0, and thus p has no root). Then G(z), [R.sub.3](z) and F(z) := [R.sub.3](z)[e.sup.-G(z)] are analytic in {z : [absolute value of z] < [rho]}, and thus, for any [epsilon] > 0, F(z/(1 - zt)) is analytic in [D'.sub.[epsilon]] := {z : [absolute value of arg(z)] > [epsilon], [absolute value of z] < c([epsilon])[rho]} for every t [greater than or equal to] 0. If we take the factor [e.sup.G(z)] outside the integral in (3.8), we can thus differentiate the remaining integral any number of times under the integral sign, and the derivatives will, by induction, be linear combinations of integrals of the type [[integral].sup.[infinity].sub.0][t.sup.l][(1 - zt).sup.[gamma]'][F.sup.(m)](z/(1 - zt))[e.sup.-t] dt. By dominated convergence, each such integral converges as z [right arrow] 0 in [D'.sub.[epsilon]], and it follows that every derivative [D.sup.n][A.sub.0](z) converges as z [right arrow] 0 in [D'.sub.[epsilon]]. Denote the limits by [a.sup.*.sub.n], n [greater than or equal to] 0. It follows by Taylor expansions that [A.sub.0](z) has an asymptotic expansion (3.5) in [D'.sub.[epsilon]], with [a.sub.n] replaced by [a.sup.*.sub.n], and that [A'.sub.0](z) has a similar asymptotic expansion, that can be obtained by formally differentiating the expansion for [A.sub.0](z). Substituting these in (3.3) we find that [[summation].sup.N.sub.0][a.sup.*.sub.n][z.sup.n] satisfies (3.3) modulo O([z.sup.N+1]) for any N (in [D'.sub.[epsilon]] and thus everywhere).
Hence, the formal power series [[summation].sup.[infinity].sub.n=0][a.sup.*.sub.n][z.sup.n] satisfies (3.3), and thus [a.sup.*.sub.n] = [a.sub.n] for all n. In other words, (3.5) holds for z [member of] [D'.sub.[epsilon]]. We have thus proved (vii) for a smaller domain [D'.sub.[epsilon]]. Moreover, (vi) and (ix) follow.

Consider now instead a domain [D".sub.[epsilon]] := {z : [absolute value of z] < [rho], [absolute value of arg(z)] < [pi]/2 - [epsilon]} in the right half-plane (with 0 < [epsilon] < [pi]/2), and any solution A(z) there. We use (3.7), choosing [z.sub.0] with 0 < [z.sub.0] < [rho] and integrating along the straight line from 0 to 1/z - 1/[z.sub.0]. It is easily seen that, if [absolute value of z] is small enough, then [absolute value of w] [less than or equal to] [z.sub.0] < [rho] and w = z/(1 - zt) [member of] [D".sub.[epsilon]] for all t on this line. We split the integral in (3.7) into two parts: one with [absolute value of t] [less than or equal to] 1/(2[absolute value of z]) and one with [absolute value of t] > 1/(2[absolute value of z]). In the second part [absolute value of [e.sup.-t]] is exponentially small, of order [e.sup.-c/[absolute value of z]] for some c > 0, and the other factors are at most polynomially large; thus this part is O([z.sup.N+1]) for any N. In the first part, [absolute value of 1 - zt] [greater than or equal to] 1/2. We first substitute, for small [absolute value of z] and with F = [R.sub.3][e.sup.-G] as above, F(z/(1 - zt)) = [[summation].sup.N.sub.m=0][c.sub.m][(z/(1 - zt)).sup.m] + O([z.sup.N+1]), and then make a similar Taylor expansion of [(1 - zt).sup.[gamma]-m]. We combine this with a Taylor expansion of [e.sup.G(z)] and note that [[integral].sup.1/(2[absolute value of z]).sub.0][t.sup.k][e.sup.-t] dt = k! + O([z.sup.N+1]) for any k, and conclude that, for any choice of C in (3.7), A(z) has an asymptotic expansion A(z) ~ [[summation].sup.[infinity].sub.n=0][a.sup.*.sub.n][z.sup.n] as z [right arrow] 0 in [D".sub.[epsilon]], for some coefficients [a.sup.*.sub.n].
It follows by Cauchy's estimates, see Remark 2.1, that further A'(z) ~ [[summation].sup.[infinity].sub.n=0]n[a.sup.*.sub.n][z.sup.n-1] as z [right arrow] 0 in [D".sub.[epsilon]'], for every [epsilon]' > [epsilon]; consequently, arguing as for [A.sub.0] above, the formal power series [[summation].sup.[infinity].sub.n=0][a.sup.*.sub.n][z.sup.n] satisfies (3.3) and thus [a.sup.*.sub.n] = [a.sub.n] for all n. In other words, (3.5) holds for z [member of] [D".sub.[epsilon]], for any solution A(z) of (3.3) in [D".sub.[epsilon]]. Differentiating (3.5) N times, we find, by Cauchy's estimates again, that [D.sup.N]A(z) [right arrow] [a.sub.N]N! as z [right arrow] 0 in any strictly smaller domain [D".sub.[epsilon]']. Since [epsilon] [member of] (0, [pi]/2) is arbitrary, this implies that (viii) holds. In particular, considering real z only, (v) follows.

We can now complete the proof of (vii). Fix [epsilon] < [pi]/4 and let [D.sub.+] := {z [member of] [D".sub.[epsilon]] : Im z > 0} and [D.sub._] := {z [member of] [D".sub.[epsilon]] : Im z < 0}. The solution [A.sub.0](z) is defined in [D'.sub.[epsilon]], and thus in [D.sub.1] := [D'.sub.[epsilon]] [intersection] [D.sub.+]. Choose any [z.sub.0] [member of] [D.sub.1]. Then [A.sub.0](z) is given by (3.7) in [D.sub.1], for some C; hence (3.7) defines a solution [A.sub.+](z) in [D".sub.[epsilon]] such that [A.sub.+](z) = [A.sub.0](z) in [D'.sub.[epsilon]] [intersection] [D.sub.+]. Similarly, there is a solution [A.sub._](z) in [D".sub.[epsilon]] such that [A.sub._](z) = [A.sub.0](z) in [D'.sub.[epsilon]] [intersection] [D.sub._]. (But note that [A.sub._] [not equal to] [A.sub.+].) We extend [A.sub.0] to {z : [absolute value of z] < [delta], z [not member of] [0, [infinity])} by defining [A.sub.0](z) = [A.sub.+](z) for z [member of] [D.sub.+] and [A.sub.0](z) = [A.sub._](z) for z [member of] [D.sub._]. The claims in (vii) follow by the result for [D'.sub.[epsilon]] proved above together with (viii) applied to [D.sub.+] and [D.sub._].

If we combine the solution [A.sub.0] on [I.sub._] with any solution A(z) on [I.sub.+], we can define functions [f.sub.n] on I by

[f.sub.n](x) := [D.sup.n][A.sub.0](x) for x [member of] [I.sub._], [f.sub.n](0) := n! [a.sub.n], and [f.sub.n](x) := [D.sup.n]A(x) for x [member of] [I.sub.+];

by (v) and (vi), each function [f.sub.n] is continuous on I. Then [f.sub.n](x) = [f.sub.n](0) + [[integral].sup.x.sub.0] [f.sub.n+1](y) dy, and thus each [f.sub.n] is differentiable with [f'.sub.n] = [f.sub.n+1]. Consequently, A := [f.sub.0] [member of] [C.sup.[infinity]](I). Further, A satisfies (3.3) on I \ {0}, and thus by continuity at 0 too. The claims (i)-(iv) now follow, using (v) and (vi).

We call [A.sub.0] the principal solution of (3.3).

Remark 3.3. It follows from the proof that we can define [A.sub.0] to be analytic in a disc {z : [absolute value of z] < [delta]} cut along any given ray in the right half-plane. In particular, we can choose a ray in the fourth quadrant, say, and then [A.sub.0] is defined on (-[delta], 0) [union] (0, [delta]); if we further define [A.sub.0](0) = [a.sub.0], the restriction of this [A.sub.0] to (-[delta], [delta]) is a [C.sup.[infinity]] solution on an interval I as in (i). Moreover, we can regard [A.sub.0](z) as a multivalued function; more precisely, we can regard [A.sub.0]([re.sup.i[theta]]) as a function of (r, [theta]) [member of] (0, [infinity]) x (-[infinity], [infinity]), and then (3.5) holds as r = [absolute value of z] [right arrow] 0 for -[pi]/2 + [epsilon] < [theta] < 5[pi]/2 - [epsilon], for any [epsilon] > 0.

4 The exponential generating function and Borel summation

We continue to study the recurrence (3.1), with [[alpha].sub.1] > 0 as in Section 3, but consider instead of A(z) the exponential generating function

E(z) := [[infinity].summation over (n=0)][a.sub.n][[z.sup.n]/n!]. (4.1)

It is easily seen from (3.1), by induction, that [a.sub.n] = O([C.sup.n]n!) for some C, and thus the sum in (4.1) converges and E(z) is an analytic function in some disc {z : [absolute value of z] < R}.
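As a numerical sketch of the bound [a.sub.n] = O([C.sup.n]n!) (again with the illustrative example [alpha] = [beta] = 1 from (1.1)): the ratios of successive normalized coefficients [a.sub.n]/n! approach [[alpha].sub.1] = 1, consistent with E(z) having a positive radius of convergence.

```python
# a_{n+1} = (n + 1) * a_n - a_{n-1}: recurrence (1.1) with alpha = beta = 1.
a = [0, 1]
for n in range(1, 40):
    a.append((n + 1) * a[n] - a[n - 1])

# Ratios (a_{n+1}/(n+1)!) / (a_n/n!) = a_{n+1} / ((n+1) * a_n):
ratios = [a[n + 1] / ((n + 1) * a[n]) for n in range(5, 39)]
```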

Differentiation of (4.1) yields [D.sup.k]E(z) = [[summation].sub.n] [a.sub.n+k][z.sup.n]/n!, and it follows that z[D.sup.k+1]E(z)= [[summation].sub.n][a.sub.n+k][nz.sup.n]/n!. Hence the recurrence relation (3.1) is equivalent to the differential equation, with [[beta].sup.*.sub.k] = [[beta].sub.k] + K [[alpha].sub.k],

[D.sup.K]E(z) = [K.summation over (k=1)]([[alpha].sub.k]z[D.sup.K-k+1]E(z) + [[beta].sup.*.sub.k][D.sup.K-k]E(z)). (4.3)

We use the standard method of introducing the vector-valued function F(z) = [([F.sub.k](z)).sup.K-1.sub.k=0] := [([D.sup.k]E(z)).sup.K-1.sub.k=0] and see that (4.3) is equivalent to

D[F.sub.K-1](z) = [(1 - [[alpha].sub.1]z).sup.-1]([K.summation over (k=2)][[alpha].sub.k]z[F.sub.K-k+1](z) + [K.summation over (k=1)][[beta].sup.*.sub.k][F.sub.K-k](z)), (4.4)

D[F.sub.k](z) = [F.sub.k+1](z), 0 [less than or equal to] k [less than or equal to] K - 2. (4.5)

(When K = 1, we simply have F(z) = [F.sub.0](z) = E(z), (4.5) is vacuous and (4.4) is the same as (4.3).)

The system (4.4)-(4.5) has a unique analytic solution with F(0) = [([a.sub.k]).sup.K-1.sub.k=0] in any simply connected domain containing 0 but not [[alpha].sub.1.sup.-1], for example in [D.sub.E] := C \ [[[alpha].sub.1.sup.-1], [infinity]); hence [F.sub.0](z) is a solution to (4.3) in [D.sub.E]. We let in the sequel E(z) denote this analytic extension of (4.1) to [D.sub.E].

Remark 4.1. The example [a.sub.n+1] = (n + [beta])[a.sub.n], [a.sub.0] = 1, with E(z) = [(1 - z).sup.-[beta]], shows that in exceptional cases, E(z) might be analytic also at [[alpha].sub.1.sup.-1] (and then E(z) is an entire function), and that it is also possible that E(z) has a pole at [[alpha].sub.1.sup.-1]; however, the same example shows that typically E(z) is singular at [[alpha].sub.1.sup.-1] and that it is not possible to extend E(z) around [[alpha].sub.1.sup.-1] as a single-valued function. See also Remark A.4.

The (generalized) Borel summation method for a (possibly divergent) series [[summation].sub.n][a.sub.n] consists of the following three steps, see Borel (1899, 1928) and Hardy (1949, [section] 8.5, 8.11). (Step (ii) is Hardy's generalization (Hardy, 1949, [section] 8.11); Borel assumes that E(z) is entire.)

(i) Define the exponential generating function E(z) by (4.1) (assuming it to exist at least for small [absolute value of z]).

(ii) Extend (if necessary) E(z) analytically along the entire positive real axis (assuming this to be possible).

(iii) Define the Borel sum by

([B.sup.*]) [[infinity].summation over (n=0)] [a.sub.n] := [[integral].sup.[infinity].sub.0] E(x)[e.sup.-x] dx (4.6)

(assuming the integral to exist).
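For the example of Remark 4.1 above ([a.sub.n+1] = (n + [beta])[a.sub.n], [a.sub.0] = 1, so E(z) = [(1 - z).sup.-[beta]]), steps (i)-(iii) can be carried out numerically. The Python sketch below is an illustration only; it takes [beta] = 2 (so [a.sub.n] = (n + 1)!) and approximates the integral in step (iii) by a trapezoid rule on a truncated range, then checks the Borel sum against a truncated partial sum of the divergent series at a small negative z.

```python
import math

beta = 2.0   # then a_n = (n+1)!
z = -0.05    # small negative argument, chosen only for illustration

def E(w):
    # Steps (i)-(ii): the exponential generating function (1 - w)^(-beta),
    # analytic along the negative real axis (here w = x*z <= 0).
    return (1.0 - w) ** (-beta)

def borel_sum(z, n=100_000, x_max=40.0):
    # Step (iii): int_0^inf E(x*z) e^{-x} dx, trapezoid rule.
    h = x_max / n
    s = 0.5 * (E(0.0) + E(x_max * z) * math.exp(-x_max))
    for i in range(1, n):
        x = i * h
        s += E(x * z) * math.exp(-x)
    return s * h

bsum = borel_sum(z)
# Truncated partial sum of the (divergent) series sum (n+1)! z^n:
partial = sum(math.factorial(n + 1) * z**n for n in range(11))
```

For |z| this small, truncating the divergent series near its smallest term already agrees with the Borel sum to several digits, as the asymptotic expansion in Theorem 4.2 predicts.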

Theorem 4.2. Let [([a.sub.n]).sup.[infinity].sub.0] be given by (3.1). There exists [delta] > 0 such that if z [member of] [D.sub.[delta]] := {z [member of] C \ [0, [infinity]): [absolute value of z] < [delta]}, then the Borel sum

([B.sup.*]) [[infinity].summation over (n=0)] [a.sub.n][z.sup.n] = [[integral].sup.[infinity].sub.0] E(xz)[e.sup.-x] dx (4.7)

exists; furthermore, this sum is the principal solution of (3.3) defined in Section 3. Consequently, the Borel sum ([B.sup.*])[[summation].sup.[infinity].sub.n=0][a.sub.n][z.sup.n] solves (3.3) in [D.sub.[delta]], has there the asymptotic expansion ~ [[summation].sup.[infinity].sub.n=0][a.sub.n][z.sup.n], and all derivatives extend continuously to 0, with the limit values [a.sub.n]n! there.

If [[alpha].sub.k] = 0 for 2 [less than or equal to] k [less than or equal to] K (in particular if K = 1), then we may take [delta] = [infinity]; i.e., then ([B.sup.*])[[summation].sup.[infinity].sub.n=0][a.sub.n][z.sup.n] exists in C \ [0, [infinity]), and is an analytic solution of (3.3) there.

See Hardy (1949, Theorem 136) for a related general theorem on generalized Borel summability of asymptotic series. To prove Theorem 4.2 we begin with a simple estimate.

Lemma 4.3. E(z) is an analytic function in [D.sub.E], and satisfies the following estimates:

(i) [absolute value of E(z)] [less than or equal to] [e.sup.C[absolute value of z]] for some constant C and all z [member of] [D.sub.E] with [absolute value of z] [greater than or equal to] 2[[alpha].sup.-1.sub.1].

(ii) If [[alpha].sub.k] = 0 for 2 [less than or equal to] k [less than or equal to] K (in particular if K = 1), then [absolute value of E(z)] [less than or equal to] [e.sup.o([absolute value of z])] as z [right arrow] [infinity] with z [member of] [D.sub.E].

Proof: We have already shown that E(z) extends to [D.sub.E].

(i): The system (4.4)-(4.5) can be written DF(z) = M(z)F(z) for a matrix M(z) with [parallel]M(z)[parallel] [less than or equal to] C for [absolute value of z] [greater than or equal to] 2[[alpha].sup.-1.sub.1] and some constant C. Hence, for r [greater than or equal to] [r.sub.0] := 2[[alpha].sup.-1.sub.1] and 0 < [theta] < 2[pi], by Gronwall's lemma, see e.g. Revuz and Yor (1999, Appendix [section] 1),


and (i) follows (possibly with a larger C), since [theta] [??] F([r.sub.0][e.sup.i[theta]]) can be extended to a continuous function on the closed interval [0, 2[pi]], and thus is bounded.

(ii): Let [epsilon] > 0 and define [F.sup.[epsilon]](z) := [([[epsilon].sup.k][D.sup.k]E(z)).sup.K-1.sub.k=0]. We have, in analogy with (4.4)-(4.5) and using the assumption [[alpha].sub.k] = 0 for k [greater than or equal to] 2,


which can be written D[F.sup.[epsilon]](z) = [M.sup.[epsilon]](z)[F.sup.[epsilon]](z), where [parallel][M.sup.[epsilon]](z)[parallel] [less than or equal to] 2[epsilon] for [absolute value of z] large, say [absolute value of z] [greater than or equal to] R([epsilon]). Consequently, by Gronwall's lemma again, [absolute value of E(z)] [less than or equal to] [absolute value of [F.sup.[epsilon]](z)] [less than or equal to] C[e.sup.2[epsilon][absolute value of z]] for [absolute value of z] [greater than or equal to] R([epsilon]) and some constant C = C([epsilon]); since [epsilon] > 0 is arbitrary, (ii) follows.

Proof of Theorem 4.2: Let z [member of] C \ [0, [infinity]). The exponential generating function of the sequence [([a.sub.n][z.sup.n]).sup.[infinity].sub.0] is x [??] E(zx), which by Lemma 4.3 is analytic on [0, [infinity]) and there satisfies [absolute value of E(zx)] [less than or equal to] [C.sub.1][e.sup.[C.sub.2][absolute value of z]x]. Consequently, the integral (4.6) defining the Borel sum ([B.sup.*]) [[summation].sup.[infinity].sub.n=0] [a.sub.n][z.sup.n] becomes (4.7) and it exists at least for [absolute value of z] < [delta] := [C.sup.-1.sub.2]; we thus have

A(z) := ([B.sup.*])[[infinity].summation over (n=0)] [a.sub.n][z.sup.n] = [[integral].sup.[infinity].sub.0] E(xz)[e.sup.-x] dx, z [member of] [D.sub.[delta]]. (4.8)

If [[alpha].sub.k] = 0 for k [greater than or equal to] 2, we may by Lemma 4.3(ii) choose [C.sub.2] arbitrarily small, and thus we may take [delta] = [infinity].

Since E(z) is analytic in a disc {z : [absolute value of z] < 2r} (with r := [[alpha].sup.-1.sub.1]/2), we have, for any N, E(zx) = [[summation].sup.N-1.sub.n=0] [a.sub.n][(zx).sup.n]/n! + O([[absolute value of zx].sup.N]) when [absolute value of zx] [less than or equal to] r, and thus, for z [member of] [D.sub.[delta]/2],


which shows the asymptotic expansion.

A(z) is analytic in [D.sub.[delta]], by (4.8) and standard arguments based on dominated convergence, and

A'(z) = [[integral].sup.[infinity].sub.0] xE'(xz)[e.sup.-x] dx.

Consequently, using integration by parts K - k times in the first integral, and defining [[alpha].sub.0] := 0, [[beta].sub.0] := [[beta].sup.*.sub.0] := -1 for convenience,


The integrated part is a polynomial [r.sub.1](z) of degree at most K - 1. Further,


and summing we obtain

[z.sup.K] [K.summation over (k=0)] ([[alpha].sub.k]xz[D.sup.K-k+1]E(xz) + [[beta].sup.*.sub.k][D.sup.K-k]E(xz)) = 0

by (4.2); hence the last sum of integrals vanishes, and


Consequently, A(z) satisfies the differential equation (4.2) in [D.sub.[delta]], except that the polynomial r(z) is replaced by -[r.sub.1](z). This means only that we have the differential equation for the same recurrence (3.1) with some other initial values; but the initial values are (by Theorem 3.2) given by the first terms of the asymptotic expansion, so by (4.9) they are just [a.sub.0], ..., [a.sub.K-1], and thus r(z) = -[r.sub.1](z) and (4.10) equals (3.3). Hence, A(z) solves (3.3); since it is, by (4.9), a solution bounded as z [right arrow] 0 along the negative real axis, it is by Theorem 3.2 the principal solution [A.sub.0](z).

5 Solving (1.1)

We return to the special case (1.1), for definiteness with [a.sub.0] = 0 and [a.sub.1] = 1 as in Section 1. The general solution A(z) to (1.3) is given in (1.6). Assume for simplicity [alpha] > 0. The general theory in Section 3 then shows that there is a principal solution [A.sub.0](z) analytic in C \ [0, [infinity]) that is given, at least for z < 0, by taking [z.sub.0] = 0 and C = 0 in (1.6). We now make the change of variable t = [[alpha].sup.-1]([z.sup.-1] - [w.sup.-1]), or w = z/(1 - [alpha]zt), and obtain


this integral is an analytic function of z [member of] C \ [0, [infinity]), and the formula is thus valid for all such z, cf. (3.8). (Recall that (3.8) was derived under the assumption [[alpha].sub.1] = 1. In fact, if [alpha] = 1, we have in the notation of Section 3 [gamma] = -v, G(z) = z and [R.sub.3](z) = z, and (5.1) coincides with (3.8). For a general [alpha] > 0, one can similarly derive (5.1) from (3.8) and the reduction in Remark 3.1.)

Consider for example -1 < z < 0, and fix N > 0. By two Taylor expansions,


Since [A.sub.0](z) has the asymptotic expansion (3.5) as z [right arrow] 0, we obtain by identifying coefficients the following solution to (1.1). (The argument has assumed [alpha] > 0, but since obviously each [a.sub.n] is a polynomial in [alpha] and [beta], the solution is valid for all [alpha] and [beta].)

Theorem 5.1. The solution of the recurrence (1.1) with [a.sub.0] = 0, [a.sub.1] = 1 is given, for any complex [alpha] [not equal to] 0 and [beta], with v := [beta]/[alpha], by

[a.sub.n] = [summation over (0 [less than or equal to] k [less than or equal to] (n-1)/2)] [(-1).sup.k] C(n - 1 - k, k) [(v + k + 1).sup.[bar.n-1-2k]] [[alpha].sup.n-1-2k],

where C(m, k) denotes a binomial coefficient and [a.sup.[bar.m]] := a(a + 1) ... (a + m - 1) a rising factorial.
Knowing this solution, it should be easy to verify it directly by induction; we leave this as an exercise.
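In place of the induction, a quick numerical check is possible; the sketch below evaluates the closed form [a.sub.n] = [summation over (k)] [(-1).sup.k] C(n - 1 - k, k)[(v + k + 1).sup.[bar.n-1-2k]][[alpha].sup.n-1-2k] (written here with C(m, k) a binomial coefficient and a rising factorial, as in the theorem) against the recurrence, for arbitrary sample values of [alpha] and [beta]:

```python
from math import comb, prod

def rising(a, m):
    # rising factorial a^(bar m) = a (a+1) ... (a+m-1)
    return prod(a + j for j in range(m))

def a_closed(n, alpha, beta):
    # closed form of Theorem 5.1; empty sum gives a_0 = 0
    v = beta / alpha
    return sum((-1) ** k * comb(n - 1 - k, k)
               * rising(v + k + 1, n - 1 - 2 * k) * alpha ** (n - 1 - 2 * k)
               for k in range((n - 1) // 2 + 1)) if n >= 1 else 0.0

alpha, beta = 2.0, 3.0   # arbitrary sample values
a = [0.0, 1.0]           # a_0 = 0, a_1 = 1
for n in range(1, 10):   # the recurrence a_{n+1} = (alpha n + beta) a_n - a_{n-1}
    a.append((alpha * n + beta) * a[n] - a[n - 1])
```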

Remark 5.2. By continuity, the last formula is valid also in the case [alpha] = 0, where we thus obtain

[a.sub.n] = [summation over (0 [less than or equal to] k [less than or equal to] (n-1)/2)] [(-1).sup.k] C(n - 1 - k, k)[[beta].sup.n-1-2k]. (5.2)
We have also, by the standard argument for linear recurrences with constant coefficients (provided [beta] [not equal to] [+ or -] 2),

[a.sub.n] = ([([beta] + [square root of ([[beta].sup.2] - 4)]).sup.n] - [([beta] - [square root of ([[beta].sup.2] - 4)]).sup.n]) / [square root of ([[beta].sup.2] - 4)] [2.sup.-n]. (5.3)

Hence, the two formulas in (5.2) and (5.3) must agree. If we expand (5.3) into a polynomial, using the binomial theorem thrice, and compare coefficients of [[beta].sup.n-2k-1], we see that this identity is equivalent to the hypergeometric sum [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].
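The agreement between (5.2) and (5.3) is also easy to confirm numerically for small n; a sketch with [beta] = 3 as an arbitrary sample value (so that [[beta].sup.2] > 4):

```python
from math import comb, sqrt

beta = 3.0
s = sqrt(beta ** 2 - 4)

def a_poly(n):
    # (5.2): a_n = sum_k (-1)^k C(n-1-k, k) beta^(n-1-2k)
    return sum((-1) ** k * comb(n - 1 - k, k) * beta ** (n - 1 - 2 * k)
               for k in range((n - 1) // 2 + 1)) if n >= 1 else 0.0

def a_binet(n):
    # (5.3): Binet-style closed form for a_{n+1} + a_{n-1} = beta a_n
    return ((beta + s) ** n - (beta - s) ** n) / s * 2.0 ** (-n)
```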

A Bessel functions

We use the standard Bessel functions of the first and second kinds [J.sub.v](z) and [Y.sub.v](z), see e.g. Abramowitz and Stegun (1972, Chapter 9) or Lebedev (1972, Chapter 5). Recall that these are analytic functions of z and v for z [member of] C \ (-[infinity], 0] (the complex plane cut along the negative real axis) and v [member of] C. (Alternatively, one can think of them as analytic functions of z in the covering space of C \ {0}, i.e., as analytic functions of log z [member of] C, or as functions of (r, [theta]) [member of] (0, [infinity]) x (-[infinity], [infinity]).) The same applies to the functions [PHI] and [PSI] defined below, so we assume throughout the appendices that z is in this domain (or in this covering space). (When [mu] is an integer, [J.sub.[mu]](z) is an entire function of z, and there is no problem at all.)

The Bessel functions satisfy the recurrence relations

[J.sub.v+1](z) + [J.sub.v-1](z) = 2v/z[J.sub.v](z) (A.1)

[Y.sub.v+1](z) + [Y.sub.v-1](z) = 2v/z [Y.sub.v](z). (A.2)

Hence, for a fixed z, the functions [J.sub.v](z) and [Y.sub.v](z), as functions of v (restricted to a sequence [v.sub.0] + n, n [member of] N), satisfy a recurrence relation of the type studied in this paper. Conversely, the solution to (1.1) can easily be expressed in these Bessel functions by fitting the constants and initial values.
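The recurrences (A.1)-(A.2) are easy to check numerically (scipy assumed available; the order and argument below are arbitrary sample values):

```python
from scipy.special import jv, yv  # Bessel functions J_v(z), Y_v(z)

v, z = 1.3, 2.7  # arbitrary sample order and argument

# (A.1)-(A.2): the three-term recurrences in the order v
lhs_J = jv(v + 1, z) + jv(v - 1, z)
lhs_Y = yv(v + 1, z) + yv(v - 1, z)
rhs_J = (2 * v / z) * jv(v, z)
rhs_Y = (2 * v / z) * yv(v, z)
```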

We find it convenient to define

[PHI]([mu], v; z) := [pi]([J.sub.[mu]](z)[Y.sub.v](z) - [J.sub.v](z)[Y.sub.[mu]](z)), (A.3)

[PHI]([mu], v; z) = -[PHI](v, [mu]; z) (A.4)


[PHI]([mu], [mu];z) = 0. (A.5)

Since [J.sub.v+1](z) = v/z [J.sub.v](z) - [J'.sub.v](z) and [Y.sub.v+1](z) = v/z[Y.sub.v](z) - [Y'.sub.v](z) (Abramowitz and Stegun, 1972, (9.1.27)), the formula for the Wronskian W {[J.sub.v], [Y.sub.v]} yields, as stated in Abramowitz and Stegun (1972, (9.1.16)),

[PHI](v + 1, v; z) = [pi]([J.sub.v+1](z)[Y.sub.v](z) - [J.sub.v](z)[Y.sub.v+1](z)) = 2/z. (A.6)
It follows immediately from (A.1) and (A.2) that

[PHI]([mu] + 1, v; z) + [PHI]([mu] - 1, v; z) = 2[mu]/z[PHI]([mu], v; z) (A.7)

[PHI]([mu], v + 1; z) + [PHI]([mu], v - 1; z) = 2v/z [PHI]([mu], v; z). (A.8)

Hence, for fixed z, the function [PHI] is a function of two (complex) variables that satisfies the type of recurrence studied here in each variable. To write the recurrence in a more convenient form, we define, for (complex) x [not equal to] 0,

[PSI]([mu], v; x) := [x.sup.-1][PHI]([mu], v; 2/x). (A.9)

Then (A.7) and (A.8) become

[PSI]([mu] + 1, v; x) + [PSI]([mu] - 1, v; x) = [mu]x[PSI]([mu], v; x), (A.10)

[PSI]([mu], v + 1; x) + [PSI]([mu], v - 1; x) = vx[PSI]([mu], v; x). (A.11)

Further, [PSI]([mu], v; x) = -[PSI](v, [mu]; x) and

[PSI](v, v; x) = 0, (A.12)

[PSI](v + 1, v; x) = 1. (A.13)

Consequently, for any fixed v and x [not equal to] 0, the sequence [PSI](v + n, v; x), n [member of] N, satisfies (1.1) with [alpha] = x and [beta] = vx, and the initial conditions [a.sub.0] = 0, [a.sub.1] = 1 used in Section 1. We just as easily find the general solution:

Theorem A.1. The general solution to the recurrence (1.1), [a.sub.n+1] + [a.sub.n-1] = ([alpha]n + [beta])[a.sub.n] for n [greater than or equal to] 1, is, with v := [beta]/[alpha],

[a.sub.n] = [a.sub.1][PSI](n + v, v; [alpha]) - [a.sub.0][PSI](n + v, v + 1; [alpha]).
In particular, the solution with [a.sub.0] = 0, [a.sub.1] = 1 is

[a.sub.n] = [PSI](n + v, v; [alpha]).

Proof: These expressions satisfy the recurrence by (A.10) and the initial conditions by (A.12), (A.13) and [PSI](v, v + 1; [alpha]) = -[PSI](v + 1, v; [alpha]) = -1.
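Theorem A.1 can also be checked numerically; the sketch below assumes the normalization [PHI]([mu], v; z) = [pi]([J.sub.[mu]](z)[Y.sub.v](z) - [J.sub.v](z)[Y.sub.[mu]](z)), so that [PSI](v + 1, v; x) = 1 by the Wronskian, and uses arbitrary sample values of [alpha] and [beta] (scipy assumed available):

```python
import math
from scipy.special import jv, yv

def psi(mu, v, x):
    # Psi(mu, v; x) = x^{-1} Phi(mu, v; 2/x), with the normalization
    # Phi = pi (J_mu Y_v - J_v Y_mu) assumed here
    z = 2.0 / x
    return math.pi * (jv(mu, z) * yv(v, z) - jv(v, z) * yv(mu, z)) / x

alpha, beta = 0.5, 0.2   # sample values; v = beta/alpha = 0.4
a = [0.0, 1.0]           # a_0 = 0, a_1 = 1
for n in range(1, 9):    # the recurrence (1.1)
    a.append((alpha * n + beta) * a[n] - a[n - 1])
```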

Combining the two solutions of the same recurrence found in Theorems 5.1 and A.1, we find the following formulas.

Theorem A.2. For any complex v and n [member of] N,

[PSI](n + v, v; x) = [summation over (0 [less than or equal to] k [less than or equal to] (n-1)/2)] [(-1).sup.k] C(n - 1 - k, k) [(v + k + 1).sup.[bar.n-1-2k]] [x.sup.n-1-2k],
a polynomial of degree at most n - 1 in x, and

[PHI](n + v, v; z) = [summation over (0 [less than or equal to] k [less than or equal to] (n-1)/2)] [(-1).sup.k] C(n - 1 - k, k) [(v + k + 1).sup.[bar.n-1-2k]] [(2/z).sup.n-2k],
a polynomial of degree at most n in [z.sup.-1].

Remark A.3. These polynomials are called Lommel polynomials, see Lommel (1871) and Watson (1922, [section] 9.6). In the notation of Watson (1922), [PSI](n + v, v; x) = [R.sub.n-1,v+1](2/x). For combinatorial properties of Lommel polynomials, see also Flajolet and Schott (1990) and Feinsilver, McSorley and Schott (1996).

For example, continuing (A.12)-(A.13),

[PSI](v + 2, v; x) = (v + 1)x,

[PSI] (v + 3, v; x) = (v + 1)(v + 2)[x.sup.2] - 1,

[PSI] (v + 4, v; x) = (v + 1)(v + 2)(v + 3)[x.sup.3] - 2(v + 2)x,

[PSI] (v + 5, v; x) = [(v + 1).sup.[bar.4]][x.sup.4] - 3[(v + 2).sup.[bar.2]][x.sup.2] + 1,

[PSI] (v + 6, v; x) = [(v + 1).sup.[bar.5]][x.sup.5] - 4[(v + 2).sup.[bar.3]][x.sup.3] + 3(v + 3)x.

Thus, typically the degree of [PSI](n + v,v; x) is exactly n - 1 (when n [greater than or equal to] 1), but when v is a negative integer it may be smaller.
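The polynomials displayed above can be regenerated from the recurrence (A.10) with the initial values [PSI](v, v; x) = 0, [PSI](v + 1, v; x) = 1; a numerical spot check at arbitrary sample values v = 0.3, x = 1.7:

```python
from math import prod

v, x = 0.3, 1.7  # arbitrary sample values

# Psi(v+n, v; x) via Psi_{n+1} = (v+n) x Psi_n - Psi_{n-1}, Psi_0 = 0, Psi_1 = 1
psi = [0.0, 1.0]
for n in range(1, 5):
    psi.append((v + n) * x * psi[n] - psi[n - 1])

def rising(a, m):
    # rising factorial a^(bar m)
    return prod(a + j for j in range(m))

# the examples listed in the text, Psi(v, v; x) through Psi(v+5, v; x)
expected = [
    0.0,
    1.0,
    (v + 1) * x,
    (v + 1) * (v + 2) * x**2 - 1,
    (v + 1) * (v + 2) * (v + 3) * x**3 - 2 * (v + 2) * x,
    rising(v + 1, 4) * x**4 - 3 * rising(v + 2, 2) * x**2 + 1,
]
```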

We give some more examples in Tables 1-4, where [mu] and v are integers in Tables 1-3 and half-integers in Table 4. (Note that for integer v, [J.sub.-v](z) = [(-1).sup.v][J.sub.v](z) and [Y.sub.-v](z) = [(-1).sup.v][Y.sub.v](z); hence, [PHI]([mu], -v; z) = [(-1).sup.v][PHI]([mu], v; z) and [PSI]([mu], -v; x) = [(-1).sup.v][PSI]([mu], v; x), and similarly with the argument -[mu] if [mu] is an integer. This explains the symmetry in the tables, and the 0's when [mu] + v = 0; these are special for integer arguments.) These tables are thus parts of doubly infinite arrays that satisfy the recurrences (A.10)-(A.11) in both directions; note that these arrays of polynomials and integers are constructed from the transcendental Bessel functions. (In the half-integer case the Bessel functions can be expressed in the trigonometric functions and powers of z.) We give also in Table 5 some examples from Sloane (2008) of such integer sequences.
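The integer entries in Tables 2-3 can likewise be regenerated from the recurrence alone; e.g. the v = 0 columns ([mu] = 0, ..., 7), computed with exact integer arithmetic:

```python
def psi_column(v, x, length):
    # Psi(v+n, v; x) for n = 0, ..., length-1 via the recurrence (A.10):
    # Psi_{n+1} = (v+n) x Psi_n - Psi_{n-1}, starting from 0, 1
    col = [0, 1]
    for n in range(1, length - 1):
        col.append((v + n) * x * col[n] - col[n - 1])
    return col[:length]

tab2_col0 = psi_column(0, 1, 8)  # Tab. 2: Psi(mu, 0; 1), mu = 0..7
tab3_col0 = psi_column(0, 2, 8)  # Tab. 3: Psi(mu, 0; 2), mu = 0..7
```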

Remark A.4. As n [right arrow] [infinity], for any fixed v and z, cf. Abramowitz and Stegun (1972, (9.3.1)),

[J.sub.n+v](z) ~ [(z/2).sup.n+v]/[GAMMA](n + v + 1) and [Y.sub.n+v](z) ~ -[[pi].sup.-1][(z/2).sup.-n-v][GAMMA](n + v);

hence [absolute value of [J.sub.n+v](z)] [right arrow] 0 and [absolute value of [Y.sub.n+v](z)] [right arrow] [infinity] rapidly. Thus, by Theorem A.1, the solution [a.sub.n] of (1.1) satisfies

[absolute value of [a.sub.n]] ~ C[[absolute value of [alpha]].sup.n][absolute value of [GAMMA](n + v)] ~ c[n.sup.Re v-1][[absolute value of [alpha]].sup.n]n!

for some c > 0, except in the special case [a.sub.1] [J.sub.v](2/[alpha]) = [a.sub.0] [J.sub.v+1](2/[alpha]), when [a.sub.n] = c[J.sub.n+v](2/[alpha]) [right arrow] 0 rapidly. Hence, as claimed in Section 1, typically the generating function A(z) diverges for all z [not equal to] 0, while the exponential generating function has radius of convergence [[absolute value of [alpha]].sup.-1], as seen more generally in Remark 4.1.
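The factorial growth is visible directly from the recurrence: since [a.sub.n+1] = ([alpha]n + [beta])[a.sub.n] - [a.sub.n-1] and [a.sub.n-1] is typically negligible compared with [a.sub.n], the ratio [a.sub.n+1]/(([alpha]n + [beta])[a.sub.n]) tends to 1. A quick sketch (arbitrary sample values [alpha] = [beta] = 1, exact integer arithmetic):

```python
alpha, beta = 1, 1        # sample values
a = [0, 1]
for n in range(1, 25):    # a_{n+1} = (alpha n + beta) a_n - a_{n-1}
    a.append((alpha * n + beta) * a[n] - a[n - 1])

# the ratio a_{n+1} / ((alpha n + beta) a_n) approaches 1,
# so a_n grows roughly like alpha^n n! and sum a_n z^n diverges for z != 0
ratio = a[21] / ((alpha * 20 + beta) * a[20])
```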

Remark A.5. The Bessel functions of the third kind [H.sup.(1).sub.v](z) and [H.sup.(2).sub.v](z) are given by [J.sub.v](z) [+ or -] i[Y.sub.v](z) (Abramowitz and Stegun, 1972, (9.1.3-4)), and thus


Consequently, taking determinants and noting [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], also


Remark A.6. The function [J.sub.-v] satisfies the same Bessel equation [z.sup.2]f''(z) + zf'(z) + ([z.sup.2] - [v.sup.2])f(z) = 0 as [J.sub.v] and [Y.sub.v], and it can be expressed in them by [J.sub.-v](z) = cos(v[pi])[J.sub.v](z) - sin(v[pi])[Y.sub.v](z) (Abramowitz and Stegun, 1972, (9.1.2)). It follows that, if [mu] - v = n [member of] Z,


A more general formula expressing [PHI]([mu], v; z) in Bessel functions J only is given in (B.3) below.

Remark A.7. The related recurrence [a.sub.n+1] = ([alpha]n + [beta])[a.sub.n] + [a.sub.n-1] can be solved similarly using the Bessel functions [I.sub.v] and [K.sub.v]; alternatively, we may note that [i.sup.n][a.sub.n] satisfies (1.1) (with i[alpha] and i[beta] instead of [alpha] and [beta]), and thus the solution follows from Theorems 5.1 and A.1. In particular, it is easily seen that if [a.sub.0] = 0 and [a.sub.1] = 1, then the formulas in Theorem 5.1 hold if we omit the factor [(-1).sup.k]. Examples of such sequences are A000806, A001053, A001517 and A001518 in Sloane (2008), and the Bessel polynomials [y.sub.n](x) that satisfy this recurrence with [alpha] = 2x, [beta] = x (and thus v = 1/2), and [y.sub.0](x) = 1, [y.sub.1](x) = x + 1, see Krall and Frink (1949) and Grosswald (1978).
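The substitution [i.sup.n][a.sub.n] is easy to verify numerically (arbitrary sample values [alpha] = 2, [beta] = 1):

```python
# b_n solves the related recurrence b_{n+1} = (alpha n + beta) b_n + b_{n-1}
alpha, beta = 2, 1  # sample values
b = [0, 1]
for n in range(1, 10):
    b.append((alpha * n + beta) * b[n] + b[n - 1])

# then a_n = i^n b_n satisfies a_{n+1} + a_{n-1} = (i alpha n + i beta) a_n
a = [(1j) ** n * b[n] for n in range(len(b))]
checks = [abs(a[n + 1] + a[n - 1] - 1j * (alpha * n + beta) * a[n])
          for n in range(1, len(a) - 1)]
```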

Remark A.8. We do not know any closed form for the exponential generating function E(z) in general, but we observe that in the special case [a.sub.n] = [J.sub.n](y) for some y [not equal to] 0, when (1.1) holds with [alpha] = 2/y and [beta] = 0 by (A.1),

E(z) = [[infinity].summation over (n=0)] [[J.sub.n](y)/n!][z.sup.n] = [J.sub.0]([square root of ([y.sup.2] - 2yz)]).

(This is an entire function, since [J.sub.0] is entire and even.) This can be verified by inserting the right-hand side into (4.3), using the Bessel equation. It would be interesting to know whether there are other cases when E(z) can be expressed using Bessel functions.
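This identity is also easy to test numerically (scipy assumed available; the sample values y = 1.5, z = 0.4 are arbitrary, chosen so that [y.sup.2] - 2yz > 0):

```python
import math
from scipy.special import jv

y, z = 1.5, 0.4  # sample values with y^2 - 2 y z > 0

# left side: truncated exponential generating function sum J_n(y) z^n / n!
series = sum(jv(n, y) * z ** n / math.factorial(n) for n in range(30))

# right side: J_0(sqrt(y^2 - 2 y z))
closed = jv(0, math.sqrt(y * y - 2 * y * z))
```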

B Hypergeometric series

We have shown above (Theorem A.2) that if [mu] - v [member of] Z, then [PSI]([mu], v; x) is a polynomial in x and, equivalently, [PHI]([mu], v; z) is a polynomial in [z.sup.-1]. In this appendix we will show this in another way by deriving the general formula (B.4) for [PHI]([mu], v; z) in terms of hypergeometric functions. In the case [mu] - v [member of] Z we show that this reduces to a polynomial in [z.sup.-1] and we will recover Theorem A.2. However, the formula also shows that for general [mu] and v no similar simplification is possible.

We begin with a product formula for Bessel functions (Abramowitz and Stegun, 1972, (9.1.14))


For [mu], v, [mu] + v [not member of] [Z.sub.<0], this can also be written in terms of a hypergeometric function:


Remark B.1. Since, for v [not member of] [Z.sub.<0],

[J.sub.v](z) = [[(z/2).sup.v]/[GAMMA](v + 1)] [sub.0][F.sub.1](; v + 1; -[z.sup.2]/4),

this product formula is equivalent to the following product formula for hypergeometric functions, for any [alpha], [beta] with [alpha], [beta], [alpha] + [beta] - 1 [not member of] [Z.sub.[less than or equal to]0]:

[sub.0][F.sub.1](; [alpha]; x) [sub.0][F.sub.1](; [beta]; x) = [sub.2][F.sub.3](([alpha] + [beta] - 1)/2, ([alpha] + [beta])/2; [alpha], [beta], [alpha] + [beta] - 1; 4x).
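This product formula (with upper parameters ([alpha] + [beta] - 1)/2 and ([alpha] + [beta])/2) can be checked by direct series computation; a sketch with arbitrary sample parameters:

```python
from math import prod

def hyp(num, den, x, terms=60):
    # generalized hypergeometric series pFq(num; den; x) by direct summation
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= x * prod(p + n for p in num) / (prod(q + n for q in den) * (n + 1))
    return s

a, b, x = 1.3, 0.7, 0.2  # sample parameters with a, b, a+b-1 not in Z_{<=0}
lhs = hyp([], [a], x) * hyp([], [b], x)
rhs = hyp([(a + b - 1) / 2, (a + b) / 2], [a, b, a + b - 1], 4 * x)
```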

If v [not member of] Z, then (Abramowitz and Stegun, 1972, (9.1.2))

[Y.sub.v](z) = [cos(v[pi])[J.sub.v](z) - [J.sub.-v](z)]/sin(v[pi]),

and thus the definition (A.3) yields, for [mu], v [not member of] Z,

[PHI]([mu], v; z) = -[[pi]/sin(v[pi])][J.sub.[mu]](z)[J.sub.-v](z) + [[pi]/sin([mu][pi])][J.sub.v](z)[J.sub.-[mu]](z) + [pi](cot(v[pi]) - cot([mu][pi]))[J.sub.[mu]](z)[J.sub.v](z). (B.3)

The product formula (B.2) and [GAMMA](y)[GAMMA](1 - y) = [pi]/sin(y[pi]) yield, if [mu], v, [mu] [+ or -] v [not member of] Z, the formula for [PHI] (and thus for [PSI]) promised above:


In general, no simplification seems possible, since the three terms on the right hand side have expansions with powers of z that are congruent (mod 2Z) to, respectively, [mu] - v, v - [mu] and [mu] + v. In the special (limiting) case [mu] - v [member of] Z, however, cot(v[pi]) = cot([mu][pi]), so the third term disappears. Moreover, if, say, [mu] - v = n [member of] [Z.sub.[greater than or equal to]0], but still [mu], v [not member of] Z, then the result can be written (simplest seen from (B.3) and (B.1), noting that sin([mu][pi]) = [(-1).sup.n] sin(v[pi])).


It is easy to see that the term with k = m, say, in the first sum is cancelled by the term with k = m + n in the second. Hence


the polynomial of degree at most n in [z.sup.-1] found in Theorem A.2 by a different method using our generating functions. We have thus found another proof of Theorem A.2. (The argument above assumes v [not member of] Z, but since the final result is a polynomial in v, and the left hand side is analytic in v, the result holds for all v by continuity.)

received 1 December 2008, revised 18 December 2009, accepted 20 December 2009.

References
M. Abramowitz & I. A. Stegun, eds., Handbook of Mathematical Functions. Dover, New York, 1972.

W. Balser, From Divergent Power Series to Analytic Functions. Theory and Application of Multisummable Power Series. Lecture Notes in Mathematics, 1582. Springer, Berlin, 1994.

W. Balser, Formal Power Series and Linear Systems of Meromorphic Ordinary Differential Equations. Springer, New York, 2000.

W. Balser, B. L. J. Braaksma, J.-P. Ramis & Y. Sibuya, Multisummability of formal power series solutions of linear ordinary differential equations. Asymptotic Anal. 5 (1991), no. 1, 27-45.

B. L. J. Braaksma, Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Differential Equations 92 (1991), no. 1, 45-75.

E. Borel, Sur quelques points de la theorie des fonctions. Ann. Sci. Ecole Norm. Sup. (3) 12 (1895), 9-55.

E. Borel, Memoire sur les series divergentes. Ann. Sci. Ecole Norm. Sup. (3) 16 (1899), 9-131.

E. Borel, Lecons sur les series divergentes. 2. ed., Gauthier-Villars, Paris, 1928.

T. Carleman, Les fonctions quasi analytiques. Gauthier-Villars, Paris, 1926.

L. Euler, De seriebus divergentibus. Novi Commentarii Academiae Scientiarum Petropolitanae 5 (1760), 205-237. In Opera Omnia: Series 1, Volume 14, pp. 585-617. Available as E247 on the Euler Archive.

P. Feinsilver, J. McSorley & R. Schott, Combinatorial interpretation and operator calculus of Lommel polynomials. J. Combin. Theory Ser. A 75 (1996), no. 1, 163-171.

P. Flajolet & R. Schott, Nonoverlapping partitions, continued fractions, Bessel functions and a divergent series. European J. Combin. 11 (1990), no. 5, 421-432.

P. Flajolet & R. Sedgewick, Analytic Combinatorics. Cambridge Univ. Press, Cambridge, 2008.

H. L. Krall & O. Frink, A new class of orthogonal polynomials: The Bessel polynomials. Trans. Amer. Math. Soc. 65 (1949), 100-115.

E. Grosswald, Bessel Polynomials. Lecture Notes in Mathematics, 698. Springer, Berlin, 1978.

G. H. Hardy, Divergent Series. Clarendon Press, Oxford, 1949.

N. N. Lebedev, Special Functions and their Applications. Dover, New York, 1972. (Translated from Russian.)

E. Lommel, Zur Theorie der Bessel'schen Functionen. Math. Ann. 4 (1871), 103-116.

B. Malgrange & J.-P. Ramis, Fonctions multisommables. Ann. Inst. Fourier (Grenoble) 42 (1992), no. 1-2, 353-368.

J. Martinet & J.-P. Ramis, Elementary acceleration and multisummability. I. Ann. Inst. H. Poincare Phys. Theor. 54 (1991), no. 4, 331-401.

R. Parviainen & H. Renlund. In preparation.

J.-P. Ramis, Series divergentes et theories asymptotiques, Panoramas et Syntheses, 121, Soc. Math. France, Paris, 1993.

H. Renlund, First-passage percolation with exponential times on a ladder. Combin. Probab. Comput., to appear.

D. Revuz & M. Yor, Continuous Martingales and Brownian Motion. 3rd edition, Springer, Berlin, 1999.

N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, 2008.

G. N. Watson, A Treatise on the Theory of Bessel Functions. Cambridge Univ. Press, Cambridge, 1922.

Svante Janson

Department of Mathematics, Uppsala University, PO Box 480, SE-751 06 Uppsala, Sweden
Tab. 1: [PSI]([mu], v; x) for some integers [mu] and v.

[mu] \ v              -1                               0                  1                2                3

 -2                   -1                               x                  1                0               -1
 -1                    0                              -1                  0                1               2x
  0                    1                               0                 -1               -x  -2[x.sup.2] + 1
  1                    0                               1                  0               -1              -2x
  2                   -1                               x                  1                0               -1
  3                  -2x                  2[x.sup.2] - 1                 2x                1                0
  4      -6[x.sup.2] + 1                 6[x.sup.3] - 4x     6[x.sup.2] - 1               3x                1
  5    -24[x.sup.3] + 6x   24[x.sup.4] - 18[x.sup.2] + 1   24[x.sup.3] - 6x  12[x.sup.2] - 1               4x

Tab. 2: [PSI] ([mu], v; 1) = [PHI] ([mu], v; 2)
for some integers [mu] and v.

      -2     -1     0      1      2

-2     0     -1     1      1      0
-1     1      0    -1      0      1
 0    -1      1     0     -1     -1
 1    -1      0     1      0     -1
 2     0     -1     1      1      0
 3     1     -2     1      2      1
 4     3     -5     2      5      3
 5    11    -18     7     18     11
 6    52    -85    33     85     52
 7   301   -492   191    492    301

      3      4     5      6      7

-2   -1     -3    -11    -52   -301
-1    2      5     18     85    492
 0   -1     -2     -7    -33   -191
 1   -2     -5    -18    -85   -492
 2   -1     -3    -11    -52   -301
 3    0     -1     -4    -19   -110
 4    1      0     -1     -5    -29
 5    4      1      0     -1     -6
 6   19      5      1      0     -1
 7  110     29      6      1      0

Tab. 3: [PSI] ([mu], v; 2) = 1/2 [PHI] ([mu], v; 1)
for some integers [mu] and v.

       0       1      2     3     4      5       6        7

0      0      -1     -2    -7    -40  -313   -3090   -36767
1      1       0     -1    -4    -23  -180   -1777   -21144
2      2       1      0    -1    -6    -47    -464    -5521
3      7       4      1     0    -1     -8     -79     -940
4     40      23      6     1     0     -1     -10     -119
5    313     180     47     8     1      0      -1      -12
6   3090    1777    464    79    10      1       0       -1
7  36767   21144   5521   940   119     12       1        0

Tab. 4: [PSI] ([mu], v; 2) = 1/2[PHI]([mu], v; 1)
for some half-integers [mu] and v.

          1/2      3/2      5/2     7/2    9/2    11/2     13/2

-5/2       -2       -5      -13     -60   -407   -3603   -39226
-3/2        1        2        5      23    156    1381    15035
-1/2       -1       -1       -2      -9    -61    -540    -5879
1/2         0       -1       -3     -14    -95    -841    -9156
3/2         1        0       -1      -5    -34    -301    -3277
5/2         3        1        0      -1     -7     -62     -675
7/2        14        5        1       0     -1      -9      -98
9/2        95       34        7       1      0      -1      -11
11/2      841      301       62       9      1       0       -1
13/2     9156     3277      675      98     11       1        0
15/2   118187    42300     8713    1265    142      13        1
17/2  1763649   631223   130020   18877   2119     194       15

Tab. 5: Some integer sequences satisfying (1.1)
from Sloane (2008).

          [alpha]n + [beta]   [a.sub.n] by Theorem A.1

A053983   2n + 1        [PSI](n + 1/2,1/2; 2) -
                          [PSI](n + 1/2, 3/2; 2)
A053984   2n + 1        [PSI](n +1/2,1/2; 2)
A058797   n + 1         [PSI](n + 1,1; 1) -
                          [PSI](n + 1, 2; 1) =
                          [PSI](n + 1, 0; 1)
A058798   n + 1         [PSI](n + 1, 1;1)
A058799   n + 3         [PSI](n + 3, 2;1)
A093985   2n            [PSI](n, 0;2)
A093986   2n            [PSI](n, 0;2) - [PSI](n, 1;2)
A106174   2n + 2        [PSI](n + 1,1;2)
A121323   2n + 3        [PSI](n + 3/2, 3/2; 2)
A121351   3n + 4        [PSI](n + 4/3,4/3; 3)
A121353   3n + 1        [PSI](n +1/3,1/3; 3)
A121354   3n + 2        [PSI](n + 2/3, 2/3; 3)
Publication: Discrete Mathematics and Theoretical Computer Science, June 1, 2010.