
Some Inequalities in Information Theory Using Tsallis Entropy.

1. Introduction

Throughout the paper $\mathbb{N}$ denotes the set of natural numbers, and for $n \in \mathbb{N}$ we set

$\Delta_n = \left\{ A = (a_1, a_2, \ldots, a_n) : a_k \ge 0,\ 0 < \sum_{k=1}^{n} a_k \le 1 \right\}$, $\quad n = 2, 3, 4, \ldots$, (1)

the set of all $n$-component complete and incomplete discrete probability distributions.

For $A = (a_1, a_2, \ldots, a_n) \in \Delta_n$ and $B = (b_1, b_2, \ldots, b_n) \in \Delta_n$, we define a nonadditive measure of inaccuracy, denoted by $H(A,B;\xi)$, as

[mathematical expression not reproducible] (2)

If $b_k = a_k / \sum_{k=1}^{n} a_k^{\xi}$, then $H(A,B;\xi)$ reduces to the nonadditive entropy

$H(A;\xi) = \dfrac{\sum_{k=1}^{n} a_k^{\xi} - 1}{D^{1-\xi} - 1}, \quad \xi > 0 \ (\xi \ne 1)$. (3)

The entropy (3) was first characterized by Havrda and Charvát [1]. Later, Daróczy [2] and Behara and Nath [3] studied this entropy, and Vajda [4] characterized it for finite discrete generalized probability distributions. Sharma and Mittal [5] generalized this measure to the entropy of order $\alpha$ and type $\beta$. Pereira and Gur Dial [6] and Gur Dial [7] studied the Sharma-Mittal entropy in connection with a generalization of the Shannon inequality and gave applications in coding theory, and Kumar and Choudhary [8] also gave applications in coding theory. Recently, Wondie and Kumar [9] gave a joint representation of Rényi's and Tsallis entropy, and Tsallis [10] gave applications of (3) in physics. For $A \in \Delta_n^*$ and $\xi \to 1$, $H(A;\xi)$ reduces to the Shannon [11] entropy;

i.e., $H(A) = -\sum_{k=1}^{n} a_k \log_D a_k$. (4)
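As a numerical aside (ours, not the paper's), the sketch below computes the reconstructed form of (3) with the illustrative choice $D = 2$ and checks that it approaches the Shannon entropy (4) as $\xi \to 1$; the function names are our own.

```python
import math

def tsallis_entropy(a, xi, D=2):
    """H(A; xi) = (sum_k a_k^xi - 1) / (D^(1-xi) - 1), cf. (3); xi > 0, xi != 1."""
    return (sum(p ** xi for p in a) - 1.0) / (D ** (1.0 - xi) - 1.0)

def shannon_entropy(a, D=2):
    """H(A) = -sum_k a_k log_D a_k, cf. (4)."""
    return -sum(p * math.log(p, D) for p in a)

A = [0.5, 0.25, 0.25]                    # a complete distribution in Delta*_n
for xi in (2.0, 1.5, 1.1, 1.01, 1.001):  # H(A; xi) drifts toward H(A)
    print(xi, tsallis_entropy(A, xi))
print("Shannon:", shannon_entropy(A))    # the xi -> 1 limit, here 1.5
```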

Inequality (6) below has been generalized to the case of Rényi's entropy.

2. Formulation of the Problem

For $\xi \to 1$ and $A, B \in \Delta_n^*$, an important property of Kerridge's inaccuracy [12], $H(A,B) = -\sum_{k=1}^{n} a_k \log_D b_k$, is that

$H(A) \le H(A,B)$. (5)

Equality holds if and only if $A = B$. In other words, Shannon's entropy is the minimum value of Kerridge's inaccuracy. If $A, B \in \Delta_n$, then (5) is no longer necessarily true. Also, the corresponding inequality

$H(A;\xi) \le H(A,B;\xi)$ (6)

is not necessarily true, even for generalized probability distributions. Hence it is natural to ask the following question: for generalized probability distributions, what is the quantity whose minimum value is $H(A;\xi)$? Below we answer this question for $H(A;\xi)$, dividing the discussion into two cases: (i) $\xi \to 1$ and (ii) $\xi \ne 1$. We shall also assume that $n \ge 2$, because the problem is trivial for $n = 1$.
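Before turning to the two cases, inequality (5) for complete distributions is easy to check numerically. The following sketch is our illustration (the helper name and $D = 2$ are our choices), using Kerridge's inaccuracy $H(A,B) = -\sum_k a_k \log_D b_k$:

```python
import math

def kerridge_inaccuracy(a, b, D=2):
    """Kerridge's inaccuracy H(A, B) = -sum_k a_k log_D b_k."""
    return -sum(p * math.log(q, D) for p, q in zip(a, b))

A = [0.5, 0.3, 0.2]
B = [0.4, 0.4, 0.2]
H_A = kerridge_inaccuracy(A, A)          # H(A) is the inaccuracy at B = A
print(H_A <= kerridge_inaccuracy(A, B))  # True: (5) holds, equality only at B = A
```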

Case 1. Let $\xi \to 1$. If $A, B \in \Delta_n^*$, then, as remarked earlier, (5) is true. For $A, B \in \Delta_n$, it can easily be seen, using Jensen's inequality, that (5) is true if $\sum_{k=1}^{n} a_k \ge \sum_{k=1}^{n} b_k$, with equality in (5) holding if and only if

$\dfrac{a_1}{b_1} = \dfrac{a_2}{b_2} = \cdots = \dfrac{a_n}{b_n} = \dfrac{\sum_{k=1}^{n} a_k}{\sum_{k=1}^{n} b_k}$. (7)

Case 2. Let $\xi \ne 1$. Since (6) is not necessarily true, we require the condition

$\sum_{k=1}^{n} a_k^{\xi-1} b_k \le 1, \quad \xi > 1$, (8)

under which $H(A;\xi) \le H(A,B;\xi)$ holds, with equality if and only if $b_k = a_k / \sum_{k=1}^{n} a_k^{\xi}$.
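That this choice of $b_k$ attains equality in (8) can be seen in a few lines (a quick check of our own):

```python
A, xi = [0.5, 0.3, 0.2], 2.0
S = sum(p ** xi for p in A)                          # sum_k a_k^xi
B = [p / S for p in A]                               # b_k = a_k / S
print(sum(p ** (xi - 1) * q for p, q in zip(A, B)))  # 1.0 (up to rounding): equality in (8)
```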

Since $\xi > 1$, we may apply the reverse Hölder inequality: if $n = 2, 3, \ldots$, $\gamma > 1$, and $x_1, \ldots, x_n, y_1, \ldots, y_n$ are positive real numbers, then

$\sum_{k=1}^{n} x_k y_k \ge \left( \sum_{k=1}^{n} x_k^{1/\gamma} \right)^{\gamma} \left( \sum_{k=1}^{n} y_k^{1/(1-\gamma)} \right)^{1-\gamma}$. (9)
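A random spot-check of (9) as stated above (a sketch; the test data and helper are ours):

```python
import random

def reverse_holder_gap(x, y, gamma):
    """LHS minus RHS of (9) for gamma > 1 and positive x_k, y_k."""
    lhs = sum(a * b for a, b in zip(x, y))
    rhs = (sum(a ** (1.0 / gamma) for a in x) ** gamma
           * sum(b ** (1.0 / (1.0 - gamma)) for b in y) ** (1.0 - gamma))
    return lhs - rhs

random.seed(0)
for _ in range(5):
    x = [random.uniform(0.1, 2.0) for _ in range(4)]
    y = [random.uniform(0.1, 2.0) for _ in range(4)]
    print(reverse_holder_gap(x, y, gamma=2.5) >= 0)  # True in every trial
```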

Let [mathematical expression not reproducible].

Putting these values into (9), we get

[mathematical expression not reproducible], (10)

where we have also used (8). This implies that

[mathematical expression not reproducible]. (11)

or, equivalently,

[mathematical expression not reproducible]; (12)

Using (12) and the fact that $\xi > 1$, we get (6).

Particular case. For $\xi \to 1$, (6) becomes

$H(A) \le H(A,B)$, (13)

which is Kerridge's inequality for inaccuracy [12].

3. Mean Codeword Length and Its Bounds

We will now give an application of inequality (6) in coding theory for

$\Delta_n^* = \left\{ A = (a_1, a_2, \ldots, a_n) : 0 < a_k \le 1,\ \sum_{k=1}^{n} a_k = 1 \right\}$. (14)

Let a finite set of n input symbols

$X = \{x_1, x_2, \ldots, x_n\}$ (15)

be encoded using an alphabet of $D$ symbols; then it has been shown by Feinstein [13] that there is a uniquely decipherable code with lengths $N_1, N_2, \ldots, N_n$ if and only if the Kraft inequality holds; that is,

$\sum_{k=1}^{n} D^{-N_k} \le 1$, (16)

where $D$ is the size of the code alphabet.
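A small routine (ours, not Feinstein's) that tests condition (16) for a proposed set of lengths:

```python
def satisfies_kraft(lengths, D=2):
    """Check the Kraft inequality (16): sum_k D^(-N_k) <= 1."""
    return sum(D ** (-N) for N in lengths) <= 1.0

print(satisfies_kraft([1, 2, 3, 3]))  # True:  1/2 + 1/4 + 1/8 + 1/8 = 1
print(satisfies_kraft([1, 1, 2]))     # False: 1/2 + 1/2 + 1/4 > 1
```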

Furthermore, if

$L = \sum_{k=1}^{n} N_k a_k$ (17)

is the average codeword length, then for a code satisfying (16), the inequality

$L \ge H(A)$ (18)

is also fulfilled and equality holds if and only if

$N_k = -\log_D a_k \quad (k = 1, \ldots, n)$, (19)

and that, by suitably encoding long sequences of source symbols, the average length can be made arbitrarily close to $H(A)$ (see Feinstein [13]). This is Shannon's noiseless coding theorem.
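The theorem can be exercised numerically. The sketch below (our illustration) uses the integer lengths $N_k = \lceil -\log_D a_k \rceil$, a standard rounding of (19), and confirms (16) and (18) together with the unit slack $L < H(A) + 1$:

```python
import math

A, D = [0.4, 0.3, 0.2, 0.1], 2
N = [math.ceil(-math.log(p, D)) for p in A]  # integer Shannon lengths, cf. (19)
L = sum(n * p for n, p in zip(N, A))         # average codeword length (17)
H = -sum(p * math.log(p, D) for p in A)      # Shannon entropy (4)
assert sum(D ** (-n) for n in N) <= 1.0      # Kraft inequality (16) holds
print(H <= L < H + 1)                        # True: (18) plus the unit slack
```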

By considering Rényi's entropy (see, e.g., [14]), a coding theorem analogous to the noiseless coding theorem above was established by Campbell [15], who obtained bounds for his mean length in terms of

$H_{\xi}(A) = \dfrac{1}{1-\xi} \log_D \sum_{k=1}^{n} a_k^{\xi}, \quad \xi > 0 \ (\xi \ne 1)$. (20)
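A direct transcription of (20) (our helper, with $D = 2$ for illustration; Rényi's entropy is non-increasing in the order $\xi$):

```python
import math

def renyi_entropy(a, xi, D=2):
    """Renyi entropy (20): (1 / (1 - xi)) * log_D sum_k a_k^xi, xi > 0, xi != 1."""
    return math.log(sum(p ** xi for p in a), D) / (1.0 - xi)

A = [0.4, 0.3, 0.2, 0.1]
print(renyi_entropy(A, 0.5))  # larger value at small order
print(renyi_entropy(A, 2.0))  # smaller value at large order
```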

Kieffer [16] defined a class of decision rules and showed that $H_{\xi}(A)$ is the best decision rule for deciding which of two sources can be coded with smaller expected cost of sequences of length $N$ as $N \to \infty$, where the cost of encoding a sequence is assumed to be a function of its length only. Further, Jelinek [17] showed that coding with respect to Campbell's mean length is useful in minimizing the problem of buffer overflow, which occurs when the source symbols are produced at a fixed rate and the codewords are stored temporarily in a finite buffer. Concerning Campbell's mean length, the reader may consult [15].

It may be seen that the mean codeword length (17) was generalized parametrically by Campbell [15], and the bounds of the generalized length were studied in terms of generalized measures of entropy. Here we give another generalization of (17) and study its bounds in terms of the generalized entropy of order $\xi$.

Generalized coding theorems obtained by considering different information measures under the condition of unique decipherability have been investigated by several authors; see, for instance, [6, 13, 18].

An investigation is carried out concerning discrete memoryless sources possessing an additional parameter $\xi$, which seems to be significant in problems of storage and transmission (see [9, 16-18]).

In this section we study a coding theorem by considering a new information measure depending on a parameter. Our motivation is, among other things, that this quantity generalizes some information measures already existing in the literature, such as Arndt's [19] entropy, which is used in physics.

Definition 1. Let $n \in \mathbb{N}$ and $\xi > 0$ ($\xi \ne 1$) be arbitrarily fixed. Then the mean length $L(\xi)$ corresponding to the generalized information measure $H(A;\xi)$ is given by the formula

[mathematical expression not reproducible], (21)

where $A = (a_1, \ldots, a_n) \in \Delta_n^*$ and $D, N_1, N_2, \ldots, N_n$ are positive integers such that

$\sum_{k=1}^{n} a_k^{\xi-1} D^{-N_k} \le 1$. (22)

Since (22) reduces to the Kraft inequality (16) when $\xi = 1$, it is called the generalized Kraft inequality, and codes obtained under this generalized inequality are called personal codes.
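Assuming the form of (22) used here, the optimal lengths (24) below, rounded up to integers as in (32) of the proof of Theorem 2, satisfy the generalized Kraft inequality; in the sketch the sample distribution, $\xi$, and $D = 2$ are our illustrative choices:

```python
import math

A, xi, D = [0.4, 0.3, 0.2, 0.1], 1.5, 2
S = sum(p ** xi for p in A)                      # sum_k a_k^xi
N = [math.ceil(-math.log(p / S, D)) for p in A]  # integer lengths, cf. (24) and (32)
lhs = sum(p ** (xi - 1) * D ** (-n) for p, n in zip(A, N))
print(N, lhs <= 1.0)  # True: generalized Kraft inequality (22) holds
```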

Theorem 2. Let $n \in \mathbb{N}$ and $\xi > 1$ be arbitrarily fixed. Then there exist codeword lengths $N_1, \ldots, N_n$ such that

$H(A;\xi) \le L(\xi) < D^{1-\xi} H(A;\xi) + \dfrac{D^{1-\xi} - 1}{1 - \xi}$ (23)

holds under condition (22), with equality if and only if

$N_k = -\log_D \left( \dfrac{a_k}{\sum_{k=1}^{n} a_k^{\xi}} \right), \quad k = 1, 2, \ldots, n$, (24)

where H (A; [xi]) and L([xi]) are given by (3) and (21), respectively.

Proof. First we prove the lower bound for $L(\xi)$.

By the reverse Hölder inequality, if $n = 2, 3, \ldots$, $\gamma > 1$, and $x_1, \ldots, x_n, y_1, \ldots, y_n$ are positive real numbers, then

$\sum_{k=1}^{n} x_k y_k \ge \left( \sum_{k=1}^{n} x_k^{1/\gamma} \right)^{\gamma} \left( \sum_{k=1}^{n} y_k^{1/(1-\gamma)} \right)^{1-\gamma}$. (25)

Let [mathematical expression not reproducible].

Putting these values into (25), we get

[mathematical expression not reproducible], (26)

where we have also used (22). This implies that

[mathematical expression not reproducible]. (27)

For [xi] > 1, (27) becomes

[mathematical expression not reproducible]; (28)

using (28) and the fact that $\xi > 1$, we get

$H(A;\xi) \le L(\xi)$. (29)

Substituting (24) and simplifying, we get

[mathematical expression not reproducible]. (30)

This implies

[mathematical expression not reproducible], (31)

which gives $L(\xi) = H(A;\xi)$; thus the equality sign holds in (29).

Now we prove the upper bound for $L(\xi)$ in (23).

We choose the codeword lengths $N_k$, $k = 1, \ldots, n$, in such a way that

$-\log_D \left( \dfrac{a_k}{\sum_{k=1}^{n} a_k^{\xi}} \right) \le N_k < -\log_D \left( \dfrac{a_k}{\sum_{k=1}^{n} a_k^{\xi}} \right) + 1$ (32)

is fulfilled for all k = 1, ..., n.

From the left inequality of (32), we have

$D^{-N_k} \le \dfrac{a_k}{\sum_{k=1}^{n} a_k^{\xi}}$; (33)

multiplying both sides by $a_k^{\xi-1}$ and then summing over $k$, we get the generalized inequality (22). So there exists a generalized code with codeword lengths $N_k$, $k = 1, \ldots, n$.

Since $\xi > 1$, the right-hand inequality of (32) can be written as

$D^{N_k(\xi-1)} < D^{\xi-1} \left( \dfrac{a_k}{\sum_{k=1}^{n} a_k^{\xi}} \right)^{1-\xi}$. (34)

Multiplying (34) throughout by [mathematical expression not reproducible] and then summing from $k = 1$ to $n$, we obtain the inequality

[mathematical expression not reproducible]. (35)

Since $1 - \xi < 0$ for $\xi > 1$, we get from (35) the upper bound in (23). This completes the proof.

Particular case. For $\xi \to 1$, (23) becomes

$\dfrac{H(A)}{\log D} \le L < \dfrac{H(A)}{\log D} + 1$, (36)

which is Shannon's [11] classical noiseless coding theorem, and (24) reduces to

$N_k = -\log_D a_k, \quad k = 1, 2, \ldots, n$, (37)

where H(A) and L are given by (4) and (17), respectively.

4. Conclusion

In this paper we proved a generalization of Shannon's inequality for the entropy of order $\xi$ with the help of the Hölder inequality, and we established a noiseless coding theorem. Considering Theorem 2, we remark that the optimal code lengths depend on $\xi$, in contrast with the optimal code lengths of Shannon, which do not depend on a parameter. However, it is possible to prove a coding theorem with respect to (3) such that the optimal code lengths are identical to those of Shannon.

https://doi.org/10.1155/2018/2861612

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] J. Havrda and F. S. Charvat, "Quantification method of classification processes. Concept of structural a-entropy," Kybernetika, vol. 3, pp. 30-35, 1967.

[2] Z. Daroczy, "Generalized information functions," Information and Control, vol. 16, no. 1, pp. 36-51, 1970.

[3] M. Behara and P. Nath, "Additive and non-additive entropies of finite measurable partitions," in Probability and Information Theory II, pp. 102-138, Springer-Verlag, 1970.

[4] I. Vajda, "Axioms for a-entropy of a generalized probability scheme," Kybernetika, vol. 4, pp. 105-112, 1968.

[5] B. D. Sharma and D. P. Mittal, "New nonadditive measures of entropy for discrete probability distributions," Journal of Mathematical Sciences, vol. 10, pp. 28-40, 1975.

[6] R. Pereira and Gur Dial, "Pseudogeneralization of Shannon inequality for Mittal's entropy and its application in coding theory," Kybernetika, vol. 20, no. 1, pp. 73-77, 1984.

[7] Gur Dial, "On a coding theorem connected with entropy of order [alpha] and type [beta]," Information Sciences, vol. 30, no. 1, pp. 55-65, 1983.

[8] S. Kumar and A. Choudhary, "Some coding theorems on generalized Havrda-Charvat and Tsallis's entropy," Tamkang Journal of Mathematics, vol. 43, no. 3, pp. 437-444, 2012.

[9] L. Wondie and S. Kumar, "A joint representation of Renyi's and Tsalli's entropy with application in coding theory," International Journal of Mathematics and Mathematical Sciences, vol. 2017, Article ID 2683293, 5 pages, 2017.

[10] C. Tsallis, "Possible generalization of Boltzmann-Gibbs statistics," Journal of Statistical Physics, vol. 52, no. 1-2, pp. 479-487, 1988.

[11] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, no. 4, pp. 623-656, 1948.

[12] D. F. Kerridge, "Inaccuracy and inference," Journal of the Royal Statistical Society. Series B (Methodological), vol. 23, pp. 184-194, 1961.

[13] A. Feinstein, Foundations of Information Theory, McGraw-Hill, New York, NY, USA, 1956.

[14] A. Rényi, "On measures of entropy and information," in Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, pp. 547-561, University of California Press, 1961.

[15] L. L. Campbell, "A coding theorem and Renyi's entropy," Information and Control, vol. 8, no. 4, pp. 423-429, 1965.

[16] J. C. Kieffer, "Variable-length source coding with a cost depending only on the code word length," Information and Control, vol. 41, no. 2, pp. 136-146, 1979.

[17] F. Jelinek, "Buffer overflow in variable length coding of fixed rate sources," IEEE Transactions on Information Theory, vol. 14, no. 3, pp. 490-501, 1968.

[18] G. Longo, "A noiseless coding theorem for sources having utilities," SIAM Journal on Applied Mathematics, vol. 30, no. 4, pp. 739-748, 1976.

[19] C. Arndt, Information Measure-Information and Its Description in Science and Engineering, Springer, Berlin, Germany, 2001.

Litegebe Wondie and Satish Kumar

Department of Mathematics, College of Natural and Computational Science, University of Gondar, Gondar, Ethiopia

Correspondence should be addressed to Satish Kumar; drsatish74@rediffmail.com

Received 27 December 2017; Accepted 27 February 2018; Published 3 April 2018

Academic Editor: A. Zayed
