
Decoding of Decode and Forward (DF) Relay Protocol using Min-Sum Based Low Density Parity Check (LDPC) System.

1. Introduction

Malaysia's mobile market has shown remarkable growth over the years. Next generation services have since been rolled out and have started to make a major impact on the market. Mobile phone usage keeps increasing and people are becoming more mobile [1]. According to the World Bank, Malaysia leads Indonesia, Thailand and even the United States with 140% mobile penetration, which means that 47% of Malaysians own more than one mobile phone. New data transfer applications, such as downloading information from the internet or sending video, have emerged in mobile phone technology, and these demand higher data rates, high speed data transfer capability and lower error rates [2]. In mid-2015, among the major mobile operators in Malaysia, Celcom held the largest share of cellular subscriptions with about 12.3 million subscribers (31.3%), followed by Maxis with 31% and Digi with 30%. By November 2015, the number of internet users in Malaysia had reached 20.6 million [3].

Fifth generation technologies beyond the current wireless communication networks are required to cater for the tremendous demands of the Internet of Things (IoT) era. Such demands include sensors embedded into security systems, automated door locks, health monitoring and mobile transportation. The number of IoT devices driving the immediate need for 5G technology is expected to grow to over 50 billion by the year 2020 [4]. However, fading in the wireless channel usually degrades the transmitted signal severely and prevents the overall system from attaining high data rates. This channel fading effect can be combated effectively by employing a diversity technique called cooperative communication, in which a virtual antenna array is formed through the cooperation of a number of distributed single-antenna terminals. The relay channel concept proposed by Van Der Meulen in 1971 [5], [6], on which cooperative communication is based, offers an efficient method to mitigate all the above factors while maintaining the reliability of communications, particularly for smaller and lighter devices such as mobile phones. In this design, a relay is placed between the transmitter and the receiver, working as a virtual antenna array.

Cooperative communication has emerged as a considerable area of research and has become a viable option for meeting the requirements of next generation communication systems. Furthermore, various error control coding techniques can be adapted to the cooperative communication environment. Sendonaris et al., pioneers in the area, introduced cooperative communication in 2003 [7], [8]. In 2006, Hunter et al. [9] integrated convolutional codes into cooperative communication, a scheme called coded cooperation. Further improvements on the existing works led to embedding Turbo codes [10] in the system in order to achieve higher coding gain, and a number of studies [11]-[14] were performed using Turbo codes. Subsequently, an advanced channel coding technique, the LDPC code, emerged as an efficient solution by exploiting its superior data transfer performance over noisy relay channels. Among all error control codes, LDPC codes have shown the greatest potential as error correcting codes because they approach the Shannon channel capacity [15]-[17]. The LDPC code was first adopted for the relay channel by Khojastepour et al. [17].

The aim of this paper is to review the literature on min-sum based LDPC decoding methods for the decode and forward (DF) relay protocol and to describe the current methods available in the min-sum based LDPC decoding process. More specifically, the paper endeavors to (1) propose an LDPC based system model for the decode and forward protocol consisting of source, relay and destination components, (2) investigate the existing min-sum based LDPC decoding approaches, (3) compare the performances of min-sum based LDPC decoding systems, (4) identify the potential of the reported min-sum based methods for future work, and (5) recommend other min-sum based analysis methods that could be employed to achieve an optimum tradeoff between the complexity and error performance of the existing works.

2. Related work

A relay code based on spatially coupled low-density parity-check (SC-LDPC) codes over binary input additive white Gaussian noise (BIAWGN) channels was developed by Md. Noor-A-Rahim et al. [18]. The newly devised code reflected a low complexity density evolution analysis that weighs in the variable nodes that have experienced non-uniform SNRs. In order to optimize the new code, a low complexity optimization step was performed prior to implementation in the relay for a system comprising a half-duplex relay. On the other hand, a decoding structure alteration was carried out by Sreemohan P. V. et al. [19] for variable and check node architectures based on a min-sum implementation meant for Field Programmable Gate Arrays (FPGA). The variable node architecture manipulated the 4-bit quantized node data in order to minimize overflow errors, whereas the check node recovered the performance lost by the original min-sum algorithm. The use of hardware resources was decreased thanks to the multiplexed storage structure of the node data.

Meanwhile, Huang Chang Lee et al. [20] asserted the significance of removing the first minimum value of the check node from the variable node memory in their altered min-sum. At the recovery of the check-to-variable message that requires the first minimum value, a value deduced from the stored second minimum is applied instead: if the second minimum exceeds a pre-determined threshold, a small non-zero positive value is used in place of the first minimum, and zero otherwise. Moreover, the algorithm represents the first and second minimum values with 4-bit quantization for the integer part and 1-bit quantization for the fractional part. This particular algorithm has been applied in a CMOS implementation.

Next, optimized offset and scaling values for LDPC decoders were investigated by Seho Myung et al. [21] for the Advanced Television Systems Committee (ATSC) 3.0 LDPC codes via elaborate computer simulation. It was reported that although the offset min-sum has slightly higher complexity than the normalized min-sum and the original min-sum, it offered exceptional and stable coding performance.

Meanwhile, the Adaptive Forced Convergence (AFC) algorithm was developed by Jeong Hyeon Bae et al. [22] in order to decrease the computational complexity of check nodes by employing a single value as its adaptive threshold. This AFC algorithm applied the function of the altered check node and adjacent variable nodes to disable check nodes. Hence, the AFC decreased the computational complexity of both check and variable nodes by dismissing the threshold value of the check node. Furthermore, the amount of disabled variable nodes increased rapidly when the threshold value of the variable nodes was lowered by the AFC.

Additionally, S. Scholl et al. [23] developed a novel hybrid technique by amalgamating a conventional min-sum decoder with an advanced decoding scheme, called 'improved saturated min-sum decoding', that served as an 'afterburner' solely to improve the frame error rate. The proposed method only operates upon failure of the decoder, thus greatly decreasing its complexity. Parallel and serial architectures for the proposed method were implemented as an Application Specific Integrated Circuit (ASIC), where no impact was noted on the communication performance, but the architectures did affect area, latency and efficiency.

The Set Min-Sum (SMS) decoding algorithm was proposed by Liyuan Song et al. [24] to reduce the complexity of non-binary LDPC decoding by set partitioning. In the enhanced check node of the algorithm, partitioned sets of input vectors ensure that the various components of the virtual matrix use mixed computational strategies. Exceptional computational efficiency was achieved by devising those strategies based on accurate probabilities for the components. Simulation results showed a reduction in check node complexity at the cost of a drop in performance.

A low complexity min-sum algorithm, developed by Michaelraj Kingston Roberts et al. [25] to decode irregular LDPC codes, displayed vital enhancement in error correction without complicating the hardware, by applying optimized and adaptive normalization factors to the log-likelihood ratio and extrinsic information data bits, respectively. The proposed algorithm employed a non-uniform 6-bit quantization scheme in order to decrease the effects of finite word length on high precision soft information. The employed quantization scheme also reduced hardware complexity by minimizing the memory block that stores intrinsic data, thus decreasing memory accesses for data bits per iteration.

On top of that, Chen Pei Song [26] developed a novel partially-stopped probabilistic min-sum algorithm (PS-PMSA) to minimize power consumption in check node units. The PS-PMSA eliminates unimportant data in variable nodes so as to decrease check node computation, with negligible degradation of error correction and reduced area overhead. The PS-PMSA performs exceptionally well with the parity check equation (PCE) scheme, which discards convergent iterations.

Next, a multiple codeword flooded min-sum decoding method was developed by Sergiu Nimara et al. [27] to process data from several codewords via parallel processing units for check and variable nodes. The number of variable node units is equal to the number of columns in the base matrix multiplied by the number of codewords processed, while the number of check node units is the number of rows under the same condition. The Block RAM (BRAM) memory usage and the decoding throughput were both increased by the designed decoding system.

In addition, Kang Zhao et al. [28] proposed a Generalized Mutual Information (GMI) based metric for the scaling search in Flooding Structure Variable min-sum (FS-VMS), based on two concepts: 1) the scaling factors differ across iterations, and 2) the scaling factors differ across check nodes of different degrees. The GMI-based metric of the FS-VMS scaling search was then applied to the scaling search for Horizontal Shuffled Structure Variable min-sum (HSS-VMS), which is based on the Quasi Cyclic LDPC (QC-LDPC) structure. Taking into account the special and simple QC structure, as well as the parallelism features of HSS, the identically independently distributed (i.i.d.) assumption was redefined for every parallel function in HSS-VMS. Furthermore, improved GMI-based scaling search formulas were proposed for an HSS-VMS parallel degree equal to or greater than the cyclic block size of the QC-LDPC code.

In an attempt to determine the reliability of the LLRs, Florence Alberge [29] developed a mutual information-driven rule for the scaling factor applied to the extrinsic elements. The variable scaling factors were adapted to both the mutual information and the check node degrees. The mutual information between the extrinsic elements was also applied as an early stopping criterion or to send data back to the transmitter through a feedback path. Moreover, this approach can be used for many purposes, for example, to protect the transmission. The suggested approach offered improved BER and significantly decreased the number of iterations.

A serial reliability-based iterative min-sum decoding (RBI-MSD) method was developed by Shijie Ouyang et al. [30] for LDPC-coded MLC flash memory systems in order to obtain the required trade-off between complexity and performance. To improve the error performance and accelerate the convergence speed of the serial RBI-MSD algorithm, a novel LLR-distribution-based non-uniform quantization technique was proposed. This non-uniform quantization approach exploits the distribution features of the initial channel LLRs in multi-level cell (MLC) flash memory. Simulation results showed that the suggested technique displays exceptional error performance and can be applied to other RBI decoding algorithms. The excellent convergence speed is also attractive for future NAND flash memory applications.

Next, a method termed non-surjective finite alphabet iterative decoders (NS-FAIDs) was developed by Thien Truong Nguyen-Ly et al. [31] to exploit the robustness of message-passing LDPC decoders to inaccuracies in the computation of the exchanged messages, yielding a unified model for several designs reported in the literature. The NS-FAIDs were optimized via density evolution for both regular and irregular LDPC codes, offering various trade-offs between decoding performance and hardware complexity. In order to increase throughput, two hardware architectures were applied, featuring increased hardware parallelism and pipelining, and the MS and NS-FAID decoding kernels were integrated into both. ASIC synthesis results showed improved throughput and area efficiency of the NS-FAID method compared to the MS decoder, with insignificant degradation in decoding performance.

In addition, an unrolled full-parallel architecture based on serial transfer of decoding data between check and variable nodes was proposed by Reza Ghanaatian et al. [32] to enable ultra-high throughput LDPC decoders for codes with large node degrees by minimizing interconnection wires. In order to decrease the required quantization bit-width, a finite-alphabet LDPC decoding algorithm was applied, which also raised the throughput otherwise restricted by the serial data transfer of the suggested architecture. The proposed algorithm was implemented using LUTs rather than adders in the VNs, whereas the CNs were kept unchanged compared to MS decoding. The LUT-based serial message-transfer decoder provides more area efficiency, higher throughput, and twice the energy efficiency compared to a serial data-transfer architecture with an MS decoder. Besides, the proposed design adopted a linear floor plan for the unrolled full-parallel architecture, together with an efficient pseudo-hierarchical flow that permits high speed physical implementation of the proposed decoder. Through the integration of the above mentioned methods, the proposed approach yields the fastest fully placed-and-routed LDPC decoder in the literature.

3. LDPC solution for future decode and forward relay protocol

The choice of an appropriate error control code is an important part of overcoming the decoding problem at the relay terminal in the DF relay protocol [33]. No particular coding technique is universally best; the best option depends on a number of parameters such as BER, code rate, coding gain, maximum block length and decoding complexity. Turbo codes perform better than block codes and convolutional codes at low code rates (≤ 1/2). However, for large information lengths, LDPC codes achieve better performance than Turbo codes at high code rates. Table 1 shows the performance comparison of error control codes over the years.

The LDPC code is a linear block error correcting code originally designed by Robert Gallager in 1963 [34], but it was soon forgotten by the scientific world for over 30 years because the microelectronics technology of that time was not effective enough for its implementation. In 1996, David MacKay and Radford Neal [35] rediscovered LDPC codes, as modern communications demanded operation very near the Shannon theoretical limit of channel capacity [36].

The LDPC decoding algorithm's suitability for parallel implementation and its lower computational complexity [37] for long block codes make it a suitable candidate for most hardware products compared with other error control codes, especially Turbo codes, and it has thus attracted significant attention from researchers.

On the other hand, a properly designed LDPC decoding algorithm, such as the belief propagation algorithm, can achieve almost error free performance, making LDPC an attractive choice for most hardware applications. Recently, LDPC codes [38] have been adopted in numerous applications, including Digital Video Broadcasting (DVB-S2), WiMAX (Worldwide Interoperability for Microwave Access), space and satellite communications, mobile communication, optical communication, as well as storage systems such as hard disk drives and compact disks.

4. LDPC based system model for DF protocol

The relaying protocol is the fundamental structure of a cooperative communication system. The relay node performs two main kinds of message forwarding strategy, which can be grouped as regenerative and non-regenerative depending on the applied signal processing method. The most common non-regenerative protocol is Amplify and Forward (AF), in which the signal received by the relay still contains noise and propagates errors to the receiver. The most common regenerative protocol is Decode and Forward (DF), in which the relay decodes and re-encodes the received signal before forwarding it to the destination. Normally, DF obtains better performance than AF if designed appropriately, at the cost of higher complexity. DF can produce both diversity gain and coding gain, and DF strategies have received a lot of interest recently as the most practical relay strategy. A DF relay strategy can be equipped with an error control coding technique, and LDPC codes have shown good performance, as mentioned above, among coded DF protocols at the relay node.

There have been few efforts to formulate an ideal relay protocol that provides significantly higher rate, low computational complexity and better error performance. The Hungarian Algorithm was adopted by Muhammad Abrar et al. [39] in a new low complexity iterative Resource Block (RB) pairing and allocation scheme that leads to low computational complexity, making it adequate for solving optimization issues in relay networks. Moreover, a genetic algorithm was developed by Said Nouh et al. [40] to decode systematic block codes with a low decoding complexity threshold, as well as via polynomial encoding for cyclic codes. Many published studies describe methods developed for the coded DF protocol, but the existing works in [33], [41], [42] employ only some of the components, particularly when using LDPC codes. In this paper, we propose a comprehensive system model using LDPC codes for DF relay protocol analysis. The development of this model is necessary to help understand how each component of the entire process fits together; it is thus crucial for gathering the requirements of the system, identifying the internal factors influencing the system, and visualizing the interaction among the required components.

The model is devised from the existing coded DF works reported in the literature. The proposed model is displayed in Figure 1. It consists of three nodes, namely a source node, a relay node and a destination node. Each node comprises a number of components. The message bits at the source node are processed by the encoder and modulation components. At the relay node, the message received from the source node through the source-relay channel goes through demodulation, followed by decoding, encoding and modulation. The signals received from the source and relay nodes through the source-destination and relay-destination channels are combined before moving to the demodulation and decoding parts, which produce the final output of the system at the destination. The following subsections elaborate on each of the components of the proposed DF relay model using LDPC codes.

4.1. Channel model

In a wireless communication environment, the received signals usually differ in amplitude and phase from the transmitted signals. This is caused by many factors, which can be classified into two groups: large-scale propagation effects and small-scale propagation effects. Large-scale propagation effects are caused either by path loss or by shadowing. Path loss describes the dissipation of transmit power over distance and results in much lower power at the received signal. The shadowing phenomenon is characterized by variation of the received signal strength measured at different locations, even at the same distance between the transmitter and receiver. This variation is due to the effect of large obstructions such as buildings, intervening terrain, and vegetation.

On the other hand, small-scale propagation effects refer to large changes in the amplitude and phase of the signal caused by a small change in the location of the transmitter or receiver. This effect is due to constructive and destructive interference of the transmitted signal, and occurs at the very high carrier frequencies used for cellular systems, such as 900 MHz or 1.9 GHz. There are many models that describe the phenomenon of small-scale fading; among them, the Rayleigh fading, Ricean fading, additive white Gaussian noise (AWGN) and Nakagami fading models are the most widely used. Rayleigh fading is primarily caused by multipath reception. It is a statistical model for the effect of a propagation environment on a radio signal, and a reasonable model for tropospheric and ionospheric signal propagation as well as for the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no line of sight between transmitter and receiver. The Ricean fading model is similar to the Rayleigh fading model, except that in Ricean fading a strong dominant component is present; this dominant component is a stationary signal commonly known as the Line of Sight (LOS) component. AWGN is the simplest radio communication environment in which a wireless communication system, local positioning system, or time-of-flight proximity detector has to operate, and it is commonly used to simulate the background noise of the channel. The mathematical expression for the received signal r(t) is:

$r(t) = s(t) + n(t)$ (1)

The received signal passes through the AWGN channel, where s(t) is the transmitted signal and n(t) is the background noise. An AWGN channel adds white Gaussian noise to the signal that passes through it; it is the basic communication channel model and is used as a standard channel model. The Nakagami-m channel distribution has gained a lot of attention in the modeling of physical fading radio channels due to its ability to model a wider class of fading channel conditions and to fit empirical data well. Nakagami-m is more flexible and can model fading conditions from severe to moderate.
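For illustration, the minimal numpy sketch below generates received samples for the AWGN model of equation (1) and for a flat Rayleigh fading channel; the function names, the fixed random seed and the unit average power fading normalization are our own assumptions, not taken from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative fixed seed

def awgn_channel(s, sigma2):
    # Eq. (1): r = s + n, where n is zero-mean white Gaussian noise
    # with variance sigma^2 (the standard AWGN channel model).
    s = np.asarray(s, dtype=float)
    return s + rng.normal(0.0, np.sqrt(sigma2), size=s.shape)

def rayleigh_channel(s, sigma2):
    # Flat Rayleigh fading (no line of sight): scale the signal by |h|,
    # with h complex Gaussian of unit average power, then add AWGN.
    s = np.asarray(s, dtype=float)
    h = (rng.normal(size=s.shape) + 1j * rng.normal(size=s.shape)) / np.sqrt(2)
    return np.abs(h) * s + rng.normal(0.0, np.sqrt(sigma2), size=s.shape)
```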

4.2. Modulation

The easiest way to send a low frequency message signal over a long distance is to vary a transmitted carrier signal according to the information in the message signal. This alteration is known as modulation. The receiver then recovers the original signal through a process called demodulation. Modulation techniques are expected to have three positive properties:

1. Good bit error rate (BER) performance: Modulation schemes should achieve low bit error rate in the presence of fading, interference and thermal noise.

2. Spectral Efficiency: The modulated signal's power spectral density should have a narrow main lobe and fast roll-off side lobes. Spectral efficiency is measured in units of bit/sec/Hz.

3. Power Efficiency: Power saving is one of the critical design challenges in portable and mobile applications. Nonlinear amplifiers are usually used to increase power efficiency. However, nonlinearity may degrade the bit error rate performance of some modulation schemes. Constant envelope modulation techniques are used to prevent the growth of spectral side lobes during amplification.

1. Digital Modulation

Compared to analog modulation, digital modulation schemes transform digital signals into waveforms that are compatible with the properties of the communications channel. Schemes that use a constant amplitude carrier and carry the information in phase or frequency variations are called phase shift keying (PSK) and frequency shift keying (FSK), respectively. A major transition is from simple types of modulation, such as amplitude modulation (AM) and frequency modulation (FM), to more complicated digital modulation techniques such as quadrature phase shift keying (QPSK), amplitude shift keying (ASK), frequency shift keying (FSK), minimum shift keying (MSK) and quadrature amplitude modulation (QAM).

QAM is a method for transmitting two separate channels of information using a single carrier. QAM is both an analog and a digital modulation scheme: it conveys two message signals by modulating the amplitudes of two carrier waves, using the ASK digital modulation scheme or the AM analog modulation scheme. 64-QAM is the same as 16-QAM except that there are 64 possible signal combinations, with each symbol representing six bits ($2^6 = 64$). 64-QAM is a complex modulation technique but gives high efficiency. This digital modulation technique is primarily used for sending data downstream over a coaxial cable network; it is very efficient, supporting up to 28 Mbps peak data transfer rates over a single 6 MHz channel. However, its susceptibility to interfering signals makes it unsuitable for noisy upstream transmissions.

2. Bit Error Rate (BER)

The BER measures the performance of a digital link and is calculated as the number of bit errors received divided by the total number of bits transmitted from transmitter to receiver during data transmission.

$\mathrm{BER} = \dfrac{\text{number of bits in error}}{\text{total number of bits received}}$ (2)
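As a quick illustration of equation (2), the following Python snippet (the function name is ours) counts differing bits between transmitted and received bit vectors:

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    # Eq. (2): number of differing bits divided by the total bits received.
    tx_bits, rx_bits = np.asarray(tx_bits), np.asarray(rx_bits)
    return np.count_nonzero(tx_bits != rx_bits) / rx_bits.size

print(bit_error_rate([0, 1, 1, 0, 1], [0, 1, 0, 0, 0]))  # 2 errors in 5 bits -> 0.4
```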

In digital transmission, the data stream sent over the communication channel contains a number of bit errors due to noise, interference, distortion or bit synchronization errors, all of which affect the BER performance. BER performance is also reduced by quantization errors arising from incorrect or ambiguous reconstruction of the digital waveform, and it is further affected by the accuracy of the signal modulation process, filtering and the noise bandwidth.

BER can also be defined in terms of probability of error (POE) as represented in equation (3).

$\mathrm{POE} = \dfrac{1}{2}\left(1 - \operatorname{erf}\sqrt{E_b/N_0}\right)$ (3)

where erf is the error function, $E_b$ is the energy in one bit and $N_0$ is the noise power spectral density, i.e., the noise power in a 1 Hz bandwidth. The error function value varies for different types of modulation. The energy per bit $E_b$ can be obtained by dividing the carrier power by the bit rate, giving units of joules. $E_b/N_0$ is a dimensionless ratio, a form of signal to noise ratio.
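Equation (3) can be evaluated directly with the Python standard library, using the identity $\frac{1}{2}(1 - \operatorname{erf}(x)) = \frac{1}{2}\operatorname{erfc}(x)$; the 6 dB operating point below is only an illustrative value:

```python
import math

def probability_of_error(ebn0_linear):
    # Eq. (3): POE = (1/2)(1 - erf(sqrt(Eb/N0))), i.e. (1/2) erfc(sqrt(Eb/N0)).
    return 0.5 * math.erfc(math.sqrt(ebn0_linear))

ebn0_db = 6.0                                       # illustrative Eb/N0 in dB
print(probability_of_error(10 ** (ebn0_db / 10)))   # roughly 2.4e-3
```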

3. Signal to Noise Ratio (SNR)

SNR is the ratio of the received signal strength to the noise strength in the frequency range of operation. It is an important parameter of the physical layer of wireless Local Area Networks (LAN). BER is inversely related to SNR: low SNR causes high BER, which in turn increases packet loss, increases delay and decreases throughput. In a multichannel environment, the relation between SNR and BER is not easy to determine. SNR is an indicator commonly used to evaluate the quality of a communication link, and it is measured in decibels as represented by equation (4).

$\mathrm{SNR} = 10\log_{10}\left(\dfrac{\text{signal power}}{\text{noise power}}\right)\ \mathrm{dB}$ (4)

4. $E_b/N_0$

The energy per bit to noise power spectral density ratio ($E_b/N_0$) is an important parameter in digital data transmission. It is a normalized signal to noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account. The bits in this context are transmitted data bits, which may include error correction information and other protocol overhead. In the context of forward error correction (FEC), $E_b/N_0$ refers to the energy per information bit and is used to relate the actual transmitted power to the noise.

5. LDPC decoding approach

The LDPC decoding scheme is an excellent error control code capable of mitigating error propagation in the relay channel of cooperative communication. However, the complexity of LDPC decoding is an important issue at the relay node, as there are normally stricter hardware and power constraints at the relay. Besides that, the throughput at the destination decreases because of the decoding delay at the relay. Generally, LDPC decoding algorithms can be categorized into soft decision algorithms, known as Belief Propagation (BP); hard decision algorithms, such as Bit Flipping (BF); and combinations of BP and BF known as hybrid algorithms.

The BP decoding algorithm achieves high performance but is constrained by its complexity, given the limited energy and computational resources of today's applications. In contrast, the BF decoding algorithm is simple to operate but suffers from poor performance. Due to the high computational complexity of the BP decoding algorithm, some published studies [37], [43], [44] combine the BP and BF algorithms into a single algorithm, known as a hybrid algorithm. Furthermore, a modification of BP known as the min-sum algorithm was developed to reduce the complexity [45]-[50]. The min-sum algorithm uses simple comparison and summation operations, finding only the two lowest reliability values at the check nodes [46]. Min-sum can significantly reduce the computational complexity of BP at the cost of a small performance loss. The following subsections further explain soft decision decoding and its low complexity modification known as the min-sum algorithm, along with its variants.

5.1 Soft Decision Decoding

Soft decision decoding propagates probabilities through the Tanner graph of the parity check matrix H, represented as a bipartite graph. Columns of the parity check matrix correspond to variable nodes (VN) and rows correspond to check nodes (CN): the variable nodes correspond to the N bits of the codeword and the check nodes to the M parity check constraints. Parity check matrices can be categorized into two types, regular and irregular. In a regular code, every row has the same number of 1's (row weight) and every column has the same number of 1's (column weight). Figure 2 presents an example parity check matrix H with 10 VNs and 5 CNs, with column weight 3 and row weight 6.

Edges in the graph connect variable nodes to check nodes and represent the nonzero entries of the H matrix: check node m is connected to variable node n if and only if $H_{mn} = 1$, i.e., if variable n participates in the m-th parity check constraint. The term "low density" conveys the fact that the fraction of nonzero entries in H is small; in particular, the number of nonzero entries grows only linearly with the block length N. The Tanner graph is thus a graphical representation of the parity check matrix, as shown in Figure 3.

Let C be a regular code of length N and dimension K whose parity check matrix H, with M = N - K rows and N columns, has constant column weight and constant row weight. $H_{mn}$ is the entry in the m-th row and n-th column of H. The set of bits participating in check m is denoted $N_m = \{n : H_{mn} = 1\}$, and the set of checks in which bit n participates is $M_n = \{m : H_{mn} = 1\}$.
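The sets $N_m$ and $M_n$ can be read off the nonzero pattern of H directly; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def tanner_index_sets(H):
    # N_m = {n : H[m, n] = 1}: the bits checked by check node m (rows of H).
    # M_n = {m : H[m, n] = 1}: the checks involving bit n (columns of H).
    N_m = [np.flatnonzero(row) for row in H]
    M_n = [np.flatnonzero(col) for col in H.T]
    return N_m, M_n
```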

Assume a codeword $c = [c_1, c_2, \ldots, c_N]^T$ [50]. Before transmission, it is mapped to a signal constellation (modulation) to obtain the vector $t = [t_1, t_2, \ldots, t_N]^T$, where

$t_n = 2c_n - 1$ (5)

which is transmitted through an AWGN channel with variance

$\sigma^2 = N_0/2$ (6)

The received vector is $r = [r_1, r_2, \ldots, r_N]^T$, where each received message $r_n$ is

$r_n = t_n + \nu_n$ (7)

Here $\nu_n$ is the additive white Gaussian noise (AWGN) with zero mean. Let the hard decision vector $z = [z_1, z_2, \ldots, z_N]^T$ be

$z_n = \operatorname{sgn}(r_n)$ (8)

where $\operatorname{sgn}(x) = +1$ for $x \ge 0$ and $-1$ otherwise.
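Equations (5)-(8) describe a complete transmit and hard decision chain, which the following minimal numpy sketch exercises; the fixed seed and the convention $\operatorname{sgn}(0) = +1$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)  # illustrative fixed seed

def transmit_and_hard_decide(c, N0):
    # Eq. (5): BPSK mapping t_n = 2*c_n - 1 (bits {0,1} -> symbols {-1,+1}).
    t = 2 * np.asarray(c) - 1
    # Eqs. (6)-(7): add AWGN with variance sigma^2 = N0/2.
    r = t + rng.normal(0.0, np.sqrt(N0 / 2), size=t.shape)
    # Eq. (8): hard decisions z_n = sgn(r_n), with sgn(0) taken as +1.
    z = np.where(r >= 0, 1, -1)
    return r, z
```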

Notation:

$L_n$: a priori information of bit node n

$\bar{L}_n$: a posteriori information of bit node n

$E_{m,n}$: the check-to-bit message from check node m to bit node n

$F_{n,m}$: the bit-to-check message from bit node n to check node m

1. Sum-Product Algorithm (SPA)

The Sum Product Algorithm can be organized into the following four steps [50]:

Step 1: Initialization

A priori information: $L_n = -r_n$

Bit-to-check message initialization: $F_{n,m} = L_n$

Step 2: Horizontal Step

Check node Processing:

$E_{m,n} = \log\dfrac{1 + \prod_{n' \in N(m)\setminus n}\tanh\left(F_{n',m}/2\right)}{1 - \prod_{n' \in N(m)\setminus n}\tanh\left(F_{n',m}/2\right)}$ (9)

Step 3: Vertical Step

A posteriori information:

$\bar{L}_n = L_n + \sum_{m \in M(n)} E_{m,n}$ (10)

Bit node Processing:

$F_{n,m} = \bar{L}_n - E_{m,n} = L_n + \sum_{m' \in M(n)\setminus m} E_{m',n}$ (11)

Step 4: Decoding Decision

If $\bar{L}_n > 0$, set $\hat{c}_n = 0$; otherwise $\hat{c}_n = 1$.

If $H\hat{c} = 0$, the algorithm stops and $\hat{c}$ is considered a valid decoding result. Otherwise, it proceeds to the next iteration until the number of iterations reaches its maximum.
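The four steps above can be turned into a short simulation. The following is a minimal numpy sketch under the text's sign conventions ($L_n = -r_n$; decide $\hat{c}_n = 0$ when $\bar{L}_n > 0$); the clipping constant, default iteration limit and function name are our own illustrative choices rather than anything prescribed by [50]:

```python
import numpy as np

def spa_decode(H, r, max_iter=50):
    # H: (M, N) binary parity check matrix; r: received real-valued vector.
    M, N = H.shape
    checks = [np.flatnonzero(H[m]) for m in range(M)]   # N(m) for each check
    L = -np.asarray(r, dtype=float)      # Step 1: a priori information L_n = -r_n
    F = np.tile(L, (M, 1))               # bit-to-check messages, F[m, n] = L_n
    E = np.zeros((M, N))                 # check-to-bit messages
    c_hat = (L <= 0).astype(int)
    for _ in range(max_iter):
        for m in range(M):               # Step 2 (horizontal): eq. (9)
            t = np.tanh(F[m, checks[m]] / 2)
            for i, n in enumerate(checks[m]):
                p = np.prod(np.delete(t, i))           # product excluding bit n
                E[m, n] = 2 * np.arctanh(np.clip(p, -0.999999, 0.999999))
        Lbar = L + (H * E).sum(axis=0)   # Step 3 (vertical): eq. (10)
        F = Lbar - E                     # eq. (11): exclude own check's message
        c_hat = (Lbar <= 0).astype(int)  # Step 4: Lbar_n > 0 -> c_n = 0
        if not (H @ c_hat % 2).any():    # syndrome H c = 0 -> valid codeword
            return c_hat
    return c_hat
```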

2. Min-Sum Algorithm

The min-sum algorithm is a modification of the sum-product algorithm that reduces the complexity of the decoder implementation.

This can be done by rewriting the horizontal step [50]:

$E_{m,n} = \log\dfrac{1 + \prod_{n' \in N(m)\setminus n}\tanh\left(F_{n',m}/2\right)}{1 - \prod_{n' \in N(m)\setminus n}\tanh\left(F_{n',m}/2\right)}$ (12)

using the relationship

$2\tanh^{-1} p = \log\dfrac{1+p}{1-p}$ (13)

Equation (12) can then be rewritten as

$E_{m,n} = 2\tanh^{-1}\left(\prod_{n' \in N(m)\setminus n}\tanh\left(F_{n',m}/2\right)\right)$ (14)

Equation (14) can be further approximated, by keeping only the smallest incoming magnitude and the product of the incoming signs, as

$E_{m,n} \approx \left(\prod_{n' \in N(m)\setminus n}\operatorname{sgn}\left(F_{n',m}\right)\right)\min_{n' \in N(m)\setminus n}\left|F_{n',m}\right|$ (15)
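For a single check node, equation (15) amounts to a sign product and a leave-one-out minimum over the incoming messages. The following minimal sketch (the function name is ours) makes this explicit and notes the two-minima property mentioned in Section 5:

```python
import numpy as np

def minsum_check_update(F_row):
    # F_row: incoming bit-to-check messages F_{n',m} for one check node m.
    # Eq. (15): outgoing sign = product of the other signs,
    #           outgoing magnitude = minimum |F| over the other bits.
    F_row = np.asarray(F_row, dtype=float)
    signs = np.where(F_row >= 0, 1.0, -1.0)
    mags = np.abs(F_row)
    E_row = np.empty_like(F_row)
    for i in range(F_row.size):
        keep = np.arange(F_row.size) != i            # leave bit i out
        E_row[i] = np.prod(signs[keep]) * mags[keep].min()
    # Note: across all i, only the smallest and second-smallest magnitudes
    # ever appear, which is why hardware stores just those two values [46].
    return E_row
```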

3. Existing Min-Sum Modifications

In order to recover the performance loss of the min-sum algorithm, several correction factor methods have been proposed in the literature. Chen and Fossorier proposed the Normalized min-sum (NMS) and Offset min-sum (OMS) algorithms, both of which are applied to the check node operation to improve the decoding performance. The NMS corrects the check node output with a single scaling factor, found by exhaustive search for the best performance. The offset of OMS is set before decoding: a positive constant β is subtracted from the message magnitude in each iteration, independently of each iteration's output values.

(a) Normalized Min-Sum

NMS modifies the min-sum algorithm by multiplying by a scaling factor α, where 0 < α ≤ 1, in the check node processing to achieve an error performance closer to the sum product algorithm, as presented in equation (16).

$E_{m,n} = \alpha\left(\prod_{n' \in N(m)\setminus n}\operatorname{sgn}\left(F_{n',m}\right)\right)\min_{n' \in N(m)\setminus n}\left|F_{n',m}\right|$ (16)

(b) Offset Min-Sum

OMS modifies the min-sum algorithm by subtracting a positive constant offset factor β > 0 from the min-sum magnitude in the check node processing, clamping at zero, to achieve an error performance closer to the sum product algorithm, as presented in equation (17).

$E_{m,n} = \left(\prod_{n' \in N(m)\setminus n}\operatorname{sgn}\left(F_{n',m}\right)\right)\max\left(\min_{n' \in N(m)\setminus n}\left|F_{n',m}\right| - \beta,\ 0\right)$ (17)
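Both corrections are one-line modifications of the min-sum check node update sketched after equation (15), whose `minsum_check_update` function is reused here; the values α = 0.75 and β = 0.15 are purely illustrative, since the text notes these factors are found by search or fixed before decoding:

```python
import numpy as np

def nms_check_update(F_row, alpha=0.75):
    # Eq. (16): scale the min-sum output by alpha, with 0 < alpha <= 1.
    return alpha * minsum_check_update(F_row)

def oms_check_update(F_row, beta=0.15):
    # Eq. (17): subtract the offset beta > 0 from the magnitude, clamped at zero.
    E = minsum_check_update(F_row)
    return np.sign(E) * np.maximum(np.abs(E) - beta, 0.0)
```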

6. Discussion and Recommendation

Decreasing the decoding complexity at the relay for the Decode and Forward relay channel is particularly important, since there are usually stricter hardware and power constraints at the relay. There are also concerns that excessive decoding delay at the relay results in additional delay at the destination. This has driven a constant research effort aimed at reducing the complexity of decoding techniques at the relay. The min-sum algorithm is an LDPC decoding algorithm that offers lower hardware complexity at the cost of some performance degradation, and efforts have been made within the min-sum framework to achieve the optimum tradeoff between complexity and bit error rate (BER) performance. Previous research has produced several approaches that attempt to keep the performance close to that of the SPA with less hardware complexity, which is attractive for practical applications. Thus, more effective min-sum based methods that bring the simplified form of the algorithm close in performance to the SPA are needed to deliver satisfying performance with minimum computational complexity. The following subsections compare the existing min-sum based LDPC decoding systems in terms of the data used, the performances and the variable and check node (VCN) operation.

6.1 Comparison of Min-Sum Based LDPC Decoding System

The development of a min-sum based decoding system would provide an objective measure of reducing the decoding complexity with acceptable error performance. Our review of the existing min-sum based LDPC decoding methods reveals both the gaps and the strengths in how the property settings of the used methods are reported, which matters when comparing the results of the published works.

Table 2 gives a comparison of the min-sum based LDPC decoding systems in terms of their performances. For the data used, we have included the parity check, modulation and channel types and the number of iterations from the published works wherever given. For the variable and check node (VCN) operation methods, we have included information on the techniques implemented in the published studies wherever described. For the performance results, we have included significant error performance, complexity, throughput, energy consumption, coding gain and other measures from the published works wherever reported.

Investigating low complexity min-sum based decoding, Fabian et al. [45] saved 32 comparators while maintaining the same error performance by dividing the input messages into two groups, even and odd, for exhaustive comparison. Yin Xu et al. [47] achieved low complexity with a metric called Generalized Mutual Information (GMI), used to select variable scaling factors both per check node degree and per iteration in a one dimensional search. C.-L. Wang et al. [51] saved around 60% to 70% of the computations by treating the variable node (VN) set and the symbol combination set separately, which reduces the search space in check node (CN) processing. C.-C. Cheng et al. [52] required 51% fewer comparators, with a loss of 0.05 dB in error performance, by using a tree structure based minimum value finder (MVF) that removes the connection units, together with a suitable normalization factor to enhance the error performance. Ioannis Tsatsaragkos et al. [53] utilized up to 25% fewer comparators and needed fewer than 14 iterations out of a maximum of 30 through partitioning and approximate minimum identification. Meng Zhu et al. [48] performed uniform quantization of the channel likelihoods and of the information transmitted between check nodes and variable nodes. Nguyen Thi Dieu Linh et al. [54] used an early stopping node, which reduces the number of iterations and decreased the computational processing 5 times using BPSK and 10 times using QPSK compared with the conventional method. Ahmed Emran et al. [55] required only 0.08 to 0.24 dB more than the sum product algorithm (SPA) with much lower complexity, by approximating the scaling factor graph with a stair graph of constant horizontal step S, where the scaling factor takes exponential values that are easy to implement. Yongmin Jung et al. [56] achieved a low complexity architecture by combining the variable and check node operations with just one multiplier, used in the initial process and reused in the iterative process: during the initial process, the received LLR is multiplied by the scaling factor before iterative decoding, while during the iterative process the extrinsic information is multiplied by the scaling factor at every iteration. Although the available research on min-sum for the relay channel is still limited, min-sum based decoding is the best choice in terms of the tradeoff between decoding performance and implementation complexity [48]. The most important improvements concern the methods used to implement the system. The performance results highlight the potential of min-sum based algorithms to exhibit less computational complexity with acceptable error correction performance. Thus, replication studies are necessary in order to strengthen the available findings, especially for relay channel applications. The proposed taxonomy for min-sum based LDPC decoding techniques is summarized in Figure 4. The taxonomy is based on the elements of the existing min-sum techniques as presented in Table 2.

6.2 Parity Check and Iteration

From Table 2, seven of the studies used regular LDPC codes [45], [47], [48], [51], [52], [55], [57], while others [47], [56], [57] utilized irregular LDPC codes. Ahmed Emran et al. [55] stated that regular codes give very good performance while irregular codes do not, because in irregular codes unequal message densities are sent from variable nodes of different degree, which requires unequal scaling factors per iteration. In the studies in the table above, the number of iterations is set between 9 and 100. One iteration is defined as one round of message updates at both the check nodes and the variable nodes. The iteration process stops when the maximum number of iterations is reached or when all parity checks are satisfied by the hard decision calculation. The number of iterations directly affects the total decoding complexity, as shown by C.-C. Cheng et al. [52], who proposed an early termination (ET) method to reduce the number of decoding iterations, achieving a reduction in energy dissipation of 60.6%, and by Nguyen Thi Dieu Linh et al. [54], who employed an early stopping (ES) method to reduce the number of iterations, improving the quality and processing time of the decoding process. These studies highlight the potential use of early termination of the iterations for improving the processing time, particularly for min-sum based decoding of the relay channel in the DF relay protocol.

6.3 Modulation and Channel Scheme

From Table 2, the most popular modulation technique used in the min-sum based decoding studies is BPSK modulation. Six of the studies utilized BPSK modulation [45], [47], [51], [53], [54], [57]; Yin Xu et al., C.-L. Wang et al. and Ahmed Emran et al. also utilized QAM modulation [47], [51], [55], while Nguyen Thi Dieu Linh et al. [54] also investigated QPSK as the modulator.

BPSK modulation is the simplest and most robust of all techniques, and for that reason it is the most commonly chosen modulation scheme [58]. QPSK gives high spectral efficiency and is more efficient than BPSK because it carries two bits per symbol. Both BPSK and QPSK are similarly power efficient, but QPSK is more bandwidth efficient: it provides twice the spectral efficiency of BPSK at the same energy efficiency. Nguyen Thi Dieu Linh et al. [54] found that processing time decreased by a factor of 5 with the BPSK scheme compared with a factor of 10 with the QPSK scheme, i.e., QPSK performed twice as well as BPSK in terms of processing time. Three of the studies [47], [51], [55] used QAM modulation as representative of the higher order constellation case. They found that QAM has better error performance than BPSK due to the impact of the higher order constellation, which provides high power and bandwidth efficiency. Future work should investigate a suitable modulation scheme for the min-sum based relay channel in the DF relay protocol.

From Table 2, all of the studies were simulated over the AWGN channel with zero mean and fixed variance, while C.-L. Wang et al. [51] also simulated over an uncorrelated Rayleigh fading channel. Future relay channel research should attempt to consider channel models that combine the large-scale and small-scale effects, as extensively used in the cooperative communication environment.

6.4 Variable and Check Node (VCN) Operation

The VCN operation of the min-sum has a considerable impact on the success or failure of the decoding process. A review of the VCN operation methods used in the existing works is presented in Table 2. As can be seen from the table, the VCN operation methods used in min-sum based algorithms can be categorized into two approaches: (1) optimization and (2) architecture.

The optimization approach is the act or process of obtaining the best system design under the given circumstances. From Table 2, the optimization approaches used in existing studies include search methods for the selection of scaling factors [45], [47], [51], [52], [53], [55], [56], numbering systems [51], [48], early termination [52], [54] and quantization [52], [48]. The optimized search for the scaling factors is important for improving the BER performance of the traditional fixed scaling factor algorithm; in future research, low complexity scaling factor search methods that maintain the BER performance should be explored. A suitable numbering system, such as fixed point calculation, can be employed in the VCN operation functions to reduce the calculation complexity. To reduce energy consumption, early termination based approaches can be exploited to avoid unnecessary decoding computation by reducing the average number of iterations. From our observation, methods based on proper threshold values have the potential to be applied in early termination, especially under high BER scenarios, to reduce the computational complexity of the operation. It is also suggested that a properly designed quantization method is capable of providing optimization for the VCN operation.

The architecture approach, in contrast, modifies the interconnection structure of the VCN operation, which can minimize the usage of comparators. The architecture approaches used in existing studies include partitioning [53] and interconnection structures [52], [56]. Interconnection structures such as the tree and butterfly structures were employed in [52] to compare the input values in order to determine the minimum value in the check node operation with low complexity. The above design methods aim to reduce the area, delay, energy, complexity, BER and processing time, and to increase the throughput of the decoding process. Although the VCN operation has a considerable impact on the tradeoff between error performance and complexity of the decoding process, there is clearly still a lack of proper investigation of min-sum based VCN modifications for the relay channel in cooperative communication. Thus, future works should attempt to find the most discriminant min-sum based VCN operation modification for the DF relay channel in the cooperative communication environment.

7. Conclusion

This paper presented an overview of existing min-sum based LDPC decoding methods, which have great potential to be applied to the DF relay protocol. Based on existing published works, a comprehensive LDPC based coded DF relay system model was also proposed, and the components used to realize the entire process of the whole system were described. The bottlenecks and strengths encountered in comparing the published works were highlighted. Lastly, this paper proposed a new taxonomy for min-sum based LDPC decoding techniques, in which the existing VCN operation methods reported for min-sum based decoding are categorized into (1) optimization and (2) architecture design methods. The development of min-sum based LDPC decoding systems for the DF relay protocol is still in its infancy, with only a few studies, but the research to date already highlights the potential of min-sum based LDPC decoding analysis for the DF relay protocol. Future research should also focus on channel models that combine the large-scale and small-scale effects and consider the different quality of each channel due to different locations, as extensively used in the cooperative communication environment. In addition, further evaluations of different methods for the VCN operation of the min-sum decoding algorithm are vital for a low complexity and reliable implementation of decoding for the DF relay protocol in the cooperative communication environment.

8. Acknowledgement

The authors would like to acknowledge the Ministry of Higher Education Malaysia (MoHE) for financial support through the HLP PhD scholarship scheme.

References

[1] Fluorcom, "Mobile Internet and social media in Malaysia," ASEAN UP, 2015. [Online]. Available: http://aseanup.com/mobile-internet-social-media-malaysia/.

[2] ECommerceMILO, "With 140% mobile penetration, Malaysia has 10M smartphone users," E27, 2014.

[3] P. B. C. P. Ltd., "Malaysia - Mobile Communications, Broadcasting, and Forecasts," 2015.

[4] P. Hartley, "Gimme 5:What to Expect from 5G Wireless Networks," FreshMR, 2015. [Online]. Available: https://www.marketstrategies.com/blog/2015/03/gimme-5-what-to-expect-from-5g-wireless-networks/.

[5] E. C. Van Der Meulen, "Transmission of information in a T-terminal discrete memoryless channel," University of California, Berkeley, 1968.

[6] E. C. Van Der Meulen, "Three terminal communication channels," Adv. Appl. Probab., vol. 3, pp. 120-154, 1971.

[7] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity-part I: system description," IEEE Trans. Commun., vol. 51, no. 11, pp. 1927-1938, 2003.

[8] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity-part II: implementation aspects and performance analysis," IEEE Trans. Commun., vol. 51, no. 11, pp. 1939-1948, 2003.

[9] T. E. Hunter and A. Nosratinia, "Diversity through coded cooperation," IEEE Trans. Wirel. Commun., vol. 5, no. 2, pp. 283-289, 2006.

[10] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding Turbo-codes," IEEE Int. Conf. Commun., vol. 2, pp. 1064-1070, 1993.

[11] Z. Zhang and T. M. Duman, "Capacity approaching Turbo coding and iterative decoding for relay channels," IEEE Trans. Commun., vol. 53, no. 11, pp. 1895-1905, 2005.

[12] H. Sun, S. X. Ng, and L. Hanzo, "Turbo Trellis coded hierarchical modulation assisted decode-and-forward cooperation," IEEE Trans. Veh. Technol., vol. 9545, no. c, pp. 1-11, 2014.

[13] R. Lin, "Cooperative communication systems using distributed Turbo coding," 2011.

[14] G. Al-habian, A. Ghrayeb, M. Hasna, and A. Abudayya, "Threshold-based relaying in coded cooperative networks," vol. 60, no. 1, pp. 123-135, 2011.

[15] A. Chakrabarti, A. De Baynast, A. Sabharwal, and B. Aazhang, "LDPC code design for half-duplex decode-and-forward relaying," Proc. Allert. Conf. Monticello, 2005.

[16] A. Chakrabarti, A. de Baynast, A. Sabharwal, and B. Aazhang, "Low density parity check codes for the relay channel," IEEE J. Sel. Areas Commun., vol. 25, no. 2, pp. 280-291, 2007.

[17] M. A. Khojastepour, N. Ahmed, and B. Aazhang, "Code design for the relay channel and factor graph decoding," Proceeding 30th Asilomar Conf. Signals, Syst. Comput., vol. 2, pp. 2000-2004, 2004.

[18] M. Noor-A-Rahim, K. D. Nguyen, and G. Lechner, "SC-LDPC code design for half-duplex relay channels," Wirel. Pers. Commun., vol. 92, no. 2, pp. 771-783, 2017.

[19] Sreemohan P. V. and N. Sebastian, "FPGA implementation of min-sum algorithm for LDPC decoder," Int. Conf. Trends Electron. Informatics, pp. 821-826, 2017.

[20] H. Lee, M. Li, J. Hu, P. Chou, and Y. Ueng, "Optimization techniques for the efficient implementation of high-rate layered," IEEE Trans. Circuits Syst. Regul. Pap., vol. 64, no. 2, pp. 457-470, 2017.

[21] S. Myung, S. Park, K. Kim, J. Lee, S. Kwon, and J. Kim, "Offset and normalized min-sum Algorithms," IEEE Trans. Broadcast., vol. 63, no. 4, pp. 734-739, 2017.

[22] J. Bae, B. J. Choi, and M. Hoon, "Special session: Low power LDPC decoder using adaptive forced convergence algorithm," Circuits Syst. (MWSCAS), 2017 IEEE 60th Int. Midwest Symp., no. 1, pp. 309-312, 2017.

[23] S. Scholl, P. Schläfer, and N. Wehn, "Saturated min-sum decoding: An 'afterburner' for LDPC decoder hardware," Des. Autom. Test Eur. Conf. Exhib., pp. 1219-1224, 2016.

[24] L. Song, Q. Huang, and Z. Wang, "Set min-sum decoding algorithm for non-binary LDPC codes," IEEE Int. Symp. Inf. Theory, pp. 3008-3012, 2016.

[25] M. K. Roberts and M. Falaq, "A low-complex min-sum decoding algorithm for irregular LDPC codes," Wirel. Commun. Signal Process. Netw. (WiSPNET), Int. Conf., pp. 9-11, 2016.

[26] C. Song, C. Lin, and S. Lin, "Partially-stopped probabilistic min-sum algorithm for LDPC decoding," IEEE 5th Glob. Conf. Consum. Electron., pp. 6-7, 2016.

[27] S. Nimara, O. Boncalo, A. Amaricai, and M. Popa, "FPGA architecture of multi-codeword LDPC decoder with efficient BRAM utilization," Form. Proc. 2016 IEEE 19th Int. Symp. Des. Diagnostics Electron. Circuits Syst., 2016.

[28] K. Zhao, Y. Xu, D. He, Y. Guan, and W. Zhang, "Variable LLR scaling in LDPC min-sum decoding under horizontal shuffled structure," 2016 IEEE Int. Symp. Broadband Multimed. Syst. Broadcast., pp. 1-7, 2016.

[29] F. Alberge, "Min-sum decoding of irregular LDPC codes with adaptive scaling based on mutual information," 2016 9th Int. Symp. Turbo Codes Iterative Inf. Process., no. 2, pp. 71-75, 2016.

[30] S. Ouyang et al., "LLR-distribution-based non-uniform quantization for RBI-MSD algorithm in MLC flash memory," IEEE Commun. Lett., vol. 22, no. 1, pp. 45-48, 2018.

[31] T. T. Nguyen-ly, V. Savin, K. Le, and D. Declercq, "Analysis and design of cost-effective, high-throughput LDPC decoder," IEEE Trans. Very Large Scale Integr. Syst., vol. 26, no. 3, pp. 508-521, 2018.

[32] R. Ghanaatian, A. Balatsoukas-Stimming, T. C. Müller, M. Meidlinger, G. Matz, A. Teman, and A. Burg, "A 588-Gb/s LDPC decoder based on finite-alphabet message passing," IEEE Trans. Very Large Scale Integr. Syst., vol. 26, no. 2, pp. 329-340, 2018.

[33] A. S. Mohamed, M. Abd-Elnaby, and S. A. El-Dolil, "Performance evaluation of adaptive LDPC coded modulation cooperative wireless communication system with best-relay selection," Int. J. Digit. Inf. Wirel. Commun., vol. 4, no. 1, pp. 155-168, 2014.

[34] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inf. Theory, vol. IT-8, pp. 21-28, 1962.

[35] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electron. Lett., vol. 33, p. 457, 1997.

[36] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 1948.

[37] E. O. Torshizi, H. Sharifi, and M. Seyrafi, "A new hybrid decoding algorithm for LDPC codes based on the improved variable multi weighted bit-flipping and BP algorithms," 2013 21st Iran. Conf. Electr. Eng. ICEE 2013, 2013.

[38] A. Neubauer, J. Freudenberger, and V. Kuhn, Coding theory algorithms, architectures and applications. 2007.

[39] M. Abrar, X. Gui, and A. Punchihewa, "Low complexity joint sub-carrier pairing, allocation and relay selection in cooperative wireless networks," Int. J. Commun. Networks Inf. Secur., vol. 6, no. 3, pp. 182-188, 2014.

[40] S. Nouh, I. Chana, and M. Belkasmi, "Decoding of block codes by using genetic algorithms and permutations set," Int. J. Commun. Networks Inf. Secur., vol. 5, no. 3, pp. 201-209, 2013.

[41] Z. Si, R. Thobaben, and M. Skoglund, "Bilayer LDPC convolutional codes for decode-and-forward relaying," IEEE Trans. Commun., vol. 61, no. 8, pp. 3086-3099, 2013.

[42] X.-B. Li, S. Zhang, F.-H. Zhao, and H.-W. Zhang, "A multi-base station cooperative algorithm for LDPC-OFDM system in the HF channel," 2014 Int. Conf. Wirel. Commun. Sens. Netw., pp. 21-27, 2014.

[43] H. A. Orabi, "Implementation for two-stage hybrid decoding for low density parity check (LDPC) codes," Int. J. Comput. Appl., vol. 80, no. October, pp. 34-41, 2013.

[44] T.-C. Chen, C.-J. Li, and E.-H. Lu, "A hybrid belief propagation decoding algorithms of LDPC codes for fast convergence," Cross Strait Quad-Regional Radio Sci. Wirel. Technol. Conf., pp. 389-392.

[45] F. Angarita, J. Valls, V. Almenar, and V. Torres, "Reduced-complexity min-sum algorithm for decoding ldpc codes with low error-floor," IEEE Trans. Circuits Syst. I Regul. Pap., vol. 61, no. 7, pp. 2150-2158, 2014.

[46] V. V Vityazev, E. A. Likhobabin, and E. A. Ustinova, "Min-sum algorithm-structure based decoding algorithms for LDPC codes," 2014 3rd Mediterr. Conf. Embed. Comput. (MECO), pp. 256-259, 2014.

[47] Y. Xu, S. Member, L. Szczecinski, S. Member, and B. Rong, "Variable LLR scaling in min-sum decoding for irregular LDPC codes," IEEE Trans. Broadcast., vol. 60, no. 4, pp. 606-613, 2014.

[48] M. Zhu, L. Li, and H. Zhang, "Two-stage fixed-point quantization of LDPC min-sum decoding," Int. J. Emerg. Technol. Adv. Eng., vol. 4, no. 2, pp. 3-6, 2014.

[49] Z. Zhong, Y. Li, and X. Chen, "Modified min-sum decoding algorithm for LDPC codes based on classified correction," Third Int. Conf. Commun. Netw. in China, 2008.

[50] M. R. Islam, D. S. Shafiullah, M. Mostafa, A. Faisal, and I. Rahman, "Optimized min-sum decoding algorithm for low density parity check codes," 14th Int. Conf. Adv. Commun. Technol., vol. 2, no. 12, pp. 168-174, 2012.

[51] C. L. Wang, X. Chen, Z. Li, and S. Yang, "A simplified min-sum decoding algorithm for non-binary LDPC codes," IEEE Trans. Commun., vol. 61, no. 1, pp. 24-32, 2013.

[52] C. C. Cheng, J. D. Yang, H. C. Lee, C. H. Yang, and Y. L. Ueng, "A fully parallel LDPC decoder architecture using probabilistic min-sum algorithm for high-throughput applications," IEEE Trans. Circuits Syst. I Regul. Pap., vol. 61, no. 9, pp. 2738-2746, 2014.

[53] I. Tsatsaragkos and V. Paliouras, "Approximate algorithms for identifying minima on min-sum LDPC decoders and their hardware implementation," IEEE Trans. Circuits Syst. II Express Briefs, vol. 62, no. 8, pp. 766-770, 2015.

[54] N. T. D. Linh, G. Wang, M. Jia, and G. Rugumira, "Performance evaluation of sum product and min-sum stopping node algorithm of LDPC decoding," Inf. Technol. J., vol. 11, no. 9, pp. 1298-1303, 2012.

[55] A. A. Emran and M. Elsabrouty, "Simplified variable-scaled min sum LDPC decoder for irregular LDPC codes," Consum. Commun. Netw. Conf., pp. 526-531, 2014.

[56] Y. Jung, Y. Jung, S. Lee, and J. Kim, "New min-sum LDPC decoding algorithm using SNR-considered adaptive scaling factors," ETRI J., vol. 36, no. 4, pp. 591-598, 2014.

[57] M. Xu, J. Wu, and M. Zhang, "A modified offset min-sum decoding algorithm for LDPC codes," 2010 3rd Int. Conf. Comput. Sci. Inf. Technol., pp. 19-22, 2010.

[58] "Bit error rate analysis in simulation of digital communication systems with different modulation schemes," Int. J. Innov. Sci. Eng. Technol., vol. 1, no. 3, pp. 406-413, 2014.

Jamaah Suud, Hushairi Zen, Al-Khalid B. Hj Othman, Khairuddin Ab. Hamid

Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Malaysia Sarawak, Kota Samarahan, Sarawak, Malaysia
Table 1. Performance Comparison of Error Control Coding Over the Years

Error Control Code    BCH             Turbo           LDPC
Year                  1959            1993            1996 and beyond
Code Rate             1/6, 1/4        1/3, 1/2        2/3, 3/4
                      (Low)           (Medium)        (High)
BER                   10^-3 (Poor)    10^-6 (Good)    10^-8 (Very Good)
Decoding Complexity   Average         High            Low

Table 2: Performance Comparison of Min-Sum Based LDPC Decoding Systems

Angarita et al. [45]
  Method: svwMS | Parity check: regular | Iterations: 30 | Modulation: BPSK | Channel: AWGN
  VN/CN operation: CN applies a correction factor; comparisons are divided into two groups.
  Results: BER = 10^-15 (FER = 10^-13); 32 fewer comparators; throughput of 12.8 Gbps with an area of 3.8 mm^2.

Xu et al. [47]
  Method: ID-GMI | Parity check: regular & irregular | Iterations: 50 | Modulation: BPSK, 256-QAM | Channel: AWGN
  VN/CN operation: CN selects a metric based on density evolution and searches for the scaling factor.
  Results: BER = 10^-7.

Wang et al. [51]
  Method: SMSA | Parity check: regular | Iterations: 20, 50, 100 | Modulation: BPSK, QAM | Channel: AWGN & independent Rayleigh fading
  VN/CN operation: CN treats symbol combination sets separately, giving a smaller search space.
  Results: small SNR loss; SMSA saves 60% to 70% of computation and 55% of memory bits.

Cheng et al. [52]
  Method: NPMSA | Parity check: regular | Iterations: 9 | Modulation: -- | Channel: --
  VN/CN operation: CN uses a tree-structure-based MVF with an optimal normalization factor and quantization bits, a mix of tree and butterfly types, and an early termination method.
  Results: 51% fewer comparators with a loss of 0.05 dB; area reduction of 19.8%; energy reduction of 60.6%.

Tsatsaragkos et al. [53]
  Method: ExMin-n; rExMin-n | Parity check: -- | Iterations: 10, 30 | Modulation: BPSK | Channel: AWGN
  VN/CN operation: CN uses an n-level exMin approximation (partitioning and minimum identification); rExMin-n adds a negative factor r.
  Results: exMin-3 vs MS: coding gain of 0.15-0.2 dB; exMin-3 vs NMS: 0.08 dB degradation at a BER of 10^-7; rExMin-3 improves on exMin-3 at BER below 10^-6, with a gain of 0.06 dB; exMin converges in fewer than 14 iterations; exMin-n needs 25% fewer comparators and 65% fewer multiplexers; exMin complexity reduction of 6%-15% and delay reduction of 9%-23%.

Zhu et al. [48]
  Method: two-stage fixed-point quantization | Parity check: regular | Iterations: 10 | Modulation: -- | Channel: --
  VN/CN operation: CN uses fixed-point rather than floating-point numbers; 1st stage: uniform quantization of the channel likelihood information; 2nd stage: uniform quantization of the information passed between CN and VN.
  Results: at a BER of 10^-6, 4-bit quantization vs floating-point calculation gives a coding gain of 0.025 dB, and fixed point loses only 0.05 dB against floating point; two-stage quantization gains 0.2 dB over single-stage; single-stage gains 0.025 dB over two-stage with lower complexity plus internal information quantization.

Linh et al. [54]
  Method: MSA-SN | Parity check: -- | Iterations: 30 | Modulation: BPSK/QPSK | Channel: AWGN
  VN/CN operation: CN uses a stopping node.
  Results: MSA-SN vs MS: 0.1-0.3 dB lower BER; processing time about 5x lower (BPSK) and 10x lower (QPSK).

Emran et al. [55]
  Method: SVSMS | Parity check: regular | Iterations: 50 | Modulation: 256-QAM | Channel: AWGN
  VN/CN operation: approximates the scaling-factor graph by a stair graph with a constant horizontal step.
  Results: SVSMS vs MS: 0.41 to 0.85 dB better; SVSMS vs scaled MS: 0 to 0.43 dB better; SVSMS vs SPA: only 0.08-0.24 dB worse, with lower complexity.

Jung et al. [56]
  Method: SANMS | Parity check: irregular | Iterations: 10, 15, 30 | Modulation: -- | Channel: AWGN
  VN/CN operation: VN weighs the negative and positive effects of the received SNR and applies adaptive scaling factors to the received log-likelihood ratios; CN applies adaptive scaling factors to the extrinsic information; combined VN + CN architecture.
  Results: maximum of 10 iterations; SANMS vs MS: coding gain of 0.4 dB; SANMS with overestimated adaptive SFs: 2.6 dB at a BER of 9.26 x 10^-7; with perfectly estimated adaptive SFs: 2.8 dB at a BER of 8.57 x 10^-7.

svwMS simplified variable weight min-sum, BPSK binary phase shift
keying, QPSK quadrature phase shift keying, AWGN additive white
Gaussian noise, ID-GMI iteration- and degree-dependent generalized
mutual information, QAM quadrature amplitude modulation, SMSA
simplified min-sum algorithm, NPMSA normalized probabilistic min-sum
algorithm, MVF minimum value finder, MSA-SN min-sum using stopping
node, SVSMS simplified variable scaled min-sum, SNR signal-to-noise
ratio, MS min-sum, NMS normalized min-sum, SANMS SNR-considered
adaptive NMS, scaled MS scaled min-sum, SPA sum-product algorithm,
SF scaling factor, CN check node, VN variable node, BER bit error
rate, FER frame error rate
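
To make the comparison in Table 2 concrete, the following minimal Python sketch illustrates the normalized min-sum check-node update that most of the surveyed variants build on; the function name and the example value alpha = 0.8 are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

def check_node_update(incoming_llrs, alpha=0.8):
    """Normalized min-sum check-node (CN) update (illustrative sketch).

    incoming_llrs : LLR messages arriving at one check node from its
                    neighbouring variable nodes (degree >= 2).
    alpha         : normalization (scaling) factor; alpha = 1.0 reduces
                    this to the plain min-sum rule. The schemes in
                    Table 2 differ mainly in how such a factor is
                    chosen, adapted, or approximated.
    """
    llrs = np.asarray(incoming_llrs, dtype=float)
    signs = np.where(llrs >= 0, 1.0, -1.0)       # sign of each message
    mags = np.abs(llrs)

    total_sign = np.prod(signs)                  # product of all signs
    order = np.argsort(mags)                     # indices sorted by magnitude
    min1, min2 = mags[order[0]], mags[order[1]]  # two smallest magnitudes

    out = np.empty_like(llrs)
    for i in range(llrs.size):
        # Extrinsic rule: edge i uses the sign product and the minimum
        # magnitude taken over all *other* incoming messages.
        ext_sign = total_sign * signs[i]         # removes signs[i], since signs are +-1
        ext_min = min2 if i == order[0] else min1
        out[i] = alpha * ext_sign * ext_min
    return out

# Example: a degree-4 check node; alpha damps the min-sum overestimation.
print(check_node_update([-1.2, 0.4, 2.5, -3.0], alpha=0.8))
```

An offset variant would instead compute max(ext_min - beta, 0) for some offset beta; both corrections compensate for the tendency of plain min-sum to overestimate the magnitudes produced by the sum-product algorithm.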