
Criterion to evaluate the quality of infrared target images based on scene features.


In the field of infrared image signal processing, advances in infrared thermography have enabled imaging systems to reveal increasingly subtle information about the target and more detail in the scene [1]. While these changes improve imaging resolution, they also make scene content one of the most important factors affecting target acquisition performance. With the development of infrared jamming technology and the complex, changeable nature of real scenes, both of which interfere with target acquisition, input infrared images are becoming ever more complex, and their content and quality vary greatly [2]. An accurate and effective quantitative description of infrared image quality could therefore provide a decision-making basis for designing and improving infrared image processing algorithms, and it is of great significance to the development of infrared image processing technology [3], [4].

Image quality evaluation is usually employed to measure the compression rate and the information loss that occur in video processing and image communication [5], whereas infrared image quality evaluation mainly concerns image processing algorithms and their performance [3], [6]. The targets considered in this paper have sufficient size (> 0.2 %) and contain abundant shape and grey-scale information [7]. The metrics proposed in previous research fall into three classes: (1) statistical metrics, such as statistical variance (SV) [8], signal-to-clutter ratio (SCR) [9], target-versus-background entropy (ETB) [10], target standard deviation (TSD) [11] and target-background interference ratio (TBIR) [12]. Although these statistics are easy to obtain, they ignore the structural information between pixels, which contributes greatly to target detection performance [13]. (2) Metrics based on human perceptual properties, such as probability of edge (POE) [14] and target structure similarity (TSSIM) [15], which account for how the background disturbs a human observer detecting the target; such metrics are evidently unsuited to machine vision. (3) Texture-based metrics, including the improved co-occurrence matrix (ICOM) [16]-[18], texture-based image clutter (TIC) [6] and target texture similarity (TTSIM) [13], which measure the degree of disturbance by the similarity of texture between target and background. Texture-based metrics rest on the implicit assumption that the target has distinctive scale and texture features; in infrared images, however, the texture features of the target are not obvious. As the analysis above shows, it is necessary and important to propose an evaluation metric oriented to infrared images that describes the inherent difficulty of detecting, recognizing and segmenting the target in an IR target image [6].
It is well known that the main purpose of processing infrared images is to detect the target [3], [6]. From this point of view, we first analyse the factors that affect target detection, and then put forward two metrics that measure an image by the difficulty it poses for target detection: the interference degree of the global background (IDGB) and the similarity degree of the local background (SDLB). The definition and calculation of these metrics are given in detail. Experiments designed to illustrate the validity of the proposed metrics have yielded promising results.

The structure of this paper is as follows. Section I describes the background of this research. Section II analyses the factors that affect target detection in IR target images. Section III introduces two new IR target image quality metrics, the interference degree of the global background (IDGB) and the similarity degree of the local background (SDLB), which reflect the complexity of target detection in IR target images. Section IV validates the proposed metrics through both theoretical analysis and experimental results, and compares them with traditional metrics such as SV, TTSIM and TSSIM. Finally, Section V concludes the paper.


Since an image contains abundant information, evaluating image quality precisely is a difficult task. Infrared target image quality metrics can be designed to depict the factors that influence image processing algorithms [3], [6]. In this paper we limit image quality evaluation to a narrower scope: we focus only on the quality factors that affect the performance of image processing algorithms [6].

As indicated in [13], the location of the target is unknown during target search and detection, while there usually exist some 'target-like' regions: regions that resemble the target yet differ from their local background. As the number of 'target-like' regions climbs, locating the target becomes more and more difficult and may even disable detection. Target search is mainly influenced by the global background, whereas static target detection (extracting the target from a region that might contain it) is mainly affected by the local background [13]. The more complex the local background, the harder it is to extract the target from it.

For a more explicit explanation, two identical infrared images are shown in Fig. 1(a) and Fig. 1(b). For convenient comparison, B1, B2, B3 and TR label different areas in Fig. 1(b): TR denotes the target area, while B1, B2 and B3 are background areas. B1, B2 and B3 differ considerably from one another. B1 is similar to TR and large, so it disturbs the detection of TR; B2 is also similar to TR, but being smaller it cannot play an equal role to B1; B3, although the largest, will not interfere with detecting TR because its scene differs so much from TR. In short, three factors influence target search and detection: (1) the background scene; (2) the similarity between background and target; (3) the size of the background regions that are similar to the target. An image quality evaluation method that depicts these three factors will validly measure the interruption of target search and detection by the global background.

In this paper we build a novel method for evaluating the quality of infrared target images by measuring the disturbing factors that exist in target search and static detection. The design and calculation of this method are detailed below.


Before giving the quantitative description, we define the local background area of the target as in [4] and [6]: let MR be the bounding rectangle of the target T, and let TR be the rectangle containing T whose area is twice that of MR. MR and TR share the same centre and aspect ratio. The local background area L is then the region inside TR but outside MR, as Fig. 2 shows.
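As a concrete illustration, the region definition above can be sketched in a few lines of Python. The `(y, x, h, w)` rectangle encoding and the rounding to whole pixels are illustrative assumptions of this sketch, not part of the paper:

```python
import numpy as np

def local_background_mask(shape, mr):
    """Masks for the regions above: mr = (y, x, h, w) is the target's
    bounding rectangle MR; TR is the concentric rectangle with twice
    MR's area and the same aspect ratio; the local background L is
    TR minus MR (rounded to whole pixels)."""
    y, x, h, w = mr
    s = np.sqrt(2.0)                      # doubling the area scales each side by sqrt(2)
    h2, w2 = int(round(h * s)), int(round(w * s))
    y2, x2 = y - (h2 - h) // 2, x - (w2 - w) // 2
    tr = np.zeros(shape, dtype=bool)
    tr[max(y2, 0):y2 + h2, max(x2, 0):x2 + w2] = True
    inner = np.zeros(shape, dtype=bool)
    inner[y:y + h, x:x + w] = True
    return tr & ~inner                    # L = TR \ MR
```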

A. Interference Degree of Global Background

The interference degree of the global background (IDGB) is a quantitative metric that represents how strongly the global background disturbs target detection. IDGB is based on finding 'target-like' areas in the global background using grey-scale features. Its minimum value is zero, meaning there is no 'target-like' area in the global background and no disturbance in the process of finding TR; the larger IDGB becomes, the higher the interference.

As described in Section II, three factors influence target search and detection: (1) the background scene; (2) the similarity between background and target; and (3) the size of the background regions similar to the target. A novel algorithm is proposed in this paper to handle all three. First, the background is divided adaptively into several blocks according to their different frequencies; then the similarity of each block to TR is calculated; finally, the area weight of each block is computed. The algorithm is illustrated in Fig. 3 and described as follows.

Step 1: adaptively divide the background into blocks according to frequency.

A higher frequency means more drastic changes in content [4] and a larger mean square error (MSE). MSE is therefore used to divide the image into blocks of similar frequency [4]. The image is quartered, and each sub-block is quartered again and again until the MSE of a block falls below that of TR, i.e. $\mathrm{MSE}_{TR}$, or its area falls below that of TR, i.e. $A_{TR}$. The image is thereby divided into sub-blocks of different sizes: the larger blocks mainly contain the flat areas of the image, while the smaller blocks mainly contain the detailed sections.
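The splitting rule of Step 1 can be sketched as a short recursion. The tie-breaking at the thresholds and the handling of odd block sizes are assumptions of this sketch, since the paper does not specify them:

```python
import numpy as np

def quadtree_blocks(img, mse_tr, area_tr):
    """Adaptively split an image into blocks: a block is quartered again
    while its mean-square deviation from its own mean exceeds mse_tr
    and its children would still be at least area_tr in size."""
    blocks = []

    def split(y, x, h, w):
        sub = img[y:y + h, x:x + w]
        mse = np.mean((sub - sub.mean()) ** 2)   # block "frequency" proxy
        if mse <= mse_tr or h * w // 4 < area_tr or min(h, w) < 2:
            blocks.append((y, x, h, w))          # leaf block
            return
        h2, w2 = h // 2, w // 2
        for dy, dx in ((0, 0), (0, w2), (h2, 0), (h2, w2)):
            split(y + dy, x + dx, h - h2 if dy else h2, w - w2 if dx else w2)

    split(0, 0, *img.shape)
    return blocks
```

Flat regions stop splitting early (large blocks), while detailed regions split down to the area limit (small blocks), matching the behaviour described above.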

Step 2: calculate the similarity between TR and each block.

A block that is similar to the target T and different from the local background L will disturb target detection. The key of this step is an effective descriptor of the image information from which the similarity between images can be computed.

Comaniciu et al. introduced kernel-based grey density estimation [19] to capture grey-scale information, with the Bhattacharyya coefficient as the similarity descriptor. For its good performance, we adopt this method to obtain the similarity between each block and the target as

$$w_1(i) = \frac{B_{Ti}}{1 + B_{Li}}, \tag{1}$$

where $B_{Ti} = \sum_{u=1}^{m} \sqrt{q_{TR,u}\,q_{i,u}}$ represents the similarity between TR and the $i$-th image block, and $B_{Li} = \sum_{u=1}^{m} \sqrt{q_{L,u}\,q_{i,u}}$ represents the similarity between L and the $i$-th image block. Since $B_{Li} \in [0, 1]$, the denominator $1 + B_{Li}$ avoids division by zero. Here $q_{TR} = \{q_{TR,u}\}_{u=1,\dots,m}$, $q_{L} = \{q_{L,u}\}_{u=1,\dots,m}$ and $q_{i} = \{q_{i,u}\}_{u=1,\dots,m}$ are the kernel density estimates of TR, L and the $i$-th image block, respectively, calculated as

$$q_u = C \sum_{i=1}^{n} k\left(\left\| \frac{x_i - x_0}{h} \right\|\right) \delta[b(x_i) - u], \tag{2}$$

where $x_0$ is the centre of the region and $\{x_i\}_{i=1,\dots,n}$ is the set of pixels in the region. The grey scale is divided into $m$ levels (a typical value of $m$ is 64), so the grey level at $x_i$ is $b(x_i)$ and $u = 1, 2, \dots, m$ indexes the region features, with $\sum_{u=1}^{m} q_u = 1$. $C$ is a normalization constant; the Kronecker delta $\delta[b(x_i) - u]$ judges whether the grey value at $x_i$ belongs to the $u$-th feature; and $k$ is a weight ensuring that pixels closer to the centre play a more important part, obtained by


where $h$ is the scale of the region.
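Equations (1) and (2) can be sketched together as follows. The kernel equation above did not survive extraction, so the Epanechnikov-style profile used here is an assumption, chosen because it is the profile used in Comaniciu's tracker [19]; any profile that down-weights pixels far from the centre illustrates the idea:

```python
import numpy as np

def kernel_density(region, m=64, levels=256):
    """Kernel-weighted grey-level density of a rectangular region, as in (2)."""
    h_, w_ = region.shape
    y, x = np.mgrid[0:h_, 0:w_]
    cy, cx = (h_ - 1) / 2.0, (w_ - 1) / 2.0
    r2 = ((y - cy) / (h_ / 2.0)) ** 2 + ((x - cx) / (w_ / 2.0)) ** 2
    k = np.maximum(1.0 - r2, 0.0)              # pixels nearer the centre weigh more
    bins = (region.astype(int) * m) // levels  # quantise grey scale into m features
    q = np.bincount(bins.ravel(), weights=k.ravel(), minlength=m)[:m]
    return q / q.sum()                         # normalisation constant C

def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))

def w1(block, target, local_bg, m=64):
    """Similarity weight of (1): blocks close to the target and unlike
    the local background score high."""
    qi, qt, ql = (kernel_density(r, m) for r in (block, target, local_bg))
    return bhattacharyya(qt, qi) / (1.0 + bhattacharyya(ql, qi))
```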

Step 3: obtain the area weights $w_2(i)$.

We calculate the area weights $w_2(i)$ by (4)

$$w_2(i) = \frac{A_{Bi}}{A_{TR}}, \tag{4}$$

where $A_{TR}$ is the area of TR and $A_{Bi}$ is the area of the $i$-th image block. Based on the analysis above, IDGB is calculated as

$$\mathrm{IDGB} = \frac{\sum_{i=1}^{n} w_1(i)\,w_2(i)}{\sum_{i=1}^{n} w_2(i)}, \tag{5}$$

where only blocks with $w_1(i) \geq 0.15$ are counted; blocks with $w_1(i)$ below 0.15 are ignored to avoid the accumulation of unrelated regions. An instance of calculating IDGB is shown in Fig. 4 (its IDGB value is 4.075).
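Combining (1), (4), (5) and the 0.15 threshold gives a short aggregation routine. Whether the denominator of (5) sums over all blocks or only over the retained ones is not spelled out; summing over the retained blocks is an assumption of this sketch:

```python
import numpy as np

def idgb(w1_vals, w2_vals, threshold=0.15):
    """IDGB as in (5): area-weighted aggregation of the similarity
    weights, ignoring blocks whose w1 falls below the threshold so
    that many weakly related regions cannot accumulate."""
    w1_vals = np.asarray(w1_vals, dtype=float)
    w2_vals = np.asarray(w2_vals, dtype=float)
    keep = w1_vals >= threshold
    if not keep.any():
        return 0.0   # no 'target-like' block: no interference
    return float(np.sum(w1_vals[keep] * w2_vals[keep]) / np.sum(w2_vals[keep]))
```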

B. Similarity Degree of Local Background

The similarity degree of the local background (SDLB) is a quantitative metric that reflects the difference between the target and its local background. The definition of the local background L is given at the beginning of Section III. The main idea of SDLB is to evaluate the disturbance of the local background through the difference between target and local background. SDLB ranges from 0, meaning no disturbance to target detection, to 1, meaning that the local background is identical to the target, which then cannot be detected against it.

As stated in Section II, the local background is the principal factor disturbing static target detection in infrared images. From Fig. 5(a) to Fig. 5(c), it becomes harder and harder to locate the target accurately as the grey levels of the target and its local background grow closer. Let $\mu_T$ be the average grey level of the target and $\mu_L$ that of the local background; the similarity between the target T and the local background L can be depicted as

$$GS = \frac{2\mu_T\mu_L}{\mu_T^2 + \mu_L^2}. \tag{6}$$

GS lies in [0, 1]; the larger its value, the closer the average grey levels of T and L.

Furthermore, the structure and grey-scale distribution of the target and its local background also disturb target detection. Fig. 6 shows three images in which the target and local background share the same mean grey level, but the difference in structure and grey intensity weakens from Fig. 6(a) to Fig. 6(c). The target in Fig. 6(a) can clearly still be identified against its local background, whereas the target in Fig. 6(c) can barely be detected. We introduce the grey variance to depict the similarity of target and local background in structure and intensity distribution [4]:

$$IS = \frac{2\sigma_T\sigma_L}{\sigma_T^2 + \sigma_L^2}, \tag{7}$$

where $\sigma_T^2$ and $\sigma_L^2$ are the grey variances of the target and local background, respectively. IS lies in [0, 1]; the value 1 means that the structure and intensity distribution of the target and the local background are the same.

Combining (6) and (7), SDLB is calculated as

$$\mathrm{SDLB} = GS \times IS = \frac{2\mu_T\mu_L}{\mu_T^2 + \mu_L^2} \times \frac{2\sigma_T\sigma_L}{\sigma_T^2 + \sigma_L^2}. \tag{8}$$
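Equations (6)-(8) translate directly into code. One caveat this sketch does not resolve: a perfectly flat target and background (both variances zero) would need a small guard constant, which the paper does not discuss:

```python
import numpy as np

def sdlb(target, local_bg):
    """SDLB of (8): grey-level similarity GS (6) times
    structure/intensity similarity IS (7) between the target pixels
    and their local background pixels."""
    mu_t, mu_l = target.mean(), local_bg.mean()
    sg_t, sg_l = target.std(), local_bg.std()
    gs = 2 * mu_t * mu_l / (mu_t ** 2 + mu_l ** 2)   # mean grey-level similarity
    is_ = 2 * sg_t * sg_l / (sg_t ** 2 + sg_l ** 2)  # structure/intensity similarity
    return gs * is_
```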


Several experiments were carried out to evaluate the performance of the proposed metrics. The experimental data consist of actual infrared images (AIIs) and synthesized infrared images (SIIs). AIIs are selected from standard databases, such as ATCOM and the Ohio State University infrared database, and were captured over different backgrounds such as sky, terrain, sea clutter and mixed low-altitude ground. SIIs combine real infrared backgrounds with artificial targets and are used to verify the performance of the proposed metrics in specific scenarios.

A. Validity Experiment

One hundred and sixty actual infrared target images of 256 x 256 pixels are used to evaluate the validity of the metrics IDGB and SDLB. Given the volume of experimental data, we take the four typical images shown in Fig. 7 as examples to explain in detail; their IDGB and SDLB values are listed in Table I.

It can be concluded from Table I that the interference from both the global and the local background is low in Fig. 7(a), with IDGB = 0.276 and SDLB = 0.216. The qualities of Fig. 7(b) and Fig. 7(c) are worse than that of Fig. 7(a). Although Fig. 7(b) has low global interference (IDGB = 2.167), its local background generates more interference (SDLB = 0.625). The situation of Fig. 7(c) is the opposite: it contains many 'target-like' areas, so the interference from the global background dominates; its IDGB is high (10.798) while its SDLB is low (0.176). Fig. 7(d) has the worst quality of the four images: its IDGB reaches 8.514, meaning high interference from a 'target-like' background during target search, and its SDLB of 0.854 indicates that the local background also disturbs target detection strongly.

Comparing Fig. 7(a) to Fig. 7(d), it can be concluded that IDGB and SDLB are both valid for evaluating the quality of infrared target images and give an accurate measurement of the interference with target detection. Moreover, IDGB and SDLB indicate which part of the background disturbs target detection.

B. Performance Comparative Experiment

The influence of the image scene on the target depends on the background information and the target characteristics [13]. In this paper, statistical variance (SV, a statistical metric), target structural similarity (TSSIM, a metric based on human perceptual properties) and target texture similarity (TTSIM, a texture-based metric) are chosen for comparison with our metrics IDGB and SDLB.

1) Metrics used for comparison

SV uses the average grey-level standard deviation to describe the strength of the background interference [8]; it is obtained by

$$SV = \sqrt{\frac{1}{N}\sum_{i}\sigma_i^2}, \tag{9}$$

where $N$ is the number of sub-blocks in the image and $\sigma_i$ is the standard deviation of the grey level in the $i$-th block. The larger SV is, the worse the image quality.
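Equation (9) translates directly; the grid block size is a free parameter that [8] leaves to the operator, so the value 16 below is illustrative:

```python
import numpy as np

def sv(img, block=16):
    """Statistical variance metric of (9): root of the mean of the
    per-block grey-level variances over a regular grid."""
    h, w = img.shape
    vars_ = [np.var(img[y:y + block, x:x + block])
             for y in range(0, h, block)
             for x in range(0, w, block)]
    return float(np.sqrt(np.mean(vars_)))
```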

TSSIM estimates image quality by calculating the differences in luminance, contrast and structure between target and background [15], as (10) shows:

$$\mathrm{TSSIM} = \sqrt{\frac{1}{N}\sum_{j=1}^{N} \mathrm{TSSIM}(T, B_j)^2}, \tag{10}$$

where

$$\mathrm{TSSIM}(T, B_j) = \frac{4\mu_T\mu_{B_j}\sigma_{TB_j} + C}{(\mu_T^2 + \mu_{B_j}^2)(\sigma_T^2 + \sigma_{B_j}^2) + C}$$

is the structural similarity measure between the target and the $j$-th block; $\mu_T$ is the mean grey level of the target, $\mu_{B_j}$ is the mean grey level of the $j$-th block, $\sigma_{TB_j}$ is the grey covariance between the target and the $j$-th block, and $C$ is a constant that avoids division by zero. A larger TSSIM likewise means worse image quality.
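A sketch of (10) in code. The printed denominator of the per-block term was garbled in the source, so the $(\sigma_T^2 + \sigma_{B_j}^2)$ factor follows the standard SSIM pattern and is an assumption; the blocks are assumed to have the same size as the target so that the covariance is defined:

```python
import numpy as np

def tssim_pair(t, b, c=1e-6):
    """Structural-similarity term of (10) between target t and block b."""
    mu_t, mu_b = t.mean(), b.mean()
    cov = np.mean((t - mu_t) * (b - mu_b))   # grey covariance sigma_TBj
    return (4 * mu_t * mu_b * cov + c) / ((mu_t ** 2 + mu_b ** 2)
                                          * (t.var() + b.var()) + c)

def tssim(target, blocks):
    """RMS aggregation of the per-block terms, as in (10)."""
    vals = [tssim_pair(target, b) for b in blocks]
    return float(np.sqrt(np.mean(np.square(vals))))
```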

TTSIM is calculated by

$$\mathrm{TTSIM} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \mathrm{TTSIM}_i^2}, \tag{11}$$

where $\mathrm{TTSIM}_i = \sum (CM_i - CM_T)^2$ is the texture similarity measure between the target and the $i$-th block, $CM_T$ and $CM_i$ are the grey-level co-occurrence matrices (GLCM) of the target and the $i$-th block, respectively, and $N$ is the number of sub-blocks. Unlike SV and TSSIM, the larger the value of TTSIM, the better the image quality.
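A sketch of (11) with a minimal GLCM; the number of grey levels and the single pixel offset are illustrative choices, since the GLCM parameters used in [13] are not given here:

```python
import numpy as np

def glcm(region, levels=16, offset=(0, 1)):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    q = (region.astype(int) * levels) // 256   # quantise to `levels` grey levels
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1.0)                  # count co-occurring level pairs
    return m / m.sum()

def ttsim(target, blocks, levels=16):
    """TTSIM of (11): RMS of the squared-difference texture distances
    between the target GLCM and each block GLCM. Larger means the
    target texture stands out more, i.e. better image quality."""
    cm_t = glcm(target, levels)
    vals = [np.sum((glcm(b, levels) - cm_t) ** 2) for b in blocks]
    return float(np.sqrt(np.mean(np.square(vals))))
```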

2) Performance comparison for scenes with the same target but different backgrounds

Twenty-four sets of actual infrared images are used to compare the proposed metrics IDGB and SDLB with the traditional metrics; each set consists of 3 to 5 images of the same target against different backgrounds, all of size 360 x 240. A typical set is shown in Fig. 8, where both the global and the local background interference grow stronger from Fig. 8(a) to Fig. 8(c). Table II lists the evaluation results of IDGB, SDLB and the traditional metrics (SV, TSSIM and TTSIM) for the images shown in Fig. 8.

According to Table II, the metrics rank the three images from good to bad in different orders. TSSIM considers Fig. 8(c) the best and Fig. 8(b) the worst. TTSIM puts Fig. 8(a) first and Fig. 8(b) last. Neither TSSIM nor TTSIM reflects the real situation, because both treat the target and its background as a whole and pay insufficient attention to the target's characteristics.

TSSIM and TTSIM weaken the target feature because they classify the background region similar to the local background LB2 as a 'target-like' area. The proposed metrics IDGB and SDLB rank Fig. 8(a) best, Fig. 8(b) second and Fig. 8(c) worst, and are thus more accurate than TSSIM and TTSIM. Although SV yields a ranking similar to ours, IDGB and SDLB additionally identify the cause of the background interference in target detection.

3) Performance comparison for scenes with the same background but different targets

Even over the same background, different targets suffer different influence from it; that is, the disturbance that the background causes to target detection is related to the target's characteristics [7]. To compare the performance of the proposed metrics IDGB and SDLB with the traditional metrics on infrared images that have the same background but different targets, we add targets T1, T2 and T3 to a real infrared image, Fig. 9(a), synthesizing the three images shown in Fig. 9(b), Fig. 9(c) and Fig. 9(d), respectively. The size of Fig. 9(a) is 256 x 256, and T1, T2 and T3 are each 36 x 36. The grey level of T1 follows N(240, 10), T2 follows N(210, 20) and T3 follows N(180, 30), so both the grey-level similarity to the background and the internal variance grow from T1 to T3. Although the background is identical in Fig. 9(b), Fig. 9(c) and Fig. 9(d), the evaluation values of IDGB and SDLB climb accordingly, as Table III shows.

According to the SV values in Table III, the quality of Fig. 9(b), Fig. 9(c) and Fig. 9(d) is essentially the same (6.312, 6.319 and 6.317, respectively). TSSIM ranks Fig. 9(c) best, then Fig. 9(d), with Fig. 9(b) worst. The SV and TSSIM values are thus inconsistent with the actual situation; neither properly reflects the disturbance of the same background to different targets. TTSIM orders the images from best to worst as Fig. 9(b), Fig. 9(c) and Fig. 9(d), which matches the actual situation. IDGB and SDLB also give the correct order of image quality and, moreover, indicate the specific factors that affect it.


In the field of image signal processing, the metrics IDGB and SDLB are proposed in this paper to measure infrared target images by analysing the disturbance factors in target detection. Experimental results show that IDGB and SDLB are valid metrics for evaluating the quality of infrared target images and are more consistent with the real situation than existing metrics such as SV, TSSIM and TTSIM. Future work will apply IDGB and SDLB to the quality description of infrared target images, with the ultimate goal of designing a background-adaptive infrared target detection algorithm.


[1] A. Rogalski, "Semiconductor detectors and focal plane arrays for far-infrared imaging", Opto-Electronics Review, vol. 21, no. 4, pp. 406-426, 2013. [Online]. Available: s11772-013-0110-x

[2] Xiubao Sui, Qian Chen, Guohua Guo, "A novel non-uniformity evaluation metric of infrared imaging system", Infrared Physics & Technology, vol. 60, pp. 155-160, 2013.

[3] W.-H. Diao, X. Mao, H.-C. Zheng, "Image sequence measures for automatic target tracking", Progress In Electromagnetics Research, vol. 130, pp. 447-472, 2012. [Online]. Available: http://dx.doi.org/10.2528/PIER12050810

[4] Zheng Xin, Peng Zhen-ming, "Image segmentation based on activity degree with pulse coupled neural networks", Optics and Precision Engineering, vol. 21, no. 3, pp. 821-827, 2013.

[5] Wu Jinjian, Lin Weisi, Shi Guangming, "Reduced-reference image quality assessment with visual information fidelity", IEEE Trans. Multimedia, vol. 15, no. 7, pp. 1700-1705, 2013.

[6] Li Ming, Zhou Zhen-hua, Zhang Gui-lin, "Image measures in the evaluation of ATR algorithm performance", Infrared and Laser Engineering, vol. 36, no. 3, pp. 412-416, 2007.

[7] Qiao Li-yong, Xu Li-xin, Gao Min, "Survey of image complexity metrics for infrared target recognition", Infrared Technology, vol. 35, no. 2, pp. 88-96, 2013.

[8] D. E. Schmieder, M. R. Weathersby, "Detection performance in clutter with variable resolution", IEEE Trans. Aerospace and Electronic Systems, vol. AES-19, no. 4, pp. 622-630, 1983. [Online]. Available: http://dx.doi.org/10.1109/TAES.1983.309351

[9] Wu B, Ji H-B, Li P, "New method for moving dim target detection based on third-order cumulant in infrared image", Journal of Infrared and Millimeter Waves, vol. 25, no. 5, pp. 364-367, 2006.

[10] L. G. Clark, V. J. Velten, "Image characterization for automatic target recognition algorithm evaluations", Optical Engineering, vol. 30, no. 2, pp. 147-153, 1991. [Online]. Available: 10.1117/12.55784

[11] F. A. Sadjadi, M. E. Bazakos, "Perspective on automatic target recognition evaluation technology", Optical Engineering, vol. 30, no. 2, pp. 183-188, 1991. [Online]. Available: 10.1117/12.55788

[12] F. Sadjadi, "Measures of effectiveness and their use in comparative image fusion analysis", in IEEE Conf. Geosci Remote Sens, 2003, pp. 3659-3661.

[13] Honghua Chang, "Quantification of background clutter & its influence on target acquisition performance of EO imaging systems", Ph.D. dissertation, Xidian University, Xi'an, China, 2006.

[14] S. K. Ralph, J. Irvine, M. Snorrason, "An image metric-based ATR performance prediction", in Int. Conf. Artificial Intelligence and Pattern Recognition, 2005, pp. 192-197.

[15] H. Chang, J. Zhang, "New metrics for clutter affecting human target acquisition", IEEE Trans. Aerospace and Electronic Systems, vol. 42, no. 1, pp. 361-368, 2006.

[16] K. Okarma, "Constructed Polynomial Windows with High Attenuation of Sidelobes", Elektronika ir Elektrotechnika, vol. 19, no. 5, pp. 109-112, 2013. [Online]. Available: j01.eee.19.5.4368

[17] K. Okarma, "Extended Hybrid Image Similarity - Combined Full-Reference Image Quality Metric Linearly Correlated with Subjective Scores", Elektronika ir Elektrotechnika, vol. 19, no. 10, pp. 129-132, 2013.

[18] G. Aviram, S. R. Rotman, "Evaluation of human detection performance of targets and false alarms, using a statistical texture image metric", Optical Engineering, vol. 39, no. 8, pp. 2285-2295, 2000.

[19] D. Comaniciu, V. Ramesh, P. Meer, "Kernel-based object tracking", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, 2003. [Online]. Available: 10.1109/TPAMI.2003.1195991

Xin Zheng (1), Zhenming Peng (1), Jiehua Dai (2)

(1) School of Opto-Electronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China

(2) Sony Mobile in Sweden, Lund 22655, Sweden

Manuscript received May 6, 2014; accepted September 15, 2014.

This work is supported in part by the National Natural Science Foundation of China (No.61308102) and Beam Control Laboratory Foundation of Chinese Academy of Sciences (No. 2010LBC001).

TABLE I. VALUES OF THE METRICS PROPOSED IN THIS PAPER FOR THE IMAGES IN FIG. 7.

       Fig. 7(a)   Fig. 7(b)   Fig. 7(c)   Fig. 7(d)

IDGB   0.276       2.167       10.798      8.514
SDLB   0.216       0.625       0.176       0.854


TABLE II. EVALUATION RESULTS FOR THE IMAGES IN FIG. 8.

            SV      TSSIM   TTSIM    IDGB    SDLB

Fig. 8(a)   2.546   0.256   8226.1   0.306   0.363
Fig. 8(b)   2.590   0.261   6267.9   1.429   0.401
Fig. 8(c)   3.431   0.242   7282.5   4.497   0.519


TABLE III. EVALUATION RESULTS FOR THE IMAGES IN FIG. 9.

            SV      TSSIM   TTSIM    IDGB    SDLB

Fig. 9(b)   6.312   0.147   2451.9   0.336   0.339
Fig. 9(c)   6.319   0.083   1330.2   1.128   0.521
Fig. 9(d)   6.317   0.093   1325.7   3.399   0.687
COPYRIGHT 2014 Kaunas University of Technology, Faculty of Telecommunications and Electronics
Publication: Elektronika ir Elektrotechnika. Article Type: Report. Date: Oct 1, 2014.