
Comparing the Virtual Confocal Microscope Transform with the Direct Fourier Method

INTRODUCTION

Recent developments in medical microwave imaging for early cancer detection have created a need to better establish image formation techniques for diffracting waves (Kak and Slaney, 1988). This paper addresses the formation of images from diffracting radiation that has been propagated through, or scattered from, a target object.

A wide range of imaging applications using microwaves has been reported. Recent research has shown that microwave tomography is suitable for the detection of buried bodies, concrete inspection, the examination of polymer properties and the security scanning of luggage.

Medical imaging by diffracting radiation is currently receiving much attention as a potentially valuable diagnostic tool (Fear et al., 2002). Several factors drive these developments, the primary one being the potentially high contrast between tissue types (Fear et al., 2002). For typical human tissues the dielectric constant ranges from 4.4 (breast fat) to 64.5 (vitreous humour) and the conductivity from 0.3 (bone marrow) to 8.1 S·m⁻¹ (cerebrospinal fluid) (Gabriel et al., 1996). Of secondary, but significant, importance is the relative safety of microwave-based techniques compared with X-ray based tomographies. The high costs of Magnetic Resonance Imaging, Positron Emission Tomography and X-ray techniques are also important factors in the development of this technology. Whilst conventional X-ray tomography is enabled by relatively simple back-projection algorithms, simple ray methods break down when the radiation being used suffers diffraction.

A method of image formation is required which meets the following criteria:

1. No foreknowledge of the object's electrical parameters should be required in order to obtain good images.

2. No iterative solutions to the inverse problem for back-propagation should be required.

3. Cross-sectional images should be formed from a single view direction, or a minimal number of view directions, rather than requiring many views.

In Hassanein (2009), the development of a new imaging method based on the forward rather than backward propagation of recorded waves is presented. The method requires no a priori knowledge of the environment or of the target's shape, and forms cross-sectional images from a single view direction. Hassanein et al. begin by simulating a measurement of the wavefronts leaving the target and use these, along with a synthetic optical system, to form a confocal image by computing the wave propagation through the virtual optics. Previous theoretical work (Hassanein et al., 2008) tested the new method using point sources of unity amplitude which are either phase-locked together or, in another case, have phases determined by assuming they are scatterers reflecting signals from a separate point-source illuminator located outside the field of view. Images were formed using a pixel dimension of 6 x 4 mm in the horizontal plane.

In this paper, we present further testing of the new method and an objective evaluation of the results. We start with an overview of the theoretical description of the proposed method. A flow chart of the program is given, with a description of its steps and their aims. Monochromatic imaging using the new method, and the problems associated with it, are then considered. A mathematical method to evaluate images calculated using the new method is presented. Finally, we compare the efficacy of the new imaging method with that of the Direct Fourier Method.

Method: The Virtual Microscope Transform:

Fig. 1 shows a diagram of the basic concept of our imaging method. We form images by the same method used in confocal microscopy (Rowland and Nickless, 2000), whereby signals from an object are focused by a lens to form an image in a remote plane. By adjusting the relative position of the object and the lens, a cross-sectional image may be formed. Rather than using a real lens, we calculate the effect of propagating signals through a lens, exploiting the known phase and amplitude of the microwave fields scattered by the object. We first determine the focal length and location required for a lens to form an image, on its own axis, of each point in the required object field $(x', y'=0)$. We then position a virtual lens (not a real physical lens but a lens simulated in software) at this location. The wavefronts recorded in plane $U_L$, arriving from an object, are computationally propagated through a perfect thin lens ($L_1$), which results in a new set of wavefronts in plane $U'_L$. To then obtain the image of point $(x', y'=0)$, the new wavefronts are propagated to the appropriate image plane of the lens and the complex amplitude of that point recorded. The virtual lens is "moved" transversely to calculate the image of the object along the focal line ($x$-axis), and the focal length adjusted to interrogate different depth planes ($z$-axis). By using this technique, 3-D tomographic images can be obtained from a single or a small number of view directions.

The Virtual Microscope Transform:

Theoretical Description:

The following description is based on wave optics derived from the principles of Huygens and Fresnel (Goodman, 1996). Here monochromatic light is considered and diffraction processes are included (Goodman, 1996). The imaging system is divided into three sections. The first transformation deals with the propagation of waves from the object plane to the lens (in order to create synthetic datasets for testing). We have (see fig. 1) (Goodman, 1996):

$$U_L(\xi) = \frac{e^{ikz_1}}{i\lambda z_1} \int U_O(x')\, \exp\!\left[\frac{ik}{2z_1}\,(\xi - x')^2\right] dx' \qquad (1)$$

where the Huygens-Fresnel principle (in the form of the first Rayleigh-Sommerfeld solution) describes the propagation from plane $U_O(x', y'=0)$ to $U_L$. $U_O(x', y'=0)$ is the object plane and $U_L$ is the plane just before the lens. $z_1$ is the distance between planes $U_O(x', y'=0)$ and $U_L$. $(\xi, \eta=0)$ are the coordinates of the lens. $\lambda$ is the wavelength appropriate to the recorded waves passing through a vacuum, which is the medium assumed to be present within our synthetic optical system after the lens plane.

The second transformation is the phase transformation applied by the lens, which produces the focusing (Goodman, 1996):

$$U'_L(\xi) = U_L(\xi)\, \exp\!\left[-\frac{ik\xi^2}{2f}\right] \qquad (2)$$

Equation (2) describes the phase change for waves passing through the lens, assuming the lens has no attenuation or reflection. Here $U'_L$ is the wavefront just after the lens, $U_L$ is the wavefront just before the lens, $(\xi, \eta=0)$ are the coordinates of the centre of the lens, $k = 2\pi/\lambda$ is the wavenumber and $f$ is the focal length of the lens.

The third transformation deals with the propagation of waves from the lens to the image plane (Goodman, 1996):

$$U_I(x) = \frac{e^{ikz_2}}{i\lambda z_2} \int U'_L(\xi)\, \exp\!\left[\frac{ik}{2z_2}\,(x - \xi)^2\right] d\xi \qquad (3)$$

where again the Huygens-Fresnel principle is used, here to describe the propagation from plane $U'_L$ to plane $U_I(x, y=0)$. $z_2$ is the distance between the two planes. $U_I(x, y=0)$ is the final image plane. $(\xi, \eta=0)$ are the coordinates of the lens, and $\lambda$ is again the wavelength in vacuum. We use the paraxial approximation in both the phase transformation of the lens and the Huygens-Fresnel equations, which assumes that all rays travel nearly parallel to the axis of the system.
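These three transforms map directly onto discrete summations over the sampled wavefronts. The following Matlab(TM) sketch is a minimal illustration under our own assumptions (the function names, the sampling and the use of direct summation rather than an FFT-based propagator are ours, not the authors'); it evaluates the paraxial forms of equations (1)-(3) for a 1-D field:

% Paraxial Huygens-Fresnel propagation of a sampled 1-D field, equations (1)/(3).
% u: complex field samples at coordinates xin; xout: target coordinates;
% z: propagation distance; lam: wavelength.
function uout = fresnel_prop(u, xin, xout, z, lam)
    k  = 2*pi/lam;
    dx = xin(2) - xin(1);                      % source sample spacing
    uout = zeros(size(xout));
    for m = 1:numel(xout)
        r2 = (xout(m) - xin).^2;               % squared transverse offsets
        uout(m) = exp(1i*k*z)/(1i*lam*z) ...
                  * sum(u .* exp(1i*k*r2/(2*z))) * dx;   % Fresnel kernel
    end
end

% Thin-lens phase transformation, equation (2); xi is measured from the lens centre.
function u2 = lens_phase(u, xi, f, lam)
    k  = 2*pi/lam;
    u2 = u .* exp(-1i*k*xi.^2/(2*f));          % quadratic phase of a perfect thin lens
end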

Implementation of the Algorithm:

We use the transforms (1), (2) and (3) to build an algorithm in Matlab(TM). Two variants of the image-formation algorithm are tested, both based on the mathematical method described above. One uses a single RF frequency to form (monochromatic) images; the other combines images from multiple angles into a single image. The algorithm starts at step 1 with the measured (or computed) amplitude and phase of the wavefronts arriving at plane $U_L$, scattered from the objects. In step 2 it propagates these wavefronts through the virtual lens, whose focal length and position have been chosen to form an image of a particular plane within the target object, giving plane $U'_L$. In step 3 these wavefronts are propagated to the plane where we expect the lens to form an image, and the central amplitude and phase are recorded before the lens centre is translated to interrogate the next pixel. This process is repeated until all the pixels in the object cross-section have been interrogated. At present we only perform 2-D cross-sectioning from a 1-D array of simulated measurements, but it is straightforward to extend this to a 2-D wavefront measurement forming a 3-D tomographic image.
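As a concrete sketch of steps 1-3, the pixel scan can be organised as below. This is our own illustrative arrangement, not the authors' code: it assumes the virtual lens sits in the measurement plane, that uL holds the recorded wavefront on the measurement line xL (its synthetic generation is sketched in the testing subsection below), and that the thin-lens equation $1/f = 1/z_o + 1/z_i$ fixes the focal length for each depth plane:

lam  = 3e8/40e9;                      % wavelength at 40 GHz (~7.5 mm)
xpix = -0.125:0.0025:0.125;           % transverse pixels, 2.5 mm pitch (assumed extent)
zpix = 0.30:0.0025:0.55;              % depth planes to interrogate (assumed extent)
zi   = 0.2;                           % chosen lens-to-image distance (assumed)
img  = zeros(numel(zpix), numel(xpix));
for iz = 1:numel(zpix)
    zo = zpix(iz);                    % object-plane-to-lens distance for this depth
    f  = 1/(1/zo + 1/zi);             % thin-lens equation gives the required focal length
    for ix = 1:numel(xpix)
        xc  = xpix(ix);                            % translate the lens centre to this pixel
        uL2 = lens_phase(uL, xL - xc, f, lam);     % step 2: equation (2)
        uI  = fresnel_prop(uL2, xL, xc, zi, lam);  % step 3: equation (3), on-axis image point
        img(iz, ix) = uI;             % record the complex amplitude (display abs(img))
    end
end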

The multiple-angle variant uses the 'imrotate' command in Matlab(TM) to rotate the object and computes a monochromatic image for each of a range of angles. In steps 5 and 6, individual monochromatic images are formed for each angle. In step 7 all the images are combined to generate a final image. Two methods of combination are investigated, as described later.
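In outline the multi-angle variant looks as follows (a sketch under our own assumptions: obj is the object-plane amplitude grid generated as in the next subsection, vcm_image is assumed to wrap both the synthetic-measurement step and the pixel scan above, imrotate requires the Image Processing Toolbox, and in practice the per-angle images may need counter-rotating into a common frame before summation):

angles = 0:90:270;                                % e.g. four view directions
stack  = zeros(numel(zpix), numel(xpix), numel(angles));
for a = 1:numel(angles)
    objR = imrotate(obj, angles(a), 'bilinear', 'crop');  % rotate the object (steps 5-6)
    stack(:,:,a) = vcm_image(objR, lam);          % one monochromatic image per angle
end
final = combine(stack);                           % step 7: see equations (4) and (5)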

In order to test the algorithms we use a single test object plane to generate synthetic measured wavefronts at $U_L$. The object plane used for our images is the 'Modified Shepp-Logan' phantom as defined in Appendix I. It contains an array of point sources on a 101 x 101 pixel grid, as shown in fig. 3. The point sources have different amplitudes and are phase-locked together, with initial phase zero. Images are formed using a pixel dimension of 2.5 x 2.5 mm in the horizontal plane. To obtain input wavefronts for testing the algorithm, we forward-propagate the object's wavefronts into a common plane following the same assumptions as for equations (1)-(3).
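A sketch of this synthetic-data step, under our own assumptions: the Matlab(TM) Image Processing Toolbox function phantom generates the 'Modified Shepp-Logan' amplitude grid (equivalently, the code of Appendix I can be evaluated on the grid), and each row of point sources is forward-propagated to the measurement line with the same Fresnel kernel as above:

N     = 101;
obj   = phantom('Modified Shepp-Logan', N);  % point-source amplitudes, initial phase zero
pitch = 0.0025;                              % 2.5 mm pixel pitch
lam   = 3e8/40e9;                            % 40 GHz
xsrc  = ((1:N) - (N+1)/2) * pitch;           % transverse source coordinates
xL    = linspace(-0.25, 0.25, 201);          % measurement line (assumed extent)
z0    = 0.4;                                 % stand-off to the phantom centre (assumed)
uL    = zeros(size(xL));
for row = 1:N                                % superpose the wavefronts of each source row
    z  = z0 + (row - (N+1)/2) * pitch;       % depth of this row of sources
    uL = uL + fresnel_prop(obj(row,:), xsrc, xL, z, lam);
end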

Monochromatic Imaging from Single View:

Images calculated for the 'Modified Shepp-Logan' phantom using monochromatic signals with a wavelength of approximately 0.0074 m (40 GHz) are shown in figs. 4 and 5. The object of fig. 3 is clearly imaged in fig. 4, with gray halos surrounding it as a consequence of the defocusing of its image as the lens is scanned through the image space. Bright artifacts are seen in the space above the skull because of the focusing effect of the lens; the arrows mark their exact locations.

Fig. 5 shows an image computed under identical conditions but with the object rotated by 180 degrees; the Matlab(TM) function 'imrotate' is used to rotate the object of fig. 3 before calculating the corresponding image. The result is similar to fig. 4, but the bright dots marked by the arrows now appear at the bottom of the image rather than at the top. These artifacts arise from the focusing effect along the view direction of the lens. The problem can be alleviated by combining the images processed from the different angles of view.

Monochromatic Imaging from Multiple Views:

Using different angles of view to image the object space produces interference effects for each of the angles on a range of different spatial scales. Consequently, by combining image information from each angle, these interference effects can be reduced.

Ideally, one would like to combine the single-frequency images in such a way as to make maximum use of the available phase and amplitude data.

One approach, which uses all available phase and amplitude information, would be to combine N images via coherent summation using the equation:

$$I = \left|\sum_{n=1}^{N} a_n e^{i\phi_n}\right| \qquad (4)$$

where $I$ is the net amplitude of each pixel in the combined image, $a_n$ is the electric-field amplitude of that pixel in the $n$-th monochromatic image and $e^{i\phi_n}$ carries its phase. The distinctive aspect of this way of summing is that it imitates destructive and constructive interference, which allows a single bright fringe to form, as in interference phenomena, when the net path difference over every combined angle is zero at a particular point in space. Fig. 6 shows the image obtained by coherent summation. The figure clearly shows the object, surrounded by a series of gray dots; these dots combine to form the artifacts around the object.

In practice the phase of each point scatterer in an object will not be identical. In general one would illuminate the object with a single source outside the imaged field, and the phase of the signals scattered by each point will then vary according to the net optical path between the illuminating source and that point. Constructive interference only forms when the phases of the signals sourced at each point are identical. If the points are illuminated remotely, the initial phase of the scattered signals varies with the view angle and coherent summation fails. One way to deal with this is to compute the expected initial phase for each angle at each point in the object plane, but this would require foreknowledge of the optical path between the illuminator and each point. Since this is a requirement we would prefer to avoid, we relax the need to utilise the phase and move on to incoherent summation techniques, where absolute phase is irrelevant.

Incoherent summation can be applied using the equation:

$$I = \sum_{n=1}^{N} \left|a_n e^{i\phi_n}\right| \qquad (5)$$

where $I$ is the net amplitude of the image, $a_n$ is the amplitude of each point considered and $e^{i\phi_n}$ represents its phase. Here the phase of each individual image pixel is lost in the summation. This is essentially the same process as combining different view directions in an optical camera to obtain the best view of the object under consideration.
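Given a stack of the N complex monochromatic images along the third dimension (as in the multi-angle sketch above), both combinations are one-liners:

Icoh   = abs(sum(stack, 3));   % coherent summation, equation (4): fields add, then magnitude
Iincoh = sum(abs(stack), 3);   % incoherent summation, equation (5): phase is discarded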

Fig. 7 shows the image formed from the object source of fig. 3 with four angles combined using a single wavelength of approximately 0.0074 m. At first glance it is quite similar to the image formed via coherent summation shown in fig. 6, and both figs. 6 and 7 could be considered good images of the object field of fig. 3. In the next section, a mathematical tool is suggested to quantify the degree of similarity between these images and the object being imaged.

Comparing the Virtual Confocal Microscope Technique with the Direct Fourier Method:

Thus far we have shown that a method based on the principles of physical optics, namely the Virtual Confocal Microscope (VCM), can generate cross-sectional images of the 'Modified Shepp-Logan' source. However, the work remains incomplete unless compared with existing image-reconstruction methods. To test the success or failure of our method, a previously developed imaging method is used to calculate images of the same object source of fig. 3, so that the results can be compared.

Gottlieb et al. (2000) consider the direct Fourier method (DFM) for the two-dimensional (2-D) computed tomography problem of reconstructing an image from its given X-ray projections. They consider the case where parallel beams are used to obtain the projection of the object along a line. The projections are calculated mathematically using the Radon transform of the object under consideration. The theoretical solution to the reconstruction problem is given by applying the inverse Radon transform to the Radon transform. Computational methods are used, which require discretization of the Radon transform; many such methods have been developed, and commercial tomography machines have been in use for many years. The basic idea of the DFM is to take a Fourier transform of the projections and relate it to the Fourier transform of the image function. The slice theorem dictates that the one-dimensional Fourier transform of a projection of an object equals a slice through the two-dimensional Fourier transform of the same object (Walden, 2000). The computational implementation takes the inverse discrete Fourier transform of the discrete Fourier transform of the projections, using the fast Fourier transform (FFT).

In subsection A, a brief mathematical introduction to the DFM is given and the results obtained from imaging the object of fig. 3 using the DFM are discussed. In subsection B, two mathematical methods to evaluate the obtained images are introduced, and the results from the new imaging method discussed in this paper are compared with those of the DFM.

Numerical Results:

For the object $U_O$ of fig. 3, the density function $f(x', z')$ and the parallel X-ray beams passing through the object are shown in fig. 8. The line $L$ representing a beam passing through the object is defined by $r$ and $\theta$. The measured value of the beam after passing through the object is the integral of $f(x', z')$ along the line $L$, as follows (Gottlieb et al., 2000):

$$p(r,\theta) = \int_{L} f(x',z')\, ds \qquad (6)$$

$p(r, \theta)$ is the projection of the object, and the right-hand side of the equation is the Radon transform of $f(x', z')$. The one-dimensional Fourier transform of $p(r, \theta)$ in the $r$-direction is (Gottlieb et al., 2000):

$$\hat{p}(\rho,\theta) = \int p(r,\theta)\, e^{-2\pi i\rho r}\, dr \qquad (7)$$

In the frequency domain a filter can be applied to attenuate undesired frequencies; in our calculations the Hamming filter is used in some of the cases, as shown later.

The two-dimensional Fourier transform of the object $f(x', z')$ is (Gottlieb et al., 2000):

$$\hat{f}(\xi,\eta) = \iint f(x',z')\, e^{-2\pi i(\xi x' + \eta z')}\, dx'\, dz' \qquad (8)$$

As described at the beginning of the section, the slice theorem relates $\hat{f}(\xi, \eta)$ to $\hat{p}(\rho, \theta)$ as follows (Gottlieb et al., 2000):

$$\hat{f}(\rho\cos\theta,\ \rho\sin\theta) = \hat{p}(\rho,\theta) \qquad (9)$$

from which we can recover the image $f(x', z')$ by using the inverse Fourier transform. Linear interpolation is used to evaluate the transform on the Cartesian frequency grid from the polar samples $\hat{p}(\rho, \theta)$. A program is written in Matlab(TM), with hand-written functions, to calculate images using equations (6)-(9).
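The following is a minimal sketch of such a DFM reconstruction. It is our own illustrative code, not the authors': it assumes the Image Processing Toolbox radon function for the projections, builds the Hamming window explicitly, interpolates the real and imaginary parts separately, and the FFT centring shown is one common convention:

function img = dfm_reconstruct(obj, nAngles, useFilter)
    theta = (0:nAngles-1) * 180/nAngles;            % projection angles in degrees
    P  = radon(obj, theta);                         % projections p(r,theta)
    nr = size(P, 1);
    Pf = fftshift(fft(ifftshift(P, 1), [], 1), 1);  % centred 1-D FFT along r, equation (7)
    if useFilter
        w  = 0.54 - 0.46*cos(2*pi*(0:nr-1)'/(nr-1));% Hamming window, peak at rho = 0
        Pf = Pf .* w;                               % attenuate high radial frequencies
    end
    rho = ((-floor(nr/2)):(ceil(nr/2)-1))' / nr;    % radial frequencies (cycles/pixel)
    th  = deg2rad(theta);
    XI  = rho * cos(th);   ETA = rho * sin(th);     % polar sample locations, slice theorem (9)
    N   = size(obj, 1);
    fx  = ((-floor(N/2)):(ceil(N/2)-1)) / N;
    [FX, FY] = meshgrid(fx, fx);                    % target Cartesian frequency grid
    Fr = griddata(XI(:), ETA(:), real(Pf(:)), FX, FY, 'linear');  % linear interpolation of
    Fi = griddata(XI(:), ETA(:), imag(Pf(:)), FX, FY, 'linear');  % real and imaginary parts
    F2 = Fr + 1i*Fi;   F2(isnan(F2)) = 0;           % zero outside the sampled disc
    img = real(fftshift(ifft2(ifftshift(F2))));     % inverse 2-D FFT recovers f(x',z')
end

A call such as dfm_reconstruct(obj, 180, true) then corresponds to the 180-view Hamming-filtered case discussed below.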

In fig. 9, an image calculated by the DFM by combining 36 views is shown; no filter is used in reconstructing it. The image is distorted and the object is only vaguely visible. We therefore apply a filter to suppress the high frequencies causing the distortions and so improve the image quality. In fig. 10, an image calculated by the DFM is shown which again combines 36 views of the object of fig. 3. In this case the Hamming filter is used to suppress the undesired artifacts: the filter multiplies the 1-D Fourier transform of each projection of the object, attenuating the undesired frequencies. The image is clearer than the one in fig. 9. However, gray spots are still seen around the skull, so we proceed to combine more view directions into one image.

In fig. 11, 180 views are combined to calculate the image shown, instead of the 36 used before; again, the Hamming filter is used to attenuate the undesired artifacts. The image is clearer than either of those in figs. 9 and 10, showing fairly clearly the locations of the ellipses of the Shepp-Logan object, with the gray spots surrounding the skull reduced to a much lower-level variation in the background.

Evaluating and Comparing Results:

In this subsection, two mathematical tools are used to evaluate how accurately each of the imaging methods described represents the object being imaged. The first is the sum of squared errors over all pixels:

$$E = \sum \left(U_I - U_O\right)^2 \qquad (10)$$

where $U_I$ represents the image, $U_O$ represents the object under consideration and $E$ represents the total error in the image. When calculated, we find that the accuracy perceived by eye in an image is not reflected in the values of the error $E$. Examining the Shepp-Logan phantom of fig. 3, there are two well-defined areas: one outside the skull of the human head and one inside it. The area outside the skull is comparable in size to the area inside, so the error in accurately representing either area strongly affects the total error of the whole image. The skull has a well-defined outer border which cannot be missed, and the error arising from the area outside the skull can safely be ignored. A mask is therefore calculated which has amplitude one over the area inside the skull, including the skull itself, and amplitude zero elsewhere. The mask multiplies each image $U_I$ before equation (10) is applied, so that equation (10) measures the error in imaging the area inside the skull rather than the area outside it.
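In code the masked error is direct (a sketch; building the mask from the outermost skull ellipse of Appendix I is our own choice, with img and obj the image and object grids on the same [-1, 1] coordinate system):

[X, Y] = meshgrid(linspace(-1, 1, N), linspace(-1, 1, N));
mask   = ((X/0.69).^2 + (Y/0.92).^2) <= 1;    % one inside the skull (outer ellipse), zero outside
E      = sum(sum((mask .* (img - obj)).^2));  % equation (10), restricted to the skull interior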

More reasonable results are then obtained, shown in fig. 12, where the x-axis represents the number of images combined to form the final single image and the y-axis the corresponding value of $E$. For the DFM, the value of $E$ drops from an average of 1750 when no filter is used to 400 when a filter is used, with the minimum reached when combining 180 filtered views. For the incoherent VCM, $E$ reaches a minimum of 43.1, while for the coherent VCM it reaches a minimum of 34.38. Whether images are summed incoherently or coherently, the VCM is superior to the DFM.

The second tool is the Normalized Cross Correlation (NCC), as implemented by the Matlab(TM) function 'normxcorr2'. The NCC is a measure of how well two signals match (Saravanan and Surender, 2013); we use the same concept to measure how well the object and each of the images match. The higher the value of the NCC, the greater the similarity between an image and the object.
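For example (a sketch; normxcorr2 is an Image Processing Toolbox function, and we take the peak of the correlation surface as the match score):

cc  = normxcorr2(obj, img);   % normalized cross-correlation surface
ncc = max(cc(:));             % peak value; 1 would indicate a perfect match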

In fig. 13 the x-axis represents the number of images combined to form the final single image and the y-axis the corresponding NCC value. In fig. 13b, the NCC value rises to a maximum of 0.567 after combining 180 views when applying the DFM without a filter, whereas the maximum NCC is 0.9 when combining 180 views processed with the Hamming filter. In fig. 13a, when using the VCM either coherently or incoherently, the NCC value rises to about 0.97. The proposed method thus succeeds in producing final images with a very high NCC.

By both measures, NCC and $E$, the VCM provides better results than the DFM. The reason is that the VCM uses both phase and amplitude to calculate an image, whereas the DFM uses only amplitude. This additional information about the object significantly increases the quality of the images produced by the VCM and significantly decreases the number of view directions required to produce high-quality images.

Unfortunately, the time required to process images using the VCM is much greater than that required by the DFM. In fig. 14a, the time required to process and combine 4 images from different view directions using the VCM is 675.3 s, whereas fig. 14b shows that calculating and combining 180 views using the DFM requires only 383.4 s. Comparing figs. 14a and 14b, the DFM needs roughly half the time to calculate 45 times as many views as the VCM combines. The reason is that the Fourier transforms used in the DFM are computationally very fast compared with evaluating the Huygens-Fresnel integrals used in the VCM.

Conclusion:

Two aims have been achieved in this paper. First, we tested the VCM method using an object with a range of amplitudes, namely the 'Modified Shepp-Logan' phantom. Secondly, we evaluated the results of the VCM against those of another well-known method, the DFM. Cross-sectional imaging of an object field without back-propagation, using single view angles and without requiring foreknowledge of the object's environment, has thus been further tested. The method presented is versatile in forming sectioned images from a single view direction, but the focusing effect of the lens is a problem for monochromatic imaging. The multiple-view-direction approach is successful for both coherent and incoherent methods of image combination, but only incoherent methods can form good images without knowledge of the propagation characteristics of the environment and object plane. To evaluate the results mathematically, the error $E$ and the Normalized Cross Correlation (NCC) were used; the VCM provided higher-quality images than the DFM under both measures. However, the time required to process an image using the VCM is many times that required by the DFM. The VCM depends on phase as well as amplitude, while the DFM depends on amplitude only. Future work will expand on these results, and the problem of the processing time will be investigated. In this paper non-diffracting beams were used to process the images shown; we aim next to use diffracting waves instead and to compare the results with those of the VCM.

ARTICLE INFO

Article history:

Received 15 November 2013

Received in revised form 25 December 2013

Accepted 31 December 2013

Available online 15 February 2014

Appendix I:

As in (Gottlieb et al., 2000), the 'Modified Shepp-Logan' object contains 10 ellipses. The ellipses overlap in places, and a shared area has an amplitude equal to the sum of the amplitudes of the ellipses sharing it. The main lines of Matlab(TM) code used to generate the object of fig. 3 are as follows (Gottlieb et al., 2000):

f = 0;
if (x/0.69)^2 + (y/0.92)^2 <= 1, f = 1; end
if (x/0.6624)^2 + ((y + 0.0184)/0.874)^2 <= 1, f = f - 0.8; end
xi  = (x - 0.22)*cos(0.4*pi) + y*sin(0.4*pi);
eta = -(x - 0.22)*sin(0.4*pi) + y*cos(0.4*pi);
if (xi/0.31)^2 + (eta/0.11)^2 <= 1, f = f - 0.2; end
xi  = (x + 0.22)*cos(0.6*pi) + y*sin(0.6*pi);
eta = -(x + 0.22)*sin(0.6*pi) + y*cos(0.6*pi);
if (xi/0.41)^2 + (eta/0.16)^2 <= 1, f = f - 0.2; end
if (x/0.21)^2 + ((y - 0.35)/0.25)^2 <= 1, f = f + 0.1; end
if (x/0.046)^2 + ((y - 0.1)/0.046)^2 <= 1, f = f + 0.1; end
if (x/0.046)^2 + ((y + 0.1)/0.046)^2 <= 1, f = f + 0.1; end
if ((x + 0.08)/0.046)^2 + ((y + 0.605)/0.023)^2 <= 1, f = f + 0.1; end
if (x/0.023)^2 + ((y + 0.605)/0.023)^2 <= 1, f = f + 0.1; end
if ((x - 0.06)/0.023)^2 + ((y + 0.605)/0.046)^2 <= 1, f = f + 0.1; end

REFERENCES

"GPR and microwave tomography imaging of buried objects using the short-term (picosecond) videopulses", The International Society for Optical Engineering, v 4084, 2000, pp:530-535.

"Microwave frequency dielectric properties of poly(vinylidene fluoride) films", Polymer Physics, v 33, n 4, Mar, 1995, p 685-690.

"Microwave inspection of luggage for contraband materials using imaging and inverse-scattering algorithms", Research in Nondestructive Evaluation, v 7, n 2-3, 1995, p 153-168.

"Microwave nondestructive determination of sand-to-cement ratio in mortar", Research in Nondestructive Evaluation, v 9, n 4, 1997, p 227-238.

Fear, E.C., X. Li, S.C. Hagness, M.A. Stuchly, 2002. "Confocal Microwave Imaging for Breast Cancer Detection: Localization of Tumors in Three Dimensions," IEEE Transactions on Biomedical Engineering, 49(8): 812-822.

Gabriel, S., R.W. Lau, C. Gabriel, 1996. "The dielectric Properties of Biological Tissues: III. Parametric Models for the Dielectric Spectrum of Tissues", Phys. Med. Biol., 41:2271-2293.

Goodman, J.W., 1996. Introduction to Fourier Optics, 2nd ed: McGraw-Hill International Editions, Ch. 5.

Gottlieb, D., B. Gustafsson, P. Forssen, 2000. "On the Direct Fourier Method for Computer Tomography," IEEE Transactions on Medical Imaging, 19(3).

Hassanein, A.M.D.E., 2009. "Imaging Techniques Using Ultra Wide Band Waves", PhD Thesis, University of Oxford.

Hassanein, A.M.D.E., D.J. Edwards, et al., 2008. "UWB Tomography via Simulated Optical Systems". International Symposium on Antenna & Propagation (ISAP08). Taipei, Taiwan.

MathWorks, Matlab(TM) documentation: http://www.mathworks.com/help/matlab/

Kak, A.C. and M. Slaney, 1988. Principles of Computerized Tomographic Imaging, IEEE Press.

Rowland, R.E. and E.M. Nickless, 2000. "Confocal Microscopy Opens the Door to 3-Dimensional Analysis of Cells." Confocal Microscopy Bioscene, 26(3):4.

Saravanan, C., M. Surender, 2013. "Algorithm for Face Matching Using Normalized Cross-Correlation," International Journal of Engineering and Advanced Technology (IJEAT), 2(4).

Walden, J., 2000. "Analysis of the Direct Fourier Method for Computer Tomography," IEEE Transactions on Medical Imaging, 19(3).

Corresponding Author: Ahmed M.D.E. Hassanein, System & Information Department, National Research Centre, Dokki Cairo, Egypt.

E-mail: ahmed22@aucegypt.edu.