
Registration of Partially Focused Images for 2D and 3D Reconstruction of Oversized Samples

1. Introduction

This paper follows up on recently published works [1, 2] on 3D reconstruction methods for images acquired by confocal and nonconfocal microscopes. Confocal microscopes are optical microscopes distinguished from other optical microscopes by two unique properties: they have a very small depth of optical field, and their advanced hardware is capable of removing nonsharp points from images. Points of the object very near the focal plane are imaged sharply and appear as light areas in the optical sections (see Figure 1(a)), whereas parts lying above or beneath the focal plane are invisible and are represented by black regions. Analogous regions can be observed with conventional microscopes fitted with the same lens (see Figure 1(b)). Both the confocal and nonconfocal snapshots (optical sections) show sharp regions of very similar shapes. The only difference concerns the nonsharp regions, which manifest themselves as blurred regions in the nonconfocal sections, whereas in the confocal sections they are missing (black regions). Nevertheless, the shapes of the confocal and nonconfocal out-of-focus regions are very similar.

To create a sharp image (2D reconstruction), it is necessary to obtain a series of images of the same object, each with different focusing, so that (in the ideal case) each point of the object is focused in one of the images. The sharp parts are identified and composed into a new image. There is also a simple method for constructing a rough 3D model of the object in which all sharp points belonging to the same image are assigned the same height.
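To make the composition step concrete, the following minimal sketch (our illustration, not the authors' published implementation; the window size and the squared-Laplacian focus measure are assumptions) selects, for every pixel, the value from the stack slice with the highest local sharpness and uses the slice index as the rough height:

```python
# A minimal focus-stacking sketch (illustration only): for each pixel, take the
# value from the slice with the highest local sharpness; the winning slice
# index serves as the rough "stair" height described in the text.
import numpy as np
from scipy import ndimage

def stack_sharp(images, win=9):
    """images: sequence of registered grayscale frames of equal shape (H, W)."""
    stack = np.asarray(images, dtype=float)                  # (N, H, W)
    sharp = np.empty_like(stack)
    for k, img in enumerate(stack):
        lap = ndimage.laplace(img)                           # second-derivative response
        sharp[k] = ndimage.uniform_filter(lap * lap, size=win)  # local focus measure
    best = np.argmax(sharp, axis=0)                          # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    composite = stack[best, rows, cols]                      # sharp 2D reconstruction
    return composite, best                                   # best = stair 3D model
```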

There are two principal problems in this 2D and 3D reconstruction. The work [3] deals with methods that detect focused and blurred parts in images taken in nonconfocal mode and that are able to assign the corresponding focal plane to each surface point. In this way, a 3D stair-approximation of the studied surface can be constructed (see Figure 2). However, this approximation is not sufficient for many applications. It is necessary to specify the height of each point lying between two focal planes and to construct a smooth approximation (see Figure 3).

The second problem is the projection used. In the case of the confocal microscope, we can assume that the field of view is small and that the projection is parallel. The paper [4] and many other works [3, 5-11] presume this projection property. In parallel projection, all images capture a field of view of the same size. Figure 4 shows the first image (a) and the thirtieth image (b) of a series of fifty-two photographs of the fracture surface of hydrated cement paste acquired by the confocal microscope Olympus LEXT 3100. Corresponding pixels have the same coordinates in the separate partially focused images (compare the crosses and arrows in Figure 4). However, this assumption is not valid in the case of larger samples; the angle between the projection lines is not negligible, and the fields of view differ. Figure 5 shows the first image (a) and the forty-third image (b) of a series of seventy photographs of a sandstone sample (locality Brno-Hady, Czech Republic) taken with a Canon DSLR camera. The projection is central, and the fields of view (and therefore also the coordinates of corresponding pixels) of the individual images are clearly different (see the crosses in Figure 5).

Therefore, we set two goals in the conclusion of the work [1]: firstly, to specify the height of each point between two sharpness planes and to construct a smooth approximation. This goal has already been met in [2]; for the result of this new reconstruction, see Figure 3. Some other works [3, 8, 10] also deal with this problem.

The second goal, the treatment of the different sizes of the fields of view, is discussed in this paper.

2. Materials and Methods

2.1. Materials and Equipment. The methods discussed in this article are suitable for 2D and 3D processing of samples that are oversized for confocal microscopes. Such samples may measure a few centimetres or more (e.g., geological samples). Data may be acquired by a CCD camera or a conventional digital camera with a narrow sharpness zone. The camera must be connected to a stepping device that changes the distance between the camera and the scanned sample and thus the position of the camera's sharpness zone.

Data used in this paper were acquired with special hardware designed and assembled by Professor Tomas Ficker from the Faculty of Civil Engineering of Brno University of Technology. The hardware consists of a Canon EOS 600D camera fitted with an EF 100 mm f/2.8 Macro USM lens. The camera is mounted on a motorized rigid stand that enables movement in the vertical direction. The vertical stepping movement is governed by software running on a PC. See [12] for more details.

Separate images are taken in central projection; that is, they differ at least in the scale used. If scale were the only difference, it could be handled with elementary mathematics alone, since the image size is then proportional to the camera shift (see Figure 6 on the left).
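For illustration, under a simple pinhole model (our assumption for this sketch; $d_1$ denotes the distance of the first camera position from the sample plane and $\Delta z$ the constant vertical step), the scale of the $k$-th image relative to the first would be

$$s_k = \frac{d_1}{d_1 + (k-1)\,\Delta z} \approx 1 - \frac{(k-1)\,\Delta z}{d_1},$$

so for steps small compared to $d_1$ the scale decreases approximately linearly with the camera shift.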

However, the practical situation is more complicated. The images differ not only in scale but also in content (different parts are focused in different images). Due to mechanical errors, the step along the z-axis may not be fully constant; the images can also be mutually shifted along the x- or y-axis and even rotated. Image registration is further complicated by the nonplanarity of the samples (see Figure 6 on the right).

In this paper we describe the preprocessing of a series of such images for 2D and 3D reconstruction. A suitable tool for this preprocessing is the Fourier transform.

2.2. Fourier Transform. This is an integral transform that converts a function of one or more variables (in the spatial domain) into another function of the same number of variables (in the frequency domain) [3, 13, 14]. Since the Fourier transform of a function is in general complex-valued and since a digital image is a function of two spatial variables, we deal here for simplicity with functions $f: \mathbb{R}^2 \to \mathbb{C}$.

Digital images are rectangles; for simplicity we deal here with square images only. All computations that use the Fourier transform are performed with the discrete Fourier transform (more precisely, with algorithms that speed it up, such as the fast Fourier transform (FFT)). However, some derivations of image processing methods are better performed with the Fourier transform of functions with domain $\mathbb{R}^2$, since operations such as rotation and rescaling are easily modeled on these functions.
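As a small illustration of this computational setting (NumPy's FFT routines; the random array is a stand-in for a square grayscale image):

```python
# A minimal sketch of the discrete transforms used in the sequel.
import numpy as np

img = np.random.rand(512, 512)    # stand-in for a square grayscale image
F = np.fft.fft2(img)              # discrete Fourier transform (computed by FFT)
A = np.abs(F)                     # amplitude spectrum
A_centred = np.fft.fftshift(A)    # zero frequency moved to the array centre
back = np.fft.ifft2(F).real      # inverse transform recovers the image
assert np.allclose(back, img)
```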

The standard definition of the Fourier transform of a function of two variables is as follows [15].

Definition 1 (Fourier transform). Let $f(x,y) : \mathbb{R}^2 \to \mathbb{C}$ be a function such that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left| f(x,y) \right| \mathrm{d}x\,\mathrm{d}y \tag{1}$$

exists and is finite. The Fourier transform of $f$ is

$$F(\xi,\eta) = \mathcal{F}\{f(x,y)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\, e^{-i(x\xi + y\eta)}\, \mathrm{d}x\,\mathrm{d}y. \tag{2}$$

Function $F(\xi,\eta)$ is also called the Fourier spectrum of function $f$. Function $A(\xi,\eta) = \left| F(\xi,\eta) \right|$ is called the amplitude spectrum of $f(x,y)$.

Definition 2 (inverse Fourier transform). Let $F(\xi,\eta) : \mathbb{R}^2 \to \mathbb{C}$ be a function such that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left| F(\xi,\eta) \right| \mathrm{d}\xi\,\mathrm{d}\eta \tag{3}$$

exists and is finite. The inverse Fourier transform of function $F$ is the function

$$\mathcal{F}^{-1}\{F(\xi,\eta)\}(x,y) = f(x,y) : \mathbb{R}^2 \to \mathbb{C} \tag{4}$$

defined as

$$f(x,y) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(\xi,\eta)\, e^{i(x\xi + y\eta)}\, \mathrm{d}\xi\,\mathrm{d}\eta. \tag{5}$$

2.3. Phase Correlation. For processing and analyzing the images, it is necessary to transform them so that the studied structures occupy the same position in all the images. Finding this transformation is the task of image registration. In some applications we assume that the images are only shifted; in others we allow shift, rotation, and scale change (i.e., a similarity), a general linear transformation, or even a general transformation.

The methods used for registration depend on the expected transformation and on the structures in the image. Some methods use corresponding structures or points in the images and then find a global transformation from the measured positions of these structures or points [16-18]. Such methods require the structures to be clearly visible. Other methods are based on correlation and work with the image as a whole. The phase correlation proved to be a powerful tool (not only) for registration of partially focused images. For functions $f_1, f_2$ it is defined as

$$P_{f_1,f_2}(x,y) = \mathcal{F}^{-1}\left\{ \frac{F_1(\xi,\eta)\,\overline{F_2(\xi,\eta)}}{\left|F_1(\xi,\eta)\right| \left|F_2(\xi,\eta)\right|} \right\}(x,y) \tag{6}$$

and its modification as

$$P^{H,p,q}_{f_1,f_2}(x,y) = \mathcal{F}^{-1}\left\{ H(\xi,\eta)\, \frac{F_1(\xi,\eta)\,\overline{F_2(\xi,\eta)}}{\left(\left|F_1(\xi,\eta)\right| + p\right)\left(\left|F_2(\xi,\eta)\right| + q\right)} \right\}(x,y), \tag{7}$$

where the bar denotes complex conjugation, $H(\xi,\eta)$ is a bounded real function such that $H(\xi,\eta) = H(-\xi,-\eta)$, and $p, q > 0$ are arbitrary constants. It can be proved that for real functions $f_1, f_2$ the phase correlation function is real [14]. This is of great value, since it enables us to search for extrema of the phase correlation function.

2.4. Shifted Images. The phase correlation function can also be used for estimation of image shift. The method was first published by Kuglin and Hines [19].

It is easy to see that the phase correlation function of a function with itself is the $\delta$-distribution; that is,

$$P_{f,f}(x,y) = \delta(x,y). \tag{8}$$

The $\delta$-distribution is a generalized function for which

$$\delta(x,y) = 0 \quad \text{for } (x,y) \neq (0,0), \qquad \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(x,y)\, \mathrm{d}x\,\mathrm{d}y = 1. \tag{9}$$

In illustrations of the $\delta$-distribution, the maximum pixel value is used instead of infinity. The illustration of the phase correlation of a function with itself can be seen in Figure 7 on the left.

If two functions are shifted in their arguments, that is, $f_2(x,y) = f_1(x - x_0,\, y - y_0)$, their Fourier transforms are shifted in phase; that is,

$$F_2(\xi,\eta) = F_1(\xi,\eta)\, e^{-i(\xi x_0 + \eta y_0)}, \tag{10}$$

and their phase correlation function is the $\delta$-distribution shifted in its arguments by the opposite shift vector:

$$P_{f_1,f_2}(x,y) = \delta(x + x_0,\, y + y_0). \tag{11}$$

The illustration of the phase correlation of shifted but otherwise identical images can be seen in Figure 7 on the right.

This is the main idea of phase correlation. The task of finding a shift between two images is converted by the phase correlation into the task of finding the only nonzero point in a matrix (when the computation is performed with the discrete Fourier transform). If the images are not identical up to a shift, that is, if the images are not ideal, the phase correlation function is more complicated, but it still has a global maximum at the coordinates corresponding to the shift vector. To keep this maximum global, (6) can be modified as suggested in (7), or the original images themselves can be modified, and the parameters of these modifications can be optimized.
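A hedged sketch of this procedure follows (our illustration; the small constant eps regularizing the division is an assumption in the spirit of (7)):

```python
# Shift estimation by phase correlation, following (6) and (11): the peak of
# the correlation surface encodes the shift (up to DFT wrap-around).
import numpy as np

def phase_correlation(f1, f2, eps=1e-12):
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    return np.fft.ifft2(cross / (np.abs(cross) + eps)).real

def estimate_shift(f1, f2):
    """Return the vector by which f2 must be shifted to align with f1."""
    P = phase_correlation(f1, f2)
    peak = np.unravel_index(np.argmax(P), P.shape)
    # indices above N/2 correspond to negative shifts (DFT periodicity)
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, P.shape))
```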

2.5. Rotated Images. The phase correlation function can also be used for estimation of image rotation and rescaling. The method was first published by Reddy and Chatterji [20]. Let $f_2$ be function $f_1$ rotated and shifted in its arguments; that is,

$$f_2(x,y) = f_1(x\cos\theta + y\sin\theta - x_0,\ -x\sin\theta + y\cos\theta - y_0). \tag{12}$$

Their Fourier spectra and amplitude spectra are related as follows:

$$\begin{aligned} F_2(\xi,\eta) &= F_1(\xi\cos\theta + \eta\sin\theta,\ -\xi\sin\theta + \eta\cos\theta)\, e^{-i((\xi\cos\theta + \eta\sin\theta)x_0 + (-\xi\sin\theta + \eta\cos\theta)y_0)}, \\ A_2(\xi,\eta) &= A_1(\xi\cos\theta + \eta\sin\theta,\ -\xi\sin\theta + \eta\cos\theta). \end{aligned} \tag{13}$$

The shift results in a phase shift, and the spectra are rotated in the same way as the original functions. A crucial step here is the transformation of the amplitude spectra into the polar coordinate system to obtain functions $A^{p}_1, A^{p}_2 : \mathbb{R}^+_0 \times [0, 2\pi) \to \mathbb{R}^+_0$ such that $A^{p}_1(\rho,\varphi) = A^{p}_2(\rho,\ \varphi + \theta)$. The rotation around an unknown centre of rotation has thus been transformed into a shift. This shift is estimated with the standard phase correlation of Section 2.4; after rotating back by the measured angle, the shift $(x_0, y_0)$ is measured with another computation of the phase correlation.

2.6. Scaled Images. Let $f_2$ be function $f_1$ rotated, shifted, and scaled in its arguments; that is,

$$f_2(x,y) = f_1(\alpha(x\cos\theta + y\sin\theta) - x_0,\ \alpha(-x\sin\theta + y\cos\theta) - y_0). \tag{14}$$

Their Fourier spectra and amplitude spectra are related as follows:

$$\begin{aligned} F_2(\xi,\eta) &= \frac{1}{\alpha^2}\, F_1\!\left(\frac{\xi\cos\theta + \eta\sin\theta}{\alpha},\ \frac{-\xi\sin\theta + \eta\cos\theta}{\alpha}\right) e^{-i\varphi(\xi,\eta)}, \\ A_2(\xi,\eta) &= \frac{1}{\alpha^2}\, A_1\!\left(\frac{\xi\cos\theta + \eta\sin\theta}{\alpha},\ \frac{-\xi\sin\theta + \eta\cos\theta}{\alpha}\right), \end{aligned} \tag{15}$$

where $\varphi(\xi,\eta)$ is the phase term produced by the shift $(x_0, y_0)$.

The shift results in a phase shift; the spectra are rotated in the same way as the original functions and scaled with the reciprocal factor. A crucial step here is the transformation of the amplitude spectra into the logarithmic-polar coordinate system

$$e^{\rho} = \sqrt{x^2 + y^2}, \qquad x = e^{\rho}\cos\varphi, \qquad y = e^{\rho}\sin\varphi \tag{16}$$

to obtain functions $A^{lp}_1, A^{lp}_2 : \mathbb{R} \times [0, 2\pi) \to \mathbb{R}^+_0$ such that $A^{lp}_1(\rho,\varphi) = A^{lp}_2(\rho - \ln\alpha,\ \varphi + \theta)$.

Both the rotation and the scale change have been transformed into a shift. The unknown angle $\theta$ and the unknown factor $\alpha$ can be estimated by means of the phase correlation applied to the amplitude spectra in the logarithmic-polar coordinate system, $A^{lp}_1, A^{lp}_2$. After rotating function $f_2$ back by the estimated angle $\theta$ and scaling it by factor $\alpha$, the shift vector $(x_0, y_0)$ is estimated by means of the standard phase correlation of Section 2.4.
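A hedged sketch of this rotation-scale step (our illustration, reusing estimate_shift from the sketch in Section 2.4; the grid sizes and the bilinear resampling are assumptions, and sign conventions must be matched to the implementation at hand):

```python
# Rotation and scale estimation via log-polar resampling of the amplitude
# spectra (Reddy-Chatterji): a rotation becomes a shift along the angular
# axis and a rescaling becomes a shift along the log-radial axis.
import numpy as np
from scipy import ndimage

def log_polar_amplitude(img, n_rho=256, n_phi=256):
    A = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))  # ln(1 + A), Sec. 2.7
    cy, cx = (np.asarray(A.shape) - 1) / 2.0
    rho = np.linspace(0.0, np.log(min(cx, cy)), n_rho)       # log-radial axis
    phi = np.linspace(0.0, np.pi, n_phi, endpoint=False)     # half plane suffices
    y = cy + np.exp(rho)[None, :] * np.sin(phi)[:, None]
    x = cx + np.exp(rho)[None, :] * np.cos(phi)[:, None]
    return ndimage.map_coordinates(A, [y, x], order=1)       # (n_phi, n_rho) grid

def estimate_rotation_scale(f1, f2, n_rho=256, n_phi=256):
    L1 = log_polar_amplitude(f1, n_rho, n_phi)
    L2 = log_polar_amplitude(f2, n_rho, n_phi)
    d_phi, d_rho = estimate_shift(L1, L2)                    # sketch in Sec. 2.4
    theta = d_phi * np.pi / n_phi                            # angle in radians
    rho_max = np.log(min((np.asarray(f1.shape) - 1) / 2.0))
    alpha = np.exp(d_rho * rho_max / (n_rho - 1))            # scale factor
    return theta, alpha
```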

2.7. Practical Issues. Amplitude spectra of real functions are even functions, $A(\xi,\eta) = A(-\xi,-\eta)$; therefore it is sufficient to use only one half of the domain of the spectra, for example, $\xi \geq 0$. If amplitude spectra (computed by means of the discrete Fourier transform) are transformed to polar coordinates, one half of the domain on the angular axis is sufficient.

The amplitude spectra have very high values at $(0,0)$ and in its close neighbourhood compared to the rest of the domain; therefore, instead of the values of the amplitude spectra, it is better to use their logarithms $\ln(1 + A_1(\xi,\eta))$, $\ln(1 + A_2(\xi,\eta))$, which use the dynamic range of the amplitude spectra more effectively.

The discrete Fourier transform treats images as if they were periodic with period $N$ on both axes. The image edges thus represent a jump in pixel values. Therefore, it is necessary to "remove" the image edges, that is, to smooth them out by multiplying the images with so-called windowing functions. The most common are the Gaussian and Hanning window functions, most often applied radially symmetrically. If there are important structures close to the image corners, the windowing function may instead be kept equal to one on a central square or rectangle and only then decrease to zero.
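A minimal sketch of such a radially symmetric Hanning window (our illustration; the exact profile used in the paper is not specified here):

```python
# Radially symmetric Hanning window: 1 at the image centre, falling to 0 at
# the edges, suppressing the artificial jump implied by DFT periodicity.
import numpy as np

def hanning_window(shape):
    h, w = shape
    y = np.linspace(-1.0, 1.0, h)[:, None]
    x = np.linspace(-1.0, 1.0, w)[None, :]
    r = np.minimum(np.sqrt(x * x + y * y), 1.0)   # radial distance, clipped to 1
    return 0.5 * (1.0 + np.cos(np.pi * r))

img = np.random.rand(768, 1024)                   # stand-in for an input image
windowed = hanning_window(img.shape) * img        # apply before np.fft.fft2
```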

Image pixel coordinates are integers, but the scale, rotation, and shift vector obtained by registration are generally noninteger. Therefore, the values of pixels in the target image are calculated by various interpolation methods (nearest neighbour, bilinear, or bicubic interpolation).
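A hedged sketch of this resampling step (our illustration; SciPy's affine_transform with order=3 performs the bicubic interpolation, and the sign and centre conventions must agree with the estimation step):

```python
# Resample an image by an estimated similarity (scale alpha, angle theta,
# shift) about the image centre, with bicubic interpolation (order=3).
import numpy as np
from scipy import ndimage

def apply_similarity(img, alpha, theta, shift=(0.0, 0.0)):
    c = (np.asarray(img.shape) - 1) / 2.0                    # centre (row, col)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    M = np.array([[cos_t, -sin_t], [sin_t, cos_t]]) / alpha  # output -> input map
    offset = c + np.asarray(shift) - M @ c
    return ndimage.affine_transform(img, M, offset=offset, order=3)
```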

3. Results

The theory described in the previous section was applied to a series of 43 partially focused images of a sandstone sample (locality Brno-Hady, Czech Republic) acquired in central projection with a 2.5 x 1.875 cm field of view. The results are summarized in Table 1. We identified mostly insignificant rotations (note that a rotation of two hundred arc seconds around the centre corresponds to a deviation of about one-half pixel in an image with resolution 1024 x 768); the shift of the image centre is more significant.

The graph in Figure 8 shows the scaling of the individual images (relative to the first). In practice it is not precisely linear, as was discussed in Section 2.1 (see also Figure 6). In Figure 9 we can see the sum of four input images without registration (a) and the sum of the same images preprocessed with the registration method described in Section 2 (b).

The entire process of 2D and 3D processing of the series of the partially focused images acquired in central projection thus proceeds as follows:

(a) Preprocessing: registration of the partially focused images, either with elementary mathematics (see Figure 6 on the left), a simple but usually insufficiently precise method, or with the method proposed in Sections 2.2-2.7 of this paper.

(b) 2D reconstruction: identification of the sharp parts in the separate images and their composition into a new, entirely sharp 2D image (see [3, 4]).

(c) 3D reconstruction: a height is assigned to all the image points (see [1, 2, 4, 10]). A sketch combining these steps follows the list.
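As a summary, a hedged end-to-end sketch combining the helper functions from the earlier sketches (all names are ours, not the authors' software; a production pipeline would add the windowing and spectrum preprocessing of Section 2.7 and refine the estimates):

```python
# End-to-end sketch: register every image of the series to the first one
# (steps of Secs. 2.4-2.6), then run the 2D/3D composition (steps (b), (c)).
def reconstruct(series):
    ref = series[0]
    registered = [ref]
    for img in series[1:]:
        theta, alpha = estimate_rotation_scale(ref, img)     # Secs. 2.5-2.6
        unrotated = apply_similarity(img, alpha, theta)      # undo rotation+scale
        shift = estimate_shift(ref, unrotated)               # Sec. 2.4
        registered.append(apply_similarity(img, alpha, theta, shift))
    composite, height = stack_sharp(registered)              # steps (b) and (c)
    return composite, height
```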

Figure 10 shows the 3D reconstruction of this series using the method described in [1, 2], that is, without any registration. Registration is not necessary in the case of confocal microscope images. However, in the case of conventional cameras, the images differ in scale and rotation, and their 3D reconstruction without any registration is unusable.

In Figure 11, we can see the 3D reconstruction of the same series transformed using elementary mathematics according to Figure 6 on the left. The result is significantly better, but artifacts are still evident.

Figure 12 shows the 3D reconstruction of the same series registered using the phase correlation described in Section 2. No artifacts are apparent in this reconstruction.

4. Conclusions

In the case of 3D reconstruction of a series of partially focused images of oversized surfaces, we usually cannot neglect the angle between the projection rays and therefore also the different scales of the individual images. In the simplest case, we can assume that the scaling is linear and that no other geometric transformations occur. This case can be solved with elementary methods, but the subsequent 3D reconstruction usually contains unwanted artifacts. In real devices, the scaling is not linear, and the images can be shifted and even rotated with respect to each other. If accurate 3D reconstruction is required, precise image registration is necessary as preprocessing. Phase correlation is a suitable method for this preprocessing: it is able to detect the above-mentioned transformations with subpixel precision so that we can eliminate them.

https://doi.org/10.1155/2017/8538215

Received 28 April 2016; Accepted 2 August 2016; Published 10 August 2017

Conflicts of Interest

There are no conflicts of interest related to this paper.

Acknowledgments

This work was supported by the Grant Agency of the Czech Republic under Contract no. 13-03403S. The authors acknowledge partial support from the Brno University of Technology, Specific Research Project no. FSI-S-14-2290 and Project LO1202 by financial means from the Ministry of Education, Youth and Sports under the National Sustainability Programme I. The authors thank Professor Tomas Ficker from the Faculty of Civil Engineering of Brno University of Technology for the provided data.

References

[1] D. Martisek, J. Prochazkova, and T. Ficker, "High-quality three-dimensional reconstruction and noise reduction of multifocal images from oversized samples," Journal of Electronic Imaging, vol. 24, no. 5, Article ID 053029, 2015.

[2] T. Ficker and D. Martisek, "Three-dimensional reconstructions of solid surfaces using conventional microscopes," Scanning, vol. 38, no. 1, pp. 21-35, 2016.

[3] D. Martisek and H. Druckmullerova, "Multifocal image processing," Mathematics for Applications, vol. 3, no. 1, pp. 77-90, 2014.

[4] D. Martisek, "The two-dimensional and three-dimensional processing of images provided by conventional microscopes," Scanning, vol. 24, no. 6, pp. 284-295, 2002.

[5] T. Ficker, D. Martisek, and H. M. Jennings, "Roughness of fracture surfaces and compressive strength of hydrated cement pastes," Cement and Concrete Research, vol. 40, no. 6, pp. 947-955, 2010.

[6] T. Ficker and D. Martisek, "Digital fracture surfaces and their roughness analysis: Applications to cement-based materials," Cement and Concrete Research, vol. 42, no. 6, pp. 827-833, 2012.

[7] Y. Ichikawa and J. Toriwaki, "Confocal microscope 3D visualizing method for fine surface characterization of microstructures," in Proceedings of SPIE 2862, Flatness, Roughness, and Discrete Defect Characterization for Computer Disks, Wafers, and Flat Panel Displays, pp. 96-101, Denver, Colo, USA.

[8] D. A. Lange, H. M. Jennings, and S. P. Shah, "Analysis of surface roughness using confocal microscopy," Journal of Materials Science, vol. 28, no. 14, pp. 3879-3884, 1993.

[9] K. Nadolny, "Confocal laser scanning microscopy for characterisation of surface microdiscontinuities of vitrified bonded abrasive tools," International Journal of Mechanical Engineering and Robotics Research, vol. 1, no. 1, pp. 14-29, 2012.

[10] M. Niederost, J. Niederost, and J. Scucka, "Automatic 3D reconstruction and visualization of microscopic objects from a macroscopic multifocus image sequence," in Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 34(5/W10), 2003.

[11] V. Thiery and D. I. Green, "The multifocus imaging technique in petrology," Computers and Geosciences, vol. 45, pp. 131-138, 2012.

[12] T. Ficker and D. Martisek, "Computer Evaluation of Asperity Topology of Rock Joints," Procedia Earth and Planetary Science, vol. 15, pp. 125-132, 2015.

[13] J. K. Hunter and B. Nachtergaele, Applied Analysis, World Scientific Publishing Company, 1st edition, 2001.

[14] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, USA, 1975.

[15] H. Druckmullerova, Application of adaptive filters in processing of solar corona images [Ph.D. Thesis], Brno University of Technology, 2014.

[16] W. K. Pratt, Digital Image Processing: PIKS Inside, John Wiley & Sons, Inc., New York, USA, 3rd edition, 2001.

[17] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977-1000, 2003.

[18] M. V. Wyawahare, P. M. Patil, and H. K. Abhyankar, "Image registration techniques: an overview," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 2, no. 3, 2009.

[19] C. D. Kuglin and D. C. Hines, "The phase correlation image alignment method," in Proceedings of the IEEE International Conference on Cybernetics and Society, pp. 163-165, New York, NY, USA, 1975.

[20] B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Transactions on Image Processing, vol. 5, no. 8, pp. 1266-1271, 1996.

Dalibor Martisek and Hana Druckmullerova

Faculty of Mechanical Engineering, Institute of Mathematics, Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic. Correspondence should be addressed to Dalibor Martisek; martisek@fme.vutbr.cz

Academic Editor: Brandon Weeks

Caption: FIGURE 1: Fracture surface of cement paste. The image was acquired by confocal microscope in confocal mode (a) and nonconfocal mode (b).

Caption: FIGURE 2: 3D stair-approximation of fracture surface of hydrated cement paste.

Caption: FIGURE 3: Smooth-approximation of fracture surface from Figure 2.

Caption: FIGURE 4: The first image (a) and the thirtieth image (b) in the series of photos of the fracture surface of hydrated cement paste acquired by confocal microscope Olympus LEXT 3100. The projection used is parallel and the fields of view are the same size (compare the position of the marked points).

Caption: FIGURE 5: The first image (a) and the forty-third image (b) in the series of photos of sandstone sample (locality Brno-Hady, Czech Rep.) taken with a Canon DSLR camera. The projection used is central and the fields of view are clearly different (compare the position of the marked points).

Caption: FIGURE 6: The central projection of an oversized sample, ideal case on the left, real case on the right.

Caption: FIGURE 7: The main idea of phase correlation. On the left: the phase correlation of a function with itself, the δ-distribution δ(x, y). On the right: the phase correlation of the functions f(x, y) and f(x - 4, y - 2), the δ-distribution δ(x + 4, y + 2). It is necessary to shift the function f(x - 4, y - 2) by the vector (-4, -2) for further image processing and reconstruction.

Caption: FIGURE 8: Scaling of the individual images in the processed series; the dependence is not precisely linear.

Caption: FIGURE 9: Sum of the first, tenth, twentieth, and fortieth partially focused photographs of the sandstone sample acquired in central projection: (a) no registration, (b) registration described in Section 2 (the first and the forty-third images of this series are shown in Figure 5).

Caption: FIGURE 10: 3D reconstruction of the image series acquired in central projection without any registration.

Caption: FIGURE 11: 3D reconstruction of the image series acquired in central projection, registered using elementary methods according to Figure 6 on the left.

Caption: FIGURE 12: 3D reconstruction of the image series acquired in central projection, registered by the phase correlation described in Section 2.

TABLE 1: Scales, rotations, and shift vectors detected for the
second to forty-third image (relative to the first).

Img. number    Scale      Angle (arc sec)    Shift vector (pixels)

2             0.99914     5.49      [-0.3; 0.5]
3             0.99757     14.77     [-0.1; 0.2]
4             0.99398     4.62       [0.3;0.3]
5             0.99349     14.65      [0.5;0.2]
6             0.99198     16.98      [0.6;0.1]
7             0.98878     -6.44      [-1.6;0.0]
8             0.98694     11.93      [1.7;0.0]
9             0.98565     20.53      [0.5;0.1]
10            0.98324    297.58      [0.4; 0.2]
11            0.98206    297.86      [0.4; 0.1]
12            0.97986    302.90      [0.3;0.1]
13            0.97749    297.52      [0.3; 0.2]
14            0.97636     5.87       [0.1; 0.2]
15            0.97382     3.52       [0.1; 0.3]
16            0.97277     12.75      [0.1; 0.2]
17            0.97069     6.07       [0.0; 0.3]
18            0.96932     5.20      [-0.1; 0.4]
19            0.96716     4.13      [-0.2; 0.5]
20            0.96584     3.33      [-0.2; 0.6]
21            0.96372     3.10       [0.4; 0.7]
22            0.96156     -3.71     [-0.5; 0.7]
23            0.96036     -4.77      [0.2; 0.5]
24            0.95810     5.74       [0.1; 0.8]
25            0.95690     2.63       [0.0; 0.7]
26            0.95579    -14.45      [0.0; 0.5]
27            0.95358     4.72      [-2.3; 0.9]
28            0.95135     -9.62     [0.0; -0.9]
29            0.95020    -302.73    [-2.6; 0.5]
30            0.94808     3.41      [-1.0; 0.2]
31            0.94697    -13.04     [-1.1; 0.3]
32            0.94472     -6.45     [-1.1; 0.5]
33            0.94369     3.45      [-1.7; 0.5]
34            0.94134     4.95      [-1.6; 0.7]
35            0.94027     -4.22     [-1.1; 1.5]
36            0.93826     -4.97     [-1.3; 1.4]
37            0.93743     3.02      [-1.2; 1.5]
38            0.93669     -8.02     [-1.2; 1.5]
39            0.93419     9.59      [-1.3; 1.5]
40            0.93351     5.97      [-1.7; 1.0]
41            0.93093     11.41     [-1.3; 0.9]
42            0.93023     10.36      [1.3; 0.9]
43            0.92805     14.84     [-1.2; 0.7]