The testing of PhotoScan 3D object modelling software

UDK 528.718

Introduction

The scientific discipline of photogrammetry has seen steady progress toward ever more precise techniques for determining the dimensions of objects and terrain features from photographic images. This progress has been driven by the technological development of digital cameras and camera chips, with each new camera model enhancing the capability to render ever more authentic images of reality.

The trend has recently been reinforced by sophisticated digital image processing algorithms as well as by 3D modelling of objects using specialized autocorrelation software. Creating 3D models from digital images involves huge amounts of data and therefore requires considerable computing power. Modern personal computers now provide enough computing power to enable efficient and reasonably accurate 3D modelling at reasonable cost. For another 3D modelling method see Dandos et al. (2013).

1. Image resolution

Digital photography is the sole data source for 3D modelling. This raises the question of how digital image quality affects the accuracy of a 3D model generated from it. While digital camera manufacturers offer a plethora of technical data on their product sheets, the declared parameters and parameter combinations fail to provide clear answers about the real quality and accuracy of the image data; total stations, by contrast, come with clearly stated accuracies for distance and direction measurement. Image resolution thus appears to be a good criterion for assessing image accuracy, because the image processing algorithm is based on finding identical features in different digital shots. The measured resolution value tells us how many pixels of real detail the camera and lens combination deliver in a data file.

The influence of camera resolution was explored using three digital cameras:

--Canon PowerShot G9,

--Canon PowerShot G15,

--Canon 7D with Canon EF-S 18-135 mm lens.

Our test object was the ISO 12233-compliant, A4-sized Danes Picta DCR3 resolution chart. Resulting image quality also depends on other factors, including the aperture setting, the selected image resolution and the digital image format (JPEG/RAW). Therefore, all available aperture and resolution setting combinations and both data formats were used, producing up to 128 images per camera. The results were assessed with the Olympus HYRes software. The output is the optimum aperture (f-number) and resolution setting combination for each camera, see Table 1.

2. Camera calibration

The image is centrally projected by the camera lens onto the camera chip, and this projection carries distortions caused by the inbuilt optical flaws of the lens.

The dominant influence on geometric accuracy comes from radial and tangential lens distortion. Additional distortions stem from camera build imperfections, namely small axial misalignments of the lens components and of the camera chip. The effect of these imperfections must be eliminated if accurate image coordinates are to be attained (Pavelka et al. 2001). The PhotoScan software can automatically determine the calibration parameters required to generate an accurate 3D model from survey images in which metadata (EXIF) are available. Notwithstanding this useful facility, two alternative methods, Agisoft Lens and Photomodeler 6.2, were used to determine the calibration parameters in order to establish whether PhotoScan's automatic calibration is good enough to produce the same level of accuracy in the resulting 3D model. To compare the three calibration methods and the respective 3D models we used the Canon 7D camera with the Canon EF-S 18-135 mm lens.

The Agisoft Lens calibration used a chequered field (Fig. 1) displayed on a 102 cm Samsung UA40C6530 LCD TV screen. A set of 8 calibration tests showed that the interior orientation element values diverged by up to 1.5%. A set of calibration parameters derived from 98 images was used in the process.

The Photomodeler 6.2 calibration used an A1-sized field comprising 100 points, four of which were used to determine the field orientation (Fig. 2).

Each calibration set consisted of 12 images with the lines of sight tilted 45° from the horizontal. Three images were shot from each of four stations, with the camera rotated as follows:

--camera horizontal,

--camera rotated 90°,

--camera rotated 270°.

Nine image sets were made, and the conclusion was that the total error and RMS values increase with increasing coverage of the camera chip by the calibration field, a consequence of the inferior quality of the lens periphery. Therefore, a set of calibration parameters with 84% chip coverage was used in the process. The two calibration methods are based on different parameters, as shown by the lens distortion coefficients (Weng et al. 1992). The Agisoft Lens data import feature was used to import the Photomodeler 6.2 calibration data, and the two corresponding tangential and radial distortion data sets are displayed in a diagram (Fig. 3) showing very good consistency between the two sets of calibration coefficients. While a comparison of real image coordinates after correction for all calibration parameters may not be clear enough, a comparison of the calibrations through the resulting 3D models will be shown below.
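As a rough illustration of what such coefficients describe, the sketch below evaluates the Brown-Conrady radial and tangential distortion model that commonly underlies this kind of calibration report; the coefficient values are placeholders, not the calibrated values from Agisoft Lens or Photomodeler 6.2.

```python
import numpy as np
import matplotlib.pyplot as plt

def brown_distortion(x, y, k1, k2, k3, p1, p2):
    """Displacement of an ideal normalized image point (x, y) under the
    Brown-Conrady radial (k1, k2, k3) + tangential (p1, p2) distortion model."""
    r2 = x**2 + y**2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    dy = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return dx, dy

# Placeholder coefficient sets standing in for the two calibrations compared here.
cal_a = dict(k1=-1.2e-1, k2=9.5e-2, k3=-2.0e-2, p1=2.0e-4, p2=-1.5e-4)
cal_b = dict(k1=-1.1e-1, k2=9.0e-2, k3=-1.8e-2, p1=2.3e-4, p2=-1.3e-4)

# Compare the radial displacement curves along the x axis (y = 0 for simplicity),
# analogous to the distortion diagram mentioned for Fig. 3.
x = np.linspace(0.0, 0.7, 200)          # normalized radial distance
for name, cal in (("calibration A", cal_a), ("calibration B", cal_b)):
    dx, _ = brown_distortion(x, np.zeros_like(x), **cal)
    plt.plot(x, dx, label=name)
plt.xlabel("normalized radius")
plt.ylabel("radial displacement")
plt.legend()
plt.show()
```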

3. PhotoScan 3D object modelling

The 3D models were generated with Agisoft PhotoScan Professional Edition, version 0.9, build 1586, for 64-bit Windows 8. PhotoScan is high-end autocorrelation software for professional 3D modelling from at least two static images of an object shot from arbitrary camera stations. 3D object generation consists of three stages.

Stage one is a software search for, and matching of, identical points in different images in order to calculate the camera stations. The next step is the creation of a sparse point cloud, which is not used for modelling except in cases where a 3D model is reconstructed by the point cloud method. Such point clouds may be exported to other software for further processing.

Stage two creates a 3D polygon mesh representing the object shape from the relative positions of the cameras and images.

Stage three is the completion of the 3D model using simple features such as:

--Reduction of the number of faces in the polygon mesh,

--Filling of gaps in the polygon mesh,

--Elimination of irrelevant objects not belonging to the modelled object.

Textures of the required resolution can be added. The final model can be exported to other software for more complex manipulation.

The rules for shooting images suitable for PhotoScan processing are very similar to those applicable in Photomodeler (Kapica et al. 2013), as follows:

--Any digital camera with 5 Mpix resolution or higher,

--Wide-angle lenses are better than telephoto lenses for reconstructing relative positions in 3D,

--Avoid surfaces with no structural features; they make identical point matching difficult,

--Avoid glossy and transparent surfaces,

--Avoid disconnected mobile objects in front of the object of interest,

--Only shoot glossy objects under overcast skies,

--Make largely overlapping images,

--Make multiple shots (3 or more) of important parts from different angles,

--Never crop images, never apply geometric transformations of any kind,

--More images make better models.

Digital images are the single input source for 3D modelling. Factors governing 3D model accuracy include image quality, calibration parameter quality and 3D configuration of shots.

4. PhotoScan testing

Generally, the tests were set up so as to eliminate every factor affecting the resulting 3D model other than the one factor under scrutiny. The testing started with initial tests designed to identify the best settings for 3D modelling on our specific PC configuration: Intel i5 450, 2.4 GHz; RAM: 4 GB + 50 GB swap; HDD: Intel 320, 120 GB. A PhotoScan test rated this configuration as good for 50 million samples per second. Some 3D models with high-quality settings took up to a few days to generate.

Image configuration testing, i.e. determining the maximum number of images that can be aligned by PhotoScan without compromising quality, must be carried out prior to complex 3D modelling and prior to comparing model accuracy with terrestrial geodetic surveying data. It is also important to identify the maximum tilt angle of the image at which no accuracy is lost.

The first configuration test, for the line-of-sight tilt angle against the object face, was made using a 60×40 cm cork panel; cork was chosen for its distinct surface texture. Cork panel images were shot from different camera stations with the lines of sight diverging at 10°. The first set of images was taken from 15 different camera stations with the cork panel standing perpendicular to the plane of the lines of sight. Follow-up image sets used different cork panel tilts in steps of 10°. A total of 9 image sets were made. For model parameter results see Table 2.

A reference 3D model was generated from two image sets with lines of sight at 90° and 60° to the cork panel plane respectively. The 3D models were scaled using two points defining the long edge of the cork panel. A comparison was made by aligning the 3D models by means of the calculated point clouds. Each 3D model was then exported in PLY format to the CloudCompare software for comparison with the reference model (Fig. 5). The test demonstrated that PhotoScan's ability to generate surfaces with minimal distortion remains unaffected for surfaces placed at angles of down to 30° to the line of sight. Objects with less distinct surface textures (which make it hard for PhotoScan to locate identical points in different images) tend to produce more significant deviations.
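Outside CloudCompare, the deviation between two such clouds can be approximated with a simple nearest-neighbour comparison. A minimal sketch, assuming the exported PLY clouds are already aligned and scaled in metres (the file names are hypothetical):

```python
import numpy as np
import open3d as o3d                 # assumed available for reading the PLY exports
from scipy.spatial import cKDTree

# Hypothetical file names for the PhotoScan PLY exports.
reference = np.asarray(o3d.io.read_point_cloud("cork_reference.ply").points)
test = np.asarray(o3d.io.read_point_cloud("cork_tilt_30deg.ply").points)

# Distance from every test point to its nearest reference point - a simple
# stand-in for CloudCompare's cloud-to-cloud (C2C) distance.
tree = cKDTree(reference)
distances, _ = tree.query(test, k=1)

print(f"mean deviation: {distances.mean() * 1000:.2f} mm")
print(f"max deviation:  {distances.max() * 1000:.2f} mm")
print(f"RMS deviation:  {np.sqrt((distances**2).mean()) * 1000:.2f} mm")
```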

The second line-of-sight angle test used the 90° image set from the previous test. Five 3D models, each based on a different number of images and therefore on different line-of-sight angles, were generated as shown in Table 3, using the same 3D modelling parameters as in the previous test. The 15-image (complete image set) 3D model served as the reference model.

The second test showed that the 3D model based on 8 images had zero deviation from the reference 3D model, while only minimal deviation was evident in the 5-image 3D model.

The second test was replicated on a chapel near the Czech-German Route of Understanding. The 3D models were computed from 52, 26 and 12 images (with line-of-sight angles of 7°, 14° and 30° respectively). Deviations from the 52-image reference model were zero in the case of the 26-image 3D model, while reaching around 1 cm in the case of the 12-image 3D model.

The third test, designed to assess the effect of calibration on 3D model accuracy, once again used the chapel near the Czech-German Route of Understanding as the imaged object. Each model was generated from the same 26-image set to eliminate the influence of image set size and to isolate the effect of the calibration parameters. The process started by determining a distance between two points on the first computed 3D model. The first model was replicated to obtain six identical 3D models. The calibration parameters of the model images were then altered and new 3D models were generated. The test used calibration parameters from Agisoft Lens, from Photomodeler 6 and PhotoScan's automatic calibration parameters.

Each 3D model based on one of the three calibrations was paired with a second, optimized 3D model obtained by introducing the two tangential distortion coefficients P1 and P2 as well as the camera chip x/y axis distortion parameter. A set of 6 models was thus generated. A comparison was made by aligning the 3D models by means of a computed point cloud. Relative comparisons of the differently calibrated models were made with CloudCompare (Fig. 6). The results show that PhotoScan's calibration capability is excellent, to the point of making the use of specialised calibration software redundant. With project optimization the resulting 3D models showed minimal deviations.

The effect of resolution on 3D model quality was studied by using different cameras to shoot images in which the object size would always be the same (crop factor). Sets of images with different resolution settings were made. Resolution determines the level of detail of the 3D model and thus affects its accuracy. The test involved shooting images with resolution settings decreasing progressively as the distance from the object grew, see Table 3.

Figure 7 demonstrates the different levels of detail on a high-precision 3D model. The centre of the cropped image was generated from 6 Mpix images shot at a distance of 1.5 m. The outer parts come from a 6 Mpix shot taken from approximately 6 m. The difference is striking.
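A rough ground sample distance (GSD) estimate illustrates why camera-to-object distance matters so much. The sketch below uses the standard GSD relation with assumed sensor parameters (a pixel pitch of about 4.3 um and a 35 mm focal length are illustrative values, not taken from the test setup):

```python
def ground_sample_distance(pixel_size_um: float, focal_length_mm: float,
                           distance_m: float) -> float:
    """Object-space size of one pixel, in mm, for a given camera-to-object distance."""
    return pixel_size_um * 1e-3 * distance_m * 1e3 / focal_length_mm

# Assumed values: ~4.3 um pixel pitch (APS-C class sensor), 35 mm focal length.
for d in (1.5, 6.0):
    gsd = ground_sample_distance(4.3, 35.0, d)
    print(f"{d:4.1f} m -> about {gsd:.2f} mm of object detail per pixel")
```

Quadrupling the distance quadruples the object-space size of each pixel, which is consistent with the loss of detail visible in Figure 7.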

Differences between 3D models generated from the JPEG format and those generated from the RAW format are negligible.

5. 3D model accuracy comparison against geodetic surveying

The chapel near the Czech-German Route of Understanding (Fig. 8), located at the foot of Cervena hora, Guntramovice, Moravia-Silesia, was once again selected as the test object (GPS: 49° 49' 35.211" N, 18° 10' 19.260" E). Geodetic surveying of the chapel's control points was carried out with a Leica 1202 total station in a local coordinate system on November 19, 2012. A suitable selection of points was surveyed by the polar method and by triangulation. The differences between position vectors determined in several ways led to the measurement's internal accuracy being determined as a weighted average, also taking account of the respective accuracy of each surveying method (Sucha et al. 2005). The average mean error was ±4.3 mm.
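A minimal sketch of such a weighted combination is given below; the inverse-variance weighting is an assumption about the averaging scheme, and the per-method accuracies and differences are illustrative placeholders rather than the surveyed values.

```python
import numpy as np

# Illustrative standard errors (mm) of each surveying method and the mean
# position differences (mm) each method yielded; placeholders, not survey data.
sigma = np.array([4.0, 5.5])          # e.g. polar method, triangulation
errors = np.array([4.1, 4.9])         # mean position difference per method (mm)

weights = 1.0 / sigma**2              # weight inversely proportional to variance
internal_accuracy = np.sum(weights * errors) / np.sum(weights)
print(f"weighted internal accuracy: +/- {internal_accuracy:.1f} mm")
```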

The source data for the 3D models were three 26-image sets shot around the chapel with our three cameras, using the best aperture/resolution combination in each case. Five 3D models of the chapel were generated from the three image sets. The Canon EOS 7D image set was used to make 3 models in order to determine the numerical accuracy of each calibration method; each of these models used different calibration parameters (Agisoft Lens, Photomodeler 6 and PhotoScan automatic calibration).

To determine the influence of each camera and that of the image resolution, two more 3D models were generated from the Canon PowerShot G15 and Canon PowerShot G9 image sets with automatic calibration. The model generating settings were identical in all the models, see Table 4.

Having generated the 3D models, identical points were matched to the corresponding total station survey points in PhotoScan, and the 3D model scale was defined from a known distance between two reference points. The reference points were selected with a view to maximum accuracy, maximum point-to-point distance and clarity of identification on the 3D model. Model coordinates were then transformed to the local coordinate system by applying an identical-point transformation to the common reference points on the models and in the geodetic data. A comparison was made between the geodetic coordinates and the transformed model coordinates along the X, Y and Z axes, and coordinate differences Δx, Δy, Δz were obtained.
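The article does not name the transformation algorithm; a common choice for fitting model coordinates to surveyed control points is a seven-parameter 3D similarity (Helmert) transformation estimated from the common points. A minimal sketch, with hypothetical coordinates:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t with s * R @ src_i + t ~ dst_i
    from corresponding (N, 3) point arrays (Umeyama's closed-form solution)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # guard against a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Hypothetical corresponding points: model coordinates vs. total-station coordinates.
model_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0], [0.1, 2.0, 0.3], [1.0, 1.9, 1.1]])
survey_pts = model_pts + np.array([10.0, 20.0, 5.0])   # toy survey frame

s, R, t = similarity_transform(model_pts, survey_pts)
transformed = (s * (R @ model_pts.T)).T + t
residuals = np.linalg.norm(transformed - survey_pts, axis=1)
print("residuals [m]:", np.round(residuals, 4))
```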

A position vector $\Delta_{x,y,z}$ was computed for each point, and the mean error $m_{\Delta x,y,z}$ was calculated as the quadratic average for each 3D model. For results see Tables 5 and 6.

$\Delta_{x,y,z} = \sqrt{\Delta_x^2 + \Delta_y^2 + \Delta_z^2}$; (1)

$m_{\Delta x,y,z} = \pm\sqrt{\dfrac{\sum_{i=1}^{N} \Delta_{x,y,z}^{2}}{N}}$, (2)

where:

$\Delta_x$ - x-axis coordinate difference, $\Delta_y$ - y-axis coordinate difference, $\Delta_z$ - z-axis coordinate difference, $N$ - number of points compared, $m_{\Delta x,y,z}$ - mean error for the points compared.
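A short worked example of equations (1) and (2), using hypothetical coordinate differences, is shown below:

```python
import numpy as np

# Hypothetical coordinate differences (mm) between the geodetic survey and the
# transformed 3D model for N = 3 compared points; columns are dx, dy, dz.
deltas = np.array([
    [ 12.0,  -8.0,  15.0],
    [-20.0,  10.0,  -5.0],
    [  7.0,  18.0, -12.0],
])

# Equation (1): spatial position vector of each compared point.
position_vectors = np.sqrt((deltas**2).sum(axis=1))

# Equation (2): mean error as the quadratic (RMS) average over the N points.
mean_error = np.sqrt((position_vectors**2).mean())

print("per-point position vectors [mm]:", np.round(position_vectors, 1))
print(f"mean error m = +/- {mean_error:.1f} mm")
```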

The testing was closed by making a high-accuracy 3D model (Fig. 9) of a residential building facade in Ostrava-Poruba (GPS: 49° 49' 35.211" N, 18° 10' 19.260" E), where the facade was pictured in detail by a set of 36 images shot at different focal lengths. A coordinate comparison with the Leica 1202 total station was made for 27 points, yielding the position vector mean error $m_{\Delta x,y,z}$ = 11.4 mm.

6. Test evaluation

The Geometry setting Smooth appears suitable for 3D object modelling.

Line-of-sight relative angles of around 20° are best for high-precision 3D modelling. Object surfaces with distinct textures can be modelled at surface-to-line-of-sight angles from 90° down to 30°. Imaging should be more detailed for smooth surfaces to provide adequate image sources for identical point determination.

Camera-to-object distance has a strong effect on the level of image detail and, consequently, on 3D model accuracy. Large objects require shorter-distance imaging.

For cameras supporting an aperture priority mode it is recommended to select the aperture setting associated with maximum resolution.

Stage one of modelling should be done immediately after image shooting. The resulting point cloud provides a good first indication of the level of surface texture detail, based on the number of identical points, and of correct image alignment. If the result is unsatisfactory, additional images of problem areas can be taken immediately.

There is negligible difference between models made from JPEG images and those made from RAW images after conversion on a PC by Zoner 15 to a PhotoScan-compatible format. The use of different focal lengths is advantageous for capturing small surface features in more detail. Image orientation is irrelevant.

Problems may arise in object sections captured by images shot from very different camera station distances. These problems may be eliminated by combining 3D models using different methods of depth filtering; alternatively, all parts of the object should be shot from the same distance.

Camera calibration is not necessary. Best results are produced by 3D modelling with automatic calibration by PhotoScan.

The main bottleneck in 3D modelling is the PC's limited computing power, which extends the computing times needed to go through all 3D modelling operations and limits the attainable level of detail and accuracy. Nevertheless, PhotoScan delivers high-quality results despite hardware limitations, especially for distinctly textured surfaces.

The application field is vast. Exports in multiple 3D formats open up technological uses as well as artistic uses in the area of film animation or in the gaming industry.

doi: 10.3846/20296991.2014.930251

References

Agisoft LLC. 2012. Agisoft PhotoScan User Manual. St. Petersburg.

Dandos, R.; Staftkova, H.; Cernota, P.; Subikova, M. 2013. Spatial visualisation of the infantry blockhouse OP--S 25 "U trigonometru", Advances in Military Technology 8(1): 73-84. ISSN 1802-2308. Available from Internet: http://aimt.unob.cz/vol8is1.htm

Kapica, R.; Vrublova, D.; Michalusova, M. 2013. Photogrammetric documentation of Czechoslovak border fortifications at Hlucin-Darkovicky, Geodesy and Cartography 39(2): 157-164. ISSN 2029-6991 print, ISSN 2029-7009 online. http://dx.doi.org/10.3846/20296991.2013.806243

Pavelka, K. 2001. Fotogrammetrie 10 [Photogrammetry 10]. Praha: CVUT. ISBN 80-01-02649-3.

Sucha, J. 2005. Urcovani geometrickych parametru prostorovych ocelovych konstrukci [Geometric parameter determination in 3D steel structures], Acta Montanistica Slovaca 10(2): 234-241 [online], [cited 18 February 2013]. Available from Internet: http://actamont.tuke.sk/pdf/2005/n2/25sucha.pdf

Svabensky, O.; Vitula, A.; Bure, J. 2007. Inzenyrska geodezie II--Analyza presnosti vytyceni polohy [Engineering geodesy II--Layout accuracy analysis]. Brno: VUT.

Weng, J.; Cohen, P.; Herniou, M. 1992. Camera calibration with distortion models and accuracy evaluation, IEEE Transactions on Pattern Analysis and Machine Intelligence 14(10): 965-980. ISSN 0162-8828. http://dx.doi.org/10.1109/34.159901

Tomas Jirousek (1), Roman Kapica (2), Dana Vrublova (3)

(1,2) Institute of Geodesy and Mining Surveying, Faculty of Mining and Geology, VSB--Technical University of Ostrava, 17. listopadu 15, CZ-708 33 Ostrava, Czech Republic

(3) The Institute of Combined Studies in Most, VSB-Technical University of Ostrava, Delnicka 21, Most, Czech Republic

E-mails: (1) tomas.jirousek.st@vsb.cz (corresponding author); (2) roman.kapica@vsb.cz; (3) dana.vrublova@vsb.cz

Received 10 March 2014; accepted 10 June 2014

Tomas JIROUSEK, Ing., Institute of Geodesy and Mining Surveying, Faculty of Mining and Geology, VSB--Technical University of Ostrava, 17. listopadu 15, CZ 708 33 Ostrava, Czech Republic. Ph +420 597 323 302, e-mail: tomas.jirousek.st@vsb.cz. Research interests: UAV photogrammetry, 3D modelling.

Roman KAPICA, Ing., PhD, Asst. Prof., The Institute of Geodesy and Mining Surveying, Faculty of Mining and Geology, VSB--Technical University of Ostrava, 17. listopadu 15, CZ 708 33 Ostrava, Czech Republic. Ph +420 597 323 302, e-mail: roman.kapica@vsb.cz.

Research interests: terrestrial photogrammetry, digital photogrammetric mapping, 3D modelling and animation, cartography.

Dana VRUBLOVA, Ing., PhD, Asst. Prof., The Institute of Combined Studies in Most, Faculty of Mining and Geology, VSB--Technical University of Ostrava, Delnicka 21, Most, Czech Republic. Ph +420 597 325 707, e-mail: dana.vrublova@vsb.cz. Research interests: geodesy, cartography, mine surveying.

Table 1. Camera resolutions

Camera                Optimum aperture   Resolution
                         (f-number)        [Mpix]

Canon PowerShot G15         3.2             5.78
Canon PowerShot G9           4              6.38
Canon EOS 7D                6.3             7.81

Table 2. 3D model test parameters

Camera                    Canon G9
Resolution                12 Mpix
Alignment accuracy          High
Geometry                   Smooth
Depth filtering             Mild
No. of elementary areas   500,000

Table 3. 3D model feature comparison

Model       No. images in set   Line-of-sight angle

Reference          15                  10°
1                   8                  20°
2                   5                  30°
3                   4                  40°
4                   3                  50°

Table 4. 3D model parameters

Resolution                Maximum

Alignment accuracy         High
Geometry                  Smooth
Depth filtering            Mild
No. of elementary areas   500,000

Table 5. Comparison differences for 3D models based on
different calibrations

Camera                Canon 7D        Canon 7D        Canon 7D
Calibration           Automatic     Photomodeler    Agisoft Lens

No. points                          28              28              28
Mean error $m_{\Delta x,y,z}$, mm  ±24.4           ±24.7           ±28.0

Table 6. Comparison differences for 3D models generated
from different cameras

Camera                Canon 7D        Canon G15       Canon G9
Calibration           automatic       automatic       automatic

No. points                          28              31              28
Resolution [Mpix]                   6.7             5.6             5.4
Mean error $m_{\Delta x,y,z}$, mm  ±24.4           ±27.3           ±29.3

