
Active Camera Positional Tracking for Augmented Reality Applications.

1. Introduction

According to Gartner's Hype Cycle and DHL, augmented reality (AR) is a promising emerging technology suitable for many industrial applications and expected to reach the consumer market in about five years. Its key feature, very much in demand, is the combination of the user's real view with virtual 2D or 3D enhancements. AR users can see arrows pointing to an object of interest, be navigated to a certain place or object, view the contents of a closed box, etc.

In their promotional video, SAP SE foresee the use of AR smartglasses for navigation around production systems and warehouses, showing a warehouse operator the correct box to pick from and the correct stations to supply. BMW AG also has a promotional video presenting their vision of smartglasses guiding a car mechanic through the replacement of faulty parts.

Although these visions show the great potential of the technology, augmented reality still has many problems to overcome. The overall quality of current smartglasses can discourage potential buyers for various reasons: they are neither attractive nor comfortable, the display quality is low, the field of view is too narrow and the displays lack brightness. There are also few studies yet on the health effects of wearing a display at such a short distance in front of the eyes, or of a source of electromagnetic and heat radiation so near to the brain.

Another problem with current smartglasses is their positional tracking in space. Their rotation can be tracked autonomously using a gyroscopic sensor, but obtaining the position is not an easy task. Outdoors, the GPS signal will not always provide the required accuracy; indoors, GPS cannot be used at all and the AR device needs to acquire its position differently. An accelerometer is basically the only simple way to obtain the position autonomously, but it is not accurate enough. There are various ways to achieve more or less precise tracking, including Wi-Fi positioning (WPS), magnetic or ultrasonic tracking, self-learning 3D scanning of the room, etc. This article focuses on the method known as position marker tracking.

Many AR applications focus on marker tracking using computer vision and image recognition algorithms. For example, a hybrid method was presented in [1]. It relies on marker recognition to display a 3D model at the position of the marker. The marker recognition process can fail due to camera refocusing or when part of the marker is obstructed. In such cases the position is supplied by an InterSense InterMax2 inertial tracking device, compensating for the loss of visual tracking. The disadvantage of this approach is the price of the InterSense device.

Another study deals with tracking objects using binocular cameras--a dual-camera rig [2]. The tracked device, a measuring pen, carries three near-IR beacon markers. Using sub-pixel recognition algorithms, the authors achieved an accuracy of about 1 millimetre, but the camera rig was quite large and the method was intended for use at close distances. Another enhancement of video-based marker tracking is presented in [3], which uses QR codes inside a high-contrast marker border as the tracked image. This allows a practically unlimited number of markers, but places higher demands on marker recognition, because the QR code features are smaller.

The algorithms mentioned above are mainly used for tracking markers in the camera's field of view. Our case requires an inverse algorithm, though the difference is small: the markers are in fixed, known positions, whereas the camera moves. The markers must be placed and their positions stored in a database; the camera's position is then known whenever the camera can see them. This approach can be enhanced with an inertial sensor, as in [1], for cases when visual tracking is lost. For higher accuracy, the approach from [2] could be used with IR beacons around the space, but with many beacons the required uniqueness of their patterns becomes a problem. For tracking in large areas such as production halls, the QR code markers of [3] might not be very suitable because of their small details.

For active camera tracking, the markers must be placed at positions visible from as wide an area as possible, for example on ceiling support pillars or at aisle crossings. There are even markerless approaches to camera tracking, like the one in [4], which first learns the space by recognizing natural features such as edges; however, a production hall contains many edges, making such algorithms prone to errors and instability. Recovery of lost tracking can take some time, or may even require the camera to be moved to a location where tracking is guaranteed to recover [5]. Paper [6] deals with making AR tracking more robust and with accuracy under different tracking conditions. [7] presents an approach in which the ceiling is densely covered with markers, one next to another. The user wears an HMD with a front-facing camera that tracks the real environment and the markers in it, while another camera points straight up at the ceiling markers, providing positional tracking. This works for precision tracking inside a single room, but in production halls the ceiling is too high and its surface too large to cover entirely with markers.

A combination of vision-based tracking for idle states with an InterSense IS-600 for tracking while moving, resulting in more robust tracking with less jitter and lower latency, was presented in [8]. Unfortunately, the InterSense tracking device requires a static and relatively small tracking space. [9] presents a combination of GPS and vision-based tracking using known marker positions, for outdoor environments and a prepared indoor environment, all worn in a backpack. The quality of augmented reality applications and tracking can be reduced by errors occurring during image recognition, registration, etc. Several types of error sources may degrade the tracking quality of augmented reality tasks; an overview of potential error sources is given in [10].

2. Implementation

In our previous projects, we created augmented assembly manuals requiring positional tracking of the supply boxes from which the operator picks parts. This was done in Metaio Unifeye Design, which allows virtual objects to be placed over markers. The assembly manual was made in two versions: one showed only animated assembly instructions, the other also showed arrows over the boxes containing the correct parts. This implementation required a marker on each of the boxes.

We needed an AR tracking solution for the virtual assemblies in our warehouse navigation system, so we decided to use ARToolkit for Unity. The Unity3D platform allowed us to create arbitrary virtual content and bind it to ARToolkit. Unfortunately, for active camera tracking, ARToolkit for Unity was not suitable out of the box: it offered no camera tracking in an environment of fixed markers with known positions. We therefore decided to create our own branch of ARToolkit for Unity with two targets:

* Tracking of virtual objects in the camera's field of view.

* Tracking of the camera based on fixed markers with known positions.

The default ARToolkit algorithm considers the first marker found to be a so-called base marker. The scene camera is placed relative to this marker, and when other markers are found in the image, the virtual objects are placed relative to the base marker. This approach is not convenient for our intended use: we want the camera tracked using fixed markers, while virtual models are displayed over other markers that are not marked as fixed. We changed the marker tracking algorithm so that all virtual models are placed relative to the camera, without treating any marker as a base marker. Fig. 1 shows the change to the algorithm.

Compared to the original algorithm, ours does not cause intense jittering of the camera. In the original, the choice of base marker is not consistent between frames, so the camera position keeps changing relative to a different marker. This alone would not be such a problem, but applications depending on the positions of objects are affected, because those objects effectively jump around the scene. With the modified algorithm, objects jitter only due to tracking accuracy, and it can be combined with the next modification.
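The modified placement can be summarized in a minimal sketch (our own illustration, not actual ARToolkit code; the 4x4 homogeneous matrices and the function name are assumptions for the example):

```python
import numpy as np

def place_objects_camera_relative(camera_world, markers_in_camera):
    """Place every detected object relative to the tracked camera.

    camera_world      -- 4x4 pose of the camera in world coordinates
    markers_in_camera -- dict: marker id -> 4x4 marker pose in camera coords
    Returns a dict: marker id -> 4x4 object pose in world coordinates.
    """
    # No marker is singled out as a "base marker": each object's world pose
    # is simply the camera pose composed with the marker pose the camera sees.
    return {mid: camera_world @ pose for mid, pose in markers_in_camera.items()}
```

Because every object pose depends only on the camera pose and that object's own marker observation, an inconsistent choice of base marker between frames can no longer make unrelated objects jump.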

The other algorithm performs the active camera tracking. Since the markers are fixed and their positions are known, we can place the camera relative to them and compute an average of positions and rotations. ARToolkit provides the transformation matrix of a marker in the camera's coordinate system; simply by inverting this matrix, we obtain the relative position of the camera in the marker's coordinate system. This matrix is then transformed into the world coordinate system, and from the resulting matrix the rotation and position of the camera are computed. If more markers are seen, the location vectors and rotation quaternions are averaged. See Fig. 2 for the scheme of the algorithm.
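The camera-tracking step can be sketched as follows (a simplified illustration under our own assumptions: 4x4 homogeneous transforms and hypothetical helper names, not ARToolkit's API). Note one substitution: where the text averages rotation quaternions, this sketch averages rotation matrices and projects the mean back onto a valid rotation with an SVD, which behaves similarly when the per-marker estimates are close together and avoids quaternion sign handling:

```python
import numpy as np

def camera_pose_from_markers(marker_world_poses, markers_in_camera):
    """Estimate the camera's world pose from fixed markers at known positions.

    marker_world_poses -- dict: marker id -> 4x4 marker pose in world coords
    markers_in_camera  -- dict: marker id -> 4x4 marker pose in camera coords
                          (as delivered by the tracking library)
    Returns the 4x4 camera pose in world coordinates.
    """
    positions, rotations = [], []
    for mid, m_cam in markers_in_camera.items():
        # Inverting the marker-in-camera transform gives camera-in-marker;
        # composing with the marker's world pose gives camera-in-world.
        cam_world = marker_world_poses[mid] @ np.linalg.inv(m_cam)
        positions.append(cam_world[:3, 3])
        rotations.append(cam_world[:3, :3])
    # Average the location vectors directly.
    t = np.mean(positions, axis=0)
    # Average the rotations: project the mean matrix back onto SO(3).
    u, _, vt = np.linalg.svd(np.mean(rotations, axis=0))
    r = u @ vt
    if np.linalg.det(r) < 0:            # keep a proper rotation (det = +1)
        u[:, -1] *= -1
        r = u @ vt
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = r, t
    return pose
```

With noise-free observations, every visible marker yields the same camera pose and the average is exact; with real detections, the averaging suppresses part of the per-marker estimation noise.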

3. Testing, Discussion and Further Work

To test the modified algorithm, we used a Logitech C910 webcam capable of delivering Full HD video at 30 frames per second. We placed markers around our laboratory and captured their exact positions using a Leica ScanStation C5 LiDAR scanner, which we have at our disposal. We imported the resulting point cloud into Unity in order to place objects representing the markers and to check the registration of the point cloud against the actual camera's field of view.

The resulting image jittered slightly, with an accuracy of around 10 cm. This is sufficient for navigational tasks, but if this solution were used for smartglasses tracking, the jitter would be more than enough to cause cybersickness and discomfort. The jittering is caused by the ARToolkit algorithm as well as by lighting conditions, marker print quality, marker size, and camera framerate and resolution. For an example of the result, see Fig. 3.

Although the functional prototype for tracking objects in the field of view and cameras in the area was ready, the program still needs further development. First, it is necessary to determine the precision of the tracking method and how its accuracy depends on different conditions. The size of the markers, the distance to them, lighting, camera resolution and the ink used to print the markers must all be taken into account when measuring accuracy. In addition, tracking speed must be considered, as it is very important when using head-mounted displays.

In our further research, we are going to rig a webcam together with an InterSense IS-900 tracking device as an accuracy reference. The InterSense IS-900 has a positional tracking accuracy of 2 to 3 millimetres, allowing us to evaluate the accuracy of our tracking more precisely. We will conduct experiments with different cameras, marker print ink and quality, lighting and other conditions as close to actual shop floors as possible.

4. Intended Use

The modified ARToolkit can be used as a platform for different projects. Below you can see some examples of its possible applications in industry:

* Finding the position of a warehouse operator wearing smartglasses in the warehouse. The operator can then be accurately navigated through the warehouse, though a connection to an information system is required, as is knowledge of the exact positions of the markers after their deployment.

* Visualization of a model of a future product in the office. The office would be equipped with marker tracking, and the chosen CAD model would be projected at a fixed position, for example on the floor. The model can be placed either with a modelling tool on a PC (in which case the object does not require its own marker) or with the help of markers. Static objects are resistant to marker tracking errors and run less risk of being displaced or disappearing when their markers fall into shadow, which is an advantage over the classical approach.

* Determining the position of a user's smartglasses in the production system. If an information system signals that one of the machines requires maintenance, adjustment or other service, an arrow and a message appear on the user's smartglasses, leading the worker to the machine. While the worker walks through the production system, location pointers are shown; once the machine is reached, the instructions required to fix the problem are displayed.

5. Conclusion

The modified object tracking algorithm allowed us to tailor the functionality of ARToolkit for Unity for easier and more intuitive development of augmented reality applications. An advantage of ARToolkit is its wide range of tracking modes, both marker-based and markerless. Active camera marker-based tracking could be implemented, provided the positions of the markers are known and registered with their virtual counterparts. We now have a suitable platform for developing indoor navigation applications with additional tracked markers and objects attached to them. The accuracy of the algorithm still needs to be improved, as the camera jitters slightly. We will improve it by using visual tracking for position only; rotation tracking should be more accurate with gyroscopes. The tracking platform will then be suitable for visualising production machine states or for navigation in warehouses.

DOI: 10.2507/27th.daaam.proceedings.066

6. Acknowledgement

This paper is based upon work sponsored by project LO1502 Development of Regional Technological Institute.

7. References

[1] Yang, P.; Wu, W. & Moniri, M. (2007). A Hybrid Marker-Based Camera Tracking Approach in Augmented Reality, Proceedings of 2007 IEEE International Conference on Networking, Sensing and Control, pp. 622-627, London, DOI: 10.1109/ICNSC.2007.372851

[2] Liao G.; Hu Y. & Li Z. (2011). Research on marker tracking method for augmented reality, Fuzzy Systems and Knowledge Discovery (FSKD), Proceedings of 2011 Eighth International Conference on, pp. 2386-2390, Shanghai, DOI: 10.1109/FSKD.2011.6019986

[3] Agusta, G. M.; Hulliyah, K.; Arini & Bahaweres, R. B. (2012). QR Code Augmented Reality tracking with merging on conventional marker based Backpropagation neural network, Advanced Computer Science and Information Systems (ICACSIS), Proceedings of 2012 International Conference on, pp. 245-248, Depok, DOI: 10.1109/WMSVM.2010.52.

[4] Chen, L.; Peng, X.; Yao, J.; Qiguan, H.; Chen, C. & Ma, Y. (2016). Research on the augmented reality system without identification markers for home exhibition, Proceedings of 2016 11th International Conference on Computer Science & Education (ICCSE), pp. 524-528, Nagoya, Japan. DOI: 10.1109/ICCSE.2016.7581635

[5] Coffin, C.; Lee, C. & Hollerer, T. (2011). Evaluating the impact of recovery density on augmented reality tracking, Mixed and Augmented Reality (ISMAR), Proceedings of 2011 10th IEEE International Symposium on, 2011, pp. 93-101, Basel, DOI:10.1109/ISMAR.2011.6092374

[6] Hirzer, M. (2008). "Marker Detection for Augmented Reality Applications", Inst. for Computer Graphics and Vision, Graz University of Technology, Austria, Available from: http://studierstube.icg.tugraz.at/thesis/marker_detection.pdf Accessed: 2016-10-05

[7] Jun, J.; Yue, Q. & Qing Z. (2010). An Extended Marker-Based Tracking System for Augmented Reality, Modeling, Simulation and Visualization Methods (WMSVM), Proceedings of 2010 Second International Conference on, pp. 94-97, Sanya, DOI: 10.1109/WMSVM.2010.52.

[8] Yong Liu; Storring, M.; Moeslund, T. B.; Madsen, C. B. & Granum, E. (2003). Computer vision based head tracking from re-configurable 2D markers for AR, Mixed and Augmented Reality, Proceedings of The Second IEEE and ACM International Symposium on, 2003, pp. 264-267. DOI: 10.1109/ISMAR.2003.1240712.

[9] Piekarski, W.; Avery B.; Thomas, B. H. & Malbezin, P. (2004). Integrated head and hand tracking for indoor and outdoor augmented reality, Proceedings of IEEE, pp. 11-276. DOI: 10.1109/VR.2004.1310050

[10] Egnal, G.; Mintz, M. & Wildes, R. (2004). A stereo confidence metric using single view imagery with comparison to five alternative approaches, Proceedings of Image and Vision computing, pp. 943-957, DOI: 10.1016/j.imavis.2004.03.018.

This Publication has to be referred as: Polcar, J[iri]; Martirosov, S[ergo] & Kopecek, P[avel] (2016). Active Camera Positional Tracking for Augmented Reality Applications, Proceedings of the 27th DAAAM International Symposium, pp.0447-0451, B. Katalinic (Ed.), Published by DAAAM International, ISBN 978-3-902734-08-2, ISSN 1726-9679, Vienna, Austria

Caption: Fig. 1. Marker tracking algorithm modification.

Caption: Fig. 2. Camera tracking algorithm.

Caption: Fig. 3. Example of tracking results: Registration of a point cloud with a real view.
COPYRIGHT 2017 DAAAM International Vienna
