
New shadow detection and removal approach for VHRS neural stereo correspondence.

1 INTRODUCTION

In recent years, very high resolution (VHR) urban satellite and aerial images have come to show very fine details of features such as buildings, roads, cars and vegetation. However, shadows are an inevitable natural phenomenon, usually cast by elevated objects such as buildings, bridges and towers, and their effect must be reduced in order to extract the information required by the targeted application.

2 STATE OF THE ART

In general, shadow detection and removal methods work together to remove shadows [1]. Several works have addressed this issue in the literature, the most important being the reviews presented in [2] and [3]. Andres et al. [2] surveyed recent methods for detecting shadows of moving objects in video sequences, while Adeline et al. [3] presented a comparative study of shadow detection methods for single very high resolution aerial images. From these works, shadow detection methods for very high resolution remote sensing images can be classified into four main classes: machine learning methods, physics-based methods, model-based methods and property-based methods.

In the first class, Martel-Brisson and Zaccarin used a Gaussian mixture model as an unsupervised method to classify shadowed regions [4]. Levine and Bhattacharyya used a Support Vector Machine (SVM) to classify shadow boundaries obtained by segmentation [5]. In [6], Lalonde et al. proposed a supervised shadow classification using a conditional random field. The last two methods require reference samples to train the classifier and/or are generally computationally expensive.

In physics-based methods, shadow detection algorithms use the physical meaning of shadow formation: they take into consideration the illumination, the atmospheric conditions and some physical properties of materials. Few papers have been published in this class [7], [8] because of the lack of such additional information.

On the other hand, model-based methods need an accurate 3D model or the atmospheric illumination conditions to determine where shadows are located in the image [3]. Using digital surface models (DSM), several geometric methods for shadow detection have been published. In this framework, Rau et al. [8] used a Z-buffer technique with DSM data and multi-view orthophotos. More recently, Tolt et al. [10] employed a straightforward line-of-sight analysis with a very accurate DSM combined with SVM supervised classification.

For property-based methods, a priori information is no longer required; the methods can be applied directly to images based on specific properties such as chromaticity and intensity. In [11], Dare detected shadows in panchromatic IKONOS images using an Otsu thresholding technique [12] at a predetermined level followed by post-processing of the segmented regions. Tsai [13] exploited the chromaticity properties of shadows in several invariant colour spaces to detect shadows in QuickBird images. Arevalo et al. [14] exploited both a shadow-invariant colour component and edge information. More recently, Krishna et al. [1] detected shadows in the HSV colour space using Otsu's method for thresholding.

It is important to mention that almost all the shadow detection research cited above was applied to single very high resolution images (aerial or satellite). In this framework, we propose a new, fast image shadow detection technique in the RGB colour space based on pixel intensity.

Once shadow areas are located, the removal process can be conducted. Shadow removal techniques clearly depend on the detection method used. Two main classes of shadow removal methods can be distinguished: gradient-based methods and multiplicative-factor-based methods. In the first, the gradient is zeroed at the shadow boundaries and the image is then reconstructed [15]; in this case user intervention is needed to specify the shadow boundary. Methods of the second class remove shadows by applying multiplicative factors to the shadow pixels. Almoussa [9] minimized an energy function to obtain the optimal factor values. In the same class, Murali and Govindan [16] removed shadows by multiplying the R, G and B channels of the shadow pixels by appropriate constants chosen heuristically. In comparison, multiplicative-factor-based methods are simpler than gradient-based ones.

In this work, shadow detection and removal is performed on stereo VHR remote sensing images as a pre-processing stage; hence, the computational load should be kept low. Moreover, additional data such as reference images, atmospheric conditions or DSM models are not required. Among property-based methods, the shadow detection process has proved to be strongly dependent on the choice of features. For the shadow removal step, we are inspired by the energy minimization concept applied by Almoussa [9] and propose a new technique based on minimizing an energy function of shadowed and illuminated areas, in order to find three suitable coefficients related to the three image components R, G and B.

The proposed shadow detection and removal method is then applied as a pre-processing step to the satellite stereo-matching method proposed by Zigh et al. [17]. The obtained results show the accuracy and efficiency of the detection process.

3 SHADOW DETECTION AND REMOVAL

Shadows occur when objects totally or partially occlude direct light from the illumination source. Shadows can be divided into three classes: cast shadows, self shadows and shadow boundaries (Figure 1). A cast shadow is projected by the object onto the background, away from the light source, whereas a self shadow is the part of the object that is not illuminated by direct light. Boundaries are the edges of the shadow.

Most of the methods proposed in the remote sensing field deal only with cast shadows [14]. In this paper, our challenge is to deal with cast and self shadows as well as with shadow boundary correction. We note that only large shadow surfaces are treated; the smallest ones are not considered, since they can be helpful for subsequent processing stages such as building extraction (Figure 1).

[FIGURE 1 OMITTED]

3.1 Description of the proposed method

A thresholding procedure is first applied using Otsu's method [12] to detect shadows; only large shadow areas are removed. Once detection is done, an energy function of shadowed and illuminated areas is minimized to determine the multiplicative factor values that compensate the shadow regions. Shadow boundary suppression is then performed.

Figure 2 summarizes the main steps of the proposed shadow detection and removal method, applied to improve the building stereo matching process of urban VHR IKONOS 2 image pairs.

[FIGURE 2 OMITTED]

3.1.1 Shadow detection step:

We are interested in removing shadows in the RGB colour space. The proposed detection method is based on both spectral and spatial image information. It consists basically of an Otsu thresholding algorithm followed by specific filters, namely a median filter and a morphological closing operator.

First, the algorithm separates shadows (cast and self shadows) from non-shadows by thresholding at the level l determined by Otsu's method, according to (1):

I_shadow(i,j) = 0 if I(i,j) < l, and I_shadow(i,j) = 1 otherwise    (1)

where:

I_shadow(i,j): shadow binary image obtained by Otsu thresholding,

i, j: pixel coordinates,

I(i,j): initial image,

l: Otsu threshold.

As a result, we obtain a binary image whose zero pixels correspond to shadow areas (Fig. 3(b)). The Otsu algorithm detects all shadows, small and large, and each shadow area should be treated separately. To avoid an expensive computational load, small shadows are considered as artifacts and deleted using the following specific filters:

a. Filtering outside the considered shadows: every dark or low-brightness region lying outside the considered shadows is detected, and a morphological closing operator is applied to eliminate it from the binary shadow image so that it is not treated further (Fig. 3(c)).

b. Filtering inside the considered shadows: bright or high-brightness regions lying inside the considered shadows would otherwise not be detected as shadowed areas (Fig. 3(b)); a correction procedure is therefore conducted using median filters together with a morphological closing operator (Fig. 3(c)).

Once the shadow regions, whether cast or self, have been detected (Fig. 3(c)), they must be removed in the next step.
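A minimal Python sketch of this detection stage is given below (using OpenCV and NumPy); the helper name detect_shadow_mask and the parameter values (kernel size, minimum shadow area) are illustrative assumptions rather than values specified in the paper.

import cv2
import numpy as np

def detect_shadow_mask(rgb_image, min_area=500, kernel_size=5):
    # Sketch of the detection stage: Otsu thresholding followed by a median
    # filter, a morphological closing and a size filter. Parameter values
    # are assumptions, not taken from the paper.
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)

    # Otsu threshold l: pixels darker than l are labelled as shadow (eq. (1)).
    l, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    shadow = (binary == 0).astype(np.uint8) * 255

    # Median filter removes isolated speckles in the binary mask.
    shadow = cv2.medianBlur(shadow, kernel_size)

    # Morphological closing fills small bright holes inside shadow regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    shadow = cv2.morphologyEx(shadow, cv2.MORPH_CLOSE, kernel)

    # Keep only large shadow components; small ones are treated as artifacts.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(shadow)
    mask = np.zeros(shadow.shape, dtype=bool)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask |= labels == i
    return mask  # True on pixels belonging to large shadow regions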

[FIGURE 3 OMITTED]

3.1.2 Shadow removal step:

We propose a new shadow removal technique based on an energy minimization concept followed by shadow boundary suppression.

Since we assume that the shadowed regions should have almost the same illumination as the nearest non-shadow regions [16], we compute the mean value of each colour component (R, G and B) inside and outside the shadow regions. An energy function of the shadowed and non-shadowed region illumination can then be defined.

Let E(c) be the energy function defined by

E(c) = (c_R μ^R_in - μ^R_out)^2 + (c_G μ^G_in - μ^G_out)^2 + (c_B μ^B_in - μ^B_out)^2    (2)

where c = (c_R, c_G, c_B) denotes the required amount of illumination compensation for shadow removal,

with:

c_R: illumination compensation factor for the red component,

c_G: illumination compensation factor for the green component,

c_B: illumination compensation factor for the blue component,

μ^R_in, μ^R_out: mean values of the image red component inside and outside the shadow region,

μ^G_in, μ^G_out: mean values of the image green component inside and outside the shadow region,

μ^B_in, μ^B_out: mean values of the image blue component inside and outside the shadow region.

The problem thus reduces to finding the required compensation vector c.

The partial derivatives of E(c) are calculated and set to zero:

∂E/∂c_R = 2 μ^R_in (c_R μ^R_in - μ^R_out) = 0    (3)

∂E/∂c_G = 2 μ^G_in (c_G μ^G_in - μ^G_out) = 0    (4)

∂E/∂c_B = 2 μ^B_in (c_B μ^B_in - μ^B_out) = 0    (5)

Solving these three equations gives:

c = (μ^R_out / μ^R_in, μ^G_out / μ^G_in, μ^B_out / μ^B_in)    (6)
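A minimal NumPy sketch of this removal step, assuming a boolean shadow mask such as the one produced by the hypothetical detect_shadow_mask helper above, could look as follows; for simplicity it uses global means over all shadow pixels, whereas the paper treats each large shadow area separately.

import numpy as np

def remove_shadow(rgb_image, shadow_mask):
    # Compute one compensation factor per colour channel (eq. (6)) and
    # multiply the shadowed pixels by it.
    result = rgb_image.astype(np.float64)
    for ch in range(3):  # R, G, B
        channel = result[..., ch]
        mu_in = channel[shadow_mask].mean()    # mean inside the shadow
        mu_out = channel[~shadow_mask].mean()  # mean outside the shadow
        c = mu_out / mu_in                     # compensation factor (eq. (6))
        channel[shadow_mask] *= c
    return np.clip(result, 0, 255).astype(np.uint8)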

The proposed shadow detection and removal approach was applied to several urban image pairs. Figure 4 illustrates an example of shadow removal from an IKONOS 2 VHR satellite image.

[FIGURE 4 OMITTED]

In the resulting image (b), we notice that hidden objects are recovered, but an over-illumination appears at the shadow boundaries because they too are multiplied by the compensation factor (eq. (6)). To overcome this problem, we propose a new shadow boundary suppression technique consisting of two parts, shadow boundary detection and shadow boundary smoothing, as sketched after the two steps below.

a. Shadow boundary detection: the shadow boundaries are detected from the shadow binary image. A Canny filter is applied first, then a morphological dilation is used to increase their thickness (Figure 5(c)).

b. Shadow boundary smoothing: this is achieved using a mean filter (Figure 5(d)).
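A sketch of this boundary suppression, under the same assumptions as the previous snippets (hypothetical helper names, illustrative thickness and filter sizes), is:

import cv2
import numpy as np

def suppress_boundaries(rgb_image, shadow_mask, thickness=5, blur_size=7):
    # Detect the shadow boundaries with a Canny filter applied to the binary
    # shadow mask, thicken them by morphological dilation, then replace the
    # boundary pixels with a mean-filtered version of the image.
    mask_u8 = shadow_mask.astype(np.uint8) * 255
    edges = cv2.Canny(mask_u8, 100, 200)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (thickness, thickness))
    boundary = cv2.dilate(edges, kernel) > 0
    smoothed = cv2.blur(rgb_image, (blur_size, blur_size))  # mean filter
    result = rgb_image.copy()
    result[boundary] = smoothed[boundary]
    return result

With these hypothetical helpers, the whole pre-processing chain reads roughly: mask = detect_shadow_mask(img); restored = suppress_boundaries(remove_shadow(img, mask), mask).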

4 APPLICATION TO THE BUILDING STEREO MATCHING PROCESS

The entire process was tested on stereo image pairs captured by the IKONOS 2 satellite.

In this framework, we use shadowed image pairs covering urban areas with elevated buildings, where significant shadow regions appear. The shadow in the second original pair (Figure 7) is larger and more complicated than that in the first original pair (Figure 6).

The proposed shadow detection and removal method is applied to improve the building stereo matching process proposed by Zigh et al. [17], which is conducted using a Hopfield neural matching method initialized with geometric and photometric region properties (surface, elongation, perimeter, colour and gravity centre coordinates) [17].
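As an illustration of the kind of region descriptors involved, the short sketch below computes the listed geometric and photometric properties for labelled building regions; the function name, the use of scikit-image and the definition of elongation as the major/minor axis ratio are assumptions for illustration, not details taken from [17].

import numpy as np
from skimage.measure import regionprops

def region_descriptors(label_image, rgb_image):
    # Geometric and photometric properties used to initialize the matching:
    # surface, elongation, perimeter, mean colour and gravity centre.
    descriptors = {}
    for region in regionprops(label_image):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        mean_colour = rgb_image[rows, cols].mean(axis=0)
        elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        descriptors[region.label] = {
            "surface": region.area,
            "elongation": elongation,
            "perimeter": region.perimeter,
            "colour": mean_colour,
            "gravity_center": region.centroid,
        }
    return descriptors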

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

4.1 Results of stereo matching process on the original pairs (shadowed pairs)

To demonstrate the efficiency of the proposed shadow detection and removal method, we first apply the stereo matching process to the original (shadowed) pairs. In the following, we present the results of the stereo matching process on these shadowed images.

A - First shadowed pair: below are the results of the stereo matching process applied to the first shadowed pair of images.

[FIGURE 7 OMITTED]

As illustrated above, the fuzzy building extraction step provides a good-quality segmentation of the regions of interest (buildings); however, the resulting number of regions is not the same in each image. This is due to two main reasons. The first is the variation of the capture conditions between the right and left images; as a result, some regions of the left image have no homologue in the right one (figure 8 (a2), (b2)), such as regions 18 and 26 in figure 8 (b2). The second reason is that shadow hides many regions and keeps them from being extracted. In total, 25 regions are found in the right image (figure 8 (a2)), whereas 43 regions are obtained in the left one (figure 8 (b2), Table 1).

The results obtained at this step (building extraction) have a direct effect on the neural stereo matching process. Indeed, one can notice a correct stereo matching of building facades; e.g., region 14 of the right image is matched with region 25 of the left image (figure 8 (a2), (b2)). However, confusion is possible when matching small regions such as regions 1, 2 and 3 (figure 8 (a3), (b3)), because of the similarity of their geometric characteristics. The achieved matching rate is 44% for the right image and 25.58% for the left one (Table 1), with one ambiguous region.
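For reference, these rates are the number of matched region pairs divided by the number of extracted regions in the corresponding image, consistent with Table 1: 11 matched pairs out of 25 right-image regions gives 11/25 = 44%, and 11 out of 43 left-image regions gives 11/43 ≈ 25.58%.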

B - Second shadowed pair: the stereo matching process is now applied to the second pair of images, which can be considered one of the most complicated shadowed VHR remote sensing pairs available. The corresponding results are presented below.

[FIGURE 9 OMITTED]

We can notice that the shadow surface covering the same scene varies between the left and right images. This variation increases with the shadow complexity, which is much more pronounced near the skyscraper area (figure 9 (c), (d)). In this case, not all shadowed buildings could be extracted, as clearly illustrated in figure 9 (c2), (d2), where they are merged with the image background. In the fuzzy building extraction procedure, we obtained 35 regions in the right image and 43 regions in the left one.

As a result, shadowed buildings are not matched; the matching rate reached for the right image (31.42%) is smaller than that obtained for the first pair (44%), with one ambiguous region. For the left image, the same stereo matching rate was obtained for the two pairs (25.58%), whereas the number of ambiguous regions doubled (Table 1).

4.2 Results of stereo matching process on the recovered shadow-free pairs

We now apply the proposed shadow detection and removal method to recover the information hidden by shadows. All shadows were first classified as image background (figure (aa1)). One can notice that shadow removal reveals new areas (buildings) and that fuzzy building extraction becomes more efficient. In addition, many more regions are obtained than in the first case (first shadowed pair in figure 8): 33 regions in the right image and 46 regions in the left one. This building extraction result has a direct and positive impact on the neural stereo matching process.

[FIGURE 9 OMITTED]

In figures (aa3) and (bb3), an example of a totally hidden building being recovered is the pair of regions labelled 12; an example of a partially hidden building being recovered is the pair of regions labelled 16 in the same figures. As a result, we obtain an interesting stereo matching rate reaching 63.63% in the right image and 45.65% in the left one (Table 2). This corresponds to an improvement of 19.85% in the mean matching rate over the first stereo matching case (first shadowed pair).

We then apply the stereo matching process to the second recovered shadow-free pair. The obtained results are illustrated below.

[FIGURE 10 OMITTED]

For this pair of images with an extreme amount of shadow, the proposed approach provides an interesting recovery of shadowed objects. We can see, for example, from figure 11 (cc3) and (dd3) that regions 8 and 15, which were totally hidden in the original pair of images (figure 9 (c1), (d1)), have been recovered. Another example is the partially hidden region labelled 2 (figure 9 (c1), (d1)), which has been completely restored after shadow treatment (figure 11 (cc3), (dd3)).

As a result, we obtain 56 segmented regions in the left image and 50 extracted regions in the right one (Table 2). In comparison, only 43 segmented regions in the left image and 35 extracted regions in the right one (Table 1) were detected for the initial shadowed images.

Moreover, after the fuzzy building extraction step, the stereo matching yields a rate of 32.14% for the left image and 36% for the right image, which demonstrates the efficiency of the proposed algorithm (Table 2). This corresponds to an improvement of 5.57% in the mean matching rate over the first stereo matching case (second shadowed pair).

5 CONCLUSION

A new method for detecting and removing shadows from RGB very high resolution remote sensing images has been proposed. We chose to operate on IKONOS 2 images as an example of the most challenging shadowed remote sensing images.

The primary goal of the proposed method is to deal with cast shadows, self shadows and shadow boundaries. First, a shadow detection process is performed using an Otsu thresholding procedure followed by morphological filtering. The detected shadows are then removed using an energy minimization function providing a set of illumination compensation factors. The resulting images show good compensation quality over cast and self-shadow areas, whereas an over-illumination occurs at the shadow boundaries; to overcome this problem, we propose a shadow boundary suppression technique using a Canny filter, a morphological dilation and a smoothing procedure.

The proposed method does not require any a priori information. Its second objective is to improve a stereo matching process in a complex urban environment. In this framework, the efficient suppression of shadows leads to an improvement of the stereo matching rate and a considerable reduction of ambiguous regions over the VHR IKONOS 2 images.

The proposed method is fully automatic, fast and simple. Its application is not limited to the stereo matching of buildings; it can easily be applied to shadowed image restoration.

REFERENCES

[1.] Krishna, K. S., Kirat, P., Nigam, M.J., 2012. "Shadow detection and removal from remote sensing images using NDI and morphological operators". International journal of computer applications 42. N. 10, 657-8887

[2.] Andres, S., Conrad, S., Brian, C. L., 2012. "Shadow detection: A survey and comparative evaluation of recent methods". Pattern recognition 45, 1684-1695.

[3.] Adeline, K.R.M., Chen, M., Briottet, X., Pang, S.K., Paparoditis, N., 2013. "Shadow detection in very high spatial resolution aerial images: A comparative study". ISPRS Journal of Photogrammetry and Remote Sensing 80, 21-38.

[4.] N. Martel-Brisson and A. Zaccarin. "Learning and removing cast shadows through a multidistribution approach". IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1133-1146, 2007

[5.] Levine, M. D., Bhattacharyya. J., 2005. "Removing shadows". Pattern recognition letters 26, 251-265.

[6.] Lalonde, J.-F., Efros, A.A., Narasimhan, S. G., 2010. "Detecting ground shadows in outdoor consumer photographs. In Proc. 11th European Conference on Computer Vision": Part II, Heraklion, Crete, 5-11 September, pp. 322-335.

[7.] Adler-Golden, S.M., Matthew, M.W., Anderson, G.P., Felde, G.W., Gardner, J.A., 2002. "An algorithm for de-shadowing spectral imagery". Proc. SPIE, Imaging Spectrometry VIII 4816, 203-210.

[8.] Rau, J.Y., Chen, N.Y., Chen, L.C., 2002. "True orthophoto generation of built-up areas using multi-view images". Photogrammetric Engineering and Remote Sensing 68 (6), 581-588.

[9.] Almoussa. N., 2005. "Variational retinex and shadow removal". The Mathematics Department--UCLA Under the mentorship of Dr. Todd Wittman.

[10.] Tolt, G., Shimoni, M., Ahlberg, J., 2011. "A shadow detection method for remote sensing images using VHR hyperspectral and LIDAR data". In: Proc. Geoscience and Remote Sensing Symposium, IGARSS, Vancouver Canada, 25-29 July 2011, pp. 4423-4426.

[11.] Dare Paul, M., 2005. "Shadow analysis in high-resolution satellite imagery of urban areas". Photogrammetric Engineering and Remote Sensing, 71(2): 169-177.

[12.] Otsu, N., "A threshold selection method from gray level histograms" Publication, IEEE Transactions on Systems, Man and Cybernetics, 1979.

[13.] Tsai, V.U.D., 2006. "A comparative study on shadow compensation of color aerial images in invariant color models". IEEE Transactions on Geoscience and Remote Sensing 44 (6), 1661-1671.

[14.] Arevalo, V., Gonzalez, J., Ambrosio, G., 2008. Shadow detection in color high- resolution satellite images. International Journal of Remote Sensing 29 (7), 1945- 1963.

[15.] Feng, L., Gleicher, M., 2008. "Texture-Consistent Shadow Removal". European Conference on Computer Vision, Part IV, pp. 437-450, Springer-Verlag Berlin Heidelberg.

[16.] Murali, S., Govindan, V.K., 2013. "Shadow Detection and Removal from a Single Image using LAB Color Space". Cybernetics and Information Technologies, Volume 13, No 1.

[17.] Zigh, E., Belbachir, M.F., 2012. "Soft computing strategy for stereo matching of multi spectral urban very high resolution IKONOS 2 images". Applied soft computing 12, 2156-2167.

[18.] Finlayson, G.D., Hordley, S.D., Lu, C., Drew, M. S. 2002. "Removing Shadows from Images". In: Proceedings of 7th European Conference on Computer Vision--Part IV, ECCV'02, London, UK, Springer-Verlag, pp. 823-836.

[19.] Finlayson, G. D., Hordley, S. D, Lu, C., Drew, M. S. 2006. "On the Removal of Shadows from Images".--IEEE Trans. Pattern Anal. Mach. Intell., Vol. 28, No 1,pp. 59-68.

[20.] Finlayson, G. D., Drew, M. S., Lu, C. 2009. "Entropy minimization for shadow removal". IJCV, 85(1):35-57.

[21.] Martel-Brisson, N., Zaccarin, A., 2005. "Moving cast shadow detection from a gaussian mixture shadow model". In. Proc. Computer society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 20-25 June, 2, pp. 643-648.

[22.] Qiang, H., Chee-Hung, H.C., 2013. "A New Shadow Removal Method for Color Images". Advances in Remote Sensing, Vol. 2, pp. 7-84.

[23.] Richter, R., Muller, A., 2005. "De-Shadowing of satellite/airborne imagery". International Journal of Remote Sensing 26 (15), 3137-3148.

[24.] Weiss, Y., 2001. "Deriving intrinsic images from image sequences". In: IEEE ICCV, pp. 68-75.

H. Dermeche, A.Benali, N.Benmoussat, E.Zigh, M.F. Belbachir

LSSD: Signal, System and Data Laboratory, University of Sciences and Technology of Oran Mohamed Boudiaf (USTOMB), B.P. 1505, El M'naouer, Bir El Djir, 31000, Algeria.

dermeche@live.fr, benabdel1984@hotmail.com, benmoussat_na@yahoo.fr
Table 1: Neural stereo matching method applied on the original (shadowed) pairs

Stereo shadowed images                              First pair       Second pair
Number of matched pairs of regions (left image)     11 per 43        11 per 43
Number of matched pairs of regions (right image)    11 per 25        11 per 35
Number of ambiguous regions (left, right)           (01, 01)         (02, 01)
Matching rate (left image, right image)             25.58%, 44%      25.58%, 31.42%
Mean matching rate                                  34.79%           28.5%

Table 2: Neural stereo matching method applied on the recovered shadow-free pairs

Stereo recovered shadow-free images                 First pair        Second pair
Number of matched pairs of regions (left image)     21 per 46         18 per 56
Number of matched pairs of regions (right image)    21 per 33         18 per 50
Number of ambiguous regions (left, right)           (01, 01)          (01, 01)
Matching rate (left image, right image)             45.65%, 63.63%    32.14%, 36%
Mean matching rate                                  54.64%            34.07%