
All-Weather Road Image Enhancement using Multicolor Content-Aware Color Constancy.


In preparation for the widespread use of self-driving cars, many studies are being conducted in relation to advanced driver assistance systems (ADAS). ADAS is designed to assist the driver and ultimately enable the car to drive itself. In other words, when a driver is unable to deal with a situation, this technology will assist with safe driving by alerting the driver and preventing an accident from occurring. To achieve this, the vehicle's external environment must be recognized, which requires various sensors such as radar, lidar, ultrasonic sensors, and cameras. Although the camera sensor is sensitive to changes in weather and light, it is capable of recognizing objects; thus, it is used to identify lanes, traffic lights, traffic signs, pedestrians, and the front and rear of other cars. The recognition of surroundings using a camera sensor is a core function of ADAS and self-driving cars; thus, continuous research and development on this technology is underway.

Road images captured by the camera sensor are influenced by various sources of light depending on the weather and time of day. Road images captured under a light source other than usual sunlight can suffer distortion of the true colors of objects. Fig. 1 shows examples of road images that change according to the weather and time of day. The true colors of each object in the image can be used as important features for object detection and recognition, but road images are subject to significant changes in color due to changes in the light source [1-3], as shown in Fig. 1. Restoring the original colors may enhance the ability to detect and recognize objects.

Previous image enhancement approaches either increase contrast or reduce color distortion. The retinex theory proposed by Land and McCann [4] and the Automatic Color Equalization (ACE) method both work by increasing the contrast of an image. The retinex theory states that brightness and color are determined by the reflective elements and light elements on the surface of objects. Land and McCann demonstrated that color comparisons with the surroundings are required in order for people to recognize color. Jobson et al. [5] proposed a single-scale retinex (SSR) method that reduces the influence of light based on retinex properties that model the human visual system. This method applies a Gaussian filter to the input image, whose response is then removed to eliminate the light elements in the image. The main disadvantage of the SSR [5] method is that only one filter is used, and its results vary with the filter size. To improve this, the multiscale retinex (MSR) [6] method was proposed, which applies different weights to various Gaussian filter sizes and uses the average of the resulting images. This method was proposed to supplement the SSR, which uses only a single filter size. However, color information was still distorted, because the MSR fails to consider the correlations between the RGB channels. To resolve this color distortion issue, the multiscale retinex with color restoration (MSRCR) method, which considers color information, was proposed [7]. The MSRCR [7] method adds a color restoration function to the MSR to restore colors in images taken in low-light conditions and increases contrast. However, color distortion due to the increased contrast remains a problem, and there is also significant noise in nighttime images.

Gatta et al. [8] proposed the ACE method, which enhances the local contrast of images by imitating the human visual system. This method uses the relative brightness value obtained from the difference between the current pixel and its surrounding pixels. Although this method can be used under various lighting conditions, because each RGB channel is enhanced independently, it leads to an overall color distortion in the image. Moreover, the contrast becomes excessive since the absolute brightness value is not considered. Therefore, image correction methods that try to increase contrast have disadvantages such as low-quality results in low-light environments and long processing times.

This paper proposes a method that enhances images in real time and is stable under various lighting conditions. It is based on the human visual system rather than on existing image enhancement methods. The proposed method builds on previous research on detecting traffic signs from grayscale road images, and uses the actual colors of the detected traffic signs as the standard colors for correction. Color constancy methods typically correct distorted color values based on the color white, but the proposed method makes corrections by automatically selecting, from among the various colors in a traffic sign, the color that will lead to the most desirable result. As a result, it remains effective despite changes in light and yields stable color correction results under various weather conditions and times of day, compared with most methods based on image processing. Unlike existing color constancy methods, which are based on a single color in an image, this is a content-aware color constancy method that considers various colors simultaneously to enhance the accuracy of color correction. Moreover, the proposed method enables the real-time processing required for road driving images used in ADAS. Fig. 2 shows the result of applying the proposed method to the same road images as in Fig. 1. With the enhanced image quality, objects such as the car in front, traffic signs, and traffic lights, which were previously missed due to changes in weather or time of day in Fig. 1, can be found more robustly.

Section 2 of this paper introduces the color constancy theory and the content-aware color constancy method. Section 3 proposes a new color constancy method based on multicolor content awareness. Section 4 evaluates the performance of the proposed method, and Section 5 concludes the paper.


This section will explain the color constancy theory and the white balance method, before discussing the color constancy method based on content awareness. It also introduces the color adaptation model typically used in color constancy and explains the existing content-aware color constancy method.

A. Color Constancy

Humans recognize the color of an object through the light reflected from the object. If the color of the light source is not white, the recognized color of the object may change. However, the human visual system still recognizes the object because of color constancy. For example, the borders of traffic signs in road images are perceived as red regardless of whether it is daytime or nighttime. To imitate human color constancy, the brightness I_i(x,y) of each channel acquired by a camera sensor modeled after the human eye is defined as in Equation (1), where R_i(x,y) is the reflectivity of the object for each channel and L_i(x,y) is the illumination for each channel. This is the basic model used in most color correction, and color constancy methods estimate the illumination by applying various assumptions to this equation.

I_i(x,y) = R_i(x,y) × L_i(x,y)   (1)

The white balance method [9] is one of the main illumination estimation methods based on color constancy; it assumes that achromatic colors are present in the image. Buchsbaum [10] proposed the Gray World white balance method, which assumes that the average of each channel in an image is achromatic, defined through Equations (2) and (3).

α = G_avg / R_avg,   β = G_avg / B_avg   (2)

I_r(x,y) = α × I_r(x,y),   I_b(x,y) = β × I_b(x,y)   (3)

In Equation (2), R_avg, G_avg, and B_avg denote the averages of each channel in the input image, and G_avg is used as the brightness reference of the image. α and β are obtained from G_avg, the average brightness of the image, and are then used as image correction coefficients in Equation (3). A drawback of the Gray World method is that low-light images become overcorrected.
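As a minimal sketch (not the authors' implementation), the Gray World correction of Equations (2) and (3) can be written as follows, assuming a float H x W x 3 image in RGB channel order:

```python
import numpy as np

def gray_world(img):
    """Gray World white balance: scale the R and B channels so that
    their means match the mean of the G (brightness) channel.

    img: H x W x 3 float array in RGB order.
    """
    r_avg, g_avg, b_avg = img.reshape(-1, 3).mean(axis=0)
    alpha = g_avg / r_avg          # Equation (2)
    beta = g_avg / b_avg
    out = img.copy()
    out[..., 0] *= alpha           # Equation (3): correct the R channel
    out[..., 2] *= beta            # Equation (3): correct the B channel
    return out
```

After correction, all three channel means coincide, which is exactly the Gray World assumption that the scene average is achromatic.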

The white patch retinex (WP) method [11] is similar to the Gray World method [10]; it assumes that the area with the highest brightness in each channel of an image is white. To estimate the illumination, R_max, G_max, and B_max, the maximum pixel brightness values of each channel in the input image, are obtained. The correction coefficients are calculated using Equation (4) and then applied as in Equation (3).

α = G_max / R_max,   β = G_max / B_max   (4)

Fig. 3 shows the results of applying the WP [11] method. The top images in Fig. 3 are identical to Fig. 1, which are the original images before processing. The bottom images in Fig. 3 show the results of applying the WP method on the original images. The white balance method exhibits stable image processing performance compared to typical image processing methods, but there are often no enhancements from the original image if the contrast ratio is already high.

Besides Equation (1), which expresses the brightness of each channel, color constancy methods such as white balance can also be expressed with a chromatic adaptation model. The chromatic adaptation model models the human color constancy system, reproducing the colors that are recognized as the same in environments where the light source changes. As in the example above, the red on a traffic sign on a cloudy day is recognized as the same red as on a sunny day. The Von Kries model [12] is the main chromatic adaptation model; it models human visual characteristics through a simple linear calculation. It can be expressed by Equation (5), and it removes the influence of illumination by modeling the relationship between the human visual system and the illumination.

( R_out )   ( k_r   0    0  ) ( R_in )
( G_out ) = (  0   k_g   0  ) ( G_in )   (5)
( B_out )   (  0    0   k_b ) ( B_in )

In Equation (5), k_r, k_g, and k_b denote the Von Kries coefficients for each channel. The color of the input image changes according to these coefficients. In the case of WP [11], the input RGB value would be the maximum pixel value of each channel of the input image, and the output RGB value would be (255, 255, 255). Even when an object is seen under various illuminations, humans apply the appropriate coefficients through the chromatic adaptation model and correctly recognize the true colors of the object.
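A minimal sketch of the diagonal Von Kries transform of Equation (5): given an observed color and the known true (output) color, the per-channel coefficients are simply their channel-wise ratio, and the transform multiplies each channel by its coefficient. This is an illustration under those assumptions, not the paper's code:

```python
import numpy as np

def von_kries_coeffs(observed_rgb, true_rgb):
    """Per-channel Von Kries coefficients k_r, k_g, k_b (Equation (5)):
    the ratio of the desired output color to the observed input color."""
    return np.asarray(true_rgb, float) / np.asarray(observed_rgb, float)

def apply_von_kries(img, coeffs):
    """Apply the diagonal Von Kries transform to every pixel.

    Broadcasting multiplies each RGB channel by its coefficient, which is
    equivalent to the diagonal matrix in Equation (5)."""
    return np.asarray(img, float) * coeffs
```

For example, the WP case maps the per-channel maxima to white: with observed maxima (200, 220, 180) and output (255, 255, 255), the coefficients map the observed color exactly onto white.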

B. Content-Aware Color Constancy

Conventional color constancy methods rely on various assumptions when estimating the illumination. Because situations vary and these assumptions do not always hold, performance may not improve. To obtain better results, this paper applies a content-aware color constancy method that calculates the Von Kries coefficients when the target object has already been detected and its true colors are known.

Hansen et al. [13] introduced the content-aware color constancy method, showing that the recognition of an object seen in the past is not simply related to the colors reflected from the object, but is also influenced by previous experience associated with the object. In their experiment, when the color of a banana was adjusted to monotone, subjects still saw the banana as yellow. This shows that for familiar objects, even when the colors of an image are adjusted to be monotone, the image is not perceived as completely monotone. This phenomenon is called the memory color effect: the object recognition process is influenced not only by stimuli to the eye, but also by the memory of the object's true colors. Xue et al. [14] conducted a perception experiment on three objects: skin, sky, and grass. This experiment examined the main memory color distribution of both context-free and context-based objects. An evaluation of images enhanced through memory color showed that images can be improved if memory color is used. Rahtu [15] and Moreno [16] used memory color to propose frameworks that improved the performance of automatic color constancy, correcting images using the memory color of single-colored objects. Rahtu [15] used three objects (grass, leaf, and sky) as the standard: if a standard object exists in the image, the white balance is adjusted to map the observed color to the true color. This approach demonstrated improved performance over the existing white balance method. Moreno [16] used grass, snow, and sky as the objects and categorized various lighting conditions; the results showed that a Von Kries model [12] suitable for the dataset could be found. Nachlieli [17] and Bianco [18] corrected photos of people using the memory color of facial skin together with face detection technology.
Nachlieli [17] proposed a method that partially corrects a person's image, because people want only the skin smoothed to hide blemishes while keeping the other regions sharp. Bianco [18] estimated the illumination based on skin color and used it to restore color. Because the standard color used for correction in the aforementioned content-aware color constancy methods is only one color of one object, as with the white balance method, it is difficult to enhance images with various sources of light, such as those obtained outdoors.


ACE and MSRCR, the major image enhancement methods, have difficulty with certain illumination conditions. For example, existing image enhancement methods show a dramatic decrease in performance on nighttime images and also have long processing times. On the other hand, the color constancy method shows overall stable image enhancement performance. However, content-aware color constancy methods generally use just one standard color, and if this standard color cannot be found in the image, they have difficulty enhancing it. Moreover, even when a single standard color is available, a different color may yield better results depending on the time of day or weather. Therefore, building on the content-aware color constancy approach, this paper proposes a method that enhances images by selecting the most suitable color from several standard colors, instead of relying on a single standard color. Fig. 4 shows the general flow of the proposed method.

The proposed content-aware color constancy method uses the true color of a specific object that may appear in the image to determine the restoration coefficients for changes in lighting or tone, and applies these to the overall image.

Existing content-aware color constancy methods maintain color constancy by using just one main color for objects such as faces, grass, or the sky [14-18]. Moreover, these methods enhanced images that were taken in controlled environments, making it difficult to obtain the same results for road driving images, which undergo frequent changes in lighting. Therefore, this paper uses traffic signs in road images as the main object for maintaining color constancy. Traffic signs comply with the colors, specifications, and shapes designated by the Vienna Convention on Road Signs and Signals. While these standards may differ slightly between countries, most countries use the same colors, shapes, and specifications. The standard colors used in this paper are the red, white, and yellow of traffic signs. Black was not used, since there is almost no way for images to be enhanced by changes in black-colored areas.

Fig. 5 shows a summary of the proposed method. First, the traffic sign is detected in the input road image, then the pixels within the range of standard colors in the detected traffic sign are selected. The selection is made randomly from within the range of the traffic sign, but it includes all the standard colors. By comparing the recognized color values representing the traffic sign pixels randomly selected under the current lighting conditions against the standard color values of the traffic sign previously extracted from favorable white lighting, the standard color value with the least color distortion was found and the Von Kries coefficients were determined from this value. The magnitude of the selected coefficient was reviewed and the coefficient was readjusted within the set range, then an enhanced image was created by applying this coefficient.

A. MCT Based Road Traffic Sign Detection

Various methods for detecting and recognizing traffic signs have been proposed; they can be categorized into those that detect traffic sign areas in an image and those that recognize the detected content. In this paper, the modified census transform (MCT)-based traffic sign detection proposed in a previous study [19] was applied, and the detected traffic sign region was used as the standard content recognition area.

The census transform (CT) is widely used to reduce the effects of lighting on an image. It compares the brightness of neighboring pixels with the pixel at the center of a 3 x 3 or 5 x 5 window in order to express the edge shape of a local area as an index. The most representative methods include the local binary pattern (LBP) and the MCT. The MCT used by Lim et al. [19] is the transform proposed by Froba and Ernst [20], which expresses the relationship of each pixel in a window to the average brightness of the 3 x 3 window. The brightness of each pixel is compared with the average brightness of the 3 x 3 area: if the pixel is brighter than the average, the corresponding bit is set to 1; if it is darker, the bit is 0. In the original formulation, a 9-bit binary pattern is generated. Because each pixel is only compared with the average of its local area, the influence of the overall brightness of the image is largely nullified and only the reflective elements remain, making the features robust to lighting changes. The 8-bit MCT omits the comparison for the central pixel of the 3 x 3 local area. Fig. 6 shows the process of creating an 8-bit MCT result: the binary index value obtained for each 3 x 3 area is transformed into a brightness value between 0 and 255. This process obtains an index of the structural edge shapes of the 8 neighboring pixels excluding the central pixel, and it is highly robust against noise in the central pixel when detecting lines, diagonals, and corners, even in nighttime images and images with significant noise. Since the number of possible index values is halved, fast processing speed is another advantage. Fig. 7 shows the results of applying the 8-bit MCT in various lighting environments; it shows that this method is robust to changes in lighting.
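The 8-bit MCT described above can be sketched as follows. This is a straightforward illustrative implementation (border handling and the bit ordering of the neighbors are assumptions, not taken from the paper):

```python
import numpy as np

def mct8(gray):
    """8-bit modified census transform (sketch).

    For each interior pixel, compare the 8 neighbors of its 3x3 window
    against the window's mean brightness; each comparison contributes one
    bit of the index. Border pixels are left as 0 for simplicity.
    """
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbors (center excluded), in a fixed bit order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = gray[y-1:y+2, x-1:x+2].mean()
            idx = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] > mean:  # brighter than local mean -> 1
                    idx |= 1 << bit
            out[y, x] = idx
    return out
```

Because every comparison is made against the local mean, adding a constant brightness to the whole image leaves the transform unchanged, which illustrates the lighting robustness discussed above.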

After applying the MCT, which is robust to lighting changes, a four-stage cascaded classifier trained with AdaBoost was used to detect candidate traffic sign areas. The AdaBoost classifier is often used to detect objects in images rather than for detailed recognition, and its low computational cost is an advantage. Froba and Ernst [20] proposed a method that detects faces in an image based on MCT features; it uses a strong classifier composed of weak classifiers, one for each pixel position in the detection window, cascaded in four stages. Through AdaBoost, this method iteratively learns to assign high weights to the pixel classifiers that most easily separate face from background in the MCT-transformed image, and reduces the weights of pixel classifiers with many classification errors. Lim et al. [19] proposed an enhanced AdaBoost algorithm suitable for detecting traffic sign regions, which learns by selecting landmarks judged to be important locations in the shape of the traffic signs to be detected. Fig. 8 shows traffic sign regions detected in low-light or nighttime images. The RGB color values inside the detected traffic sign region were used as the content-aware standard colors, and the color constancy method was applied.

B. RGB Based Multicolor Value Selection

The content-aware color constancy method corrects an image by restoring the colors of an object whose true color data are known. In order to recognize the true colors of traffic signs, which form the basis of correction, standard color values were selected from the pixel values of traffic signs under natural (white) light, where the object's true colors can be recognized. Then, the traffic sign's standard colors, which serve as the reference in road driving images, were determined. The pixels were categorized according to standard colors, and the central value of each standard color group was selected as the standard color value. The selected standard color values are used to set the Von Kries coefficients in Section 3.3.

The standard colors of traffic signs used in this paper are red, white, and yellow. Black was not used because there is almost no way for images to be enhanced by changes in black areas. The three selected colors are suitable as standard colors since they follow the specifications for traffic signs designated by the international convention on road traffic. Rather than using one color to maintain color constancy, multiple colors were selected as the standard colors so that various lighting conditions can be handled effectively. The resulting image enhancements can be seen in the experiments.

Since multiple colors are used as standard colors, the pixels in a traffic sign must be categorized by standard color. First, the background around the traffic sign, which does not contain the standard colors, was removed to facilitate categorization. The remaining pixels were categorized as red, yellow, or white using the Euclidean distance in the RGB color space from red (255, 0, 0), yellow (255, 255, 0), and white (255, 255, 255) to each pixel of the traffic sign image. Each pixel in the image can thus be assigned to the most similar of the three standard colors. Pixels whose Euclidean distance to every standard color exceeds a certain value are considered a different color and are excluded; a Euclidean distance of 128 was used as the threshold in this paper. Of the colors included in the traffic sign, black pixels were excluded because black is not one of the standard colors. Pixels regarded as black lie within a distance of 128 of (0, 0, 0), so they are automatically excluded when pixels are categorized by this distance. Fig. 9(a) shows a traffic sign with the background removed, and Fig. 9(b) shows the distribution in RGB space of the pixels categorized according to the standard colors within the traffic sign region. The categorized points are the RGB values of each pixel, shown in their corresponding colors. Weather conditions other than sunny (daylight) in Fig. 9 are arranged in order of proximity to black, following the experimental method described in Section 4, and the figure shows the distribution after removing the 20% of pixels closest to black.
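The nearest-standard-color assignment with the distance-128 cutoff can be sketched as follows (yellow is assumed to be (255, 255, 0) in RGB; the reference values and threshold are taken from the text above):

```python
import numpy as np

# Standard colors in RGB order (yellow assumed to be (255, 255, 0)).
STANDARD = {"red": (255, 0, 0), "yellow": (255, 255, 0), "white": (255, 255, 255)}

def categorize(pixel, threshold=128.0):
    """Assign a pixel to the nearest standard color by Euclidean distance
    in RGB space, or return None if every distance exceeds the threshold
    (so black and other colors are automatically excluded)."""
    p = np.asarray(pixel, float)
    best, best_d = None, threshold
    for name, ref in STANDARD.items():
        d = np.linalg.norm(p - np.asarray(ref, float))
        if d < best_d:
            best, best_d = name, d
    return best
```

A near-black pixel such as (0, 0, 0) is more than 128 away from all three references, so it is rejected without an explicit black class, matching the exclusion rule described above.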

K-means clustering was performed on each standard color pixel cluster, and the central values of each standard color were found to select M standard color values. Each central value is a sample representing a true color of the traffic sign under white (daylight) lighting. These values are used to find the pair with the least distortion by comparison against the recognized color values of traffic signs under different lighting. Table I shows the M central values obtained as clustering results for each standard color pixel cluster; ten standard color values were selected for each standard color. By comparing these against the recognized color values of a traffic sign area under changing lighting conditions, stable color correction values can be obtained regardless of the weather or time of day.
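A plain k-means over the RGB pixels of one standard color group can produce the M central values; the sketch below uses a simple Lloyd iteration with random initialization (the iteration count and initialization scheme are assumptions, not from the paper):

```python
import numpy as np

def kmeans_centers(pixels, m=10, iters=20, seed=0):
    """Plain k-means (sketch) to obtain M central color values.

    pixels: N x 3 float array of RGB values belonging to one standard color.
    Returns an m x 3 array of cluster centers.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(pixels, float)
    # Initialize centers with m distinct random pixels.
    centers = pts[rng.choice(len(pts), size=m, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for k in range(m):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return centers
```

In practice a library implementation (e.g. scikit-learn's KMeans) would serve the same purpose; the point is only that each of the three standard color groups yields M representative color values.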

C. Determining the Von Kries Coefficients

The standard colors of pixels that will be sampled from the detected traffic sign are red, white, and yellow. Because road driving images are influenced by various outdoor lighting, the current traffic sign color will differ from standard color values with ideal true color data. In other words, standard color values are color values of traffic signs under desirable sunlight, but the sampled pixels may be colors of traffic signs seen on cloudy days, rainy days, or at night.

The proposed method compares recognized color values of sampled pixels that were influenced by various sources of light against standard color values that were selected under average sunlight. Thus, the pixel pair that is closest in color is selected and used to determine the Von Kries coefficients.

In order to sample pixel values corresponding to standard colors in the detected traffic sign, 10% of the area at the top and 10% at the bottom of the sign were removed, and sampling was performed on the remaining 80%. The top and bottom 10% were excluded because they can contain errors from the traffic sign detection process. Fig. 11 shows the sampling area. Because black may be sampled instead of the standard colors, the average brightness of pixels in the sampling range must be above a certain level. From the pixels that pass this criterion, N pixels are randomly selected; here, N was set to 10. The target traffic signs include both circular and triangular shapes; while the outer color is red, the inner background is white or yellow. Therefore, the 10 sample pixels will be either red and white or red and yellow. These pixels are the recognized color values, and they are compared with the previously established standard color values to determine the Von Kries coefficients.
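The sampling step above can be sketched as follows. The brightness threshold value is an assumption for illustration (the paper only says "above a certain level"):

```python
import numpy as np

def sample_sign_pixels(sign_img, n=10, min_brightness=60.0, seed=0):
    """Randomly sample N pixels from the middle 80% of a detected sign.

    The top and bottom 10% of rows are discarded (detection-border errors),
    and dark, near-black pixels below min_brightness are skipped.
    min_brightness is an assumed threshold, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    h = sign_img.shape[0]
    core = sign_img[int(0.1 * h):int(0.9 * h)].reshape(-1, 3)
    bright = core[core.mean(axis=1) >= min_brightness]
    picks = rng.choice(len(bright), size=min(n, len(bright)), replace=False)
    return bright[picks]
```

The returned N pixels are the recognized color values that will be matched against the standard color values in the next step.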

In order to compare the extracted recognized color values against the standard color values and measure the degree of distortion, the angle and the Euclidean distance between two pixels in the RGB color space were used together. The angle between two pixels is the measure typically used to evaluate the performance of color constancy methods: if the angle between two pixels is small, the color distortion is minor. The angles between the recognized and standard color values were measured using the arccosine in Equation (6), and the pixel pairs were arranged in ascending order of angle.

θ = arccos( (A · B) / (‖A‖ ‖B‖) )   (6)

A and B are the sample pixel and the standard color value, respectively, expressed as vectors in the three-dimensional RGB space, and θ is the angle between the two vectors. Then the Euclidean distances were calculated for the three pixel pairs with the smallest angles, i.e., the pairs that minimize color distortion. The Euclidean distance was used because it can measure the difference in color brightness.
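The two-stage selection (rank all sample/standard pairs by angle, then pick among the three smallest angles the pair with the smallest Euclidean distance) can be sketched as:

```python
import numpy as np

def angle(a, b):
    """Angle between two RGB vectors (Equation (6))."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards fp rounding

def best_pair(samples, standards):
    """Pick the (sample, standard) pair with the least distortion:
    rank every pair by angle, then choose, among the three smallest
    angles, the pair with the smallest Euclidean distance."""
    pairs = [(angle(s, t),
              np.linalg.norm(np.asarray(s, float) - np.asarray(t, float)),
              s, t)
             for s in samples for t in standards]
    top3 = sorted(pairs, key=lambda p: p[0])[:3]
    _, _, s, t = min(top3, key=lambda p: p[1])
    return s, t
```

The chosen pair then feeds the Von Kries coefficient computation described next.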

The recognized color value of the selected pair was used as the input RGB value and the standard color value as the output RGB value in Equation (5) to determine the Von Kries coefficients. The image was then corrected by applying the determined coefficients to every pixel of the image. Fig. 10 shows the results of applying the Von Kries coefficients. The standard color value of the selected pixel pair is one of the three standard colors, and Fig. 10 shows the results of correcting an image according to the selected standard color. The first image is the result of selecting red as the standard color for a triangular traffic sign that uses red and yellow; the second image is the result of correcting with white, and the third with yellow. The second row shows the result of applying the WP method for comparison with the proposed method.

D. Readjusting the Von Kries Coefficients

After the Von Kries coefficients are applied, pixel values in the resulting image may exceed the maximum intensity, which is normally 255, in which case the coefficients must be readjusted. Pixel values that exceed 255 produce an image with saturated brightness, as shown in Fig. 12(a).

Therefore, if the coefficients exceed a certain size, α in Equation (7) is applied to the Von Kries coefficients to reduce saturation and generate results such as the one shown in Fig. 12(b). After applying the Von Kries coefficients to an image, if the number of pixel values in any channel that exceed the tolerable limit of 255 is more than 1% of all pixels in the image, the coefficients are adjusted through α.
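The saturation check that triggers the readjustment (more than 1% of values in any channel above 255) can be sketched as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def needs_readjustment(corrected, limit=255.0, tolerance=0.01):
    """Return True if, in any channel, more than 1% of pixel values in
    the corrected image exceed the valid intensity limit of 255."""
    for c in range(3):
        frac = np.mean(corrected[..., c] > limit)  # fraction over the limit
        if frac > tolerance:
            return True
    return False
```

When this check fires, α from Equation (7) is multiplied into the Von Kries coefficients before the final correction is applied.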

[mathematical expression not reproducible] (7)

where Mean_after is the average brightness after correction, Mean_before is the average brightness before correction, and the constant C is the correction coefficient. α is calculated from the difference in average brightness. It is multiplied uniformly with the Von Kries coefficients of an image whose brightness exceeded the threshold. If there is no difference in average brightness before and after correction, α is close to 1 and the Von Kries coefficients remain similar. However, as α approaches 0, the Von Kries coefficients decrease and the average brightness of the corrected image becomes lower than before α was applied.


A. Experimental Environment and Data

In order to evaluate the results of the proposed method, road driving image data provided by Hyundai Mobis were used. Table II shows the environment of the experiment for the proposed method, and Table III shows the domestic road driving image data provided by Hyundai Mobis. The image correction method based on traffic sign extraction was used and all the frames in Table III include traffic signs. The proposed method used multiple colors to maintain stable performance under various lighting environments, and lighting status was categorized by weather and time of day.

The proposed method was applied to road signs of countries that comply with the Vienna Convention on Road Signs and Signals. Three colors, red, white, and yellow, which are the most commonly used, were selected as the reference colors for the signs. For signs in countries that do not comply with the Vienna Convention, the same method can be applied by reconstructing the colors shown in Table I from road sign image data of the region to which the system is applied.

B. Experiment and Performance Measurement Method

The proposed method was tested on images of four different weather conditions. The data used were all outdoor images, and because there is no ground truth image with which to quantitatively evaluate the performance of the proposed method, a qualitative evaluation was first performed. In addition, a method for constructing a ground truth and test set was devised to enable quantitative evaluation.

To evaluate the proposed method quantitatively, the sunny day image was designated as the standard ground truth, and test images were created by distorting the sunny day image according to weather. The traffic sign on a sunny day was selected as the standard state with true color data, and the color of sample pixels from test image traffic signs were assumed as distorted states. By restoring these colors to the colors of the standard state, the proposed method could be evaluated quantitatively. Therefore, the sunny day image was distorted into colors on a cloudy day, rainy day, and nighttime, and the distortion method is explained below. The performance of the color constancy method for outdoor images will be evaluated by measuring the degree of color distortion between the ground truth image at a standard state and the results of applying various image enhancement methods to test images with distorted colors.

The method of distorting an image from a sunny day to another weather condition is similar to the process in Section 3. The method of selecting a standard color value is also the same as that for a sunny day image in Section 3.2, except that the image of the distorted weather condition is used instead of the sunny day image. To eliminate black pixels in the traffic sign, the pixels in the sign were sorted by their distance from white (255, 255, 255) in the RGB color space, and the central value of the 80% of pixels closest to white was calculated. The pixel colors were not categorized by standard color, because Euclidean distance alone is insufficient for categorizing standard colors when the overall image is dark, such as at nighttime or on cloudy days. The calculated central value represents the color of the traffic sign under the distorted lighting. These color values were used to find pixel pairs as in Section 3.3, and the Von Kries coefficients were applied to every pixel in the image to distort it.
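The sampling and Von Kries steps above can be sketched as follows. This is a minimal illustration under the diagonal Von Kries model described in the paper; the helper names and the 1e-6 guard against division by zero are assumptions, not the authors' exact code:

```python
import numpy as np

def central_color(sign_pixels, keep=0.8):
    """Median ("central value") of the `keep` fraction of sign pixels
    closest to white (255, 255, 255), discarding dark/black pixels."""
    px = np.asarray(sign_pixels, dtype=float)
    dist = np.linalg.norm(px - 255.0, axis=1)       # distance from white
    order = np.argsort(dist)
    kept = px[order[: max(1, int(keep * len(px)))]]
    return np.median(kept, axis=0)

def von_kries_gains(reference_rgb, observed_rgb):
    """Per-channel diagonal Von Kries coefficients mapping the observed
    color onto the reference color."""
    return np.asarray(reference_rgb, float) / np.maximum(observed_rgb, 1e-6)

def apply_von_kries(image, gains):
    """Apply the diagonal gains to every pixel and clip to the 8-bit range."""
    return np.clip(np.asarray(image, float) * gains, 0, 255).astype(np.uint8)
```

Distorting a sunny image amounts to applying `apply_von_kries` with gains that map the sunny-day reference color to the sampled color of another weather condition; correction applies the inverse mapping.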

Test images were constructed by distorting the sunny day image with the distorted standard color values calculated for the other weather conditions from the sunny day ground truth, as in Fig. 13. The difference between the ground truth and the results of correcting the distorted test images was then measured for a quantitative performance evaluation. The root mean square (RMS) error and the angular error were used as performance indices, calculated with Equations (8) and (9).

$E_{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left\|A_i - B_i\right\|^{2}}$ (8)

$E_{ang} = \frac{1}{n}\sum_{i=1}^{n}\cos^{-1}\!\left(\frac{A_i \cdot B_i}{\left\|A_i\right\|\left\|B_i\right\|}\right)$ (9)

In Equations (8) and (9), n is the number of pixels, and A and B are the pixel RGB values before and after correction, respectively. Because the RMS error is sensitive to per-channel differences in pixel value, it clearly reflects both color distortion and brightness differences. The angular error in Equation (9) is a standard index for evaluating color constancy methods: the angle between each ground truth pixel and the corresponding pixel of the corrected image measures color restoration performance independently of brightness.
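The two indices can be computed directly from the definitions above. The sketch below follows Equations (8) and (9); reporting the angular error in degrees is an assumption, since the unit is not stated in the text:

```python
import numpy as np

def rms_error(A, B):
    """Root-mean-square error over all pixels and channels (Eq. 8)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return float(np.sqrt(np.mean((A - B) ** 2)))

def angular_error(A, B):
    """Mean angle in degrees between corresponding RGB vectors (Eq. 9)."""
    A = np.asarray(A, float).reshape(-1, 3)
    B = np.asarray(B, float).reshape(-1, 3)
    dot = np.sum(A * B, axis=1)
    norm = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    cos = np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```

Note the complementary behavior: scaling an image uniformly changes the RMS error but leaves the angular error at zero, which is why the angular error isolates chromatic distortion from brightness.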

C. Experiment Results

Fig. 14 compares the proposed method with representative previous methods. The last row shows the results of applying the proposed method to the original images in the first row. The enhancement of the sunny day image shows no major differences from the original, because the sunny day image was used as the standard for color restoration; since specific objects are already easy to find in sunny day road images, no major changes were needed. Enhancing the images of other weather conditions and times of day, such as cloudy, rainy, or nighttime, made the images brighter than the originals and made it easier to find specific objects in the road scene. As the distorted red borders of the traffic signs were corrected to a certain level, results similar to the human color constancy mechanism were observed. The results of applying ACE[8], MSRCR[7], and WP[11] are shown in rows 2, 3, and 4 of Fig. 14, respectively, in comparison to the proposed method. In the daylight results of ACE[8] and the rainy day results of MSRCR[7], the overall images were enhanced and made clearer. However, both methods produced significant color distortions for rainy day and nighttime images with low lighting, because neither considers the correlation between the RGB channels.

Row 4 in Fig. 14 shows the results of the WP[11] color constancy method, which maintains color constancy using only the color white. The results are comparable to those of the multicolor-based color constancy method proposed in this paper, but the image produced by the proposed method is slightly brighter and its colors are restored closer to the true colors of the traffic sign. The differences between the two methods can be seen in more detail in Fig. 15, which shows the results of applying both methods to a triangular traffic sign. The triangular traffic sign contains no white, so only red and yellow were established as the standard colors. Because WP[11] assumes that the maximum brightness value among the pixels in an image is white, it performs poorly on images with a balanced contrast distribution, where the maximum brightness value is already white. As the results in Fig. 15 show, when there is no white in an image with significantly distorted colors, such as on a rainy day, the WP[11] method cannot restore the image properly. With multiple standard colors, as in the proposed method, colors can be restored stably even in dark images.

Table IV compares the processing times of the proposed and existing methods. The processing time of the proposed method is the shortest, up to 26 times faster than ACE[8], making it the most suitable for ADAS technology that requires real-time processing. Table V quantitatively measures the performance of the proposed method: the sunny day image was set as the ground truth and then distorted, and the RMS error and angular error of the restored image were measured.

The ACE[8] and MSRCR[7] methods enhance contrast, so they have difficulty reducing color distortion and restoring color, and they show large errors compared to the proposed method. The WP[11] method produces fewer errors and relatively little color distortion compared to those two methods, since it maintains color constancy by restoring the maximum pixel value in the image to white. However, because it relies only on the color white, it still shows large errors compared to the proposed method in most other lighting conditions.

After testing the proposed method, the RMS errors were 24.5, 22.5, and 29.2 on cloudy days, rainy days, and at nighttime, respectively, and the corresponding angular errors were 1.25, 1.38, and 3.45. The proposed method demonstrated overall superior performance regardless of weather or time of day. However, the RMS error on cloudy days was higher for the proposed method than for WP[11]. The reason can be seen in Table VI, which compares the results of each method on circular and triangular traffic signs on a cloudy day. For circular traffic signs, WP[11] achieved 21.91, better than the 27.27 of the proposed method; for triangular traffic signs, the proposed method achieved the better result of 16.87. WP[11] appears better on cloudy days in Table V because its average over circular and triangular signs is lower than that of the proposed method, and the dataset contains more circular traffic signs, on which WP[11] performs well. For triangular traffic signs, which contain less white than circular ones, WP[11] suffers because it assumes that white pixels exist in the image, whereas the proposed color constancy method, which uses multiple standard colors in addition to white, performs better. Moreover, the proposed method had fewer errors than WP[11] for images that were dark overall, such as those taken on rainy days or at night.

Fig. 16 compares the ground truth sunny day image with the results of applying the proposed method and the existing correction methods to a randomly distorted image. Because the ACE[8] and MSRCR[7] methods enhance contrast, the overall image became clearer; however, the color distortion was severe for the nighttime images, as shown in Fig. 16. The performance of the WP[11] method was similar to that of the proposed method under various lighting environments, but its results were not as stable as those of the proposed multicolor-based method.

D. Discussions

When a sign is completely hidden by mud or other obstructions, the detection method [19], which detects road sign areas with more than 95% accuracy in real time, may fail to find it. In this case, the image is not enhanced for the short interval until the next sign is found; alternatively, the Von Kries coefficients obtained from previous signs can be applied unchanged or with adjusted weights.
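One way to reuse coefficients from previous signs is sketched below. The exponential decay toward the identity (no correction) and the `decay` parameter are hypothetical choices for illustration; the paper does not specify a weighting scheme:

```python
import numpy as np

def blended_gains(previous_gains, frames_since_sign, decay=0.9):
    """When no new sign is detected, reuse the last Von Kries coefficients,
    decaying them toward the identity (no correction) the longer no sign
    has been seen. The decay schedule is an illustrative assumption."""
    prev = np.asarray(previous_gains, dtype=float)
    w = decay ** frames_since_sign      # weight of the old coefficients
    return w * prev + (1.0 - w) * np.ones(3)
```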

On the other hand, when a sign is found but part of it is occluded by dirt, the pixels of the undamaged part can be sampled to find the reference color, as in Fig. 11. Even if one specific color is completely obscured by the damage, the reference color can be selected from the remaining undamaged colors, because the proposed method is based on multicolor selection.

If the color of a sign has changed through long exposure to sun and weather, several color values are included for each reference color, as in Table 1; the value in Table 1 closest to the discolored sign will then be selected, preventing the image from being over-corrected when a discolored road sign is detected on a clear day. If a reference color is found but the correction range of the Von Kries coefficients is large, the degree of image enhancement is limited by readjusting the coefficients.
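The coefficient readjustment mentioned above can be sketched as a simple clamp. The threshold value and the uniform rescaling strategy here are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def limit_gains(gains, max_gain=3.0):
    """Readjust Von Kries coefficients whose correction range is too large:
    rescale the whole gain vector so that no channel exceeds `max_gain`.
    The threshold and rescaling strategy are illustrative assumptions."""
    gains = np.asarray(gains, dtype=float)
    peak = gains.max()
    if peak > max_gain:
        gains = gains * (max_gain / peak)
    return gains
```

Rescaling the whole vector, rather than clipping channels independently, preserves the ratio between the channel gains and therefore the estimated chromaticity of the correction.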


This paper proposed a multicolor content-aware color constancy algorithm that can stably enhance road driving images in real time regardless of the lighting environment. In the proposed method, three colors from traffic signs on an ideal sunny day were selected as the standard colors by detecting traffic signs in road driving images using a method from previous studies. The colors of the traffic signs in an image with distorted lighting were matched with the standard colors, the color constancy model coefficients were calculated, and these values were used to correct the image. If a pixel value of the corrected image exceeded the tolerance level, the coefficients were readjusted and the image was restored accordingly. Our method showed that significantly better results can be obtained for images with low lighting, such as those taken on rainy days or at night. Thus, the proposed method can be used stably under various weather conditions and times of day. How to apply the Von Kries coefficients obtained from previous signs during the course of travel, and how to combine them with the coefficients of the current frame, will be considered in future studies. Future studies should also adaptively process, for each image area, the Von Kries coefficients calculated from multiple traffic signs in a single image.


[1] Y. Zhang, J. Xue, G. Zhang, Y. Zhang, and N. Zheng, "A multi-feature fusion based traffic light recognition algorithm for intelligent vehicles," 33rd Chinese Control Conference (CCC), pp. 4924-4929, 2014. doi:10.1109/ChiCC.2014.6895775

[2] M. Diaz-Cabrera, P. Cerri, and P. Medici, "Robust real-time traffic light detection and distance estimation using a single camera," Expert Systems with Applications, pp. 3911-3923, 2014. doi:10.1016/j.eswa.2014.12.037

[3] M. A. A. Sheikh, A. Kole, T. Maity, "Traffic sign detection and classification using colour feature and neural network," In Intelligent Control Power and Instrumentation (ICICPI), pp. 307-311, 2016. doi:10.1109/ICICPI.2016.7859723

[4] E. H. Land and J. J. McCann, "Lightness and Retinex Theory," JOSA, vol. 61, no. 1, pp. 1-11, 1971. doi: 10.1364/JOSA.61.000001

[5] D. J. Jobson, Z. Rahman, and G. A. Woodell, "Properties and Performance of a Center/Surround Retinex," IEEE Transactions on Image Processing, vol. 6, no. 3, pp. 451-462, 1997. doi: 10.1109/83.557356

[6] Z. U. Rahman, D. J. Jobson, and G. A. Woodell, "Multiscale Retinex for Color Image Enhancement," Proc. International Conference on Image Processing (ICIP), vol. 3, pp. 1003-1006, 1996. doi: 10.1109/ICIP.1996.560995

[7] D. J. Jobson, Z. U. Rahman, and G. A. Woodell, "A Multiscale Retinex for Bridging the Gap between Color Images and the Human Observation of Scenes," IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 965-976, 1997. doi: 10.1109/83.597272

[8] C. Gatta, A. Rizzi, and D. Marini, "ACE: An Automatic Color Equalization Algorithm," In Conference on Colour in Graphics, Imaging, and Vision, pp. 316-320, 2002.

[9] M. D. Fairchild, "Color Appearance Models", John Wiley & Sons, 2013. doi: 10.1002/9781118653128

[10] G. Buchsbaum, "A Spatial Processor Model for Object Colour Perception," Journal of the Franklin Institute, vol. 310, no. 1, pp. 1-26, 1980. doi: 10.1016/0016-0032(80)90058-7

[11] E. H. Land, "The Retinex Theory of Color Vision," Scientific American, pp. 2-17, 1977. doi: 10.1038/scientificamerican1277-108

[12] J. Von Kries, "Die Gesichtsempfindungen", Handbuch der Physiologie der Menschen, 1905.

[13] T. Hansen, M. Olkkonen, S. Walter, and K. R. Gegenfurtner, "Memory Modulates Color Appearance," Nature Neuroscience, vol. 9, no. 11, pp. 1367-1368, 2006. doi: 10.1038/nn1794

[14] S. Xue, M. Tan, A. Mcnamara, J. Dorsey, and H. Rushmeier, "Exploring the Use of Memory Colors for Image Enhancement," IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, vol. 9014, 2014. doi:10.1117/12.2036836

[15] E. Rahtu, J. Nikkanen, J. Kannala, L. Lepistö, and J. Heikkilä, "Applying Visual Object Categorization and Memory Colors for Automatic Color Constancy," In International Conference on Image Analysis and Processing, vol. 5716, pp. 873-882, 2009. doi: 10.1007/978-3-642-04146-4_9

[16] A. Moreno, B. Fernando, B. Kani, S. Saha, and S. Karaoglu, "Color Correction: a Novel Weighted Von Kries Model Based on Memory Colors," In International Workshop on Computational Color Imaging, vol. 6626, pp. 165-175, 2011. doi: 10.1007/978-3-642-20404-3_13

[17] H. Nachlieli, R. Bergman, D. Greig, C. Staelin, B. Oicherman, G. Ruckenstein, and D. Shaked, "Skin-sensitive Automatic Color Correction," SIGGRAPH, New Orleans, 2009.

[18] S. Bianco and R. Schettini, "Adaptive Color Constancy Using Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1505-1518, 2014. doi: 10.1109/TPAMI.2013.2297710

[19] K. Lim, Y. Hong, Y. Choi, H. Byun, "Real-time Traffic Sign Recognition Based on a General Purpose GPU and Deep-learning," PLoS one, vol. 12, no. 3, 2017. doi: 10.1371/journal.pone.0173317

[20] B. Froba and A. Ernst, "Face Detection with the Modified Census Transform," Proc. of the sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 91-96. 2003. doi:10.1109/AFGR.2004.1301514

Dongah LEE (1), Taehung KIM (1), Hyeran BYUN (1), Yeongwoo CHOI (2*)

(1) Department of Computer Science, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul, Republic of Korea

(2) Department of Computer Science, Sookmyung Women's University, 100, Cheongpa-ro 47-gil, Yongsan-gu, Seoul, Republic of Korea


Digital Object Identifier 10.4316/AECE.2018.03011

Table 1. Reference color values (RGB) for the standard colors red, white, and yellow (10 samples each)

Sample        Red            White            Yellow
          R    G    B   R    G    B    R    G    B

1         192   66  19  218  197  178  221  164  54
2         202   86  48  210  184  167  210  161  62
3         182   76  37  255  255  255  217  171  44
4         255    0   0  201  183  170  216  156  45
5         207  102  36  202  187  176  225  173  74
6         199   81  40  210  190  173  255  255   0
7         190   87  52  210  195  183  213  150  40
8         190   87  58  224  183  163  220  176  55
9         195   58   8  191  187  180  216  159  35
10        197   73  28  217  193  170  222  151  55


Table II. Experiment environment

Processor  Intel Xeon E5-1650 3.5GHz
Memory     64 GB
VGA        Nvidia GeForce 960
OS         Windows Server 2012 R2
Tool       Visual Studio 2013
Library    OpenCV 3.0, Nvidia Cuda


Table III. Road driving image data (provided by Hyundai Mobis)

Test set  Resolution  Frames  Location

Daylight  1280 x 720   859    Korea
Cloudy    1280 x 720   744    Korea
Rainy     1280 x 720  1730    Korea
Night     1280 x 720  1094    Korea


Table IV. Average computation time comparison

Method         Average Computation Time

ACE[8]         341.1042
MSRCR[7]       24.10832
WP[11]         14.20164
Proposed(GPU)  13.3159


Table V. RMS error and angular error by weather condition

                  RMS Error                 Angular Error
          Cloudy  Rainy      Night  Cloudy  Rainy   Night

ACE[8]    34.41   34.13      33.18  6.22    6.12    6.35
MSRCR[7]  39.49   39.41      38.56  4.50    4.53    4.67
WP[11]    21.99   26.77      33.49  2.70    2.44    5.05
Proposed  24.54   22.49      29.23  1.25    1.38    3.45


Table VI. RMS error for circular and triangular traffic signs on a cloudy day

          Circular-type  Triangular-type

ACE[8]    35.99          29.99
MSRCR[7]  40.53          36.56
WP[11]    21.91          22.23
Proposed  27.27          16.87
Publication: Advances in Electrical and Computer Engineering, Aug. 1, 2018 (Technical report).