# A Co-occurrence Region Based Bayesian Network Stepwise Remote Sensing Image Retrieval Algorithm

1. Introduction

Studies on natural and medical image retrieval have achieved remarkable results, and numerous research projects have been designed specifically for remote sensing image retrieval. However, both the theory and the application systems remain immature and need further research (Samadzadegan et al., 2012; Hejazi et al., 2017). For example, the KIM system developed by DLR explores the implied semantic features of images with Bayesian networks, and the multi-sensor (RS)2I project at Nanyang Technological University thoroughly researches feature description and extraction, multidimensional indexing technology and system structure design (Mountrakis, 2012). This study compares existing image retrieval systems, focusing on image storage, network transmission models, feature extraction and description, segmentation algorithms, reasonable segmentation, similarity measurement and relevance feedback. The comparison shows that research on remote sensing image retrieval has the following deficiencies: (1) Although studies on content-based image retrieval have achieved brilliant results, they mostly focus on natural and medical images; remote sensing images are rarely studied (Han, 2012; Xu, 2015; Radan et al., 2017). (2) Current remote sensing image retrieval systems are mainly based on texts, strip numbers, latitudes and longitudes rather than contents (Abdi, 2013). (3) Although Bayesian networks have been widely used for semantic-based remote sensing image retrieval, the retrieval precision of existing algorithms can be further improved. (4) For most image retrieval algorithms, retrieval precision is inversely related to time efficiency; in other words, high-precision image retrieval tends to be time-consuming (Zhai, 2014; Simon et al., 2017).

To address these problems, this paper proposes a stepwise Bayesian network algorithm for retrieving remote sensing images. The scheme combines co-occurrence region based Bayesian network image retrieval with average high-frequency signal strength and adopts integrated region matching for iterative retrieval, thereby improving the precision of semantic retrieval and significantly reducing retrieval time. Meanwhile, semantic feature vectors are introduced to further enhance retrieval precision.

2. Material and Methods

In content-based image retrieval, a scientific and reasonable similarity measurement model is the key to the precision of the entire retrieval system (Zhong et al., 2012; Bata et al., 2017). Bayesian network image retrieval systems consist of three parts (Boser et al., 1992; Roslee et al., 2017): (1) Segment images with a simple image segmentation algorithm and extract color or texture features of each sub-image to describe image contents, which is referred to as image segmentation and feature extraction. (2) Employ unsupervised classification to classify the extracted features, generate codebooks and encode images following the codebooks, thereby producing the encoded image library. (3) Obtain the final retrieval results by selecting training samples and calculating the related probabilities, which is known as semantic inference. This paper proposes two concepts, the code co-occurrence matrix and the semantic score function, to simplify the process of semantic inference.

2.1 Co-occurrence Region Based Bayesian Network Algorithm for Remote Sensing Image Retrieval

2.1.1 Features of IKONOS Images

IKONOS is the world's first high-resolution commercial satellite, with an orbit altitude of 681 km, a revisit period of 3 days and a nadir image swath width of 11.3 km. It has one panchromatic band and four multi-spectral bands (Pedergnana, 2013; Rahman et al., 2017). The spatial resolution of the panchromatic band is 1 m and that of the multi-spectral bands is 4 m. Fundamental parameters of the wave bands are shown in Table 1.

2.1.2 Spectral Feature Extraction

This study adopts a 256-dimensional color histogram to describe the spectral features of images. Swain et al. proposed describing the color features of images with color histograms in 1991. Since the color histogram, which records the number of image pixels of each color after the image colors are quantized, has translation invariance and strong robustness to rotation and scale changes, it is extensively used (Stumpf, 2014; Ruiz, 2014; Maleki-Ghelichi and Sharif, 2017). To keep the descriptor scale-invariant, the histogram is normalized so that color features are described by the proportion of different colors, as shown in Formula (1):

H(k) = n_k / N, k = 0, 1, 2, ..., L-1 (1)

In the formula, N is the total number of image pixels; L is the dimension of the color histogram; n_k is the number of image pixels in the k-th color dimension.
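Formula (1) is straightforward to compute. The sketch below (function and variable names are hypothetical) builds the normalized histogram for an image whose colors have already been quantized to L levels:

```python
import numpy as np

def color_histogram(image, bins=256):
    """Normalized color histogram H(k) = n_k / N (Formula 1).

    `image` is any integer array of quantized color indices in [0, bins).
    Returns a `bins`-dimensional vector whose entries sum to 1, making the
    descriptor invariant to the total pixel count N (image scale).
    """
    pixels = np.asarray(image).ravel()
    counts = np.bincount(pixels, minlength=bins)  # n_k for each color k
    return counts / pixels.size                   # divide by N to normalize

# example: a 4x4 image quantized to 4 colors
img = np.array([[0, 0, 1, 2],
                [0, 1, 1, 2],
                [2, 2, 2, 2],
                [0, 0, 0, 1]])
h = color_histogram(img, bins=4)
```

The histogram entries are the pixel proportions of each color, so they sum to 1 regardless of image size.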

2.1.3 Image Segmentation

In the experiment, multi-spectral IKONOS images are segmented into non-repetitive sub-images, and 256-dimensional color histograms of the sub-images are extracted as their spectral features. However, the sub-images are large in quantity, and objects with similar characteristics are segmented into different sub-images, which is not conducive to image retrieval (Amiri et al., 2017). To better describe the major features of images and reduce the number of feature vectors, the k-means algorithm is introduced to cluster the segmented sub-images, and the clustering results are taken as the final segmentation results. Taking the sub-image as the minimum unit, the algorithm judges whether each clustered region is spatially connected, preventing segmented regions from being spatially disconnected (Tariq et al., 2017); if a cluster is not connected, it is further split so that every final region is spatially connected.
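As a minimal illustration of the connectivity constraint above, the following sketch splits k-means cluster labels into spatially connected regions. Names are hypothetical, and a 4-connected flood fill over the sub-image grid is assumed as the splitting mechanism, which the text does not specify:

```python
import numpy as np

def split_into_connected_regions(labels):
    """Split a grid of k-means cluster labels (one per sub-image) into
    spatially connected regions: clusters whose sub-images are not
    4-connected are split, so every final region is contiguous.
    Returns a grid of region ids."""
    labels = np.asarray(labels)
    regions = -np.ones_like(labels)
    next_id = 0
    for r in range(labels.shape[0]):
        for c in range(labels.shape[1]):
            if regions[r, c] != -1:
                continue
            # flood-fill one 4-connected component of equal cluster label
            stack = [(r, c)]
            regions[r, c] = next_id
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                            and regions[ny, nx] == -1
                            and labels[ny, nx] == labels[r, c]):
                        regions[ny, nx] = next_id
                        stack.append((ny, nx))
            next_id += 1
    return regions

# cluster 0 appears in two disconnected places, so it yields two regions
grid = np.array([[0, 0, 1],
                 [1, 1, 1],
                 [0, 2, 2]])
reg = split_into_connected_regions(grid)
```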

2.2 Codebook and Image Coding

2.2.1 Codebook

Image features are accurately extracted, quantified and classified by the k-means algorithm to realize many-to-many matching between image features and semantic meanings. Assume that all image regions in the database are expressed as [mathematical expression not reproducible], where N is the total number of image segmentation regions, F(D) is the region feature extraction function and F(D) represents the features of all image regions in the database. F(D) is classified into L categories by the k-means algorithm, and each cluster λ_u (1 ≤ u ≤ L) is a set of features. Let C_u be the cluster center of λ_u; then there is a mapping relation [mathematical expression not reproducible], where CB_i is referred to as the codebook and the index number between C_u and λ_u is referred to as the code. For the image I_t, its region R_t can be encoded as C_i(R_t), where C_i is the code function of the region. Since codebooks generated by the k-means algorithm are of fixed length and appropriate codebook sizes are difficult to identify, this paper adopts tree-structured coding to solve the problem. Tree-structured codebooks are composed of different levels [mathematical expression not reproducible] (n represents the number of codebook levels). Figure 1 shows a schematic drawing of tree-structured coding.


According to Figure 1, top-level codes carry more regional features, which allows users to flexibly and efficiently select the codebooks required at different levels of [mathematical expression not reproducible]. Based on the features of image regions, the chosen codebooks are used to encode images, thereby generating the encoded image library.
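One plausible reading of the tree-structured codebook, sketched below with hypothetical names, is a set of k-means codebooks of increasing size, one per level (150/300/600 codes in the later experiments); the paper does not detail how the levels are nested, so independent clustering per level is assumed:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means clustering; returns the k cluster centers C_u."""
    features = np.asarray(features, float)
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every feature vector to its nearest center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for u in range(k):
            if np.any(labels == u):   # keep the old center if a cluster empties
                centers[u] = features[labels == u].mean(axis=0)
    return centers

def build_tree_codebook(features, level_sizes=(150, 300, 600)):
    """One codebook CB_i per level; level i holds level_sizes[i] codes
    (cluster centers)."""
    return [kmeans(features, k) for k in level_sizes]

# toy feature set: 40 histogram-like vectors in 3-D, small level sizes
rng = np.random.default_rng(1)
feats = rng.random((40, 3))
codebook = build_tree_codebook(feats, level_sizes=(2, 4, 8))
```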

2.2.2 Image Coding

It is necessary to select proper codebooks from the tree-structured codebooks to encode images. The coding function of image regions is defined as [mathematical expression not reproducible] (n represents the number of tree-structured codebook levels), and the image coding process is as follows:

Step 1: select an m-level tree-structured codebook [mathematical expression not reproducible] for the region [mathematical expression not reproducible] in I_t with region features [mathematical expression not reproducible], and let the optimal codebook level l be 1.

Step 2: traverse each cluster λ_u, identify the cluster center C_u of the cluster that contains the regional features [mathematical expression not reproducible], encode the region [mathematical expression not reproducible] as u, and then let l = l + 1.

Step 3: if l > m, the program stops. Otherwise, it will go to Step 2.

Step 4: encode [mathematical expression not reproducible] into a code strand (each code represents the cluster of the image region at one codebook level); encoding each image region in this way produces the encoded images and thereby the encoded image library.
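Steps 1-4 above can be sketched as follows (names are hypothetical; nearest-center assignment is assumed as the encoding rule at each level):

```python
import numpy as np

def encode_region(feature, tree_codebook):
    """Steps 1-3: at every codebook level, take the index u of the nearest
    cluster center C_u as the region's code for that level."""
    feature = np.asarray(feature, float)
    codes = []
    for centers in tree_codebook:            # levels l = 1 .. m
        dists = np.linalg.norm(np.asarray(centers) - feature, axis=1)
        codes.append(int(dists.argmin()))    # code u at this level
    return tuple(codes)                      # Step 4: the code strand

def encode_image(region_features, tree_codebook):
    """Step 4 applied to every region: one code strand per image region,
    forming the image's entry in the encoded image library."""
    return [encode_region(f, tree_codebook) for f in region_features]

# toy 2-level codebook with hand-picked (hypothetical) cluster centers
cb = [np.array([[0.0, 0.0], [1.0, 1.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])]
strand = encode_region([0.9, 0.2], cb)
```

Each strand records one code per level, so coarser and finer descriptions of the same region coexist in the library.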

2.3 Semantic Inference

2.3.1 Mathematical Models

When [mathematical expression not reproducible], [mathematical expression not reproducible] and [mathematical expression not reproducible] coexist in the image I_t, [mathematical expression not reproducible] and [mathematical expression not reproducible] are referred to as co-occurrence regions of [mathematical expression not reproducible]. Semantic inference is based on the codes of [mathematical expression not reproducible] and its co-occurrence regions ([mathematical expression not reproducible] and [mathematical expression not reproducible]). The spectral or texture features of the triad constituted by [mathematical expression not reproducible], [mathematical expression not reproducible] and [mathematical expression not reproducible] are described by the corresponding code triad ([mathematical expression not reproducible]). Assuming that the codebook has L codes, L³ code combinations can be generated, so the number of spectral or texture feature symbols per region increases from L to L³, thereby improving the precision of semantic inference. When inferring the semantic meanings of images, the correlation between I_t and SC_k is measured by the posterior probability P(SC_k|I_t) of the co-occurrence region based Bayesian network, which is shown in Figure 2:

Formula (2) calculates the posterior probability of the co-occurrence region based Bayesian network.

[mathematical expression not reproducible] (2)

According to the Bayesian formula,

[mathematical expression not reproducible] (3)

Formula (2) can be expressed as:

[mathematical expression not reproducible] (4)

Wherein, P[(u,v,w)] is the prior probability of (u,v,w); the conditional probability P[(u,v,w)|SC_k] reflects random links between codes and semantic concepts and can be obtained from user feedback or training samples. Assuming that the prior probability P(SC_k) follows a uniform distribution, it can be ignored when calculating [mathematical expression not reproducible]. The calculation formula is as follows:

[mathematical expression not reproducible] (5)

When interpreting the image I_t, the importance of the code triad ([mathematical expression not reproducible]) can be described with [mathematical expression not reproducible] and calculated as the product of the region importance function RI(·) and the co-occurrence region importance function RCI(·,·,·). Assume that [mathematical expression not reproducible] and its co-occurrence regions [mathematical expression not reproducible] are equally important to interpreting I_t; then [mathematical expression not reproducible] and [mathematical expression not reproducible], where the function |·| returns the number of elements in a set. If all co-occurrence regions are taken into account, the following formula holds:

[mathematical expression not reproducible] (6)

Formula (6) prevents bias toward regions with more co-occurrence regions when calculating P(SC_k|I_t). Based on Formula (5) and the calculation formula of [mathematical expression not reproducible], Formula (4) can be simplified as:

[mathematical expression not reproducible] (7)

2.3.2 Code Co-occurrence Matrix

Code triad importance is defined and the code co-occurrence matrix is generated to simplify the calculation of P(SC_k|I_t).

The correlation between (u,v,w) and SC_k is reflected by the code triad importance function CPI_k(·). The importance of the code triad (u,v,w) can be determined by virtue of the prior probability of SC_k. The calculation formula is as follows:

[mathematical expression not reproducible] (8)

Wherein, u, v and w are codes that jointly constitute the code co-occurrence matrix M_k. Assuming that the prior probability P(SC_k) follows a uniform distribution, Formula (7) can be simplified as:

[mathematical expression not reproducible] (9)

2.3.3 Semantic Score Function

Semantic scores are divided into region semantic scores and image semantic scores. The region semantic score function calculates the correlation between [mathematical expression not reproducible] and [mathematical expression not reproducible]:

[mathematical expression not reproducible] (10)

The normalization coefficient [mathematical expression not reproducible] prevents the semantic scores from being biased toward regions with more co-occurrence regions. Based on the region semantic score function, the image semantic score function SI_k(·) can be defined as:

[mathematical expression not reproducible] (11)

The image semantic score function measures the correlation between I_t and SC_k. If SI_k(I_t) is greater than the predefined threshold, I_t is deemed to contain the semantic concept SC_k. According to Formula (9) and Formula (11),

P(SC_k | I_t) ∝ SI_k(I_t) (12)

It should be noted that this is a special form of Formula (3), which indicates that the image semantic score function is consistent with co-occurrence region based Bayesian networks.
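Since Formulas (8)-(11) are not reproduced here, the sketch below assumes one plausible form of the region and image semantic score functions: the region score averages code co-occurrence matrix entries M_k[u, v, w] over all triads formed with the region's co-occurring regions, and the image score averages the region scores. All names and the exact normalization are assumptions:

```python
import numpy as np
from itertools import combinations

def region_semantic_score(codes, i, M_k):
    """Assumed form of the region semantic score (cf. Formula 10): average
    M_k[u, v, w] over every pair (v, w) of codes from region i's
    co-occurring regions; the 1/|pairs| factor plays the role of the
    normalization coefficient described in the text."""
    others = [c for idx, c in enumerate(codes) if idx != i]
    pairs = list(combinations(others, 2))
    if not pairs:
        return 0.0
    u = codes[i]
    return sum(M_k[u, v, w] for v, w in pairs) / len(pairs)

def image_semantic_score(codes, M_k):
    """Assumed form of SI_k (cf. Formula 11): average the region scores."""
    return float(np.mean([region_semantic_score(codes, i, M_k)
                          for i in range(len(codes))]))

# toy setup: 3-code codebook, one semantic concept whose co-occurrence
# matrix marks only the triad (0, 1, 2), and an image coded [0, 1, 2, 1]
M = np.zeros((3, 3, 3))
M[0, 1, 2] = 1.0
score = image_semantic_score([0, 1, 2, 1], M)
```

Because the co-occurrence matrix is precomputed per concept, scoring an image reduces to table lookups, which is the simplification the section aims at.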

2.3.4 Learning with Bayesian Networks

The prior probability P[(u,v,w)|SC_k] can be obtained from a group of training samples T_k that reflect the semantic concept SC_k and consist of [mathematical expression not reproducible] and its co-occurrence regions, so that the semantic scores of images can be calculated. The probability of the code triad can be calculated through Formula (13):

[mathematical expression not reproducible] (13)

Since [mathematical expression not reproducible] and [mathematical expression not reproducible] may not belong to T_k, the choice function ε is defined as:

[mathematical expression not reproducible] (14)

The prior probability P[(u, v, w)|S[C.sub.k]] can be approximately calculated through Formula (15):

[mathematical expression not reproducible] (15)

If users submit a new training sample T′_k, it is merged with the original training sample T_k, and the prior probability P[(u,v,w)|SC_k] is re-calculated through Formula (16):

[mathematical expression not reproducible] (16)

If the new and original training samples intersect, the prior probability is calculated as P[(u,v,w) | SC_k, T_k ∪ T′_k].
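The training step can be sketched as plain counting (Formulas (13)-(15) are not reproduced, so relative-frequency estimation over the training set T_k is assumed; the exact choice function ε is not reproduced either, and all names are hypothetical):

```python
from collections import Counter
from itertools import combinations

def triad_prior(training_images):
    """Estimate P[(u, v, w) | SC_k] as the relative frequency of each code
    triad over the training set T_k: for every region u in every training
    image, count one triad per pair (v, w) of its co-occurring regions."""
    counts = Counter()
    total = 0
    for codes in training_images:           # one region-code list per sample
        for i, u in enumerate(codes):
            others = codes[:i] + codes[i + 1:]
            for v, w in combinations(others, 2):
                counts[(u, v, w)] += 1      # triad: region u with pair (v, w)
                total += 1
    return {triad: c / total for triad, c in counts.items()}

# one training image with three regions coded 0, 1, 2
prior = triad_prior([[0, 1, 2]])
```

Submitting a new sample T′_k simply extends the list passed to `triad_prior`, which matches the re-estimation described for Formula (16).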

2.4 Stepwise Remote Sensing Image Retrieval

Traditional content-based remote sensing image retrieval systems tend to calculate the distance between low-level feature vectors. Due to semantic gaps, however, images with similar features but irrelevant semantics are often retrieved (Ali et al., 2017). In image retrieval, time efficiency is also a key factor in the performance of the retrieval system: users demand not only high precision and recall but also high time efficiency.

2.4.1 Integrated Region Matching

Integrated region matching (IRM), which is strongly robust to possible segmentation errors, determines the overall similarity of images by comprehensively calculating the similarity of all image regions. Mathematically speaking, image similarity can be determined by calculating the distance between two point sets in a high-dimensional feature space. Although there are many methods of calculating the distance between points in high-dimensional space, such as the Euclidean distance, they cannot be used directly to calculate the distance between two point sets. The key difficulty in defining the distance between two point sets in feature space is making the overall similarity consistent with people's subjective perception of image similarity. By comprehensively considering the significance of and distance between image regions, integrated region matching allows one region to match multiple regions, thereby minimizing retrieval errors caused by inaccurate image segmentation (Yasin et al., 2017).

According to the research results of Wang et al. (2016), the algorithm is described as follows: Image 1 and Image 2 are represented by the region sets R_1 = (r_1, r_2, ..., r_n) and R_2 = (r′_1, r′_2, ..., r′_m), where r_i and r′_j are the feature vectors of segmentation regions i and j respectively, and the distance between r_i and r′_j is d(r_i, r′_j). When calculating the distance between R_1 and R_2, it is necessary to match all regions of the two images, calculate the distances between all region pairs and weight them with significance factors s_{i,j}. Since the significance factor represents the matching degree between r_i and r′_j, S = {s_{i,j}}, 1 ≤ i ≤ n, 1 ≤ j ≤ m, is referred to as the significance matrix.

[mathematical expression not reproducible] (17)

The significance matrix S of integrated region matching is determined by the significance factors s_{i,j}, which derive from the area ratio between each region and the whole image. Assume that the area ratios of r_i in Image 1 and r′_j in Image 2 are p_i and p′_j respectively; then

[mathematical expression not reproducible] (18)

[mathematical expression not reproducible] (19)

Under standard conditions, [mathematical expression not reproducible]. A reasonable matching mechanism must involve all image regions and give priority to matching the most similar regions. Under this mechanism, if every region in Image 1 matches an identical region in Image 2, the distance between the two images is 0. By distributing significance factors, integrated region matching connects the most similar regions of two images first, which is referred to as the most similar highest priority (MSHP) rule.

The significance matrix is obtained iteratively. Assume that d(i′, j′) is the minimum distance among the unmatched region pairs; then s_{i′,j′} = min(p_{i′}, p′_{j′}). When p_{i′} ≤ p′_{j′}, region i′ is fully matched to region j′ by its significance factor, and the remaining significance of j′ becomes p′_{j′} − p_{i′}. According to MSHP, the matching problem is solved when the following formula requirements are met:

[mathematical expression not reproducible] (20)

[mathematical expression not reproducible] (21)

[mathematical expression not reproducible] (22)

The iteration stops when all significance factors have been calculated.
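The MSHP matching loop above can be sketched as a greedy assignment (names are hypothetical; ties in distance are broken arbitrarily):

```python
import numpy as np

def irm_distance(feat1, p1, feat2, p2):
    """Integrated region matching under the MSHP rule: repeatedly match the
    closest pair of regions that still have unassigned weight, setting
    s_{i,j} = min(remaining p_i, remaining p'_j), until all region weight
    is distributed. Returns the IRM distance and the significance matrix S."""
    feat1, feat2 = np.asarray(feat1, float), np.asarray(feat2, float)
    p1, p2 = np.array(p1, float), np.array(p2, float)
    d = np.linalg.norm(feat1[:, None] - feat2[None], axis=2)  # d(r_i, r'_j)
    S = np.zeros_like(d)
    for flat in np.argsort(d, axis=None):   # most similar pairs first
        i, j = np.unravel_index(flat, d.shape)
        s = min(p1[i], p2[j])               # significance factor s_{i,j}
        if s > 0:
            S[i, j] = s
            p1[i] -= s                      # region i's remaining weight
            p2[j] -= s                      # region j's remaining weight
    return float((S * d).sum()), S

# two identical two-region images: MSHP matches each region to itself,
# the significance factors sum to 1 and the IRM distance is 0
f = [[0.0, 0.0], [1.0, 0.0]]
dist, S = irm_distance(f, [0.5, 0.5], f, [0.5, 0.5])
```

Because weight is assigned to the closest pairs first, one region can spread its weight over several partner regions, which is the many-to-many matching that makes IRM robust to segmentation errors.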

Using co-occurrence region based Bayesian networks, the algorithm selects a group of candidate images highly correlated with the query semantics from a large-scale remote sensing image database, then adopts integrated region matching to reorder the candidate images and return the top-ranking ones, thereby realizing rapid image retrieval. Since co-occurrence region based Bayesian network image retrieval has low time complexity, it can retrieve a group of candidate images highly correlated with the query semantics within a short time (Shahzad et al., 2017; Basarian and Tahir, 2017). Integrated region matching has high time complexity, but the candidate set is small compared with the entire database, which effectively reduces image retrieval time. A flowchart of the whole process is shown in Figure 3.

Stepwise region retrieval is mathematically described as follows:

Step 1: select a group of candidate images that are highly correlated with the query semantics. There are two alternatives:

(1) Users choose candidate images by searching images with the image semantic score function trained from co-occurrence region based Bayesian networks. Let I_q be the query image and δ_k be the threshold of the image semantic score; then the candidate images can be expressed as:

A = {I_p | SI_k(I_p) ≥ δ_k} (24)

(2) Users select candidate images by searching the semantic information of images with the semantic score functions. Assuming that I_q has two semantic meanings, [mathematical expression not reproducible] and [mathematical expression not reproducible] are the two semantic score functions, and [mathematical expression not reproducible] and [mathematical expression not reproducible] are their thresholds, then the selected candidate images can be expressed as:

[mathematical expression not reproducible] (25)

Under normal circumstances, the thresholds are set to 0 to ensure the precision of image retrieval.

Step 2: adopt integrated region matching with the query I_q to reorder the candidate image set A, return the top-ranking images and finish the search. The more similar a returned image is to the query image in spectral and texture features, the higher it is ranked.
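The two retrieval steps can be sketched as a small driver (names are hypothetical; the semantic score function and the IRM distance are passed in as callables standing in for Sections 2.3 and 2.4.1):

```python
def stepwise_retrieve(semantic_score, irm_dist, gallery, threshold=0.0, top=100):
    """Step 1: keep gallery images whose semantic score reaches the
    threshold delta_k (cf. Formula 24). Step 2: reorder the candidates by
    IRM distance to the query and return the top-ranked ones."""
    candidates = [img for img in gallery if semantic_score(img) >= threshold]
    candidates.sort(key=irm_dist)          # smallest distance = most similar
    return candidates[:top]

# toy gallery of (name, semantic score, IRM distance to the query) records
gallery = [("a", 0.9, 0.2), ("b", -0.1, 0.0), ("c", 0.4, 0.1)]
result = stepwise_retrieve(lambda g: g[1], lambda g: g[2], gallery, top=2)
# "b" is filtered out in Step 1; "c" outranks "a" in Step 2
```

Only the small candidate set reaches the expensive IRM reordering, which is where the time saving of the stepwise scheme comes from.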

3. Results

This experiment uses a total of 28 multi-spectral remote sensing images photographed by IKONOS, synthesizing standard true-color images from the blue, green and red bands. Each image is segmented into 256×256 sub-images, and the maximum overlap between two sub-images is 50%. After deleting unqualified images, the database contains a total of 12,000 sub-images covering nearly 40,000 km² of the earth's surface. Images in the database are classified into six semantic categories: farmland (644), city (3,547), water body (3,243), vegetation (2,937), bare earth (1,033) and rock (1,730), where the figures in brackets are the numbers of images related to the corresponding semantic concepts; one image may be correlated with multiple semantic concepts. This paper measures the performance of the retrieval system with precision and recall.

In the experiment, the images are segmented into 32×32 sub-images, color histograms of the regions are extracted as spectral features, and the k-means algorithm is employed for image clustering. When generating the encoded image library, the color feature vectors are classified into tree-structured codes by the k-means algorithm, thereby generating the codebooks (Hussin et al., 2017). Each image region is encoded following its feature vectors, thereby producing the encoded image library. This experiment creates three codebook levels with 150, 300 and 600 codes respectively.

Experiments are designed to verify the proposed co-occurrence region based Bayesian network stepwise remote sensing image retrieval algorithm, focusing on its precision and time efficiency.

3.1 Experiment Design and Results

Considering that the retrieval time of integrated region matching may be influenced by the number of segmentation regions in query images, the experiment selects images with 4 to 13 regions, with integrated region matching and stepwise image retrieval as controlled trials. Each retrieval experiment is repeated three times to ensure timing accuracy, and the average retrieval time is taken as the result. The stepwise image retrieval experiment records the time consumed in Step 1 and Step 2, from which the total retrieval time is obtained. This experiment measures the performance of the retrieval systems by retrieval time and precision. The retrieval times of integrated region matching and stepwise image retrieval are shown in Table 2, and the comparison results are shown in Figure 4(a). Since integrated region matching is a sorting algorithm rather than a classification algorithm, the first 100 images are selected for precision calculation. For stepwise image retrieval, the first 100 images are taken if more than 100 images are retrieved; otherwise, all returned images are used. The final retrieval precision is shown in Figure 4(b).

4. Discussion

4.1 Time Efficiency Analysis

According to Table 2, the retrieval time of integrated region matching increases gradually with the number of regions, mainly because more regions increase the computation. Nevertheless, the number of regions has little impact on the retrieval time of stepwise image retrieval, which fluctuates around 1.3 s, 1.4 s and 2.35 s for codebook sizes of 150, 300 and 600 respectively. With the increase of codebook size, however, the retrieval time increases accordingly: on average, it rises by 0.126 s from 150 codes to 300 codes and by 0.943 s from 300 codes to 600 codes. This is because the code co-occurrence matrix grows with the codebook: the matrix at 300 codes is eight times the size of that at 150 codes, and the matrix at 600 codes is eight times that at 300 codes, which suggests a retrieval time difference ratio of 1:8. The experiment gives a ratio of 1:7.4841, consistent with this speculation. The retrieval time of Step 2 is related to the number of candidate images returned in Step 1. As Figure 4(a) shows, the retrieval time of stepwise image retrieval is significantly shorter than that of integrated region matching, because co-occurrence region based remote sensing image retrieval has lower time complexity.

Figure 4. A comparison between integrated region matching and stepwise image retrieval: (a) average retrieval time; (b) precision.

| Method | Average retrieval time (s) | Precision |
| --- | --- | --- |
| IRM | 5.398 | 0.609 |
| Stepwise (150 codes) | 1.517 | 0.853 |
| Stepwise (300 codes) | 1.543 | 0.926 |
| Stepwise (600 codes) | 2.445 | 0.93 |

Note: table made from bar graph.

4.2 Precision Analysis

According to Figure 4(b), the precision of integrated region matching is 0.609, significantly lower than that of stepwise image retrieval (0.853, 0.926 and 0.93 at codebook sizes of 150, 300 and 600 respectively). The reasons are as follows: images retrieved by integrated region matching are merely similar in low-level features, without considering semantic similarity, whereas stepwise image retrieval first selects candidate images highly correlated with the query image in semantics and then retrieves images with similar characteristics by integrated region matching, which effectively improves retrieval precision.

4.3 Retrieval Cases

This paper selects an image of a city region for retrieval analysis, using a codebook size of 600 for stepwise image retrieval. The first ten retrieved images are shown in Table 3, arranged from left to right and from top to bottom.

5. Conclusions

Considering remote sensing image features and retrieval requirements, this paper proposes a co-occurrence region based Bayesian network stepwise remote sensing image retrieval algorithm that takes into account both time efficiency and retrieval precision. The algorithm draws on the stepwise remote sensing image retrieval proposed by Moustakidis et al. (2012) and is based on co-occurrence region based Bayesian networks (CSBN) and integrated region matching. It consists of two parts: co-occurrence region based remote sensing image retrieval, which selects a group of candidate images highly correlated with the query image in semantics, and integrated region matching, which reorders the candidate images to obtain the final retrieval results. In terms of time efficiency, stepwise remote sensing image retrieval consumes 1.5171 s, 1.5425 s and 2.4454 s at codebook sizes of 150, 300 and 600 respectively. Regarding precision, integrated region matching achieves 0.609, while stepwise image retrieval significantly improves the precision to 0.853, 0.926 and 0.93 at codebook sizes of 150, 300 and 600 respectively.

Acknowledgments

This research was financially supported by the Science and Technology Project of Zhejiang Province (Grant No. 2015C31171) and the National Education Information Technology Research Plan (Grant No. 146241806; Grant No. 136241559).

References

Abdi, M. J., & Giveki, D. (2013). Automatic detection of erythemato-squamous diseases using PSO-SVM based on association rules. Engineering Applications of Artificial Intelligence, 26(1), 603-608.

Ali, S. S., Ijaz, N., Aman, N., Nasir, D. A., Anjum, D. L., & Randhawa, D. I. A. (2017). Clinical Waste Management Practices In District Faisalabad. Earth Sciences Pakistan, 1 (2), 01-03.

Amiri, M., Arabhosseini, A., Kianmehr, M. H., Mehrjerdi, M. Z., & Mirsaeedghazi, H. (2017). Environmental impact assessment of total alkaloid extracted from the Atropa belladonna L. using LCA. Geology, Ecology, and Landscape, 1(4), 259-263.

Basarian, M. S., & Tahir, S. H. (2017). Groundwater Prospecting Using Geoelectrical Method at Kg Gana, Kota Marudu, Sabah. Earth Science Malaysia, 1(2), 7-9.

Bata T., Samaila, N. K., Maigari, A. S., Abubakar, M. B., & Ikyoive, S. Y. (2017). Common Occurences Of Authentic Pyrite crystals in Cretaceous Oil Sands as Consequence of Biodegradation Processes. Geological Behavior, 1(2), 26-30.

Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In: Proceedings of the fifth annual workshop on Computational learning theory, Pittsburgh, pp. 144-152.

Han, M., Zhu, X., & Yao, W. (2012). Remote sensing image classification based on neural network ensemble algorithm. Neurocomputing, 78 (1), 133-138.

Hejazi, S. M., Lotf, F., Fashandi, H., & Alirezazadeh, A. (2017). Serishm: an eco-friendly and biodegradable fame retardant for fabrics. Environment Ecosystem Science, 1(2), 05-08.

Hussin, H., Fauzi, N., Jamaluddin, T. A., & Arifin, M. H. (2017). Rock Mass Quality Effected by Lineament Using Rock Mass Rating (RMR)--Case Study from Former Quarry Site. Earth Science Malaysia, 1(2), 13-16.

Maleki-Ghelichi, E., & Sharif, M. (2017). Prioritize and choose the best process of anaerobic digestion to produce energy from biomass using analytic hierarchy process (AHP). Geology, Ecology, and Landscapes, 1(4), 219-224.

Moustakidis, S., Mallinis, G., Koutsias, N., Theocharis, J. B., & Petridis, V. (2012). SVM-based fuzzy decision trees for classification of high spatial resolution remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 50(1), 149-169 .

Pedergnana, M., Marpu, P. R., Dalla-Mura, M., Benediktsson, J. A., & Bruzzone, L. (2013). A novel technique for optimal feature selection in attribute profiles based on genetic algorithms. IEEE Transactions on Geoscience and Remote Sensing, 51(6), 3514-3528.

Radan, A., Latif, M., Moshtaghie, M., Ahmadi, M. & Omidi, M. (2017). Determining the Sensitive Conservative Site in Kolah Ghazi National Park, Iran, In Order to Management Wildlife by Using GIS Software. Environment Ecosystem Science, 1(2), 13-15 .

Rahman, N. A., Tarmudi, Z., Rossdy, M., & Muhiddin, F. A. (2017). Flood Mitigation Measures Using Intuitionistic Fuzzy Dematel Method. Malaysian Journal Geosciences, 1(2), 01-05.

Roslee, R., Mickey, A. C., Simon, N., Norhisham, M. N. (2017). Landslide Susceptibility Analysis (LSA) Using Weighted Overlay Method (WOM) Along the Genting Sempah To Bentong Highway, Pahang. Malaysian Journal Geosciences, 1(2), 13-19.

Ruiz, P., Mateos, J., Camps-Valls, G., Molina, R., & Katsaggelos, A. K. (2014). Bayesian active remote sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 52(4), 2186-2196.

Samadzadegan, F., Hasani, H., & Schenk, T. (2012). Simultaneous feature selection and SVM parameter determination in classification of hyperspectral imagery using ant colony optimization. Canadian Journal of Remote Sensing, 38(2), 138-156.

Shahzad, A., Munir, M. H., Yasin, M., Umar, M., Rameez, S., Samad, R., Altaf, S., & Sarfraz, Y. (2017). Biostratigraphy of Early Eocene Margala Hill Limestone in The Muzaffarabad Area (Kashmir Basin, Azad Jammu And Kashmir). Pakistan Journal of Geology, 1(2), 16-20.

Simon, N., Roslee, R., & Lai, G. T. (2017). Temporal Landslide Susceptibility Assessment Using Landslide Density Technique. Geological Behavior, 1(2), 10-13.

Stumpf, A., Lachiche, N., Malet, J. P., Kerle, N., & Puissant, A. (2014). Active learning in the spatial domain for remote sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 52(5), 2492-2507.

Tariq, W., Hussain, S. Q., Nasir, D. A., Tayyab, N., Gillani, S. H., & Rafq, A. (2017). Experimental Study on Strength and Durability of Cement and Concrete by Partial Replacement of Fine Aggregate with Fly Ash. Earth Sciences Pakistan, 1(2), 07-11.

Wang, Q., Lin, J., & Yuan, Y. (2016). Salient band selection for hyperspectral image classification via manifold ranking. IEEE Transactions on Neural Networks and Learning Systems, 27(6), 1279-1289.

Xu, L., Li, Y. P., Li, Q. M., Yang, Y. W., Tang, Z. M., & Zhang, X. F. (2015). Proportional fair resource allocation based on hybrid ant colony optimization for slow adaptive OFDMA system. Information Sciences, 293, 1-10.

Yasin, M., Shahzad, A., Abbasi, N., Ijaz, U., & Khattak, Z. (2017). The Use of Stratigraphic Section in Recording Quagmire of Information For The Fluvial Depositional Environment--A Worked Example In District Poonch, Azad Jammu And Kashmir, Pakistan. Pakistan Journal of Geology, 1(2), 01-02.

Zhai, S., & Jiang, T. (2014). A novel particle swarm optimization trained support vector machine for automatic sense-through-foliage target recognition system. Knowledge-Based Systems, 65, 50-59.

Zhong, Y., & Zhang, L. (2012). An adaptive artificial immune network for supervised classification of multi-/hyperspectral remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing, 50(3), 894-909.

Rui Zeng (1, 2, *), Yingyan Wang (1), Wanliang Wang (2)

(1) School of Electro-mechanical and Information Technology, YiWu Industrial & Commercial College, YiWu 322000, China

(2) College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou 310014, China

(*) Email of corresponding author: jamse007@126.com

Record

Manuscript received: 05/07/2017

Accepted for publication: 19/01/2018

How to cite this item: Zeng, R., Wang, Y., & Wang, W. (2018). A co-occurrence region based Bayesian network stepwise remote sensing image retrieval algorithm. Earth Sciences Research Journal, 21(1), 29-35.

DOI: http://dx.doi.org/10.15446/esrj.v22n1.66107

Table 1. Wave Bands of IKONOS Images

| Band | Name | Wavelength (μm) |
|------|------|-----------------|
| 1 | Blue | 0.445-0.516 |
| 2 | Green | 0.506-0.595 |
| 3 | Red | 0.632-0.698 |
| 4 | Near Infrared | 0.757-0.853 |
| — | Panchromatic | 0.526-0.929 |

Table 2. A Comparison of Retrieval Time between Integrated Region Matching (IRM) and Stepwise Image Retrieval. All times are in seconds; for each region size, the stepwise method is broken down into its first pass, second pass, and their sum.

| Image | IRM | 150 yd 1st | 150 yd 2nd | 150 yd Sum | 300 yd 1st | 300 yd 2nd | 300 yd Sum | 600 yd 1st | 600 yd 2nd | 600 yd Sum |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 4 | 2.934 | 1.309 | 0.028 | 1.337 | 1.414 | 0.010 | 1.424 | 2.377 | 0.010 | 2.387 |
| 5 | 3.437 | 1.306 | 0.037 | 1.343 | 1.434 | 0.012 | 1.446 | 2.442 | 0.002 | 2.444 |
| 6 | 4.034 | 1.311 | 0.071 | 1.382 | 1.446 | 0.052 | 1.498 | 2.353 | 0.031 | 2.384 |
| 7 | 4.595 | 1.308 | 0.072 | 1.380 | 1.439 | 0.049 | 1.488 | 2.381 | 0.031 | 2.412 |
| 8 | 5.180 | 1.311 | 0.213 | 1.524 | 1.432 | 0.070 | 1.502 | 2.386 | 0.036 | 2.422 |
| 9 | 5.696 | 1.312 | 0.115 | 1.427 | 1.439 | 0.105 | 1.544 | 2.366 | 0.100 | 2.466 |
| 10 | 6.244 | 1.306 | 0.317 | 1.623 | 1.427 | 0.158 | 1.585 | 2.401 | 0.050 | 2.451 |
| 11 | 6.750 | 1.316 | 0.511 | 1.827 | 1.463 | 0.288 | 1.751 | 2.356 | 0.211 | 2.567 |
| 12 | 7.373 | 1.294 | 0.567 | 1.861 | 1.432 | 0.236 | 1.668 | 2.345 | 0.117 | 2.462 |
| 13 | 7.734 | 1.322 | 0.145 | 1.467 | 1.438 | 0.081 | 1.519 | 2.381 | 0.078 | 2.459 |
| Average | 5.398 | 1.310 | 0.208 | 1.517 | 1.436 | 0.106 | 1.543 | 2.379 | 0.067 | 2.445 |
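The baseline in Table 2 is integrated region matching (IRM), which scores two segmented images by greedily pairing their most similar regions and weighting each pair by the smaller of the two regions' remaining significances. The sketch below is a minimal, generic illustration of that greedy scheme (following the standard "most similar highest priority" formulation), not the paper's exact variant; the region feature vectors and significance weights are hypothetical inputs that a segmentation front-end would supply.

```python
import numpy as np

def irm_distance(feats1, sig1, feats2, sig2):
    """Greedy integrated-region-matching distance between two
    region-segmented images.  feats*: (n, d) arrays of region
    feature vectors; sig*: region significances summing to 1."""
    # pairwise Euclidean distances between region features
    d = np.linalg.norm(feats1[:, None, :] - feats2[None, :, :], axis=2)
    p, q = sig1.astype(float).copy(), sig2.astype(float).copy()
    total = 0.0
    # visit region pairs from most to least similar
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if p[i] <= 0 or q[j] <= 0:
            continue                  # this region is fully matched
        s = min(p[i], q[j])           # matching weight s_ij
        total += s * d[i, j]
        p[i] -= s
        q[j] -= s
    return total
```

Because every region pair is examined once, the cost grows with the product of the two region counts, which is consistent with Table 2's pattern of IRM time rising with image/region count while the stepwise second pass stays cheap.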
