A Multilevel System Model for Ensuring Data Freshness in a Cloud Environment
Cloud computing plays a major role in the modern business world, chiefly through cost reduction. It delivers services to the end user such as Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and, more generally, Everything as a Service (XaaS). One of the main characteristics of cloud computing that attracts the storage industry is its high availability: whatever data is stored should be available to the end user anywhere, at any time. To maintain high availability, a cloud storage provider operates several data centres that hold replicas of the data. Before data is stored in the cloud, the data owner and the cloud storage vendor agree upon a service level agreement (SLA) that guarantees quality of service (QoS); the SLA specifies the number of replicas to be maintained. Increasing the number of replicas improves the availability of the data, but the provider must still ensure that the end user receives fresh data. A data item is fresh when the value it bears is the latest one. The value associated with each data object has a life span, which may be absolute or relative to other data objects; this span is called the validity interval of the data object. Relaxing the ACID properties during concurrent execution may increase overall performance, but it raises the chance of the end user receiving stale data. The degradation of freshness of each data value is measured by its staleness (Golab, Johnson and Shkapenyuk, 2012), which grows over time until the data is refreshed.
The remainder of this paper is organized as follows. Section 1 states the problem, Section 2 briefly surveys related work, Section 3 explains the proposed system architecture, and the final section presents the implementation and results.
1. Problem Statement:
One of the major challenges in storing concurrently accessed data is preserving its consistency. The same data is stored at different locations to achieve high availability. Depending on the mode of transaction carried out on the data, the storage sites are classified into read-only sites and read-write sites. Changes occur at the read-write sites, and these sites are synchronized among themselves; the modified values are finally propagated to the read-only sites.
All read or write transactions must be carried out within the validity interval of the data item. In this paper, we focus on four types of updates: lazy updates, lazy deferred updates, diligent updates, and diligent priority updates. In a lazy update, updating takes place at uniform intervals of time; it is used to propagate data item values from the read-write sites to the read-only sites. In the lazy deferred update method, an update takes place when a request arrives from a read-only site or at the end of the validity interval, whichever comes first. In a diligent update, the update takes place whenever an update operation is performed on the data item; it is used to update the data item value at the read-write site. The diligent priority update applies to update operations carried out across several read-write sites; to achieve synchronization among the read-write sites, priorities are assigned.
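The four update triggers above can be sketched as predicates over a data item's state. This is a minimal illustration, assuming a per-item validity interval tracked with a monotonic clock; all names (`DataItem`, `validity_interval`, the function names) are ours, not the paper's.

```python
import time

class DataItem:
    """A replicated data item with a value and a validity interval (seconds)."""
    def __init__(self, value, validity_interval):
        self.value = value
        self.validity_interval = validity_interval
        self.last_refresh = time.monotonic()

    def is_valid(self):
        # The value is fresh while it is inside its validity interval.
        return time.monotonic() - self.last_refresh < self.validity_interval

def lazy_update_due(item, clock_interval, elapsed):
    # Lazy update: refresh the read-only sites at a fixed clock interval.
    return elapsed >= clock_interval

def lazy_deferred_update_due(item, request_pending):
    # Lazy deferred update: refresh on a read request or when the
    # validity interval ends, whichever comes first.
    return request_pending or not item.is_valid()

def diligent_update(item, new_value):
    # Diligent update: apply the write immediately at the read-write site.
    item.value = new_value
    item.last_refresh = time.monotonic()
```

Diligent priority updates are not shown here; they add timestamp ordering across read-write sites, as described in Section 3.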
2. Related Works:
Xiong, Han, Lam and Chen (2008) proposed a deferrable scheduling algorithm for fixed-priority transactions. It presents a new way of minimizing the update workload while preserving the temporal validity of real-time data. Update transactions follow an aperiodic task model in the deferrable scheduling algorithm, which exploits the semantics of the temporal validity constraint of real-time data by judiciously deferring update transaction jobs as late as possible. The work also gives a theoretical estimate of processor utilization. However, deferrable scheduling applies only to update transactions with fixed priority, and without a fixed pattern it is difficult to estimate processor utilization.
Kang, Son, Stankovic and Abdelzaher (2002) suggested a QoS-sensitive approach to guarantee freshness in real-time databases. A dynamic balancing scheme called QMF balances user transactions and update workloads in a flexible manner. A database administrator can explicitly specify the required database QoS in terms of deadline miss ratio and perceived freshness. By applying an adaptive update policy with feedback and admission control, the specified miss ratio and freshness can be supported where non-adaptive approaches fail.
Xiong, Wang and Ramamritham (2008) address temporal consistency maintenance using earliest deadline first (EDF) scheduling. The authors propose three approaches based on EDF: More-Less EDF (MLEDF), Optimal Search EDF, and Heuristic Search EDF (HSEDF). Experimental results demonstrate that HSEDF is an effective algorithm that assigns periods and deadlines with much lower utilization than MLEDF. MLEDF is a linear algorithm, but it supports only transactions whose deadlines do not exceed their corresponding periods.
Han, Chen, Xiong and Mok (2009) investigate the cost of data freshness maintenance and the online scheduling overhead in the presence of mode changes in real-time systems. They propose new algorithms to search for the proper online switch point: the search-based switch (SBS) and the adjustment-based switch (ABS). Their results show that switching the scheduling policy according to the runtime processor workload outperforms a single fixed scheduling policy.
3. Architecture and Techniques:
The main objective of this project is that whenever the end user accesses or performs any operation on a data item, the data should be fresh. To synchronize the values of a data item across different sites, a global timestamp is used, and each transaction carries a timestamp. All the read-write sites form a cluster, and one of them acts as the cluster head. The cluster head sends fresh data to the read-only sites. Fig. 1 shows the high-level implementation model of the cluster of read-only sites and read-write sites.
When an end user tries to access the system, the user is directed to a read-only site or a read-write site depending on the transaction. A user performing a read-only transaction is directed to a read-only site. Each data item at a read-only site is assigned a validity interval; a request to access the data item is granted if it arrives within this interval. A user performing an update transaction is directed to a read-write site, and the updated value is stored at the local site through the Diligent Update (DU) protocol, summarized in Algorithm 4.1. Fig. 2 depicts the system design and the execution of the different protocols.
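The routing decision above can be sketched as a small dispatch function. This is an illustrative sketch; the transaction labels and site names are assumptions, not identifiers from the paper.

```python
def route_request(txn_type, item_is_valid):
    """Route a transaction to a site class, per the design described above."""
    if txn_type == "read":
        # Read-only transactions go to a read-only site; access is granted
        # only while the data item is within its validity interval.
        return "read_only_site" if item_is_valid else "refresh_required"
    # Update transactions are directed to a read-write site, where the
    # Diligent Update (DU) protocol stores the new value locally.
    return "read_write_site"
```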
A. Lazy Updates--Diligent Priority Updates (LU-DPU) Protocol:
In LU-DPU, Lazy Updates runs at the read-only sites and Diligent Priority Updates runs at the cluster head of the read-write sites. Whenever a change occurs at a local read-write site, the data item is updated there and the new value is passed to the cluster head. At the cluster head, all transactions for each data item are enqueued and the data is updated based on the timestamps; DPU is summarized in Algorithm 4.2. The latest fresh data is then sent back to all read-write sites. At the end of the validity interval of each data item, the current value of the data item is passed from the cluster head to each read-only site to maintain freshness, using the LU algorithm summarized in Algorithm 4.3.
Algorithm 4.2: DPU Protocol
Input: value of the data item
(1) begin
(2) store the initial values of each item
(3) get the data item
(4) push the item into the corresponding queue
(5) sort the queue w.r.t. timestamp
(6) do while (true)
(7) get the oldest data item
(8) compare its value with the initial value
(9) if modified, assign the new value to the data item so that it reflects the modification
(10) loop
(11) set the resultant value as the initial value
(12) end

Algorithm 4.3: LU Protocol
Input: value of the data item
(1) set the clock interval
(2) store the initial values of each item
(3) get the data item at the end of the time interval
(4) begin
(5) push the item into the corresponding queue
(6) sort the queue w.r.t. timestamp
(7) do while (true)
(8) get the oldest data item
(9) compare its value with the initial value
(10) if modified, assign the new value to the data item so that it reflects the modification
(11) loop
(12) set the resultant value as the initial value
(13) end
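The core loop shared by Algorithms 4.2 and 4.3 can be sketched as a timestamp-ordered merge. This is a minimal sketch under our own naming; `updates` stands for the queued (timestamp, value) pairs collected from the read-write sites.

```python
def dpu_merge(initial_value, updates):
    """Apply queued updates in timestamp order, oldest first, so the
    resultant value reflects the latest timestamped modification."""
    value = initial_value
    # Sort the queue w.r.t. timestamp (tuples compare by first element).
    for _ts, new_value in sorted(updates):
        if new_value != value:   # check for a modification
            value = new_value    # assign the modified value
    # The resultant value becomes the new initial value at the cluster head.
    return value
```

The same merge serves LU at the read-only sites; only the trigger differs (a clock interval instead of an incoming write).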
B. Lazy Deferred Updates--Diligent Priority Updates (LDU-DPU) Protocol:
At the cluster head of the read-write sites, the LU-DPU and LDU-DPU protocols work in the same manner; they differ in how the read-only sites are updated. LDU runs to update the read-only sites and is summarized in Algorithm 4.4. An update occurs when a request for a particular data item arrives from a read-only site or when its validity interval expires, whichever comes first. The read-only site then gets the value from the cluster head, maintaining freshness.
Algorithm 4.4: LDU Protocol
Input: value of the data item
(1) set the clock interval
(2) store the initial values of each item
(3) get the data item on any request or at the end of the time interval
(4) begin
(5) push the item into the corresponding queue
(6) sort the queue w.r.t. timestamp
(7) do while (true)
(8) get the oldest data item
(9) compare its value with the initial value
(10) if modified, assign the new value to the data item so that it reflects the modification
(11) loop
(12) set the resultant value as the initial value
(13) end
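The LDU trigger, refresh on demand or on expiry, whichever comes first, can be sketched as follows. Parameter names and the `fetch_from_head` callback are assumptions for this sketch, not names from the paper.

```python
def ldu_refresh(local_value, request_pending, validity_expired, fetch_from_head):
    """Refresh a read-only site's copy when a read request arrives or the
    validity interval ends, whichever comes first (Algorithm 4.4 trigger)."""
    if request_pending or validity_expired:
        # Pull the fresh value from the cluster head of the read-write sites.
        return fetch_from_head()
    # Otherwise the locally cached value is still within its validity interval.
    return local_value
```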
4. Results and Discussion:
Fig. 3 shows the average response time against the number of transactions for the LU-DPU and LDU-DPU protocols.
The experimental setup consists of thirty virtual machines, fifteen of which are reserved for the read-only sites while the rest form the cluster of read-write sites. The evaluation metric is the average response time of the read-only and read-write sites while the different protocols are running, measured as the average amount of time required to handle a request. Concurrent read requests and data-modification requests were generated periodically and distributed uniformly across the sites.
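The metric above is simply the mean per-request handling time; a minimal sketch, with illustrative units:

```python
def average_response_time(durations_ms):
    """Mean time to handle a request, given per-request durations
    (milliseconds here, purely illustrative)."""
    return sum(durations_ms) / len(durations_ms)
```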
We have successfully launched multiple sites that carry out read-only transactions or modification transactions exclusively. This helps achieve maximum availability of data without compromising freshness. As future work, more clusters can be added on demand to achieve scalability.
Article history: Received 27 February 2015; Accepted 20 March 2015; Available online 15 June 2015.
References:
Golab, L., T. Johnson and V. Shkapenyuk, 2012. Scalable scheduling of updates in streaming data warehouses. IEEE Transactions on Knowledge and Data Engineering, 24(6): 1092-1105.
Han, S., D. Chen, M. Xiong and A.K. Mok, 2009. Online scheduling switch for maintaining data freshness in flexible real-time systems. Proceedings of the IEEE Real-Time Systems Symposium, 115-124.
Kang, K.-D., S.H. Son, J.A. Stankovic and T.F. Abdelzaher, 2002. A QoS-sensitive approach for timeliness and freshness guarantees in real-time databases. Proceedings of the 14th Euromicro Conference on Real-Time Systems, 203-212.
Xiong, M., S. Han, K.-Y. Lam and D. Chen, 2008. Deferrable scheduling for maintaining real-time data freshness: algorithms, analysis, and results. IEEE Transactions on Computers, 57(7): 952-964.
Xiong, M., Q. Wang and K. Ramamritham, 2008. On earliest deadline first scheduling for temporal consistency maintenance. Real-Time Systems, 40(2): 208-237.
(1) Lino Abraham Varghese, (2) S. Bose, (3) S. Sankari
(1, 2, 3) Department of Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai 600 025, India.
Corresponding Author: Lino Abraham Varghese, Department of Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai 600 025, India.
Publication: Advances in Natural and Applied Sciences, 15 July 2015.