
Security and integrity of data in cloud computing based on feature extraction of handwriting signature.


Cloud computing enables users to store their data remotely in the cloud, relieving them of the burden of local data storage and maintenance. However, the user loses direct control over his remotely located data, which raises security challenges such as data authenticity and integrity. One significant concern that must be addressed is assuring the user of the integrity, i.e. the correctness, of his data in the cloud. Since the user cannot access the cloud's data directly, the cloud must provide a mechanism for the user to check whether the integrity of his data is preserved or has been compromised. In this paper, we propose encrypted data-integrity checking by incorporating feature extraction of the handwritten signature into an encryption scheme that preserves the integrity of data on the cloud server. Any unauthorized data modification, deletion, or addition can be detected by the cloud user. Additionally, our proposed scheme provides a proof of data integrity through which the user can verify the correctness of his data on the cloud server. We employ the user's handwritten signature to secure his data and verify its integrity on the cloud server. Extensive security and performance analyses show that our proposed scheme is highly efficient and provably secure. In addition, the verification time decreases while the data-integrity recovery ratio increases.


Cloud computing, handwriting, feature extraction, data integrity, security


Recently, we have witnessed an increasing interest in cloud computing: many Internet vendors, including Amazon, Google, and Microsoft, have introduced various cloud solutions to provide computing resources, programming environments, and software as a service in a pay-as-you-go manner. For example, Amazon offers the Amazon Elastic Compute Cloud (EC2), which provides computing cycles as a service, and Google offers Google App Engine (GAE) to provide programming environments as a service [1, 2].

This interest in cloud computing is due to its significant features, which can be summarized according to the National Institute of Standards and Technology (NIST) as follows [3]: (1) On-demand self-service: a user can be unilaterally provisioned with computing facilities; (2) Broad network access: all services can be obtained over the Internet; (3) Resource pooling: the service provider's computing resources are available on demand to multiple users, and large numbers of physical and virtual resources can be automatically assigned and reassigned according to user demand; (4) Rapid elasticity: services and resources can be dynamically scaled up and down; (5) Measured service: resource usage can be monitored, controlled, and reported [3, 4].

In particular, cloud components are becoming increasingly popular, even though lingering security and privacy problems are slowing down their acceptance and success. Indeed, storing user data on a cloud server, despite its benefits, raises several security concerns that must be studied extensively to make the cloud a trustworthy alternative to local data storage. Among these issues are data authenticity and integrity, i.e., how to efficiently and securely guarantee that the cloud storage server returns truthful and complete results in reply to its users' queries [4, 5].

Data integrity is one of the most critical components of any system. It is easily achieved in a standalone system, where data integrity concerns a single database: in this case, integrity is maintained through a series of constraints and transactions. The situation is different in a distributed system, where there are several databases and many applications. To maintain data integrity in such a system, transactions across several data sources must be executed in an exactly fail-safe manner. This requires a central global transaction manager, and each application in the distributed system must be able to participate in the global transaction through a resource manager.

Data integrity in cloud computing refers, in the same sense, to ensuring the integrity of remote data saved on untrusted cloud servers. In this case, we deal with the issue of implementing a protocol for obtaining a proof of data possession in the cloud. The goal is to obtain and validate a proof that the data saved by a legitimate user at the remote data storage in the cloud has not been altered by the archive, so that the integrity of the data is assured. This verification system prevents the cloud storage archive from changing the data stored in it without the permission of the data owner, by performing multiple tests on the storage archive. Furthermore, the cloud server could defraud cloud users in two ways:

1. The cloud server performs only part of the required computation and sends back a random value, while claiming that the transaction has been completed.

2. The cloud server selects some incorrect data that incurs the lowest computational cost and claims to have used the valid data, while the original data is actually wrong.

In this paper we focus on the important issue of implementing a protocol for obtaining a proof of data possession in the cloud, sometimes referred to as a Proof of Retrievability (POR). The goal is to obtain and validate a proof that the data saved by a legitimate user at a remote data store, called a cloud storage archive or simply an archive, has not been modified by the archive, so that the integrity of the data is assured. Cheating, in this environment, means that the storage archive can delete some of the data or perform modifications on it. Note that the storage server need not be immune to malicious attacks; instead, it might simply be untrustworthy and lose the hosted data. Hence, data-integrity schemes must detect any modifications that may happen to users' data on cloud storage servers. Such proof-of-data-possession schemes do not, by themselves, protect the data from tampering by the archive; they only permit the detection of tampering with, or modification of, a remotely stored file on an untrustworthy cloud storage server. In designing proofs of data possession for untrustworthy cloud storage servers, we are often constrained by the limited resources of the cloud server as well as of the client.

In this paper, we propose an efficient and secure data-integrity scheme based on the Merkle hash tree and feature extraction from the user's handwriting. Unlike previous work in the biometrics field, our scheme does not require extra devices or software. We also provide a proof of data integrity in the cloud that the customer can employ to check the correctness of his data. Our proposed scheme is designed to minimize the computational and storage costs on the client side as well as to reduce the computational overhead on the cloud storage server. Data encryption commonly requires substantial computational power; in our proposed scheme there is no separate encryption operation, which saves computation cost and time on the client side. Additionally, we improve the Merkle hash tree by making each user query one-time, which prevents an adversary from mounting malicious attacks such as the Man-in-the-Middle (MITM) attack, insider attack, and replay attack. Furthermore, our proposed scheme provides several pivotal merits: additional security and effectiveness functions, mutual verification, key agreement, dynamic data support, recoverability when some data blocks are lost, an unlimited number of queries, and privacy preservation.

The rest of this paper is organized as follows. The necessary primitives and requirements of our scheme are given in section 2. An overview of related work is presented in section 3. The proposed scheme is described in section 4. Security analysis and experimental results are presented in section 5. Conclusions are given in section 6.



2.1 Problem Definition

We assume a cloud data storage service consisting of three different components, as shown in Fig. 1. The first is the cloud user (CU), who possesses data files to be saved in the cloud; the second is the cloud server (CS), which is controlled by the third component, the cloud service provider (CSP), to provide data storage services and which has significant storage space and computation resources. In more detail, the CSP must ensure that all significant data are protected and that only authorized users have access to the data in its entirety. It must also be able to ensure that applications offered as a service over the cloud are secure against adversaries. We assume a general cloud computing model consisting of n cloud servers S_1, S_2,..., S_n, which may be monitored by one or more CSPs. The CU delegates his data to the cloud servers, uses them as data storage, and submits some functions for computation. The cloud service provider can expose the user in two ways, as follows.

1. The cloud service provider (CSP) may remove some rarely accessed data files to decrease its storage cost, or modify the stored data of users, thus violating data integrity. This is called storage misuse.

2. The cloud server may select some incorrect data that incurs the lowest computational cost and claim to have used the valid data, while the user's original data is lost. This is known as compromising the computation.

Additionally, we must mention a significant component called the third party auditor (TPA); this component has the skill and capability, and is trusted, to evaluate the security of the cloud storage service on behalf of the user upon request. Users depend on the CS for saving and preserving their data. They may also interact with the CS to access and update their stored data for various application purposes. Sometimes, users rely on the TPA to guarantee the storage security of their data, while wishing to keep their data private from the TPA.

Fig. 2 shows the basic architecture of our proposed data-integrity scheme. Our scheme comprises the three components mentioned above. The overall work can be divided into two phases: a Configuration Phase and a Verification Phase. The first phase consists of two steps: 1) generation of the meta-data; 2) encryption of the meta-data. In the meta-data generation step, each user registers his identity information (username, password, and handwritten signature handF) and his data file (F) with the CA. The CA then extracts features from the user's handwritten signature and splits them into m bytes. Likewise, the CA divides the data file F into n data blocks, and each data block is split into m bytes. The meta-data is encrypted by embedding each m bytes of a data block of F with the m bytes of handF. Fig. 3 shows the mechanism of this phase. Finally, the original data and the secured meta-data are stored on the cloud server.

In the verification phase, assume the verifier V wishes to verify the integrity of the original data file F. It sends a challenge to the cloud server and requires it to respond. The challenge and the response are compared, and V outputs the result, accepting or rejecting the integrity proof, using the feature extraction of the handwritten signature and the Merkle hash tree. Note that our proposed scheme does not require a TPA, which gives it greater privacy, performance, and efficiency (see Fig. 2).

2.2 Merkle Hash Tree

In cryptography, a Merkle tree is a binary tree consisting of many nodes, in which each non-leaf node is labeled with the hash of the labels of its child nodes. Hash trees are useful because they provide flexible and secure verification of the components of large data structures. Proving that a leaf node is part of a given hash tree requires an amount of data proportional to the logarithm of the number of nodes in the tree. The mechanism of the hash tree is illustrated in Fig. 4: the leaves are generated by hashing data blocks of, for example, a file or a collection of files.

Note that hash 0 is the result of combining hash 0-0 and hash 0-1; that is, hash(0) = hash(hash(0-0) ∥ hash(0-1)), where ∥ denotes the concatenation function.

Definition (Merkle tree). A Merkle tree is a binary tree with a string assigned to each node, n → p(n) ∈ {0,1}^k, such that

p(n_parent) = hash(p(n_left) ∥ p(n_right)) ... (1)

where the value of a parent node is produced by applying a one-way hash function to the values of its children (see Eq. 1).

The root value of the tree is made public, while the values associated with the leaf preimages are known to the "tree owner" alone.
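As a concrete illustration, the parent rule in Eq. (1) can be sketched in Python as follows. This is a minimal sketch that assumes SHA-256 as the hash function and pairs an odd trailing node with itself (one common convention; the paper itself organizes the tree so the leaf count is a power of 2):

```python
import hashlib

def _h(data: bytes) -> bytes:
    """One-way hash used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a Merkle hash tree bottom-up and return the root hash.

    Leaves are hashes of the data blocks; each parent is
    hash(left || right), exactly as in Eq. (1).
    """
    level = [_h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Only the root needs to be published; proving membership of one leaf then requires sending the sibling hashes along the leaf-to-root path.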

2.3 Feature extraction of digital handwriting

Signature recognition is one of the oldest and most significant biometric authentication schemes, with widespread official acceptance. Handwritten signatures are commonly used to approve the contents of a document or to authorize a financial transaction [6]. An important benefit of the handwritten signature over other biometric features for authentication is its long-standing use in many everyday verification tasks. Moreover, the signature verification process is already well accepted by the general public, and it is also comparatively less expensive than other biometric schemes [6, 7]. The difficulties associated with biometric signature verification systems stem from wide intra-class variations, which make signature verification a complex pattern-recognition problem. Unlike online methods, this scheme does not require additional equipment (such as a digitizing tablet or a pressure-sensitive pen), only a pen and paper, and is therefore less intrusive and more user friendly. In off-line biometric signature verification, the signature is written on paper, which is scanned to acquire its digital image. Several types of features can be extracted from a digital biometric signature, such as basic functions, geometric normalization, extended functions, time derivatives, and signal normalization. In this paper, we focus on the basic functions to extract the main features of each user's signature, which are then employed as the main factor in generating the meta-data.

In the first type, the biometric signature representation depends on the following five elements: the horizontal x_n and vertical y_n position trajectories, the azimuth γ_n and altitude φ_n of the pen with respect to the tablet, and the pen's pressure signal p_n. The value n = 1,..., N denotes the discrete time index defined by the acquisition device, and N is the duration of the biometric signature in sampling units. Consequently, the basic function set consists of x_n, y_n, a synthetic timestamp s_n, pen-up indicators pu_n, and p_n.
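As an illustration, the basic function set can be represented per sample as follows. This is a hypothetical data layout; the class and field names are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class SignatureSample:
    """One sample n of the basic function set described above."""
    x: float         # horizontal pen position x_n
    y: float         # vertical pen position y_n
    timestamp: int   # synthetic timestamp s_n
    pen_up: int      # pen-up indicator pu_n (1 = pen lifted)
    pressure: float  # pen pressure p_n

def basic_features(samples):
    """Flatten a captured signature of N samples into the basic
    function set (x_n, y_n, s_n, pu_n, p_n), n = 1..N."""
    return [(s.x, s.y, s.timestamp, s.pen_up, s.pressure) for s in samples]
```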

2.4 Features of remote data integrity testing protocols

Any remote data integrity checking scheme should satisfy the following main conditions:

1. Privacy preservation (C1): The TPA cannot obtain any knowledge of the real user data during the auditing process.

2. Unlimited number of queries (C2): The verifier may issue an unbounded number of queries in the challenge-response process for data verification.

3. Data dynamics (C3): Clients can perform operations on data files, such as insert, delete, and update, while maintaining data correctness.

4. Public verifiability (C4): Anyone must be permitted to verify the integrity of the data.

5. Blockless verification (C5): Challenged file blocks must not be retrieved by the verifier during the verification phase.

6. Recoverability (C6): Besides checking correct possession of the data, some mechanism to recover lost data is required.

7. Auditing with the help of a TPA (C7).

8. Untrusted server (C8).


A simple Proof of Retrievability (POR) method can be built using a keyed hash function h_k(F). In this approach, before archiving the original file F in cloud storage, the verifier first computes the cryptographic hash of F using h_k(F), and then stores this result together with the secret key K. To check whether the integrity of the original file F has been lost, the verifier sends the secret key K to the cloud archive and requires it to compute and return the value of h_k(F). By storing multiple hash values for different keys, the verifier can test the integrity of the original file F several times, each check being an independent proof. Although this scheme is very simple and easily implementable, it requires high resource costs. On the verifier side, this includes storing as many keys as there are integrity checks of the original file. Additionally, computing the hash of large data files can be burdensome for some clients, such as mobile phones. On the archive side, each invocation of the protocol requires the archive to process the full file F, which can be computationally demanding even for a simple operation like hashing [6].
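The keyed-hash approach described above can be sketched as follows, using HMAC-SHA256 as the keyed hash h_k (a simplifying assumption; any keyed cryptographic hash works). Note how the verifier must store one key/digest pair per future check, and the archive must re-read the whole file on every challenge:

```python
import hashlib
import hmac
import os

def precompute_proofs(file_bytes: bytes, num_checks: int):
    """Verifier side: store (key, HMAC_k(F)) pairs before outsourcing F.
    Each pair supports exactly one independent integrity check."""
    proofs = []
    for _ in range(num_checks):
        k = os.urandom(32)
        proofs.append((k, hmac.new(k, file_bytes, hashlib.sha256).digest()))
    return proofs

def archive_respond(file_bytes: bytes, k: bytes) -> bytes:
    """Archive side: recompute HMAC_k(F) over the full stored file."""
    return hmac.new(k, file_bytes, hashlib.sha256).digest()

def check(stored_file: bytes, proof) -> bool:
    """Verifier side: compare the archive's answer with the stored digest."""
    k, expected = proof
    return hmac.compare_digest(archive_respond(stored_file, k), expected)
```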

Juels and Kaliski presented a scheme called Proof of Retrievability for large files using "sentinels" [8]. Unlike the keyed-hash scheme, a single key can be employed regardless of the size of the file or the number of files whose retrievability is to be verified. Additionally, the archive needs to access only a small portion of the file F, and this portion is independent of the length of the file. In this scheme, however, the cloud user must store the sentinel values, and the number of times the cloud user can challenge the cloud server is limited.

Ateniese et al. [7] proposed the "Provable Data Possession" model for checking possession of files on untrusted storage. Their scheme uses RSA-based homomorphic tags for auditing outsourced data. The cloud user must first pre-compute the tags and then store all of them, which requires considerable computation and storage space. Shacham and Waters [10] employed homomorphic features to ensure data integrity. Chang and Xu [11] used a MAC and Reed-Solomon codes for testing remote integrity. However, homomorphic features, MACs, and Reed-Solomon codes cannot be used to check the validity of computations.

Sravan Kumar R and Ashutosh Saxena presented a scheme based on partial encryption of the data file: only a few bits of each data block are encrypted, thus decreasing the computational overhead on the clients. In their scheme, the verifier needs to store only a single cryptographic key regardless of the size of the file. The verifier stores the original data file together with meta-data at the archive, and in the verification phase uses this meta-data to verify a data block of the original file. Their approach works well for lightweight clients, but for large files it incurs considerable computational overhead.

The scheme in [12] relies exclusively on symmetric-key encryption. The essential idea is that, before outsourcing, the data owner pre-computes several verification tokens, each covering some set of data blocks. The real data is then handed over to the server. Subsequently, when the data owner wants to obtain a proof of data possession, he sends challenge values to the server. The server computes a short integrity check over the specified blocks and returns it to the owner. This scheme supports neither public verifiability nor privacy preservation, and the number of queries is limited. Wan et al. [13] proposed a scheme that allows a third party auditor (TPA) to validate the integrity of dynamic data saved on cloud servers. This scheme is characterized by several features, such as no privacy preservation, fully dynamic data operations, and blockless verification.

Hao et al. [14] presented a new remote integrity checking scheme based on homomorphic verifiable tags. This scheme has the procedures SetUp, TagGen, Challenge, GenProof, and CheckProof, in addition to functions for data dynamics. Its drawback is that it cannot recover lost or corrupted data. Table 1 presents a comparison of the security properties of our proposed scheme and previous works.

Our proposed scheme minimizes the computational and storage costs on the client side and reduces the computational overhead on the cloud storage server side. It also decreases the size of the proof of data integrity so as to minimize network bandwidth consumption. In our data-integrity scheme, the verifier needs to store only the feature extraction of the user's handwriting, which is used to generate the encrypted meta-data that is appended to the original data file before the file is stored at the archive. At verification time, the verifier uses this meta-data to validate the integrity of the data. It is important to note that our proof of data integrity only ensures the integrity of the data, i.e. it detects whether the original data has been illegitimately modified or deleted; it does not prevent the archive from modifying the data. Using the Merkle hash tree, Wang et al. [15] proposed a scheme that allows a third party auditor to verify the correctness of the stored data on demand. This scheme also uses the Merkle hash tree to let clients perform block-level operations on the original data files while preserving the same level of data-integrity assurance. However, in this scheme the third party verifier has the ability to misuse the data while performing the verification operation. Lifei et al. [14] presented a technique for ensuring the correctness of computations performed by a cloud service provider. They employed the Merkle hash tree to validate the accuracy of the computation. The weakness of this scheme is that the number of computations a cloud user submits to the service provider must be a power of 2, so that the Merkle hash tree can be generated over a number of nodes that is a power of 2.

Our proposed scheme enhances existing proof-of-storage techniques by adapting the classic Merkle hash tree structure for the authentication protocol. Additionally, we improve the Merkle hash tree by making each user query one-time, which prevents an adversary from mounting malicious attacks such as the Man-in-the-Middle (MITM) attack, insider attack, and replay attack.


The common notations in Table 2 are used throughout this scheme. The client must perform some operations on its original data file F before saving the data on cloud servers. The client extracts features from his signature and creates appropriate meta-data, which is employed in the later verification phase to check the data integrity in cloud storage. When the verifier wishes to validate the integrity of the file F, the user presents a challenge to the target server and requires the server to respond. The challenge specifies the block number and the byte position within the data block that is to be verified. The server replies with two values: (i) the value of the meta-data and (ii) the value of the original data. The verifier uses the feature extraction of the handwritten signature to decrypt the meta-data and checks whether the decrypted value equals the value of the original data. If so, the integrity is assured. The main structure for checking integrity between the cloud server and the user is shown in Fig. 5.

4.1 Configuration Phase

This phase is divided into three stages. In the first stage, the Verifier (V) and the Cloud server (CS) agree on a shared key. In the second stage, the verifier prepares the meta-data to be used in the next phase. The third stage is dedicated to ensuring the correctness of the computations performed by the cloud server, for which we use the Merkle hash tree; this is necessary for ensuring the authenticity and integrity of the outsourced data.

First stage:

1. The verifier selects a random number k_V ∈ Z*, computes K_V = h(k_V), and sends K_V to the cloud server;

2. The cloud server chooses a random number k_CS ∈ Z*, computes K_CS = h(k_CS) and the shared key SK = K_CS ∩ K_V, and sends K_CS to the verifier;

3. The verifier computes the shared key SK = K_V ∩ K_CS.
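The three steps above can be sketched as follows. The operator the paper writes for combining K_CS and K_V is unclear in the extracted text; this sketch assumes a bytewise XOR of the two hashed values, which is our assumption, not the paper's stated operator:

```python
import hashlib
import secrets

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # ASSUMPTION: the paper's combination operator is modeled as XOR here.
    return bytes(x ^ y for x, y in zip(a, b))

# Verifier side
k_v = secrets.token_bytes(32)   # random secret k_V
K_V = _h(k_v)                   # K_V = h(k_V), sent to the cloud server

# Cloud server side
k_cs = secrets.token_bytes(32)  # random secret k_CS
K_CS = _h(k_cs)                 # K_CS = h(k_CS), sent to the verifier

# Both sides derive the same shared key from the exchanged values.
SK_server = xor_bytes(K_CS, K_V)
SK_verifier = xor_bytes(K_V, K_CS)
```

Since XOR is commutative, both parties obtain the same SK without transmitting it.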

Second stage: The verifier V performs the following steps:

1. Extract the features fh from the handwritten signature and divide them into m bytes.

2. Split the original file F into n data blocks d_1, d_2,..., d_n, and split each data block into m bytes.

3. Compute the meta-data using Eq. 2:

Meta(i,j) = i * j * (d(i,j) + fh(j)) ... (2)

where i = 1,2,3,...,n and j = 1,2,3,...,m. The presence of the feature-extraction values fh(j) in the meta-data keeps the meta-data secure.

4. Append the meta-data to the original data file.

5. Store the combined meta-data and original data on the cloud server.
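The meta-data generation of Eq. (2) can be sketched as follows. This is a minimal sketch that keeps byte values as plain integers; the packaging of the meta-data back into the file (step 4 above) is omitted:

```python
def make_metadata(blocks, fh):
    """Compute Meta(i, j) = i * j * (d(i, j) + fh(j)) for every byte.

    blocks: list of n data blocks, each a sequence of m byte values d(i, j)
    fh:     m byte values extracted from the handwritten signature
    Indices i and j are 1-based, as in Eq. (2).
    """
    return [[(i * j) * (d_ij + fh[j - 1])
             for j, d_ij in enumerate(block, start=1)]
            for i, block in enumerate(blocks, start=1)]
```

For example, with one block [10, 20] and fh = [1, 2], the meta-data row is [1*1*(10+1), 1*2*(20+2)] = [11, 44].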

Third stage: The cloud user randomly selects a vector of n elements derived from the signature feature extraction. He then submits this vector to the cloud server for constructing the Merkle hash tree, which is organized so that the number of leaves is a power of 2.

4.2 Verification Phase

Suppose the verifier V wishes to verify the integrity of the original file F. It sends a challenge to the archive and requires it to respond. The challenge and the response values are compared, and the verifier decides to accept or reject the integrity proof accordingly. The main steps of this phase are as follows:

1. V → CS : E_i. The verifier challenges the cloud storage server by computing the challenge value as follows:

* The verifier selects the block number i and the byte position j.

* The verifier generates a random number r_i ∈ Z*.

* The verifier encrypts the significant parameters as E_i = Enc_SK(i, j, r_i) and sends E_i to the cloud server.

2. CS → V : E_CS. The cloud server performs the following operations:

* Retrieve the significant parameters by computing Dec_SK(E_i).

* Compute h_CS = h(d(i, j), r_i).

* Compute the one-time shared key SK' = SK ⊕ r_i.

* Send E_CS = Enc_SK'(h_CS, Meta(i, j)) to the verifier.

3. V → CS : E_V. The verifier computes the one-time shared key SK' = SK ⊕ r_i and decrypts E_CS using the decryption function Dec_SK'(E_CS). The verifier then performs the following computations.

* The verifier applies the inverse of Eq. 2, given in Eq. 3:

d'(i,j) = Meta(i,j)/(i * j) - fh(j) ... (3)

* He computes h_i = h(d'(i,j), r_i) and checks whether h_CS = h_i. If they are equal, the data has not been modified; the verifier selects the value of one leaf X_m at the bottom level of the tree and encrypts it as E_V = Enc_SK'(X_m). If they are not equal, the data has been modified; the verifier returns the original data block d'(i,j) to the cloud server for recovering the lost or modified data block: he selects X_m, computes E_V = Enc_SK'(X_m, d'(i,j)), and sends it to CS. Through steps 2 and 3, any violation of data integrity is detected using the biometric signature, thereby confirming the data integrity.

4. CS → V : H, sibling sets. The cloud server retrieves X_m with the decryption function Dec_SK'(E_V) and finds in the Merkle hash tree a path from the leaf to the root based on X_m. For example, in Fig. 4, a challenge on X_m = data block 1 requires calculating the path from data block 1 through the vertices {data block 1, hash 0-1, hash 0, root hash}. The cloud server computes the hash value of the root, H = h(Root, r_i), and sends the sibling sets of the nodes on the path (from X_m to the root) to the cloud user together with H.

5. V → CS : E_S. The verifier receives these values from the cloud server and reconstructs the hashed root H' = h(Root, r_i) from the leaf value and the sibling value set. If H' matches H, the verifier confirms that the computations were performed correctly. Otherwise, the verifier returns the original sibling sets E_S = Enc_SK'(sibling set) to CS for restoring the lost sibling sets.

6. The cloud server decrypts E_S by computing Dec_SK'(E_S) and then restores the lost data as the main step of data recovery.
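The byte-level check at the heart of steps 2-3 can be sketched as follows. Note that the inverse of Eq. (2) divides by i*j, the factors applied during meta-data generation, which recovers the original byte exactly. This is a sketch only; the encryption of the exchanged messages and the Merkle-tree steps are omitted:

```python
def recover_block_byte(meta_ij: int, i: int, j: int, fh) -> int:
    """Inverse of Eq. (2): d'(i, j) = Meta(i, j) / (i * j) - fh(j).
    Indices i and j are 1-based, matching the protocol description."""
    return meta_ij // (i * j) - fh[j - 1]

def verify_byte(d_ij: int, meta_ij: int, i: int, j: int, fh) -> bool:
    """Verifier-side check: recompute d'(i, j) from the returned
    meta-data and compare it with the returned data byte."""
    return recover_block_byte(meta_ij, i, j, fh) == d_ij
```

For instance, with fh = [1, 2] and the genuine byte d(1,2) = 20, the stored meta-data is 1*2*(20+2) = 44, and verify_byte(20, 44, 1, 2, fh) succeeds, while a tampered byte fails the check.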

4.3 Data Dynamics

The proposed scheme provides data dynamics at the block level, comprising block modification, block insertion, and block deletion. In cloud data storage there are several scenarios in which the data saved in the cloud is dynamic, such as e-documents and video. Therefore, it is essential to consider the dynamic case, in which a client may wish to execute the above operations while maintaining the storage correctness assurance. To perform any dynamic data operation, the client must first create the corresponding file blocks and send challenges to the CS to verify their validity. Next we show how our scheme supports these operations.

Data Modification: We begin with the data modification operation, which is one of the most commonly used operations in cloud data storage. A basic data modification operation is the replacement of specified blocks with new ones. Assume the client wishes to modify the block d(i,j) with block number i and byte position j. First, based on the new block N(i,j), the client computes the meta-data of the new block, NMeta(i,j) = i * j * (N(i,j) + fh(j)), computes the encryption Update = Enc_SK(UP, N, NMeta, i, j), and sends it to the CS, where UP denotes the modification operation. Upon receiving the request, the cloud server executes the update operation by decrypting Dec_SK(Update) to retrieve (N, NMeta, i, j). Then the cloud server:

(i) replaces the data block d(i,j) with N(i,j) and outputs F';

(ii) replaces Meta(i,j) with NMeta(i,j);

(iii) replaces h(d(i,j)) with h(N(i,j)) in the Merkle hash tree structure and creates the new root R' (see the example in Fig. 6). Finally, the cloud server responds to the client with a proof of this operation by computing Update' = Enc_SK(R'). After receiving the proof of the update operation from the cloud server, the client first creates the root R based on the new data block N(i,j) and authenticates the cloud server by comparing R with R'. If they match, the update operation has succeeded.

Data Insertion: Compared to data modification operation, which does not update the logic organization of client's data file, data insertion, denotes to append new data block after some specified locations in the original data file F. Assume the client wishes to add block N(i, j) after the i'th block d(i, j). The mechanisms of processing are similar to the data updating state. At begin, based on N(i, j) the client constructs meta-data of new block NMeta(i, j) = i * j * (N(i, j) + Fh(j)), computes encryption function Update = [Enc.sub.SK](I, N, NMeta,i, j) and then sends it to the CS, where I refers to the insertion operation.

Upon receiving the request, the cloud server decrypts it, [Dec.sub.SK](Update), to retrieve (N, NMeta, i, j). The cloud server then (i) stores (N, NMeta), placing N(i,j) and NMeta(i,j) after d(i,j) and Meta(i,j), respectively, adds a leaf h(N) after leaf h(d) in the Merkle hash tree, and outputs F'; (ii) computes the new root R'. Finally, the cloud server proves the operation to the client by computing Update' = [Enc.sub.SK](R'). After receiving this proof, the client first computes the root R based on the new data block N(i,j) and authenticates the cloud server by comparing R with R'. If they match, the insertion operation has succeeded.

Data Deletion: This operation is the opposite of data insertion. Deleting a data block means removing the specified block and shifting all subsequent data blocks one position forward. When the cloud server receives the request Update = [Enc.sub.SK](D, N, i, j) to delete block D(i,j), where D denotes the deletion operation, it deletes both D(i,j) and Meta(i,j) from its storage space. It then removes the leaf node h(D(i,j)) from the Merkle hash tree and computes the new root R'. The details of this operation are similar to those of data modification and insertion and are therefore omitted here.
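The Merkle-hash-tree bookkeeping shared by all three operations (replace a leaf, insert a leaf, remove a leaf, then recompute the root R') can be sketched as follows. This is a minimal illustration assuming SHA-256 leaves and promotion of an unpaired node to the next level; the paper does not fix these details, and the block values are hypothetical.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    # Root of a Merkle hash tree over already-hashed leaves; an unpaired
    # node at the end of a level is promoted unchanged.
    level = list(leaves)
    if not level:
        return h(b"")
    while len(level) > 1:
        nxt = []
        for k in range(0, len(level), 2):
            if k + 1 < len(level):
                nxt.append(h(level[k] + level[k + 1]))
            else:
                nxt.append(level[k])
        level = nxt
    return level[0]

blocks = [b"d0", b"d1", b"d2", b"d3"]
leaves = [h(b) for b in blocks]
R = merkle_root(leaves)

# Modification: replace h(d(i,j)) with h(N(i,j)) and recompute the root.
leaves[1] = h(b"N1")
R_mod = merkle_root(leaves)

# Insertion: add leaf h(N) after the chosen leaf; deletion removes a leaf.
leaves.insert(2, h(b"N-ins"))
R_ins = merkle_root(leaves)
del leaves[2]
R_del = merkle_root(leaves)
```

The client performs the same recomputation locally and compares its R with the server's R', which is how each dynamic operation is authenticated.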


In this section, we analyze the security features of our proposed scheme and compare the file sizes of the original data and the meta-data produced using the handwritten signature.

5.1 Security Analysis

Proposition 1. Our proposed scheme provides mutual verification.

Proof. This security feature means that an adversary can impersonate neither the legitimate V to the CS nor vice versa. Only a genuine verifier who possesses the secret factors can present (SK, Fh, [E.sub.i], E[TEXT NOT REPRODUCIBLE IN ASCII]) to the cloud server. The CS can then decrypt [E.sub.i] and E[TEXT NOT REPRODUCIBLE IN ASCII] and compute (h', H, E[TEXT NOT REPRODUCIBLE IN ASCII]); if this succeeds, the verifier is genuine. In turn, the CS returns (H', sibling sets), which must be decrypted using the shared key SK; SK is generated anew for each verifier request. The verifier can thus check the authenticity of the CS by comparing h[TEXT NOT REPRODUCIBLE IN ASCII] and h with H and H'. Furthermore, the verification depends on the feature extraction of the verifier's handwritten signature. Therefore, our proposed scheme achieves mutual verification between the two entities (see Fig. 5).

Proposition 2. Our proposed scheme provides forward secrecy.

Proof. Our proposed scheme protects the password even when the shared key is disclosed or leaked. If the secret key SK is revealed to an adversary, the authentication of the system is not affected, and the adversary cannot reuse this key in the next verification phase. Moreover, it is extremely hard for an adversary to derive the secret key, which is constructed from SK = h([K.sub.v]) [intersection] h([K.sub.CS]) and the random number [r.sub.i]. Owing to the one-way property of the cryptographic hash function, an adversary still cannot obtain the shared SK, which is used to encrypt [r.sub.i] before [E.sub.i] = [Enc.sub.SK](i, j, [r.sub.i]) is sent to the CS over the communication channel. The CS uses [r.sub.i] to generate a fresh shared key for each verification phase. Hence, our work maintains forward secrecy.

Proposition 3. Our proposed scheme provides security of the digital handwriting (biometric agreement).

Proof. In the proposed scheme, the communication messages ([E.sub.i], E[TEXT NOT REPRODUCIBLE IN ASCII], E[TEXT NOT REPRODUCIBLE IN ASCII], H, sibling set) contain information only about (i, j, [r.sub.i], d'(i,j), Xm, h, h[TEXT NOT REPRODUCIBLE IN ASCII]). They contain no information related to the handwritten-signature features Fh. The messages of the mutual verification stage are generated anew for each verifier request, so the feature extraction of the signature and the verification messages are completely independent. Moreover, the cloud server does not store the handwritten-signature file, which would otherwise increase the processing time of our proposed scheme or expose it to malicious attacks. Thus, our work supports security of the digital handwriting.

Proposition 4. The proposed scheme can provide known-key security.

Proof. Known-key security means that the compromise of one session key will not lead to the compromise of further session keys. If a session key is exposed to an attacker, he still fails to derive other session keys, since each is constructed from the random numbers [r.sub.i] through the key-exchange step SK = SK [direct sum] [r.sub.i], which is initiated by the CS and completed by the verifier in each verification phase. Therefore, the proposed scheme achieves known-key security.
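The per-session refresh SK = SK [direct sum] [r.sub.i] described above can be sketched as follows. The initial key and the derivation of each [r.sub.i] are hypothetical stand-ins: in the scheme, [r.sub.i] is a fresh random value exchanged between CS and V, so both ends derive the same next session key.

```python
import hashlib

def refresh_key(sk: bytes, r_i: bytes) -> bytes:
    # SK = SK xor r_i: both ends hold the same fresh random r_i, so both
    # derive the same next session key from the current one.
    return bytes(a ^ b for a, b in zip(sk, r_i))

# Hypothetical initial shared key and per-session random values r_i.
sk = hashlib.sha256(b"initial shared secret").digest()
keys = []
for n in range(3):
    r_i = hashlib.sha256(b"r_i for session " + bytes([n])).digest()
    sk = refresh_key(sk, r_i)
    keys.append(sk)
```

Each session key differs from the previous one by an unpredictable [r.sub.i], so a leaked session key alone does not determine the next.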

Proposition 5. The proposed scheme can provide recoverability.

Proof. The proof-of-retrievability (POR) verification occurs when our proposed scheme detects illegal updating or loss of an original data block d(i, j). This happens in two places:

1. The verifier computes [h.sub.i] = h(d'(i, j), [r.sub.i]) and checks whether h[TEXT NOT REPRODUCIBLE IN ASCII] = [h.sub.i]. If not, the data has been modified illegally; the verifier returns the original data block d'(i, j) to the cloud server to recover the lost or modified block, sending E[TEXT NOT REPRODUCIBLE IN ASCII] = [Enc.sub.SK](Xm, d'(i, j)) to the CS;

2. When the verifier compares H' with H and they do not match, the verifier returns the original sibling sets E[TEXT NOT REPRODUCIBLE IN ASCII] = [Enc.sub.SK](sibling set) to the CS to restore the lost sibling sets. As a result, the proposed scheme achieves recoverability.
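The verifier-side check of case 1, [h.sub.i] = h(d'(i, j), [r.sub.i]) with recovery triggered on mismatch, can be sketched as follows. Hashing d' concatenated with [r.sub.i] under SHA-256 is an assumption, since the paper leaves h unspecified, and the block and random values are hypothetical.

```python
import hashlib

def proof_hash(block: bytes, r_i: bytes) -> bytes:
    # h_i = h(d'(i,j), r_i): hash of the challenged block bound to the
    # session randomness r_i, so a stale response cannot be replayed.
    return hashlib.sha256(block + r_i).digest()

def check_block(server_digest: bytes, local_block: bytes, r_i: bytes):
    # Verifier-side POR check: recompute h_i locally and compare it with
    # the digest returned by the cloud server; a mismatch means the block
    # was modified or lost, and the verifier re-sends d'(i,j) to recover it.
    if proof_hash(local_block, r_i) == server_digest:
        return ("ok", None)
    return ("recover", local_block)

r_i = b"session-random"
server_digest = proof_hash(b"block", r_i)           # honest server response
status_ok, _ = check_block(server_digest, b"block", r_i)
status_bad, payload = check_block(server_digest, b"tampered", r_i)
```

On mismatch, the returned block is what the verifier would encrypt as E = [Enc.sub.SK](Xm, d'(i, j)) and send back to the CS.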

Proposition 6. Our proposed scheme can withstand a replay attack.

Proof. The verifier's login request in our proposed scheme employs a random [r.sub.i] instead of a timestamp to resist replay attacks. Even if the attacker captures the old authentication secrets such as SK, Xm, h, h[TEXT NOT REPRODUCIBLE IN ASCII], H, and the sibling set, he still cannot mount a replay attack on the next authentication session, since he cannot obtain Fh, Xm, and [r.sub.i] to generate Meta, SK, and H. Clearly, the adversary cannot succeed with a replay attack.

Proposition 7. Our proposed scheme can withstand a reflection attack.

Proof. In this attack, when a legitimate user sends a login request to the server, the adversary tries to eavesdrop on the user's request and reply to it. In our proposed scheme, the adversary cannot fool the service provider, since he would need to know the shared key SK and the signature handwriting Fh. These are employed to compute SK, which is used to decrypt the ciphertext [E.sub.i] = [Enc.sub.SK](i, j, [r.sub.i]) and the ciphertext E[TEXT NOT REPRODUCIBLE IN ASCII] = [Enc.sub.SK](Xm) sent to the CS by the verifier. In addition, the adversary does not possess (SK, Fh) to compute [E.sub.i], E[TEXT NOT REPRODUCIBLE IN ASCII], E[TEXT NOT REPRODUCIBLE IN ASCII], which are used to verify both entities. Clearly, the proposed scheme can resist the reflection attack.

Proposition 8. Our proposed scheme can withstand Man-In-The-Middle (MITM) attack.

Proof. In this type of attack, an attacker intercepts the messages between the verifier and the cloud server and then replays them after the verifier signs out of the cloud server. In our proposed scheme, the factors are securely encrypted before being sent to the service provider. The random value [r.sub.i] is generated during the creation of the sensitive data ([E.sub.i], E[TEXT NOT REPRODUCIBLE IN ASCII]) sent by the verifier as challenges to the CS. This sensitive data becomes useless once V signs off the cloud server. Therefore, an attacker observing the communication between V and CS can learn only [r.sub.i], which is used once, and is unable to compute SK. Likewise, once V signs out of the cloud server, the attacker can neither compute ([E.sub.i], E[TEXT NOT REPRODUCIBLE IN ASCII]) to impersonate the genuine verifier nor compute (E[TEXT NOT REPRODUCIBLE IN ASCII], H, sibling set) to impersonate the cloud server. As a result, the proposed scheme can resist MITM attacks.

5.2 Efficiency Analysis

The client constructs the meta-data, encrypts it, appends it to the original data, and stores the result at the cloud server. This requires some additional computation cost on the client side. After this computation phase, the size of the file doubles, so the client consumes twice the file size in storage space, plus the handwritten-signature file. The comparison of file sizes between the original data and the meta-data is shown in Fig. 7, together with the processing time of the verification phase. The handwritten-signature step thus offers high performance and security and does not degrade the performance of the system. The efficiency of our work has been tested by measuring the response time of the CS. Our proposed scheme has been implemented and tested on a corpus of signatures acquired from the Biometrics Ideal Test biometric database; additionally, our experimental results use the UC Irvine Machine Learning Repository. We now study the performance of our work. The evaluation parameters are listed in Table 3, and the time requirements of our proposed scheme are given in Table 4. We use the computational overhead as the metric to evaluate the performance of our proposed scheme.


In this paper, we presented a scheme for data integrity in cloud computing that employs feature extraction of the handwritten signature and the Merkle hash tree to achieve the integrity principle, helping the user verify and protect his data against unauthorized users of the cloud data server. Additionally, our paper uses a different algorithm from previous related work on cloud data management and biometrics. With this protection of cloud data, a user can have strong confidence in his uploaded data for any future work. The key idea of our proposed scheme is to bring integrity to the cloud storage area with sturdy reliability, so that a user need not worry about uploading his data to his allocated area. During encrypted processing, a user updates his sensitive data on the remote cloud independently of the other components of the system. Furthermore, our proposed scheme is immune to replay attacks, MITM attacks, and reflection attacks. Our work supports security features such as mutual verification, forward secrecy, known-key security, recoverability, and biometric agreement through separate processes executed in the cloud environment. In terms of performance, our presented scheme has been shown to achieve strong security at low cost compared with previous schemes.


[1] S. Subashini and V. Kavitha, "A survey on security issues in service delivery models of cloud computing," Journal of Network and Computer Applications, vol. 34, no. 1, pp. 1-11, Jan. 2011.

[2] M. Piccinelli and P. Gubian, "Detecting hidden encrypted volume files via statistical analysis," International Journal of Cyber-Security and Digital Forensics, vol. 3, no. 1, pp. 30-37, 2013.

[3] E. Mykletun, M. Narasimha, and G. Tsudik, "Authentication and integrity in outsourced databases," ACM Transactions on Storage, vol. 2, no. 2, pp. 107-138, 2006.

[4] D. X. Song, D. Wagner, and A. Perrig, "Practical techniques for searches on encrypted data," in Proceedings of the 2000 IEEE Symposium on Security and Privacy (SP '00), Washington, DC, USA: IEEE Computer Society, p. 44, 2000.

[5] S. Pearson, "Taking account of privacy when designing cloud computing services," in Proceedings of the ICSE Workshop on Software Engineering Challenges of Cloud Computing (CLOUD '09), ACM Press, USA, pp. 44-52, 2009. DOI: 10.1109/CLOUD.2009.5071532.

[6] A. Juels and B. S. Kaliski, Jr., "PORs: proofs of retrievability for large files," in Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS '07), New York, NY, USA: ACM, pp. 584-597, 2007.

[7] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, et al., "Provable data possession at untrusted stores," in Proceedings of the 14th ACM Conference on Computer and Communications Security, ACM Press, New York, pp. 598-609, Oct. 2007.

[8] H. Shacham and B. Waters, "Compact proofs of retrievability," in Proceedings of the 14th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology (ASIACRYPT '08), Springer, Heidelberg, pp. 90-107, 2008.

[9] E. C. Chang and J. Xu, "Remote integrity check with dishonest storage server," in Proceedings of the 13th European Symposium on Research in Computer Security (ESORICS '08), Springer, Heidelberg, pp. 223-237, 2008.

[10] G. Ateniese et al., "Scalable and efficient provable data possession," in Proceedings of the 4th International Conference on Security and Privacy in Communication Networks, Istanbul, Turkey, 2008.

[11] Q. Wang, C. Wang, K. Ren, W. Lou, and J. Li, "Enabling public auditability and data dynamics for storage security in cloud computing," IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 5, May 2011.

[12] Z. Hao, S. Zhong, and N. Yu, "A privacy-preserving remote data integrity checking protocol with data dynamics and public verifiability," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 9, Sep. 2011.

[13] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring data storage security in cloud computing," in Proceedings of the 17th International Workshop on Quality of Service, IEEE, Charleston, SC, pp. 1-9, Jul. 2009.

[14] W. Lifei, H. Zhu, C. Zhenfu, and W. Jia, "SecCloud: bridging secure storage and computation in cloud," in Proceedings of the 2010 IEEE 30th International Conference on Distributed Computing Systems Workshops, IEEE, Genova, pp. 52-6, Jun. 2010.

Ali A. Yassin (1), Hikmat Z. Neima (2), and Haider Sh. Hashim (1)

(1) Computer Science Dept., Education College for Pure Science, Basrah University, Basrah, 61004, Iraq,

(2) Computer Science Dept., Science College, Basrah University, Basrah, 61004, Iraq,

Table 1 Comparison between our proposed scheme and related schemes

             Our          Ateniese et al.     Wan et        Hao et
             proposed     [12]                al. [13]      al. [14]

C1           Yes          No                  No            Yes
C2           Yes          No                  Yes           Yes
C3           Yes          Yes (not fully)     Yes           Yes
C4           Yes          No                  Yes           Yes
C5           Yes          Yes                 Yes           Yes
C6           Yes          No                  No            No
C7           No           No                  Yes           No
C8           Yes          Yes                 Yes           Yes

Table 2 Notations of our proposed scheme

Symbol                               Definition

CS                                   Cloud server.
V                                    Verifier.
[K.sub.v], [K.sub.CS]                Two random numbers used by CS and V
                                     to generate the shared key between
                                     them.
KS                                   The shared key between V and CS.
Meta(i,j)                            The j'th byte in the i'th block of
                                     the meta-data file.
fh(j)                                The j'th byte in the feature
                                     extraction of the signature's
                                     handwriting file Fh.
[E.sub.i], E[TEXT NOT                The challenge parameters sent from
REPRODUCIBLE IN ASCII]               V to CS.
E[TEXT NOT REPRODUCIBLE              The challenge parameters sent from
IN ASCII], H, H', Sibling set        CS to V.
[X.sub.m]                            The leaf selected by V to send as a
                                     challenge to CS.
h[TEXT NOT REPRODUCIBLE IN           Other miscellaneous values used in
ASCII], [r.sub.i], [h.sub.i]         the verification.

Table 3 Evaluation Parameters

Symbol                            Definition

[T.sub.n]          Time processing of a hash function.
[T.sub.opr]        Time processing of the mathematic operations.
[T.sub.Enc]        Time processing of symmetric encryption operation.
[T.sub.Dec]        Time processing of a symmetric decryption operation.
[T.sub.            Time processing of an XOR operation.
 [direct sum]]

Table 4 Performance of Our Proposed Scheme

Phase           Client                          Cloud Server

Configuration   [4T.sub.Opr] + [T.sub.h]        [4T.sub.Opr] + [T.sub.h]
Verification    [T.sub.Enc] + [T.sub.[direct    [T.sub.Enc] + [T.sub.[direct
                sum]] + [T.sub.Dec] +           sum]] + [2T.sub.Dec] +
                [2T.sub.Opr] + [2T.sub.h]       [2T.sub.h]
Total           [T.sub.Enc] + [T.sub.[direct    [T.sub.Enc] + [T.sub.[direct
                sum]] + [T.sub.Dec] +           sum]] + [2T.sub.Dec] +
                [6T.sub.Opr] + [3T.sub.h]       [3T.sub.h] + [4T.sub.Opr]
COPYRIGHT 2014 The Society of Digital Information and Wireless Communications

Article Details
Author:Yassin, Ali A.; Neima, Hikmat Z.; Hashim, Haider Sh.
Publication:International Journal of Cyber-Security and Digital Forensics
Article Type:Report
Date:Apr 1, 2014
