
Attaining integrity, secure data sharing and removal of misbehaving clients in the public cloud using an external agent and a secure encryption technique.

INTRODUCTION

The innumerable enhancements in cloud computing have motivated enterprises and organizations to outsource their huge volumes of data to third-party cloud service providers, which operate large data centres. This relieves clients of the burden of managing and maintaining enormous data sets on their local devices. The security challenge of this scheme is that these giant data centres are not trustworthy. In the market, several large Internet firms are currently exploiting the fact that they have storage capacity which can be hired out to others: storage and compute services such as Amazon's Simple Storage Service (S3) and Elastic Compute Cloud (EC2), and online storage and backup services such as Dropbox, Mozy, Memopal and Google Drive. This allows remotely stored data to be temporarily cached on desktop computers, mobile phones and other devices. Although cloud computing supports various services, it is also subject to failures such as hardware/software faults, human maintenance errors and malicious attacks, which affect the security and privacy of clients' data. The cloud service provider (CSP) holds full management control over clients' data; data that are seldom accessed may be deleted by the CSP, which may then hide the loss to protect its reputation. To overcome the security issues of cloud storage services, several protocols and techniques were proposed, but they failed in practical implementation. Early solutions considered only static data, whereas more advanced techniques support dynamic operations, including the addition, deletion and modification of files stored in the cloud.

In this paper, a collection of users forming a group, together with a group owner, is considered. Public verification of the data files is implemented, and it can be performed by the data owner or by an external party such as a Third Party Auditor (TPA). Each time a file is created by a group member, it is encrypted using a cryptographic technique and then uploaded to the cloud. Each member of the group can access a file owned by any other member of the same group at any time by decrypting it. Initially, only the data owner of the group could audit the cloud data, but this led to intense computation and communication overload on the owner's side. To overcome this problem, several data integrity solutions came onto the market, but they did not support characteristics such as user revocation, and their auditing cost grew heavily with the data size and the group size. To support user revocation, the scheme of Wang et al., based on proxy re-signatures, was followed, but this technique has flaws in its scalability. It does, however, guarantee that private and authenticated channels exist between each pair of entities and that there is no collusion. Yuan and Yu later designed a dynamic public integrity auditing scheme with user revocation, but it failed to consider data secrecy within the client group, i.e. it could not support encrypted data.

Due to the deficiencies mentioned above, we propose a construction that encrypts and decrypts the data at every upload, modification and download. In this methodology the user revocation phase is conducted by the cloud, but there is a risk of the untrusted cloud modifying the data or deleting files that are not used frequently.

To address this issue, we propose a new methodology in which the files are encrypted and decrypted for each data modification, and the user revocation process is handled by the group admin. The admin selects the misbehaving user to be revoked and re-signs the data files of the revoked user; the re-signing key is produced by an automatic key generator. Among the challenges faced by cloud computing are its information storage and sharing services. Here we consider public auditing of data in the cloud, where the data is shared among a group of users. The data is created, encrypted and stored in the cloud by the admin and is then shared among the group members, who may modify, update or even delete it. One of the most necessary and hardest tasks is to maintain cloud security and integrity checking with efficient user revocation and privacy preservation of the cloud data, i.e. once a group member is removed from the group, he is no longer allowed to access the same data.

Auditing may be performed in either of two ways: with or without a Third Party Auditor (TPA). Without a TPA, the user would have to download the cloud data to verify its integrity, which is impractical because of the transmission cost across the network.

A TPA is an external party that can check the correctness of stored data against external attacks. It spares the client the task of verifying whether the data stored in the cloud are indeed intact, and hence relieves the data owner of the burden of data management. The TPA can also resolve data inconsistencies.

I. System Architecture:

The cloud computing model is a distributed application structure in which the tasks and workloads are partitioned between the service requestors, known as cloud clients, and the service providers, known as cloud servers.

* Client:

These are individual consumers or organizations who store their entire data in large data centres. Clients delete their local copy of the data once it is moved to the cloud server. Clients communicate with the server over a computer network, and the communication session is initiated by the client.

* Cloud Storage Server (CSS):

An entity, managed by a Cloud Service Provider (CSP), with enormous storage capacity and data-processing resources for maintaining clients' data. It is a host machine running one or more server programs that share resources with the clients.

* Third Party Auditor (TPA):

An entity with expertise that clients do not possess, devoted to assessing and exposing the risks of cloud storage services on behalf of clients upon request. The architectural model for public verification of cloud data is depicted in Figure 1.1.

[FIGURE 1.1 OMITTED]

The design objectives of this mechanism are:

1. Correctness: The public verifier is able to correctly check the integrity of shared content.

2. Public Auditing: The public verifier can check the integrity of shared data without retrieving the entire data from the cloud, even if some of the shared content has been tampered with by the cloud.

3. Scalability: Cloud data can be effortlessly shared among a large number of users.

This project enables in achieving the following two properties:

* Blockless verification enables an auditor to check the correctness of data in the cloud using a linear combination of all the blocks via a challenge-and-response protocol, without having to download the entire data.

* Non-malleability means that other parties, who do not hold legitimate private keys, cannot generate valid signatures on combined blocks by combining existing signatures.

II. Implementation:

1.1 User Login:

The set of activities that a user can perform on the shared data within the cloud is shown in the flowchart diagram below.

Registration:

In this functionality, every user is required to register with the cloud. These registered users are then allowed to log in to the cloud server.

File Upload:

In this functionality, the user uploads a block of files to the cloud, encrypted using his or her private key. This prevents illegal access to the cloud files.

[FIGURE 1.2 OMITTED]

Download:

This module permits the admin or a user to download the required file. The downloaded data must be decrypted using the private key of the owner of the respective file.

Reupload:

This functionality allows the user to reupload downloaded files to the cloud after making the required edits. The files are uploaded with a unique stamp to protect the data from illegal access.

Unblock User:

This module allows a user to regain access to his dashboard by answering the security question provided at the time of enrollment. If the answer matches the one given at registration, the account is unlocked.

1.2 Auditor Login:

The set of actions that the auditor can perform on the shared data within the cloud is shown in the flowchart diagram in Fig 1.3.

[FIGURE 1.3 OMITTED]

File Verification:

The public auditor is able to correctly check the integrity of shared data. The external agent can test the integrity of the shared information without having access to the complete data present in the cloud.

View files:

In this scenario, the public auditor views all the details of uploads, downloads, blocked users, re-uploads, etc.

1.3 Admin Login:

The set of actions that the admin can perform on the shared data within the cloud is shown in the flowchart diagram in Fig 1.4.

View Files:

In this functionality, the admin views all the details of uploads, downloads, blocked users and re-uploads.

Block User:

In this functionality, the admin blocks the user profile of a misbehaving client.

[FIGURE 1.4 OMITTED]

HeidiSQL:

HeidiSQL is a simple open-source SQL tool for MySQL databases. It is easy to download, install, configure and manage. Traversing the different database objects of MySQL databases is also simple with HeidiSQL.

The tool allows us to manipulate objects such as tables and fields. Clicking the "Data" tab displays the data of each table in a grid view, and BLOB fields are shown at the bottom of the screen in the BLOB viewer and editor. HeidiSQL also supports import and export of databases, user management and more. The query editor provides field suggestion and completion, thereby limiting the number of typing errors.

[FIGURE 1.5 OMITTED]

Server Specification:

The following requirements apply to the server system environment:

* Microsoft Windows XP operating system supported by MS Access

* A minimum of 512 MB RAM.

* A backup system with larger capacity (recommended).

Client Specification:

The following requirements apply to the client system environment:

a) Microsoft Windows XP

b) 256 MB RAM

The existing concept of auditing and safeguarding the privacy of shared data in the cloud was proposed in 2014. That scenario involved three parties: the cloud server, the Third Party Auditor and the client users.

Because that scheme did not implement user revocation, the earlier privacy-preserving notion was extended into an effective public auditing scheme for shared data with efficient user revocation, using the proxy re-signature concept together with encryption of the data file by the DES (Data Encryption Standard) algorithm.
Fig 1.6 : Data block Attributes.

Data Block   Signature   Block ID   Signer Identifier


The existing system supports dynamic operations on the data blocks, such as adding, modifying and deleting a data block.

The block diagram in Figure 1.6 shows that the recommended public auditing mechanism supports dynamic operations on the data file. To support this, each data block is assigned a block identifier, i.e. an index that helps locate the data block in the cloud. Along with the block identifier, each block is attached to a signer identifier. A verifier or external auditor can use the signer identifier to determine which key is required during the verification phase, and the cloud can use it to determine which re-signing key is needed during client revocation. The various cloud operations performed are listed below:
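The per-block attributes of Fig 1.6 can be sketched as a simple record. The field names below are illustrative assumptions, not identifiers from the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class DataBlock:
    """One shared-data block carrying the attributes listed in Fig 1.6."""
    block_id: int     # index used to locate the block in the cloud
    content: bytes    # the block's data
    signature: bytes  # signature over the block content
    signer_id: str    # tells a verifier which public key to use, and tells
                      # the cloud which re-signing key to pick on revocation

# Example: a verifier dispatches on signer_id to choose the right key.
block = DataBlock(block_id=0, content=b"hello", signature=b"\x00" * 32,
                  signer_id="alice")
```

Keeping the signer identifier beside each block is what lets the cloud re-sign only the revoked user's blocks rather than the whole file.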

(1) Update Operation:

In some scenarios, the user may need to edit some data content stored in the cloud; this operation is referred to as a data update. In other words, the user replaces each occurrence of the old data block with the new one. If the current value of a block is d_ij, an update changes it to a new value d_ij + Δd_ij.

(2) Delete Operation:

Sometimes, after the data has been stored in the cloud, certain data blocks may need to be deleted. The delete operation we consider is a general one, in which the user replaces the data block with zeros or some specific predetermined data symbol. From this perspective, the delete operation is actually a special case of the data update operation, where the original data blocks are replaced with zeros or some predetermined special blocks.

(3) Append Operation:

In certain scenarios, the user may wish to increase the size of his stored data by adding blocks at the end of the data file; this is data append. We expect the most frequent append operation in cloud data storage to be bulk append, in which the user uploads a large number of data blocks at a time.
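The three dynamic operations above can be sketched over a file represented as a list of blocks; `ZERO_BLOCK` is an assumed stand-in for the predetermined "deleted" symbol:

```python
ZERO_BLOCK = b"\x00" * 4  # predetermined "deleted" symbol (assumption)

def update(blocks, i, new_data):
    """Replace block i with new data (the d_ij -> d_ij + delta update)."""
    blocks[i] = new_data

def delete(blocks, i):
    """Delete is a special case of update: overwrite with the zero block."""
    update(blocks, i, ZERO_BLOCK)

def append(blocks, new_blocks):
    """Bulk append: add blocks at the end of the data file."""
    blocks.extend(new_blocks)

f = [b"aaaa", b"bbbb"]
update(f, 0, b"cccc")
delete(f, 1)
append(f, [b"dddd"])
# f is now [b"cccc", ZERO_BLOCK, b"dddd"]
```

Modelling delete as an update keeps block indices stable, so the signatures and block identifiers of the remaining blocks need not be recomputed.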

Techniques Implemented:

The implementation techniques are explained as follows:

1. Shared group data with efficient User Revocation:

Construction phase:

The algorithms involved in the construction phase are KeyGen, ReKey, Sign and ReSign.

GenerateKey:

Each client in the group generates his own private key; the public key is generated automatically using a key generator. This phase is also known as the Setup phase.

In ReKey, the cloud selects a re-signing key from the id-key pair list saved in the cloud. There is no collusion during key generation, as we assume private channels exist between the entities.

Sign: Before shared cloud data is created, a signature must be computed for each block, i.e. the data file F is preprocessed to generate verification metadata, which is also sent to the Third Party Auditor. The id-key pair is saved in the cloud so that it can be used for re-signing during user revocation. The signing of the data block is based on the DES algorithm.

DES Algorithm:

DES is based on the Feistel block cipher, developed by the IBM cryptographer Horst Feistel in the early 1970s. It consists of a number of rounds, each performing bit-shuffling, non-linear substitutions (S-boxes) and exclusive-OR operations. The DES encryption scheme expects two inputs: the plaintext to be encrypted and the secret key.

Public key generation is based on a randomized algorithm driven by the user statistics of the client group. The signing algorithm takes the group public key (gpk), a private key (gsk[i]) and a message M ∈ {0,1}*, and returns a user signature σ. The phases of the DES algorithm are:
Pseudo Code: Data Encryption Standard
INPUT:  64-bit plaintext block a1 ... a64;
        64-bit key P = p1 ... p64 (includes 8 parity bits).
OUTPUT: 64-bit ciphertext block O = o1 ... o64.
1. Compute sixteen 48-bit round keys Pj from P.
2. (L0, R0) <- (a1, a2, ..., a64). (Apply the IP table to permute the
   bits; divide the result into left and right 32-bit halves.)
3. (16 rounds) For j from 1 to 16, compute Lj and Rj as follows:
   3.1. Lj = Rj-1
   3.2. Rj = Lj-1 XOR f(Rj-1, Pj)
4. d1d2 ... d64 <- (R16, L16). (Swap the final blocks L16 and R16.)
5. O <- IP^-1(d1d2 ... d64).
6. End
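The Feistel structure of steps 2-4 can be illustrated with a toy network. This is not real DES: the IP permutation, S-boxes and key schedule are replaced by a hash-based round function, which is enough to show why running the same network with reversed round keys decrypts:

```python
import hashlib

def round_fn(right: bytes, round_key: bytes) -> bytes:
    # Stand-in for DES's f (expansion + S-boxes). Any keyed mixing works,
    # because a Feistel network only needs XOR to be invertible.
    return hashlib.sha256(round_key + right).digest()[:4]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes, round_keys) -> bytes:
    """Run a Feistel network on an 8-byte block (steps 2-4 above)."""
    left, right = block[:4], block[4:]
    for rk in round_keys:
        left, right = right, xor(left, round_fn(right, rk))  # Lj = Rj-1
    return right + left  # final swap, as in step 4

keys = [bytes([i]) * 4 for i in range(16)]  # toy round keys, not a real schedule
ct = feistel(b"8bytes!!", keys)
pt = feistel(ct, list(reversed(keys)))      # decryption: same network, reversed keys
assert pt == b"8bytes!!"
```

Note that `round_fn` is never inverted: the XOR structure alone makes the cipher reversible, which is the key property of the Feistel design.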


If another user of the same group wishes to download or modify a data block, a request is sent to the owner of the file, and the private key is then sent to the requesting user via email. Downloading uses the DES decryption algorithm, which mirrors the encryption process.

If a client in the group edits a file and wishes to re-upload the modified data to the shared storage, the signature on the modified data is computed in the same way as in Sign.

The user then stores the data file F on the cloud server, deletes the local copy and publishes the verification metadata to the TPA for later auditing.

Advantage of usage of DES:

* Over the ciphertext database, the scheme provides secure and efficient shared-data integrity auditing for multi-user operation.

* This scheme results in secured back-up and data storage in cloud.

User Revocation Process: 1) Traditional Approach:

[FIGURE 1.7 OMITTED]

In the traditional methodology, when a user in the shared group is revoked, the admin of the group first downloads the data file from the cloud servers, verifies it, recomputes the signature and then uploads the file back to the cloud.

User A, User B and User C share data in the cloud, with User B as the admin of the group. When User A is revoked, the admin User B re-signs the data blocks that were previously signed by User A.

2) Current Approach:

User A, User B and User C share data in the cloud. When User A is revoked, the cloud itself re-signs the data blocks that were previously signed by User A.

In this scenario, when a user is revoked from the shared group, the revoked user's data blocks are re-signed by the cloud itself, using a re-signing key that was stored in the cloud when the encrypted data blocks were first uploaded. All the key/user-id pairs are stored in the cloud, and each time the Cloud Service Provider selects the appropriate re-signing key from this list.
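The cloud-side re-signing step can be sketched as follows. Real proxy re-signature schemes convert a signature without exposing either private key; the HMAC-based stand-in below simply verifies under the revoked user's key from the stored id-key list and signs under the admin's key, so it models only the workflow, not the cryptography:

```python
import hashlib
import hmac

# id-key pair list stored at the cloud at upload time (illustrative values).
resign_keys = {"alice": b"alice-key", "admin": b"admin-key"}

def sign(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()

def resign(block: bytes, old_sig: bytes, revoked_user: str) -> bytes:
    """Cloud-side ReSign: convert a revoked user's signature into the
    admin's signature, after checking the old signature is valid."""
    old_key = resign_keys[revoked_user]
    if not hmac.compare_digest(sign(old_key, block), old_sig):
        raise ValueError("signature invalid; refuse to re-sign")
    return sign(resign_keys["admin"], block)

block = b"shared block"
sig_a = sign(resign_keys["alice"], block)
new_sig = resign(block, sig_a, "alice")  # after Alice is revoked
assert hmac.compare_digest(new_sig, sign(resign_keys["admin"], block))
```

The validity check before re-signing matters: without it, the cloud could launder a forged block into one carrying the admin's signature.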

[FIGURE 1.8 OMITTED]

In ReSign, the cloud converts the signatures of a revoked client into signatures of the original user. The group manager is considered the original user, and we assume he/she is secure. An alternative way to implement the re-signing key is to ask the original user to devise a priority index (PI); at re-signing time, the first user listed in the PI is selected. To establish the integrity of the PI, it must be signed with the private key of the original user (i.e., the group manager).

2. Public Auditing Mechanism:

The third party auditor has expertise and capabilities that cloud users do not have and, on request from the users, can assess the security of the cloud storage service. The issue with cloud service providers is that they may modify or delete data that is rarely accessed, or alter the cloud data for various application purposes, while having the means to hide such wrongdoing and appear trustworthy. As a result, users resort to the TPA to ensure the security of their outsourced data. The TPA, however, is not allowed access to the content of the cloud data.

The verification of data integrity is conducted through a challenge-and-response protocol between the external auditor and the cloud. In ProofGen, the cloud generates a proof of possession of the shared data under the challenge of a public verifier. In ProofVerify, the public verifier checks the correctness of the proof returned by the cloud.
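The challenge-and-response idea (ProofGen/ProofVerify) can be shown with a toy linear authenticator over integers modulo a prime. All parameters are illustrative; here the secret `ALPHA` is held by the verifier, so this models private (owner) auditing, whereas real public schemes replace this check with homomorphic authenticators such as BLS signatures:

```python
import random

P = (1 << 61) - 1   # prime modulus (toy parameter)
ALPHA = 0x1234567   # verifier's secret authenticator key (assumption)

blocks = [random.randrange(P) for _ in range(8)]  # file blocks as integers
tags = [(ALPHA * m) % P for m in blocks]          # per-block authenticators

def proof_gen(challenge):
    """Cloud side (ProofGen): challenge maps block index -> coefficient.
    Returns the linear combination of blocks and of tags."""
    mu = sum(v * blocks[i] for i, v in challenge.items()) % P
    sigma = sum(v * tags[i] for i, v in challenge.items()) % P
    return mu, sigma

def proof_verify(challenge, mu, sigma):
    """Verifier side (ProofVerify): checks the proof without retrieving
    any block, since sigma = ALPHA * mu must hold mod P."""
    return sigma == (ALPHA * mu) % P

chal = {i: random.randrange(1, P) for i in random.sample(range(8), 3)}
mu, sigma = proof_gen(chal)
assert proof_verify(chal, mu, sigma)
```

The response has constant size regardless of how many blocks are challenged, which is what makes auditing cheaper than downloading the data.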

Performance Evaluation of Using TPA:

The external auditor does not need to have the possession of the data for performing the auditing.

* From the performance standpoint, the TPA does not deviate from the prescribed protocol execution.

* The TPA is considered independent and reliable, and does not collude with the Cloud Service Provider or the group users during the auditing process.

* Use of TPA helps in relieving the burden to the cloud users.

* TPA does not introduce any new vulnerability towards user data privacy.

* Hence the use of a TPA guarantees availability and data integrity.

* Dynamic data operation support: clients can perform block-level operations on the data files while maintaining the same level of data assurance. The design should be as efficient as possible so as to allow a seamless combination of public auditability and dynamic file operations.

* Blockless verification: the external agent is not permitted access to the actual data file during the verification process.

* Dynamic data operations with an integrity guarantee: this scheme supports fully dynamic data operations, namely data modification (M), data insertion (I) and data deletion (D) of the cloud file repository. In the subsequent discussion, we assume that the file F and the signature σ have already been generated and properly stored at the server.

III. Proposed System:

Privacy and security are the main problems faced by cloud computing, particularly in data storage, data integrity and error correction. From the above methods, it is clear that TPA techniques are very helpful for integrity checking. The TPA supports fully dynamic operations, making it possible to verify data after modification or deletion. These techniques can be used to reduce the security overhead of the client as well as to minimize the computation of the storage server. We have devised a contemporary public auditing structure for shared data within the cloud, taking authentication into consideration: only an authorized person or valid user can access the shared data within the cloud.

In our proposed work, as shown in Figure 1.9, we adopt a mathematically stronger and classic cryptographic algorithm, AES. Its principal strength lies in its variable key length: AES allows a 128-bit, 192-bit or 256-bit key, making it more robust than the 56-bit key of DES.

[FIGURE 1.9 OMITTED]

Also, once a user within the group is revoked, we focus on a new technique that enables the cloud to re-sign the blocks that were signed by the revoked group client, using re-signatures produced by an automated key generator. As a result, the remaining users in the group save a major amount of computation and communication resources during user revocation, which solves the scalability problem.

Proposed Data Block Signing Technique:

A stronger cryptographic technique, the AES algorithm, is used in this scenario. Certificateless cryptography is an alternative to ID-based cryptography. Normally, keys are generated by a key generation center (KGC), which is granted complete authority.

Hence the key generation process is divided between the KGC and the client. The KGC first establishes a key pair, whose private part becomes the partial private key of the system entity. The rest of the key is a random value created by the user, which is never revealed to anyone, not even the KGC. All cryptographic operations performed by the client use a full private key combining the KGC's partial key and the user's random secret value. One drawback of this scenario is that the identity information no longer forms the entire public key.

To encrypt a message, a requester needs three pieces of information: 1) the public key of the other party, 2) its identity information, and 3) the public information of the third party. To decrypt, a user requires only his private key. Tight security can be achieved through this certificateless system.
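The split key generation described above can be sketched as hash-based key derivation. The function names and the derivation itself are assumptions for illustration, not the paper's actual certificateless construction:

```python
import hashlib
import secrets

def kgc_partial_key(master_secret: bytes, identity: str) -> bytes:
    """KGC derives a partial private key bound to the user's identity."""
    return hashlib.sha256(master_secret + identity.encode()).digest()

def full_private_key(partial: bytes, user_secret: bytes) -> bytes:
    """User combines the KGC partial key with a self-chosen secret that
    the KGC never sees; all signing/decryption uses this full key."""
    return hashlib.sha256(partial + user_secret).digest()

master = b"kgc-master-secret"          # held only by the KGC
partial = kgc_partial_key(master, "alice")
user_secret = secrets.token_bytes(32)  # generated by the user, never shared
full_key = full_private_key(partial, user_secret)
# Neither the KGC alone (lacking user_secret) nor an outsider (lacking
# the partial key) can reconstruct full_key.
```

This captures the trust split: a compromised KGC cannot recover the full key, unlike in pure ID-based cryptography where the KGC knows every private key.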

Comparison between the various symmetric encryption techniques:
                      Symmetric encryption

Parameter           DES                       AES

Key used            Same for encryption       Same for encryption
                    and decryption            and decryption

Throughput          Lower than AES            Higher

Security            Proven inadequate         Considered secure

Power consumption   Higher than AES           Higher than Blowfish

Key length          56 bits                   128, 192 or 256 bits


Performance Analysis:

Table 1.11 shows the experimental results based on the time taken for data encryption.
Table 1.11: Encryption time by file size.

File Size   AES(Time in sec)   DES(Time in sec)

512Kb            49.03              66.09
1Mb              44.84              67.94
1.8Mb            80.712            122.292
2.0Mb            89.68              135.88


The graph obtained based on the experimental analysis is as shown in the Fig 1.12.
Fig 1.12: Graphical representation of the analysis time.

[FIGURE 1.12 OMITTED]


Proposed User revocation Technique:

[FIGURE 1.13 OMITTED]

Fig 1.13 depicts the user revocation screen. In this technique, when a user in a group misbehaves, the admin performs the following actions:

* The admin will first remove the misbehaving user from the group.

* Automatic key generation is enabled, which re-signs the blocks of the misbehaving user with a new key.

* A notification mail is sent to the remaining users of the group regarding the reallotment of the new key value to the file blocks of the revoked user.

IV. Conclusions And Future Work:

We justify the integrity and security of our proposed construction and confirm the performance of our strategy through concrete experiments and comparison.

We propose a public auditing system for data storage security in cloud computing, supporting fully dynamic data operations, especially block insertion, which is omitted in most current proposals. We will also focus on improving the scalability of the existing system by performing batch auditing, which verifies numerous auditing tasks concurrently.

Here we have proposed a contemporary public auditing technique for group content with efficient removal of misbehaving clients in the cloud environment. When a client in the group is removed, we allow the semi-trusted cloud to re-sign the data files that were signed by the misbehaving user. Experimental analysis shows that the cloud can improve the efficiency of user removal, and that the existing users in the group save a significant amount of computation during user revocation.

REFERENCES

[1.] Mell, P. and T. Grance, "Draft NIST Working Definition of Cloud Computing".

[2.] Wang, C., Q. Wang, K. Ren and W. Lou, 2011. "Towards Secure and Dependable Storage Services in Cloud Computing," IEEE Transactions on Services Computing, 5(2): 220-232.

[3.] Zhu, Y., G.J. Ahn, H. Hu, S.S. Yau, H.G. An and S. Chen, "Dynamic Audit Services for Outsourced Storage in Clouds," IEEE Transactions on Services Computing, accepted.

[4.] Balkrishnan, S., G. Saranya, S. Shobana and S. Karthikeyan, 2012. "Introducing Effective Third Party Auditing (TPA) for Data Storage Security in Cloud," International Journal of Computer Science and Technology, 2(2), ISSN 2229-4333 (Print), ISSN 0976-8491 (Online).

[5.] Jachak, K.B., S.K. Korde, P.P. Ghorpade and G.J. Gagare, 2012. "Homomorphic Authentication with Random Masking Technique Ensuring Privacy and Security in Cloud Computing," Bioinfo Security Informatics, 2(2): 49-52, ISSN 2249-9423.

[6.] Yuan, J. and S. Yu, 2013. "Proofs of Retrievability with Public Verifiability and Constant Communication Cost in Cloud," in Proceedings of ACM ASIACCS-SCC'13.

[7.] Shacham, H. and B. Waters, 2008. "Compact Proofs of Retrievability," in Proceedings of ASIACRYPT, Springer-Verlag, 90-107.

[8.] Ateniese, G., R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson and D. Song, 2007. "Provable Data Possession at Untrusted Stores," in Proceedings of ACM CCS, 598-610.

[9.] Wang, C., Q. Wang, K. Ren and W. Lou, 2010. "Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing," in Proceedings of IEEE INFOCOM, 525-533.

[10.] Wang, H., "Proxy Provable Data Possession in Public Clouds," IEEE Transactions on Services Computing, accepted.

[11.] Wang, B., B. Li and H. Li, 2012. "Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud," in Proceedings of IEEE Cloud, 295-302.

[12.] Wang, Q., C. Wang, J. Li, K. Ren and W. Lou, 2009. "Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing," in Proceedings of ESORICS 2009, Springer-Verlag, 355-370.

[13.] Wang, B., B. Li and H. Li, 2014. "Panda: Public Auditing for Shared Data with Efficient User Revocation in the Cloud," in Proceedings of IEEE INFOCOM.

(1) N. Shyamambika and (2) N. Thillaiarasu

(1) PG Scholar Department of Computer science and Engineering SNS College of Engineering, Coimbatore

(2) Assistant Professor Department of Computer science and Engineering SNS College of Engineering, Coimbatore

Address For Correspondence:

N. Shyamambika, PG Scholar Department of Computer science and Engineering SNS College of Engineering, Coimbatore

E-mail: Shyamambika@gmail.com
COPYRIGHT 2016 American-Eurasian Network for Scientific Information
Author: Shyamambika, N.; Thillaiarasu, N.
Publication: Advances in Natural and Applied Sciences
Date: Jun 15, 2016