
Protected scrutinization with key disclosure resistance.

I. Introduction:

Cloud computing is a model that enables on-demand, convenient access to shared data through a shared pool of configurable resources. It allows users and enterprises to store and process their data in third-party data centers. The cloud also reduces resource requirements and provides economies of scale, operating much like a utility over a network. The foundation of cloud computing is to abstract away dedicated infrastructure and provide shared services. Cloud resources are not only shared but also dynamically reallocated according to user demand. This increases utilization while reducing the overall cost of resources, such as power, rack space, and air conditioning, needed to maintain the systems. Cloud computing has also relieved users of the burden of purchasing licenses for different applications, since a single server can serve multiple users. Attributes such as hardware virtualization, low-cost computers and storage devices, high-capacity networks, autonomic and utility computing, and service-oriented architecture have driven the tremendous growth of cloud computing. Companies can scale cloud services up or down as required. Cloud computing has become a highly demanded technology because of its high computing power, high performance, accessibility, scalability, availability, and low cost of services. Many cloud users have reported business growth at a rate of about 50% per annum. As the technology is still in its infancy, some loopholes remain to be worked on to make cloud computing more user-friendly and reliable.

[FIGURE 1.1 OMITTED]

The homomorphic linear authenticator (HLA) technique, which supports blockless verification, has been shown to reduce the heavy computation and communication in auditing protocols: it allows a verifier to check the data held on the storage server without downloading the entire file. This technique has been adopted in many cloud storage auditing schemes. Another important aspect of cloud auditing is the privacy of data in shared storage. To reduce the burden on the client, a third-party auditor (TPA) periodically checks the integrity of the data in the cloud. Because the TPA may need to execute the auditing protocol many times, it could potentially learn the client's data, so auditing protocols are designed to preserve the privacy of the client's data. Another attribute that needs to be addressed is data dynamics and how to support it in cloud storage auditing; auditing protocols have been proposed that support dynamic data operations such as insertion, modification, and deletion. Other aspects, such as user revocation, eliminating proxy certificate management, and proxy auditing, have also been studied and investigated. Although much research surrounds cloud storage auditing, the problem of key exposure has remained unexplored in earlier work: all existing protocols focus on faults or dishonesty of the cloud while assuming the security settings on the client side are intact.

[FIGURE 1.2 OMITTED]

The concept proposed here, an auditing protocol with key-exposure resilience, can overcome the problems above. Under this approach, the integrity of the data can still be validated even if the client's current secret key for cloud storage auditing is exposed. In other words, we formalize a security model for auditing protocols with key-exposure resilience and propose a practical solution for this scenario, supported by a security proof and an asymptotic performance analysis. For key management, a binary tree structure with pre-order traversal is used to update the client's secret key, and a novel authenticator is deployed to support forward security and blockless verification. The results show that the proposed method is secure and efficient.
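For illustration, the following minimal Python sketch shows the forward-security idea behind such key updates using a simple one-way hash chain; this is a deliberate simplification of the binary-tree construction used in the actual protocol, and the function name is ours.

    import hashlib

    def evolve_key(current_key: bytes) -> bytes:
        """One-way key update: new keys reveal nothing about earlier ones."""
        return hashlib.sha256(b"key-evolution" + current_key).digest()

    # The client updates its secret key once per time period. Exposure of
    # key_2 does not reveal key_0 or key_1, so authenticators generated in
    # earlier periods remain trustworthy (forward security).
    key_0 = b"initial client secret"
    key_1 = evolve_key(key_0)
    key_2 = evolve_key(key_1)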

II. Related Works:

Ateniese et al. were the first to propose "provable data possession" (PDP) for the safety of data in untrusted storage; homomorphic linear authenticators (HLAs) and a random sampling technique are used to audit the outsourced data [1, 2]. Wang et al. gave a public auditing protocol with a privacy-preserving property, achieved by integrating HLAs with a random masking technique. Wang also introduced a proxy provable data possession protocol, in which the data integrity test is performed by a proxy, and included dynamic data operations to make auditing more flexible [2]. Yang and Jia produced a dynamic auditing protocol with a privacy-preserving property [3]. Zhu et al. described a method that supports dynamic auditing with a cooperative provable data possession protocol [4]. Shah et al. introduced a TPA to keep online storage honest; their protocol requires the auditor to maintain state and suffers from bounded usage [5]. Erway et al. went further and proposed a skip-list-based PDP protocol with support for dynamic data [6].

III. A. Segmentation Of Encoded Files:

This section deals with user authentication and user access permissions. Before anything else, the data owner must register through a registration form; the provided details are maintained in a database. Users then log in with their credentials, namely username and password, after which they can upload their files to the cloud storage. For security, the data is encoded using the Base64 algorithm and each file is broken into six parts, which are stored in six different locations. A sketch of this segmentation step appears below.
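A minimal Python sketch of this step; the helper name segment_file and the equal-size split are illustrative choices, not the paper's specification.

    import base64

    def segment_file(path: str, parts: int = 6) -> list[bytes]:
        """Base64-encode a file and split the encoding into six segments."""
        with open(path, "rb") as f:
            encoded = base64.b64encode(f.read())
        size = -(-len(encoded) // parts)  # ceiling division
        return [encoded[i * size:(i + 1) * size] for i in range(parts)]

    # Each segment would then be stored at a different storage location:
    # segments = segment_file("report.pdf")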

B. Encryption Of Files:

Encryption is the process of securing information by encoding it so that only an authorized user can decode it into readable form. Encryption does not prevent interception, but it denies the content to the interceptor. An encryption algorithm converts the plaintext message into ciphertext, which is readable only when decrypted. Decrypting the data without the recipient's key is possible in principle, but it requires large computational resources and skill, whereas decryption with the authorized key is easy. The AES algorithm is applied to the segmented files in their different locations, and the Blowfish algorithm is then applied to the encrypted files as an additional security measure. Each segmented file produces a key, and the six keys together form an aggregate key. The aggregate key is encrypted using the RSA algorithm and is later used by the user to retrieve the data. A sketch of this pipeline follows.
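The following minimal sketch shows this layered pipeline, assuming the PyCryptodome library; the key sizes, cipher modes, fixed demonstration IVs, and helper names are our own choices rather than the paper's specification.

    from Crypto.Cipher import AES, Blowfish, PKCS1_OAEP
    from Crypto.PublicKey import RSA
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad

    def encrypt_segment(segment: bytes) -> tuple[bytes, bytes]:
        """AES first, then Blowfish; returns ciphertext plus the segment key.
        Fixed all-zero IVs keep the demo short; real code needs random IVs."""
        aes_key, bf_key = get_random_bytes(16), get_random_bytes(16)
        inner = AES.new(aes_key, AES.MODE_CBC, iv=b"\x00" * 16).encrypt(
            pad(segment, AES.block_size))
        outer = Blowfish.new(bf_key, Blowfish.MODE_CBC, iv=b"\x00" * 8).encrypt(
            pad(inner, Blowfish.block_size))
        return outer, aes_key + bf_key  # segment key = both layer keys

    rsa_key = RSA.generate(2048)
    segments = [b"segment-1", b"segment-2"]  # stand-ins for the six parts
    ciphertexts, keys = zip(*(encrypt_segment(s) for s in segments))
    aggregate_key = b"".join(keys)  # one 32-byte key per segment, concatenated
    # The aggregate key is wrapped with RSA for later retrieval by the user.
    wrapped = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(aggregate_key)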

[FIGURE 3.1 OMITTED]

Existing techniques typically rely on a single encryption algorithm, which limits the achievable security level. The proposed technique layers Base64 encoding with the AES, Blowfish, and RSA algorithms, and hence the security level is higher.

C. Decryption Of Files:

Decryption is simply converting the encrypted file back to its original, readable form. The RSA algorithm is used to recover the aggregate key, which in turn provides the six keys for the segmented files. The contents of the files are then decrypted using the Blowfish and AES algorithms, in that order. Finally, the segments are reassembled and decoded, and the data is presented in its original form to the authenticated user. A sketch of the reverse pipeline follows.
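Continuing the sketch above under the same assumptions (PyCryptodome, an illustrative 32-byte-per-segment key layout), the reverse pipeline would look roughly as follows.

    from Crypto.Cipher import AES, Blowfish, PKCS1_OAEP
    from Crypto.Util.Padding import unpad

    def decrypt_segment(ciphertext: bytes, segment_key: bytes) -> bytes:
        """Undo Blowfish first, then AES, mirroring the encryptor above."""
        aes_key, bf_key = segment_key[:16], segment_key[16:]
        inner = unpad(Blowfish.new(bf_key, Blowfish.MODE_CBC,
                                   iv=b"\x00" * 8).decrypt(ciphertext),
                      Blowfish.block_size)
        return unpad(AES.new(aes_key, AES.MODE_CBC,
                             iv=b"\x00" * 16).decrypt(inner), AES.block_size)

    # Unwrap the aggregate key with the RSA private key, split it back into
    # per-segment keys, and decrypt each segment.
    aggregate_key = PKCS1_OAEP.new(rsa_key).decrypt(wrapped)
    keys = [aggregate_key[i:i + 32] for i in range(0, len(aggregate_key), 32)]
    plaintexts = [decrypt_segment(c, k) for c, k in zip(ciphertexts, keys)]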

[FIGURE 3.2 OMITTED]

Many existing systems exhibit serious key-management weaknesses. In the proposed system, the concept of key aggregation resolves these key-management issues and thereby provides a higher security level.

D. User Privacy:

User privacy here means hiding the data owner's name and showing only the file. If a user wants to download the data, the request is sent to the owner without the requester learning the owner's name; if the owner accepts the request, the file is shared.

IV. Proof Of The Theorem:

1. Base64 is a simple binary-to-text encoding scheme that represents binary data in an ASCII string format by translating it into a radix-64 representation. The term Base64 comes from a MIME content transfer encoding. The 64 characters chosen to represent the 64 place-values of the base vary between implementations, but they are always chosen to be printable and common to the subset of most encodings, so that the data is not modified in transit through information systems such as email. MIME's Base64 implementation, for example, uses A-Z, a-z, and 0-9 for the first 62 values; other variants share this property and differ only in the symbols chosen for the last two values (UTF-7, for example). Encodings of this type were originally created for dial-up communication between systems running the same OS, such as BinHex for the TRS-80 and uuencode for UNIX. Uuencode uses digits, uppercase letters, and many punctuation characters, with no lowercase.
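A short demonstration with Python's standard base64 module shows how three 8-bit bytes are regrouped into four 6-bit values drawn from the 64-character alphabet.

    import base64

    # "Man" (bytes 77, 97, 110) regroups into 6-bit values 19, 22, 5, 46,
    # which map to 'T', 'W', 'F', 'u' in the MIME alphabet A-Z, a-z, 0-9, +, /.
    print(base64.b64encode(b"Man"))   # b'TWFu'
    print(base64.b64decode(b"TWFu"))  # b'Man'
    print(base64.b64encode(b"M"))     # b'TQ==' ('=' pads the final group)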

2. AES is built on a substitution-permutation network, a combination of substitution and permutation, and is fast in both hardware and software. Unlike DES, which uses a Feistel network, AES is a variant of Rijndael. AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael per se allows block and key sizes in any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. AES operates on a 4x4 column-major order matrix of bytes called the state; some versions of Rijndael with a larger block size have additional columns in the state. The AES calculations are performed in a special finite field. The cipher proceeds through the following steps; a library usage sketch follows the round outline below.

1. Key Expansion--the round keys are derived from the cipher key using Rijndael's key schedule. AES requires a separate 128-bit round key block for each round plus one more.

2. Initial round

3. AddRoundKey--each byte of the state is combined with a block of the round key using bitwise XOR.

4. Rounds

a. SubBytes--a non-linear substitution step in which each byte is replaced with another byte according to a lookup table.

b. ShiftRows--a transposition step in which the last three rows of the state are shifted cyclically by a certain number of steps.

c. MixColumns--a mixing operation in which the four bytes in each column are combined.

d. AddRoundKey

5. Final Round (no MixColumns)

a. SubBytes

b. ShiftRows

c. AddRoundKey
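In practice these rounds run inside a vetted library rather than being implemented by hand. A minimal usage sketch, assuming PyCryptodome; the authenticated EAX mode is our choice for the demonstration.

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)           # AES-128; use 24/32 bytes for 192/256
    cipher = AES.new(key, AES.MODE_EAX)  # EAX also authenticates the data
    ciphertext, tag = cipher.encrypt_and_digest(b"attack at dawn")

    # Decryption re-runs the rounds in inverse order under the same key.
    decipher = AES.new(key, AES.MODE_EAX, nonce=cipher.nonce)
    assert decipher.decrypt_and_verify(ciphertext, tag) == b"attack at dawn"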

3. Blowfish

Blowfish has a key length that varies from 32 bits to 448 bits and a block size of 64 bits. It uses large key-dependent S-boxes in a 16-round Feistel cipher, and it resembles CAST-128, which uses fixed S-boxes, in structure. The algorithm uses two subkey arrays: an 18-entry P-array and four 256-entry S-boxes. Each S-box accepts an 8-bit input and produces a 32-bit output. One entry of the P-array is used in every round, and after the final round each half of the data block is XORed with one of the two remaining unused P-entries. Within the round function, the 32-bit input is split into four 8-bit quarters, which are used as inputs to the S-boxes; the outputs are added modulo 2^32 and XORed to produce the final 32-bit output.
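A minimal usage sketch, again assuming PyCryptodome; the 16-byte key and CBC mode are illustrative.

    from Crypto.Cipher import Blowfish
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    key = get_random_bytes(16)  # Blowfish accepts keys from 4 to 56 bytes
    cipher = Blowfish.new(key, Blowfish.MODE_CBC)
    ct = cipher.encrypt(pad(b"secret data", Blowfish.block_size))  # 8-byte blocks

    decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=cipher.iv)
    assert unpad(decipher.decrypt(ct), Blowfish.block_size) == b"secret data"

The RSA algorithm consists of three steps: key generation, encryption, and decryption.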

A. Key Generation:

RSA involves a public key and a private key. The public key can be known to everyone and is used for encrypting messages; messages encrypted with the public key can only be decrypted in a reasonable amount of time using the private key. The keys for the RSA algorithm are generated in the following way:

1. Choose two distinct prime numbers p and q.

[] For security purposes, the integers p and q should be chosen at random, and should be similar in magnitude but differ in length by a few digits, to make factoring harder. Prime integers can be found efficiently using a primality test.

2. Compute n = pq.

[] n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length.

3. Compute [phi](n) = [phi](p)[phi](q) = (p - 1)(q - 1) = n - (p + q -1), where [phi] is Euler's totient function. This value is kept private.

4. Choose an integer e such that 1 < e < [phi](n) and gcd(e, [phi] (n)) = 1; i.e., e and [phi](n) are co-prime.

[] e is released as the public key exponent.

[] e having a short bit-length and small Hamming weight results in more efficient encryption--most commonly 2^16 + 1 = 65,537. However, much smaller values of e (such as 3) have been shown to be less secure in some settings.

5. Determine d as d = e^-1 (mod [phi](n)); i.e., d is the modular multiplicative inverse of e (modulo [phi](n)).

[] This is more clearly stated as: solve for d given d x e = 1 (mod [phi](n)).

[] This is often computed using the extended Euclidean algorithm, with inputs e and [phi](n).

[] d is kept as the private key exponent.

The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the modulus n and the private (or decryption) exponent d, which must be kept secret. p, q, and [phi](n) must also be kept secret because they can be used to calculate d.

[] An alternative, used by PKCS#1, is to choose d such that d x e = 1 (mod [lambda](n)), where [lambda](n) = lcm(p - 1, q - 1) and lcm is the least common multiple; [lambda] is the Carmichael function. Using [lambda](n) instead of [phi](n) allows more choices for d. Since any common factor of (p - 1) and (q - 1) also divides pq - 1, it is recommended that (p - 1) and (q - 1) have only very small common factors, if any, besides the necessary 2.
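To make the steps concrete, here is the standard textbook toy example (p = 61, q = 53) worked in plain Python; pow(e, -1, phi) needs Python 3.8 or later, and real keys use primes hundreds of digits long together with padding such as OAEP.

    p, q = 61, 53             # step 1: two distinct primes (toy-sized)
    n = p * q                 # step 2: n = 3233, the modulus
    phi = (p - 1) * (q - 1)   # step 3: phi(n) = 3120
    e = 17                    # step 4: 1 < e < phi(n), gcd(e, phi(n)) = 1
    d = pow(e, -1, phi)       # step 5: modular inverse, d = 2753

    m = 65                    # a message encoded as an integer < n
    c = pow(m, e, n)          # encrypt: c = m^e mod n = 2790
    assert pow(c, d, n) == m  # decrypt: c^d mod n recovers the message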

Conclusion:

Cloud storage auditing is a sensitive topic, as it concerns the integrity of data in the public cloud. All previously proposed protocols work on the assumption that the client's secret key is secure. This assumption does not hold in all circumstances, since the security setting at the client end may be weak; if the client's secret key is exposed, all existing auditing protocols fail. The concept of blockless verifiability means that the integrity of the data can be checked even when the auditor does not have access to the actual file blocks. In the proposed system, when the user stores data in the cloud, the data is encoded, split into six segments, and stored in different locations. Each segment is encrypted with the AES algorithm and then the Blowfish algorithm. Each segment generates a key, and these together form an aggregate key that is encrypted using the RSA algorithm. When the user enters this key, the user is authenticated and retrieves the transferred file securely.

REFERENCES

[1.] Jia Yu, Kui Ren, Cong Wang and Vijay Varadharajan, 2015. 'Enabling Cloud Storage Auditing With Key-Exposure Resistance', IEEE Transactions On Information Forensics And Security, 10(6): 1167-1179.

[2.] Cash, D., A. Kupcu and D. Wichs, 2013. 'Dynamic Proofs Of Retrievability Via Oblivious RAM', in Advances in Cryptology - EUROCRYPT. Berlin, Germany: Springer-Verlag, pp: 279-295.

[3.] Yu, J., F. Kong, X. Cheng, R. Hao and G. Li, 2014. 'One Forward-Secure Signature Scheme Using Bilinear Maps And Its Applications', Inf. Sci., 279: 60-76.

[4.] Wang, H., Q. Wu, B. Qin and J. Domingo-Ferrer, 2014. 'Identity-Based Remote Data Possession Checking In Public Clouds', IET Inf. Secur., 8(2): 114-121.

[5.] Wang, H., 2013. 'Proxy Provable Data Possession In Public Clouds', IEEE Trans. Services Comput., 6(4): 551-559.

[6.] Yang, K. and X. Jia, 2013. 'An Efficient And Secure Dynamic Auditing Protocol For Data Storage In Cloud Computing', IEEE Trans. Parallel Distrib. Syst., 24(9): 1717-1726.

[7.] Zhu, Y., G.-J. Ahn, H. Hu, S.S. Yau, H.G. An and C.-J. Hu, 2013. 'Dynamic Audit Services For Outsourced Storages In Clouds', IEEE Trans. Services Comput., 6(2): 227-238.

[8.] Wang, C., S.S.M. Chow, Q. Wang, K. Ren and W. Lou, 2013. 'Privacy Preserving Public Auditing For Secure Cloud Storage', IEEE Trans. Comput., 62(2): 362-375.

[9.] Wang, B., B. Li and H. Li, 2012. 'Public Auditing For Shared Data With Efficient User Revocation In The Cloud', in Proc. IEEE INFOCOM, pp: 2904-2912.

[10.] Zhu, Y., H. Hu, G.-J. Ahn and M. Yu, 2012. 'Cooperative Provable Data Possession For Integrity Verification In Multicloud Storage', IEEE Trans. Parallel Distrib. Syst., 23(12): 2231-2244.

(1) Ms. M. Sneha, PG Student; (2) Mrs. R. Bama

(1,2) Computer Science and Engineering, Sri Sairam Engineering College, Chennai, Tamil Nadu

Received 25 January 2016; Accepted 28 April 2016; Available 5 May 2016

Address For Correspondence:

Ms. M. Sneha, PG Student, Computer Science and Engineering, Sri Sairam Engineering College, Chennai, Tamil Nadu. E-mail: sneha.sneha44@gmail.com