Cloud Security: Recently Published Documents


A Review on AWS - Cloud Computing Technology

Abstract: Cloud computing can be defined simply as obtaining computing power, storage, and databases as services instead of maintaining one's own data centres and servers; Amazon Web Services (AWS) is one such cloud computing technology. It is an established model that is already popular among almost all enterprises and is built on the concept of on-demand services, where cloud resources are used and scaled as demand requires. AWS cloud computing is a cost-effective model, and the major concerns in this model are security and storage in the cloud; these concerns are also among the main reasons many enterprises choose AWS. This paper provides a review of security research in the field of cloud security and the storage services of the AWS cloud platform. After security and storage, we present the working of AWS cloud computing. AWS is a highly trusted provider of cloud computing that offers excellent cloud security as well as excellent cloud storage services. The main aim of this paper is to make storage and security a core operation of cloud computing rather than an add-on. As the number of service providers and related companies increases, the AWS cloud platform plays a vital role in the service industry through its web services, so choosing a cloud service provider wisely is a basic need of the industry; we therefore examine how AWS fulfils these specific needs. Keywords: Trusted Computing, AWS, Information-Centric Security, Cloud Storage, S3, EC2, Cloud Computing

Deep Learning Approaches to Cloud Security

  • Genetic Algorithm-Based Pseudo Random Number Generation for Cloud Security
  • Cloud Security Service for Identifying Unauthorized User Behaviour
  • QoS-Based Cloud Security Evaluation Using a Neuro-Fuzzy Model
  • Azure Cloud Security for Absolute Beginners
  • Mitigating Theft-of-Service Attack: Ensuring Cloud Security on Virtual Machines

Cloud Computing Security Requirements: A Review

Abstract: Cloud computing is a new technology that is undergoing tremendous development today. Those who use it often cannot separate the reasonable from the unreasonable arguments that accompany cloud security requirements. The claim that cloud computing is inherently insecure is as absurd as the claim that it creates no new security problems. Cloud computing is a way to dynamically increase resources without in-depth knowledge of a brand-new infrastructure, without training new workers, and without designing new software solutions. The article aims to analyse the different cloud security issues and models of cloud architectures. Some of the main problems with security in virtualization, concerns about storing data in the cloud, and the assessment of risk tolerance in cloud computing are presented. Legal and regulatory issues for the protection of personal data are also addressed.

The Vulnerabilities of Cloud Computing: A Review

A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Cloud computing is the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network. However, this technology is still in its initial stages of development, as it suffers from threats and vulnerabilities that prevent users from trusting it. Various malicious activities by illegitimate users have threatened this technology, such as data misuse, inflexible access control, and limited monitoring. The occurrence of these threats may result in damage to, or illegal access of, users' critical and confidential data. This article describes the impact of these vulnerabilities and threats to create awareness among organisations and users so that they can adopt this technology with trust, choosing a trusted provider with trusted security policies. We define cloud-specific vulnerabilities and cloud feature vulnerabilities, and we propose a reference vulnerability architecture of cloud computing and the threats related to cloud computing. Cloud security and privacy play an important role in avoiding cloud threats. Cloud privacy concerns the expression of, or adherence to, various legal and non-legal norms regarding the right to a private life, while cloud security concerns the confidentiality, availability, and integrity of data or information. With the development of cloud computing, the issue of security has become a top priority. In this article, we discuss the characteristics of vulnerabilities, cloud vulnerabilities, and cloud threats, as well as how we can overcome or avoid them and keep our data safe.

Security and Privacy in Cloud Computing: Technical Review

Advances in the usage of information and communication technologies (ICT) have given rise to the popularity and success of cloud computing. Cloud computing offers advantages and opportunities for business users to migrate and leverage the scalability of the pay-as-you-go price model. However, outsourcing information and business applications to the cloud or a third party raises security and privacy concerns, which have become critical in adopting cloud implementations and services. Researchers and affected organisations have proposed different security approaches in the literature to tackle the present security flaws. The literature also provides an extensive review of security and privacy issues in cloud computing. Unfortunately, the works provided in the literature lack flexibility in mitigating multiple threats without conflicting with cloud security objectives. The literature has further focused only on highlighting security and privacy issues without providing adequate technical approaches to mitigate them. Conversely, studies that offer technical solutions to security threats have failed to explain how such security threats exist. This paper aims to introduce security and privacy issues that demand an adaptive solution approach without conflicting with existing or future cloud security. This paper reviews different works in the literature, taking into account their adaptiveness in mitigating future recurring threats and showing how cloud security conflicts have invalidated their proposed models. The article further presents the security threats surrounding cloud computing from a user perspective using the STRIDE approach. Additionally, it provides an analysis of different inefficient solutions in the literature and offers recommendations for implementing a secure, adaptive cloud environment.



Leveraging AI Planning for Detecting Cloud Security Vulnerabilities

Abstract: Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration. Alongside their growing popularity, concerns about security vulnerabilities leading to data breaches and sophisticated attacks such as ransomware are also growing. To address these, we first propose a generic framework to express relations between different cloud objects, such as users, datastores, and security roles, in order to model access control policies in cloud systems. Access control misconfigurations are often the primary driver of cloud attacks. Second, we develop a PDDL model for detecting security vulnerabilities that can, for example, lead to widespread attacks such as ransomware or sensitive data exfiltration. A planner can then generate attacks to identify such vulnerabilities in the cloud. Finally, we test our approach on 14 real Amazon AWS cloud configurations of different commercial organizations. Our system can identify a broad range of security vulnerabilities that state-of-the-art industry tools cannot detect.


Computational Intelligence and Neuroscience, vol. 2022

This article has been retracted.

The Rise of Cloud Computing: Data Protection, Privacy, and Open Research Challenges—A Systematic Literature Review (SLR)

Junaid Hassan

1 Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan

Danish Shehzad

2 Department of Computer Science, Superior University, Lahore 54000, Pakistan

Usman Habib

3 Faculty of Computer Sciences and Engineering, GIK Institute of Engineering Sciences and Technology, Topi, Swabi 23640, Khyber Pakhtunkhwa, Pakistan

Muhammad Umar Aftab

Muhammad Ahmad

Ramil Kuleev

4 Institute of Software Development and Engineering, Innopolis University, Innopolis 420500, Russia

Manuel Mazzara

Associated Data

The data used to support the findings of this study are provided in this article.

Cloud computing is a long-standing dream of computing as a utility, in which users can store their data remotely in the cloud and enjoy on-demand services and high-quality applications from a shared pool of configurable computing resources. The privacy and security of data are therefore of utmost importance to all of its users, regardless of the nature of the data being stored. In cloud computing environments this is especially critical because data is stored in various locations, even around the world, and users do not have any physical access to their sensitive data. Therefore, we need data protection techniques to protect the sensitive data that is outsourced to the cloud. In this paper, we conduct a systematic literature review (SLR) to describe the data protection techniques that protect sensitive data outsourced to cloud storage. The main objective of this research is to synthesize, classify, and identify important studies in the field. Accordingly, an evidence-based approach is used in this study. Preliminary results are based on answers to four research questions. Out of 493 research articles, 52 studies were selected. These 52 papers use different data protection techniques, which can be divided into two main categories, namely noncryptographic techniques and cryptographic techniques. Noncryptographic techniques consist of data splitting, data anonymization, and steganographic techniques, whereas cryptographic techniques consist of encryption, searchable encryption, homomorphic encryption, and signcryption. In this work, we compare all of these techniques in terms of data protection accuracy, overhead, and operations on masked data. Finally, we discuss the future research challenges facing the implementation of these techniques.

1. Introduction

Recent advances have given rise to the popularity and success of cloud computing. It is a new computing and business model that provides on-demand storage and computing resources. The main objective of cloud computing is to gain financial benefits as cloud computing offers an effective way to reduce operational and capital costs. Cloud storage is a basic service of cloud computing architecture that allows users to store and share data over the internet. Some of the advantages of cloud storage are offsite backup, efficient and secure file access, unlimited data storage space, and low cost of use. Generally, cloud storage is divided into five categories: (1) private cloud storage, (2) personal cloud storage, (3) public cloud storage, (4) community cloud storage, and (5) hybrid cloud storage.

However, when we outsource data and business applications to a third party, security and privacy issues become a major concern [ 1 ]. Before outsourcing private data to the cloud, there is a need to protect private data by applying different data protection techniques, which we will discuss later in this SLR. After outsourcing the private data to the cloud, sometimes the user wants to perform certain operations on their data, such as secure search. Therefore, while performing such operations on private data, the data needs to be protected from intruders so that intruders cannot hack or steal their sensitive information.

Cloud computing has many advantages because of the technical resources behind it. For example, it has made it possible to store large amounts of data, perform computation on that data, and obtain many other services. In addition, the cloud computing platform reduces the cost of services and also mitigates the problem of limited resources by sharing important resources among different users. Performance and resource reliability require that the platform be able to tackle security threats [ 2 ]. In recent years, cloud computing has become one of the most important topics in security research. This research includes software security, network security, and data storage security.

The National Institute of Standards and Technology (NIST) defines cloud computing as [ 3 ] “a model for easy access, ubiquitous, resource integration, and on-demand access that can be easily delivered through various types of service providers.” Cloud computing follows the Pay As You Go (PAYG) mechanism, in which users pay only for the services they use. The PAYG model gives users the ability to provision platforms and storage and to customize software according to the needs of the end user or client. These advantages are the reason that the research community has put so much effort into this modern concept [ 4 ].

Security is achieved by ensuring confidentiality, integrity, and data availability. Cloud users want assurance that their data is safe while using cloud services. There are various types of attacks launched against users' private data, such as intrusion attacks, hacking, theft of private data, and denial-of-service attacks; 57% of companies report security breaches when using cloud services [ 5 ]. Data privacy is even more delicate than data security because cloud service providers (CSPs) have full access to all of a cloud user's data and can monitor their activities, compromising the user's privacy. For example, if a user is diabetic and the CSP analyzes their activities, such as what they search for most and what kind of medicine they use most often, the CSP can derive sensitive information about the individual and could share it with a medicine company or an insurance company [ 6 ]. Another problem is that the user cannot fully trust the CSP, which leads to many legal issues. Users cannot store their sensitive data on unreliable cloud services because of this mistrust, and as a result, many users cannot use cloud services to store their personal or sensitive data in the cloud. One way to solve this problem is for the user to install a proxy on their side; this proxy takes the user's data, encrypts and protects it using data protection techniques, and then sends it to the untrusted CSP [ 7 ].

Google's recent privacy policy is that any user can use any Google service free of cost; however, Google monitors their activity and their data to improve its services [ 8 ]. In this paper, we compare different types of data protection techniques that provide privacy and security for data stored on the cloud. Many papers discuss outsourcing data storage to the cloud [ 9 , 10 ]; here, we also discuss how to secure the outsourced data. Most papers describe data security on the cloud against external intruder attacks [ 11 , 12 ]. This paper discusses not only attacks from outside intruders and the corresponding protection mechanisms but also insider attacks from the CSP itself. Many surveys cover data privacy achieved through cryptographic techniques [ 13 , 14 ]. These cryptographic techniques are very powerful for the protection of data and provide significant results; however, they require key management, and some cloud functionalities do not work on cryptographically protected data. In this paper, we also discuss steganographic techniques. To the best of our knowledge, no study discusses all the conventional and nonconventional security techniques; therefore, all the data protection techniques need to be brought together in one paper.

The rest of this paper is organized as follows: Section 2 reviews related work. Section 3 describes the research methodology, which consists of the inclusion and exclusion criteria, quality assessment criteria, study selection process, research questions, and data extraction process; we also discuss assumptions and requirements for data protection in the cloud. Section 4 presents all the cryptographic and noncryptographic techniques that are used for data protection over the cloud, discusses the demographic characteristics of the relevant studies by considering four aspects, (i) publication trend, (ii) publication venues (proceedings and journals), (iii) number of citations, and (iv) author information, and compares all of these data protection techniques. Lastly, in Section 5, we discuss the results and present the conclusion and future work.

2. Related Work

The first access control mechanism and data integrity scheme in the provable data possession (PDP) model is proposed in the paper [ 15 ], and it provides two mobile applications based on the RSA algorithm. Similar to PDP, the author of the paper [ 16 ] proposed a proof of retrievability (PoR) scheme that is used to ensure the integrity of remote data. PoR efficiency is improved using a shorter authentication tag integrated with the PoR system [ 17 ]. A more flexible PDP scheme is proposed by the author of the paper [ 18 ], which uses symmetric key encryption techniques to support dynamic operations. A PDP protocol with flexible functionality is developed, in which blocks can be added at run time [ 19 ]. A new PDP system with a different data structure is introduced, which improves flexibility and performance [ 20 ]. Similarly, another PDP model with a different data structure is designed to handle its data functionality [ 21 ]. To improve the accuracy of the data, the author of the paper [ 22 ] designed a multireplica data verification scheme that fully supports dynamic data updates.

A unique data integration protocol [ 23 ] for multicloud servers is developed. The author of the paper [ 24 ] also considers the complex case where multiple copies are stored with multiple CSPs and builds a solid system to ensure the integrity of all copies at once. A proxy PDP scheme [ 25 ] is proposed, which supports the delegation of data checking and uses concessions to verify auditor consent. In addition, the restrictions on the verifier are removed to strengthen the scheme, and a separate PDP certification system is proposed [ 26 ]. To maintain the security of information, a concept for information security is proposed and a PDP protocol for public auditing is developed [ 27 ]. To resolve the certification management issue, a PDP system with data protection is introduced [ 28 ].

Identity-based cryptography is developed, in which a user's unique identity is used as input to generate a secret key [ 29 ]. Another PDP protocol is recommended to ensure confidentiality [ 30 ]. The author of the paper [ 31 ] proposed a scheme, in which tags are generated through the ring signature technique for group-based data sharing that supports public auditing and maintains user privacy. A new PDP system is introduced for data sharing over the cloud while maintaining user privacy [ 32 ]. Additionally, it supports the dynamic group system and allows users to exit or join the group at any time. Another PDP system [ 33 ] that is based on broadcast encryption and supports dynamic groups [ 34 ] is introduced. The issue of user revocation has been raised [ 35 ], and to address this issue, a PDP scheme has been proposed, which removes the user from the CSP using the proxy signature method. A PDP-based group data protocol was developed to track user privacy and identity [ 36 ]. A PDP system [ 37 ] is proposed for data sharing between multiple senders. The author of the paper [ 38 ] provides SEPDP systems while maintaining data protection. However, the author of the paper [ 39 ] proved that the scheme proposed in [ 38 ] is vulnerable to malicious counterfeiting by the CSP. A collision-resistant user revocable public auditing (CRUPA) system [ 40 ] is introduced for managing the data that is shared in groups. Another scheme [ 41 ] is introduced as a way to ensure the integrity of mobile data terminals in cloud computing.

To address the PKI issue, identity-based encryption [ 42 ] is designed to enhance the PDP protocol and maintain user privacy in a dynamic community. Before sharing user-sensitive data with third parties or researchers, data owners must ensure that the privacy of that data is protected, which can be done using data anonymization techniques [ 43 ]. In recent years, the research community has focused on the PPDP research area and developed several approaches for tabular data and social networks (SN) [ 44 – 49 ]. There are two popular settings in PPDP: one is interactive, and the other is noninteractive [ 50 ]. The k-anonymity model [ 51 ] and its variants are most commonly used in the noninteractive setting of PPDP [ 52 – 56 ]. The interactive setting of PPDP makes extensive use of differential privacy (DP) [ 57 ] and DP-based methods [ 58 – 60 ]. Meanwhile, several studies have reported DP-based approaches for the noninteractive setting [ 61 ]. Researchers have also expanded the concepts used to anonymize tabular data to protect the privacy of SN users [ 62 – 64 ].

Most images on the internet are in a compressed form; hence, various studies design techniques for AMBTC-compressed images. Data concealment has become an active research area. We can hide data by adding confidential information to a cover image, and as a result, we get a stego image. There are two types of data hiding schemes: irreversible schemes [ 65 – 68 ] and reversible data hiding schemes [ 69 – 71 ]. A ciphertext intended for one recipient can be re-encrypted for another by a semitrusted proxy without decryption [ 72 ]. The first concrete construction of a collusion-resistant unidirectional identity-based proxy re-encryption scheme, for both selective and adaptive identity, is proposed in the paper [ 73 ]. One of the most widely used data hiding schemes is histogram shifting [ 74 – 76 ]. A histogram-shifting data hiding scheme [ 77 ] that uses the pixel histogram of the cover image is introduced. When big and diverse data are distributed everywhere, we cannot control malicious attacks; therefore, we need cryptosystems to protect our data [ 78 – 80 ].

Some identity-based signature (IBS) schemes [ 81 – 84 ] based on bilinear pairing are introduced. However, authentication schemes based on bilinear pairing over elliptic curves are more efficient and safer than traditional public key infrastructure [ 85 , 86 ]. The paper [ 87 ] proposed a privacy-preserving proxy re-encryption scheme for public cloud access control. A differential attack is performed on one-to-many order-preserving encryption (OPE) by exploiting the differences of the ordered ciphertexts in [ 88 ]. Another scheme is proposed, which consists of a cancelable biometric template protection scheme based on format-preserving encryption and Bloom filters [ 89 ]. Some researchers also use pairing-free identity-based signature schemes [ 90 – 93 ]. A lightweight proxy re-encryption scheme with certificate-based and incremental cryptography for fog-enabled e-healthcare is proposed in [ 94 ].

3. Research Methodology

The objective of this SLR is to evaluate, investigate, and identify the existing research in the context of data storage security in cloud computing and to evaluate all the existing techniques. An SLR is a fair and unbiased way of evaluating the existing techniques, as it provides a complete and evidence-based search related to a specific topic. At this time, no SLR has been conducted on data storage security techniques that covers both the cryptographic and noncryptographic techniques; this SLR fills that gap. This SLR follows the guidelines for systematic literature reviews provided by Kitchenham [ 95 ]. Furthermore, to increase the strength of our evidence, we follow another study provided by [ 96 ]. Our SLR consists of three phases, namely planning, conducting, and reporting. By following these three phases, we conduct our SLR, as shown in Figure 1 .

Figure 1: Review procedure.

3.1. Research Questions

The primary research question of this systematic literature review is “What types of data protection techniques have been proposed in cloud computing?” This primary research question is further divided into four RQs, which are listed below.

  •   RQ1: what types of data protection techniques have been proposed in cloud computing?
  •   RQ2: what are the demographic characteristics of the relevant studies?
  •   RQ3: which data protection technique provides more data protection among all the techniques?
  •   RQ4: what are the primary findings, research challenges, and directions for future research in the field of data privacy in cloud computing?

3.2. Electronic Databases

Six electronic databases were selected to collect primary search articles. All these six electronic databases are well-reputed in the domain of cloud computing. Most of the relevant articles are taken from two electronic databases, namely IEEE and Elsevier. All the electronic databases that we use in this research process are given in Table 1 .

Table 1: Database sources.

3.3. Research Terms

First, a title-based search is performed on the different electronic databases given in Table 1 , and the most relevant studies/articles are selected. The search is done using a string of the form (p1 OR p2 ... OR pn) AND (t1 OR t2 ... OR tn). This string/query is constructed using the population, intervention, control, and outcomes (PICO) structure, of which the population and intervention components are used here. The database search queries are given in Table 2 .

  •   Population : “cloud computing”
  •   Intervention : “data security,” “data privacy,” “data integrity”
  •   Generic query (constructed from the PICO terms above): ((“Document Title”: cloud∗) AND (“Document Title”: data AND (privacy OR protect∗ OR secure∗ OR integrity∗)))
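As a rough illustration of how such a query string is assembled, the following minimal Python sketch builds the OR-groups from the population and intervention terms. The term lists and formatting are illustrative assumptions, not the exact syntax of any particular database.

```python
def or_group(terms):
    """Join search terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

population = ['"cloud computing"', "cloud*"]                              # P terms
intervention = ['"data security"', '"data privacy"', '"data integrity"']  # I terms

query = or_group(population) + " AND " + or_group(intervention)
print(query)
# ("cloud computing" OR cloud*) AND ("data security" OR "data privacy" OR "data integrity")
```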

Table 2: Database search queries.

3.4. Procedure of Study Selection

The procedure of study selection is described in Figure 2 . This procedure has three phases: the first is exclusion based on the title, in which articles with irrelevant titles are excluded and those with relevant titles are included; the second is exclusion based on the abstract, in which only the articles with the most relevant abstracts are retained; and the last is exclusion based on the full text, which also applies the quality assessment criteria.

Figure 2: Study selection procedure.

3.5. Eligibility Control

In this phase, all the selected papers are read in full, and the relevant papers are selected to process our SLR further. Table 3 shows the final papers selected from each database based on the inclusion and exclusion criteria, which are given in Table 4 .

Table 3: Results from electronic databases.

Table 4: Inclusion and exclusion criteria.

3.6. Inclusion and Exclusion Criteria

The inclusion and exclusion criteria define eligibility for primary study selection. We apply them to the studies that remain after reading the abstracts of the papers. The criteria, set out in Table 4, outline the conditions applied to the articles; after applying them, we obtain the relevant articles that are finally added to our SLR. The search period is from 2010 to 2021, and most of the papers included in our SLR are from 2015 onward.

We apply the inclusion and exclusion criteria in the third phase of the study selection process and obtain 139 results. After that, we also apply the quality criteria and finally obtain the 52 articles included in this SLR. Most of the articles are taken from the Elsevier and IEEE electronic databases; IEEE is the largest venue for data storage security in cloud computing. The proportion of selected articles from the different electronic databases is shown in Figure 3 .

Figure 3: Percentage of selected studies.

3.7. Quality Assessment Criteria

Quality checking/assessment is done in the third phase of the study selection process. A scale of 0-1 is used for the quality assessment (QA) of the articles.

Poor-quality articles receive 0 points and good-quality articles receive 1 point; only the articles with 1 point are included in this SLR. Hence, by applying the quality assessment criteria to all the articles, we finally obtain 52 articles. All the selected papers show validity and novelty for different data protection techniques, and the quality assessment also checks the relevance of the articles, which ensures that every article is related to the topic of this SLR (data storage protection and privacy in cloud computing). The quality checking (QC) criteria are given in Table 5 .

Table 5: Quality checking criteria.

3.8. Taxonomy of the Data Protection Techniques

In this section, the data protection techniques are depicted in Figure 4 , arranged and classified into their related categories. The purpose of the taxonomy is to give an overall view of the data protection techniques. They are mainly divided into two categories, namely (1) noncryptographic techniques and (2) cryptographic techniques.

Figure 4: Taxonomy of the data protection techniques.

4. Results and Discussions

Data protection on the cloud is achieved by developing a third-party proxy that is trusted by the user. The trusted proxy is not a physical entity; it is a logical entity that can be deployed on the user end (for example, on the user's personal computer) or at a location the user trusts. Most local proxies are used as an additional service or an additional module (such as a browser plugin). To fulfill the objective of data protection by proxies, the following requirements must be met:

  • User privilege. There are several objectives of user privilege or user empowerment, however, the main objective is to increase the trust of the users in data protection proxies used by the cloud.
  • Transparency. Another important objective is that when users outsource their sensitive data to trusted proxies, their data should remain the same and should not be altered.
  • Low computation overhead. Cloud computing provides large computing power and cost-saving resources; however, increasing data security should not increase the computation overhead. We want to minimize the computation overhead of the proxies.
  • Cloud functionalities preservation. Cloud functionalities preservation is the most important objective. The users encrypt their sensitive data on their personal computers by applying different encryption techniques to increase the protection of their data, however, by applying these different encryption techniques, they are not able to avail some of the cloud functionalities because of compatibility issues [ 97 ]. Hence, it is the main issue.

Figure 5 shows the data workflow for protecting sensitive data on the cloud using a local proxy. Different assumptions are made for data protection, and some of them are discussed below.

  • Curious CSPs are the most commonly used model in the cloud computing literature [ 98 ]. The cloud service provider honestly fulfills its responsibilities, i.e., it does not interfere in user activities and only follows the standard protocols. The CSP is honest; however, it is sometimes curious to analyze the users' queries and their sensitive data, which is against the protocol and compromises the privacy of the user. We can avoid this by applying data protection techniques on the user end to protect the users' sensitive data from the CSPs.
  • In some cases, CSPs may collaborate with the data protection proxies present on the users' side to increase the level of trust between the users and CSPs, because better trust can motivate more users to move to the cloud. This collaboration can be achieved if CSPs provide users with a stable interface for storing, searching, and computing on their data.
  • A multicloud approach to cloud computing infrastructure has also been proposed to improve performance. In this approach, multiple cloud computing services are provided in the same heterogeneous architecture [ 19 ]. A multicloud gives the user multiple places to store their data at their desired locations. There are several benefits to using a multicloud, e.g., it reduces reliance on a single CSP, which increases flexibility.

Figure 5: Data workflow on cloud using local proxy.

4.1. RQ1: What Types of Data Protection Techniques Have Been Proposed in Cloud Computing?

In this section, we discuss all the techniques for data storage security over the cloud. These techniques are divided into two main categories, namely (i) cryptographic techniques and (ii) noncryptographic techniques. The local proxy uses different techniques to protect the data stored on the cloud, and because of this, we cannot gain all the advantages of cloud services. Therefore, we analyze and compare all these techniques based on different criteria: (i) the data accuracy of each technique, (ii) the data protection level of each technique, (iii) the functionalities each scheme allows on masked and unmasked data, and (iv) the overhead of encrypting and decrypting data over the cloud.

4.1.1. Noncryptographic Techniques

There are some noncryptographic techniques, and we discuss them in this paper as follows:

(1) Data Anonymization . Data anonymization is a data privacy technique used to protect a user's personal information. It hides personal information by hiding the identifiers or attributes that could reveal a person's identity. Data anonymization can be done by applying various mechanisms, for example, by removing or hiding identifiers or attributes, or by encrypting the user's personal information. The main purpose of data anonymization is to hide the identity of the person in some way. Data anonymity means that personal data is altered in such a way that a person cannot be identified directly or indirectly, and the CSP cannot retrieve any person's personal information. Data anonymization techniques were developed in the field of statistical disclosure control. They are most often used when we want to outsource sensitive data for testing purposes. Data anonymization is graphically represented in Figure 6 .

Figure 6: Data anonymization flow diagram.

Data anonymization techniques are most often used when we want to outsource sensitive data for testing purposes. For example, if doctors want to develop a diagnosis for certain diseases, they need details of those diseases. This information is obtained from the patients who suffer from them, but it is illegal to share or disclose anyone's personal information. For this purpose, we use data anonymization techniques to hide or conceal the person's personal information before outsourcing the data. In some cases, however, the CSP wants to analyze the user's masked data. In data anonymization, attributes are the most important part; attributes can include name, age, gender, address, salary, etc. Table 6 shows the classification of identifiers.

Table 6: Identifiers classification.

Data anonymization can be performed horizontally or vertically on this table, on a single record or on a group of records. The attributes are further classified into the following categories (a short sketch after the list shows how identifiers can be handled).

  • Sensitive Attributes: sensitive attributes possess sensitive information of the person, such as salary, disease information, phone number, etc. These attributes are strongly protected by applying some protection techniques.
  • Nonsensitive Attributes: these types of attributes do not belong to any type of category. Hence, they do not disclose the identity of a person.
  • Identifiers: identifier belongs to the identity of a person, such as Id card, name, social security number, etc. Because of the presence of these identifiers, the relationship between different attributes can be detected. Hence, these identifiers must be replaced or anonymized.
  • Quasi-Identifiers: quasi-identifiers are the group of identifiers that are available publicly, such as zip code, designation, gender, etc. Separately, these identifiers cannot reveal a person's identity; however, when combined, they may reveal it. Hence, we want to separate these quasi-identifiers to avoid disclosure.
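To make these categories concrete, here is a minimal Python sketch of how a local proxy might treat them before outsourcing. The record layout, column roles, and salt are hypothetical, and the salted-hash pseudonymization shown is only one of the mechanisms mentioned above.

```python
import hashlib

# Hypothetical patient record; the column roles follow the classification above.
record = {
    "name": "Alice Khan",     # identifier        -> pseudonymize before outsourcing
    "zip_code": "35400",      # quasi-identifier  -> candidate for masking/generalization
    "gender": "F",            # quasi-identifier
    "disease": "diabetes",    # sensitive attribute
}

IDENTIFIERS = {"name"}
SECRET_SALT = b"proxy-side secret"   # kept by the local proxy, never sent to the CSP

def pseudonymize(rec):
    """Replace direct identifiers with a salted hash so the CSP cannot link records to people."""
    out = dict(rec)
    for col in IDENTIFIERS:
        digest = hashlib.sha256(SECRET_SALT + rec[col].encode()).hexdigest()
        out[col] = digest[:12]   # short pseudonym for readability
    return out

print(pseudonymize(record))
```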

There are two main categories of data masking: (1) perturbative masking and (2) nonperturbative masking.

  • (1) Perturbative Masking
  •   In perturbative masking, the data is altered or masked with dummy values: the original data is replaced with dummy data that looks like the original with some added noise. The statistical properties of the original data are preserved in the masked data, whereas nonperturbative masking does not preserve them. A short sketch after this list illustrates two perturbative methods, noise addition and microaggregation.
  • Data swapping
  •   In data swapping, values are randomly exchanged between different records [ 99 ]. If numerical values are present in the dataset, they can only be changed within certain limits; otherwise, the meaning of the data changes and the masked data no longer resembles the original data. For attributes that can be ranked, a value is replaced with a nearby ranked value, as a very large difference between ranks is not suitable [ 100 ]. In data swapping, higher-level attributes are swapped [ 101 ], and individual values are not changed.
  • Noise Addition
  •   In this mechanism, some noise is added to the original dataset to alter the original data. Noise can only be added to continuous data, not to categorical data [ 102 ]. The noise is added to the attributes present in the original dataset, including the sensitive attributes as well as the quasi-attributes.
  • Microaggregation
  •   In this technique, similar records are clustered into groups, and the group average is released for each record instead of the individual values [ 103 ]. The more similar the records within a group, the higher the data utility. We can cluster the data in many ways, e.g., in categorical versions [ 104 ]. Microaggregation is applied to the quasi-attributes to protect them from reidentification, and protecting the quasi-attributes protects all the other attributes from reidentification. We can also minimize reidentification by data clustering [ 105 ].
  •   Pseudonymization
  •   In this method, the original values are replaced with artificial ones (pseudonyms) [ 106 ], which makes the data less identifiable.
  • (2) Nonperturbative Masking
  •   Nonperturbative masking does not change or alter the original values; however, it changes the statistical properties of the original data. Masked data is created by reducing or suppressing parts of the original data [ 107 ].
  • Bucketization
  • In this method, original data is stored in different buckets, and these buckets are protected through encryption [ 108 ]. We can protect the sensitive attributes through bucketization.
  • Data slicing is a method in which a larger group of data is divided into smaller slices or segments [ 109 ]. Hence, we can slice the data, and in this way, the sensitive attribute and the quasi-attributes are divided into different slices. By identifying the individual slice, the identity of the person cannot be disclosed.
  • Sampling is a technique in which the population and sample concept is present. The entire data is called population, and the masked data is called a sample. In this technique, we make different samples of the original data. A smaller data sample provides more protection [ 110 ].
  • Generalization
  •   It is a technique in which some additional attributes are added to the record. If the quasi-attributes are rare, some dummy attributes that look like quasi-attributes are added to the record, which makes reidentification more difficult [ 111 ]. By applying generalization to the data, we can protect the identity of a person because it hides the relationship between the quasi-attributes.
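The sketch below, using only the Python standard library and a hypothetical salary attribute, illustrates the two perturbative methods referenced above: noise addition and microaggregation (releasing group averages). The group size and noise scale are arbitrary choices for illustration.

```python
import random
import statistics

salaries = [3200, 3400, 3150, 8800, 9100, 9050]   # sensitive numeric attribute

# Noise addition: perturb each value with small Gaussian noise so the overall
# statistical shape is roughly preserved while exact values are hidden.
noisy = [round(v + random.gauss(0, 100)) for v in salaries]

# Microaggregation: group similar (here, sorted) records in groups of k and
# release the group average instead of the individual values.
def microaggregate(values, k=3):
    ordered = sorted(values)
    masked = []
    for i in range(0, len(ordered), k):
        group = ordered[i:i + k]
        masked.extend([round(statistics.mean(group))] * len(group))
    return masked

print("noise addition  :", noisy)
print("microaggregation:", microaggregate(salaries))
```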

The summary of data anonymization techniques is given in Table 7 .

Table 7: Summary of data anonymization techniques.

(2) Data Splitting . Data splitting is a technique in which sensitive data is divided into different fragments [ 112 ] to protect it from unauthorized access. In this technique, we first split the data into different fragments, then these fragments are randomly stored on different clouds. Even if the intruder gains access to a single fragment in any way, still the intruder will not be able to identify the person. For example, if an intruder gets a fragment from the cloud that contains the salary information of an organization, it is useless until he knows which salary belongs to which person. Hence, data splitting is a very useful technique for protecting data stored on the cloud.

A local proxy can outsource data to the cloud without splitting it, or it can split the data first and then outsource the fragments to the same cloud using different accounts of the same CSP. It can also store the fragments on different cloud platforms that are run by different CSPs but provide some of the same services. Data is split before being stored in different locations because, even if some part or piece of the data becomes known to an intruder, they will not be able to identify anyone.

Firstly, the local proxy retrieves sensitive data from the user and then calculates the risk factor for disclosure. In this method, the user can define the privacy level, and this privacy level provides information about all the sensitive attributes that can reveal someone's identity. These sensitive attributes are called quasi-attributes or quasi-identifiers. Next, the local proxy decides the number of pieces into which the sensitive data will be split and the number of locations that will be needed to store those pieces. Therefore, no one can reveal a person's identity, and all this information about the data splitting mechanism is stored at the local proxy. However, the system must be able to function properly and respond to the queries on time. After that, the local proxy stores these different data fragments in different cloud databases, and now, they are free from disclosure. The data-splitting mechanism supports almost all the functions of the cloud. Hence, we can use almost all the services provided by CSP using the data-splitting mechanism for storing data in the cloud.

When users want to retrieve the original data, they issue a query to the local proxy. The query is processed, and the data storage locations are retrieved from the local database. After that, the query is replicated as many times as the data was split into fragments, and these queries are forwarded to the relevant CSPs. As a result, each CSP provides a set of results that represents a partial view of the complete result. Finally, the proxy combines the partial results according to the criteria used to split the data and provides the complete result to the user. Mostly, all these fragments are stored on different cloud databases in their original structure; therefore, computation on a single fragment can be performed easily. However, there is a problem if a computation spans several fragments: no standard algorithm exists for this, and such computations require communication between the different CSPs, so dedicated algorithms are needed. Redundancy of the proxy metadata and backup policies are essential to ensure the robustness of the mechanism. Data splitting is graphically represented in Figure 7 .
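A minimal sketch of this proxy workflow is given below. The two dictionaries stand in for two hypothetical cloud providers, and the attribute-to-cloud assignment is an illustrative choice; a real proxy would also handle authentication, encryption of especially sensitive fragments, and backups of its index.

```python
# Sketch of the local-proxy splitting workflow described above.
# The "clouds" are plain dictionaries standing in for two different CSPs.
cloud_a, cloud_b = {}, {}          # e.g. two accounts or two providers
proxy_index = {}                   # mapping kept only on the local proxy

def store(record_id, record):
    """Split a record so no single CSP holds enough attributes to identify a person."""
    fragment_a = {"zip_code": record["zip_code"], "gender": record["gender"]}
    fragment_b = {"salary": record["salary"]}
    cloud_a[record_id] = fragment_a
    cloud_b[record_id] = fragment_b
    proxy_index[record_id] = (cloud_a, cloud_b)    # where each fragment lives

def retrieve(record_id):
    """Replicate the query to each CSP and merge the partial results."""
    store_a, store_b = proxy_index[record_id]
    return {**store_a[record_id], **store_b[record_id]}

store("r1", {"zip_code": "54000", "gender": "M", "salary": 9100})
print(retrieve("r1"))
```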

Figure 7: Data-splitting flow diagram.

A summary of data splitting is given in Table 8 . Different data-splitting techniques are used for the protection of data stored on the cloud; some of these are given below.

  • Byte level splitting
  •   In this type, all the sensitive data is converted into bytes [ 113 ]. These bytes are randomly shuffled with each other, recombined, and divided into fixed-length fragments, which are then stored on different clouds (a small sketch follows this list).
  • Privacy level splitting
  •   In this mechanism, the user chooses the privacy level of each file [ 114 ] that is to be stored on a cloud database; hence, a privacy level is attached to each file stored on the cloud. Using this privacy level, the user can decide that files with a higher privacy level should be stored on a more trusted cloud.
  • Byte level splitting with replication
  •   Byte-level data splitting is combined with data replication to improve both performance and security. The author of the paper [ 115 ] proposed an algorithm to store the data fragments on different clouds so that they are kept at a certain distance from each other; by doing this, we can avoid confabulation attacks, in which the intruder aggregates the split fragments.
  • Byte level splitting with encryption
  • Firstly, byte-level data splitting [ 116 , 117 ] is proposed. In this scheme, every fragment of data is encrypted to enhance the security of sensitive data. In this mechanism, the data is split into bytes, and these bytes are randomly shuffled and finally recombined. This type of data splitting is suitable for binary or multimedia files that are not processed through the cloud.
  •   Another problem is choosing a fragment length at which the data cannot be reidentified, i.e., the identity of a person cannot be revealed. If the fragments are too short, the probability of disclosure increases, and if they are too long, they are difficult to handle. Hence, the fragments should have a length that both remains manageable and protects the identity of a person.
  • There is another type of data splitting in which we split data into attributes. The attribute level splitting is performed in two ways: one is horizontal splitting and the second is vertical splitting. These types of splitting are mostly done on structural databases, and they provide strong privacy.
  • Vertical splitting
  •   In vertical data splitting [ 118 , 119 ], we divide the quasi-identifiers or quasi-attributes in such a way that the risky attributes end up in different fragments, preventing reidentification. Some of the sensitive fragments additionally require encryption; we can encrypt these fragments by applying encryption algorithms or other privacy methods to increase the security level.
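As a rough illustration of the byte-level splitting with shuffling mentioned above, the sketch below permutes the bytes with a seeded shuffle, cuts them into fixed-length fragments, and inverts the permutation on reassembly. Keeping only a seed at the proxy is an illustrative simplification; a real implementation would store the permutation securely and add per-fragment integrity checks.

```python
import random

def byte_split(data: bytes, n_fragments: int, seed: int):
    """Shuffle the bytes of `data` with a seeded permutation and cut the result
    into fixed-length fragments, one per cloud."""
    order = list(range(len(data)))
    random.Random(seed).shuffle(order)          # permutation recoverable from the seed
    shuffled = bytes(data[i] for i in order)
    size = -(-len(shuffled) // n_fragments)     # ceiling division -> fixed fragment length
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

def byte_reassemble(fragments, seed: int):
    """Recombine the fragments and invert the permutation to recover the original bytes."""
    shuffled = b"".join(fragments)
    order = list(range(len(shuffled)))
    random.Random(seed).shuffle(order)
    original = bytearray(len(shuffled))
    for target, source in enumerate(order):
        original[source] = shuffled[target]
    return bytes(original)

fragments = byte_split(b"patient salary: 9100", n_fragments=3, seed=42)
assert byte_reassemble(fragments, seed=42) == b"patient salary: 9100"
```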

Table 8: Summary of the data-splitting techniques.

A solution for sensitive data splitting without performing encryption on fragments is proposed [ 120 ]. This mechanism is suitable for data on which we want to perform some computation, because on encrypted data, we cannot perform computation directly. Another technique has been proposed [ 121 ], which demonstrates the redaction and sanitization of a document that identifies all sensitive attributes and protects the data in most documents.

Schemes that use vertical splitting to protect data are faster than other splitting techniques because the data fragments consist of one or more whole attributes and involve no data masking or encryption; hence, the computation is easy. There is another type of encryption in which we do not have to encrypt and decrypt the data every time we perform a computation: homomorphic encryption. In this case, all data modifications are performed on the encrypted data, the actual data is not changed, and the final result is preserved [ 122 ].

(3) Steganography . Steganography is the practice of concealing a message within another message or a physical object. In computing contexts, a video, audio clip, image, message, or computer file is concealed within another image, message, or file. The steganography flow diagram is depicted in Figure 8 . There are two main types of steganography, namely (1) linguistic steganography and (2) technical steganography. These techniques are given as follows:

  • (1) Linguistic Steganography
  •   It uses images and symbols alone to conceal the data. One form is the semagram, of which there are two types [ 123 ]: the visual semagram, in which the message is conveyed visually, and the text semagram, in which the font, color, or symbols of the text message are changed.
  •   The other form is the open code technique, in which we hide the real message from the intruder by embedding the original message in a legitimate carrier [ 124 ]. The open code technique is further divided into two types: jargon codes and covered ciphers.
  • (2) Technical Steganography
  • Text steganography
  • In this type, we change some textual characteristics of text, such as the font, color, or symbols of the text message [ 127 ]. Three coding techniques are used to change these textual features, which are as follows: (1) line-shift coding, (2) word-shift coding, and (3) feature coding.
  • Image steganography
  • It is the most popular type of steganography. Image steganography refers to the process of hiding sensitive data inside an image file [ 128 ]. The transformed image is expected to look very similar to the original image because the visible features of the stego image remain the same. The image steganography is divided into three parts, namely (1) least significant bits coding, (2) masking and filtering, and (3) transformations.
  • Audio steganography
  • Audio steganography is a technique that is used to transmit secret data by modifying a digitalized audio signal in an imperceptible manner [ 129 ]. Following types of audio steganography are given: (1) least significant bits coding, (2) phase coding, (3) spread spectrum, and (4) echo hiding.
  • Video steganography
  • In video steganography, both image and audio steganography are used [ 130 ]. A video consists of many frames. Hence, video steganography hides a large amount of data in carrier images. In this type of steganography, we select the specific frame in which we want to hide the sensitive data.
  • (ii) Methods
  • Frequency Domain
  • A frequency-domain steganography technique is used for hiding a large amount of data with no loss of secret message, good invisibility, and high security [ 131 ]. In the frequency domain, we change the magnitude of all of the DCT coefficients of the cover image. There are two types of frequency domain: (1) discrete cosine transformation and (2) discrete wavelet transformation.
  • Spatial Domain
  •   The spatial domain is based on the physical location of pixels in an image [ 132 ]. Spatial domain techniques manipulate pixel values directly while minimizing the visible changes between the cover image and the resulting stego image. Some methods of the spatial domain are as follows: (1) least significant bit, (2) pixel value differencing, (3) pixel indicator, (4) gray level modification, and (5) quantized indexed modulation (a minimal least-significant-bit sketch follows this list).
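The following is a minimal, self-contained sketch of least-significant-bit embedding over raw bytes standing in for pixel values (no image library involved); it hides one message bit in the lowest bit of each cover byte and reads it back. Real image steganography would operate on an actual pixel buffer and usually also encode the message length.

```python
def lsb_embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bit of each cover byte (one bit per byte)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite only the lowest bit
    return stego

def lsb_extract(stego: bytearray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of hidden data from the least significant bits."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for carrier in stego[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (carrier & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 4            # stand-in for raw pixel values
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego, 6) == b"secret"
```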

Figure 8: Steganography flow diagram.

The summary of the steganographic techniques is given in Table 9 .

Table 9: Summary of the steganographic techniques.

4.1.2. Cryptographic Techniques

Cryptography is the most important and most widely used technique for security purposes. In cryptography, plaintext is converted into ciphertext using a key and an encryption algorithm. Cryptographic techniques are the most secure of all the security techniques, and hence they are widely used for data storage security over the cloud; present-day cryptographic techniques are also quite practical. We can achieve different objectives by applying them, for example, data confidentiality and data integrity. Because of the increase in the number of data breaches in the last few years, some cloud service provider companies are shifting toward cryptographic techniques to achieve more security. The most commonly used cryptographic technique is AES [ 133 ]. Key management is an important issue in cryptographic techniques because if the key is obtained by an intruder, all the data can be hacked or stolen; hence, key protection and key management are critical, and it is mostly the responsibility of the CSP to manage and protect the keys. Cryptographic techniques also protect the user from an untrusted CSP, because sometimes a CSP outsources sensitive data without the permission of the users, which is an illegal activity; to avoid this and protect sensitive data from untrusted CSPs, cryptographic techniques are the best option for users. However, there are some difficulties the user has to face while using cryptographic techniques: if a user wants to update a small amount of data, the user needs to decrypt the data and then perform this minor update, which is very costly. In general, cryptographic techniques give us a higher level of security at the expense of performance or speed; the balance depends on the level of security the user wants to achieve. In this paper, we focus on the four main functionalities that are required in cloud computing when using cryptographic techniques. Figure 9 shows the encryption flow diagram.

Figure 9: Encryption flow diagram.
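As an illustration of the client-side symmetric encryption discussed above, the following sketch uses AES in GCM mode via the third-party Python `cryptography` package; the package choice, helper names, and parameters are assumptions of this example rather than details taken from the surveyed papers. Data is encrypted before upload and decrypted after download, and the key never leaves the user.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(key: bytes, plaintext: bytes, object_name: bytes) -> bytes:
    """Encrypt a file before sending it to the cloud; the object name is bound as associated data."""
    nonce = os.urandom(12)                       # 96-bit nonce, unique per object
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, object_name)
    return nonce + ciphertext                    # store the nonce alongside the ciphertext

def decrypt_after_download(key: bytes, blob: bytes, object_name: bytes) -> bytes:
    """Decrypt a downloaded object; fails if the ciphertext or object name was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, object_name)

key = AESGCM.generate_key(bit_length=256)        # stays with the user, never with the CSP
blob = encrypt_for_upload(key, b"patient record #42", b"records/42")
assert decrypt_after_download(key, blob, b"records/42") == b"patient record #42"
```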

Some of the main functionalities required of cryptographic techniques are given below.

  • Search on encrypted data
  • If a user wants to retrieve data stored in a cloud database, they generate a query, run it through a local proxy, and search for the data they want. Searching over encrypted data is a very important capability, because every user who stores sensitive data in a cloud database wants to retrieve it later through queries; without special support, retrieving data from an encrypted store is very difficult.
  • Storage control
  • Sometimes the user wants to store data in a desired location or trusted database. Hence, the user must have full control over the storage of data.
  • Access control
  • It is a very important control and is referred to as data access restriction. Sometimes, the user does not want to share a private file publicly. Hence, access control is an important functionality.
  • Computation on data
  • Data computation is a core functionality of cloud computing. Sometimes, the user wants to perform some computation on data that is stored in a cloud database. If that data is encrypted, there are two ways. One is that the user first downloads and decrypts the entire data set, performs the computation, and finally re-encrypts everything and stores it back in the cloud database; this process is very expensive in terms of computation (a sketch of this naive workflow is given after this list). The other is to use an encryption scheme that allows computation directly on ciphertexts, as discussed under homomorphic encryption below.
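The "decrypt everything, compute, re-encrypt everything" workflow described in the last item can be sketched as follows; the in-memory dictionary and helper names are hypothetical stand-ins for a real object store, and the third-party Python `cryptography` package is assumed. Every small update pays the full cost of a download, a decryption, a re-encryption, and an upload, which motivates the homomorphic schemes discussed next.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

cloud_store: dict[str, bytes] = {}           # hypothetical stand-in for the CSP's object storage
key = AESGCM.generate_key(bit_length=256)    # client-side key

def put(name: str, plaintext: bytes) -> None:
    """Encrypt locally and 'upload' the ciphertext."""
    nonce = os.urandom(12)
    cloud_store[name] = nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def get(name: str) -> bytes:
    """'Download' the ciphertext and decrypt it locally."""
    blob = cloud_store[name]
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def add_to_encrypted_counter(name: str, increment: int) -> None:
    value = int(get(name).decode())          # 1. download and decrypt everything
    value += increment                       # 2. the tiny update itself
    put(name, str(value).encode())           # 3. re-encrypt and upload everything

put("counter", b"100")
add_to_encrypted_counter("counter", 1)
assert get("counter") == b"101"
```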

Some of the cryptographic techniques are as follows:

(1) Homomorphic Encryption. Homomorphic encryption is a form of encryption that permits users to perform computations on encrypted data without decrypting it. The results of these computations remain in encrypted form; when decrypted, they produce the same output that would have been obtained by performing the operations on the unencrypted data. The main types of homomorphic encryption are described below.

  • Partial Homomorphic Encryption
  • In partial homomorphic encryption, only one arithmetic operation, either addition or multiplication, is supported. If decrypting the combined ciphertexts yields the sum of the plaintexts, the scheme is called additively homomorphic; if it yields the product of the plaintexts, it is called multiplicatively homomorphic. Two multiplicative homomorphic schemes are given in [ 134 , 135 ], and one additive homomorphic scheme is Paillier [ 136 ] (a small Paillier sketch is given after this list).
  • Somewhat Homomorphic Encryption
  • This technique allows the user to perform both addition and multiplication operations, however, only a limited number of operations is permitted, because each operation adds noise to the ciphertext, and too much noise corrupts the original data on decryption. A somewhat homomorphic encryption scheme is presented by the authors of [ 137 , 138 ]. In such schemes, the time for encryption and decryption grows as the number of multiplication operations increases; to keep this time bounded, only a limited number of mathematical operations is allowed.
  • Fully Homomorphic Encryption
  • This technique allows an unbounded number of both addition and multiplication operations; over bits, addition and multiplication correspond to XOR and AND gates, respectively [ 139 ]. Fully homomorphic encryption requires a much higher computation time to encrypt, evaluate, and decrypt data, so it is rarely practical for real-life applications. It uses a bootstrapping procedure to refresh ciphertexts when a large number of multiplication operations has been performed, so that decryption still succeeds. Homomorphic encryption therefore represents a trade-off between the operations supported and performance: schemes that allow only a limited number of arithmetic operations need less computation, whereas schemes that allow arbitrary operations cost much more. Which to choose depends on the needs of the user.
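As a concrete example of additively homomorphic (partial homomorphic) encryption, the sketch below uses the third-party `phe` (python-paillier) package; the package and parameter choices are assumptions of this illustration, not the implementation of the cited schemes. The cloud side can add ciphertexts and scale them by plaintext constants without ever holding the private key.

```python
# Additive homomorphism with Paillier via the third-party `phe` package (pip install phe).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# The client encrypts its values before uploading them.
enc_salaries = [public_key.encrypt(x) for x in (4200, 3100, 5000)]

# The cloud aggregates ciphertexts without ever holding the private key.
enc_total = enc_salaries[0] + enc_salaries[1] + enc_salaries[2]  # ciphertext + ciphertext
enc_yearly = enc_total * 12                                      # ciphertext * plaintext constant

# Only the client, holding the private key, can read the results.
assert private_key.decrypt(enc_total) == 12300
assert private_key.decrypt(enc_yearly) == 12300 * 12
```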

(3) Searchable Encryption. A searchable encryption technique is proposed by the author of [ 140 ]. In this technique, data is encrypted before being stored in a cloud database. The advantage of this technique is that it allows a secure search over the encrypted data held in the cloud database.

  • Searchable Asymmetric Encryption
  • Over the past two decades, searchable encryption has received considerable attention, and much of the work addresses the multiwriter, single-reader setting. Searchable asymmetric encryption is also called public-key encryption with keyword search (PEKS) [ 141 ].
  • Searchable Symmetric Encryption
  • Symmetric-key algorithms use the same key for message encryption and ciphertext decryption; the keys can be identical, or there can be a simple transformation to go between the two keys. Verifiable searchable symmetric encryption, as a key cloud security technique, allows users to retrieve encrypted data from the cloud with keywords and verify the accuracy of the returned results. A scheme for keyword search over dynamic encrypted cloud data with symmetric-key-based verification is proposed in [ 142 ]. A toy keyword-index sketch is given after this list.
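A toy flavour of searchable symmetric encryption is sketched below: the client builds an index mapping deterministic HMAC tokens of keywords to document identifiers, so the server can answer keyword queries without seeing the keywords themselves. This only illustrates the basic idea, it ignores leakage, updates, and result verification, and it is not the verifiable scheme of [ 142 ]; all names are hypothetical.

```python
import hashlib
import hmac
from collections import defaultdict

def keyword_token(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server sees the token, never the keyword."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict[str, list[str]]) -> dict[bytes, list[str]]:
    """Client side: map HMAC(keyword) -> document ids before outsourcing the index."""
    index: dict[bytes, list[str]] = defaultdict(list)
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index[keyword_token(key, kw)].append(doc_id)
    return dict(index)

def search(index: dict[bytes, list[str]], token: bytes) -> list[str]:
    """Server side: look up the opaque token; no plaintext keywords are involved."""
    return index.get(token, [])

key = b"\x00" * 32  # in practice a randomly generated secret held only by the client
index = build_index(key, {"doc1": ["cloud", "security"], "doc2": ["cloud", "storage"]})
assert search(index, keyword_token(key, "cloud")) == ["doc1", "doc2"]
```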

(3) Encryption . In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information.

  • Symmetric Key Encryption
  • Only one key is used in symmetric encryption to encrypt and decrypt the message. Two parties that communicate through symmetric encryption should exchange the key so that it can be used in the decryption process. This method of encryption differs from asymmetric encryption, where a pair of keys is used to encrypt and decrypt messages. A secure transmission method of network communication data based on symmetric key encryption algorithm is proposed in [ 143 ].
  • Public Key Encryption
  • The public-key encryption scheme is proposed by the author of [ 144 ]. In this scheme, the receiver creates a key pair consisting of two keys: a public key, which is known to everyone, and a private key, which is kept secret. The sender encrypts the data using the receiver's public key and then sends the encrypted data to the receiver, who decrypts it with the private key. In this way, two parties can communicate securely (a minimal sketch is given after this list).
  • Identity-Based Encryption
  • Identity-based encryption is proposed by the author of [ 145 ]. In this technique, a set of users is registered in a database, and a unique identity is assigned to every registered user by the authority that runs the scheme. The identity of a user can be, for example, their name or e-mail address. As in public-key encryption, there is a key pair: the public key is the identity of the user, and the private key is a secret key. Unlike in public-key encryption, the user cannot generate this private key themselves; a central authority generates and manages the users' keys from their identities. Identity-based encryption is improved by the author of [ 146 ]. The main advantage of identity-based encryption is that anyone can derive the public key of a given identity, while the corresponding private key is issued by the central authority.
  • Attribute-Based Encryption
  • The authors of [ 147 , 148 ] propose a technique called attribute-based encryption. Like identity-based encryption, attribute-based encryption depends on a central authority, which generates the private keys and distributes them to all the registered users. A user whose attributes satisfy the policy attached to a ciphertext can decrypt it; a user without the required attributes cannot. Attribute-based encryption is especially useful when the number of registered users is very large. It consists of two schemes, namely key-policy and ciphertext-policy attribute-based encryption.
  • Functional Encryption
  • A functional encryption technique [ 149 , 150 ] combines the functionalities of identity-based encryption, attribute-based encryption, and public-key encryption. In this technique, all private keys are generated by the central authority and are associated with a specific function. Functional encryption is a very powerful encryption technique and is used in many applications.
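To illustrate the public-key flow described in the list above (a receiver-generated key pair, encryption under the public key, decryption under the private key), here is a minimal sketch using RSA-OAEP from the third-party Python `cryptography` package; the key size and API choices are assumptions of this example.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The receiver generates the key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the receiver's public key...
ciphertext = public_key.encrypt(b"meet at the data centre", oaep)

# ...and only the receiver's private key can recover the plaintext.
assert private_key.decrypt(ciphertext, oaep) == b"meet at the data centre"
```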

(4) Signcryption. Signcryption functions simultaneously as a digital signature and a cipher: digital signatures and encryption are two basic cryptographic tools that together can ensure confidentiality, integrity, and non-repudiation. In [ 151 ], a new signcryption scheme based on efficiently verifiable credentials is proposed; the system not only performs signing and encryption together but can also provide an encryption-only or signature-only form when needed [ 152 ]. That paper proposes a lightweight certificate-based signcryption scheme with a proxy (CBSS) for smart devices connected to an IoT network, to reduce computation and communication costs; to ensure the security and efficiency of the proposed CBSS scheme, a cipher with 80-bit security parameters is used. Reference [ 153 ] proposes an access control scheme for the IoT environment using a signcryption scheme designed for efficiency and robustness. The proposed scheme provides security services such as protection against attacks, confidentiality, integrity, and non-repudiation, while its computation and communication costs are low compared to existing schemes. Reference [ 154 ] gives informal and formal security proofs of the proposed scheme; the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is used for the formal security analysis, which confirms that the proposed CB-PS scheme can potentially be implemented for resource-constrained, low-computing electronic devices in e-prescription systems. The scheme proposed in [ 155 ] introduces a new concept that does not require a secure channel: the key generation center sends a part of the private key to users over a public channel. The summary of the cryptographic schemes is given in Table 10 .
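Signcryption proper is a dedicated primitive, but its goal, confidentiality plus sender authentication, can be approximated by composing a signature with symmetric encryption. The hedged sketch below combines an Ed25519 signature with AES-GCM in a sign-then-encrypt pattern using the third-party Python `cryptography` package; it only illustrates the properties discussed above and is not any of the certificate-based schemes of [ 151 – 155 ].

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import (Ed25519PrivateKey,
                                                                Ed25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def sign_then_encrypt(sig_key: Ed25519PrivateKey, enc_key: bytes, message: bytes) -> bytes:
    """Approximate signcryption: sign the message, then encrypt signature || message."""
    signature = sig_key.sign(message)            # 64-byte Ed25519 signature
    nonce = os.urandom(12)
    return nonce + AESGCM(enc_key).encrypt(nonce, signature + message, None)

def decrypt_then_verify(sender_pub: Ed25519PublicKey, enc_key: bytes, blob: bytes) -> bytes:
    """Decrypt, then check the sender's signature; raises InvalidSignature on tampering."""
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(enc_key).decrypt(nonce, ciphertext, None)
    signature, message = plaintext[:64], plaintext[64:]
    sender_pub.verify(signature, message)
    return message

sig_key = Ed25519PrivateKey.generate()
enc_key = AESGCM.generate_key(bit_length=256)    # shared symmetric key, assumed pre-agreed
blob = sign_then_encrypt(sig_key, enc_key, b"approve payment 42")
assert decrypt_then_verify(sig_key.public_key(), enc_key, blob) == b"approve payment 42"
```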

Table 10: Summary of the cryptographic techniques.

All of the data storage protection techniques for cloud computing are discussed in Section 3. There are many data protection techniques, however, they all fall into three main categories, namely (i) data splitting, (ii) data anonymization, and (iii) cryptography. We discuss these techniques from several points of view, e.g., the overhead they place on the local proxy, their computation cost, their support for search on encrypted data, the data accuracy they retain, the level of data protection they provide, and the functionalities that the masked data still supports. Considering these views, we can analyze all the data protection techniques. Cryptography provides high-level security but limited cloud functionality and a high cost for performing computation on cloud data. Data splitting provides a low computation cost but a lower level of security. Data anonymization is of two types: perturbative masking and nonperturbative masking. In perturbative masking, the data is altered with dummy data, so security is high, however, some functionalities can no longer be performed.

4.2. RQ2: What are the Demographic Characteristics of the Relevant Studies?

We answer this question by considering the four following aspects: (i) publication trend, (ii) publication venues (proceeding and journals), (iii) number of citations, and (iv) author information.

4.2.1. Publication Trend

From 2010 to 2021, we found 52 papers published in top-ranked journals and conferences. From 2010 to 2017, work on cloud computing data security grew only slowly; after 2017, however, a great deal of work was done. From 2018 to 2021, 37 of the papers were published, and the trend accelerated sharply after 2018, with the most work, including the highest-ranked studies, appearing in 2021. Figure 10 shows the publication trend from 2010 onward. Most of the articles were published in journals, and the highest number of papers, six, appeared in the journal IEEE Access.

Figure 10: Number of publications per year.

4.2.2. Publication Venues

There are different types of publication venues, including book chapters, conference proceedings, journals, workshop proceedings, and symposium proceedings. The number of publications per venue in our SLR is given in Figure 11 . We have a total of 52 papers after applying the inclusion and exclusion criteria of Section 2 .

Figure 11: Publication venues.

Out of the 52 papers, 43 are published in journals, 8 in conference proceedings, 1 in workshop proceedings, and none in book chapters or symposium proceedings. The most active journals in cloud data security are listed in Table 11 .

Table 11: Top 5 most active journals.

The most active journal is IEEE Access, with 6 published papers. The Journal of Cryptology is the second most active journal in the field of data storage, security, and privacy in cloud computing, with 3 papers, followed by Information Fusion, also with 3 papers. The fourth journal is Information Science, with 2 papers, and the fifth is IEEE Transactions on Knowledge and Data Engineering, also with 2 papers. The most active conferences are given in Table 12 .

Table 12: Top 5 most active conferences.

4.2.3. Number of Citations

The number of citations of a paper is often taken as a rough indicator of its quality and impact: more citations generally suggest higher impact, and fewer citations suggest lower impact. Table 13 shows the most influential authors, and Figure 12 shows the number of citations of all the papers used in this SLR. A few papers have more than 100 citations, which indicates their high quality; these papers are [ 105 , 118 , 124 , 139 ].

Figure 12: Number of citations of the papers.

Table 13: Top 10 most influential authors in data protection in cloud computing.

4.2.4. Author Information

Some authors are particularly active in this area. To identify them, we list the top 10 authors in the field of data protection and privacy in cloud computing, together with their numbers of publications, in Table 13 .

4.3. RQ3: Which Data Protection Technique Provides More Data Protection among all the Techniques?

We answer this question by comparing the techniques against the following five aspects: (i) local proxy overhead, (ii) data accuracy retained, (iii) level of data protection, (iv) transparency, and (v) operations supported.

4.3.1. Comparison of Data Protection Techniques

In this section, we compare all the data protection techniques discussed in this SLR and review which technique provides the most protection. We compare the techniques based on five functionalities: (i) local proxy overhead, (ii) data accuracy retained, (iii) level of data protection, (iv) transparency, and (v) operations supported. Table 14 provides a brief comparison of all the data protection techniques discussed in this SLR. We now discuss these five functionalities one by one in more detail.

  • The overhead on the local proxy for encryption is very high, because the data is encrypted: if the user wants to update the data, the user must first decrypt it, apply the update, and then encrypt it again. This takes a lot of time, and all of the work is performed by the local proxy.
  • Data Splitting
  • The overhead on a local proxy for data splitting is very low. The local proxy overhead remains constant while splitting data into fragments.
  • Anonymization
  • The overhead on a local proxy for anonymization is average because most of the anonymization methods require quasilinear computation in the number of records to generate the anonymized data set. Whenever the anonymized data is generated and stored in the cloud database, then there is no overhead on the local proxy.
  • Homomorphic Encryption
  • The overhead on local proxies for homomorphic encryption is very high because homomorphic encryption involves a large number of mathematical operations. Therefore, there is a lot of overhead on local proxies for homomorphic encryption.
  • Steganography
  • The overhead on the local proxy for steganography is moderate, as the data only has to be concealed inside the cover for secure communication. However, because of the complexity of the operations in the transform domain, the local proxy overhead of transform-domain techniques is higher than that of spatial-domain techniques.
  • Signcryption
  • The overhead on the local proxy for signcryption is higher than for simple encryption, because in signcryption, signing (with hashing) and encryption are performed in a single logical step; this extra signing operation makes the local proxy overhead higher than for simple encryption.
  • The data accuracy level for encryption is very high because data is encrypted by applying some algorithms. The sensitive data is encrypted by the sender, and this data is decrypted by the receiver using a key. This data cannot be read by anyone who does not have the secret key. Therefore, data accuracy is very high for encryption.
  • The data accuracy level for data splitting is average, because the data is present in the form of plaintext fragments, so the CSP can easily access individual fragments. Both encryption and data splitting are reversible methods; hence, the original data can be retrieved easily.
  • The data accuracy level for data anonymization is very low, because anonymization is not reversible: the data is replaced with dummy data and cannot be retrieved back. Therefore, anonymization has a very low level of data accuracy.
  • The data accuracy level for homomorphic encryption is very high because data is encrypted by applying some algorithms.
  • The data accuracy level for steganography is lower than that of the cryptographic techniques, because the data is embedded inside the cover of another medium, and any change to the cover during transmission changes the concealed data. Therefore, it is hard to guarantee a high accuracy level in steganography. The stego image containing the secret data is transmitted over the communication channel, and the receiver extracts the concealed data from the cover; if the cover arrives unchanged, the data is recovered accurately.
  • The data accuracy level for signcryption is also very high, because in signcryption, confidentiality and authentication are achieved. Therefore, we can also verify the identity of the sender.
  • The level of data protection is very high for encryption techniques, because the data is changed into ciphertext, which cannot be understood. Identification of the data is practically impossible without decryption using the secret key, since inverting the encryption without the key is computationally infeasible.
  • The level of data protection for data splitting is lower than that of cryptographic techniques, because the data is split into different fragments that still contain data in its original form. Hence, if an intruder hacks or steals these fragments, the data inside them can easily be read. Therefore, the data protection level is not as high as that of encryption-based methods.
  • The level of data protection for data anonymization is lower than that of cryptographic techniques, because anonymization relies on protecting the quasi-identifiers; if the quasi-identifiers are not protected strongly, there is a chance of reidentification of a person's sensitive data.
  • The level of data protection is very high for homomorphic encryption techniques, because the data is changed into ciphertext, which cannot be understood.
  • The data protection level for steganography is medium, because the data is embedded inside the cover of another medium. The stego image containing the secret data is transmitted over the communication channel, and the data concealed by the sender is extracted from the cover by the receiver. Therefore, the concealment of the data results in secure data transmission.
  • The data protection level for signcryption is also very high, because in signcryption, both confidentiality and authentication are achieved. Therefore, we can also verify the identity of the sender.
  • There is no transparency for the encrypted data, because encryption requires key management: the local proxy needs to keep records of all the keys and manage them.
  • There is no transparency for the data-splitting mechanism, because the data is split into different fragments and the local proxy stores these fragments in different locations; hence, it needs to keep a record of the location of every fragment.
  • Anonymization is fully transparent, because in anonymization, there is no need to keep the record of data storage by the local proxy. In anonymization, data is statistically similar to the original data. Hence, CSP also performs computation and some analysis on the anonymized data.
  • There is no transparency for the homomorphically encrypted data, because in encryption, there is a need for key management. Hence, the local proxy needs to keep the records of all the keys.
  • In steganography, as compared to other data protection techniques, the main aim is to transmit data without letting the attacker know about the data transmission as it is concealed inside the cover of another medium. The data transmission in steganography is fully transparent. No key management is required, and there is no need to keep track of data storage.
  • There is no transparency for the signcrypted data, because in signcryption, there is a need for key management. Hence, the local proxy needs to keep the records of all the keys and also manage all these keys.
  • Only the data storage operation is supported on the encrypted data, because if the user wants to update some encrypted data that are stored on a cloud database, firstly, the user needs to decrypt this data, and then the user performs an update on this data. We cannot perform any modification operation on encrypted data.
  • All of the operations could be performed with data splitting, because the data is present in its original structure. Hence, we can perform data storage, search, data update, and also data computation.
  • In anonymization, there are two types of masking: perturbative masking and nonperturbative masking. If the data is nonperturbatively masked, then we can perform both data storage and search on it; otherwise, we can only perform data storage.
  • On homomorphically encrypted data, data storage and limited computation are supported, since homomorphic encryption permits computations directly on the ciphertext; for any other kind of update, the user first needs to decrypt the data and then apply the change.
  • A stego image only supports the data storage operation, because if the user wants to update the data hidden in a stego image, the user must first extract that data from the stego image before performing any modification.
  • Only the data storage operation is supported on signcrypted data, because if the user wants to update signcrypted data stored in the cloud database, the user first needs to unsigncrypt this data, and only then can the user perform any update on it.

Table 14: Comparison of data protection techniques.

5. Conclusion and Future Work

5.1. RQ4: What Are the Primary Findings, Research Challenges, and Directions for Future Work in the Field of Data Privacy in Cloud Computing?

5.1.1. Conclusion and Research Challenges

In this SLR, we have systematically presented the data privacy techniques related to data storage in cloud computing, and we have compared all the protection techniques with respect to five functionalities: (i) local proxy overhead, (ii) data accuracy retained, (iii) level of data protection, (iv) transparency, and (v) operations supported. We found several research gaps across the techniques of data splitting, anonymization, steganography, encryption, homomorphic encryption, and signcryption.

  • There is a very strong need to develop ad hoc protocols for communicating data-splitting fragments that are stored with different CSPs, and also for communication between the CSPs themselves. Noncryptographic techniques are faster across different CSPs but do not provide enough security; hence, security can be improved by developing such methods for data-splitting techniques.
  • Anonymization techniques work very effectively on small amounts of data but not on big data; hence, there is a research gap in developing anonymization techniques with more efficient performance. Schemes that provide stronger protection of the quasi-identifiers also need to be developed, since current anonymization techniques are still immature.
  • One of the limitations of steganography is that it only defends against a third party who does not know steganography; a third party who does can extract the data in the same way the recipient does. For this reason, encryption is usually combined with steganography, and there is a need to develop steganography techniques that can protect sensitive data from such third parties on their own.
  • There is a need to develop cryptographic techniques that take less time than existing ones to perform search and computation operations on encrypted data. Cryptographic techniques provide high security but low computational utility; it is therefore a research gap to develop techniques that provide both high security and high efficiency.
  • The complexity of homomorphic encryption and decryption is far greater than that of normal encryption and decryption, and it is not applicable to many applications, such as healthcare and time-sensitive applications. Therefore, there is an urgent need to develop such homomorphic encryption schemes that have low complexity and computation cost.
  • Signcryption is used to verify and authenticate users. We can obtain confidentiality and authentication using signcryption, however, the main limitation of signcryption is that the computation cost of the encryption algorithm it uses is very high. Therefore, there is a need to develop signcryption schemes that use encryption algorithms with low computation cost.

Acknowledgments

This research was financially supported by The Analytical Center for the Government of the Russian Federation (Agreement nos. 70-2021- 00143 dd. 01.11.2021, IGK 000000D730321P5Q0002).

Data Availability

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


International Conference on Intelligent Computing

ICIC 2018: Intelligent Computing Methodologies, pp. 575–584

Research on Cloud Storage and Security Strategy of Digital Archives

  • Hua-li Zhang 17 ,
  • Fan Yang 17 ,
  • Hua-yong Yang 17 &
  • Wei Jiang 17  
  • Conference paper
  • First Online: 06 July 2018


Part of the Lecture Notes in Computer Science book series (LNAI, volume 10956)

In the process of enterprise informatization, more and more enterprises are paying attention to the construction and implementation of digital archives systems. At present, digital archives systems rely on local storage and local area network storage, which leads to the isolation of archive information, high construction and management costs, and difficulties in later maintenance. This paper studies and designs a digital archives cloud storage platform, covering the physical architecture of the cloud platform, the architecture and planning of the cloud storage platform, the open interface and service access design, and the cloud storage security strategy. Based on the security requirements of digital archival cloud storage, a hybrid encryption mode combining symmetric and asymmetric encryption is proposed, and a digital archives encryption algorithm based on the RSA and AES encryption algorithms is implemented. The construction of a secure digital archives cloud storage platform can raise awareness of digital archives, reduce the cost of archives management, improve the level of archives management, and ultimately promote the modernization of management.
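The hybrid mode described in the abstract, symmetric AES for the archive payload and asymmetric RSA to protect the AES key, can be sketched roughly as follows using the third-party Python `cryptography` package; the key sizes, helper names, and API are assumptions of this illustration, not the authors' implementation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(recipient_public, archive: bytes) -> tuple[bytes, bytes]:
    """Encrypt the archive with a fresh AES key, then wrap that key with RSA."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(data_key).encrypt(nonce, archive, None)
    wrapped_key = recipient_public.encrypt(data_key, OAEP)
    return wrapped_key, ciphertext

def hybrid_decrypt(recipient_private, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the AES key with RSA, then decrypt the archive payload."""
    data_key = recipient_private.decrypt(wrapped_key, OAEP)
    nonce, body = ciphertext[:12], ciphertext[12:]
    return AESGCM(data_key).decrypt(nonce, body, None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, ct = hybrid_encrypt(recipient.public_key(), b"digital archive record")
assert hybrid_decrypt(recipient, wrapped, ct) == b"digital archive record"
```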

  • Digital archives
  • Cloud storage
  • Security policy

Foundation project: Scientific planning subject of Hubei Provincial Department of Education (2016GB123).



Author information

Authors and Affiliations

City College, Wuhan University of Science and Technology, Wuhan, 430083, China

Hua-li Zhang, Fan Yang, Hua-yong Yang & Wei Jiang


Corresponding author

Correspondence to Hua-li Zhang .

Editor information

Editors and Affiliations

Tongji University, Shanghai, China

De-Shuang Huang

Indian Institute of Technology Madras, Chennai, India

M. Michael Gromiha

Inha University, Incheon, Korea (Republic of)

Kyungsook Han

Liverpool John Moores University, Liverpool, United Kingdom

Abir Hussain


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper.

Zhang, Hl., Yang, F., Yang, Hy., Jiang, W. (2018). Research on Cloud Storage and Security Strategy of Digital Archives. In: Huang, DS., Gromiha, M., Han, K., Hussain, A. (eds) Intelligent Computing Methodologies. ICIC 2018. Lecture Notes in Computer Science(), vol 10956. Springer, Cham. https://doi.org/10.1007/978-3-319-95957-3_60


DOI : https://doi.org/10.1007/978-3-319-95957-3_60

Published : 06 July 2018

Publisher Name : Springer, Cham

Print ISBN : 978-3-319-95956-6

Online ISBN : 978-3-319-95957-3




Cryptopolitan

Leonardo Spearheads European Space Cloud Initiative for Military Applications

Leonardo, a global leader in aerospace, defense, and security, has embarked on a groundbreaking project titled "Military Space Cloud Architecture" (MILSCA), commissioned by the Italian Ministry of Defense.

This initiative, part of the National Military Research Plan, aims to revolutionize space-based computing and storage capabilities for government and national Armed Forces.

Innovative space cloud architecture

The MILSCA project marks a significant milestone for Europe, introducing a space architecture akin to terrestrial cloud systems. This pioneering endeavor envisions a constellation of cyber-secure satellites orbiting Earth, equipped with supercomputers, artificial intelligence (AI), and extensive storage capacity. 

Developed with integrated cyber security measures, the Space Cloud will facilitate swift and flexible information processing and sharing, ensuring enhanced operational efficiency.

Each satellite within the MILSCA constellation will boast remarkable capabilities, including storage capacity exceeding 100 Terabytes and processing power surpassing 250 TFLOPS. Leveraging advanced algorithms powered by AI and machine learning techniques, these satellites can autonomously communicate and exchange data with one another. 

This cyber-secure supercomputer and archive system will grant users access to critical data such as communication, earth observation, and navigation, even in remote areas and under challenging conditions.

Strategic significance

The implementation of a Space Cloud system offers numerous strategic advantages. The project significantly reduces processing times by processing data directly in orbit, enabling real-time information delivery and facilitating multi-domain and multi-national operations. 

Moreover, traditional transmission networks are freed for other communications by offloading data processing to space, enhancing overall network efficiency. Additionally, storing data in orbit is a vital backup, mitigating risks associated with terrestrial centers vulnerable to natural disasters.

Leonardo leads the MILSCA project in collaboration with joint ventures Telespazio and Thales Alenia Space, leveraging their combined expertise in aerospace, telecommunications, and defense. 

The 24-month study encompasses two phases: defining the architecture and developing a digital twin of the satellite with a demonstrator for high-performance computing (HPC) and multi-constellation satellite terminals. 

These phases aim to simulate various application scenarios and validate the feasibility and benefits of the Space Cloud architecture.

Future implications and beyond

The MILSCA project lays the foundation for future endeavors, extending its potential applications beyond defense. Civil Earth observation programs and space exploration missions stand to benefit from the rapid data processing capabilities offered by an in-orbit cloud computing architecture. 

As technological innovation progresses in the aerospace sector, initiatives like MILSCA pave the way for advancements in space-based infrastructure and data management.


An Overview of Data Storage in Cloud Computing


IMAGES

  1. (PDF) Cloud Computing and Security Fundamentals

  2. (PDF) An Analysis of Cloud Security

  3. (PDF) Security Analysis of Cloud Computing

  4. (PDF) An Approach for Data Storage Security in Cloud Computing

  5. (PDF) Cloud Computing Security Issues and Its Challenges: A

  6. (PDF) A Study on Cloud Computing and its Security

VIDEO

  1. Recent Trends in Cloud Storage Security

  2. Ed Casmer, Cloud Storage Security

  3. Demo: Antivirus for Managed File Transfers

  4. Cloud Storage Security 2023 in Review

  5. Trends in Amazon S3 Security

  6. Cloud Security Career, AI, Job Security, Certifications, LayOff with Chandra Shekar from FLIPKART

COMMENTS

  1. A Systematic Literature Review on Cloud Computing Security: Threats and

    We selected 80 papers after a meticulous screening of published works to answer the proposed research questions. The outcomes of this SLR reported seven major security threats to cloud computing services. The results showed that data tampering and leakage were among the highly discussed topics in the chosen literature.

  2. Security and privacy protection in cloud computing ...

    Firstly, we introduce some privacy security risks of cloud computing, propose a comprehensive privacy security protection framework; secondly, we describe the research progress of several technologies, for example, access control, ciphertext policy attribute-based encryption (CP-ABE), key policy attribute-based encryption (KP-ABE), fine-grain, m...

  3. Research on Data Security Technology Based on Cloud Storage

    This paper studies the modeling and analysis methods of some key problems of data security in cloud storage, such as encryption storage, integrity verification, access control, and verification and so on.

  4. Data Security and Privacy Protection for Cloud Storage: A Survey

    In this paper, we make a comprehensive review of the literatures on data security and privacy issues, data encryption technology, and applicable countermeasures in cloud storage system. Specifically, we first make an overview of cloud storage, including definition, classification, architecture and applications.

  5. cloud security Latest Research Papers

    cloud security Latest Research Papers | ScienceGate cloud security Recently Published Documents TOTAL DOCUMENTS 833 (FIVE YEARS 305) H-INDEX 22 (FIVE YEARS 4) Latest Documents Most Cited Documents Contributed Authors Related Sources Related Keywords A Review on AWS - Cloud Computing Technology

  6. A survey on security challenges in cloud computing: issues, threats

    This paper will focus and explore a detailed knowledge about the security challenges that are faced by cloud entities such as cloud service provider, the data owner, and cloud user. Similar content being viewed by others Cloud Computing Security Challenges: A Review Chapter © 2021

  7. Data Security and Privacy in Cloud Computing

    In this paper, we make a comparative research analysis of the existing research work regarding the data security and privacy protection techniques used in the cloud computing. 1. Introduction Cloud computing has been envisioned as the next generation paradigm in computation.

  8. [2402.10985] Leveraging AI Planning For Detecting Cloud Security

    Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration. Alongside their growing popularity, concerns related to their security vulnerabilities leading to data breaches and sophisticated attacks such as ransomware are growing. To address these, first, we propose a generic framework to express relations between different cloud ...

  9. Full article: Secure cloud data storage solution with better data

    2. Review of literature. This section reviews the related literature concerning data outsourcing schemes of cloud computing. In [Citation 14], an identity-based Provable Data Possession (PDP) scheme is presented on the basis of RSA assumption for privacy ensured secure cloud storage.The identity-based homomorphic authenticators are produced by taking the outsourced file and a time-bound global ...

  10. Data Security and Privacy Protection for Cloud Storage: A Survey

    This paper makes a comprehensive review of the literatures on data security and privacy issues, data encryption technology, and applicable countermeasures in cloud storage system, and gives an overview of cloud storage, including definition, classification, architecture and applications. The new development trends including Internet of Things (IoT), smart city, enterprises digital ...

  11. A literature survey of security issues in Cloud, Fog, and Edge IT

    In this paper, cloud, fog and edge computing are conceptualized as part of a modern multi-tiered computing infrastructure as shown in Fig. 1, acting as the data warehousing and processing technologies of modern times, with numerous benefits and challenges alike.

  12. Blockchain for Cloud Storage Security: A Review

    The decentralized and disseminated nature of the blockchain procedure ensure security prerequisites and persuade the improvement of giving security to cloud storage. In this paper, we discuss the concept of blockchain technology and its hot research trends is discussed.

  13. (PDF) Data Security in Cloud Computing

    This paper discusses the security of data in cloud computing. It is a study of data in the cloud and aspects related to it concerning security. The paper will go in to details of data...

  14. (PDF) A Systematic Literature Review on Cloud Computing Security

    Our survey paper identified the blockchain as a partnering technology to alleviate security concerns. The SLR findings reveal some suggestions to be carried out in future works to bring data...

  15. PDF 2023 Cloud Security Report

    Cloud Security Report Introduction The 2023 Cloud Security Report highlights the critical cybersecurity concerns, incidents, and trends as organizations increasingly adopt multi-cloud environments.

  16. Review of Data Storage and Security in Cloud Computing

    6 Conclusion. Data security and storage security in cloud computing were explained in detail in this survey work. The proposed strategies explain the different classifications of data and storage protection. Finally, a comparison table with all existing methods is given, tabulated with advantages and disadvantages.

  17. Adoption of cloud computing as innovation in the organization

    Barriers to Cloud Computing deployment can be observed in the work of Jangjou M et al., 2022 where there is a strong focus on the Cybersecurity risks when adopting Cloud Computing technology in both client and server-side layers of Cloud architecture. 26 These risks include Providing vulnerable APIs to Cloud users, lack of awareness of the ...

  18. (PDF) Cloud Security

    This paper reviews the cloud security issues and concerns , while addressing various key topics like vulnerabilities, threats and mitigations, and cloud models. Discover the world's...

  19. A detailed review of Cloud Security: Issues, Threats & Attacks

    It is an emerging technology that deliver computing services such as online business applications and data storage over the Internet. Implementing cloud enables a distributed working environment, where it reduces expenditure of the organization, provides data, information security and so on. ... This paper discusses about the cloud security ...

  20. The Rise of Cloud Computing: Data Protection, Privacy, and Open

    In this paper, we conduct a systematic literature review (SLR) to illustrate all the data protection techniques that protect sensitive data outsourced over cloud storage. Therefore, the main objective of this research is to synthesize, classify, and identify important studies in the field of study.

  21. Cloud Computing: Security Issues and Research Challenges

    Cloud Computing: Security Issues and Research Challenges. R. Padhy, M. Patra, +1 author. Oracle India Pvt. Published 2011. Computer Science, Engineering. TLDR. This research paper outlines what cloud computing is, the various cloud models and the main security risks and issues that are currently present within the cloud computing industry and ...

  22. Study on Data Security Policy Based on Cloud Storage

    Study on Data Security Policy Based on Cloud Storage Abstract: Along with the growing popularisation of Cloud Computing. Cloud storage technology has been paid more and more attention as an emerging network storage technology which is extended and developed by cloud computing concepts.

  23. Research on Cloud Storage and Security Strategy of Digital ...

    The security of cloud storage in digital archives mainly includes transmission security and storage security. In order to ensure data security, it is necessary to adopt corresponding security policies. In this paper, we use data encryption to achieve secure transmission and secure storage of digital archives.

  24. Leonardo Spearheads European Space Cloud Initiative for Military ...

    Leonardo, a global leader in aerospace, defense, and security, has embarked on a groundbreaking project titled "Military Space Cloud Architecture" (MILSCA), commissioned by the Italian ...

  25. An Overview of Data Storage in Cloud Computing

    Storage on the Cloud makes use of the internet, virtualization, encryption and others technologies to ensure security of data. This paper presents the state of the art from some literature available on Cloud storage. The study was executed by means of review of literature available on Cloud storage.