
DEVELOPMENT OF A MODEL FOR USER-CENTRIC CYBER DISASTER RECOVERY




ABSTRACT

In a world of interdependent economies and online transactions, a large volume of data is hosted in cyberspace on a daily basis. Cyber threats and attacks are steadily increasing. Most of the time, these threats and attacks are targeted at service providers, but service users are greatly affected by them due to their vulnerability level. When a disaster knocks down the infrastructure of a single service provider, it has ripple effects on thousands of innocent service users. Therefore, service users need more than ever to prepare for major crises targeted at their service providers. To cope with this trend, every service user requires an independent business continuity plan (IBCP) or disaster recovery plan (DRP) and a data backup policy that falls within their cost constraints while achieving the target recovery requirements in terms of recovery time objective (RTO) and recovery point objective (RPO). The aim of this research work is to develop a user-centric cyber disaster recovery mechanism that enables service users to independently develop and implement data backup policies that best suit their remote databases. The system developed is highly compatible with MySQL, MSSQL and Oracle databases. With this system, service users have the liberty to independently define and implement their private backup plans and disaster recovery policies, and also to configure their remote databases by selecting the entities to be backed up and, for each entity selected, the backup frequency. The system creates backup files for the remote databases in accordance with these configuration settings. The backup files are encrypted to protect their contents from man-in-the-middle (MITM) attacks, and compressed to enhance their transmission across networks.
A combination of the Dynamic System Development Methodology (DSDM) and the Object-Oriented Analysis and Design Methodology (OOADM) is used to design the system, while Java Enterprise Edition (JEE) is used to develop it. The system is well tested, and the results obtained compare favourably with those of some well-known systems.

CHAPTER ONE

INTRODUCTION

1.1                Background of the Study

In the world of technology, change is the only constant factor. The change is so dynamic that what was considered in one era to be an obsolete technology can become the current trend in another. Computing in the past three decades witnessed a tremendous change in the way and manner data were processed, which resulted in the emergence of distributed systems against the pre-existing centralized systems. It is interesting to see that presently, computing is going back to some sort of centralization under the brand name Cloud computing (Westerlund and Kratzke, 2018).

The term Cloud first appeared in the early 1990s in the communication world. It was a time when telecom providers introduced Virtual Private Network (VPN) services for data communication. Virtual Private Network services could maintain the same bandwidth as fixed networks at significantly lower cost. These networks (the VPN and the fixed network) supported dynamic routing, which enhanced balanced utilization across the network and increased bandwidth efficiency. This led to the coining of the term telecom cloud. Cloud computing has a similar premise in that it provides a virtual computing environment which is dynamically allocated to meet the needs of the user.

Cloud computing is a computing paradigm that employs internet technologies to provide scalable and elastic computing infrastructure (hardware, software, processing, and storage) as a service to external customers (Mishra, Mohapatra, Mishra, and Sahoo, 2018). It relies on both internet and virtualization technologies: the former provides client access to the cloud, while the latter offers each subscriber one or more individual virtual instances. With virtualization technology, several virtual servers can be hosted by each physical server (Zhao, Amagasaki, Iida, Kuga, and Sueyoshi, 2017).

Prior to the emergence of cloud computing, supercomputers were used in specific areas like the military, government agencies, universities and research laboratories to handle enormously complex calculations. Cloud computing therefore aims at further diversifying the use of supercomputers by applying their power to solving problems that require complex computational resources (Buyya and Son, 2018). With an Internet connection, users are granted immediate access to a large number of the world's most sophisticated supercomputers, together with their corresponding processing power, interconnected at diverse locations around the world.

In the past, Information Technology (IT) resources and applications were provided as products which were sold or licensed from vendors to users and then exploited locally on local computers. Cloud computing brings about a shift in this paradigm: instead of purchasing hardware or software, a user purchases remote access to them via the Internet. Service Providers (SPs) invoice Service Users (SUs) either on a utility basis (pay-as-you-go, as with electricity, water, and telecommunications) or on a subscription basis. That is why cloud computing is viewed as a business model which delivers IT resources and applications as services rather than products, accessible remotely rather than locally (Shovon, Roy, Sharma, and Whaiduzzaman, 2018). The evolution of cloud services has enabled entities to do more with fewer resources and better operating efficiency. This has many tangible benefits for business; however, there are inherent security risks that must be evaluated, addressed, and resolved before business owners will have confidence in completely outsourcing their IT requirements to service providers. IT companies rank security, performance, data availability and the difficulty of bringing data back in-house (data backup) as the top factors inhibiting them from adopting cloud services (Jangra and Bala, 2012). Their findings are presented in Figure 1.1.

Figure 1.1 shows that security, performance, data availability and the difficulty of bringing data back in-house (i.e. data backup) are the most challenging issues in cloud computing. These hiccups explain why many business owners and some government agencies are yet to trust and utilize the immense benefits of cloud computing. Many enterprises which have planned to migrate to the cloud prefer using it for less sensitive data and storing important data within the enterprise boundary (Jangra and Bala, 2012). It is important to note, however, that no matter how careful you are with your personal data, by subscribing to cloud services you give up control to an external source. The distance between you and the physical location of your data creates a barrier. It may also create more room for a third party to access your information without your knowledge or approval. With this, regular backup of your private data becomes very difficult. Inadequate data backups and improper data syncing are what have made many businesses vulnerable to ransomware, a specific type of cloud security threat (Stergiou, Psannis, Kim and Gupta, 2018). Ransomware operates by locking away a company's data in encrypted files, only allowing access to the data once a ransom has been paid. With appropriate data backup solutions, companies need no longer fall prey to these threats.

Thus, despite the overwhelming benefits available in cloud computing, two major challenges that are yet to be completely handled are the issues of security and of flexibility in disaster recovery policies (Jangra and Bala, 2012). The cloud is often seen as valuable to individuals with malicious intent, like terrorists and hackers, due to the large volume of information hosted on it. There is a lot of personal information and potentially secure data that people store on their computers, and this information is now being transferred to the cloud. This makes it critical for cloud users to understand the security measures that cloud service providers have in place. Disaster recovery, as an aspect of the security package, is one of the major factors service users take into consideration when subscribing for online hosting. What encryption methods do the providers have in place? What methods of protection do they have in place for the actual hardware on which your data will be stored?

Presently, most cloud service users depend on the security and backup policies provided for them by their service providers (Rachana and Guruprasad, 2014). With the emergence of cloud computing as a preferred technology for outsourcing IT operations, security issues inherent in the hosting model have assumed greater significance. Inherent in the concept of cloud computing are the risks associated with entrusting confidential and sensitive data to third parties or service providers (SPs). In spite of the several advantages that cloud computing offers, there are several concerns and issues which need to be solved before the ubiquitous adoption of this computing paradigm. First, in cloud computing, the user may not have full control over his/her data. Secondly, cloud users risk losing data by having it locked into proprietary formats, and may lose control over their data since the tools for monitoring who is using it, or who can view it, are not always provided to service users (Garg, Thakral, Nalwa and Choudhury, 2018). Data loss is, therefore, a potentially real risk in some specific deployments. Thirdly, it may not be easy to tailor service-level agreements (SLAs) to the specific needs of a business. Compensation for downtime may be inadequate, and SLAs are unlikely to cover the concomitant damages.

1.2              Statement of the Problem

Studies in the literature have shown that a lot of disaster recovery models and data backup policies are currently available. However, service users do not have control over these models and policies, since they always reside on the premises of service providers (Thomas, 2018). This explains why service users constantly depend on service providers for backup policies and disaster recovery plans. This dependency inherent in available disaster recovery solutions makes service users face the following challenges:

  1. Difficulty in bringing their data back to a location of their choice once hosted (Jangra and Bala, 2012).
  2. Inability to configure their own backup policy.
  3. Storage services provided by one service provider may be incompatible with those of another; as such, it becomes very difficult for service users to transfer their applications from one service provider to another in the face of disaster without losing chunks of their sensitive data (e.g. Microsoft cloud is incompatible with Google cloud) (Basu et al., 2018; Popović and Hocenski, 2010).

This work, therefore, seeks to assess these problems and to offer a simple, concise and direct solution by developing a User-Centric Cyber Disaster Recovery Model that places critical backup and disaster recovery decision-making in the hands of Service Users. Affordable, efficient, and scalable, cloud computing is still the best solution for most businesses, but it leaves Service Users vulnerable if the proper precautions are not taken (Shovon et al., 2018).

1.3              Aim and Objectives of the Study

The aim of this research is to develop a Model for User-Centric Cyber Disaster Recovery. The objectives of the research are to:

  1. develop a prototype to implement the model;
  2. grant Service Users the privilege to define and implement their independent Disaster Recovery Plan (DRP) and data backup policy;
  3. automatically generate compressed and encrypted backup files in accordance with the backup policy;
  4. grant Service Users the privilege to download the compressed and encrypted backup files to any location of their choice within their premises;
  5. provide an interface to decompress and decrypt the backup files to generate pure Structured Query Language (SQL) scripts to facilitate the recovery process in the event of a cyber disaster.
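The thesis does not reproduce its source code here, so the sketch below, in the Java platform named in the abstract, illustrates how objectives (iii) and (v) fit together: a SQL dump is GZIP-compressed for transmission and AES-encrypted against eavesdropping, then decrypted and decompressed back to plain SQL during recovery. The class and method names, and the simplified key handling and cipher mode, are illustrative assumptions, not the actual system.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BackupPipeline {

    // Compress the SQL script with GZIP to shrink it for network transfer.
    static byte[] compress(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }

    // Encrypt or decrypt with AES so an eavesdropper (MITM) cannot read the backup.
    // Default AES mode is used for brevity; a production system would use
    // AES/GCM with a random IV and proper key management.
    static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        String sqlDump = "INSERT INTO students (id, name) VALUES (1, 'Ada');";
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Backup side: compress, then encrypt.
        byte[] packed = crypt(Cipher.ENCRYPT_MODE, key,
                compress(sqlDump.getBytes(StandardCharsets.UTF_8)));

        // Recovery side: decrypt, then decompress back to plain SQL.
        String restored = new String(
                decompress(crypt(Cipher.DECRYPT_MODE, key, packed)),
                StandardCharsets.UTF_8);

        System.out.println(restored.equals(sqlDump)); // prints true: round trip succeeds
    }
}
```

Compressing before encrypting is deliberate: encrypted bytes look random and compress poorly, so reversing the two steps would largely defeat objective (iii).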

1.4              Significance of the Study

This study is important in a number of ways. ICT units of government agencies, private-sector organisations and public institutions will find the outcome of this research work useful, because it will help them easily configure the backup policies of their remote databases in accordance with their specific needs. With this, database administrators of any organisation can be sure that the most important entities in their remote databases are being backed up at their chosen frequencies, and that their backup files can be downloaded to any location or device of their choice at any time, even when their service provider is suffering from any form of service downtime as a result of a cyber disaster.

The implementation of this system will grant Service Users the flexibility to migrate from one Service Provider to another without losing their sensitive data, as copies of their remote database instances are always within their premises. In the event of a disaster hitting their Service Providers, or even a prolonged disruption of services, Service Users can easily change to other Service Providers in less time without loss of sensitive data.

The work, when completed, shall be very useful in many governmental and non-governmental institutions around the globe where application hosting in cyberspace has become prevalent. In Nigeria, for instance, it will be useful for the Military and all Para-Military agencies to deploy this solution to enable them to bring their operational data, which are currently hosted by Service Providers in other countries, back to their premises. All Universities, Polytechnics and Colleges of Education in Nigeria have one portal or the other with which the institutions are managed. Currently, all of these portals have their operational data in the custody of Service Providers. The findings of this research shall be useful to such institutions, as they will help them to have access to their operational data and a base to run back to in the event of a disaster. The findings of this research shall also be very useful to all commercial banks, examination bodies, and small and medium scale businesses all over the globe.

The work shall also be very useful to Data Center Operators all over the globe, as it will provide a gateway for backup files from one Service Provider to be ported to the infrastructure of other Service Providers, thereby providing a handy solution to the issue of incompatibility in the storage services deployed by various Service Providers.

1.5              Scope of the Study

This study is limited to the quality and timely backup of databases, which is a critical aspect of a disaster recovery system. The scope of the study is therefore summarized as follows:

The system so modeled can work with the three most commonly adopted Relational Database Management Systems (RDBMS) in cyberspace: Oracle, MySQL and MSSQL.

Because the model aims to grant service users absolute freedom in terms of disaster recovery policy, the new system generates encrypted backup files for users in accordance with their configurations; users can log in and download these backup files to their local systems at any point in time.

Also, because it would be difficult to obtain the sensitive login details of the corporate websites of most organisations, the system is mostly tested on local systems and on an online portal (https://www.naitesmkd.org), an educational portal designed and hosted by the researcher.

1.6              Limitations of the Study

There is no 100% guarantee that all online applications can be backed up with this system, since some online applications run on Database Management Systems (DBMS) other than the ones covered by the system. Also, databases containing pictures and video clips cannot be backed up and encrypted by the system.

1.7              Definition of Terms

The following terms are used throughout this research work.

Backup Replication: Backup replication is the frequent electronic copying of database schemas and data from a database on one computer or server to another location, for storage and possible recovery in the event of a disaster (Zou and Jahanian, 1999).

Catastrophe: This can occur as a result of the occurrence of a disaster. Catastrophes may be avoided by using disaster avoidance mechanisms (Sánchez and Goldberg, 2003).

Cold Backup Site: In a cold backup site, data is often only replicated on a periodic basis, leading to a high Recovery Point Objective (RPO) of hours or days. In addition, servers to run the application after failure are not readily available, and there may be a delay of hours or days as hardware is brought out of storage or re-purposed from test and development systems, resulting in a high Recovery Time Objective (RTO). It can be difficult to support business continuity with cold backup sites, but they are a very low-cost option for applications that do not require strong protection or availability guarantees (Wood et al., 2010).

Cyber: This is used in referring to the Internet (Kumar, Raghavan, Rajagopalan, and Tomkins, 1999).

Cyberspace: The entire region of Internet coverage in the world.

Disaster: A disaster is an event that creates an inability for an organization to provide essential services (Raphael, 1986).

Disaster Recovery (DR): This is the area of security planning that deals with protecting an organization from the effects of significant negative events such as disasters.

Disaster Recovery Plan (DRP): This is sometimes referred to as a Business Continuity Plan (BCP) or Business Process Contingency Plan (BPCP); it describes how an organization is to deal with potential disasters. Just as a disaster is an event that makes the continuation of normal functions impossible, a disaster recovery plan consists of the precautions taken so that the effects of a disaster will be minimized and the organization will be able to either maintain or quickly resume mission-critical functions (Wallace and Webber, 2017). Typically, disaster recovery planning involves an analysis of business processes and continuity needs; it may also include a significant focus on disaster prevention.

Disaster Recovery Plan Manager (DRPM): The individual or individuals assigned to oversee the creation, implementation, testing, periodic review and distribution of the DRP (Smith, Martin, and Wenger, 2018).

Hot Backup Site: A hot backup site typically provides a set of mirrored standby servers that are always available to run the application once a disaster occurs, providing minimal RTO and RPO. Hot standby typically uses synchronous replication to prevent any data loss due to a disaster (Wood et al., 2010).

Recovery: Recovery pertains to the immediate reinstatement of an organization’s essential services after a natural or man-made disaster or other emergency situations.

Recovery Point Objective (RPO): This is the time (relative to the disaster) to which you plan to recover your data (Chatterjee, Mahalingam, Jayaraman, and Maliakal, 2016).

Recovery Time Objective (RTO): This is the maximum tolerable length of time that a computer, system, network, or application can be down after a failure or disaster occurs (Chatterjee et al., 2016).
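As an illustration of how these two objectives interact in a periodic backup scheme, the worst-case RPO equals the backup interval itself: a disaster striking just before the next scheduled backup loses one full interval of data. The helper below is a hypothetical sketch in Java (its names are not from the thesis system) expressing that relationship and the check that a recovery met both targets.

```java
import java.time.Duration;

public class RecoveryObjectives {

    // Worst-case RPO of periodic backups is the backup interval itself:
    // a disaster just before the next backup loses one full interval of data.
    static Duration worstCaseRpo(Duration backupInterval) {
        return backupInterval;
    }

    // A recovery meets its targets when the measured data loss is within
    // the RPO and the measured downtime is within the RTO.
    static boolean meetsObjectives(Duration measuredDataLoss, Duration rpo,
                                   Duration measuredDowntime, Duration rto) {
        return measuredDataLoss.compareTo(rpo) <= 0
            && measuredDowntime.compareTo(rto) <= 0;
    }

    public static void main(String[] args) {
        Duration nightly = Duration.ofHours(24);
        System.out.println(worstCaseRpo(nightly)); // prints PT24H

        // Lost 20h of data against a 24h RPO; down 90min against a 2h RTO.
        System.out.println(meetsObjectives(
                Duration.ofHours(20), nightly,
                Duration.ofMinutes(90), Duration.ofHours(2))); // prints true
    }
}
```

This is why the model described in this work lets users choose a backup frequency per entity: tightening the interval on critical entities directly lowers their worst-case RPO.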

Service Users (SU): These are businesses or individuals that subscribe to the services on the internet (Subashini and Kavitha, 2011).

Service Provider (SP): A service provider is a company that offers some component of cloud computing typically Infrastructure as a Service (IaaS), Software as a Service (SaaS) or Platform as a Service (PaaS) to other businesses or individuals (Subashini and Kavitha, 2011).

Threat: A threat is a potential attack that may lead to a misuse of information or resources (Mirkovic and Reiher, 2004).

Traditional Security: This can be defined as the measure taken to ensure the safety and material existence of data and personnel against theft, espionage, sabotage, or harm (Tow, Thakur, and Hyun, 2000).

Transmission Control Protocol/Internet Protocol (TCP/IP): This is an Internet protocol that operates as a thin layer. It controls data transmission in packets and routes the data to its destination (Feit, 1998).

Unified Modeling Language (UML): A system design tool or language that utilizes design models (drawings), independent of any programming language, which can be implemented when developing information systems (Booch, 2005).

Universal Resources Locator (URL): It is a unique website or web page address (Bahrs, Lillie, and Van Horn, 2006).

Vulnerability: Vulnerability refers to the flaws in a system that allows an attack to be successful (Mirkovic and Reiher, 2004).

Warm Backup Site: A warm backup site may keep state up to date with either synchronous or asynchronous replication schemes depending on the necessary RPO. Stand-by servers to run the application after failure are available, but are only kept in a warm state where it may take minutes to bring them online (Wood et al., 2010).

