Member Musings: Cloud Adoption for Organizations with Traditional IT
By John Power - Founder & CEO of Ostia Solutions
Organizations such as Amazon and Salesforce were ‘born in the Cloud’ and do not share the concerns of organizations with traditional, on premise IT systems. While many traditional organizations are dabbling with the Cloud for applications like email and CRM, few have managed to use the Cloud to augment their core business IT processing, and there are no public examples of on premise systems being migrated to the Cloud. This article explores why this is the case, pares it down to the fundamental issue, and proposes how traditional IT can start to gain the real benefits available in the Cloud.
What is different about Cloud?
There are many views of what is different about the Cloud; in reality, however, the Cloud is an evolution of what came before. There are also some myths, which we will try to dispel here:
- Massive virtualization of hardware capability: large organizations have been doing this on IBM Mainframe systems since the early 1980s.
- Unlimited scalability of systems: this certainly has evolved; however, earlier systems had the capacity to be massively scalable, but this came at a cost.
- The ability to stand up environments very quickly: this capability has also been available for some time but again, at a cost.
So what is different?
- Cost and charging models: Pay as you use (OpEx) compared with large up-front payments (CapEx).
- Self Service: non-technical people can manage the creation of Cloud instances.
- Practically unlimited scalability: a cost effective ability to handle unpredictable loads as demanded by today’s mobile and internet driven world.
From the differences above, it would seem to be a ‘no brainer’; everyone should be racing to the Cloud, right? Well, there is one further difference, and it is the key to Cloud adoption in traditional IT environments: the location of, and access to, their data.
Location of the Infrastructure
So why is this a major issue? Consider that organizations have spent the last 50 years and literally billions of dollars burying their data in silos behind firewalls so that only internally authorised employees could see it. This was done for good reason: to protect the data from unauthorised access and, more a phenomenon of today than of the past, to avoid massive data leakage. There are also growing regulations mandating what organizations can do with this data, with massive fines for organizations in breach of them.
Today these organizations host their data in their own data centres or in facilities managed on their behalf by other large organizations. In all cases, there are key characteristics to this arrangement; the organization…
- knows where their data is ‘at rest’ at all times and can sign off any audit confident of this fact.
- knows that they have backup and disaster recovery procedures in place that have been developed and matured over many years. Again, this is a fact that can be confidently signed off in any audit.
- knows that their data is protected from external networks by multiple layers of security, in many cases including physical disconnect of these systems from the Internet thus ensuring complete confidence that a data breach cannot happen at least via the public Internet.
So consider the alternative of putting this data on a public Cloud:
Despite the efforts of a large number of companies, it is still not always clear where the data will reside. This is changing fast, with many companies now offering to guarantee the location of data, but a fundamental distrust remains. How does one sign off on an audit in this case?
The Cloud offers massive potential for disaster recovery and speedy reinstatement of systems; however, these services have yet to reach the maturity levels available to on premise installations.
Finally, the public Cloud is just that: it is connected to the Internet, which opens up massive potential for a data breach. Looking at this logically, the data centres that host these Cloud environments are among the most secure installations in the world. Coupled with that, they have the most up-to-date security and monitoring processes installed, likely light years ahead of what any on premise installation offers. However, the perception of a threat is still there and will only dissipate with time, as organizations come to understand the real benefits of Cloud offerings.
So how can traditional organizations make use of the Cloud for their traditional core IT without risk of compromising their core principles? The answer is around the testing of their applications.
Application Testing and Service/API Virtualization
Organizations are producing more and more applications to support a growing number of market channels. These applications still connect in some way to back office systems, putting massive pressure on the organization to make back office test systems available that accurately mirror the behaviour of production systems. This is costly in terms of hardware, software licenses and management. Offshore testing presents another problem: connectivity to the usual test systems cannot always be provided due to data regulations, so the solution is often an expensive masking project to enable offshore testing access. Then, when systems are finally delivered, a huge amount of time is wasted on-shore ensuring the systems work together.
The key to all of this is to ensure the back office systems are accessed using well-defined IT Services or APIs. These APIs give newer, external, customer-facing applications access to the existing capabilities of the core IT systems, capabilities that have been built up over time and are reliable. The APIs are stable because they are linked to core IT systems that change infrequently, and this stability is the key that offers a solution to the problem.
Using a technique called Service or API Virtualization, it is possible to mimic the behaviour of these Services or APIs, such that newer applications call the mimicked versions, which can run on commodity hardware and software. To all intents and purposes, the application under test believes and acts as if it is talking to the real Service, so full testing of the application under test can take place in isolation from the real systems.
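As a hedged illustration, the sketch below stands up a tiny virtual Service using only the Python standard library. The endpoint paths and response fields are invented for the example (they are not from any real product or back office API); the point is that an application under test can be pointed at this stub instead of the real system.

```python
# Minimal sketch of service/API virtualization: a stub HTTP endpoint that
# mimics a hypothetical back office "account lookup" Service, so an
# application under test can run with no connection to the real system.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses modelled on the real Service's contract.
# Paths and fields are illustrative assumptions, not a real API.
STUB_RESPONSES = {
    "/accounts/1001": {"accountId": "1001", "status": "ACTIVE", "balance": 250.0},
    "/accounts/1002": {"accountId": "1002", "status": "CLOSED", "balance": 0.0},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = STUB_RESPONSES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging

def start_virtual_service(port=0):
    """Start the stub Service on a background thread; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), VirtualServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In a real Service Virtualization product, the canned responses would typically be recorded from, or generated against, the real Service's interface definition rather than hand-written.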
The second part of this puzzle is that not only can the API be virtualized, but the data supporting the API can be synthetic. This means that valid data is returned by the API, but that data has been generated from the signature of the Service and thus bears no relationship to the real data residing on the on premise systems.
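A minimal sketch of that idea, assuming a hypothetical field-type signature for a customer-lookup Service: each field is filled with data that is valid for its declared type but generated at random, so no production record is ever touched. The field names and types below are illustrative assumptions, not any real Service's schema.

```python
# Sketch: synthesizing schema-valid test data from a Service signature.
# CUSTOMER_SIGNATURE is a hypothetical description of an API response;
# field names and types are assumptions for illustration only.
import random
import string

CUSTOMER_SIGNATURE = {
    "customerId": ("digits", 8),    # fixed-length numeric identifier
    "surname":    ("letters", 10),  # alphabetic string, up to 10 chars
    "balance":    ("decimal", 2),   # monetary amount, 2 decimal places
    "active":     ("boolean", 0),   # simple flag
}

def synthesize_record(signature, rng=None):
    """Return one record of valid but entirely synthetic data."""
    rng = rng or random.Random()
    record = {}
    for field, (kind, size) in signature.items():
        if kind == "digits":
            record[field] = "".join(rng.choices(string.digits, k=size))
        elif kind == "letters":
            length = rng.randint(1, size)
            record[field] = "".join(rng.choices(string.ascii_uppercase, k=length))
        elif kind == "decimal":
            record[field] = round(rng.uniform(0, 10000), size)
        elif kind == "boolean":
            record[field] = rng.choice([True, False])
    return record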
Service and data virtualization offers the following features:
- Creates exact copies of back office, on premise Services/APIs.
- No connectivity required to back office systems.
- Uses valid but synthetic data for Services/APIs.
- No data from on premise systems is used.
Since no connectivity is needed to the back office systems and no data from those systems is used, the blocks to an ideal use of the Cloud by traditional IT organizations are removed.
Service/API Virtualization in the Cloud
Service/API and Data Virtualization in the Cloud offers the following benefits:
- Enables test environments to be stood up in minutes.
- Allows as many test environments as required.
- Avoids interference between testing groups.
- Improves the performance of agile projects by enabling the creation of test Services/APIs that have not yet been developed or completed.
- Avoids data governance issues, as no real data is used.
- Suits offshore or offsite development and testing.
- Avoids load on expensive back office or external systems.
- Creates a fixed and controllable cost of testing.
Organizations can benefit in many ways through the use of service virtualization to reduce risk and improve the efficiency and effectiveness of the development and testing process. This is achieved by the ability to stand up multiple testing environments in minutes while reducing the load on back office systems.
Service Virtualization is a proven technology that provides significant benefits for organizations that adopt, implement and use it. Adoption is fast becoming mainstream and is approaching critical mass. There are a number of suppliers of this type of technology, including Ostia, CA, IBM, HP and Parasoft.