For continuity and availability from the cloud, old rules apply




30 June 2016

Despite the sea change for many aspects of traditional ICT from the advent of cloud computing, the major rules and accepted wisdom in continuity and availability still apply in the cloud world.

That was one of the key points from the latest TechFire, in association with SunGard AS, made by Accenture solutions architect Adrian Fitzpatrick.

“Downtime at 04:00 on Saturday is not the same as in the middle of a business day,” Adrian Fitzpatrick, Accenture

Building on that point, Brian Finnegan, solutions architect for SunGard, argued both sides of the popular axiom that there is no cloud, just someone else’s computer. On one side of the argument is the assertion that many of the old rules still apply in the cloud world. On the other is the point that, when troubleshooting in a cloud or hybrid environment, it is often best to think of the structures less as infrastructure and more as code.

When considering availability services and requirements, Fitzpatrick argued that not all downtime is the same, and that both infrastructure and procedures must account for this. Downtime at 04:00 on Saturday is not the same as in the middle of a business day, said Fitzpatrick. Finnegan added that even the famous five nines, 99.999% uptime, is not 100%, and that irrespective of service level agreements, or the past record of a provider, organisations must adequately plan for outages.
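The arithmetic behind those nines is easy to check; a quick sketch in Python shows how little downtime each extra nine allows per year:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Downtime budget per year implied by an availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes_per_year(pct):.1f} min/year")
# Five nines leaves a budget of only about 5.3 minutes per year --
# and that budget says nothing about *when* those minutes fall.
```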

Finnegan also said that when an outage is resolved, organisations should be prepared, in the immediate aftermath, for services to resume at less than 100% performance.

Finnegan said that as services resume, backlogs and cached data may cause high traffic as the systems re-balance themselves and return to normal service. Depending on the amount of data to transfer and the length of the outage, this performance degradation could take some time to resolve, and must be factored into recovery and contingency plans.

Finnegan summed up much of the discussion by saying, “When it comes to high availability and the cloud, it’s all the same — only the terminology has changed.”

A question from the audience concerned visibility around costs, with specific reference to predictability. Finnegan addressed the issue by saying that specific provisions were made during the initial scoping phases to allow users to see exactly what services would cost, based on their requirements and usage. The point was made that, on investigation, cloud services in continuity and availability were not necessarily cheaper than on-premises solutions. The panel acknowledged this in some cases, but argued that the benefits of cloud elasticity, availability and security generally outweighed such concerns, as the cost differences were not substantial. Indeed, Brian Foote and Joseph Yoder of the Department of Computer Science at the University of Illinois were quoted: “if you think good architecture is expensive, try bad architecture”.

A show of hands revealed that while only one person present was already using continuity and availability services from the cloud, around a third of those present said they were actively evaluating such services.

Another audience question asked about continuity and availability services in the light of requirements such as Right to be Forgotten requests. The panel acknowledged that while it is difficult to ensure such requests are thoroughly carried out, provisions exist for them. Fitzpatrick highlighted one database system that encrypts base information and keeps the encryption key in active storage, even when the data encrypted with that key has been moved to the lowest tier of storage, such as tape. When a Right to be Forgotten request is made, the corresponding encryption key can be deleted, ensuring that even if cold media such as tape is restored to active media, the data is inaccessible because its encryption key no longer exists.
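This key-deletion approach is sometimes called crypto-shredding, and the idea can be sketched in a few lines. The code below is a toy illustration using only Python’s standard library, not the database system Fitzpatrick described, and the XOR-keystream cipher here stands in for real encryption; it is not production-grade cryptography:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the key in counter mode.
    # Toy construction for illustration only -- use a vetted cipher in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The key store represents active storage; the ciphertext could sit on tape.
key_store = {"record-42": secrets.token_bytes(32)}
ciphertext = encrypt(key_store["record-42"], b"personal data")

# A Right to be Forgotten request: delete only the key.
del key_store["record-42"]
# Even if the tape copy of `ciphertext` is later restored to active media,
# the plaintext is unrecoverable because the key no longer exists.
```

The design point is that erasing a 32-byte key is fast and verifiable, whereas physically wiping every cold-storage copy of the data itself may be impractical.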

There was a strong focus on security in general, in relation to the usage of continuity and availability services. Again, the panel emphasised that all of the old rules apply, and highlighted the need for security for data at rest and in transit, but overall suggested that data is safer with large providers who have the resources, and the interest, to ensure that customer data is safe and protected.


TechCentral Reporters
