Cloud fight: the migration tool race
ALEX MEEHAN finds that it has never been easier to get to the cloud, but moving between clouds is another matter
13 September 2018
Are you happy your company currently has the best deal going for its cloud activity? What if a better deal came along, could you take advantage of it? That is the question many enterprises have to ask themselves as the cloud market matures and an ever-increasing number of vendors enter the fray.
What was once the perfect partnership with a cloud provider may no longer represent the value it once did, and many companies’ needs change over time, necessitating a rethink of business processes. Surely being able to move provider is a basic requirement of any business arrangement?
Not so fast. Think about how deeply entangled you are. What would the logistics of a move look like, and would your provider be happy to help you pack up your data and move it elsewhere or would you find yourself high and dry? How long would it take you to pull your data back on site and then ship it across the Internet to a new host? Is there a hardware solution, and if so, what does it look like?
Those are the questions many companies are asking themselves and in turn that many providers are keen to answer. Some cloud giants have created massive storage boxes to facilitate wholesale migration to the cloud while others have been working on better tools to help you move between them.
Meanwhile, the development of hybrid and multi-cloud technologies is beginning to make such moves practical.
“In terms of making it easy for customers to move workloads, we understand the trepidation customers have about ‘lock-in,’ especially if you look at how companies have been locked into places with their database providers for a number of years,” said Danilo Poccia, senior evangelist at Amazon Web Services (AWS).
“Those proprietary offerings are expensive with punitive licensing and auditing terms. If you look at the way we build our services, they’re built on a lot of open standards like SQL, Linux, and web APIs. And we provide a number of migration tools for servers, storage, and databases to not only allow customers to move resources from on-premise to AWS easily but also move resources back on-premise if customers so choose.”
According to Poccia, AWS is doing its best to build lasting relationships with customers, and the company's mindset is that it needs to continually earn its customers' business. He also claims that in the 12 years AWS has been in business, very few companies have chosen to leave it.
“While we believe that in the fullness of time, the vast majority of companies will run almost all of their IT workloads in the cloud, it is and always has been a priority for us to make it easy for customers to run AWS as a seamless extension of their existing on-premises infrastructure,” he said.
To facilitate customers that want to move workloads either into AWS – or out of it – the company offers a number of products, tiered in relation to the amount of data that needs to be moved. Its Snowball hardware product is designed to move petabyte-scale workloads, typically at around one fifth the cost of high-speed internet, while Snowmobile is geared towards exabyte-scale data transport, using a secure 40-foot shipping container.
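The economics behind these appliances come down to simple arithmetic: at petabyte scale, even a fast dedicated link takes months to move the data. A rough sketch illustrates the point; the link speeds and utilisation figure below are illustrative assumptions, not AWS numbers.

```python
# Back-of-envelope: moving 1 PB over the wire vs shipping an appliance.
# Link speeds and the 80% sustained-utilisation figure are assumptions.

def transfer_days(data_bytes: float, link_bps: float, utilisation: float = 0.8) -> float:
    """Days needed to push data over a network link at a sustained utilisation."""
    seconds = data_bytes * 8 / (link_bps * utilisation)
    return seconds / 86_400  # seconds per day

PETABYTE = 1e15  # bytes (decimal)

for gbps in (1, 10):
    days = transfer_days(PETABYTE, gbps * 1e9)
    print(f"1 PB over a {gbps} Gbps link: ~{days:.0f} days")
```

On these assumptions a petabyte ties up a 1 Gbps line for roughly four months, and even a 10 Gbps line for nearly two weeks, which is why a shipped appliance with a round trip of about a week wins long before exabyte scale.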
“On the subject of multi-cloud, when we talk to customers, most of them start off believing that they’re going to split their workloads in the cloud relatively evenly among two or three providers. But when they get into the practicality and the rigor of assessing it, very few end up going that route. Most predominantly pick one provider,” said Poccia.
There are a few reasons they do not spread workloads evenly. One is that doing so forces them to standardise on the lowest common denominator, and these platforms are at widely different stages of maturity.
“AWS has so much more functionality than anybody else; a much larger, more mature community of service providers, software developers, software solutions, and systems integrators; and a much more mature platform because we’ve been operating six to seven years longer,” said Poccia.
“Also, it’s a big transition to go from on-premises to the cloud. And if you force teams not only to make that transition but then on top of it to have to be fluent in multiple cloud platforms, it’s tough. Development teams hate it, and it’s pretty wasteful in terms of resources.”
The third is that most cloud providers offer volume discounts, so a company that spreads its workloads around loses buying power.
“So the vast majority of people predominantly pick one infrastructure provider. But, for those that are worried about getting locked in or wanting to make sure if something goes sideways that they have the ability to switch, they will run a small percentage of their workloads with a second provider. This is just so they know they can do it, and they have experience, and they’ve built that relationship — and also for comparison purposes,” said Poccia.
According to Mitesh Chauhan, senior product manager at Interxion, there is a lot of history behind the way the cloud industry has evolved. Many long-term cloud customers originally made decisions based on shifting spend from capital expenditure on on-premises technology to operating expenditure on cloud technology. That looked great at the time, but experience has shown that things are rarely so neat.
“A lot of the movement we see today is happening because of cost optimisation. If you’re tied into a particular cloud provider and, let’s say for example, you’ve got a single line of connectivity, then your choices are narrowed,” he said.
“Reversing out of stuff like that, especially when you become really entrenched, can be very difficult.”
Chauhan advises that companies looking for data centre providers make sure that any potential suitor has the kind of connectivity offering that allows them to de-risk the move into the cloud.
“Can they offer scalable bandwidths, can they offer access to multiple clouds? Let’s not just tie it to hyperscalers, you might want to tie it to niche providers as well. Also, one of the things to definitely look out for is what sort of providers are within these data centres. Do they have a range of suites? Do they have a multi-cloud offering through that platform and, if they do, great,” he said.
“Does the provider have a single way you can administer multiple providers? That in itself is becoming more and more prevalent now. Take, for example, a situation where you have a single pane of glass and want to, say, use a Google access node in Madrid but also access Microsoft’s German cloud, which we know is a bit of an island because it’s built specifically for the German market. Do you have access to those?”
What is the latency likely to be in such a situation, in terms of application performance? If you are going to use multiple clouds, will it be good enough that users do not see delays in application performance?
“Those are some of the considerations to look at,” said Chauhan.
The reality of the market in this area is that companies are looking at different vendors to see what deal suits them best. That is just a fact, and any provider that does not work with that will miss out, according to Paul Shanahan, Microsoft Ireland cloud and enterprise business group lead.
“A competitive landscape creates an environment that benefits customers – it pushes each cloud provider to have better technology, better pricing, and a better service. We’ll always ensure a customer receives a compelling reason to consider Microsoft Azure, but we don’t make migration from other platforms our core priority – instead we focus on workloads that have legitimate, lasting business impact, and we recognise that customers will run multi-cloud.”
“We already see across the market customers adopting a multi-cloud strategy. It’s in their best interest to do so and any cloud vendor would be naive to expect customers to stick with one platform going forward.”
The reality, according to Microsoft, is that huge volumes of data still reside on-premises and it sees its role as one of supporting customers gaining confidence in moving this to the cloud.
Easy and complicated
“Moving workloads can be easy and it can be complicated. Infrastructure-as-a-service and some containerised workloads are relatively easy to lift and shift and for this reason I’d say it will be only a matter of time before we start to see cloud brokers in the marketplace, negotiating terms for customers to leverage underutilised capacity in data centres,” said Shanahan.
“If your workloads are relatively low maintenance, low criticality and easy to manoeuvre then it could present a nice cost saving measure. Platform-as-a-service and business critical services are of course by their nature more complicated.”
Microsoft has developed a set of tools to allow customers to make a qualified assessment of the business and technology benefits of migrating to Azure.
“If you want to know how to migrate from a competitor’s platform or move from a hosted environment we can service that. We’re more than capable of shifting entire datacentres in and out of Azure to meet customers’ needs,” said Shanahan.
“We have a dedicated team of technical and project management professionals in our Customer Success Unit specifically tasked to assist customers in doing this, and we’ve invested in our Fast Data Transfer tool, which can transfer four terabytes of data to Azure in an hour.”
“If we’re talking serious scale we have Azure Data Box which is a physical unit weighing 45 pounds (20kg) and holding 100TB of data. Users order the box from the Azure portal, load it up with their data and then ship it to Microsoft for ingestion into the Azure cloud.”
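The quoted four-terabytes-an-hour figure can be sanity-checked with a quick calculation of the sustained bandwidth it implies. The sketch below assumes decimal units (1 TB = 10^12 bytes), which is an assumption rather than Microsoft's stated definition.

```python
# Sanity check: what sustained bandwidth does "4 TB per hour" imply?
# Assumes decimal units (1 TB = 1e12 bytes).

def implied_gbps(terabytes: float, hours: float) -> float:
    """Sustained link rate in Gbps implied by moving `terabytes` in `hours`."""
    bits = terabytes * 1e12 * 8
    return bits / (hours * 3600) / 1e9

rate = implied_gbps(4, 1)
print(f"4 TB/hour is a sustained rate of ~{rate:.1f} Gbps")
# ~8.9 Gbps, i.e. close to saturating a 10 Gbps link end to end
```

In other words, the claim only holds if the customer has roughly 10 Gbps of clean, uncontended connectivity into Azure, which is exactly the gap a shipped appliance like Data Box is meant to fill.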
Part of the problem for companies looking to operate a multi-cloud environment is that historically when the hyperscalers came into the market, they did their best to capture market share while, understandably, offering tools that were native to their own cloud environments.
“The problem is that this makes it very hard for a company or an enterprise to use say an Amazon cloud along with a secondary provider, say like Microsoft, at the same time,” said Sachin Sony, senior manager for cloud strategy with Equinix.
“The hyperscalers realised that this creates a challenge because while that is okay for non-mission-critical workloads (to be spread out like this), as more and more enterprises start migrating mission-critical workloads to the clouds, resilience becomes a key requirement.”
The theory goes that if cloud providers are unable to address this challenge, enterprises will stay away from the cloud for mission-critical workloads.
“That applies across infrastructure as a service, platform as a service and software as a service. The need to grow their business further has meant the hyperscalers have had to make their environments more compatible to support multi-cloud scenarios, and to ensure that more mission-critical workloads, and more of the data and heavy lifting that comes with big data and IoT, go into the cloud.”
The result is that the market has seen attempts to bring a degree of compatibility between different cloud environments.
“You’re seeing that emerge over the past couple of years where, for instance, you’re now able to run VMware ESXi workloads in an AWS environment. That makes sense because it is now almost mandatory, to a certain extent, for enterprises to have this kind of resilience, and the only way to achieve that is in a multi-cloud environment.”
“So the hyperscalers have had to comply with the requirement and consequently we’re seeing more and more tools come out to ensure that multi-cloud data transfer becomes easier and more hassle-free.”