Dark clouds

The hidden costs of cloud

Priced at cents or less per hour, the cloud seems like the best bargain since penny candy. How can you go wrong?
Image: Christopher Alzati

16 June 2020

Is there anything more seductive than cloud machine price lists? Not many of us are old enough to remember paying a penny for a piece of candy, but cloud users enjoy prices that are even smaller.

Google’s N1 standard machine is priced at $0.0475 per hour, and you can get it for just $0.0100 per hour for your batch processing needs, if you are willing to be pre-empted by more important jobs. The crazy spenders can step up to the high-CPU version for $0.015 per hour, still less than two cents.

Azure charges a minuscule $0.00099 per gigabyte to store data for a month in its archival storage tier. Amazon, though, may offer the most eye-popping low prices, charging an infinitesimal $0.0000002083 per 100 milliseconds for the 128 megabytes of memory that support a Lambda function. (Four digits of precision?)

 

Those tiny numbers throw us off our guard. The medical insurance and real estate bills may be crushing the budget, but when it comes to the cloud, we can enjoy throwing money around like confetti. That is because the prices for many cloud services are literally less than the cost of a piece of confetti.

Then the end of the month comes, and the cloud bill is much larger than anyone expected. How do those fractions of pennies add up so quickly?
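A rough back-of-the-envelope sketch in Python shows how it happens. The per-100-millisecond Lambda rate is the one quoted above; the run time and traffic figures are invented for illustration, not anyone’s real workload.

    # How a price quoted in fractions of a cent becomes a real monthly bill.
    # The rate is the figure quoted above; the traffic numbers are assumptions.
    price_per_100ms = 0.0000002083   # USD, 128 MB Lambda function
    avg_duration_ms = 200            # assumed average run time per invocation
    invocations_per_day = 5_000_000  # assumed traffic

    billed_units = avg_duration_ms / 100               # 100 ms billing increments
    cost_per_invocation = billed_units * price_per_100ms
    monthly_cost = cost_per_invocation * invocations_per_day * 30

    print(f"cost per invocation: ${cost_per_invocation:.10f}")
    print(f"monthly compute bill: ${monthly_cost:,.2f}")
    # roughly $62 a month for compute alone, before request charges, egress and logs

The fractions of a cent never stop being fractions of a cent; they just get multiplied by very large numbers.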

Here are seven dark secrets of how the cloud companies turn fractions of cents into real money.

Hidden extras

Sometimes the showiest numbers are dwarfed by the extras that you do not notice. Amazon’s S3 Glacier has a Deep Archive tier designed for long-term backups that is priced seductively at $0.00099 per gigabyte, which works out to about $1 per terabyte per month. It is easy to imagine setting aside the backup tapes and the hassles for the simplicity of Amazon’s service.

But let us say you want to actually look at that data. If you click through to a second tab on the price sheet, you can see the cost for retrieval is $0.02 per gigabyte. It is 20 times more expensive to look at the data than to store it for a month. If a restaurant used this pricing model, they would charge you $2 for the steak dinner, but $40 for the silverware.

I suppose Amazon’s pricing model makes plenty of sense because it designed the product to support long-term storage, not casual browsing and endless report generation. If we want frequent access, we can pay up for the regular S3 tier. But if the goal is to save on archival storage, we need to understand the secondary costs and plan ahead.
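A quick sketch makes the gap concrete. The two per-gigabyte rates are the ones quoted above; the size of the archive and the slice of it we pull back are assumptions.

    # Deep Archive: storage versus retrieval, using the rates quoted above.
    # The archive size and retrieved fraction are assumptions for illustration.
    storage_price_per_gb_month = 0.00099   # Deep Archive storage
    retrieval_price_per_gb = 0.02          # retrieval rate quoted above

    archive_gb = 50_000                    # assumed 50 TB archive
    retrieved_fraction = 0.10              # assume one audit pulls back 10% of it

    monthly_storage = archive_gb * storage_price_per_gb_month
    one_retrieval = archive_gb * retrieved_fraction * retrieval_price_per_gb

    print(f"storing 50 TB for a month: ${monthly_storage:,.2f}")   # about $49.50
    print(f"retrieving just 10% of it: ${one_retrieval:,.2f}")     # about $100.00

One partial retrieval costs roughly twice as much as keeping the entire archive for a month.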

Location matters

The cloud companies often dazzle us with maps showing data centres around the globe, inviting us to park our workloads wherever we feel most comfortable. The prices, though, are not always the same. Amazon may charge $0.00099 per gigabyte in Ohio but it is $0.002 per gigabyte in Northern California. Is it the warm weather? The proximity to the beach? Or just the cost of real estate?

Alibaba, the Chinese cloud company, clearly wants to encourage developers to use its data centres around the globe. Low-end instances start at just $2.50 per month outside of China but jump to $7 per month in Hong Kong and $15 per month in mainland China.

It is up to us to watch these prices and choose accordingly. We cannot pick data centres just because they seem more convenient or make ideal candidates for an inspection trip.
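Here is what the Ohio-versus-California gap looks like on a concrete, if hypothetical, archive. The per-gigabyte rates are the ones quoted above; the 20 terabyte workload is an assumption.

    # The same bytes priced in two regions, using the rates quoted above.
    # The 20 TB workload size is an assumption.
    archive_gb = 20_000
    price_per_gb = {
        "us-east-2 (Ohio)": 0.00099,
        "us-west-1 (N. California)": 0.002,
    }

    for region, rate in price_per_gb.items():
        print(f"{region}: ${archive_gb * rate:,.2f}/month")
    # Ohio comes to about $19.80 a month; Northern California about $40, double for the same data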

Data transfer costs

The only problem with scrutinising the price lists and moving our workload to the cheapest data centres is that the cloud companies charge for data movement too. If we try to be clever and arbitrage the costs by shifting the bits around the globe searching for the cheapest computation and storage, we can end up with bigger bills for moving the data.

The costs for data flow across the network are surprisingly large. Oh, an occasional gigabyte will not make a difference, but it can be a big mistake to replicate a frequently updated database across the country every millisecond just because some earthquake or hurricane may come along.

Roach motels

The famous ads for one cockroach trap announced, “Roaches check in, but they don’t check out.” You might feel the same way when you look at the cost for data egress. Cloud companies often do not charge you to bring data into the cloud. Would a store charge a customer to walk in the door? But if you try to ship the data out, the bill for egress is infinitely larger.

This can bite anyone, small or large, who watches some content go viral. Suddenly everybody wants to see some meme or video on your server, and as your web server valiantly satisfies all the requests, the meter for egress charges spins faster and faster.
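A rough estimate shows how fast that meter can spin. The egress rate, the clip size and the view count below are all assumptions, roughly in the neighbourhood of published list prices, not anyone’s actual bill.

    # A viral clip and the egress bill it leaves behind.
    # Every number here is an assumption for illustration.
    egress_price_per_gb = 0.09     # assumed internet egress rate, USD per GB
    video_size_mb = 50             # assumed size of the viral clip
    views = 2_000_000              # assumed views during the spike

    egress_gb = views * video_size_mb / 1024
    bill = egress_gb * egress_price_per_gb

    print(f"data served: {egress_gb:,.0f} GB")   # roughly 98,000 GB
    print(f"egress bill: ${bill:,.2f}")          # roughly $8,800 for one lucky meme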

Sunk cost fallacy

There are always moments when the current machine or configuration will struggle to do the job, but if you just increase the size, it will be fine. And it is only an extra few cents per hour. If we are already paying several dollars an hour, another few pennies will not bankrupt us. And the cloud companies are there to help with just a click.

Casinos know the same path to our wallets. We have already come so far – another small payment is nothing. But sharp-pencilled accountants know that the sunk cost fallacy – aka throwing good money after bad – is a big problem for gamblers, managers, and pretty much everyone but small children. The money we have spent is gone. It will not ever come back. New spending, though, is something we can control.

It is a bit different when you are developing software. We often cannot be sure just how much memory or CPU a feature will require. We are going to have to ratchet up the power of the machines some of the time. The real challenge is keeping our eye on the budget and controlling costs along the way. Just blithely adding a bit more CPU here or memory there is the path to a big bill at the end of the month.

Overhead

A cloud machine is not a machine per se, but a slice of a larger physical machine that has been divided into N portions. The slices, though, are not powerful enough to handle the load on their own so we deploy tools like Kubernetes to keep N pieces working together. Why are we slicing a fat box into N pieces just to sew it back together? Why not just have the one fat machine handling one fat load?

Cloud evangelists might say that people who ask impertinent questions like that do not get the benefits of cloud. All the extra layers and extra copies of the OS bring plenty of redundancy and flexibility. We should be grateful that all these instances are booting and shutting down in an elaborate, orchestrated dance.

But the ease of recovery with Kubernetes encourages sloppy programming. A node failure is not a problem because the pod will sail on as Kubernetes replaces the instance. So we pay a bit more for all of the overhead to maintain the extra layers, thankful that we can just start a clean fresh machine without any of the cruft that seems to get in the way.
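A toy comparison shows where the money goes: one fat machine versus the same capacity carved into small nodes, each giving a slice back to the operating system and the cluster agents. Every price and overhead figure here is an assumption, not a quote from any provider’s list.

    import math

    # One big box versus the same capacity sliced into small nodes, where each
    # node reserves a little for the OS, kubelet and logging agents.
    # All prices and overhead figures are assumptions.
    target_vcpus = 64                  # capacity the workload actually needs
    big_machine_price = 2.40           # assumed hourly price of one 64-vCPU VM

    node_vcpus = 4                     # small node size
    node_price = 0.17                  # assumed hourly price per small node
    overhead_vcpus = 0.5               # assumed per-node system reservation

    usable = node_vcpus - overhead_vcpus
    nodes = math.ceil(target_vcpus / usable)      # 19 nodes, not 16
    cluster_price = nodes * node_price

    print(f"nodes needed: {nodes}")
    print(f"one fat machine: ${big_machine_price:.2f}/hour")
    print(f"sliced cluster:  ${cluster_price:.2f}/hour")

The overhead on each slice means we buy more slices, and the flexibility comes with a markup.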

Cloud infinity

In the end, the tricky problem with cloud computing is that its best feature, its seemingly infinite ability to scale up to handle any demand, is also a budgetary minefield. Is each user going to average 10GB of egress or 20GB? Will each server need two gigabytes of RAM or four? When we start up the projects, it is impossible to know.
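A quick sketch shows how wide that uncertainty band can be. Every number below is a guess plugged in for illustration.

    # Low and high guesses for the same launch. All figures are assumptions.
    users = 10_000
    egress_price_per_gb = 0.09        # assumed egress rate, USD per GB
    ram_price_per_gb_hour = 0.005     # assumed memory pricing, USD per GB-hour
    servers = 20
    hours = 730                       # one month

    for egress_gb, ram_gb in [(10, 2), (20, 4)]:
        bill = (users * egress_gb * egress_price_per_gb
                + servers * ram_gb * ram_price_per_gb_hour * hours)
        print(f"{egress_gb} GB/user, {ram_gb} GB RAM/server: ${bill:,.0f}/month")
    # roughly $9,100 in the low case and $18,300 in the high: the same launch, twice the bill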

The old solution of buying a fixed number of servers for a project may start to pinch when demand spikes, but at least the budget costs do not skyrocket. The fans on the servers may whine from all the load and the users may grouse about the slow response, but you are not going to get a panicked call from the accounting team.

We can pencil together estimates but no one will really know. Then the users show up and anything can happen. No one notices when the costs come in lower, but when the meter starts to spin faster and faster, the boss starts to pay attention. The deepest problem is that our bank accounts do not scale like the cloud.

IDG News Service
