Fixing it with Docker
Efficiency is the prime goal of any operation, so let’s take a couple of minutes to talk about how running a compute workload with Docker can be WAY more efficient than traditional server sprawl. Before we get into that, let’s take a brief look at what Docker is, what a containerized workload is, and why you should care.
A trip down memory lane.
Prior to the explosion of virtualization, one application ran on one server. It was hugely inefficient (wasted compute, memory, and disk), so everyone turned to virtualization. Virtualization lets companies pool compute, memory, and disk resources and oversubscribe a single server to anywhere from 4x to 10x its capacity. No more hardware purchases!
In this example, you see traditional physical servers versus a virtualized workload. The shaded red area is the excess capacity. That’s what you’ve paid for that you’re not using.
Magical, right? Well, sort of.
Every sysadmin, network manager (and hopefully CIO) knows about virtual machine sprawl. Because an incremental virtual machine is typically seen as “free” – no additional capital is needed to spin one up – it’s all too easy to turn on additional machines without much resistance. Adding another server becomes the path of least resistance, and you end up in a “one-app-per-VM” situation.
Great, but what’s the challenge now?
We’re now at a new inefficiency horizon: virtual machine inefficiency. Each virtual machine has its own operating system, compute, storage, backup, and networking. An app may be installed in /usr/local/bin on one server and in /opt on another. Maybe there’s a cron job running under some user account. It becomes very difficult to keep track of everything unless you have bulletproof processes and procedures.
The issue compounds itself when you go to the cloud. Each VM costs real money, and those bills add up. Let’s look at an example of a low-spec server on AWS: 2 vCPUs, 4 GB memory, low I/O – a t2.medium instance.
Per month, that virtual machine costs $34.41. Let’s say you have 20 VMs – your monthly bill (excluding disk and file transfer) is in the neighborhood of $688.
Now that you’re paying mini-bar prices, are you really using the capacity you’re paying for? Does each application really need two virtual CPUs, 4 GB of memory, and a low I/O allotment? You’re paying for it – are you using it?
In short, when you shift a virtualized workload to the cloud, you’re now in the SAME situation as you were when you moved from physical to virtual servers. The business model changed, and if you’re not careful, you’ll be paying for capacity you’re not using.
So, enter Docker.
Docker is simple. It takes all the “stuff” that makes up an application and packages it into a single, portable unit called a container image. Run that image and you get a container: everything the application needs in order to run, self-contained, except for the data itself.
The beauty of a Docker container is that the entire application is self-contained and portable, so it runs the same way on any host.
A great example is a web server: there’s a Docker container for the Apache web server, but your actual website data lives in a folder outside the container.
Need to upgrade the web server? No problem – just swap in a newer container. Your data is completely separate. From an operational perspective, you know exactly where your web server is and exactly where the data is.
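As a minimal sketch of what that looks like in practice (assuming Docker is installed, and using /var/www/mysite as a hypothetical path for your site data), serving and upgrading an Apache container might look like this:

```shell
# Run the official Apache (httpd) image, bind-mounting the site data
# from the host so it lives OUTSIDE the container.
docker run -d --name web \
  -p 8080:80 \
  -v /var/www/mysite:/usr/local/apache2/htdocs:ro \
  httpd:2.4

# Upgrade: pull a newer image and replace the container.
# The data in /var/www/mysite is untouched.
docker pull httpd:latest
docker rm -f web
docker run -d --name web \
  -p 8080:80 \
  -v /var/www/mysite:/usr/local/apache2/htdocs:ro \
  httpd:latest
```

The web server and the data never mix: upgrading is just deleting one container and starting another against the same folder.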
Depending on how many resources your application consumes, you can run tens or even hundreds of Docker containers on a single machine. It all depends on the workload intensity.
Let’s go back to our hypothetical example of 20 VMs in AWS. If you were to run those workloads as Docker containers instead, you might be able to fit them all on one machine. Even if you run a larger instance (say an m3.2xlarge, with 8 vCPUs, 30 GB of RAM, and high I/O), you’re only at $389.43 per month.
That’s over a 40% savings on your AWS monthly bill, just by packaging your workload differently.
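Using the prices above, the math works out like this (a quick back-of-the-envelope sketch with awk):

```shell
# 20 t2.medium instances at $34.41/month versus one m3.2xlarge at $389.43/month
vm_bill=$(awk 'BEGIN { printf "%.2f", 20 * 34.41 }')
container_bill=389.43
savings=$(awk -v a="$vm_bill" -v b="$container_bill" \
  'BEGIN { printf "%.1f", (a - b) / a * 100 }')

echo "20 VMs:       \$$vm_bill"        # $688.20
echo "One big host: \$$container_bill" # $389.43
echo "Savings:      $savings%"         # 43.4%
```

The consolidated bill comes out to roughly a 43% reduction – comfortably over the 40% mark.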
And if you outgrow your machine, simply move to a larger instance for the workload you need. Downsizing? No problem – move to a smaller one and reduce your spend.
That’s the beauty of containers. Same workload, same servers, easier management, less server sprawl, less cost. Migrate the containers from one host to another and you’re done.
At 24/7, we’re having this conversation every day with our customers and doing amazingly cool things with their infrastructure. For instance, we can seamlessly extend your on-premise Docker environment right to the cloud, so you can scale an application up and back instantly. Imagine clicking a button to 2x, 4x, or 100x your capacity for only the minutes you need it, and turning it off when you don’t.
That’s the power of a container-based workload!
For more questions about containerized workloads, scaling compute from on-premise to the cloud, and making sure that the cloud doesn’t bankrupt you inadvertently, reach out to me either via LinkedIn or at 24/7 Networks at (303)991-2224 or firstname.lastname@example.org.