With the amount of industry excitement around the virtualized cloud, it is easy to downplay the importance of directly utilizing bare-metal infrastructure. And yet bare metal is the one thing every experienced operator knows they can rely on when deploying mission-critical services: it represents guaranteed resources, predictable performance, and, best of all, no external interference.
Put very bluntly, bare-metal automation has always been terrible. Making a bare-metal server available instantly and on demand, at the swipe of a credit card, is simply not something any tool aimed to deliver. Instead, hardware vendors invested in building expensive, incompatible and proprietary automation stacks that locked customers into a brand and use case. Third-party automation tools focused on bridging the gap between vendor technologies but unfortunately missed the opportunity to provide the on-demand experience that the virtualized cloud got exactly right. Automating hardware is harder than automating virtualized infrastructure, sure, but it’s not unsolvable; the core of the problem has been a lack of vision.
The first thing that the cloud — actually, let’s properly credit Amazon Web Services for this — got right was API-centric service. AWS’ first product, the S3 object store (see our recent storage post for more), was a runaway success, yet all it really provided was a very simple API to post and retrieve files. It turns out that if you let people use an API to immediately access compute resources they desperately need, they use a ton of it — way more than you might initially predict. And that’s how AWS disrupted the snail’s pace of on-site infrastructure, until then controlled by an incredibly inefficient procure-rack-configure-deploy loop.
It turns out that this terrible loop is still how most physical infrastructure is allocated and consumed today. Even if you can rent dedicated physical servers from your favorite service provider, do they come via an API that lets you instantly configure, deploy, redeploy — and, most importantly, add more systems to your available pool? Most likely not. SoftLayer, the pioneer in this space, was so successful that even IBM identified its potential — and it did not even have the instant on-demand problem completely solved.
Typical bare-metal tooling leaves everything beyond installing the operating system up to the operator, who has to template and configure each individual system. That’s a lot of boring, repetitive work: without serious automation, a dedicated engineer is likely to only get through one or two systems per day.
Canonical partners like Scaleway have identified this gap and are moving fast to provide rich bare-metal functionality which can be entirely API-driven; you get live machines running in minutes. And what’s more, Canonical provides an automation solution with Ubuntu that enables any service provider to deliver a similarly flexible, robust bare-metal service: MAAS, which is used by hundreds of enterprises worldwide, including every Canonical OpenStack customer.
MAAS was created from scratch with one purpose: API-centric bare-metal provisioning. MAAS automates all aspects of hardware provisioning, from detecting a racked machine to deploying a running, custom-configured operating system. It can make managing a row of fully loaded racks feasible for a single individual, because in essence it lets the admin deliver a slice of the available resource directly to the end user, who decides how they want the system configured, what OS to load, and what credentials and automation to set up.
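To make the allocate-then-deploy flow concrete, here is a minimal Python sketch of the two REST calls a provider's portal might issue against the MAAS 2.0 API. The endpoint paths follow the MAAS 2.0 convention (`machines/` with an `op` parameter); the host name, tag, and distro series below are illustrative placeholders, and a real client would additionally sign each request with the operator's OAuth credentials.

```python
# Sketch of the two MAAS API calls behind an on-demand bare-metal
# workflow: allocate a machine from the pool, then deploy an OS onto
# it. The host below is a placeholder; paths follow the MAAS 2.0 REST
# convention, but treat the exact parameters as illustrative.

MAAS_URL = "http://maas.example.com/MAAS/api/2.0"  # placeholder host

def allocate_request(tags=None):
    """Build the request that reserves a machine matching the given tags."""
    params = {"op": "allocate"}
    if tags:
        params["tags"] = ",".join(tags)
    return ("POST", f"{MAAS_URL}/machines/", params)

def deploy_request(system_id, distro_series="jammy", user_data=None):
    """Build the request that installs an OS (plus optional cloud-init
    user data) on the machine identified by system_id."""
    params = {"op": "deploy", "distro_series": distro_series}
    if user_data:
        params["user_data"] = user_data
    return ("POST", f"{MAAS_URL}/machines/{system_id}/", params)

# A provider's portal would sign and send these on behalf of the end
# user, who simply sees a configured machine come online in minutes.
method, url, params = allocate_request(tags=["gpu"])
print(method, url, params["op"])
```

The point of the sketch is the shape of the workflow, not the wire format: the end user expresses intent (tags, OS, credentials) and the API does the rest, which is exactly what makes a single admin sufficient for a row of racks.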
The API-driven aspect of MAAS makes it particularly suited to embedding in wider systems. Companies like NEC have adopted MAAS (featured back in our 14.10 release announcement) as part of their cloud infrastructure, and we are actively working with our integration partners to bring MAAS to service providers that are keen to deliver bare-metal APIs to their end customers.
MAAS can be leveraged in a number of ways by a service provider hosting a lot of server systems:
Bare-metal can form the basis of a differentiated product, a revenue generator that actually delivers something many customers are looking for, but can’t currently buy. Bring your expertise to bear in the final solution, but don’t worry about having to build it all yourself — that’s what Canonical brings to the table.
Canonical has a very successful history of working with service providers to enable them to address new customer demands and expand into new markets. We work with partners worldwide to do last-mile integration of our technologies with existing infrastructure. At World Hosting Days next week in Rust, Germany, three of our most illustrious partners, Fairbanks, StackVelocity and Teuto, will be presenting with us and meeting with hosting companies that are taking the next step in the infrastructure revolution.
Get in touch to join our service provider focused channel programme and build successful clouds together.