Container technology represents a step-change in virtualisation. Organisations implementing containers see considerable opportunities to improve agility, efficiency, speed, and manageability within their IT environments. Containers promise to improve datacenter efficiency and performance without additional investment in hardware or infrastructure.
Traditional hypervisors provide the most common form of virtualisation, and virtual machines running on such hypervisors are pervasive in nearly every datacenter.
Containers offer a new form of virtualisation, providing almost equivalent levels of resource isolation to a traditional hypervisor. However, containers have lower overhead: a smaller memory footprint and more efficient use of system resources. This means higher density can be achieved – simply put, you can run more workloads on the same hardware.
Our latest white paper, For CTOs: the no-nonsense way to accelerate your business with containers, explains the background and timeline of the development of containers, and the recent technology that has led to the proliferation of containers on the Linux platform. It explains the differences between process and machine containers, along with the advantages and disadvantages of each. It examines the two main software tools (Docker and LXD) used to manipulate containers. Lastly, a glossary at the end of the paper provides a convenient reference point for the technical terms used within the paper.
Canonical, the company behind the Ubuntu operating system, is intimately involved in the development of Linux containers. LXD, the machine container manager, is a Canonical-initiated project, and Canonical employs several of the lead container developers as well as the overall LXD project leader. Further, Canonical has developed fan networking, a technique that efficiently handles the IP address management and overlay networking challenges created by the proliferation of IP addresses that accompanies the higher virtualisation density containers provide.
Canonical’s Ubuntu operating system underpins most container initiatives, including over 70% of Docker containers in use today. Moreover, Microsoft’s Azure Kubernetes Service is built on Ubuntu, and in partnership with Google, Canonical has released its own distribution of Kubernetes. Canonical is an active partner in all the leading container initiatives, including Docker, and is a member of the Cloud Native Computing Foundation.
Containers present a new opportunity for the CTO to reduce cost, increase agility, and move to a more scalable and resilient architecture. However, CTOs must also recognise that some use cases are better suited to containers than others, that grasping this opportunity involves transition costs, and that in some cases this change in infrastructure brings its own challenges. These concepts are explained more fully in the white paper.
Ubuntu offers all the training, software infrastructure, tools, services and support you need for your public and private clouds.