Container technology has brought about a step change in virtualisation. Organisations implementing containers see considerable opportunities to improve agility, efficiency, speed, and manageability within their IT environments. Containers promise to improve datacenter efficiency and performance without additional investment in hardware or infrastructure.
Traditional hypervisors provide the most common form of virtualisation, and virtual machines running on such hypervisors are pervasive in nearly every datacenter.
Containers offer a new form of virtualisation, providing almost the same level of resource isolation as a traditional hypervisor. However, containers impose lower overhead, with a smaller memory footprint and greater efficiency. This means higher density can be achieved – simply put, you can get more from the same hardware.
Our latest white paper, “For CTOs: the no-nonsense way to accelerate your business with containers”, explains the background and timeline of the development of containers, and the most recent technology that has led to the proliferation of containers on the Linux platform. It explains the differences between process and machine containers, along with the advantages and disadvantages of each. It examines the two main software tools (Docker and LXD) that are used to manipulate containers. Lastly, a glossary at the end of the paper provides a convenient reference point for the technical terms used within the paper.
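To make the process/machine distinction concrete, here is a minimal shell sketch of how each tool is typically driven. It assumes Docker and LXD are installed (each demo is skipped if the tool is absent), and the image tags and the container name "demo" are purely illustrative.

```shell
# Docker drives process containers: typically one application per container,
# which exits when that process exits.
if command -v docker >/dev/null 2>&1; then
  docker run --rm ubuntu:22.04 echo "hello from a process container"
fi

# LXD drives machine containers: a full OS userspace that boots and persists,
# behaving much like a lightweight virtual machine.
if command -v lxc >/dev/null 2>&1; then
  lxc launch ubuntu:22.04 demo
  lxc exec demo -- hostname
  lxc delete --force demo
fi

result="done"
echo "$result"
```

Either way, the container shares the host's kernel; the difference is whether it wraps a single process (Docker) or an entire system image (LXD).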
Canonical, the company behind the Ubuntu operating system, is intimately involved in the development of Linux containers. The LXD machine container project was initiated by Canonical, which employs several of the lead container developers as well as the overall LXD project leader. Further, Canonical has developed Fan networking, a technique that efficiently handles the IP address management and overlay networking challenges created by the proliferation of IP addresses that follows from the increased virtualisation density containers provide.
Canonical’s Ubuntu operating system underpins most container initiatives, including over 70% of Docker containers in use today. Moreover, Microsoft’s Azure Kubernetes Service is built on Ubuntu, and in partnership with Google, Canonical has released its own distribution of Kubernetes. Canonical is an active partner in all the leading container initiatives, including Docker, and is a member of the Cloud Native Computing Foundation.
Containers present a new opportunity for the CTO to reduce cost, increase agility, and move to a more scalable and resilient architecture. However, CTOs must also recognise that some use cases are better suited to containers than others, that grasping this opportunity entails transition costs, and that in some cases this change in infrastructure also brings challenges. These concepts are explained more fully in the white paper.
Ubuntu offers all the training, software infrastructure, tools, services and support you need for your public and private clouds.