Network Time Protocol (NTP) is one of those oft-ignored but essential subsystems, largely unknown except to the select few who work with it closely.
Fortunately, Ubuntu and other major Linux distributions ship out of the box with a best-practice NTP configuration that works well. A hands-off approach to NTP is often justified, because it mostly “just works” without any special care and attention. However, some environments require tuning the NTP configuration to meet operational requirements.
One such environment is Canonical’s managed OpenStack service, BootStack. A primary service provided in BootStack is the distributed storage system, Ceph. Ceph’s distributed architecture requires the system time on all nodes to be synchronised to within 50 milliseconds of each other. Ordinarily NTP has no problem achieving synchronisation an order of magnitude better than this, but some of our customers run their private clouds in far-flung parts of the world, where reliable Internet bandwidth is limited, and high-quality local time sources are not available. This has sometimes resulted in time offsets larger than Ceph will tolerate.
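Whether a node is inside Ceph's 50 ms tolerance can be checked directly against its NTP peers. Here is a minimal sketch, assuming ntpd's `ntpq` tool is installed; the `check_offsets` helper and the 50 ms threshold are illustrative, not part of any charm (`ntpq -pn` reports each peer's offset, in milliseconds, in the ninth column):

```shell
#!/bin/sh
# Sketch: flag any NTP peer whose measured offset exceeds Ceph's 50 ms
# tolerance. Relies on `ntpq -pn` printing one peer per line, with the
# offset (in milliseconds) in column 9.
check_offsets() {
    awk 'NR > 2 {                       # skip the two header lines
        off = $9
        if (off < 0) off = -off         # compare the absolute offset
        printf "%s offset=%sms %s\n", $1, $9, (off > 50 ? "OVER" : "ok")
    }'
}

# Example against captured output; in real use: ntpq -pn | check_offsets
printf '%s\n' \
  '     remote           refid      st t when poll reach   delay   offset  jitter' \
  '==============================================================================' \
  '*91.189.89.198   17.253.34.125    2 u   32   64  377   25.123    3.214   1.002' \
  '+91.189.91.157   132.163.96.1     2 u   28   64  377  180.402  -72.513  10.330' \
  | check_offsets
# prints:
#   *91.189.89.198 offset=3.214ms ok
#   +91.189.91.157 offset=-72.513ms OVER
```

A peer showing `OVER` here is exactly the situation described above: the node cannot hold the offset Ceph needs from that time source alone.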
A technique for dealing with this problem is to select several local hosts to act as a service stratum between the global NTP pool and the other hosts in the environment. The Juju ntp charms have supported this configuration for some time, and historically in BootStack we’ve achieved this by configuring two NTP services: one containing the manually-selected service stratum hosts, and one for all the remaining hosts.
We select hosts for the service stratum using a combination of factors.
Here’s a diagram depicting what a typical NTP deployment with a manual service stratum might look like.
# Create the two ntp applications:
$ juju deploy cs:ntp ntp-service
    # ntp-service will use the default pools configuration
$ juju deploy cs:ntp ntp-client
$ juju add-relation ntp-service:ntpmaster ntp-client:master
    # ntp-client uses ntp-service as its upstream stratum

# Deploy them to the cloud nodes:
$ juju add-relation infra-node ntp-service
    # deploys ntp-service to the existing infra-node service
$ juju add-relation compute-node ntp-client
    # deploys ntp-client to the existing compute-node service
It’s been my desire for some time to see this process made easier, more accurate, and less manual. Our customers come to us wanting their private clouds to “just work”, and we can’t expect them to provide the ideal environment for Ceph.
One of my co-workers, Stuart Bishop, started me thinking with this quote:
[O]ne of the original goals of charms [was to] encode best practice so software can be deployed by non-experts.
That seemed like a worthy goal, so I set out to update the ntp charm to automate the service stratum host selection process.
My goals for this update to the charm were to make service stratum selection easier, more accurate, and less manual.
All this means that you can deploy a single ntp charm across a large number of OpenStack hosts, and be confident that the most appropriate hosts will be selected as the NTP service stratum.
This updated ntp charm has been tested successfully with production customer workloads. It’s available now in the charm store. Those interested in the details of the code change can review the merge proposal – if you’d like to test and comment on your experiences with this feature, that would be the best place to do so.
Here’s how to deploy it:
# Create a single ntp application:
$ juju deploy cs:ntp ntp
    # the ntp service still uses the default pools configuration
$ juju config ntp auto_peers=true

# Deploy to existing nodes:
$ juju add-relation infra-node ntp
$ juju add-relation compute-node ntp
You can see an abbreviated example of the juju status output for the above deployment at http://pastebin.ubuntu.com/25901069/.
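One way to spot which units auto_peers promoted into the service stratum is by their stratum number: units syncing directly upstream report a lower stratum than the clients peering off them. A rough sketch, assuming ntpd on the units; the unit name and the `stratum_of` helper are illustrative, not part of the charm:

```shell
#!/bin/sh
# Query a unit's NTP stratum via juju, e.g. (unit name is an example):
#   juju run --unit ntp/0 'ntpq -c "rv 0 stratum"'
# ntpq answers with a line like: stratum=2

# Illustrative helper to pull the number out of that answer:
stratum_of() {
    sed -n 's/.*stratum=\([0-9][0-9]*\).*/\1/p'
}

echo 'stratum=2' | stratum_of   # prints: 2
```

Repeating the query across units would show the selected service-stratum hosts clustered at the lower stratum, with everything else one level below them.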