The past few weeks have been very busy in the snapd world, involving not only coding but also face-to-face meetups to discuss and detail what is to come.
If you’re interested in what has been going on and what comes next, please read on.
2.26 went stable!
It has been quite a while since we had a stable release, the previous one being a minor release in the 2.24 series. The delays were mainly due to edge cases we found when reverting such updates back to previous releases of the core snap. We take that sort of regression very seriously because some of our systems are entirely based on snaps, and we can’t afford a release that cannot be reverted if issues are detected in the wild.
But that’s behind us now. 2.26 went public, a couple of fast minor releases fixed reported issues, and we’re now working towards 2.27.
Part of the release pain was in getting seccomp to work reliably when moving the core snap both forward and backwards in time. Part of the solution was to serialize seccomp filters to disk in the final binary form that is handed to the kernel, as that format is much more stable and has barely changed in the past decade.
This work is already available in the released 2.26 snapd.
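To illustrate why that binary form is attractive, here is a minimal Go sketch. This is not snapd’s actual code (the `sockFilter` type and `serialize` helper are hypothetical names); it just shows classic-BPF instructions, the format seccomp filters compile down to, being written out in a fixed little-endian layout:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// sockFilter mirrors the kernel's struct sock_filter: one classic
// BPF instruction, the form seccomp filters take when handed to
// the kernel.
type sockFilter struct {
	Code uint16 // operation
	Jt   uint8  // jump-if-true offset
	Jf   uint8  // jump-if-false offset
	K    uint32 // operand
}

// serialize writes a compiled program as fixed-width little-endian
// records, so a file written today can be read back unchanged by an
// older or newer snapd.
func serialize(prog []sockFilter) ([]byte, error) {
	var buf bytes.Buffer
	for _, insn := range prog {
		if err := binary.Write(&buf, binary.LittleEndian, insn); err != nil {
			return nil, err
		}
	}
	return buf.Bytes(), nil
}

func main() {
	// A trivial two-instruction program: load a word, then return
	// SECCOMP_RET_ALLOW (0x7fff0000) to allow everything.
	prog := []sockFilter{
		{Code: 0x20, K: 0},          // BPF_LD | BPF_W | BPF_ABS, offset 0
		{Code: 0x06, K: 0x7fff0000}, // BPF_RET | BPF_K, SECCOMP_RET_ALLOW
	}
	data, _ := serialize(prog)
	fmt.Println(len(data)) // 8 bytes per instruction, so 16
}
```

Because each instruction is a fixed eight bytes with no version-dependent framing, the on-disk representation stays valid across refreshes and reverts of the core snap.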
Development sprint in London
We were all very happy to have a development-focused sprint after going a very long time without one. It was great to get the whole team together, and small sprints like these are good for shaping up features in detail. We were also lucky to have some prominent members of the community on board, which helped ensure the discussions addressed everybody’s needs.
We covered quite a lot of ground, as may be observed in the raw notes from the sprint. Those notes now need to be fleshed out so that everybody can get a better idea of what they mean, and also so that we ourselves don’t forget the conversations behind those agreements. I expect that to surface as posts here in the forum over the coming weeks.
Participation in product sprint
Some of us also joined Canonical’s wider product sprint, taking the chance to share news from our end and to see how we might help other areas of the company make better use of snaps. Being together was also a chance to dive into some of the upcoming plans.
Base snaps
One of the major upcoming changes, and the first thing discussed during the development sprint, was base snaps. We detailed much of the desired inner workings, and plan to start working on them in the coming weeks.
Layouts
Layouts, another of the most exciting upcoming features, will enable arbitrary filesystem mappings inside the snap namespace. This feature was also discussed and detailed at the London sprint, and implementation is about to start.
Epochs (stepped upgrades)
The development sprint also covered the implementation of epochs in great detail, and this is one of the exciting features we’ll be working on soon. They enable a much more controlled, and thus more stable, way to update snaps over longer time spans, by forcing refreshes to pass through specific milestone revisions.
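To make the stepping idea concrete, here is a hedged Go sketch. The `revision` type, the `nextRefresh` helper, and the one-epoch-forward rule are illustrative assumptions for this post, not snapd’s implementation:

```go
package main

import "fmt"

// revision pairs a snap revision number with the epoch it declares.
type revision struct {
	Number int
	Epoch  int
}

// nextRefresh picks a refresh target under a stepped-upgrade rule:
// a device on epoch e may only move to a revision declaring epoch e
// or e+1, so long jumps are forced through each epoch's milestone
// revision instead of skipping straight to the newest revision.
func nextRefresh(current int, available []revision) (revision, bool) {
	var best revision
	found := false
	for _, r := range available {
		if r.Epoch < current || r.Epoch > current+1 {
			continue // not reachable in a single step
		}
		if !found || r.Number > best.Number {
			best, found = r, true
		}
	}
	return best, found
}

func main() {
	avail := []revision{{10, 0}, {20, 1}, {30, 2}}
	// A device still on epoch 0 steps to revision 20 (epoch 1),
	// not directly to revision 30 (epoch 2).
	r, _ := nextRefresh(0, avail)
	fmt.Println(r.Number) // 20
}
```

A second refresh from the milestone revision (now on epoch 1) would then be allowed to reach revision 30, which is what makes long-dormant devices update safely in stages.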
Interfaces in classic snaps
During the product sprint we quickly covered the idea of unblocking interfaces in classic snaps, and that spawned a good conversation here in the forum. Looks like this will be moving forward.
Tab completion
The tricky bits of tab completion are all in place; the only missing piece is the final integration into the shell, which must be done by the packaging system. We’re trying to get that into 2.27.
Snaps in lxd
During the product sprint we brought back the topic of snaps not working inside lxd due to the difficulty of mounting squashfs filesystems there. The work to leverage fuse for this has been in place for a long time, but we never went the final mile of enabling systems to just use it, because they often lack squashfuse support.
We have a preliminary agreement to incorporate the tiny squashfuse implementation inside snapd proper, under the name snapfuse, so that such systems will be able to mount snaps out of the box.
Connections in gadget
During the product sprint we also got requests from teams working with manufacturers of pure-snap devices for the ability to specify, via the gadget snap, interface connections to be established on the device. We already had this on our roadmap, and will bump the priority of that work so it gets done sooner.
Repair capability (emergency fixes)
The repair functionality continued to see a significant amount of attention, and at this point most of its implementation is either in place or pending in the review queue. We want to surface the functionality in the coming weeks.
Device statistics
We continue to discuss and detail aspects of device statistics. The background of those conversations is that relevant functionality, such as health checks, canarying of releases, and coarse-grained statistics offered to snap publishers, requires knowing whether a particular snap installed successfully on a device.
Snapshots and backups
This topic came up again in the product sprint, as some people were concerned about losing data due to snap removals. We don’t want to simply change that behavior, since snaps cleaning up after themselves on removal is a nice property, so there’s a chance those conversations will speed up the design and implementation of snapshots and backups, which would solve the underlying problem without changing the behavior.
Controlling services
This is about the improvements that allow snapd to control services via the API and the command line, as detailed in this topic. Work on it continued; it’s now pretty much complete and in review. There’s a good chance it will land in 2.28.
Scheduling of refreshes
Plenty of discussion happened around this topic, and we got to the point of agreeing on monthly-based scheduling. In other words, the system may be configured to update within one or more time windows in the month (the 1st and 3rd Tuesday between 4am and 6am, for example).
The implementation is still pending. We have had some initial syntax discussions, but the syntax will likely still change so that the upcoming timer feature and the refresh schedule can share it.
Core configure hook in Go
The configure hook in the core snap itself is being rewritten from shell into Go, like the rest of snapd. This brings more stability and makes the hook easier to test.
A “classic” interface
This idea came up during the product sprint, and it seems like a winner. It consists of creating an interface that would serve as a stepping stone between classic snaps and fully confined snaps. The main difference from classic snaps is that the snap would see its own filesystem as the core snap (or the base snap, once that change lands). It would still have access to the host system, and would still require acknowledgement of its use via the --classic parameter, but to the snap itself it would otherwise look more like a strict snap, helping the transition towards strict confinement.
User and group mapping
In the development sprint we managed to take the early ideas from the prior sprint and get them into a more concrete plan that might turn into action soon, and in the following product sprint we finally managed to close the loop with the security team in terms of these plans.
The quick summary is that we’ll soon see the ability to have custom users and groups inside snaps, with privilege dropping and all. We don’t yet have the work scheduled, but we hope to get started soon.
Install and remove hooks
These new hooks are in place, and they do what one would expect: they notify the snap that it is being installed or removed. More of these lifecycle hooks are likely to appear soon.
Interface hooks
The more advanced work to support multi-stage interface hooks continues to make progress, and we’re hoping to have it finalized in the coming weeks. As a reminder, this is what enables the snap carrying the plug and the snap carrying the slot to communicate dynamic data to each other, and to take dynamic actions as a consequence of the connection being established.
snapctl outside of hooks
This work, previously reported as in progress, is complete and has landed. The core idea is being able to use snapctl at any point during the snap’s lifetime, not only inside hooks, which means being able to consult and change snap configuration settings, for example, but really anything else the snapctl command offers. That set of commands is growing quickly, since this is the main mechanism through which snaps communicate with the snapd daemon.
Testing infrastructure woes
We had some bumps with our virtual machine provider, later attributed to an image that got corrupted. This set us back a bit during the period, since we exercise every PR before merging and our testing machinery was highly unreliable. We thought we’d have to migrate the infrastructure to another provider, but things seem to have settled down since the problem was found.
Kernel and gadget asset updates
During the development sprint, we finalized the design that will enable kernel and gadget snaps to be updated independently, while properly handling cases that depend on synchronization, such as when the gadget declares a need for binary device trees shipped with the kernel.
This work is not yet scheduled, but we know what we want to do when we have a slot in the development team.
… and much more!
There are many other topics on that list, but this report is sufficiently long already. If you’re curious, please have a look at some of the whiteboard pictures from the development sprint mentioned above. They are mostly action notes about what is to come, so certainly worth a read.