US presidential elections grab the attention of people the world over. Media advertising, on-location campaigning, debates, TV appearances and voter engagement through social media, websites and polling data are all essential to a successful campaign, and all of those initiatives require a robust, agile IT infrastructure. In the most recent US election, the Obama team came up trumps, not only with a winning campaign but also with the much-heralded IT infrastructure that powered it. The heart of that infrastructure? Ubuntu, of course.
The Obama for America campaign’s relatively small IT team relied heavily on data analytics and digital media. This enabled them to combine real-time information from the census, Facebook, research, marketing, internet profiles and social media patterns to create the most accurate profile of a voting population in history. To achieve this level of engagement and data analysis all day, every day, the campaign needed robust IT infrastructure that would not fail or fall over, even at the busiest times of the campaign.
The Obama for America team chose an open, cloud-based IT model. This kept costs down and consistent, even at the most demanding times. The team chose Ubuntu as the operating system, generally running instances on Amazon Web Services. Ubuntu provided a cost-effective OS that is stable and reliable, and that scales easily as required. The technology behind the Obama for America campaign will likely change the face of future government elections, as well as how enterprises run their infrastructures. We expect Ubuntu to continue to be at the heart of those initiatives.
Harper Reed, the CTO behind Obama for America, spoke with Canonical about Project Narwhal, the cloud, Ubuntu, open source and the use of disruptive technologies that supported President Obama’s successful campaign for a second term.
Harper Reed: We started from zero, so it’s an interesting case of scaling applications – we knew the size we would be in general terms, and we went 1,000 miles an hour for 18 months… it was quite a challenge for us. For scale, we had a really large engineering team; at its peak we had 150 tech staff, and at the core there were about 40 engineers focussed on backend engineering. We have 305 separate repositories, of which we used about 250 and around 50 are unused; almost all of those are discrete apps, and only a few are components that would be submodules.
4Gb/s, 10k requests per second, 2,000 nodes, 3 datacenters, 180TB and 8.5 billion requests. Design, deploy, dismantle in 583 days to elect the President. #madops
HR: A very large amount of it was hosted in the cloud. We had a Vertica instance running on physical boxes, and those were the only physical boxes we used, except for the laptops used for programming and setting everything up. We used Ubuntu AWS images and wanted to keep them as default as possible, to make sure we were in a position to quickly light up a new box. We didn’t need to think about it too much.
HR: We didn’t want to reinvent the wheel, so we used the default Amazon image – Ubuntu 12.04 LTS. We weren’t interested in focusing on LTS or a new desktop experience, because of the relatively short amount of time they would be running – just 16 months. We had 100 instances running, but at the max it was a little over 1,000. We may have even got past 1,000, but we tried to scale as horizontally as possible. We didn’t have super good insight into how many instances there were; it was more “is our app up and working?”, and beyond that no one really cared. Which is the magic of cloud, right? You can just do that. When I was at Threadless, it was all physical boxes and it was RHEL – I wanted to kill myself!
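Keeping the images stock makes lighting up a new box a one-liner. As a hedged illustration only – the campaign’s actual tooling isn’t described beyond “default Ubuntu AWS images” – launching a stock Ubuntu 12.04 LTS instance with the AWS CLI might look like this, with the AMI ID, key pair and security group as placeholders:

```shell
# Hypothetical sketch: light up a stock Ubuntu 12.04 LTS box on EC2.
# The AMI ID, key pair and security group are placeholders for your
# own region and account; nothing here is the campaign's actual config.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type m1.large \
  --count 1 \
  --key-name my-keypair \
  --security-group-ids sg-xxxxxxxx
```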
HR: We had 5,000 field officers and so were about as hardcore ops IT as you can get. What we did – and this took a lot of effort and was frustrating – is put the devops people straight into engineering and development. At first the ops folks were like, “we can’t do that because we’re ops people!” But they had to do something, so finally they relented, and from then on we had no issues at all and we just rocked. What made it easy is that we didn’t have any infrastructure locally; if we needed that stuff locally, it was a pain in the ass to get it. Honestly, I think you have to move devops as far outside of regular IT operations as possible and give them to the engineers – devops should be under the arm of engineering. That meant support from the engineering side, and it meant they had resources: a director of engineering, code reviews, and so on. A lot of great opportunity came with that, but it took a while to get there.
HR: We relied on MySQL quite a bit, plus Rails and Flask, which is an awesome framework for Python. We used Puppet and various Puppet-related tools, like the Marionette Collective (MCollective), for RPC and automation. And of course, Ubuntu as part of our AWS cloud infrastructure.
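Most of those 300-odd repositories were small, discrete apps. As a purely illustrative sketch – the route and names below are hypothetical, not the campaign’s code – one such Flask app can be very small indeed:

```python
# Hypothetical sketch of one small, discrete Flask app of the kind the
# campaign ran many of; the endpoint and names are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A trivial liveness endpoint, so monitoring can answer the one
    # question the team cared about: "is our app up and working?"
    return jsonify(status="ok")

# On an instance this would sit behind a WSGI server rather than
# app.run(); either way, a stock Ubuntu image plus Flask installed
# is enough to light it up.
```

Apps this small are cheap to spin up, retire, and scale horizontally, which is exactly the pattern Reed describes.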
HR: We released an open source voter registration app that would allow anyone to register to vote. However, the legal team would not allow us to use a standard open source licence, which meant it effectively wasn’t open, because of the licence restrictions. The lawyers’ first priority was to protect the President!
HR: It looks like the Republicans did use a similar stack to the Democrats. The breakdown they had was blown out of proportion – I don’t think there was a breakdown as such. One of their apps, which was a small app, ended up not working as they thought it would, and I think that was frustrating for them. It happened to us in ‘08. The thing I’m frustrated at when people talk about this stuff is that they talk about the failure of Orca vs Narwhal, but they’re such different things that it doesn’t matter. It is much different from what everyone thought it was.
HR: One of the things that we dealt with a lot, and something that we did aggressively, is that we spent a whole load of time testing and re-testing and re-testing and re-testing. To the point where, in the last couple of months, it was just testing over and over again. That’s something that I think the Republicans did not do. It shows how they were interacting with their tech; they just trusted it.
I definitely don’t trust technology. It’s great, but I also think that if you don’t aggressively understand what you’re doing, you’re just going to blindly jump into it. We have all been around technology, and you know that you have to test over and over and over again. We were making sure that our assumptions worked, our failure modes worked and our scenarios for when things break worked – so we knew, in the middle of election day when something went wrong, exactly how it was going to play out. It didn’t seem that the Republicans had that level of knowledge about their infrastructure.
“We worked through every possible disaster situation,” Reed said. “We did three actual all-day sessions of destroying everything we had built.”
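That “assume it will break” discipline is easy to sketch in miniature. The example below is hypothetical (none of these names are the campaign’s code): deliberately fail a dependency and assert that the caller degrades gracefully, so the failure path is exercised long before election day.

```python
# Hypothetical sketch of failure-scenario testing: break a dependency on
# purpose and verify the fallback behaviour, instead of trusting the
# happy path. All names are illustrative, not the campaign's actual code.

def fetch_voter_profile(backend):
    """Return live data, or a safe cached fallback if the backend is down."""
    try:
        return backend()
    except ConnectionError:
        return {"source": "cache", "profile": None}

def broken_backend():
    # Stand-in for a dead datacenter or an unreachable service.
    raise ConnectionError("simulated outage")

# Exercise the failure path deliberately, so its behaviour is known
# in advance rather than discovered under load.
result = fetch_voter_profile(broken_backend)
assert result == {"source": "cache", "profile": None}
```

Scaled up to whole systems, this is what the all-day “destroy everything” sessions were doing: proving the failure behaviour, not just the happy path.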
HR: I think it’s almost immeasurable. I can’t even imagine what we would have had to do to support owning our own hardware. It’s absolutely fascinating to think about. As a comparison, in 2008/2009 at Threadless [where Reed was previously CTO], we paid around 100K per month for 65 devices that were hosted and managed externally. At OFA, we had 1,000 devices and at the max we paid probably 250K per month. That’s insane if you think about what we were able to accomplish for pennies compared with what we could do just a few years before. The other thing to keep in mind is the flexibility you get – you can barely quantify that flexibility until you have done it both ways. I cannot even imagine buying hardware at this point in time.
HR: What we did, and what everyone else will follow, is use things that aren’t stupid! Honestly it has a lot less to do with open vs proprietary, and more to do with just using the stuff you would use if you were a developer – instead of using a Microsoft product, or some specific programming language determined by your manager, which for the longest time is how things worked.
I think that engineers know what they want – you have to trust those people and they will get you to the right answer. The more opportunities there are to give developers leadership within organisations, the more chance there will be for open source to be used. There are a lot of organisations out there with great software that they don’t need to keep as closed source, so showing success from open source is really important, and the people who need to speak on that are the CIOs and CTOs of those giant organisations. If Facebook’s CIO gets up and says “this is how we open source things”, that speaks louder than a small start-up does. We need people like HP who say it’s OK to do it – because then when I go to, in my case, the campaign manager or the President, you can stand on the shoulders of those people, which I think is the hard thing.
There aren’t a lot of examples of companies open sourcing their core “thing” at large. When I brought it up, the lawyers were fearful because there was just a lot of FUD. What I represent is being disruptive: being comfortable with the people doing that start-up stuff, doing innovative technology and focusing on solving a problem. We did that, and continuing to do that is where innovation will come from.