I love making things. I love making things so much I will sometimes neglect basic life-sustaining measures in order to squeeze more time into creating. Whether it's building more software or hacking on things in meatspace, creating is just something I can't stop doing. As a result, I'm always looking for ways to ramp up my ability to create. Sometimes that means building a jig or buying a tool; other times it's learning a new programming language or trying a new framework.
I recently started working on such a side project, but what I want to write about is how I built this application. Every time I embark on one of these ventures I hit a similar group of problems. The spin-up time for learning how to install, manage, and configure new software is too long, and once launched, scaling and managing these newly acquired services takes away time I could spend landing code. To really highlight these problems, let me quickly outline what I normally do when I embark on a new software project.
The ways of yesterday
1. Pick the technology. The first thing I do is figure out what technology I want to use, or at least try, in this project. In this case I'm going with Pyramid as the framework and ElasticSearch and Redis as data stores. The software is a simple web service that runs periodic jobs to retrieve, compute, transform, and store data, which is indexed in ElasticSearch and made available via a web interface for users to search.
2. Set up dependencies. This typically involves spinning up either a container/VM on my home machine or allocating a cloud instance somewhere, then installing all my dependencies. In this case: ElasticSearch, Redis, and Python.
3. Write code. This one is fairly obvious: write some code, store it in a DVCS, and cycle on this to create.
4. Launch. This is the exciting part: launch the application! It's also often the hardest and most annoying part of the process. Typically I just replicate everything on a cloud instance or VM somewhere and let people at it.
5. Maintain. For the most part this is fine, but I eventually end up maintaining protoduction stuff. How do you scale this instance now? I just cobbled together a bunch of random help guides, user docs, and blog posts to get where I am. I'm not an expert in ElasticSearch or Redis; I just want to create. Between that and exploration, a lot of time is wasted that could be spent cycling on more code.
I work on Juju for a living, so I'll spare you all the biased comments on how awesome it is (protip: it's fucking amazing) and jump to what it is. Juju is a tool that provides a language for, and an encapsulation of, the concept of orchestration. Who cares? Well, as someone who values what little spare time I have, I care. When I started this project, I knew the concept was going to be a successful one and that scale was going to be an issue, and I wanted to take that into consideration from the start.
This is where Juju comes in. Using the model above, I knew I would have to learn not only how to install ElasticSearch but also how to run it. I wanted to curb that time as much as possible, so instead of spinning up a VM somewhere or a container locally, I did the following:
```shell
juju bootstrap -e local
juju deploy trusty/elasticsearch
juju deploy trusty/redis-master
```
That was the extent of my need to know how to deploy ElasticSearch. Within about two minutes I had both ElasticSearch and Redis running in LXC containers on my laptop, ready to be accessed. From there I simply created a new virtual environment, installed my pip requirements, pointed my Pyramid application at the ElasticSearch and Redis servers, and was on my way writing code.
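For reference, that wiring amounts to a couple of lines in the application's ini file. This is a hypothetical excerpt – the key names (other than `elasticsearch.server`, which the charm hook rewrites later in this post) and the addresses are placeholders for my actual setup:

```ini
; Hypothetical excerpt from production.ini; the addresses are placeholders
; for wherever the local LXC containers ended up.
[app:main]
use = egg:awesome-webservice
elasticsearch.server = 10.0.3.101:9200
redis.server = 10.0.3.102:6379
```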
Now, this is where I'm sure a lot of people are saying:

> There are at least 12 tools that can do that already, why Juju, other than you pimping your warez?
Remember, Juju is a language of orchestration, not a tool for deployment. Under the covers, when I deploy ElasticSearch with Juju, it's using (in this case) Ansible to do the actual installation. What Juju adds is the layer of orchestration on top. I think this is best illustrated by showing how to do point number 4 with Juju.
So, it's time to go live with the application, and to do that I need to wrap my application in a way Juju knows how to communicate with. The deployments I showed earlier all work by encapsulating the deployment logic for a service in a charm. I know, these names are adorable, but keep up for a hot minute – this is where the whole orchestration language comes into play. By language I don't mean it in the traditional sense of a new DSL or a specific language you have to code in.
Charms can be written in any language and can leverage virtually any tool to perform the actual deployment. Charms currently exist in forms not limited to: bash, Python, Ruby, PowerShell, Ansible, Salt, Chef, and the list goes on. I'm looking to get set up as quickly as possible, so I decided to write my charm in bash. This involves invoking a single command and adding a few lines of code that describe, in my language of choice, how to set up the service. Since I'm using Pyramid (and I have a Makefile in my project directory), the code I wrote is almost exactly the same code I'd have run by hand in the example above.
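To make that concrete, here's a sketch of what such a Makefile might look like. The target names (`all`, `start`, `stop`) are the ones the charm hooks later in this post invoke, but the recipe bodies are hypothetical – they assume a virtualenv-based layout and an older Pyramid whose `pserve` still supported `--daemon` and `stop`:

```make
# Hypothetical sketch of the project Makefile the charm hooks rely on.
all:
	virtualenv env
	env/bin/pip install -r requirements.txt

start:
	env/bin/pserve juju.ini --daemon

stop:
	env/bin/pserve juju.ini stop
```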
```shell
juju charm create -t bash awesome-webservice
edit metadata.yaml
edit hooks/install
edit hooks/elasticsearch-relation-changed
edit hooks/redis-relation-changed
edit hooks/start
```
You can read all about the structure of a charm in the docs, but to get to my point about an orchestration language: Juju models events in an orchestrated environment as hooks, which have simple names like "install", "config-changed", "start", and "stop". These all correspond to events that occur during a deployment and are what Juju is all about. Here are snippets of each of the files I edited above:
metadata.yaml

```yaml
name: awesome-webservice
summary: It's still a secret but it's totally awesome
maintainer: Marco Ceppi <email@example.com>
description: |
  I'm telling you it's awesome, but this is where I'd describe more about its awesomeness
provides:
  website:
    interface: http
requires:
  elasticsearch:
    interface: elasticsearch
  redis:
    interface: redis
```
hooks/install

```bash
#!/bin/bash
set -ex

if [ -d /srv/awesome-service ]; then
  rm -rf /srv/awesome-service
fi

git clone https://github.com/marcoceppi/not-telling-my-secret-yet.git /srv/awesome-service
cd /srv/awesome-service
make # This builds my virtualenv and does all the requirement setups
open-port 80
```
hooks/elasticsearch-relation-changed

```bash
#!/bin/bash
set -ex

server=$(relation-get hostname)
port=$(relation-get port)

if [ ! -f /srv/awesome-service/juju.ini ]; then
  cp /srv/awesome-service/production.ini /srv/awesome-service/juju.ini
fi

# Double quotes so the shell expands $server and $port inside the sed expression
sed -i -e "s/elasticsearch.server = .*/elasticsearch.server = $server:$port/" /srv/awesome-service/juju.ini
hooks/start
```
This is probably the most complicated piece of code, because it uses some native Juju invocations.
relation-get is one of the 10+ tools Juju exposes to hooks to let authors automate their infrastructure – another piece of the orchestration language Juju provides. In this case I expect ElasticSearch to advertise two details: hostname and port. From there, I write that information into my application's configuration file and run the start event.
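The config-rewriting half of that hook is plain shell and can be exercised without Juju at all. Here's a standalone sketch, with hard-coded stand-in values where relation-get would supply them, that also shows why the sed expression needs double quotes:

```shell
# Stand-ins for what relation-get would return inside the hook
server=es.internal
port=9200

conf=$(mktemp)
echo 'elasticsearch.server = placeholder:0' > "$conf"

# Double quotes let the shell expand $server and $port before sed sees the
# expression; with single quotes the literal text "$server:$port" would be
# written into the config instead.
sed -i -e "s/elasticsearch.server = .*/elasticsearch.server = $server:$port/" "$conf"

cat "$conf" # elasticsearch.server = es.internal:9200
```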
hooks/start

```bash
#!/bin/bash
set -ex

if [ ! -f /srv/awesome-service/juju.ini ]; then
  exit 0
fi

cd /srv/awesome-service
make stop || true # Try to stop the service if it's already running
make start
```
This is a very simple example of how I converted my service, and the commands I typically ran to set it up, into a working charm.
Well, now you can deploy everything. Here’s how I launched into production:
```shell
juju bootstrap -e azure
juju deploy trusty/elasticsearch
juju deploy trusty/redis-master
juju deploy trusty/awesome-webservice
juju add-relation elasticsearch awesome-webservice
juju add-relation redis-master awesome-webservice
juju expose awesome-webservice
```
Not only have I deployed my entire infrastructure to Azure in a few minutes, but I now have options. Say traffic increases and I need more indexing power and more web application presence. This is both the best and worst thing that can happen: great, because I've become as awesome as predicted; horrifying, because it means I need to throw more infrastructure at the problem. In the past I've either cloned the VM and tried to bake clustering in afterwards – which means scaling all resources at once instead of picking and choosing – or spun up another cloud server and manually installed and configured just the portion I needed.
With Juju this is distilled into just a few commands:
```shell
juju add-unit elasticsearch
juju add-unit awesome-webservice
juju deploy haproxy # or varnish, or squid, or even a charm that uses a cloud's native load balancer
juju add-relation haproxy awesome-webservice
```
Because Juju is this language of orchestration, and I've done the little bit required for my application's charm to speak it, my service can simply grow more servers in response to demand, and I've added another component to my infrastructure without spending time ramping up on how to deploy it. Now that everything is running smoothly, I can download these charms and learn about my infrastructure by reading the actual deployment strategy that built it.
As time moves on, I may want to experiment with my infrastructure or try out other clouds. All of this can still be done with Juju, since it all falls under the orchestration purview. I can export my environment's structure, which produces a very simple YAML representation, and set that up against Amazon, HP Cloud, Joyent, Digital Ocean, or play around with more on my local environment. The same services, the same density, and the same code I run in production I can run anywhere else.
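As a sketch of what that exported YAML might look like for this stack – a hypothetical bundle, since the exact keys vary by Juju version:

```yaml
# Hypothetical export of the environment above; structure varies by Juju version.
services:
  elasticsearch:
    charm: cs:trusty/elasticsearch
    num_units: 2
  redis-master:
    charm: cs:trusty/redis-master
    num_units: 1
  awesome-webservice:
    charm: local:trusty/awesome-webservice
    num_units: 2
    expose: true
relations:
  - [elasticsearch, awesome-webservice]
  - [redis-master, awesome-webservice]
```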
This tool has taken my spin-up time on a new framework or service from days of experimenting down to a few minutes and a few commands. That greatly raises my productivity, and I look forward to launching my web service weeks before I initially expected.