Deploying OpenStack with just two machines: the MAAS and Juju way

A lot of people have been asking lately about the minimum number of nodes required to set up OpenStack, and there seems to be a lot of buzz around setting up OpenStack with Juju and MAAS. Some would speculate it has something to do with the amazing keynote presentation by Mark Shuttleworth; others would concede it’s just because charms are so damn cool. Whatever the reason, my answer is as follows:

You really want 12 nodes to do OpenStack right, even more for high availability, but at a bare minimum you only need two nodes.

So, naturally, as more people dive in to OpenStack and evaluate how they can use it in their organizations, they jump at the thought “Oh, I have two servers lying around!” and immediately want to know how to achieve such a feat with Juju and MAAS. So I took an evening to do just that with my small cluster, and here I share the process.

This post makes a few assumptions. First, that you have already set up MAAS, installed Juju, and configured Juju to speak to your MAAS environment. Second, that the two-machine allotment means nodes available after setting up MAAS, and that these two nodes are already enlisted in MAAS.

My setup

Before I dive much deeper, let me briefly show my setup.

the rig

I realize the photo is terrible; the Nexus 4 just doesn’t have a super stellar camera compared to other phones on the market. For the purposes of this demo I’m using my home MAAS cluster, which consists of three Intel NUCs, a gigabit switch, a switched PDU, and an old Dell OptiPlex with an extra NIC which acts as the MAAS region controller. All the NUCs have already been enlisted in MAAS and commissioned.

Diving in

Once MAAS and Juju are configured you can go ahead and run juju bootstrap. This will provision one of the MAAS nodes and use it as the orchestration node for your Juju environment. This can take some time, especially if you don’t have the fastpath installer selected. If you get a timeout during your first bootstrap, don’t fret! You can increase the bootstrap timeout in the environments.yaml file with the following directive in your maas definition: bootstrap-timeout: 900. During the video I increase this timeout to 900 seconds in the hopes of eliminating this issue.
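For reference, a minimal sketch of what that looks like in environments.yaml (the environment name maas matches this post; your existing server and credential keys stay as they are):

```yaml
environments:
  maas:
    type: maas
    # ... your existing maas-server, maas-oauth, etc. remain here ...
    # Give slow nodes up to 15 minutes to come up during bootstrap.
    bootstrap-timeout: 900
```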

After you’ve bootstrapped it’s time to get deploying! If you care to use the Juju GUI, now would be the time to deploy it. You can do so by running the following command:

juju deploy --to 0 juju-gui

To avoid having Juju spin up another machine, we tell it to simply place the service on machine 0.

NOTE: the --to flag is crazy dangerous. Not all services can be safely co-located with each other. This is tantamount to “hulk smashing” services and will likely break things. The Juju GUI is designed to live alongside the bootstrap node, so this placement is safe. Running this elsewhere will likely result in bad things. You have been warned.

Now it’s time to get OpenStack going! Run the following commands:

juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 openstack-dashboard
juju deploy --to lxc:0 cinder

To break this down: what you’re doing is deploying the minimum number of components required to support OpenStack, only you’re deploying them to machine 0 (the bootstrap node) in LXC containers. If you don’t know what LXC containers are, they are very lightweight Linux containers (similar to virtual machines, but sharing the host’s kernel) that don’t produce a lot of overhead yet allow you to safely compartmentalize these services. After a few minutes these machines will begin to pop online, but in the meantime we can press on, because Juju waits for nothing!
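If you prefer, the seven deploy commands above can be driven from a single loop. This is just a sketch; the `echo` makes it a dry run that prints each command, so remove it to actually deploy:

```shell
#!/bin/sh
# Dry run: print one 'juju deploy' per service, each targeted at an
# LXC container on machine 0. Remove 'echo' to execute for real.
for charm in mysql keystone nova-cloud-controller glance \
             rabbitmq-server openstack-dashboard cinder; do
  echo juju deploy --to lxc:0 "$charm"
done
```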

The next step is to deploy nova-compute. This is the powerhouse behind OpenStack and is the hypervisor for launching instances. As such, we don’t really want to virtualize it, since KVM (or Xen, etc.) doesn’t work well inside of LXC containers.

juju deploy nova-compute

That’s it. MAAS will allocate the second, and final, node (if you only have two) to nova-compute. Now, while all these machines are popping up and becoming ready, let’s create relations. The magic of Juju, and what it can do, is in creating relations between services. It’s what turns a bunch of scripts into LEGOs for the cloud. You’ll need to run the following commands to create all the relations necessary for the OpenStack components to talk to each other:

juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone
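The seventeen relation commands above can likewise be run from a list. Again a sketch with `echo` as a dry run; remove it to create the relations for real:

```shell
#!/bin/sh
# Dry run: print one 'juju add-relation' per pair.
# Remove 'echo' to execute the commands.
while read -r a b; do
  echo juju add-relation "$a" "$b"
done <<'EOF'
mysql keystone
nova-cloud-controller mysql
nova-cloud-controller rabbitmq-server
nova-cloud-controller glance
nova-cloud-controller keystone
nova-compute nova-cloud-controller
nova-compute mysql
nova-compute rabbitmq-server:amqp
nova-compute glance
glance mysql
glance keystone
glance cinder
mysql cinder
cinder rabbitmq-server
cinder nova-cloud-controller
cinder keystone
openstack-dashboard keystone
EOF
```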

Whew, I know that’s a lot to go through, but OpenStack isn’t a walk in the park. It’s a pretty intricate system with lots of dependencies. The good news is we’re nearly done! No doubt most of the nodes have turned green in the GUI or are marked as “started” in the output of juju status.


One of the last things is configuration for the cloud. Since this is all working against Trusty, we have the latest OpenStack being installed. All that’s left is to configure our admin password in keystone so we can log in to the dashboard.

juju set keystone admin-password="helloworld"

Set the password to whatever you’d like. Once complete, run juju status openstack-dashboard, find the public-address for that unit, load that address in your browser, and navigate to /horizon. Log in with the username admin and the password you set on the command line. You should now be in the Horizon dashboard for OpenStack. Click Admin -> System Panel -> Hypervisors and confirm you have a hypervisor listed.
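To illustrate the URL you end up at: it is just the unit’s public-address with /horizon appended. Here `horizon_url` is a hypothetical helper and `10.0.0.5` a made-up address standing in for your unit’s public-address:

```shell
#!/bin/sh
# Hypothetical helper: append /horizon to a unit's public-address.
horizon_url() {
  printf 'http://%s/horizon\n' "$1"
}
horizon_url 10.0.0.5   # prints http://10.0.0.5/horizon
```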

Congratulations! You’ve created a condensed OpenStack installation.

  • Adam Bauer

    Omg thank god my life is made

  • विशाल गर्ग

    Thanks for the guide. It helped me a lot :)

  • robert tingirica

    what power adapter did you use? How do you power on the nucs? AFAIK they don’t have wake on lan

    • Marco Ceppi

      In MAAS I’m using the AMT power type. I made sure to select NUCs that support the AMT tools.

  • Delair

    Thanks for the amazing video. Can you pleaseeeeeee find a solution for neutron-gateway. I am stuck with that right now.

  • Jacob

    But what’s the point?

    • David Steven-Jennings

      You must not be into devops or ops :)

      • Jacob

        Well, I’m currently running a Windows machine with Node and Mongo. I thought about checking out Ubuntu again, but it seems like a hassle. I don’t think any normal user has the bandwidth that would require DB sharding; also, the master machine seems like it’s still bottlenecking the system. I could be wrong, but that’s why I asked.

        • Marco Ceppi

          This is literally the exact opposite of a hassle with Juju. When you need to cluster mongodb you simply run `juju add-unit mongodb` and it does the clustering for you.

          The bootstrap node is a bottleneck, but in the next release we’ll have HA for the bootstrap node, so you can set up n+ bootstrap nodes that will allow for failover.

  • Damian ONeill

    Hi Marco, first off thanks for the video, very useful. I was wondering, could you add to the blog post / comments the remaining steps to connect to your running VM via SSH?


  • Mark

    I tried to replicate this using KVM+MAAS (not LXC). When I deploy mysql and keystone charms to the same guest, keystone is unable to connect to mysql. I do not have this problem when deploying them to different systems. Did you run into this?

  • Chris

    I got it all running with 3 physical standard i7 machines, ready and tested with OpenStack. The thing is, I have to manually start up the machines each time I deploy a service; my only power type is Wake-on-LAN. Hope someone can help me with that.

  • chris

    I’m running 3 standard physical i7 PCs whose only power option is Wake-on-LAN. I’ve got everything tested and OpenStack deploys via Juju, but I can’t get Wake-on-LAN to work with MAAS. I’ve tested Wake-on-LAN and it works, but how do you tell MAAS to use the Wake-on-LAN configuration?

  • Rafael

    Sir, I’m trying to do something similar to your tutorial, but mine is a local environment, and the tutorial I followed was this one: (I don’t know if that tutorial is local or not). Most of the configuration change was setting the openstack-origin to “cloud:precise-grizzly”. After setting everything up, I tried to log in to Horizon, but all I get is a screen saying:

    Internal Server Error

    An unexpected error occurred while processing your request. Please try your request again.

    Please, could you help me? I have no deep knowledge of Juju or OpenStack, and I have to set up a private cloud with at least one resource provisioned, for a university assignment.

    Regards, Rafael. Thank you.

  • riccardo

    hi, why in juju-gui do I continue to see the services deployed in active status when the virtual nodes are powered off (in vMaaS they are in Ready status)? thanks

  • riccardo

    Hi Marco,
    I’ve followed all the steps you reported in this guide, but the relation between nova-compute and mysql gives me an error:

    ERROR ambiguous relation: “nova-compute mysql” could refer to “nova-compute:nrpe-external-master mysql:nrpe-external-master”; “nova-compute:shared-db mysql:shared-db”

    I had to replace it with this, as suggested by the note:

    $: juju add-relation nova-compute:shared-db mysql:shared-db

    With this the relation is OK, but I can’t open the dashboard in my browser.
    Thanks in advance for your help.

  • Raghu

    We have the following MAAS setup: one node is set up as the server, with both the MAAS cluster and region controllers running on it. We added 2 nodes which are in a private virtual LAN with the server node. We brought the nodes into ‘Ready’ state and installed Juju on the server. Now when we try to run juju bootstrap, it says Attempting to connect to and fails after 10 min with a connection refused error. is one of our nodes in the private VLAN and was already in MAAS.

    My suspicion is: the node is in ‘Ready’ state and hence no OS has been installed on it yet, but Juju is attempting to connect to it. It should obviously be unable to connect, as MAAS collects all the info required from the nodes during PXE boot and then shuts down the machines.

    Juju wants to install an OS on the nodes, but the machines are not up.

    PS: Our power on type is IPMI

    On running juju bootstrap --debug, we see a slew of these messages:
    2014-10-12 02:50:58 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o “StrictHostKeyChecking no” -o “PasswordAuthentication no” -i /root/.juju/ssh/juju_id_rsa -i /root/.ssh/id_rsa ubuntu@slot13.maas /bin/bash

    And after 10mins, it now fails with
    waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist

    • donald

      I faced the same problem. I solved it by SSHing to the MAAS node and doing this:

      1) Create the directory /var/log/juju
      2) Change the ownership of the directory to ubuntu:ubuntu
      3) Create a text file /var/lib/juju/nonce.txt with the content user-admin:bootstrap

      Sorry for my English, hope it helps.

  • Neel Basu

    `juju deploy --to lxc:0 mysql` leaves the deployment pending for hours and I don’t see any quotable entries in `juju debug-log`. I have disabled ufw and still nothing is happening. Did you install `juju-local`?

  • riccardo

    Hi Marco, I tried to follow your guide, but when I try to open Horizon I see just a white page.
    To set this up I used Ubuntu 14.04 LTS with MAAS and 3 VMs (one for juju-gui and the rest for OpenStack, as you described in the guide).
    I also tried disabling ufw and running a ping from the host to the VMs (using IP address and FQDN), and it works perfectly.
    Could you help me, please?

    • riccardo

      I’ve resolved it. I just waited a few minutes and that was it……

  • Justin

    Do you have to have a MAAS box with dual NICs? Do you know how the Orange Box handles this, since I assume none of the NUCs there have dual NICs?

    • Marco Ceppi

      The Orange Box’s node0 (the MAAS master) has two NICs; you have to have two NICs in order for MAAS to manage networking. If you’re virtualizing MAAS, which you can do, you can create a network bridge instead to get around this, but it’s a much more complicated setup.

      • Justin

        Ah thanks. I spent ages trying to figure out how they did it

      • Gregg

        This is the sanest and most straightforward walkthrough on this subject I have yet read.
        I have almost the exact same setup as yourself, however only 2 Intel NUCs and no other spare hardware with which to add a second NIC to follow this procedure.

        Please, is there *any* documentation you can point to in which to set up the alternative scenario (virtualized/kvm MAAS + bare metal juju nodes)?

        I have searched all over the net for a simple guide on how to do this, only to end up in a tangled, tangled web of ovs-vsctl commands and vlans and vnics and bridges and tunnels and diagrams like the one below that I literally looked at until I became lightheaded and keeled over. It *can’t* be this difficult.

  • madejackson

    In fact, you’ve done this with 3 machines, because the MAAS controller is a machine itself. Is it possible to add the MAAS controller as a Juju node?

    • madejackson

      Well… I’m already working on a solution: bootstrap a 2nd Juju environment, “local”, and deploy LXC containers on the local environment :) Then I can add as many compute services as I’d like.

      • Marco Ceppi

        That’s another possibility. You’ll run into issues with nova-compute as I outlined above, but juju has the ability to “add-machine” any arbitrary machine with SSH access. So you could install Ubuntu Cloud on the physical nodes, then run `juju add-machine ubuntu@machine` and it’ll be added to the available pool of machines.

        There are networking issues to take into consideration as LXC machines won’t be addressable from the local provider to the other hardware without setting up NAT or exposing the bridge to your network.

      • Gregg

        Please tell me you were successful with this endeavor and there is a blog post somewhere on the Internet that details the procedure that you used.

        Desperately Seeking Openstack Closure

    • Marco Ceppi

      So, you can virtualize the maas-controller as a KVM on one of the hardware nodes, you’ll just need to setup a network bridge to connect it to the rest of your hardware.

  • Eric

    This is brilliant; I’ll definitely be trying this soon!

  • Dgn

    It doesn’t work for me!! I use MAAS and Juju behind a proxy and I always get this error when I run “juju bootstrap”: ERROR juju.cmd.juju common.go:32 the environment could not be destroyed: gomaasapi: got error back from server: 504 Gateway Time-out
    I set my environments.yaml to use my proxy!!
    I’m really stuck if I can’t use a proxy with Juju

  • Pheakdey

    What is the difference between MaaS and IaaS?

    • Jonathan Dill

      It seems lower level, as it doesn’t automatically provide several things you’d normally expect from an IaaS provider, like storage management, replication, and backups; you’d have to implement all of those yourself. Metal as a Service seems more comparable to something like vCenter. Also, it seems like a poor choice of acronym, as MaaS is also emerging in common use as Monitoring as a Service.

  • viperlogic

    Nice article. All works OK for me bar deploying a charm not on the bootstrapped node. I’ve documented it here and it would be great if you could provide some insight and guidance.

  • Rick Hicksted

    The above guidelines are a great source of info. With the latest bits I was able to get it all working except for the containers, using the latest MAAS on Ubuntu 15.04 (containers stuck waiting for agent completion even though the agent started fine on the container). I even dropped the firewall as suggested in other threads, without luck. This forced me to use 7 NUCs, as my OpenStack charms do not play well in the same root container (0).

    Physical machines (7 Intel NUCs plus 1 NUC as the MAAS server; all model Intel NUC Kit NUC5i7RYH Barebone System BOXNUC5I7RYH: 16GB RAM / 256GB SSD M.2):

    NUC0 MAAS server

    NUC1 mysql/juju GUI
    NUC2 rabbitmq server
    NUC3 glance
    NUC4 cinder
    NUC5 nova cloud controller
    NUC6 nova
    NUC7 horizon

    Did you ever add the neutron config here for networking support?

    Did you use neutron-gateway? How is this possible with a NUC as it only has one Intel 1G NIC?



    • Rick Hicksted

      Update: Got the Neutron gateway up, where VMs can ping each other (Horizon now shows the networking topology/settings).

      NUC0 MAAS server

      NUC1 mysql/juju GUI
      NUC2 rabbitmq server/keystone
      NUC3 glance
      NUC4 cinder
      NUC5 nova cloud controller
      NUC6 nova
      NUC7 horizon

      NUC8 neutron-gateway (required special config for nova cloud controller)