Deploying OpenStack with just two machines. The MAAS and Juju way.

A lot of people have been asking lately about the minimum number of nodes required to set up OpenStack, and there seems to be a lot of buzz around setting up OpenStack with Juju and MAAS. Some would speculate it has something to do with the amazing keynote presentation by Mark Shuttleworth; others would concede it’s just because charms are so damn cool. Whatever the reason, my answer is as follows:

You really want 12 nodes to do OpenStack right, even more for high availability, but at a bare minimum you only need two nodes.

So, naturally, as more people dive into OpenStack and evaluate how they can use it in their organizations, they jump at the thought “Oh, I have two servers lying around!” and immediately want to know how to achieve such a feat with Juju and MAAS. So, I took an evening to do exactly that with my small cluster and share the process.

This post makes a few assumptions. First, that you have already set up MAAS, installed Juju, and configured Juju to speak to your MAAS environment. Second, that the two-machine allotment refers to nodes available after MAAS itself is set up, and that these two nodes are already enlisted in MAAS.
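
If you want a quick sanity check that Juju is pointed at the right environment before going any further, Juju 1.x lets you select an environment by name; here I'm assuming the environment is named maas in your environments.yaml:

juju switch maas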

My setup

Before I dive much deeper, let me briefly show my setup.

the rig

I realize the photo is terrible; the Nexus 4 just doesn’t have a super stellar camera compared to other phones on the market. For the purposes of this demo I’m using my home MAAS cluster, which consists of three Intel NUCs, a gigabit switch, a switched PDU, and an old Dell Optiplex with an extra NIC which acts as the MAAS region controller. All the NUCs have been enlisted in MAAS and commissioned already.

Diving in

Once MAAS and Juju are configured you can go ahead and run juju bootstrap. This will provision one of the MAAS nodes and use it as the orchestration node for your Juju environment. This can take some time, especially if you don’t have the fast-path installer selected. If you get a timeout during your first bootstrap, don’t fret! You can increase the bootstrap timeout in the environments.yaml file with the following directive in your maas definition: bootstrap-timeout: 900. During the video I increase this timeout to 900 seconds in the hopes of eliminating the issue.
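
For reference, here is a rough sketch of what that maas definition in ~/.juju/environments.yaml might look like on Juju 1.x; the server address and OAuth key below are placeholders, not values from my setup:

environments:
  maas:
    type: maas
    maas-server: 'http://<your-maas-server>/MAAS/'
    maas-oauth: '<your-maas-api-key>'
    admin-secret: '<some-secret>'
    default-series: trusty
    bootstrap-timeout: 900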

After you’ve bootstrapped, it’s time to get deploying! If you care to use the Juju GUI, now would be the time to deploy it. You can do so by running the following command:

juju deploy --to 0 juju-gui

To avoid having Juju spin up another machine, we tell it to simply place the service on machine 0.

NOTE: the --to flag is crazy dangerous. Not all services can be safely co-located with each other. This is tantamount to “hulk smashing” services and will likely break things. Juju GUI is designed to co-exist with the bootstrap node, so this particular placement is safe. Running this elsewhere will likely result in bad things. You have been warned.

Now it’s time to get OpenStack going! Run the following commands:

juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 openstack-dashboard
juju deploy --to lxc:0 cinder

To break this down, what you’re doing is deploying the minimum number of components required to support OpenStack, only you’re deploying them to machine 0 (the bootstrap node) in LXC containers. If you don’t know what LXC containers are, they are very lightweight Linux containers (think of them as extremely lightweight virtual machines) that don’t produce a lot of overhead but allow you to safely compartmentalize these services. So, after a few minutes these machines will begin to pop online, but in the meantime we can press on because Juju waits for nothing!
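
If you want to peek at the containers while they come up, you can do so from the bootstrap node; assuming a Trusty node with the LXC 1.x userspace tools installed, something like this should work:

juju ssh 0 'sudo lxc-ls --fancy'    # list the containers on machine 0 with their state and IPs
watch juju status                   # or just keep an eye on Juju's view of things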

The next step is to deploy the nova-compute node. This is the powerhouse behind OpenStack and is the hypervisor for launching instances. As such, we don’t really want to virtualize it, as KVM (or Xen, etc.) doesn’t work well inside of LXC containers.

juju deploy nova-compute

That’s it. MAAS will allocate the second (and, if you only have two, final) node to nova-compute. Now, while all these machines are popping up and becoming ready, let’s create relations. The magic of Juju, and what it can do, is in creating relations between services. It’s what turns a bunch of scripts into LEGOs for the cloud. You’ll need to run the following commands to create all the relations necessary for the OpenStack components to talk to each other (see the note after the list if any of them come back as ambiguous):

juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone
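
Depending on the charm revisions you end up with, one or two of these relations may come back as an ambiguous-relation error (a commenter below hits exactly this with nova-compute and mysql). If that happens, just name the endpoints explicitly, for example:

juju add-relation nova-compute:shared-db mysql:shared-db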

Whew, I know that’s a lot to go through, but OpenStack isn’t a walk in the park. It’s a pretty intricate system with lots of dependencies. The good news is we’re nearly done! No doubt most of the units have turned green in the GUI or are marked as “started” in the output of juju status.

One of the last things is configuration for the cloud. Since this is all running against Trusty, we get the latest OpenStack installed. All that’s left is to configure our admin password in Keystone so we can log in to the dashboard.

juju set keystone admin-password="helloworld"

Set the password to whatever you’d like. Once complete, run juju status openstack-dashboard, find the public-address for that unit, load that address in your browser, and navigate to /horizon. (For example, if the public-address was 10.0.1.2 you would go to http://10.0.1.2/horizon.) Log in with the username admin and the password you set on the command line. You should now be in the Horizon dashboard for OpenStack. Click on Admin -> System Panel -> Hypervisors and confirm you have a hypervisor listed.
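
If you don’t feel like scanning the full status output for the address, a rough one-liner (assuming the default YAML output of juju status on Juju 1.x) is:

juju status openstack-dashboard | grep public-address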

Congratulations! You’ve created a condensed OpenStack installation.

  • Adam Bauer

    Omg thank god my life is made

  • विशाल गर्ग

    Thanks for the guide. It helped me a lot 🙂

  • robert tingirica

    what power adapter did you use? How do you power on the NUCs? AFAIK they don’t have Wake-on-LAN

    • In MAAS I’m using the AMT power type. I made sure to select NUCs that support the AMT tools.

  • Delair

    Thanks for the amazing video. Can you pleaseeeeeee find a solution for neutron-gateway. I am stuck with that right now.

  • Jacob

    But what’s the point?

    • You must not be into devops or ops 🙂

      • Jacob

        Well, I’m currently running a Windows machine with Node and Mongo. I thought about checking out Ubuntu again but it seems like a hassle. I don’t think any normal user has the bandwidth that would require DB sharding; also the master machine seems like it’s still bottlenecking the system. I could be wrong, but that’s why I asked.

        • This is literally the exact opposite of a hassle with Juju. When you need to cluster MongoDB you simply run `juju add-unit mongodb` and it does the clustering for you.

          The bootstrap node is a bottleneck, but in the next release we’ll have HA for the bootstrap node, so you can set up n+ bootstrap nodes that will allow for failover.

  • Damian ONeill

    Hi Marco, first off thanks for the video, very useful. I was wondering, could you add to the blog post / comments the remaining steps to connect to your running VM via SSH?

    Thanks.

  • Mark

    I tried to replicate this using KVM+MAAS (not LXC). When I deploy mysql and keystone charms to the same guest, keystone is unable to connect to mysql. I do not have this problem when deploying them to different systems. Did you run into this?

  • I got it all running with 3 physical standard i7 machines, ready and tested with OpenStack. The thing is, I have to start up the machines manually each time I deploy a service; my only power type is Wake-on-LAN. Hope someone can help me with that.

  • Hi
    I’m running 3 standard physical i7 PCs whose only power option is Wake-on-LAN. I’ve got everything tested and deployed OpenStack via Juju, but can’t get Wake-on-LAN to work with MAAS. I’ve tested Wake-on-LAN and it works, but how do you tell MAAS to use the Wake-on-LAN configuration?

  • Rafael

    Sir, I’m trying to do something similar to your tutorial, but mine is a local environment and the tutorial I followed was this one: https://www.youtube.com/watch?v=V2H3fat0K5w (I don’t know if this tutorial is local or not). Most of the configuration used was to change the openstack-origin to “cloud:precise-grizzly”. After setting everything up, I tried to log in to Horizon, but all I get is a screen saying:

    Internal Server Error

    An unexpected error occurred while processing your request. Please try your request again.

    Please, could you help me? I have no deep knowledge of Juju nor OpenStack, and I have to set up a private cloud with at least 1 resource provisioning, for a university assignment.

    Att, Rafael. Thank you.

  • riccardo

    hi, why in juju-gui do I continue to see the services deployed in active status when the virtual nodes are powered off (in vMAAS they are in ready status)? thanks

  • riccardo

    Hi Marco,
    I’ve followed all the steps you reported in this guide but the relation between nova-compute and mysql gives me an error:

    ERROR ambiguous relation: “nova-compute mysql” could refer to “nova-compute:nrpe-external-master mysql:nrpe-external-master”; “nova-compute:shared-db mysql:shared-db”

    I had to replace it with the following, as suggested by the note:

    $: juju add-relation nova-compute:shared-db mysql:shared-db

    with this the relation is OK, but I can’t open the dashboard in my browser.
    thanks in advance for your help

  • Raghu

    We have the following MAAS setup: one node is set up as the server with both the MAAS cluster and region controllers running on it. We added 2 nodes which are in a private virtual LAN with the server node. We brought the nodes into ‘Ready’ state and installed Juju on the server. Now when we try to run juju bootstrap, it says “Attempting to connect to 10.10.10.104” and fails after 10 min with a connection refused error. 10.10.10.104 is one of our nodes in the private vLAN and was already in MAAS.

    My suspicion is that the node is in ‘Ready’ state and hence no OS is installed on it yet. Juju is attempting to connect to it, and it should obviously be unable to connect, as MAAS collects all the info it needs from the nodes during PXE boot and then shuts the machines down.

    Juju wants to install an OS on the nodes but the machines are not up.

    PS: Our power on type is IPMI

    On running juju bootstrap --debug, we see a slew of these messages:
    ------------------------------------------------------------
    2014-10-12 02:50:58 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /root/.juju/ssh/juju_id_rsa -i /root/.ssh/id_rsa ubuntu@slot13.maas /bin/bash
    ------------------------------------------------------------

    And after 10 mins, it now fails with:
    ------------------------------------------------------------
    waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist
    ------------------------------------------------------------

    • donald

      I faced the same problem. I solved it by SSHing to the MAAS node and doing this:

      1) create directory : /var/log/juju
      2) Change the ownership of the directory to ubuntu:ubuntu
      3) Create a text file /var/lib/juju/nonce.txt with content – user-admin:bootstrap

      Sorry for my English, hope it helps

  • Neel Basu

    `juju deploy --to lxc:0 mysql` leaves the deployment pending for hours and I don’t see any quotable entries in `juju debug-log`. I have disabled ufw and still nothing is happening. Did you install `juju-local`?

  • riccardo

    Hi Marco, I tried to follow your guide but when I try to open Horizon I see just a white page.
    To achieve that I’ve used Ubuntu 14.04 LTS with MAAS and 3 VMs (one for juju-gui and the rest for OpenStack, as you described in the guide).
    I also tried to disable ufw and ran a ping from the host to the VMs (using IP address and FQDN) and it works perfectly.
    Could you help me, please?

    • riccardo

      I’ve resolved it. I just waited a few minutes and that was it…

  • Justin

    Do you have to have the MAAS box with dual NICs? Do you know how the Orange Box handles this, since I assume none of the NUCs there have dual NICs?

    • The Orange Box’s node0 (the MAAS master) has two NICs; you have to have two NICs in order for MAAS to manage networking. If you’re virtualizing MAAS, which you can do, you can create a network bridge instead to get around this, but it’s a much more complicated setup.

      • Justin

        Ah thanks. I spent ages trying to figure out how they did it

        • Rick Hicksted

          You could add a USB 3 to Ethernet (10/100/1000) dongle to the Intel NUC.

          • Justin

            Exactly what I ended up doing. Last attempt I made had all manner of issues with the network setup. First the nodes couldn’t connect out, then after resolving that I couldn’t access services running on the nodes externally. Configuring networks is not my strong suit 🙁

      • Gregg

        This is the sanest and most straightforward walkthrough on this subject I have yet read.
        I have almost the exact same setup as yourself, however only 2 Intel NUCs and no other spare hardware with which to add a second NIC to follow this procedure.

        Please, is there *any* documentation you can point to in which to set up the alternative scenario (virtualized/kvm MAAS + bare metal juju nodes)?

        I have searched all over the net for a simple guide on how to do this, only to end up in a tangled, tangled web of ovs-vsctl commands and vlans and vnics and bridges and tunnels and diagrams like the one below that I literally looked at until I became lightheaded and keeled over. It *can’t* be this difficult.

  • madejackson

    In fact, you’ve done this with 3 machines, because the MAAS controller is a machine itself. Is it possible to add the MAAS controller as a Juju node?

    • madejackson

      Well… I’m already working on a solution: bootstrap a 2nd Juju environment “local” and deploy LXC containers on the local environment 🙂 Then I can add as many compute services as I’d like.

      • That’s another possibility. You’ll run into issues with nova-compute as I outlined above, but juju has the ability to “add-machine” any arbitrary machine with SSH access. So you could install Ubuntu Cloud on the physical nodes, then run `juju add-machine ubuntu@machine` and it’ll be added to the available pool of machines.

        There are networking issues to take into consideration as LXC machines won’t be addressable from the local provider to the other hardware without setting up NAT or exposing the bridge to your network.

      • Gregg

        Please tell me you were successful with this endeavor and there is a blog post somewhere on the Internet that details the procedure that you used.

        Sincerely,
        Desperately Seeking Openstack Closure

    • So, you can virtualize the MAAS controller as a KVM guest on one of the hardware nodes; you’ll just need to set up a network bridge to connect it to the rest of your hardware.

  • Eric

    This is brilliant; I’ll definitely be trying this soon!

  • Dgn

    It doesn’t work for me!! I use MAAS and Juju behind a proxy and I always get this error when I run “juju bootstrap”: ERROR juju.cmd.juju common.go:32 the environment could not be destroyed: gomaasapi: got error back from server: 504 Gateway Time-out
    I set my environments.yaml to use my proxy!!
    I’m really stuck if I can’t use a proxy with Juju

  • Pheakdey

    What is the difference between MaaS and IaaS?

    • Jonathan Dill

      Seems lower level, as it doesn’t automatically provide several things you’d normally expect from an IaaS provider, like storage management, replication and backups; you would have to implement all of those things yourself with Metal as a Service. It seems more comparable to something like vCenter. Also, it seems like a poor choice of acronym, as MaaS is also emerging in common use as Monitoring as a Service.

  • viperlogic

    Nice article. All works OK for me bar deploying a charm not on the bootstrapped node. I’ve documented it here http://askubuntu.com/questions/661170/juju-deploy-not-installing-juju-agent and it would be great if you could provide some insight and guidance.

  • Rick Hicksted

    The above guidelines are a great source of info. With the latest bits I was able to get it all working except for the containers, using the latest MAAS on Ubuntu 15.04 (containers stuck waiting for agent completion even though the agent started fine on the container). I even dropped the firewall as listed in other threads, without luck. This caused me to use “7” NUCs, as my OpenStack charms do not play well in the same root container (0).

    Physical Machines (7 Intel NUCs, 1 NUC as the MAAS server: All model: Intel NUC Kit NUC5i7RYH Barebone System BOXNUC5I7RYH: 16G Ram/256BG SSD M2):

    NUC0 MAAS server

    NUC1 mysql/juju GUI
    NUC2 rabbitmq server
    NUC3 glance
    NUC4 cinder
    NUC5 nova cloud controller
    NUC6 nova
    NUC7 horizon

    Did you ever add the neutron config here for networking support?

    Did you use neutron-gateway? How is this possible with a NUC as it only has one Intel 1G NIC?

    Thanks,

    Rick

    • Rick Hicksted

      Update: Got the Neutron Gateway up where VMs can ping each other (Horizon now shows the networking topology/settings).

      NUC0 MAAS server

      NUC1 mysql/juju GUI
      NUC2 rabbitmq server/keystone
      NUC3 glance
      NUC4 cinder
      NUC5 nova cloud controller
      NUC6 nova
      NUC7 horizon

      NUC8 neutron-gateway (required special config for nova cloud controller)

      • Amit

        hi Rick, did you deploy neutron-api neutron-openvswitch and neutron-gateway – all three of those charms on the same LXC?

        Or did you just end up installing neutron-gateway — to get the overall networking up and running in Openstack with horizon showing you the network topology.

        Other query I had was – Are you able to reach to your Openstack VM instances from your main linux box (i.e. where the juju commands are run)?

        • Rick Hicksted

          I put each charm on bare metal given the number of NUCs I have:

          root@NUC10:/home/ubuntu# neutron agent-list
          +--------------------------------------+--------------------+-------+-------+----------------+
          | id                                   | agent_type         | host  | alive | admin_state_up |
          +--------------------------------------+--------------------+-------+-------+----------------+
          | 01e13b77-e695-4377-bcae-293e881cedd2 | Open vSwitch agent | NUC11 | 🙂    | True           |
          | 1935e783-ca54-49e1-813e-48deae8a1d71 | DHCP agent         | NUC10 | 🙂    | True           |
          | 3d798100-6238-4cce-ac1d-15d3e1c77ef4 | L3 agent           | NUC10 | 🙂    | True           |
          | 424319cd-6f3f-41d1-af0c-4e1885f6d723 | Open vSwitch agent | NUC10 | 🙂    | True           |
          | 51576687-8983-44d3-96c0-249d98dec2c6 | Metadata agent     | NUC10 | 🙂    | True           |
          | 72029565-4373-4503-a8cc-72b5e26b58be | Metering agent     | NUC10 | 🙂    | True           |
          | 942a09b4-a743-4a0b-a2fb-c2a8abdbef6c | Loadbalancer agent | NUC10 | 🙂    | True           |
          | f2c3bbf9-534d-4370-b642-5e361f5e41a3 | Open vSwitch agent | NUC8  | 🙂    | True           |
          +--------------------------------------+--------------------+-------+-------+----------------+
          root@NUC10:/home/ubuntu#

          NUC11/NUC8 are compute nodes.

          NUC10 is running the neutron-gateway charm.

          with the following config.yaml:

          root@maas:/home/rhicksted# cat config.yaml
          nova-cloud-controller:
            network-manager: Neutron

          neutron-gateway:
            ext-port: eth0

          Thanks,

          Rick

          • Amit

            Rick, Thank you for sharing your expertise and parting your advice.

            Okay. So looks like you did independent deploys on individual machines.

            juju deploy neutron-gateway --config config.yaml
            juju deploy neutron-api

            juju deploy neutron-openvswitch

            and then their respective “juju add-relation”. Is that right?

            My setup on the other hand – is exactly same as yours minus the MAAS. Instead, it is a bare metal x86 host, with JUJU configured to create and use KVM VMs when a “juju deploy” command is invoked.

            My config.yaml is similar to yours. However below are my queries:

            1. ) When you run “neutron agent-list” on your neutron gateway – I see a bunch agents running there. Did your “juju deploy” command automatically trigger the installation, configuration and starting of those services on NUC10? Or did you have to run any additional steps similar to below, after your neutron-gateway node was up and running.

            sudo apt-get install neutron-server neutron-dhcp-agent neutron-plugin-openvswitch-agent neutron-l3-agent

            2.) Trying to set ext-port setting in my config.yaml for “neutron-gateway” — is causing the juju charm to fail with a message saying “connection to the agent is lost”.

            My (NUC10 equivalent) neutron-gateway node has below ifconfig :

            eth0 Link encap:Ethernet HWaddr 52:54:00:71:13:02

            inet addr:10.0.3.195 Bcast:10.0.3.255 Mask:255.255.255.0

            lo Link encap:Local Loopback

            inet addr:127.0.0.1 Mask:255.0.0.0

            virbr0 Link encap:Ethernet HWaddr 3e:3b:1c:8b:89:8b

            inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0

            The egress traffic going outside, seems to be going via eth0. So ideally based on my understanding – the ext-port should correctly be eth0 – similar to your NUC10 config.

            3) Finally after getting your ubuntu juju-installed openstack cloud up and running — when you spin up new VM instances in openstack — are those instances able to reach outside internet (say google.com) and are you able to SSH into those VMs from outside world?

            thanks again.

          • Rick Hicksted

            I deployed the nova-cloud-controller --config config.yaml to NUC9. BTW NUC10 (neutron-gateway) needs 2 NICs.

            root@NUC10:/home/ubuntu# ifconfig
            br-data Link encap:Ethernet HWaddr c2:c0:d7:1a:51:43
            inet6 addr: fe80::247f:38ff:fedd:33f2/64 Scope:Link
            UP BROADCAST RUNNING MTU:1500 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B) TX bytes:738 (738.0 B)

            br-ex Link encap:Ethernet HWaddr 3c:18:a0:40:5b:47
            inet6 addr: fe80::c888:33ff:fe3b:81b8/64 Scope:Link
            UP BROADCAST RUNNING MTU:1500 Metric:1
            RX packets:1926893 errors:0 dropped:272 overruns:0 frame:0
            TX packets:9932 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:406287399 (406.2 MB) TX bytes:417588 (417.5 KB)

            br-int Link encap:Ethernet HWaddr 1a:6d:09:f5:50:4c
            inet6 addr: fe80::47c:6bff:fe3f:ddc7/64 Scope:Link
            UP BROADCAST RUNNING MTU:1500 Metric:1
            RX packets:38990 errors:0 dropped:0 overruns:0 frame:0
            TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:4360426 (4.3 MB) TX bytes:738 (738.0 B)

            br-tun Link encap:Ethernet HWaddr 8a:6a:46:cb:e5:44
            inet6 addr: fe80::c0d4:6eff:fedd:fdcf/64 Scope:Link
            UP BROADCAST RUNNING MTU:1500 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B) TX bytes:738 (738.0 B)

            eth0 Link encap:Ethernet HWaddr b8:ae:ed:75:f0:f8
            inet6 addr: fe80::baae:edff:fe75:f0f8/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
            RX packets:4728976 errors:0 dropped:0 overruns:0 frame:0
            TX packets:6591569 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1188600821 (1.1 GB) TX bytes:4867618959 (4.8 GB)
            Interrupt:20 Memory:f7100000-f7120000

            eth1 Link encap:Ethernet HWaddr 3c:18:a0:40:5b:47
            inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
            inet6 addr: fe80::3e18:a0ff:fe40:5b47/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
            RX packets:4051797 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1395516 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:3353635639 (3.3 GB) TX bytes:104950201 (104.9 MB)

            int-br-data Link encap:Ethernet HWaddr c6:40:24:83:1e:c1
            inet6 addr: fe80::c440:24ff:fe83:1ec1/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:7 errors:0 dropped:0 overruns:0 frame:0
            TX packets:38997 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:738 (738.0 B) TX bytes:4361164 (4.3 MB)

            juju-br0 Link encap:Ethernet HWaddr b8:ae:ed:75:f0:f8
            inet addr:192.168.2.189 Bcast:192.168.2.255 Mask:255.255.255.0
            inet6 addr: fe80::baae:edff:fe75:f0f8/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
            RX packets:4508074 errors:0 dropped:0 overruns:0 frame:0
            TX packets:6478642 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:1091817557 (1.0 GB) TX bytes:4833509142 (4.8 GB)

            lo Link encap:Local Loopback
            inet addr:127.0.0.1 Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:65536 Metric:1
            RX packets:14451 errors:0 dropped:0 overruns:0 frame:0
            TX packets:14451 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:4284928 (4.2 MB) TX bytes:4284928 (4.2 MB)

            phy-br-data Link encap:Ethernet HWaddr a2:d0:bf:eb:f2:46
            inet6 addr: fe80::a0d0:bfff:feeb:f246/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:38997 errors:0 dropped:0 overruns:0 frame:0
            TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:4361164 (4.3 MB) TX bytes:738 (738.0 B)

            tap6aa9fc54-a4 Link encap:Ethernet HWaddr 9a:23:66:eb:c3:29
            inet6 addr: fe80::9823:66ff:feeb:c329/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:1353084 errors:0 dropped:0 overruns:0 frame:0
            TX packets:4043219 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:103162589 (103.1 MB) TX bytes:3409827937 (3.4 GB)

            tapbd4ef3aa-f6 Link encap:Ethernet HWaddr 8e:a5:8c:11:78:dc
            inet6 addr: fe80::8ca5:8cff:fe11:78dc/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:2146155 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1357702 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:3005023042 (3.0 GB) TX bytes:106192413 (106.1 MB)

            tapca998e74-89 Link encap:Ethernet HWaddr f2:0b:00:4a:9a:b3
            inet6 addr: fe80::f00b:ff:fe4a:9ab3/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:183 errors:0 dropped:0 overruns:0 frame:0
            TX packets:39060 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:43276 (43.2 KB) TX bytes:4374251 (4.3 MB)

            tape3e77829-87 Link encap:Ethernet HWaddr 62:37:71:f4:f8:85
            inet6 addr: fe80::6037:71ff:fef4:f885/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:8 errors:0 dropped:0 overruns:0 frame:0
            TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:648 (648.0 B) TX bytes:1259 (1.2 KB)

            root@NUC10:/home/ubuntu#

            Thanks,

            Rick

          • Rick Hicksted

            You do not need these:

            juju deploy neutron-api

            juju deploy neutron-openvswitch

            They come as part of neutron-gateway (trusty).

            Thanks,

            Rick

          • Rick Hicksted

            On a final note, to browse to google etc. from the VMs I needed to set all the compute/network nodes to MTU 9000:

            ex: on NUC 10 (rc.local)

            ip link set eth0 mtu 9000
            ip link set eth1 mtu 9000
            ip link set juju-br0 mtu 9000

            Note: this is required if you are using the default GRE overlay network. Also make sure to assign a floating IP to all VMs via horizon if you want to use RDP (Remote Desktop Protocol) on Windows VMs.

            I just added another compute node and spun up another 3 Windows 10 medium instances(VMs)…all is working:-)

            Thanks,

            Rick

          • Amit

            Rick, your remark about “neutron-gateway” being sufficient by itself was very helpful. Now my setup is much simpler, with just neutron-gateway deployed by itself. I am also now able to see all the agents, DHCP, etc. processes running properly on the neutron-gateway node.

            However – I am still a bit far from getting my VM instances talking to internet.

            Follow up questions:

            1) The juju-br0 on neutron-gateway : Is that interface something that you added manually? or was it auto created by Juju deploy?

            2) The new interface eth1 that you added to neutron-gateway shows an IP address of “192.168.1.2”. So that means your goal here was to bridge the JUJU controlled “192.168.2.xx” network and the internal network “192.168.1.xx”. Is that correct?

            3) Based on your eth1 IP address “192.168.1.2” — Is the OpenStack private network that you may have created for VMs to be part of is having a subnet “192.168.1.0/24” with their gateway as 192.168.1.2. Is that correct understanding?

            4) Finally the neutron router in your openstack dashboard – It would have one internal facing IP from 192.168.1.xx series — and the outward facing gateway IP attached to the router would be 192.168.2.189 (same IP as assigned to the newly added network card on NUC10)?

            Thanks again.

          • Rick Hicksted

            1) The juju-br0 on neutron-gateway : Is that interface something that
            you added manually? or was it auto created by Juju deploy?

            Auto created by juju

            2) The new interface eth1 that you added to neutron-gateway shows an IP
            address of “192.168.1.2”. So that means your goal here was to bridge
            the JUJU controlled “192.168.2.xx” network and the internal network
            “192.168.1.xx”. Is that correct?

            192.168.1.x is the external network to the internet.

            3) Based on your eth1 IP address “192.168.1.2” — Is the OpenStack
            private network that you may have created for VMs to be part of is
            having a subnet “192.168.1.0/24” with their gateway as 192.168.1.2. Is
            that correct understanding?

            192.168.2.x is the private Maas provisioning network.

            192.168.1.x is the external network (to the internet).

            4) Finally the neutron router in your openstack dashboard – It would
            have one internal facing IP from 192.168.1.xx series — and the outward
            facing gateway IP attached to the router would be 192.168.2.189 (same
            IP as assigned to the newly added network card on NUC10)?

            My VM private network is 10.0.0.x; the router routes from 10.0.0.0 to the public 192.168.1.x.

            The router has two interfaces: 192.168.1.100 and 10.0.0.1 (tying the two networks together).

            Thanks,

            Rick

          • Rick Hicksted

            Detailed router info:

            root@NUC10:/home/ubuntu# neutron router-list
            +--------------------------------------+---------+------------------------------------------------------------------------------+
            | id                                   | name    | external_gateway_info                                                        |
            +--------------------------------------+---------+------------------------------------------------------------------------------+
            | 2d573a53-5a2e-4da5-821b-693b6370d728 | router1 | {"network_id": "a696e6e7-1fca-45bc-b80c-1b8ccd60c976", "enable_snat": true} |
            +--------------------------------------+---------+------------------------------------------------------------------------------+
            root@NUC10:/home/ubuntu# ip netns list
            qrouter-2d573a53-5a2e-4da5-821b-693b6370d728
            qdhcp-c89800bc-bd29-4ccd-a944-d09d4ccd5a77
            qdhcp-a696e6e7-1fca-45bc-b80c-1b8ccd60c976

            root@NUC10:/home/ubuntu# ip netns exec qrouter-2d573a53-5a2e-4da5-821b-693b6370d728 ifconfig
            lo Link encap:Local Loopback
            inet addr:127.0.0.1 Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:65536 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

            qg-6aa9fc54-a4 Link encap:Ethernet HWaddr fa:16:3e:71:8a:36
            inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0
            inet6 addr: 2602:306:36c0:eac0:f816:3eff:fe71:8a36/64 Scope:Global
            inet6 addr: fe80::f816:3eff:fe71:8a36/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:4675886 errors:0 dropped:360 overruns:0 frame:0
            TX packets:1424183 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:3598514858 (3.5 GB) TX bytes:110116530 (110.1 MB)

            qr-bd4ef3aa-f6 Link encap:Ethernet HWaddr fa:16:3e:9c:f1:14
            inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
            inet6 addr: fe80::f816:3eff:fe9c:f114/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:1435094 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2217219 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:114741738 (114.7 MB) TX bytes:3064689562 (3.0 GB)

            root@NUC10:/home/ubuntu#

            Thanks,

            Rick

          • Rick Hicksted

            Here is a picture

          • Amit

            Rick,

            I was finally able to get the VMs working (pinging ingress and egress both).

            The root cause turned out to be an incorrect parameter that I had passed to the “virsh attach-interface” command, which was preventing packets from flowing between the multiple interfaces on the neutron-gateway. Thanks again so much. Really appreciate all the help.

            Strangely, the router_gateway interface still shows as “DOWN” in my cloud, just as it shows in your screenshot. Although all the network functionality is working just fine.

            By the way, I am curious: you mentioned Windows 10 running on your Juju-deployed OpenStack cloud. Did you custom-prepare your Windows 10 ISO with the virtio drivers and then manually install it into a VM? Or were you able to find a working QCOW2 image for Windows 10 that worked well with KVM? (Similar to the one found here: http://www.cloudbase.it/windows-cloud-images/)

          • Rick Hicksted

            Yes, I built my own custom image using virt-manager/KVM on a Linux box (used my same MAAS server).

            I followed this for Windows 7 (link below) and it works for Windows 10 as well (for the most part):

            http://cloud-ninja.org/2014/05/14/running-windows-7-guests-on-openstack-icehouse/

            Thanks,

            Rick

          • Tomáš Marný

            Hi guys, would you mind putting your setup/experience into an article similar to this one? It would be really appreciated (I am stuck at the moment somewhere in the pipeline: network -> VirtualBox -> MAAS -> Juju -> OpenStack Neutron -> VirtualBox bridge to the public network).

          • Rick Hicksted

            Here is another link where I was dealing with the external VM network access: http://askubuntu.com/questions/670620/openstack-network-compute-to-neutron-gateway-to-external-gateway-not-functional

      • Saurabh Keskar (sk)

        The information you gave was very helpful.

        Can you please tell me what special config is required for that?

        I have installed neutron-gateway on a new node with 2 NICs and connected it all up properly (add-relation), but on the dashboard there is no network tab, and there is no entry for neutron in keystone user-list. On the neutron node all services are running fine.

        I have chosen neutron|quantum in nova-cloud-controller.
        Please help…

  • Saurabh Keskar (sk)

    Thank you for that. Great work! Can you give some guidance for the Neutron installation please…

  • Gilbert Standen

    I got this working perfectly, but it needed some tweaks for my setup which I’m going to share here in case it helps someone else. The problem I ran into for quite some time was that the Juju Charms deployed to LXC containers were perpetually in “Pending” state in the Juju GUI and of course same status in “juju status”. I hope I’ve captured all the steps that were required to completely fix this pending status problem. I’ll update here if I missed anything, and I hope this helps others.

    The basic strategy is to make these edits to several files in the /var/lib/lxc/juju-trusty-lxc-template/rootfs filesystem BEFORE you begin deploying LXC containers. These edits will work globally across all subsequently deployed containers. Just substitute your interface names and your LAN network in what follows, and it should work.

    My maas server setup is:
    Dell PowerEdge 2850, BIOS at latest A07 rev, and Dell BMC Firmware, v.1.83, A10 (these BIOS and Firmware upgrades are required)
    15.10 ubuntu server edition on the maas server; all enlisted bare-metal nodes use the 14.04 trusty image from maas.
    2 x 1Gb network ports
    enp6s7 192.168.1.37 static IP
    enp7s8 10.207.39.100 static IP

    My maas enlistment bare-metals are also Dell PowerEdge 2850’s. The Dell BMC IPMI is set to DHCP, and gets addresses on the 192.168.1.0/24 network, and the bare-metal maas enlisted nodes come up on the 10.207.39.0/24 network.

    Tweak 1: Since I’m using two networks, I needed routing for the 10.207.39.0/24 network via the WAN on 192.168.1.0/24 in order to be able to download from the outside world to the LXC containers via “the internet”. This was accomplished as follows by adding these rules to the maas server:

    sudo iptables -A FORWARD -s 10.207.39.0/24 -o enp6s7 -j ACCEPT
    sudo iptables -A FORWARD -d 10.207.39.0/24 -m state --state ESTABLISHED,RELATED -i enp6s7 -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -s 10.207.39.0/24 -o enp6s7 -j MASQUERADE
    sudo iptables-save | sudo tee /etc/iptables.sav

    Once the above rules are set, it should be possible to “nslookup google.com” and “ping -c 3 google.com” from the maas-enlisted bare-metal LXC host.

    Note 1: Here, enp6s7 is the “internet-connected” interface on the 192.168.1.0/24 network (i.e. that goes to the router provided by my broadband internet service provider), and 10.207.39.0/24 is the private LAN network on the enp7s8 interface which is the network that maas and its’ maas-enlisted bare-metal servers use.

    Note 2: The rules are made permanent on the maas server across reboots by the following line added before the “exit 0” line in /etc/rc.local as shown below:
    sudo iptables-restore < /etc/iptables.sav
    exit 0

    Note 3: Check that the rules were applied after reboot of the maas server as follows:
    gstanden@ubuntu:~$ sudo iptables -S
    -P INPUT ACCEPT
    -P FORWARD ACCEPT
    -P OUTPUT ACCEPT
    -A FORWARD -s 10.207.39.0/24 -o enp6s7 -j ACCEPT
    -A FORWARD -d 10.207.39.0/24 -i enp6s7 -m state --state RELATED,ESTABLISHED -j ACCEPT
    gstanden@ubuntu:~$

    Tweak 2: Enable DNS lookups on the MAAS server. This could probably also be done in /etc/network/interfaces, but I put these tweaks in /etc/dhcp/dhclient.conf on the maas server as shown below. You can just uncomment and edit the example lines in the file:

    supersede domain-name "maas";
    prepend domain-name-servers 10.207.39.100;

    Note 1: This will allow lookups of enlisted bare-metal servers on the maas node in the 10.207.39.0/24 network that belong to this maas.

    Tweak 3: Several tweaks are needed to establish robust networking services for the LXC containers. These tweaks to the LXC template will fix problems with downloading the Juju tools from the bare-metal LXC host, fix problems resolving WAN (192.168.1.0/24) addresses, and fix problems resolving addresses on the 10.207.39.0/24 network.

    Note 1: The problem that was encountered is/was that no matter how the /var/lib/lxc/machine/config file is tweaked, containers ALWAYS get an address on the 192.168.1.0/24 network, even though they are using the juju-br0 bridge which has a 10.207.39.0/24 address; but in this two-network LAN/WAN setup, what is needed is ultimately networking to both networks.

    Tweak 3a: ssh ubuntu@portly-legs (portly-legs is the enlistment name that maas gave to my maas bare-metal lxc host)

    sudo su -

    cd /var/lib/lxc/juju-trusty-lxc-template/rootfs/etc/network/interfaces.d/

    vi eth0.cfg and add the following lines to the eth0.cfg file (add the "up ip route…" and "dns-search maas" lines).

    # The primary network interface
    auto eth0
    iface eth0 inet dhcp
    up ip route add 10.207.39.0/24 via 192.168.1.37
    dns-search maas

    The ip route line will establish ssh / ping / scp connectivity etc. from the lxc containers on 192.168.1.0/24 (eth0 in the LXC container) to the maas-enlisted bare-metal lxc-host on the 10.207.39.0/24 network. The "dns-search maas" line is part of how maas DNS lookups are made available inside the LXC containers so that full DNS resolution including the maas network will work.

    Tweak 3b: Edit the /var/lib/lxc/juju-trusty-lxc-template/rootfs/etc/dhcp/dhclient.conf file and add this line (you can uncomment the example line and modify it):

    prepend domain-name-servers 10.207.39.100;

    Note 1: This could probably alternatively be done in /var/lib/lxc/juju-trusty-lxc-template/rootfs/etc/network/interfaces.d/eth0.cfg using a "dns-nameservers 10.207.39.100" entry, but I've used dhclient.conf in this case.

    I think this is all the tweaks. Once all these tweaks have been made, one can start deploying openstack components to LXC containers in this dual-network setup and they come out of pending state and into ready state as fast as flapjacks off the griddle! Please let me know if I missed any steps, and I will try to double-check these also. But I think that's it.

    There is one other tweak needed: Edit /etc/sysctl.conf on the maas server such that the following command gives result as shown:

    gstanden@ubuntu:~$ sudo sysctl -p
    net.ipv4.ip_forward = 1
    gstanden@ubuntu:~$

    i.e. uncomment the "net.ipv4.ip_forward" line and make sure it is set to "1" as shown above.

    Once all of these steps are done, try deploying an openstack component to LXC and monitor the progress:

    (from terminal on the maas server as the user who owns the maas deployment): ssh ubuntu@(ip address of a container)

    on the container: sudo su -
    on the container: tail -f /var/log/cloud-init.log
    and/or
    tail -f /var/log/cloud-init-output.log

    If you see message like this, you are probably all good:

    Attempt 1 to download tools from https://10.207.39.152:17070/tools/1.25.3-trusty-amd64
    + curl -sSfw tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s --noproxy * --insecure -o /var/lib/juju/tools/1.25.3-trusty-amd64/tools.tar.gz https://10.207.39.152:17070/tools/1.25.3-trusty-amd64
    tools from https://10.207.39.152:17070/tools/1.25.3-trusty-amd64 downloaded: HTTP 200; time 1.698s; size 18722171 bytes; speed 11026214.000 bytes/s + echo Tools downloaded successfully.
    Tools downloaded successfully.

    When there are issues with networking in the LXC containers, this will fail on multiple attempts. The above tweaks should completely fix those issues, and will also provide full DNS lookup for bare-metal maas enlisted servers on the maas network (the LXC hosts and single product hosts such as nova-compute). Containers should go to fully deployed ready status in less than 5 minutes each. I deploy them one-at-at-time and I wait for each one to reach ready status. YMMV, HTH, Gilbert Standen, St. Louis, MO Feb. 11, 2016, 1:25 PM CT

  • Jean-Baptiste

    Thanks a lot for this introduction to OpenStack. Following this tutorial, we cannot spawn any instance without running into [Error: No valid host was found. ].
    As far as I understand, this is related to the VM network. Would you have any direction we could follow to set up a flat network or bridge on the nova-compute node via Juju? Or is it mandatory to add a machine for installing Neutron?

    wish you the Best !

  • Nikos

    I tried this on Ubuntu 14 and the setup seems OK, but when I try to log in to Horizon I get this message “An error occurred authenticating. Please try again later.”, so I believe I have to make some settings on the nodes?
    I am a newbie with OpenStack btw 🙂

  • Dilip Renkila

    After following the steps you listed, the containers aren’t deployed and in the Juju GUI they are in pending state.

  • Dilip Renkila

    Gilbert Standen, can you please list the steps you followed?