Getting Started with OpenStack Neutron
Platform9 recently announced beta support for OpenStack Neutron networking. In this blog post, I explain the differences between Linux bridges and Open vSwitch and why Platform9 chose one over the other.
I have come across quite a few people who are very excited to get on board with deploying OpenStack and bringing up their own private clouds. The first step most people take is to git clone devstack (at least if you are a developer), read a pile of documentation, and start hacking on configuration files. One of the important decisions you need to make (amongst many others) is whether to use Linux bridges, Open vSwitch (OVS), or some other third-party switch to wire cloud instances to the physical network. To keep things brief, I'm going to focus on Linux bridges and Open vSwitch in this basic overview. This blog post does not cover physical switch configuration or switch internals.
If you ask me whether you should choose one over the other, I'll give the standard answer to all questions in computer science and networking: "It depends." A Linux bridge is much more than just a virtual bridge. It also does filtering and traffic shaping, and you can program it to consult iptables before forwarding traffic using the bridge-netfilter module built into the Linux kernel. Open vSwitch, on the other hand, is an L2 virtual switch that allows programming flows conforming to the OpenFlow standard and is by design a distributed virtual switch.
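To make the bridge-netfilter point concrete, here is a minimal sketch of hooking bridged traffic into iptables (requires root; the module and sysctl names are standard, while the example SSH rule is an assumption for illustration):

```shell
# Load the bridge-netfilter module so bridged frames traverse iptables
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1

# An ordinary iptables FORWARD rule now applies to traffic crossing
# the bridge; -m physdev matches on the underlying bridge ports
iptables -A FORWARD -m physdev --physdev-is-bridged -p tcp --dport 22 -j DROP
```

With the sysctl set to 0, the same traffic would be forwarded by the bridge without ever hitting the FORWARD chain.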
Linux Bridge and Neutron
The Linux bridge started out as, and to date remains, a kernel module: the "bridge" module built into the Linux kernel. The initial implementation went into the Linux 2.2 kernel, and since then it has evolved into more than just a packet forwarder. Combined with netfilter (iptables), the Linux bridge becomes a very powerful extension of the Linux networking stack (albeit with the added debugging overhead that comes with it). It is much simpler to build a solution out of Linux bridging than out of OVS; it is easier to debug and has proven stable over its many years of existence.
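The basic workflow is simple enough to show in a few commands. A sketch of creating a bridge and attaching a VM's virtual NIC to it (requires root; the names br0, eth0, and tap0 are assumptions for illustration):

```shell
# Create a Linux bridge and bring it up
ip link add name br0 type bridge
ip link set br0 up

# Attach the physical uplink and the VM's tap device as bridge ports
ip link set eth0 master br0
ip link set tap0 master br0

# Inspect the result
ip link show master br0
```

That is essentially all a hypervisor needs to put a VM on the physical L2 segment, which is a big part of the appeal.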
A Linux bridge by default trunks all packets. The advantage of this approach is that each VM can sit on a different VLAN and terminate VLANs internally; when a firewall is present in your architecture as a VM, it typically expects to see all packets, tagged or untagged. The downside of this approach is that all packets flow to all ports connected to the Linux bridge. More commonly, however, VLAN termination is configured at the physical ingress points, i.e., the Ethernet interfaces. Linux bridges also work quite well when coupled with GRE tunnels.
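Both patterns can be sketched with iproute2. The VLAN example terminates VLAN 100 at the physical interface and bridges only that VLAN's traffic; the GRE example enslaves an L2 GRE (gretap) tunnel to the same bridge (requires root; all names and IP addresses are assumptions for illustration):

```shell
# Terminate VLAN 100 on the physical NIC, then bridge the untagged result
ip link add link eth0 name eth0.100 type vlan id 100
ip link add name brvlan100 type bridge
ip link set eth0.100 master brvlan100
ip link set eth0.100 up
ip link set brvlan100 up

# Couple the bridge with a GRE tunnel to a remote hypervisor
# (gretap carries Ethernet frames, so it can be a bridge port)
ip link add gre1 type gretap local 10.0.0.1 remote 10.0.0.2
ip link set gre1 master brvlan100
ip link set gre1 up
```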
OVS and Neutron
Neutron's reference implementation is built around Open vSwitch, and existing documentation mostly describes how to set up your private cloud networking using OVS. The switch is installed on all of your physical servers, and one or more servers act as the Neutron server, instructing and configuring the individual switches.
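As a rough sketch of what selecting OVS looks like on the Neutron side, a hypothetical ML2 configuration fragment (option names follow the ML2 plugin; exact values vary by release and deployment):

```ini
# ml2_conf.ini (illustrative fragment, not a complete configuration)
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
```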
An individual OVS instance on a physical server consists of three components: the ovsdb database, the ovs-vswitchd daemon, and the kernel-module data plane. Packet processing and forwarding decisions are made in user space, and the resulting (match, action) flow rules are pushed down to the kernel data plane. When a packet that matches a cached rule arrives at the switch, the specified action is applied directly. When a packet that does not match any rule arrives at the data plane, it is pushed up to user space to figure out where it should go; the resulting action is then cached and pushed back into the data plane. As you can guess, this improves performance considerably.
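The three components are visible from the standard OVS tools. A sketch of creating a bridge, programming a match/action flow, and inspecting the kernel datapath cache (requires root and Open vSwitch installed; br-int and tap0 are assumptions for illustration):

```shell
# ovs-vsctl writes to the ovsdb database, which ovs-vswitchd reads
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int tap0

# ovs-ofctl programs OpenFlow (match, action) rules on the bridge:
# here, drop everything arriving on OpenFlow port 1
ovs-ofctl add-flow br-int "in_port=1,actions=drop"

# The kernel datapath caches flows for packets it has already seen;
# dump-flows shows what is being fast-pathed without a user-space trip
ovs-dpctl dump-flows
```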
In addition, recent versions of OVS have improved flow-caching mechanisms (such as megaflows), providing more stability and speed. OVS also works out of the box with Neutron for overlay networks such as GRE and VXLAN, and is not restricted to VLANs.
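Overlay setup is a one-liner per peer. A sketch of the kind of VXLAN tunnel port Neutron's OVS agent creates between two hypervisors (requires root and OVS installed; bridge name and IP address are assumptions for illustration):

```shell
# A tunnel bridge with a VXLAN port to a remote hypervisor;
# key=flow lets OpenFlow rules choose the VXLAN network identifier
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan \
    options:remote_ip=192.168.1.2 options:key=flow
```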
Setting up Linux bridge or OVS with Neutron is your choice; there are always pros and cons to any technology. At Platform9, we chose OVS so that we can provide our customers with broad SDN controller support, such as Cisco APIC and VMware NSX. These and various other SDN controllers already support OVS, and you would need to jump through hoops to get them to work with Linux bridges in OpenStack. We believe that offering OVS with the initial release of Platform9 networking will give our customers new capabilities for building a robust private cloud.