Native IPv6 Functionality in Docker

The days of kludgy hacks for IPv6-connected docker containers are over. A recent PR merged native IPv6 functionality into the docker daemon. The new bits have not yet made it into the docker ppa package as of 1/17/2015. Therefore, some assembly is required.

You’ll need to compile docker from source. The only currently supported docker build process uses docker. Does this remind anyone of the movie Inception?

Here’s an installation process you can use on a fresh 64-bit Ubuntu 14.04 VM.


sudo apt-get update
sudo apt-get -y install git build-essential
git clone https://github.com/docker/docker.git
cd docker
sudo make build
sudo make binary
sudo service docker stop
sudo ip link set docker0 down
sudo ip link delete docker0
VERSION=1.4.1  # version as of this writing
sudo cp ./bundles/$VERSION-dev/binary/docker-$VERSION-dev $(which docker)
echo "DOCKER_OPTS='--ipv6 --fixed-cidr-v6=\"2001:DB8::/64\"'" | sudo tee -a /etc/default/docker
sudo service docker start

Docker does not put an address within the /64 on the docker0 bridge. It uses fe80::1/64. The default route in the container is set to this link local address.

Your containers will not be able to communicate with the IPv6 internet unless the /64 you’ve selected is routed to the docker host. Unlike how docker handles IPv4 in containers, there is no NAT. Use a provider that will route the /64 to your docker host. Linode did this for me after I emailed the request to its support team. Providers such as DigitalOcean that support IPv6 but do not route a /64 to your VM are not positioned to offer IPv6 connectivity to containers. You’ll have to use the Neighbor Discovery hack that I described in another post.

I’m not sure why docker doesn’t have an option to connect containers directly to a bridge that includes the internet-facing port. This is easy to accomplish with LXC. I suspect it can be done with docker, but I don’t know how. Perhaps someone with more knowledge of docker can explain how to attach the daemon to a bridge with the LAN interface.

I’ll note that the docker build environment appears to have a bug with name resolution in the build container if IPv6 DNS servers are in /etc/resolv.conf. I didn’t want to invest the time to troubleshoot it. You can comment out the IPv6 DNS servers in the docker host’s /etc/resolv.conf file to avoid the defect.
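If you hit the defect, the workaround can be scripted: comment out any nameserver line whose address looks like IPv6 (contains a colon). A minimal sketch (the helper name is mine, and the transformation is shown on a string rather than on the live file):

```python
# Comment out IPv6 nameserver lines in resolv.conf-style text.
# An address containing ':' is treated as IPv6; other lines pass through.

def comment_ipv6_nameservers(text):
    out = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver" and ":" in parts[1]:
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)

conf = "nameserver 8.8.8.8\nnameserver 2001:4860:4860::8888"
print(comment_ipv6_nameservers(conf))
```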

If you run into problems, let me know in the comments.


Executing Arbitrary Junos ‘Show’ Commands with PyEZ and ncclient

Still screen scraping routers with Expect scripts? It’s time to move to NETCONF, the industry standard for communicating with network infrastructure. The protocol was heavily influenced by Juniper’s XML API, so it is not surprising that Junos routers have solid NETCONF support. In this post, I’ll detail how an operator can use ncclient and PyEZ to execute arbitrary ‘show’ commands to obtain operational state. By “arbitrary”, I mean any valid Junos operational command that is not known at the time the script is written; the input could take the form of a file.

Let’s start with PyEZ, as this framework is much more friendly to network engineers who dabble in Python. To execute a ‘show’ command, the documentation recommends using the associated RPC for the command. You can map ‘show’ commands to RPCs by appending ‘| display xml rpc’. Here’s example output.

jeffl@SRX_R1> show version | display xml rpc
<rpc-reply xmlns:junos="">
    <rpc>
        <get-software-information>
        </get-software-information>
    </rpc>
</rpc-reply>

The RPC in this example is get-software-information. This process is covered in more depth on the PyEZ wiki.
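The mapping from RPC tag to PyEZ method name is mechanical: PyEZ exposes each RPC as a method on Device.rpc, with the hyphens replaced by underscores. A quick sketch of the transform (the helper name is my own):

```python
# PyEZ exposes each Junos RPC on Device.rpc, with the hyphens in the
# RPC tag turned into underscores.

def rpc_to_method(rpc_tag):
    return rpc_tag.replace("-", "_")

print(rpc_to_method("get-software-information"))  # get_software_information

# With a connected PyEZ Device instance jdev, the call would then look like:
#   getattr(jdev.rpc, rpc_to_method("get-software-information"))()
```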

If you wanted to create a script that executed user-specified ‘show’ commands in a text file, do you see the problem? You’d need some hack to obtain the RPC for the ‘show’ command. Fortunately, PyEZ makes use of the <command> tag shortcut that is also employed by slax scripts. The method is Device.cli() (my thanks go to Kurt Bales and Nitin Kumar on the PyEZ mailing list for pointing this out to me).

The documentation rightly warns against the use of Device.cli(); the purpose of PyEZ is to get away from screen scraping and to handle the router’s response programmatically. Fortunately, Device.cli() accepts a format=’xml’ parameter. This returns the RPC reply in XML for easy parsing in the language of your choosing. An example of this is op = jdev.cli(command, format=’xml’), where jdev is an instance of the Device class.
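Once the reply is in XML, parsing is straightforward. A minimal sketch using the standard library’s ElementTree (lxml offers the same interface plus full XPath); the reply fragment is hand-written to stand in for real router output, and the values are invented:

```python
import xml.etree.ElementTree as ET

# Hand-written stand-in for the XML that cli(command, format='xml')
# would return for 'show version'; the values are invented.
reply = """<software-information>
  <host-name>SRX_R1</host-name>
  <product-model>srx210h</product-model>
</software-information>"""

tree = ET.fromstring(reply)
print(tree.findtext("host-name"))       # SRX_R1
print(tree.findtext("product-model"))   # srx210h
```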

The PyEZ module is great for many automation tasks. The module is not needed for this use case, however. We can use the lower-level ncclient to achieve the same result. This requires a higher comfort level with Python and the lxml module.

The following code borrows from the Juniper examples in the ncclient github repo.


#!/usr/bin/env python
# Demonstrates the use of the 'command' tag to execute arbitrary 'show' commands.
# This code was inspired by Ebben Aries's at
# usage: python <show command> <xpath expression>
# python 'show route 2600::/64' '//rt-entry/nh'
# Jeff Loughridge
# August 2014

import sys

from lxml import etree
from ncclient import manager
from ncclient.xml_ import *

def connect(host, port, user, password, source):
    try:
        show_command = sys.argv[1]
    except IndexError:
        print "please specify show command as first argument."
        sys.exit(1)

    try:
        xpath_expr = sys.argv[2]
    except IndexError:
        xpath_expr = None

    conn = manager.connect(host=host,
            port=port,
            username=user,
            password=password,
            device_params={'name': 'junos'},
            hostkey_verify=False)

    try:
        result = conn.command(command=show_command, format='xml')
    except Exception, e:
        print "Encountered critical error"
        print e
        sys.exit(1)

    tree = etree.XML(result.tostring)

    if xpath_expr:
        filtered_tree_list = tree.xpath(xpath_expr)
        for element in filtered_tree_list:
            print etree.tostring(element)
    else:
        print etree.tostring(tree)

if __name__ == '__main__':
    connect('ROUTER', 830, 'USER', 'PASSWORD', 'candidate')

I posted this as a gist.

To use this example, there are several prerequisites.

  1. You must have a Junos router configured with ‘set system services netconf’.
  2. The ROUTER, USER, and PASSWORD placeholders in the script must be filled in with a valid router IP/FQDN, user, and password.
  3. The lxml and ncclient modules must be installed. These are on PyPI and can be installed with pip.

The script supports the optional use of XPath expressions to parse the output. XPath is a complex topic that is covered elsewhere. The parsing can be performed with lxml’s ElementPath or countless other Python modules used to parse XML.
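As a concrete illustration, here is how the //rt-entry/nh expression from the usage example could be applied. The fragment below is an invented stand-in loosely shaped like Junos ‘show route’ XML output, and I use the standard library’s ElementTree, which supports the relative form of the expression:

```python
import xml.etree.ElementTree as ET

# Invented fragment loosely shaped like Junos 'show route' XML output.
reply = """<route-information>
  <route-table>
    <rt>
      <rt-destination>2600::/64</rt-destination>
      <rt-entry>
        <nh><to>2001:db8::1</to></nh>
      </rt-entry>
    </rt>
  </route-table>
</route-information>"""

tree = ET.fromstring(reply)
# './/rt-entry/nh' is the relative form of '//rt-entry/nh'.
for nh in tree.findall(".//rt-entry/nh"):
    print(nh.findtext("to"))            # 2001:db8::1
```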

I hope readers find this example useful.


IPv6 in Docker Containers on DigitalOcean

This post details how I enabled IPv6 addresses in docker containers on DigitalOcean.

DigitalOcean supports IPv6 in its London 1 and Singapore 1 data centers as of July 2014. Create a droplet using the Ubuntu 14.04 image and DO’s docker installation under Applications in the “Create VM” screen. Make sure to check “IPv6”.

DO provides a VM with 16 addresses within a /64. The host’s eth0 is assigned 0x1 as the last hex character. I give the docker0 interface 0x4. This leaves 0x5 to 0xf for my containers.

Let’s take an example. DO gives me 2a03:b0c0:1:d0::18:d000 to 2a03:b0c0:1:d0::18:d00f. docker0 gets 2a03:b0c0:1:d0::18:d004. The last double octet for containers ranges from 0xd005 to 0xd00f.
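The carving described above is easy to check with Python’s ipaddress module; a sketch using the example range (the 16 addresses from ::d000 through ::d00f):

```python
import ipaddress

# The 16-address block from the example above.
base = ipaddress.IPv6Address("2a03:b0c0:1:d0::18:d000")
block = [base + i for i in range(16)]

docker0 = block[4]       # the 0x4 address goes on the docker0 bridge
containers = block[5:]   # 0x5 through 0xf are left for containers

print(docker0)                          # 2a03:b0c0:1:d0::18:d004
print(containers[0], containers[-1])    # 2a03:b0c0:1:d0::18:d005 2a03:b0c0:1:d0::18:d00f
```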

To break up the 16 addresses provided by DO, we need a way for containers to respond to IPv6 neighbor discovery on the host’s eth0 interface. This could be performed using an ND proxy daemon (see here for one implementation). For most docker use cases, static ND proxy entries should do.

docker does not natively support IPv6. LXC, the foundation of docker, can handle IPv6 in containers. As of docker 1.0, the software uses libcontainer by default instead of LXC, so we’ll have to configure /etc/default/docker to use the LXC driver.

See below for an example of one-time set-up of the droplet.


# enable IPv6 forwarding and ND proxying
echo net.ipv6.conf.all.proxy_ndp=1 >> /etc/sysctl.conf
echo net.ipv6.conf.all.forwarding=1 >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

# install LXC
sudo apt-get install -y lxc

# use the LXC driver
echo DOCKER_OPTS=\"--exec-driver=lxc\" >> /etc/default/docker

service docker restart

The script below demonstrates how to set-up the static ND proxy entries. Make sure to change the V6_START variable.

# This script provides an example of setting up IPv6 static
# ND proxy entries. Edit the V6_START to match
# what you see in the DO control panel
V6_START=2a03:b0c0:1:d0::18:d000

# strip the last hex character
V6_MINUS_LAST_HEX_CHAR=`echo $V6_START|sed s/.$//`
ip addr add ${V6_MINUS_LAST_HEX_CHAR}4/124 dev docker0
echo "adding ND proxy entries..."
for character in 4 5 6 7 8 9 a b c d e f; do
  echo "ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0"
  ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0
done

Now we’re ready to bring up the container. The first argument must be an IPv6 address in your assigned
range with the last double octet between 0xXXX5 and 0xXXXF. For me, this is 0xd005 to 0xd00f.

# first argument to script must be IPv6 address from DO-allocated
# space that is not part of the first /126 (e.g. 0x4 to 0xF as last
# hex character)
IPV6_ADDRESS=$1
if [ -z "$IPV6_ADDRESS" ]; then
  echo "please specify IPv6 address for container's eth0"
  exit 1
fi
echo "container eth0: $IPV6_ADDRESS"

# run container so that docker0 gets a link local address
docker run busybox:ubuntu-14.04 /bin/true
docker rm $(docker ps -lq)

# get docker0's link local address
LINK_LOCAL=$(
  ip addr show docker0 | \
    grep "inet6 fe80" | \
      awk '{print $2}' | \
        sed 's/\/.*//' \
)

if [ -z "$LINK_LOCAL" ]; then
  echo "unable to find link local address on docker0. something is wrong."
  exit 1
fi
echo "docker0 link local: $LINK_LOCAL"

docker run -i -t \
   --lxc-conf="lxc.network.flags = up" \
   --lxc-conf="lxc.network.ipv6 = $IPV6_ADDRESS/124" \
   --lxc-conf="lxc.network.ipv6.gateway = $LINK_LOCAL" busybox:ubuntu-14.04 /bin/sh

Executing the script will put you in interactive mode in a shell in the container. Try ‘ping6 2600::’ to test connectivity. If you are having trouble, let me know in the comments.

I want to thank Andreas Neuhaus for his IPv6 in Docker Containers post and his suggestion to use static ND proxy when the provider does not route a /64 to your docker host.

UPDATE 1/17/2015 – Docker now has native IPv6 functionality. See this post.

Five Years of Going Solo

About six years ago, I knew my career needed a change in direction. What I expected at the time resembled nothing close to what transpired. In the fall of 2008, my circumstances were such that starting an independent consulting business seemed very appealing. (Note that I did not say “ideal,” as I suspect that starting a business is like having a baby for most people. There is never an ideal time.) In addition, an opportunity for three months of consulting work landed in my lap thanks to former co-workers. I went for it. Brooks Consulting was born.

Helping large organizations–typically Tier 1 ISPs and wireless providers–over the last five years has been a very enriching experience. I’ve been exposed to many different networking environments and met numerous sharp engineers. I’ve been very fortunate that the positives of this line of work have vastly outweighed the negatives.

I would not be writing this blog entry without my professional network and clients. Of all my projects, I’d estimate that 98% originated through word-of-mouth or referrals. I feel humbled.

Thanks for all the support. I look forward to continuing my work with existing clients and finding new projects to take on.

Peter Löthberg’s Terastream Presentation at RIPE 67

Do you ever wonder why the industry keeps layering complexity on top of complexity to scale IP networks? Perhaps you feel like there must be a better way to build IP networks.

Peter describes an alternative. Build a dumb network that provides one service–simple IPv6 transport–and deliver all other services from commodity x86 hardware.

If you watch one presentation on IP design this year, make it this one.

TeraStream – A Simplified IP Network Service Delivery Model



L2TPv3 in Linux Using IPv6 Endpoints

Pseudowires have traditionally been deployed in ISP and wireless provider networks to carry Ethernet and TDM frames across an IP/MPLS network. Now you can find an implementation of L2TPv3 in the Linux kernel. Pseudowires for the masses without the need for an MPLS network! You get the added benefit of open source code that can be modified to meet requirements specific to your environment.

L2TPv3 is a lightweight protocol for transporting L2 frames across an IP network (see RFC3931). Cisco’s implementation can carry a variety of L2 protocols (ATM, FR, Ethernet, TDM) while Linux supports Ethernet and PPP. I have use cases in mind in the IaaS space, so Ethernet pseudowires will do the trick. The Linux implementation supports static tunnels only. If you want an L2TPv3 control channel, check out Katalix’s commercial software, ProL2TP.

Some bad news–L2TPv3 will not work out of the box with any of the major Linux distributions (this holds true as of Sept 2013 at least). You need a more recent version of iproute2 that provides the “ip l2tp” configuration command. If you want IPv6 endpoints, you’ll also need kernel 3.5 or later.

I’ll step through an example of getting L2TPv3 to function with IPv6 in a debian wheezy VM. I’m using a 32-bit Debian 7.1 image.

Install Required Packages

root@debian:~# apt-get install flex bison libdb5.1-dev build-essential kernel-package bridge-utils

Install iproute2

Download iproute2 from this link. I linked to version 3.11, the latest at the time of this writing. Earlier versions may work. I can’t determine where this patch to handle IPv6 endpoints got merged with the code.

Execute ./configure and ignore the error about xtables. Run ‘make’ to compile. I renamed /sbin/ip to /sbin/ip-ss120521 and moved the new binary in iproute2-3.11.0/ip/ip to /sbin/ip. If you want to put the new ip in /usr/local/sbin, you can do that instead. I got burned doing this because I didn’t realize that bash caches the paths of binaries it has already run. Logging out and logging back in is one way to clear the cache; running ‘hash -r’ is another.

Run ‘ip -V’ to ensure you are using the correct binary. With iproute2-3.11.0, you’ll see ‘ip utility, iproute2-ss130903’.

Compile 3.5 or Later Kernel

The kernel compilation can be very slow. I set the VM memory to 2 GB with two processors.

root@debian:~# wget

root@debian:~# tar xf linux-3.11.1.tar.xz

root@debian:~# cd linux-3.11.1

root@debian:~/linux-3.11.1# cat /boot/config-`uname -r` > .config

root@debian:~/linux-3.11.1# yes "" | make oldconfig

root@debian:~/linux-3.11.1# make-kpkg clean

root@debian:~/linux-3.11.1# time fakeroot make-kpkg --initrd --revision=3.5.0 --append-to-version=-1custom kernel_image kernel_headers

root@debian:~/linux-3.11.1# dpkg -i ../linux-*1custom*

root@debian:~/linux-3.11.1# shutdown -r now

Note: I borrowed most of the compilation steps from lindqvist’s blog. I’ll also point out that you can edit .config to build L2TPv3 into the kernel rather than as a loadable module.

You’ll want to take a snapshot of this VM and make clones based on it. The next step assumes you have two Debian VMs with L2TPv3.

Build L2TPv3 Tunnels

Let’s assume you have two VMs. The eth0 interface on both is on a shared network (2001:DB8::/64). The eth1 interfaces connect to the networks you want bridged together. Since the transport is IP, there is actually no need for a shared network; the remote endpoint could be any node that is reachable by IP.


On the first VM:

root@debian:~# modprobe l2tp_eth

root@debian:~# ip l2tp add tunnel tunnel_id 3000 peer_tunnel_id 4000 encap udp local 2001:DB8::1 remote 2001:DB8::2 udp_sport 5000 udp_dport 6000

root@debian:~# ip l2tp add session tunnel_id 3000 session_id 1000 peer_session_id 2000

root@debian:~# ip link set l2tpeth0 up mtu 1488

root@debian:~# brctl addbr br0

root@debian:~# brctl addif br0 l2tpeth0

root@debian:~# brctl addif br0 eth1

root@debian:~# ip link set br0 up promisc on


On the second VM:

root@debian:~# modprobe l2tp_eth

root@debian:~# ip l2tp add tunnel tunnel_id 4000 peer_tunnel_id 3000 encap udp local 2001:DB8::2 remote 2001:DB8::1 udp_sport 6000 udp_dport 5000

root@debian:~# ip l2tp add session tunnel_id 4000 session_id 2000 peer_session_id 1000

root@debian:~# ip link set l2tpeth0 up mtu 1488

root@debian:~# brctl addbr br0

root@debian:~# brctl addif br0 l2tpeth0

root@debian:~# brctl addif br0 eth1

root@debian:~# ip link set br0 up promisc on

You can verify the tunnel with ‘ip l2tp show tunnel’. It should look like the following.

root@debian:~# ip l2tp show tunnel
Tunnel 4000, encap UDP
From 2001:db8::2 to 2001:db8::1
Peer tunnel 3000
UDP source / dest ports: 6000/5000

Now you have a pseudowire between the networks connected to eth1 of both VMs.
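Since the two endpoints mirror each other’s tunnel, session, and port parameters, it can help to generate both command sets from a single description rather than typing them twice. A sketch in Python (the function and field names are mine; the emitted strings follow the ‘ip l2tp’ syntax used above):

```python
# Generate the mirrored 'ip l2tp' commands for one side of a static
# Ethernet pseudowire. Swapping local/remote and the ID/port pairs
# yields the other side, which avoids transposition mistakes.

def l2tp_commands(local, remote, tid, peer_tid, sid, peer_sid, sport, dport):
    return [
        "ip l2tp add tunnel tunnel_id %d peer_tunnel_id %d encap udp "
        "local %s remote %s udp_sport %d udp_dport %d"
        % (tid, peer_tid, local, remote, sport, dport),
        "ip l2tp add session tunnel_id %d session_id %d peer_session_id %d"
        % (tid, sid, peer_sid),
    ]

# The two VMs from the example above.
side_a = l2tp_commands("2001:DB8::1", "2001:DB8::2", 3000, 4000, 1000, 2000, 5000, 6000)
side_b = l2tp_commands("2001:DB8::2", "2001:DB8::1", 4000, 3000, 2000, 1000, 6000, 5000)
for cmd in side_a + side_b:
    print(cmd)
```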

If you have any problems while following my directions, let me know in the comments. I will make corrections. My thanks go to James Chapman and his peers at Katalix for writing the code and providing pointers on getting it working.

Simplifying Your Junos SLAX Development Environment

I’m excited by the possibilities that network programmability offers network operators. Through JUNOS SLAX scripts, Juniper has offered a simple mechanism for programming its routers for many years.

Over the last several months, I’ve been writing primarily ops scripts in SLAX. I want to share my findings on how to simplify a SLAX development environment.

JUISE/libslax – If you want to write SLAX scripts, download the JUNOS User Interface Scripting Environment (juise) for “off-the-box” scripting. juise requires libslax, another open source project by Juniper. The combination allows you to remotely execute ops and commit scripts on JUNOS devices. You don’t have to manually scp/ftp scripts from your server to the routers. libslax contains a SLAX syntax checker called slaxproc. JUISE has a built-in debugger that will make you wonder how you ever got anything done in SLAX without it.

refresh command – Another way to avoid manually copying SLAX scripts to routers is the use of the refresh command in JUNOS configuration mode. I use a lightweight web server called mongoose to serve files from the directory in which I write the scripts. Any web server will do the trick though.

I set the sources in the config using this syntax.

set system scripts op file script1.slax source
set system scripts op file script2.slax source
set system scripts op file slax-doctor.slax source

Now you can execute ‘set system scripts op refresh’ and JUNOS will download the files to /var/db/script/ops. Invoking the command from configuration mode feels very un-JUNOS-like. You’ll be prompted by the CLI when exiting config mode to confirm that you want to exit with uncommitted changes.

slax-doctor – Curtis Call’s slax-doctor SLAX script is invaluable for SLAX scripting. libslax does very limited error checking, leaving most errors undiscovered until runtime. The reporting of errors often does not identify the one unterminated string or other typo that is wreaking havoc on the script. I execute slax-doctor every time I make a non-trivial change. You can download the script here or in the collection of scripts in the junoscriptorium project on github.

op invoke-debugger cli – JUNOS 13.1 introduces official support of the ‘op invoke-debugger cli‘ operational mode command. The command is hidden in some prior versions of JUNOS. I’ve used the on-box debugger very rarely. I prefer using the debugger in juise.

If you develop SLAX scripts and want to share other tips, please include them in the comments section. For other readers who have not delved into SLAX, start reading the This Week: Applying Junos Automation ebook and get coding today.

IPv6 in XCP 1.6

The intent of this post is to document how to enable IPv6 in XCP 1.6 and manage the host using IPv6 transport. I hope Google leads many people to this page, as I wasn’t able to find anything else on the web on the subject. I’d like to see more people experimenting with IPv6 on XCP hosts.

XCP 1.6 is built on an optimized Centos 6 dom0 kernel. Enabling IPv6 is not simply a matter of editing files in /etc/sysconfig/ as you would for a typical Centos server; XCP takes over network configuration during system start-up. Fortunately, the process is very straightforward. To manage IPv6 networking on an XCP host, you must be comfortable with the ‘xe’ command line tools, as this cannot be performed using XenCenter.

Here are the steps for enabling IPv6 and configuring an IPv6 address.

  1. Log in to the XCP host as root and execute ‘/opt/xensource/bin/xe-enable-ipv6 enable’.
  2. Reboot.
  3. Verify that an IPv6 link-local address appears on xenbr0 using ‘ip -6 addr show dev xenbr0’. You should see a /64 address that begins with fe80 (e.g., fe80::20c:29ff:fe48:5bc9/64).
  4. Configure an IPv6 address on the host using ‘xe pif-reconfigure-ipv6’.
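Incidentally, the link-local address in step 3 is not arbitrary: it is derived from the interface’s MAC address via EUI-64 (flip the universal/local bit of the first octet and insert ff:fe in the middle). A sketch that reproduces the example address, assuming the MAC it implies:

```python
import ipaddress

# Derive the EUI-64 link-local address from a MAC address:
# flip the universal/local bit of the first octet, insert ff:fe
# in the middle, and prefix with fe80::/64.
def mac_to_link_local(mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) + iid)

# MAC inferred from the example address in the text.
print(mac_to_link_local("00:0c:29:48:5b:c9"))  # fe80::20c:29ff:fe48:5bc9
```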

I’ll provide some sample ‘xe pif-reconfigure-ipv6’ configuration commands.

Static IPv6 address – ‘xe pif-reconfigure-ipv6 mode=static uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>’. The address is specified using the standard IPv6 notation with a slash followed by the prefix length (e.g., 2001:DB8::2/64). Fill in the UUID parameter with the Physical Interface (PIF) UUID of your management domain as provided by ‘xe pif-list’.

Stateless Autoconfiguration  (SLAAC) – ‘xe pif-reconfigure-ipv6 mode=autoconf uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>’. After executing this command the XCP host will create an IPv6 address using IPv6 Route Advertisements (RA). If there are no routers sending RAs on your network, the XCP host will not assign an address to xenbr0.

DHCPv6 – ‘xe pif-reconfigure-ipv6 mode=dhcp uuid=<PIF_UUID>’. XCP starts dhcp6c after the command is executed; however, the address assignment does not take place. If anyone wants DHCP on the hypervisor, feel free to fire up wireshark and track down the problem. I’ll update the post.

I’ve verified that the ‘xe’ commands can be run remotely using IPv6 transport. XenCenter also connects over IPv6. If you use an IPv6 literal, you must enclose it in brackets as you would in a web browser (e.g., [2001:DB8::2]). As I mentioned earlier in the post, XenCenter cannot configure IPv6 networking.

I want to thank Grant McWilliams for his tips that led me to figuring out IPv6 in XCP 1.6.

UPDATE 9/4/13 – This process also works in Xenserver 6.2.

The Glorious Return of End-to-end Connectivity with IPv6

For almost 15 years, I relied on hacks and trickery to get servers in my residence to communicate with Internet hosts. I used DMZ hosts, port forwarding, and a variety of tunneling mechanisms. The situation deteriorated four years ago when I moved into a neighborhood with a monopoly ISP that implemented NAT in its infrastructure. The days of contorting my services to work around the address space scarcity kludges are over. End to end connectivity is restored.

My ISP allocated a /64 of IPv6 addresses to me in early 2013. Almost every IP-enabled device in my abode has an IPv6 stack. Some do SLAAC only while others can obtain IPs with DHCPv6. (Apparently printer manufacturers have disdain for IPv6. The printer I bought in 2012 can’t communicate via IPv6.)

I can access my home servers anywhere I go. Verizon’s Mobile Hotspot MiFi 5510L supplies an IPv6 address with both EVDO and LTE access. If I’m deep inside a building with no wireline IPv6 access, I have to fire up my SixXS AYIYA tunnel to connect to my home servers. This isn’t ideal but is better than no connectivity.

The days of the Internet over HTTP are coming to an end. We’ll all benefit as application designers implement innovative services unrestricted by NAT.

A Press Release is Not an IPv6 Network Strategy

A strategy for enabling your network for IPv6 requires significant planning. The planned future state must be an IPv6-only network rather than dual stack. How do you get there from an IPv4-only network? I’ll cover some broad themes, as the details are too many for a single post. I’ll also share my thoughts on what an IPv6 strategy is not.

Take the hypothetical conversation between two network engineer colleagues.

Engineer 1: What’s our strategy for IPv6?

Engineer 2: We posted a press release. Check it out on our media relations page.

Engineer 1: <Reads press release> Hmm. OK. I see that we’re going to provide our customers access to the IPv6 Internet by end-of-year 2013. How are we going to do that?

Engineer 2: You know…I’m not sure who’s working on that.

A press release–or any other high-level assertion about IPv6 enablement–is not an IPv6 strategy. It is a goal and only that. You can’t roll out a scalable, production-quality service without focusing on crucial aspects of the deployment. Some examples follow.

  • IPv6 addressing schemes
  • Address assignment mechanisms
  • Internal and 3rd party applications/services
  • Security
  • IP infrastructure tools and systems
  • Transition technologies
  • Remote access
  • End system and network IPv4/IPv6 protocol interaction
  • Routing protocol selection

The carriers write detailed network design documents for new services and infrastructure components. The design document is shared with all stakeholders and revised based on feedback. This practice should be adopted by enterprises and other entities that operate large IP networks. There is necessary complexity in IP networks that can’t be informally passed along in organizational memory. Experience tells us that writing documentation is time-consuming and that engineers don’t like doing it. I believe a formalized design document is an absolute requirement for network changes on the scale of IPv6 introduction.

If the IPv6 design document is thorough and communicated throughout the organization, you’ll be positioned to avoid issues such as a department continuing to invest in IPv4-only hosts, applications, and network infrastructure. The lack of last-minute surprises helps you meet your IPv6 goals and breaks your dependence on the rapidly depleting IPv4 space.

Get those IPv6 design documents written. When you have an actionable plan, then you can issue the press release.