5 Minutes to Firecracker with Packet

I was eager to try Firecracker after hearing about its release at re:Invent 2018. The microVM technology requires direct access to hardware virtualization (KVM), so a bare metal server is needed. GCP and AWS both offer bare metal machines, but I opted to go with Packet, a company that specializes in bare metal-as-a-service.

You can sign up for a Packet account that comes with a $25 service credit. Use it to fire up a c1.small.x86 server; I tried the smaller t1.small.x86 server type but encountered problems. I found available capacity in the EWR1 region.

Firecracker requires a 4.14 or later kernel, so I selected Ubuntu 18.04 LTS as the operating system to get one. Then off to the races!

Packet’s automation provisions bare metal servers in about a minute.
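
Before fetching the Firecracker binary, it’s worth confirming that the server exposes KVM and runs a new enough kernel. A quick sanity check:

root@firecracker:~# uname -r
root@firecracker:~# ls -l /dev/kvm

If /dev/kvm is missing, Firecracker won’t be able to start microVMs.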

SSH Window 1

root@firecracker:~# curl -LOJ https://github.com/firecracker-microvm/firecracker/releases/download/v0.11.0/firecracker-v0.11.0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 610 0 610 0 0 4039 0 --:--:-- --:--:-- --:--:-- 4039
100 6006k 100 6006k 0 0 11.2M 0 --:--:-- --:--:-- --:--:-- 11.2M
curl: Saved to filename 'firecracker-v0.11.0'
root@firecracker:~# mv firecracker-v0.11.0 firecracker
root@firecracker:~# chmod +x firecracker
root@firecracker:~# ./firecracker --api-sock /tmp/firecracker.socket

 

SSH Window 2

root@firecracker:~# curl -fsSL -o hello-vmlinux.bin https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
root@firecracker:~# curl -fsSL -o hello-rootfs.ext4 https://s3.amazonaws.com/spec.ccfc.min/img/hello/fsfiles/hello-rootfs.ext4
root@firecracker:~# curl --unix-socket /tmp/firecracker.socket -i \
> -X PUT 'http://localhost/boot-source' \
> -H 'Accept: application/json' \
> -H 'Content-Type: application/json' \
> -d '{
> "kernel_image_path": "./hello-vmlinux.bin",
> "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
> }'
HTTP/1.1 204 No Content
Date: Wed, 05 Dec 2018 09:52:32 GMT

root@firecracker:~# curl --unix-socket /tmp/firecracker.socket -i \
> -X PUT 'http://localhost/drives/rootfs' \
> -H 'Accept: application/json' \
> -H 'Content-Type: application/json' \
> -d '{
> "drive_id": "rootfs",
> "path_on_host": "./hello-rootfs.ext4",
> "is_root_device": true,
> "is_read_only": false
> }'
HTTP/1.1 204 No Content
Date: Wed, 05 Dec 2018 09:52:39 GMT
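
Optionally, before starting the instance, you can size the microVM. If I recall the v0.11 API correctly, a machine-config call like the following requests 2 vCPUs and 1 GiB of memory; treat it as a sketch and check the API docs for your release.

root@firecracker:~# curl --unix-socket /tmp/firecracker.socket -i \
> -X PUT 'http://localhost/machine-config' \
> -H 'Accept: application/json' \
> -H 'Content-Type: application/json' \
> -d '{
> "vcpu_count": 2,
> "mem_size_mib": 1024
> }'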

root@firecracker:~# curl --unix-socket /tmp/firecracker.socket -i \
> -X PUT 'http://localhost/actions' \
> -H 'Accept: application/json' \
> -H 'Content-Type: application/json' \
> -d '{
> "action_type": "InstanceStart"
> }'
HTTP/1.1 204 No Content
Date: Wed, 05 Dec 2018 09:52:46 GMT

root@firecracker:~#

 

Now if you return to Window 1, you’ll see the VM’s boot messages and login prompt.

Welcome to Alpine Linux 3.8
Kernel 4.14.55-84.37.amzn2.x86_64 on an x86_64 (ttyS0)

localhost login:

 

You can access the VM using root/root.

I encourage readers to check out Packet while experimenting with Firecracker. Edge computing is a hot topic and this company has an impressive service.

Installing Python 3.6 for AWS Lambda Development

Python 3.6 is currently the only Python 3.x version that AWS Lambda supports. I’m writing this post to document the installation process on the Linux distributions I use.

Amazon Linux AMI 2017.09 or later

On Amazon Linux AMI 2017.09 or later, the packages are available from the default repositories.

sudo yum install python36 python36-virtualenv python36-pip

The Lambda execution environment uses Amazon Linux. Building on this distribution ensures that modules with C extensions will be binary compatible with the runtime.
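
To see why this matters in practice, here is a rough sketch of packaging a function with a C-extension dependency on Amazon Linux. Treat it as an outline: numpy is just an example dependency, handler.py is a hypothetical handler module, and you may need the python36-virtualenv package if the venv module isn’t available.

python3.6 -m venv env
. env/bin/activate
pip install numpy                     # example C-extension dependency
cd env/lib/python3.6/site-packages
zip -r9 ~/lambda-deploy.zip .         # bundle the compiled modules
cd ~
zip -g lambda-deploy.zip handler.py   # add your (hypothetical) handler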

 

Amazon Linux release 2 (2017.12) LTS Release Candidate

sudo yum install python3

 

Ubuntu 18.04 LTS

Python 3.6 is installed by default. :)

 

Ubuntu 16.04 LTS

You have a number of options for installing Python 3.6 on this Ubuntu version. Check out this post for the list. I prefer to use J Fernyhough’s PPA.

sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository ppa:jonathonf/python-3.6
sudo apt-get update
sudo apt-get install python3.6 python3.6-venv

 

I encourage pythonistas to try writing an app to be deployed on Lambda. Using the chalice framework vastly simplifies deployment.
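
For the curious, a minimal chalice workflow looks something like this (hello-lambda is a made-up project name, and you’ll need AWS credentials configured beforehand):

pip install chalice
chalice new-project hello-lambda
cd hello-lambda
chalice deploy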

Native IPv6 Functionality in Docker

The days of kludgy hacks for IPv6-connected docker containers are over. A recent PR merged native IPv6 functionality into the docker daemon. As of 1/17/2015, the new bits have not yet made it into the docker PPA package, so some assembly is required.

You’ll need to compile docker from source, and the only currently supported docker build process uses docker itself. Does this remind anyone of the movie Inception?

Here’s an installation process you can use on a fresh 64-bit Ubuntu 14.04 VM.

 

sudo apt-get update
sudo apt-get -y install git build-essential docker.io
git clone https://github.com/docker/docker.git
cd docker
sudo make build
sudo make binary
sudo service docker.io stop
sudo ip link set docker0 down
sudo ip link delete docker0
VERSION=1.4.1  # version as of this writing
sudo cp ./bundles/$VERSION-dev/binary/docker-$VERSION-dev $(which docker)
sudo echo "DOCKER_OPTS='--ipv6 --fixed-cidr-v6=\"2001:DB8::/64\"'" >> /etc/default/docker.io
service docker.io start

Docker does not put an address from the /64 on the docker0 bridge; it uses fe80::1/64, and the default route in the container is set to this link-local address.

Your containers will not be able to communicate with the IPv6 internet unless the /64 you’ve selected is routed to the docker host; unlike how docker handles IPv4 in containers, there is no NAT. Use a provider that will route the /64 to your docker host. Linode did this for me after I emailed the request to its support team. Providers such as DigitalOcean that support IPv6 but do not route a /64 to your VM are not positioned to offer IPv6 connectivity to containers; there you’ll have to use the Neighbor Discovery hack that I described in another post.
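
Once the daemon restarts with --ipv6 enabled and your /64 is routed to the host, a quick sanity check from a container looks something like this; look for a global address from your /64 in the output, and note that 2600:: is just a convenient ping target:

docker run --rm busybox ip addr show dev eth0
docker run --rm busybox ping6 -c 3 2600::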

I’m not sure why docker doesn’t have an option to connect containers directly to a bridge that includes the internet-facing port. This is easy to accomplish with LXC, and I suspect it can be done with docker, though I don’t know how. Perhaps someone with more knowledge of docker can explain how to attach the daemon to a bridge with the LAN interface.

I’ll note that the docker build environment appears to have a bug with name resolution in the build container if IPv6 DNS servers are in /etc/resolv.conf. I didn’t want to invest the time to troubleshoot it. You can comment out the IPv6 DNS servers in the docker host’s /etc/resolv.conf file to avoid the defect.

If you run into problems, let me know in the comments.

Executing Arbitrary Junos ‘Show’ Commands with PyEZ and ncclient

Still screen scraping routers with Expect scripts? It’s time to move to NETCONF, the industry standard for communicating with network infrastructure. The protocol was heavily influenced by Juniper’s XML API, so it’s not surprising that Junos routers have solid NETCONF support. In this post, I’ll detail how an operator can use ncclient and PyEZ to execute arbitrary ‘show’ commands to obtain operational state. By “arbitrary”, I mean any valid Junos operational command that is not known at the time the script is written; the input could take the form of a file.

Let’s start with PyEZ, as this framework is much more friendly to network engineers who dabble in Python. To execute a ‘show’ command, the documentation recommends using the associated RPC for the command. You can map ‘show’ commands to RPCs by appending ‘| display xml rpc’. Here’s example output.

jeffl@SRX_R1> show version | display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/12.2I0/junos">
    <rpc>
        <get-software-information>
        </get-software-information>
    </rpc>
    <cli>
        <banner></banner>
    </cli>
</rpc-reply>

jeffl@SRX_R1>

The RPC in this example is get-software-information. This process is covered in more depth on the PyEZ wiki.

If you wanted to create a script that executed user-specified ‘show’ commands in a text file, do you see the problem? You’d need some hack to obtain the RPC for the ‘show’ command. Fortunately, PyEZ makes use of the <command> tag shortcut that is also employed by slax scripts. The method is Device.cli() (my thanks go to Kurt Bales and Nitin Kumar on the PyEZ mailing list for pointing this out to me).

The documentation rightly warns against the use of Device.cli(). The purpose of PyEZ is to get away from screen scraping and toward programmatically handling the router’s response. Fortunately, Device.cli() accepts a format='xml' parameter, which returns the RPC reply in XML for easy parsing in the language of your choosing. An example is op = jdev.cli(command, format='xml'), where jdev is an instance of the Device class.

The PyEZ module is great for many automation tasks. The module is not needed for this use case, however. We can use the lower-level ncclient to achieve the same result. This requires a higher comfort level with Python and the lxml module.

The following code borrows from the Juniper examples in the ncclient github repo.

 

#!/usr/bin/env python
# Demonstrates the use of the 'command' tag to execute arbitrary 'show' commands.
# This code was inspired by Ebben Aries's command-jnpr.py at
# https://github.com/leopoul/ncclient/blob/master/examples/juniper/command-jnpr.py
#
# usage: python ncclient_demo.py <show command> <xpath expression>
# python ncclient_demo.py 'show route 2600::/64' '//rt-entry/nh'
# 
# Jeff Loughridge
# August 2014

import sys

from lxml import etree as etree
from ncclient import manager
from ncclient.xml_ import *

def connect(host, port, user, password, source):

    try:
        show_command = sys.argv[1]
    except IndexError:
        print "please specify show command as first argument."
        sys.exit(1)

    try:
        xpath_expr = sys.argv[2]
    except IndexError:
        xpath_expr=''

    conn = manager.connect(host=host,
            port=port,
            username=user,
            password=password,
            timeout=3,
            device_params = {'name':'junos'},
            hostkey_verify=False)

    try:
        result = conn.command(command=show_command, format='xml')
    except Exception, e:
        print "ncclient_demo.py: Encountered critical error"
        print e
        sys.exit(1)

    tree = etree.XML(result.tostring)

    if xpath_expr:
        filtered_tree_list = tree.xpath(xpath_expr)
        for element in filtered_tree_list:
            print etree.tostring(element)
    else:
        print etree.tostring(tree)

if __name__ == '__main__':
    connect('ROUTER', 830, 'USER', 'PASSWORD', 'candidate')

I posted this as a gist.

To use this example, there are several prerequisites.

  1. You must have a Junos router configured with ‘set system services netconf’.
  2. The ROUTER, USER, and PASSWORD placeholders in the script must be filled in with a valid router IP/FQDN, user, and password.
  3. The lxml and ncclient modules must be installed. These are in PyPi and can be installed with pip.

The script supports the optional use of XPath expressions to parse the output. XPath is a complex topic that is covered elsewhere. The parsing can be performed with lxml’s ElementPath or countless other Python modules used to parse XML.
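
Assuming the prerequisites are satisfied, invocation looks like the following. The XPath expression is only an illustration (get-software-information replies contain package-information/name elements); adjust it to whatever RPC reply you are parsing.

pip install ncclient lxml
python ncclient_demo.py 'show version' '//package-information/name'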

I hope readers find this example useful.

 

IPv6 in Docker Containers on DigitalOcean

This post details how I enabled IPv6 addresses in docker containers on DigitalOcean.

DigitalOcean supports IPv6 in its London 1 and Singapore 1 data centers as of July 2014. Create a droplet using the Ubuntu 14.04 image and DO’s docker installation under Applications in the “Create VM” screen. Make sure to check “IPv6”.

DO provides a VM with 16 addresses within a /64. The droplet’s eth0 gets 0x1 as the last hex character. I give the docker0 interface 0x4, which leaves 0x5 to 0xf for my containers.

Let’s take an example. DO gives me 2a03:b0c0:1:d0::18:d000 to 2a03:b0c0:1:d0::18:d00f. docker0 gets 2a03:b0c0:1:d0::18:d004. The last double octet for containers ranges from 0xd005 to 0xd00f.

To break up the 16 addresses provided by DO, we need a way for containers to respond to IPv6 neighbor discovery on the host’s eth0 interface. This could be done with an ND proxy daemon (see here for one implementation), but for most docker use cases, static ND proxy entries should do.

docker does not natively support IPv6. LXC, the foundation of docker, can handle IPv6 in containers, but as of docker 1.0 the software uses libcontainer by default instead of LXC. We’ll have to configure /etc/default/docker to use the LXC driver.

See below for an example of the one-time set-up of the droplet.

#!/bin/bash

# enable IPv6 forwarding and ND proxying
echo net.ipv6.conf.all.proxy_ndp=1 >> /etc/sysctl.conf
echo net.ipv6.conf.all.forwarding=1 >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

# install LXC
sudo apt-get install -y lxc

# use the LXC driver
echo DOCKER_OPTS=\"--exec-driver=lxc\" >> /etc/default/docker

service docker restart

The script below demonstrates how to set-up the static ND proxy entries. Make sure to change the V6_START variable.


#!/bin/bash
 
# This script provides an example of setting up IPv6 static
# ND proxy entries. Edit the V6_START to match
# what you see in the DO control panel
 
V6_START=2a03:b0c0:1:d0::18:d000
 
# strip the last hex character
V6_MINUS_LAST_HEX_CHAR=`echo $V6_START|sed s/.$//`
 
ip addr add ${V6_MINUS_LAST_HEX_CHAR}4/124 dev docker0
 
echo &quot;adding ND proxy entries...&quot;
for character in 4 5 6 7 8 9 a b c d e f; do
  echo &quot;ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0&quot;
  ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0
done

Now we’re ready to bring up the container. The first argument must be an IPv6 address in your assigned range with the last double octet between 0xXXX5 and 0xXXXF. For me, this is 0xd005 to 0xd00f.

#!/bin/bash
 
# first argument to script must be an IPv6 address from the DO-allocated
# space that is not docker0's address or part of the first /126 (e.g. 0x5
# to 0xF as the last hex character)
 
IPV6_ADDRESS=$1
 
if [ -z &quot;$IPV6_ADDRESS&quot; ]; then
  echo &quot;please specify IPv6 address for container's eth0&quot;
  exit 1
fi
 
echo &quot;container eth0: $IPV6_ADDRESS&quot;

# run container so that docker0 gets a link local address
docker run busybox:ubuntu-14.04 /bin/true
docker rm $(docker ps -lq)

# get docker0's link local address
LINK_LOCAL=$( \
  ip addr show docker0 | \
   grep "inet6 fe80" | \
    awk '{print $2}' | \
      sed 's/\/.*//' \
)

if [ -z "$LINK_LOCAL" ]; then
  echo "unable to find link local address on docker0. something is wrong."
  exit 1
fi

echo "docker0 link local: $LINK_LOCAL"

docker run -i -t \
   --lxc-conf="lxc.network.flags = up" \
   --lxc-conf="lxc.network.ipv6 = $IPV6_ADDRESS/124" \
   --lxc-conf="lxc.network.ipv6.gateway = $LINK_LOCAL" busybox:ubuntu-14.04 /bin/sh

Executing the script will put you in an interactive shell in the container. Try ‘ping6 2600::’ to test connectivity. If you are having trouble, let me know in the comments.

I want to thank Andreas Neuhaus for his IPv6 in Docker Containers post and his suggestion to use static ND proxy when the provider does not route a /64 to your docker host.

UPDATE 1/17/2015 – Docker now has native IPv6 functionality. See this post.

Five Years of Going Solo

About six years ago, I knew my career needed a change in direction. What I expected at the time resembled nothing close to what actually transpired. In the fall of 2008, my circumstances were such that starting an independent consulting business seemed very appealing. (Note that I did not say “ideal”; I suspect that for most people, starting a business is like having a baby: there is never an ideal time.) In addition, an opportunity for three months of consulting work landed in my lap thanks to former co-workers. I went for it. Brooks Consulting was born.

Helping large organizations–typically Tier 1 ISPs and wireless providers–over the last five years has been a very enriching experience. I’ve been exposed to many different networking environments and met numerous sharp engineers. I’ve been very fortunate that the positives of this line of work have vastly outweighed the negatives.

I would not be writing this blog entry without my professional network and clients. Of all my projects, I’d estimate that 98% originated through word-of-mouth or referrals. I feel humbled.

Thanks for all the support. I look forward to continuing my work with existing clients and finding new projects to take on.

Peter Löthberg’s Terastream Presentation at RIPE 67

Do you ever wonder why the industry keeps layering complexity on top of complexity to scale IP networks? Perhaps you feel like there must be a better way to build IP networks.

Peter describes an alternative. Build a dumb network that provides one service–simple IPv6 transport–and deliver all other services from commodity x86 hardware.

If you watch one presentation on IP design this year, make it this one.

TeraStream – A Simplified IP Network Service Delivery Model

Video

Presentation

L2TPv3 in Linux Using IPv6 Endpoints

Pseudowires have traditionally been deployed in ISP and wireless provider networks to carry Ethernet and TDM frames across an IP/MPLS network. Now you can find an implementation of L2TPv3 in the Linux kernel. Pseudowires for the masses without the need for an MPLS network! You get the added benefit of open source code that can be modified to meet requirements specific to your environment.

L2TPv3 is a lightweight protocol for transporting L2 frames across an IP network (see RFC 3931). Cisco’s implementation can carry a variety of L2 protocols (ATM, FR, Ethernet, TDM), while Linux supports Ethernet and PPP. I have use cases in mind in the IaaS space, so Ethernet pseudowires will do the trick. The Linux implementation supports static tunnels only; if you want an L2TPv3 control channel, check out Katalix’s commercial software, ProL2TP.

Some bad news: L2TPv3 will not work out of the box with any of the major Linux distributions (this holds true as of September 2013, at least). You need a more recent version of iproute2 that provides the “ip l2tp” configuration command, and if you want IPv6 endpoints, you’ll also need kernel 3.5 or later.

I’ll step through an example of getting L2TPv3 to function with IPv6 endpoints in a Debian wheezy VM. I’m using a 32-bit Debian 7.1 image.

Install Required Packages

root@debian:~# apt-get install flex bison libdb5.1-dev build-essential kernel-package bridge-utils

Install iproute2

Download iproute2 from this link. I linked to version 3.11, the latest at the time of this writing. Earlier versions may work; I can’t determine where the patch to handle IPv6 endpoints got merged into the code.

Execute ./configure and ignore the error about xtables. Run ‘make’ to compile. I renamed /sbin/ip to /sbin/ip-ss120521 and moved the new binary from iproute2-3.11.0/ip/ip to /sbin/ip. If you want to put the new ip in /usr/local/sbin, you can do that instead. I got burned doing this because I didn’t realize that bash retains a hash of binaries it has already found in your path; logging out and back in (or running ‘hash -r’) fixes this.

Run ‘ip -V’ to ensure you’re using the correct binary. With iproute2-3.11.0, you’ll see ‘ip utility, iproute2-ss130903’.

Compile 3.5 or Later Kernel

Kernel compilation can be very slow, so I set the VM memory to 2 GB with two processors.

root@debian:~# wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.1.tar.xz

root@debian:~# tar xf linux-3.11.1.tar.xz

root@debian:~# cd linux-3.11.1

root@debian:~/linux-3.11.1# cat /boot/config-`uname -r`>.config

root@debian:~/linux-3.11.1# yes "" | make oldconfig

root@debian:~/linux-3.11.1# make-kpkg clean

root@debian:~/linux-3.11.1# time fakeroot make-kpkg --initrd --revision=3.5.0 --append-to-version=-1custom kernel_image kernel_headers

root@debian:~/linux-3.11.1# dpkg -i ../linux-*1custom*

root@debian:~/linux-3.11.1# shutdown -r now

Note: I borrowed most of the compilation steps from lindqvist’s blog. I’ll also point out that you can edit .config to build L2TPv3 into the kernel rather than as a loadable module.

You’ll want to take a snapshot of this VM and make clones based on it. The next step assumes you have two Debian VMs with L2TPv3.

Build L2TPv3 Tunnels

Let’s assume you have two VMs. The eth0 interface on both sits on a shared network (2001:DB8::/64), and the eth1 interface on each connects to the networks you want bridged together. Since the transport is IP, the shared network isn’t strictly necessary; the remote endpoint could be any node that is reachable by IP.

VM1

root@debian:~# modprobe l2tp_eth

root@debian:~# ip l2tp add tunnel tunnel_id 3000 peer_tunnel_id 4000 encap udp local 2001:DB8::1 remote 2001:DB8::2 udp_sport 5000 udp_dport 6000

root@debian:~# ip l2tp add session tunnel_id 3000 session_id 1000 peer_session_id 2000

root@debian:~# ip link set l2tpeth0 up mtu 1488

root@debian:~# brctl addbr br0

root@debian:~# brctl addif br0 l2tpeth0

root@debian:~# brctl addif br0 eth1

root@debian:~# ip link set br0 up promisc on

VM2

root@debian:~# modprobe l2tp_eth

root@debian:~# ip l2tp add tunnel tunnel_id 4000 peer_tunnel_id 3000 encap udp local 2001:DB8::2 remote 2001:DB8::1 udp_sport 6000 udp_dport 5000

root@debian:~# ip l2tp add session tunnel_id 4000 session_id 2000 peer_session_id 1000

root@debian:~# ip link set l2tpeth0 up mtu 1488

root@debian:~# brctl addbr br0

root@debian:~# brctl addif br0 l2tpeth0

root@debian:~# brctl addif br0 eth1

root@debian:~# ip link set br0 up promisc on

You can verify the tunnel with ‘ip l2tp show tunnel’. It should look like the following.

root@debian:~# ip l2tp show tunnel
Tunnel 4000, encap UDP
From 2001:db8::2 to 2001:db8::1
Peer tunnel 3000
UDP source / dest ports: 6000/5000
root@debian:~#
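
You can likewise verify the session with ‘ip l2tp show session’, and then run a quick smoke test across the bridge; pinging the all-nodes multicast address is one hypothetical check (any traffic that crosses eth1 will do):

root@debian:~# ip l2tp show session

root@debian:~# ping6 -c 2 -I br0 ff02::1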

Now you have a pseudowire between the networks connected to eth1 of both VMs.

If you have any problems while following my directions, let me know in the comments. I will make corrections. My thanks go to James Chapman and his peers at Katalix for writing the code and providing pointers on getting it working.

Simplifying Your Junos SLAX Development Environment

I’m excited by the possibilities that network programmability offers network operators. Through JUNOS SLAX scripts, Juniper has offered a simple mechanism for programming its routers for many years.

Over the last several months, I’ve been writing primarily ops scripts in SLAX. I want to share my findings on how to simplify a SLAX development environment.

JUISE/libslax – If you want to write SLAX scripts, download the JUNOS User Interface Scripting Environment (juise) for “off-the-box” scripting. juise requires libslax, another open source project from Juniper. The combination allows you to remotely execute op and commit scripts on JUNOS devices, so you don’t have to manually scp/ftp scripts from your server to the routers. libslax contains a SLAX syntax checker called slaxproc, and juise has a built-in debugger that will make you wonder how you ever got anything done in SLAX without it.
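
As a concrete example, checking a script’s syntax with slaxproc looks something like this (flag name from memory; slaxproc --help has the full list):

slaxproc --check script1.slax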

refresh command – Another way to avoid manually copying SLAX scripts to routers is to use the refresh command in JUNOS configuration mode. I use a lightweight web server called mongoose to serve files from the directory in which I write the scripts, but any web server will do the trick.
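
If you don’t want to install a web server, Python’s built-in one is a serviceable stand-in when run from your script directory:

cd ~/slax-scripts        # wherever you keep the .slax files
python -m SimpleHTTPServer 8080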

I set the sources in the config using this syntax.

set system scripts op file script1.slax source http://192.168.100.10:8080/script1.slax
set system scripts op file script2.slax source http://192.168.100.10:8080/script2.slax
set system scripts op file slax-doctor.slax source http://192.168.100.10:8080/slax-doctor.slax

Now you can execute ‘set system scripts op refresh’ and JUNOS will download the files to /var/db/scripts/op. Invoking the command from configuration mode feels very un-JUNOS-like; the CLI will prompt you when exiting config mode to confirm that you want to exit with uncommitted changes.

slax-doctor – Curtis Call’s slax-doctor script is invaluable for SLAX development. libslax does very limited error checking, leaving most errors undiscovered until runtime, and the error reporting often does not identify the one unterminated string or other typo that is wreaking havoc on the script. I run slax-doctor every time I make a non-trivial change. You can download the script here or from the collection of scripts in the junoscriptorium project on github.

op invoke-debugger cli – JUNOS 13.1 introduces official support for the ‘op invoke-debugger cli’ operational mode command. The command is hidden in some prior versions of JUNOS. I’ve used the on-box debugger very rarely; I prefer the debugger in juise.

If you develop SLAX scripts and want to share other tips, please include them in the comments section. For readers who have not delved into SLAX, start with the This Week: Applying Junos Automation ebook and get coding today.

IPv6 in XCP 1.6

The intent of this post is to document how to enable IPv6 in XCP 1.6 and manage the host using IPv6 transport. I hope Google leads many people to this page, as I wasn’t able to find anything else on the web on the subject. I’d like to see more people experimenting with IPv6 on XCP hosts.

XCP 1.6 is built on an optimized CentOS 6 dom0 kernel. Enabling IPv6 is not simply a matter of editing files in /etc/sysconfig/ as you would for a typical CentOS server; XCP takes over network configuration during system start-up. Fortunately, the process is very straightforward. To manage IPv6 networking on an XCP host, you must be comfortable with the ‘xe’ command line tools, as this cannot be performed using XenCenter.

Here are the steps for enabling IPv6 and configuring an IPv6 address.

  1. Log in to the XCP host as root and execute ‘/opt/xensource/bin/xe-enable-ipv6 enable’.
  2. Reboot.
  3. Verify that an IPv6 link-local address appears on xenbr0 using ‘ip -6 addr show dev xenbr0’. You should see a /64 address that begins with fe80 (e.g., fe80::20c:29ff:fe48:5bc9/64).
  4. Configure an IPv6 address on the host using ‘xe pif-reconfigure-ipv6’.

I’ll provide some sample ‘xe pif-reconfigure-ipv6’ configuration commands.

Static IPv6 address – ‘xe pif-reconfigure-ipv6 mode=static uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>’. The address is specified using the standard IPv6 notation with a slash followed by the prefix length (e.g., 2001:DB8::2/64). Fill in the UUID parameter with the Physical Interface (PIF) UUID of your management domain as provided by ‘xe pif-list’.

Stateless Autoconfiguration (SLAAC) – ‘xe pif-reconfigure-ipv6 mode=autoconf uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>’. After executing this command, the XCP host will create an IPv6 address using IPv6 Router Advertisements (RAs). If there are no routers sending RAs on your network, the XCP host will not assign an address to xenbr0.

DHCPv6 – ‘xe pif-reconfigure-ipv6 mode=dhcp uuid=<PIF_UUID>’. XCP starts dhcp6c after the command is executed; however, the address assignment does not take place. If anyone wants DHCP on the hypervisor, feel free to fire up wireshark and track down the problem. I’ll update the post.
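
Putting the static case together end to end, with the same placeholders as above (if memory serves, pif-list accepts field filters like management=true), the flow looks like this:

xe pif-list management=true params=uuid
xe pif-reconfigure-ipv6 mode=static uuid=<PIF_UUID> ipv6=2001:DB8::2/64
ip -6 addr show dev xenbr0

The last command should show the new global address alongside the link-local one.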

I’ve verified that the ‘xe’ commands can be run remotely using IPv6 transport. XenCenter also connects over IPv6. If you use an IPv6 literal, you must enclose it in brackets as you would in a web browser (e.g., [2001:DB8::2]). As I mentioned earlier in the post, XenCenter cannot configure IPv6 networking.

I want to thank Grant McWilliams for his tips that led me to figuring out IPv6 in XCP 1.6.

UPDATE 9/4/13 – This process also works in XenServer 6.2.