Archive for ‘virtualization’

01/17/2015

Native IPv6 Functionality in Docker

by Jeff Loughridge

The days of kludgy hacks for IPv6-connected docker containers are over. A recent PR merged native IPv6 functionality into the docker daemon. The new bits have not yet made it into the docker ppa package as of 1/17/2015. Therefore, some assembly is required.

You’ll need to compile docker from source. The only currently supported docker build process uses docker. Does this remind anyone of the movie Inception?

Here’s an installation process you can use on a fresh 64-bit Ubuntu 14.04 VM.


sudo apt-get update
sudo apt-get -y install git build-essential docker.io
git clone https://github.com/docker/docker.git
cd docker
sudo make build
sudo make binary
sudo service docker.io stop
sudo ip link set docker0 down
sudo ip link delete docker0
VERSION=1.4.1  # version as of this writing
sudo cp ./bundles/$VERSION-dev/binary/docker-$VERSION-dev $(which docker)
echo "DOCKER_OPTS='--ipv6 --fixed-cidr-v6=\"2001:DB8::/64\"'" | sudo tee -a /etc/default/docker.io
sudo service docker.io start

Docker does not put an address from the /64 on the docker0 bridge; the bridge uses fe80::1/64, and the default route in each container points to this link-local address.
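A quick way to sanity-check the setup is to look at the IPv6 address and routes from inside a throwaway container. This is a minimal check that assumes the 2001:DB8::/64 example prefix from the DOCKER_OPTS line above and the busybox image's ip applet; adjust for your own prefix.

docker run --rm busybox ip -f inet6 addr show dev eth0   # expect an address from 2001:db8::/64
docker run --rm busybox ip -f inet6 route                # expect a default route via fe80::1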

Your containers will not be able to communicate with the IPv6 internet unless the /64 you've selected is routed to the docker host. Unlike how docker handles IPv4 in containers, there is no NAT. Use a provider that will route the /64 to your docker host. Linode did this for me after I emailed the request to its support team. Providers such as DigitalOcean that support IPv6 but do not route a /64 to your VM are not positioned to offer IPv6 connectivity to containers. You'll have to use the Neighbor Discovery hack that I described in another post.

I'm not sure why docker doesn't have an option to connect containers directly to a bridge that includes the internet-facing port. This is easy to accomplish with LXC. I suspect it can be done with docker; I just don't know how. Perhaps someone with more knowledge of docker can explain how to attach the daemon to a bridge with the LAN interface.

I’ll note that the docker build environment appears to have a bug with name resolution in the build container if IPv6 DNS servers are in /etc/resolv.conf. I didn’t want to invest the time to troubleshoot it. You can comment out the IPv6 DNS servers in the docker host’s /etc/resolv.conf file to avoid the defect.
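If you hit the same issue, one quick and admittedly blunt workaround is a sed one-liner that comments out only the nameserver lines containing a colon (i.e., the IPv6 resolvers); the -i.bak flag keeps a backup you can restore after the build.

sudo sed -i.bak 's/^nameserver \(.*:.*\)$/#nameserver \1/' /etc/resolv.conf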

If you run into problems, let me know in the comments.

07/22/2014

IPv6 in Docker Containers on DigitalOcean

by Jeff Loughridge

This post details how I enabled IPv6 addresses in docker containers on DigitalOcean.

DigitalOcean supports IPv6 in its London 1 and Singapore 1 data centers as of July 2014. Create a droplet using the Ubuntu 14.04 image and DO’s docker installation under Applications in the “Create VM” screen. Make sure to check “IPv6”.

DO provides the VM with 16 addresses within a /64. The droplet's eth0 gets the address ending in 0x1. I give the docker0 interface the address ending in 0x4. This leaves 0x5 to 0xf for my containers.

Let’s take an example. DO gives me 2a03:b0c0:1:d0::18:d000 to 2a03:b0c0:1:d0::18:d00f. docker0 gets 2a03:b0c0:1:d0::18:d004. The last double octet for containers ranges from 0xd005 to 0xd00f.

To break up the 16 addresses provided by DO, we need a way for containers to respond to IPv6 neighbor discovery on the host's eth0 interface. This could be performed using an ND proxy daemon (see here for one implementation). For most docker use cases, a static ND proxy entry should do.

docker does not natively support IPv6. LXC, the foundation of docker, can handle IPv6 in containers. As of docker 1.0, the software uses libcontainer by default instead of LXC. We'll have to configure /etc/default/docker to use the LXC driver.

See below for an example of the one-time set-up of the droplet.

#!/bin/bash

# enable IPv6 forwarding and ND proxying
echo net.ipv6.conf.all.proxy_ndp=1 | sudo tee -a /etc/sysctl.conf
echo net.ipv6.conf.all.forwarding=1 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

# install LXC
sudo apt-get install -y lxc

# use the LXC driver
echo DOCKER_OPTS=\"--exec-driver=lxc\" | sudo tee -a /etc/default/docker

sudo service docker restart

The script below demonstrates how to set up the static ND proxy entries. Make sure to change the V6_START variable.


#!/bin/bash
 
# This script provides an example of setting up IPv6 static
# ND proxy entries. Edit the V6_START to match
# what you see in the DO control panel
 
V6_START=2a03:b0c0:1:d0::18:d000
 
# strip the last hex character
V6_MINUS_LAST_HEX_CHAR=`echo $V6_START|sed s/.$//`
 
ip addr add ${V6_MINUS_LAST_HEX_CHAR}4/124 dev docker0
 
echo "adding ND proxy entries..."
for character in 4 5 6 7 8 9 a b c d e f; do
  echo "ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0"
  ip -6 neigh add proxy ${V6_MINUS_LAST_HEX_CHAR}${character} dev eth0
done

Now we’re ready to bring up the container. The first argument must be an IPv6 address in your assigned
range with the last double octet between 0xXXX5 and 0xXXXF. For me, this is 0xd005 to 0xd00f.

#!/bin/bash
 
# first argument to script must be an IPv6 address from the DO-allocated
# space that is not part of the first /126 and not the docker0 address
# (e.g. 0x5 to 0xf as the last hex character)
 
IPV6_ADDRESS=$1
 
if [ -z "$IPV6_ADDRESS" ]; then
  echo "please specify IPv6 address for container's eth0"
  exit 1
fi
 
echo "container eth0: $IPV6_ADDRESS"

# run container so that docker0 gets a link local address
docker run busybox:ubuntu-14.04 /bin/true
docker rm $(docker ps -lq)

# get docker0's link local address
LINK_LOCAL=$( \
  ip addr show docker0 | \
   grep "inet6 fe80" | \
    awk '{print $2}' | \
      sed 's/\/.*//' \
)

if [ -z "$LINK_LOCAL" ]; then
  echo "unable to find link local address on docker0. something is wrong."
  exit 1
fi
 
echo "docker0 link local: $LINK_LOCAL"
 
docker run -i -t \
   --lxc-conf="lxc.network.flags = up" \
   --lxc-conf="lxc.network.ipv6 = $IPV6_ADDRESS/124" \
   --lxc-conf="lxc.network.ipv6.gateway = $LINK_LOCAL" busybox:ubuntu-14.04 /bin/sh

Executing the script will put you in an interactive shell in the container. Try 'ping6 2600::' to test connectivity. If you are having trouble, let me know in the comments.

I want to thank Andreas Neuhaus for his IPv6 in Docker Containers post and his suggestion to use static ND proxy when the provider does not route a /64 to your docker host.

UPDATE 1/17/2015 – Docker now has native IPv6 functionality. See this post.

06/16/2013

IPv6 in XCP 1.6

by Jeff Loughridge

The intent of this post is to document how to enable IPv6 in XCP 1.6 and manage the host using IPv6 transport. I hope Google leads many people to this page, as I wasn’t able to find anything else on the web on the subject. I’d like to see more people experimenting with IPv6 on XCP hosts.

XCP 1.6 is built on an optimized CentOS 6 dom0 kernel. Enabling IPv6 is not simply a matter of editing files in /etc/sysconfig/ as you would for a typical CentOS server. XCP takes over network configuration during system start-up. Fortunately, the process is very straightforward. To manage IPv6 networking on an XCP host, you must be comfortable with the 'xe' command line tools, as this cannot be performed using XenCenter.

Here are the steps for enabling IPv6 and configuring an IPv6 address.

  1. Log in to the XCP host as root and execute ‘/opt/xensource/bin/xe-enable-ipv6 enable’.
  2. Reboot.
  3. Verify that an IPv6 link-local address appears on xenbr0 using ‘ip -6 addr show dev xenbr0’. You should see a /64 address that begins with fe80 (e.g., fe80::20c:29ff:fe48:5bc9/64).
  4. Configure an IPv6 address on the host using ‘xe pif-reconfigure-ipv6’.

I'll provide some sample 'xe pif-reconfigure-ipv6' configuration commands.

Static IPv6 address – 'xe pif-reconfigure-ipv6 mode=static uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>'. The address is specified using the standard IPv6 notation with a slash followed by the prefix length (e.g., 2001:DB8::2/64). Fill in the UUID parameter with the Physical Interface (PIF) UUID of your management domain as provided by 'xe pif-list'.
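For example, the end-to-end flow for a static address might look like the sketch below. The prefix is the documentation example used above, and <PIF_UUID> is whatever 'xe pif-list' returns for your management PIF.

# find the UUID of the management PIF
xe pif-list management=true params=uuid,device

# assign a static address from the documentation prefix
xe pif-reconfigure-ipv6 mode=static uuid=<PIF_UUID> ipv6=2001:DB8::2/64

# confirm the address is present on xenbr0
ip -6 addr show dev xenbr0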

Stateless Address Autoconfiguration (SLAAC) – 'xe pif-reconfigure-ipv6 mode=autoconf uuid=<PIF_UUID> ipv6=<IPV6_ADDRESS>'. After executing this command, the XCP host will create an IPv6 address using IPv6 Router Advertisements (RAs). If there are no routers sending RAs on your network, the XCP host will not assign an address to xenbr0.

DHCPv6 – ‘xe pif-reconfigure-ipv6 mode=dhcp uuid=<PIF_UUID>’. XCP starts dhcp6c after the command is executed; however, the address assignment does not take place. If anyone wants DHCP on the hypervisor, feel free to fire up wireshark and track down the problem. I’ll update the post.
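If you do take a crack at it, a capture of the DHCPv6 exchange (UDP ports 546 and 547) on the bridge is probably the place to start; something along these lines should show how far the exchange gets.

tcpdump -i xenbr0 -n -vv 'udp port 546 or udp port 547'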

I’ve verified that the ‘xe’ commands can be run remotely using IPv6 transport. XenCenter also connects over IPv6. If you use an IPv6 literal, you must enclose it in brackets as you would in a web browser (e.g., [2001:DB8::2]). As I mentioned earlier in the post, XenCenter cannot configure IPv6 networking.
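As a rough illustration of the bracket notation with the remote CLI, an invocation might look something like this (the address is the documentation example from above; substitute your own host and credentials):

xe -s [2001:DB8::2] -u root -pw <password> host-list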

I want to thank Grant McWilliams for his tips that led me to figuring out IPv6 in XCP 1.6.

UPDATE 9/4/13 – This process also works in XenServer 6.2.

04/23/2013

James Hamilton’s Failures at Scale and How to Ignore Them at AWS re: Invent 2012

by Jeff Loughridge

We know that James Hamilton is a bright guy. His On Designing and Deploying Internet-Scale Services paper and Datacenter Networks Are In My Way presentation are fascinating for those interested in data centers and distributed systems in general.

This morning I watched his Failures at Scale and How to Ignore Them talk at AWS re: Invent. The presentation is a must-watch.

03/30/2013

Why I Use AWS EC2 Reserved Instances

by Jeff Loughridge

Amazon Web Services (AWS) EC2 reserved instances provide a simple-to-use method for reducing AWS costs for small businesses like mine. When my free tier expired last year, I'd heard of reserved instances but didn't recognize how simple it is to save money using them rather than on-demand instances.

We've been told that EC2 instances are not servers. I agree completely. People who use EC2 instances as simple VPS-like servers are using a fraction of AWS' capabilities. You can find very inexpensive VPSs from Joe's Datacenter (btw, kudos to Joe's for native IPv6). If you are not technically inclined, you'll find a VM with cPanel pre-installed much easier to use than AWS EC2. I chose AWS because I wanted experience with the API and other AWS services.

If you run your company’s web page on EC2, the instance will be running 24×365. You can commit to a certain number of hours using reserved instances. AWS charges an upfront fee in addition to reduced hourly fees.

Let’s look at an example. You decide on a micro instance for a lightly utilized web server (I recommend you test your load on a micro instance before buying reserved instances. Some people are unhappy with the performance.). We’ll use a Linux instance in the Northern Virginia region.

All prices listed from 3/30/2013 in USD.

The cost for an on-demand Linux instance is $0.02/hour, or about $175/year.

The cost for a reserved Linux instance (light utilization) has an upfront cost of $23. The hourly pricing is $0.012/hour, or about $105/year. Add in the $23 upfront fee for a total of $128/year.

A 27% savings isn't bad at all. Increasing the commitment or selecting a bigger instance increases the discount. AWS claims on its web site that the savings can be up to 65%. In absolute terms, this is chump change for a person serious about running a business. Even so, the savings from reserved instances are worth knowing about if you are setting up instances for friends, family members, and charities.
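For the curious, here is a back-of-the-envelope version of that math in shell, using the 3/30/2013 prices quoted above.

#!/bin/bash
# rough comparison of on-demand vs. light-utilization reserved pricing (micro, Linux, N. Virginia)
HOURS=8760                                      # hours in a year
ON_DEMAND=$(echo "0.02 * $HOURS" | bc)          # on-demand: $0.02/hour
RESERVED=$(echo "0.012 * $HOURS + 23" | bc)     # reserved: $0.012/hour plus $23 upfront
SAVINGS=$(echo "scale=4; ($ON_DEMAND - $RESERVED) / $ON_DEMAND * 100" | bc)
printf 'on-demand: $%s/yr  reserved: $%s/yr  savings: %.0f%%\n' "$ON_DEMAND" "$RESERVED" "$SAVINGS"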

Check out AWS’s reserved instance page for additional information.

01/06/2012

The AWS VPC and the Network Engineer

by Jeff Loughridge

Amazon AWS is doing amazing things with its IaaS platform. As a networking guy, I find the networking features very impressive. AWS made a wise choice in using Layer 3 as the networking foundation. I suppose AWS engineers recognize what should be a widely held belief in networking: Layer 2 does not scale. The connection of the VPC to corporate data centers presents a compelling value proposition for customers interested in offloading work to the cloud. What I want to focus on in this post is how the integration of cloud and corporate network affects the network engineer.

I design IP networks for my clients. I know my way around basic Linux system administration and can probably figure most things out with patience and Google. I respect talented sys admins who understand the service that the IP network provides to their systems and can communicate simple network conditions (e.g., "I can't ping the default gateway"). Who will be integrating the VPC and the corporate network? Clearly, both network engineers and sys admins will be involved. You wouldn't want a sys admin making critical IP design decisions any more than you'd want me standing up a Hadoop cluster.

Network engineers will have to adapt their thinking to the virtualized environment. This is a new way of thinking about moving packets. Networking components in the physical world are about as inelastic as resources get; I would argue more so than servers. Getting to the point where network engineers can grasp the flexibility of the VPC is going to require an investment in learning, the same way learning IS-IS would for an engineer who knows OSPF.

Educating network engineers in VPC networks is in Amazon's best interests. It's going to be guys like me who will get calls from potential clients wanting to tie their VPC into their network. The existing documentation does little to further that goal. I had to read the VPC guide several times before obtaining a degree of comfort. Elastic Network Interfaces? Implied routers? Subnet routing tables? These concepts are not intuitive for network engineers.

Here’s how I recommend that Amazon could educate my networking brethren.

  1. Write a guide on the VPC intended for network engineers. Think about how Juniper writes JUNOS documentation for engineers with a Cisco background. This is a very effective way to quickly get smart folks up to speed.
  2. Document use cases & recommended architectures for VPC that involve VPC to VPC and VPC to data center connectivity. Cisco excels in this area with its Cisco Validated Designs. Mimic their approach. Today, the documentation is limited to connecting a VPC gateway to a router with IPsec. This barely scratches the surface of how customers will use the networking capabilities of the VPC.
  3. Create online training that steps through the configuration of a VPC. Adding a hands-on component with “actual” VPCs shouldn’t be that difficult for a company that does virtualization at a massive scale.
  4. Talk to internal and external networking savvy engineers. I’ve met some sharp engineers who work on Amazon’s backbone. By engaging them and engineers outside of Amazon, the company could gain valuable insight on networking.

Migrating to the VPC should be as frictionless as possible for businesses. The accelerated set-up of a stable and scalable VPC will translate into more revenue for Amazon.

01/04/2012

Adventures in AWS, DNS, and IPv6

by Jeff Loughridge

This post describes how I used AWS Elastic Load Balancers and Route 53 to enable IPv6 connectivity to the zone apex of my company’s domain.

Recently I moved my company’s page to Amazon’s AWS. I needed IPv6 support, and the hosting company I was using kept promising IPv6 in 2 to 3 months but never delivered. I used the process I outlined in a previous post to make my site reachable via IPv6 using Elastic Load Balancers. I recommend reading that post before continuing if you don’t know how to do this.

In implementing IPv6 connectivity for my site, I stumbled on a problem that I had not considered. The URL for my company is http://brooksconsulting-llc.com. The URL is already long; I don't want to put http://www.brooksconsulting-llc.com on company material, email signatures, and business cards. The "naked" domain, meaning the top of the zone, is called the zone apex. Per RFC 1034, a CNAME cannot co-exist with the NS and SOA records required at the zone apex. The IPv6 hack using AWS Elastic Load Balancers needs a CNAME. Fortunately, AWS does some proprietary magic and accommodates CNAMEs at the zone apex (see announcement here).

You must use AWS's Route 53 tool for your zone. This wasn't a problem for me; I prefer Route 53's zone management GUI over GoDaddy's. The Route 53 GUI appears not to support AWS's on-the-fly conversion from CNAME to A/AAAA records, so I had to use the CLI tools to add the records. I used the elb-associate-route53-hosted-zone command twice, once with the --rr-type A flag (the default) and once with the --rr-type AAAA flag, to add the entries. For more information, check out this section of the Elastic Load Balancing Developer Guide.
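Once the alias records are in place, a quick check with dig against the zone apex should return both an A and a AAAA record.

dig +short brooksconsulting-llc.com A
dig +short brooksconsulting-llc.com AAAA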

I posted a question to ServerFault to see if there was a way to perform the association in the Route 53 GUI. Jesper Mortensen provided a very helpful response. He believes the association can’t be made in the GUI.

Does all of this sound daunting? Well, I probably took a more difficult path than necessary. I’ve read that DNS30 has a GUI to manage Route 53 that includes a method to instruct Route 53 to do the CNAME to A/AAAA record conversion. You may want to take this approach, especially if you don’t already have the EC2 and Load Balancing API tools installed on your system.

In responding to my question at ServerFault, Jesper pointed out that there is an effort underway to standardize the use of CNAMEs at the zone apex. The Internet Draft is here.

——————————————————————————————-

UPDATE (1/5/2012) – A friendly engineer from the AWS Route 53 team contacted me and provided instructions for creating alias resource record sets in the Route 53 console. I confirmed that these work.

Here are the steps.

1. Click 'Create Record Set'.
2. For a zone apex record, just leave the name field blank.
3. Select the type of alias you want to make, A or AAAA (all steps after this are the same for both types).
4. Select the 'Yes' radio button.
5. Open the EC2 console in another tab and navigate to the list of your load balancers.
6. Click on the load balancer and look at the description tab in the pane below the list. Sample output below.

DNS Name:
new-balancer-751654286.us-east-1.elb.amazonaws.com (A Record)
ipv6.new-balancer-751654286.us-east-1.elb.amazonaws.com (AAAA Record)
dualstack.new-balancer-751654286.us-east-1.elb.amazonaws.com (A or AAAA Record)

Note: Because the set of IP addresses associated with a LoadBalancer can change over time,
you should never create an “A” record with any specific IP address. If you want to use a friendly
DNS name for your LoadBalancer instead of the name generated by the Elastic Load Balancing
service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53
to create a hosted zone. For more information, see the Using Domain Names With Elastic Load Balancing

Status: 0 of 0 instances in service

Port Configuration: 80 (HTTP) forwarding to 80 (HTTP)

Stickiness: Disabled(edit)

Availability Zones:
us-east-1b

Source Security Group:
amazon-elb-sg

Owner Alias: amazon-elb

Hosted Zone ID:
Z3DZXD0Q79N41H

7. Copy the Hosted Zone ID (in the above case 'Z3DZXD0Q79N41H') and paste it into the field labeled 'Alias Hosted Zone ID:'.
8. Copy the DNS Name (in the above case 'new-balancer-751654286.us-east-1.elb.amazonaws.com') and paste it into the field 'Alias DNS Name:'. Just an FYI, this DNS name is the same for both A and AAAA alias records (do not use 'ipv6.new-balancer-751654286.us-east-1.elb.amazonaws.com').
9. Click 'Create Record Set', or at this point you can select yes to weight the record and provide a weight between 0 and 255 and a set ID such as 'my load balancer'.

09/04/2011

How to Share Content over IPv6 with AWS EC2

by Jeff Loughridge

Although EC2 instances are not IPv6-capable as of this writing, Amazon has implemented IPv6 for its US East (Northern Virginia) and EU (Ireland) Elastic Load Balancers. I’ll demonstrate how to make IPv6 content available using EC2 and the load balancers. Please note that Amazon is currently offering new customers EC2 micro instances at no charge if you remain under certain thresholds.

Instance Set-up

  1. Launch a Linux-based Amazon Machine Image (AMI). If you want to follow along with this tutorial, use the Ubuntu 10.04 LTS instance that Canonical uploaded to the Community AMIs (AMI ID ami-63be790a). Use US East or EU (Ireland) servers. If this is your first time setting up an instance, I recommend viewing Greg Wilson's tutorial on Youtube.
  2. Log in using the “ubuntu” user name. Use the ssh private key as described in the video.
  3. Install the packages required for a LAMP server. A simple way to do this is to “sudo tasksel --section server”. Select “LAMP server” in the graphical installer. Strangely, the LAMP selection does not install PHP. I did this manually with “sudo apt-get install php5-cli”.

Load Balancer Set-up

  1. Click on “Load Balancer” in the “Network & Security” left panel of the AWS Console. Click the “Create Load Balancer” button.
  2. Give your load balancer a name. I used the default HTTP entry. For the health check, I used the default settings.
  3. Add your instance to the load balancer.
  4. Now that the load balancer is created, place a check next to its entry so that detailed information appears in the bottom panel.
  5. Write down your IPv4, IPv6, and dual stack DNS names.
  6. Click on the Instances tab in the bottom panel. Make sure the instance's status indicates "In Service". Note: I've noticed that the time required for the health check to add the instance into service can be 20 to 45 minutes.

Testing DNS and Load Balancer

  1. Use dig or nslookup to verify that you get A (IPv4) and AAAA (IPv6) records. This verification step is primarily for your information.
  2. ubuntu@ip-10-244-171-28:~$ nslookup
    > Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com
    Server: 172.16.0.23
    Address: 172.16.0.23#53
    Non-authoritative answer:
    Name: Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com
    Address: 50.19.220.184
    > set type=AAAA
    > ipv6.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com
    Server: 172.16.0.23
    Address: 172.16.0.23#53
    Non-authoritative answer:
    ipv6.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com has AAAA address 2406:da00:ff00::3213:dcb8
    Authoritative answers can be found from:
    > dualstack.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com
    Server: 172.16.0.23
    Address: 172.16.0.23#53
    Non-authoritative answer:
    dualstack.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com has AAAA address 2406:da00:ff00::3213:dcb8
    Authoritative answers can be found from:
    >

  3. Create a script called test.php with the following text.
    <?php
    
    $headers = apache_request_headers();

    if (isset($headers["X-Forwarded-For"])) {
      $ip = $headers["X-Forwarded-For"];
      print "X-Forwarded-For header is $ip";
    }
    else {
      $ip = getenv('REMOTE_ADDR');
      print "IP is $ip";
    }
    
    ?>
    

    Amazon’s Elastic Load Balancers will set the X-Forwarded-For header to the IPv6 source address. If the connection is made via IPv4, the X-Forwarded-For variable is undefined. Put this script in /var/www.

  4. Using your web browser, access http://yourIPv4DNS/test.php, http://yourIPv6DNS/test.php, and http://yourDualstackDNS/test.php. Assuming you are accessing from a dual stack IPv4/IPv6 end host that prefers IPv6, you will see an IPv4 address, an IPv6 address, and an IPv6 address respectively.
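If you prefer the command line to a browser, curl can be forced over each protocol. These use the example load balancer names from the nslookup output above; substitute your own.

curl -4 http://Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com/test.php
curl -6 http://ipv6.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com/test.php
curl http://dualstack.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com/test.php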


Congratulations! Your content is now available over IPv6. Now you can set the CNAME record for your domain to the dual stack DNS name so that users can type in your domain and reach your site via IPv4 or IPv6. For more information on how to use CNAMEs with Amazon EC2, see Using Domain Names with Elastic Load Balancing.
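In a BIND-style zone file, the record might look something like this (using the example load balancer name from above; note the trailing dot):

www    IN    CNAME    dualstack.Jeff-LB-Test-1796974432.us-east-1.elb.amazonaws.com.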

I hope this post encourages people to make content available over IPv6. The days of assuming all end hosts are reachable via IPv4 are over. Amazon’s EC2 and Elastic Load Balancers make transitioning content to IPv6 simple.

08/06/2011

Virtualization in the Network Designer’s Toolbox

by Jeff Loughridge

I’ve found virtualization increasingly useful in my work. I thought I’d share my observations on effectively using virtualization for feature testing, architecture validation, and learning. Virtualization is a very inexpensive way to accomplish tasks that previously required thousands of dollars in lab equipment.

The first decision in employing virtualization is selecting where to establish your test environment. The advantage of using a dedicated server is that your applications aren't competing for resources with the test environment. You can get a server with a lot of memory, which I would advise if you plan on using many virtual machines simultaneously. I prefer Ubuntu Server LTS for headless servers. Ubuntu provides a very stable host OS.

An alternative is creating the virtual environment on your laptop. This comes in very handy if you find yourself without Internet connectivity or need to deliver a demo to customers. If you are at a customer site, do not expect to be able to reach your server. There are too many problems that can arise. For my needs, I maintain virtualized labs on both my laptop and office server.

For software, I recommend purchasing VMware Workstation 7.1. VirtualBox has its uses; however, Workstation is a better option. It has features not available in VirtualBox. Let’s take a look.

  • Teaming – Workstation lets you set up a group of VMs in a way that makes it easier to manage the virtual infrastructure. You can do things such as start and stop all the VMs in a team. Over time, you end up having numerous VMs for different purposes. The ability to group VMs into teams is a convenience.
  • VM Recording/Playback – This is an excellent feature for creating demos.
  • Virtual Network Editor – Creating the virtual network infrastructure is very simple in Workstation. When you combine teaming with the virtual network editor, you can set up new labs very quickly.

VMware Workstation has additional benefits. For those new to virtualization, the software introduces you to concepts and terminology that VMware uses across its product family. I use Workstation for one of the same reasons I use Ubuntu: when something breaks, you can almost inevitably find someone else who has encountered the problem by doing a web search. Don't expect this if you use VirtualBox. Don't get me wrong; I'm a proponent of open source software. In this case, the better product is clearly Workstation. On a related note, avoid qemu and its derivatives like the plague. Setting up bridging by hand and figuring out poorly documented command line flags is a hassle you don't need.


Be very wary of connecting a virtualized environment to the old Cisco router you have in storage. I've made the mistake of trying to connect VMs to physical networks. For feature testing, do this only as a last resort. When something breaks, you don't want to spend time figuring out whether the problem lies in the interconnection of physical and virtual gear.


To give you an idea of how I use virtualization, I’ll share several items on my to-do list. (Can you guess that I’m thinking about IPv6 a lot these days?)
  • Ecdysis NAT64/DNS64 – While I wouldn’t recommend beta software to clients, I don’t have commercial NAT64/DNS64 products in my lab. I want to investigate the IPv6-only user experience across various OSes.
  • Linux installation with IPv6-only connectivity – After doing some basic testing, I suspect that the developers of some distributions assume that end stations are dual stack. For example, I've been unable to get CentOS to install with only IPv6 connectivity. The installer sends DNS queries for A records only. I hope to write a report on the state of IPv6-only installations across the major distributions before the end of the year.
  • IPv6 Router Advertisement Option for DNS Configuration (RFC5006) – Recently there has been discussion on the v6ops list about replicating functionality in both DHCPv6 and SLAAC. As a core guy, I haven't worked extensively with DHCPv6. I'd like to see DNS server assignment as explained in RFC5006. I believe only Linux supports the RFC. I'll confirm.

If you are like most engineers, you enjoy taking things apart and understanding the details of how they work. Virtualization gives you the ability to do this without a big investment. Go forth and virtualize.