One Unified Global Perspective
Communications with a Global Perspective

2013 Aug 12 - Mon

Linux Containers (LXC) on Debian Wheezy 7.1 With OpenVSwitch

Now that the networking side of things is complete with Quagga and OpenVSwitch, I can start working further up the technology layers. The next step is to work on virtualization technologies and their management. The focus of this article is to get Linux Containers (LXC) working within a Debian Wheezy 7.1 environment.

Before getting into the installation steps for lxc, some background first. LXC is not a full virtualization solution with an independent kernel and BIOS, such as provided by Qemu/Kvm. Instead, it runs in a namespace of the host's kernel. As such, it has very fast startup characteristics. The drawbacks are several: a) it doesn't offer the tighter security that something like OpenVZ offers (another container flavour), b) it is a more recent container solution and has much catching up to do, and c) one of the things it has yet to catch up on is live migration of sessions from one host to another.

János Pásztor goes into some of the differences between LXC and OpenVZ. The disadvantage of OpenVZ is that it has not been welcomed into recent versions of the Linux kernel. It appears as though the game plan is a passing of the baton from OpenVZ to LXC.

From an LXC perspective, it is useful in environments where the administrator has control over its integration. It is not a viable candidate when trying to set up hosting solutions for third parties. Tighter security for those scenarios can be supplied with a Qemu/Kvm style solution. There are some additional reservations regarding OpenVZ and LXC.

The basics of installation for LXC are straightforward:

aptitude install lxc libvirt-bin

The first package deals with lxc directly. If I wasn't using OpenVSwitch, lxc would be all that I require. But in order to use OpenVSwitch for the networking side of things, I need assistance from the libvirt system. As a side effect of the libvirt install, the Qemu/Kvm components get installed at the same time. However, since I'm focusing on lxc at the moment, Qemu/Kvm will be discussed in a different entry.

A manual entry into /etc/fstab is required, followed by a reboot, to resolve a "resource busy" problem as well as to take care of the mount:

cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

There is some explanation for control groups (cgroup) at Control Groups Resource Management.

Before performing the reboot, one other adjustment is required. The following needs to replace the 'GRUB_CMDLINE_LINUX=""' line in /etc/default/grub:

GRUB_CMDLINE_LINUX="cgroup_enable=memory"

Then run 'update-grub' and perform the reboot. This procedure is from the Debian Wiki on LXC.

Afterwards, running 'lxc-checkconfig' should result in 'yes' for all items.

We can now move on to creating an lxc container. But even this isn't completely straightforward. Wheezy was released without a functioning lxc template in this regard. Instead, based upon Horrors using Debian Wheezy, we need to obtain the template from elsewhere. LXC template lxc-debian-wheezy-template talks about the template.

The template uses debootstrap as part of the build process. The supplied script can be easily modified to install additional packages, or to choose different install configurations.

wget http://freedomboxblog.nl/wp-content/uploads/lxc-debian-wheezy.gz
gzip -d lxc-debian-wheezy.gz
mv lxc-debian-wheezy /usr/share/lxc/templates
chmod +x /usr/share/lxc/templates/lxc-debian-wheezy

Because I am doing this on an amd64 flavour, some lines in /usr/share/lxc/templates/lxc-debian-wheezy need to be modified:

rootfs=$1/rootfs-amd64
# the following lines are commented out, as they are not appropriate here:
#1:2345:respawn:/sbin/getty 38400 console
#c1:12345:respawn:/sbin/getty 38400 tty1 linux
#c2:12345:respawn:/sbin/getty 38400 tty2 linux
#c3:12345:respawn:/sbin/getty 38400 tty3 linux
#c4:12345:respawn:/sbin/getty 38400 tty4 linux
# instead use these:
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6

The lxc container can now be created:

lxc-create -n debianbase -t debian-wheezy

I am using btrfs as my file system for holding these containers. One interesting feature of the integration between btrfs and lxc is that the lxc-create command will automatically create a btrfs sub-volume for the new container. This allows the use of snapshots for each container. The one caveat to be aware of is that when removing a container, the btrfs subvolume delete command is required. For example, 'rm -rf /var/lib/lxc/debianbase' will complain that rootfs is on another device. The command 'btrfs subvolume delete /var/lib/lxc/debianbase/rootfs' will need to be run to remove the subvolume before the remaining elements can be removed with the rm command.
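
For reference, the removal sequence boils down to something like this (assuming the container is named debianbase under /var/lib/lxc):

btrfs subvolume delete /var/lib/lxc/debianbase/rootfs
rm -rf /var/lib/lxc/debianbase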

In any case, the creation process will take a few moments to transfer, extract, and configure.

Now that the container has been created, there will be a config file located in /var/lib/lxc/debianbase. This file is used by the lxc-start commands. But since we are using OpenVSwitch, errors will occur when trying to start the container with lxc-start. Instead, we will ignore this file and use libvirt/virsh to start the container. However, some additional configuration is first required. An .xml configuration file is needed to define the lxc container to be run.

There are a couple of examples located within the libvirt LXC container driver documentation. There is another sample described as libvirt LXC container w/ bridged networking (and 2GB RAM).

The key point is that libvirt will only work with OpenVSwitch in bridged mode. How to Use Open vSwitch with Libvirt discusses this a bit. Be sure to have the bridge device configured in OpenVSwitch with:

ovs-vsctl add-br br0

The key line in the libvirt file is to have '<virtualport type='openvswitch'/>' in place. I placed my debianbase.xml file in the /var/lib/lxc directory, to keep everything together. LibVirt Network XML format goes into the fine points of the configuration.
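
Just as an illustration, the network interface portion of such a debianbase.xml might look like the following (the mac address and the bridge name br0 are placeholders to be replaced with your own values):

<interface type='bridge'>
  <mac address='ac:de:48:00:00:01'/>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
</interface>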

One other consideration is that I am running the 64-bit amd64 version. The lxc-create command changed the root partition to be /var/lib/lxc/debianbase/rootfs/rootfs-amd64 instead of /var/lib/lxc/debianbase/rootfs.

libvirt will take care of the interface creation and integration between OpenVSwitch and the lxc container. But a configuration change will be required in the container's /etc/network/interfaces file to obtain an appropriate IP address. To use dhcp for addressing, a default dhcp stanza has been added for eth0. If a static IP address is to be assigned instead, use the normal addressing stanzas associated with that file.
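
As a sketch, the dhcp stanza in the container's interfaces file (at rootfs/rootfs-amd64/etc/network/interfaces) is the standard one:

auto eth0
iface eth0 inet dhcp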

Also, so you don't confuse yourself, set a different hostname in the file at: /var/lib/lxc/debianbase/rootfs/rootfs-amd64/etc/hostname.

LibVirt, the server, also needs to be told about OpenVSwitch. I copied a template from elsewhere in libvirt to use:

cp -p /etc/libvirt/qemu/networks/default.xml /etc/libvirt/lxc/network.xml

Here is an example network.xml file.
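
A minimal sketch of what such a network.xml can contain, assuming the br0 bridge created above and the host-bridge network name used further below:

<network>
  <name>host-bridge</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
  <virtualport type='openvswitch'/>
</network>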

The network has to be started in libvirt:

virsh
  connect lxc:///
  net-define /etc/libvirt/lxc/network.xml
  net-start host-bridge
  net-autostart host-bridge

The container can now be started with libvirt:

virsh
  connect lxc:///
  define /var/lib/lxc/debianbase/debianbase.xml
  start debianbase
  console debianbase

For some reason, which has been noted by a number of authors, ssh is not configured properly, probably because the host keys need to be regenerated, something the default debootstrap doesn't deal with. So the ssh server has to be re-installed. From the console, a few house-keeping adjustments are needed. The locale and ssh reinstall could also be performed from a chroot environment:

# optionally enter the container's root filesystem via chroot:
cd /var/lib/lxc
chroot debianbase/rootfs/rootfs-amd64/
pwd
....
exit
# the following can be run either in the chroot or at the container console:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# fix some perl locale issues:
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
apt-get -y --force-yes install locales
dpkg-reconfigure locales
# re-install the ssh server so that host keys are regenerated
apt-get install --reinstall openssh-server
# add some additional tools
apt-get --no-install-recommends install -y adduser apt-utils iputils-ping rsyslog logrotate
apt-get install --no-install-recommends openssh-blacklist openssh-blacklist-extra
apt-get install wget less lsof screen nano inetutils-ping psmisc sockstat
#apt-get install man
apt-get clean
apt-get autoremove

The perl locale issues are identified and remedied in Creating a LXC virtual machine template (from scratch).

LXC HOWTO has some interesting ideas regarding LXC container initialization.

The ssh reinstall was mentioned at Setting up LXC containers in 30 minutes (Debian Wheezy).

Coruscating Lucubrations talks about Incremental backups with btrfs.

Margaret Bierman talks about How I Use the Advanced Capabilities of Btrfs.

Dobrica Pavlinusic's Weblog / Blog has an entry discussing the use of private mac addresses:

You will notice that I'm using prefix AC:DE:48 prefix for my mac addresses, which, to best of my knowledge range for private use and used all over IEEE docs (something like private IP range, but for mac addresses). I also have habit of naming interfaces with last octet of IP adress for internal ones, and last two for external one and same notation for mac addresses. Our brain is good at spotting patterns, and if you can read hex this seems just natural...

A ctrl-d or the exit command will exit and close the container. To disconnect from the container without closing it, instead use 'ctrl-a q', or if using screen, 'ctrl-a a q'. 'ctrl-a ctrl-a' can be used to send a regular 'ctrl-a' to the session (ie, move to the beginning of a line). These commands apparently only work in background mode, not in foreground mode.

Once in the container, and an ip address has been obtained, apt-get install iputils-ping can be used to get the ping utility installed.

On a moderately related note, I was wondering how to get by without creating the virsh .xml file first. The last part of Libvirt 1.0.5 with Openvswitch 1.11.90 provides the clue. Use the virt-install command like:

 # virt-install --connect qemu:///system --name DSL2 --ram 1024 --vcpus 1 \
      --disk path=/tmp/dsl2,size=1,bus=virtio,cache=none \
      --network=ovs-br0 \
      --vnc --os-type=linux --cdrom /dev/sr0
 # (the --network=ovs-br0 line is the one that is changed)

In a follow-on article, I will have to deal with one level of added complexity: working with vlans with libvirt. The question was brought up in [ovs-discuss] tag vlan communication inter LXC. The solution supposedly exists in the Open vSwitch Frequently Asked Questions.

However, according to the LibVirt Network XML format documentation, libvirt handles VLAN oriented configurations starting with libvirt version 0.10.0, and Debian Wheezy uses version 0.9.12. The workaround is supplied via remarks at using VLANs with Open vSwitch Fake Bridges. This is actually quite useful, as I think there can be multiple networkxxx.xml files, each supporting a vlan, and since the guest file references a particular network name, the appropriate vlan can be selected. I haven't tested it yet, but this should also allow multiple interfaces, each connected to a particular vlan.
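
As a sketch of the fake bridge idea (untested here, with VLAN 101 as the example tag), the fake bridge is created on top of the parent OpenVSwitch bridge, and its name can then be referenced from its own per-vlan network.xml:

ovs-vsctl add-br br0-vlan101 br0 101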

Some older info on LXC on Debian Squeeze.

docker: an open source project to pack, ship and run any application as a lightweight container making use of LXC.


2012 Oct 23 - Tue

HostDatastoreSystem.QueryVmfsDatastoreCreateOptions

I was having problems with getting a VMWare ESXi 5.1 Host to register properly with vSphere 5.0 (probably something seriously wrong with that whole concept all by itself). During the install of ESXi 5.1, the local drives were attached and formatted for vmfs capability.

Since I couldn't fix the registration problem, I rolled the host back to ESXi 4.1. When it came time to reformat and make the drives useful for vmfs, I encountered an error along the lines of:

VMware: Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "ha-datastoresystem" on ESXi "SERVERNAME" failed

The comments of VMPros Virtualization provided the solution (with an enhancement included by someone in the comments):

  • I just did a 'ls /vmfs/devices/disks/' from the shell to identify what was the vml I had to use.
  • To find the vml I just ran the following command 'partedUtil get /vmfs/devices/disks/VMLID' and I got an error with the VMLID corresponding to the new drive.
  • After that, I just ran your 'dd if=/dev/zero of=/vmfs/devices/disks/VMLID bs=512 count=34 conv=notrunc' and I was able to add the hard drive.
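
Collected into one sequence (VMLID stands in for the device identifier discovered in the first step):

ls /vmfs/devices/disks/
partedUtil get /vmfs/devices/disks/VMLID
dd if=/dev/zero of=/vmfs/devices/disks/VMLID bs=512 count=34 conv=notrunc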


2012 Oct 22 - Mon

Migrating a QEMU/KVM Image to VMWare

I recently found out that ISPs and Managed Service Providers are able to obtain a different licensing schedule from VMWare for hosted services. For some facilities, just starting up, this may be an attractive alternative to VMWare's mainstream licensing policy. Expecting that growth will be a primary driving factor, licensing is based upon a monthly usage fee. One has to become a partner with VMWare in order to take advantage of this program.

With this licensing plan in place, a company is migrating their virtualized guests from QEMU/KVM over to VMWare.

For Linux and Windows boxes, VMWare has a Converter program. For all other guests, it would, at first blush, appear to be a hard problem to solve. However, such is not the case. Instead, it is quite easy to convert and migrate guests from QEMU/KVM over to VMWare.

For Linux guests, while still active on QEMU/KVM, I installed the VMWare tools. This ensures network connectivity once the guest is migrated. On Wheezy/SID Debian, the following command gets the tools loaded:

  • apt-get install open-vm-tools open-vm-dkms

Then shutdown the guest.

At the QEMU/KVM host level, run the following command on the image file to prepare it for transfer to VMWare:

  • qemu-img convert filename.img -O vmdk filename.vmdk

In the VMWare vSphere Client, I create a new guest with a dummy hard drive component.

I have had issues trying to use VMWare's GUI to upload large files into a datastore, and I have also had timeout issues trying to SCP files to a VMWare host. Instead, I log in via SSH to the destination VMWare host and copy the files from the QEMU/KVM host server via SSH from there. The file gets placed into the newly created guest directory. I then edit the guest's settings to delete the temporary connection and connect in the real harddrive/vmdk file.
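
As a sketch, the copy from the ESXi shell looks something like the following (hostname, paths, and guest name are placeholders for your own values):

scp user@kvmhost:/var/lib/libvirt/images/filename.vmdk /vmfs/volumes/datastore1/newguest/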

The guest can be started and should run as normal.


2012 Jul 02 - Mon

Open vSwitch Configuration on Debian Sid/Wheezy

In my previous article, I went through the steps of installing Open vSwitch for use within a Qemu/KVM solution in substitute of the usual Linux bridge utility.

OVS works differently from normal Linux networking in at least a couple of ways. The first is that a single regular bridge in Linux is designed to handle a single VLAN, ie, one bridge per VLAN. This is one reason why I decided to forgo using the standard Linux bridge utilities. With OVS, an OVS bridge takes VLAN trunking in stride, and can bridge out trunk ports or regular access ports. A trunk port uses the command 'ovs-vsctl add-port br1 tap0', whereas an access port connected to VLAN 101 would use 'ovs-vsctl add-port br1 tap0 tag=101'. Both commands attach the tap0 interface to the bridge br1, the first as a trunk, the second as an access port.

The second difference relates to something which I'll loosely define as non-persisted network configuration versus persisted network configuration. Using regular Linux bridging tools, the individual commands for bridging are incorporated into the /etc/network/interfaces stanzas. These stanzas are run on power up and power down, and manually when changing network interface state at the command line. The state of the network is not remembered from session to session (ie, through reboots).

With OVS, things work a bit differently. Commands, as entered at the command line, not only update the appropriate network state, but the requested network configuration is recorded into a database maintained by OVS. Thus, when rebooting the physical machine, what existed prior to reboot will be automatically recreated after reboot. The punch line is that OVS configuration commands should not be entered into the /etc/network/interfaces file, as the commands would be redundant, and may even cause problems on reboot if entered there.
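
A quick way to see what OVS has recorded in its database, and will therefore recreate on reboot, is:

ovs-vsctl show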

For attaching bridge port 1 to eth1, the following commands are used:

ovs-vsctl add-br br1
ovs-vsctl add-port br1 eth1

Then the stanzas for /etc/network/interfaces are:

auto eth1
allow-hotplug eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto br1
iface br1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

Using the above, I see eth1 come up automatically, but br1 still needs an 'ifconfig br1 up'. Perhaps it is an effect of when the OVS network services are started in relation to when the /etc/network/interfaces file is evaluated.

Open vSwitch VLANs has a good reference for configuring access ports.

How to Use Open vSwitch with KVM deals with the special up/down files needed for Qemu/KVM.

In a nutshell, two files are required. The first, /etc/network/ovs-ifup which adds a tap interface to br1:

#!/bin/sh
switch='br1'
ovs-vsctl add-port ${switch} $1
/sbin/ifconfig $1 0.0.0.0 up

The second, /etc/network/ovs-ifdown removes the tap interface:

#!/bin/sh
switch='br1'
/sbin/ifconfig $1 0.0.0.0 down
ovs-vsctl del-port ${switch} $1

Be aware that the above represents two scenarios: the physical source port is either an access port or a trunk port. If the physical port is an access port, then the access port traffic will be delivered to the guest, as expected. On the other hand, if the physical port is a trunk port, then all VLAN traffic on the physical port will be replicated to the guest machine's network port, and the guest machine will need to know how to work with 802.1q tagged frames.

To deliver access ports to the guest from a trunked physical port, the following example delivers VLAN 101 as an access port to a guest machine. /etc/network/ovs-ifup:

#!/bin/sh
switch='br1'
/sbin/ifconfig $1 0.0.0.0 up
ovs-vsctl add-port ${switch} $1 tag=101

For /etc/network/ovs-ifdown:

#!/bin/sh
switch='br1'
/sbin/ifconfig $1 0.0.0.0 down
ovs-vsctl del-port ${switch} $1

Then a -net parameter like the following can be used when starting Qemu/KVM:

-net tap,ifname=tap0,script=/etc/network/ovs-ifup,downscript=/etc/network/ovs-ifdown
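
Putting it together, a sketch of a qemu invocation using these scripts (the image path and memory size are placeholders):

qemu -hda images/guest.img -m 512 -enable-kvm \
    -net nic,model=virtio \
    -net tap,ifname=tap0,script=/etc/network/ovs-ifup,downscript=/etc/network/ovs-ifdown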

A few other useful status commands:

ovs-dpctl show br1
ovs-ofctl show br1


2012 Jun 29 - Fri

Installing Open vSwitch on Debian Sid/Wheezy

Open vSwitch appears to be the up and coming mechanism for connecting virtual guests to the physical network through bridges, vlans, and network cards.

Through my previous postings, I described the processes I went through to get basic networking running with my virtual guests in a QEMU/KVM environment on Debian Sid/Wheezy. After getting basic, simple bridged connectivity working with bridge-utils, I realized I wanted to make things a bit more complicated. I want to be able to make use of 802.1q VLAN trunking ports, and to bridge to individual VLANs from various virtualized guests.

I believe it would have been relatively straightforward to add the Debian VLAN module and start bridging that way. But after a bit more research, it looks like Open vSwitch handles that, plus has a number of other capabilities. Namely, it can deal with sFlow and Netflow exporting. It has other monitoring features as well. On top of that, Open vSwitches on different machines can be linked together via tunnels. I understand it also integrates into OpenStack.

Therefore, I decided to take the big leap and start into Open vSwitch. With the basic knowledge gained from this tool, I can start to integrate additional infrastructure into the solution.

The first step is to install Open vSwitch. It isn't as simple as expected.

I first shut down and removed existing bridge configurations from the /etc/network/interfaces file. Then I removed bridge-utils:

apt-get remove bridge-utils

Open vSwitch requires a kernel module to be built and installed; it doesn't come pre-built. And I can see the reason: the kernel in Sid/Wheezy is in a state of flux. I started with 3.2.0-1-amd64, but the current is 3.2.0-2-amd64. When building the kernel module, the linux-headers package is needed. I had an error like:

FATAL: Module openvswitch_mod not found. * Inserting openvswitch module

So the following was needed to sync headers and kernel:

apt-get install linux-headers-3.2.0-2-amd64
apt-get install linux-image-3.2.0-2-amd64

With the kernel and headers consistent, the Open vSwitch kernel module is downloaded and built:

apt-get install openvswitch-datapath-source
module-assistant auto-install openvswitch-datapath

The main Open vSwitch modules can then be installed:

apt-get install openvswitch-common
apt-get install openvswitch-switch

Output similar to the following should be obtained:

[ ok ] ovs-brcompatd is not running.
[ ok ] ovs-vswitchd is not running.
[ ok ] ovsdb-server is not running.
[ ok ] Inserting openvswitch module.
[warn] /etc/openvswitch/conf.db does not exist ... (warning).
[ ok ] Creating empty database /etc/openvswitch/conf.db.
[ ok ] Starting ovsdb-server.
[ ok ] Configuring Open vSwitch system IDs.
[ ok ] Starting ovs-vswitchd.
[ ok ] Enabling gre with iptables.

I can now start to experiment with Open vSwitch.


QEMU/KVM Bridge/Tap Network Configuration

QEMU has a number of different ways of connecting the network to a virtualized guest, or vice-versa, connecting a virtualized guest to the network. The mechanism providing the most flexibility involves using a bridge mechanism plus a tap mechanism.

I'll resort to a cattle culling metaphor to explain the basic concept. Each cow represents a packet. The cows from the field are directed from the fields into a holding pen, which is loosely represented by the bridge concept. From the holding pen, individual cows are selected and directed to their destination, which is loosely represented by the tap.

From an implementation point of view, for the purposes of this discussion, the bridge bonds together an external physical interface (such as eth1) with a series of internal interfaces connecting to the virtualized guests (tap0, tap1, ...). The tap interfaces connect the virtualized guests to the bridge; the bridge works to get those packets out onto the physical interface and, for externally arriving packets, works to get the packets to the appropriate tap interface.

On a Linux Debian Wheezy system, things are straightforward to configure. The system, for the purposes of this example, has two external interfaces: eth0 and eth1. I have eth0 configured with private IP addresses behind a firewall, and it is dedicated to physical host management. The other interface, eth1, is connected to the public side of the network. The physical machine has no addresses on this side, to help reduce the risk of compromise of the physical host. For different environments, an IP address could be assigned to the bridge interface, which would result in the physical machine being reachable on the public side of things.

Even more sophistication can be obtained with VLANs on the external interface. This topic is best left for another blog entry, and will be covered with a discussion of Open vSwitch.

Before looking at the actual implementation commands, I need to mention two more caveats. The first is that the following example turns off spanning tree related commands. The configuration is basically a stub off a physical network. If multiple physical interfaces will belong to the same bridge group, then spanning tree will need to be enabled. That would be a discussion for one or more separate blog entries.

The second caveat is that the example is based upon QEMU being used with root level privileges. Many other blog entries use this type of networking in non-root scenarios, and are thus a bit more complicated due to the fact that sudo is required for bringing up tap connections.

To ensure bridge utilities can be used, the result of the following should be CONFIG_TUN=m or CONFIG_TUN=y:

grep CONFIG_TUN= /boot/config-`uname -r`

Ensure the bridge utilities are installed. The assumption is that qemu/kvm are already installed and operational.

apt-get install bridge-utils

Here is an extract from the /etc/network/interfaces file:

auto eth1
allow-hotplug eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto bvi1
iface bvi1 inet manual
    pre-up brctl addbr bvi1
    pre-up brctl addif bvi1 eth1
    pre-up brctl stp bvi1 off
    pre-up brctl setfd bvi1 1
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    post-down brctl delif bvi1 eth1
    post-down brctl delbr bvi1

The eth1 interface is automatically started. For this example, I have used Cisco terminology for the bridge interface: bvi, an acronym for Bridged Virtual Interface. I have numbered the bridge interface bvi1 to indicate that it is attached to eth1.

Under the 'iface bvi1 inet manual' stanza, there is a series of pre-up commands. These are commands which are operable at the command line as well. The brctl command is used to create the bvi1 interface, add eth1 to the bridge group, turn spanning tree protocol off, and then set a forwarding delay. The interface is then brought up.

When shutting down the interface, eth1 is removed from the bridge group, and then bvi1 is deleted.

Two more files need to be created with execute privileges. The first is /etc/qemu-ifup. This is a file which qemu executes in order to connect the virtualized guest network interface to the bridge:

#!/bin/sh
/sbin/ifconfig $1 0.0.0.0 promisc up
/sbin/brctl addif bvi1 $1

The other is /etc/qemu-ifdown which qemu uses when the virtualized guest exits:

#!/bin/sh
/sbin/brctl delif bvi1 $1

I add the following parameters to the qemu startup command:

-net nic,vlan=0,model=virtio -net tap,vlan=0,ifname=tap0

'-net nic,vlan=0,model=virtio' sets the type of network card to emulate. 'qemu -net nic,model=?' can be used to obtain a list of available devices. 'virtio' is a driver used to optimize the speed of communications between the guest and the host. '-net tap,vlan=0,ifname=tap0' creates the tap0 interface, and it is with this parameter that the /etc/qemu-ifup and /etc/qemu-ifdown scripts are used for joining the tap interface to the bridge.
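
For illustration, a complete invocation along those lines might look like the following (the image name is a placeholder; qemu picks up /etc/qemu-ifup and /etc/qemu-ifdown by default when no script parameters are given):

qemu -hda images/guest.img -m 512 -enable-kvm \
    -net nic,vlan=0,model=virtio -net tap,vlan=0,ifname=tap0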

IP addresses can then be assigned statically or via dhcp within the guest, and all network operations behave as though the guest is directly connected to the network.

A few web sites I encountered provided additional background material.


2012 May 09 - Wed

LXC Linux Containers on Debian

As part of the virtualization solutions I'm working on, I wanted something a little lighter weight than a full Qemu/KVM solution, which has a higher overhead than a kernel oriented container mechanism. There are many services that can be run within a common operating system. Containers provide the ability for solid segregation of services at the price of very low overhead.

For a container style of virtualization, I had been looking at OpenVZ (which is used by Parallels' Virtuozzo, I believe), Linux-VServer, and VirtualBox. However, on Debian's Upgrade Path to Wheezy, I see that Debian is deprecating OpenVZ and Linux-VServer.

Independently of this, I had earlier come across lxc Linux Containers. As such, it looks like LXC is the path forward for lightweight, container-based virtualization.


2012 May 04 - Fri

Qemu/KVM Headless Console

Most examples for beginners using Qemu/KVM assume that the user is starting up guest operating systems from a desktop running some sort of X11 windowing environment. As I need to host guests on servers with a minimal installation, this article reflects the very easy experience of running a guest and connecting to it without the safety net of an X11 installation.

In a previous article, Virtualization on Opensource Platforms, I discussed the steps of installing the Qemu/KVM virtualization tools on Debian Wheezy. I was able to immediately start up with a local image in a local console.

This time, I want to create a new install of a guest Debian Wheezy on my existing Debian Wheezy host. The first step was to wget the latest Debian Wheezy iso and put it into a sub-directory called iso.

An image file needs to be created from which the guest will run. There are various ways of doing this from a thin/thick/sparse/initialized-with-0 perspective, but I'll use the shortest way of getting it going:

qemu-img create images/test.img 4g

This creates a file called test.img in the sub-directory images, and makes the file 4G in size.

The next step is to boot from the iso and install to the img file. At the same time, I want to access the console remotely. To do this, one uses the vnc protocol. The base tcp port for vnc is 5900. As there may be multiple guests running, each needs a unique instance number. I ran the following command line:

qemu -hda images/test.img -cdrom iso/debian-testing-amd64-netinst.iso \
    -boot d -m 512 -cpu kvm64 -name test -vnc :2 -no-reboot -enable-kvm

This associates the image file with drive hda and associates the cdrom device with the iso; '-boot d' says boot from the cdrom; '-vnc :2' says use display 2 with vnc, which translates into tcp port 5902; and '-no-reboot' means that when the install is done, I don't want the guest to reboot automatically.

On another machine, I can connect to the host machine with a suitable vnc viewer, connect to port 5902, and am able to see the console of the installer.
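
For example, with a typical vncviewer client on the remote machine (the hostname is a placeholder):

vncviewer hostmachine:2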

Once the install is complete, I restart the guest with the following command line:

qemu -hda images/test.img -boot c -m 512 -cpu kvm64 -name test -vnc :2 -enable-kvm

This boots from the drive, and reconnects the vnc server to the same port.

I now have a fully functional guest running in a NAT'd environment on a host. The next article will get bridged networking up and running.

In preparation for more virtualization functionality, some other sites I've come across:

  • Virtual Square: Discusses the various aspects of VDE, the virtual Distributed Ethernet tool package, which gets guests across different hosts communicating with each other. This, I think, is from where a bunch of network based redundancy and resiliency will arise.
  • Hardware virtualization with QEMU: A good intro to Qemu and networking, but unfortunately, rather than discussing KVM, it uses some lesser known hardware virtualizer. Other than that, there is some good discussion about the network side of things.


2012 Apr 30 - Mon

NexentaStor Root / Expert Mode

Nexenta has a pretty good web interface on their SAN product, and when that isn't good enough, there is always their web console mode. But when even that isn't good enough, and you just have to see what is happening under the hood, there is something called expert mode.

Logging in as admin provides some good commands when dealing with the various file shares. When I changed into root mode, I always wondered why it was so lacking. By reading someone else's site, I now know why. Nexenta has disabled most of the root functionality, and you can only access it by going a secret route:

option expert_mode=1
!bash

When finished, use exit and then run their resync command to make everything right.


2012 Apr 26 - Thu

Cloud / Virtualization Management

At the KVM/Qemu command level, there are a number of additional configuration requirements for connecting a network, remote storage, local devices, and other related operating environment devices. I'll write an entry about those later on. In the meantime, I wanted to put together a list of higher level management tools which may eliminate the need for dealing with the lower level infrastructure details. Some of the ones I have encountered are listed below.

For those wondering what the difference is between cloud computing and virtualization, NIST has come up with a definition to help clarify the cloud computing side of things: A NIST Definition of Cloud Computing

  • OpenStack: A group of companies organized around Rackspace to come up with a scaleable, resilient virtualization environment.
  • cloudStack: Cloud management tools developed primarily through Citrix for the Xen platform. I think Citrix has now joined the Openstack movement. It has been said (Cloud code merges with Cloudstack) that code from cloudstack will merge into openstack.
  • OpenNebula: A seemingly robust, mature cloud management platform widely used by many big name companies. It interacts with Xen, KVM, and VMWare. There appears to be a good set of installation and configuration documents.
  • Ganeti: Ganeti is a cluster virtual server management software tool built on top of existing virtualization technologies. It is said that Google uses this to handle their internal cloud/virtualization requirements. There is a good set of top level documents.
  • Eucalyptus: One of the first open source cloud platforms I encountered. It has good integration with Amazon's EC2 services. It seems there are partially closed services in Eucalyptus.
  • openQRM: openQRM supports and seamlessly manages VMware, Xen, KVM and Citrix XenServer virtual machines. openQRM supports P2V (physical to virtual), V2P (virtual to physical) AND V2V (virtual to virtual) migration. The one difficulty with this is that their web site offers no direct line to downloads or documentation, so I'm not sure just how 'open' it is.


2012 Apr 24 - Tue

Virtualization on Opensource Platforms

VMWare is a very popular virtualization solution. I first encountered it many years ago when someone introduced me to VMWare Workstation. I was very impressed with the ability to run any operating system and its associated applications fully isolated 'on top' of an existing operating system. It makes for a very effective mechanism for trying out different operating systems. In addition, with its snapshot capabilities, testing and performing experiments is simplified by having only to roll back to a specific marker in order to start over again with another variation.

Since then, I have progressed to using VMWare in a multi-host environment. In order to make use of the various enterprise grade resilient and redundancy options offered by VMWare, one has to spend enterprise style funds to obtain and maintain those abilities. For smaller organizations, those costs can be prohibitive. When one is used to working with open source software, those costs may also become out of proportion to the costs associated with the hosted operating systems and applications.

The natural inclination is therefore to look for virtualization solutions in the open source world. From a Kernel perspective, the two full virtualization environments I have encountered are Xen and KVM/Qemu. Xen has been supported through the efforts of Citrix.

My interests have been attracted to the KVM/Qemu side of the fence. The letters KVM are an abbreviation for Kernel-based Virtual Machine. The main support web site for this is found at www.linux-kvm.org. Primarily, KVM is a kernel module used for managing the hardware integrated virtualization within the host operating system.

Associated with KVM is Qemu, which is a machine emulator and virtualizer. It provides related drivers and scaffolding for the guest environment.

When the underlying physical CPU(s) do(es) not have the built-in virtualization capabilities, KVM cannot be used, and Qemu handles the full emulation/virtualization structures for hosting a guest environment.

On current testing version of Debian, starting up a basic virtualization environment is as easy as running a few installation commands:

apt-get install qemu-kvm libvirt-bin
apt-get install qemu

To test the installation, download a test image, and decompress it:

wget http://wiki.qemu.org/download/linux-0.2.img.bz2
bzip2 -d linux-0.2.img.bz2

Running the simplest command may get you an error if you are starting things from the console and do not have X running:

# qemu linux-0.2.img 
Could not initialize SDL(No available video device) - exiting

To get around this when running from the console, change the command slightly:

# qemu -curses linux-0.2.img

That is a quick introduction to the basics of hardware accelerated virtualization on a console based Linux platform.



New blog site at: Raymond Burkholder - What I Do
