Linux Containers

TODO: pull in the overview from the chroot jail page.


Note: This process assumes that vlans and bridging are already configured on the host.

Install Packages

sudo apt-get install lxc libvirt-bin pm-utils

AppArmor Tweak - Not needed for Ubuntu 12.04+

Open the file /etc/apparmor.d/lxc/lxc-default and change this line (approx line 4):

profile lxc-container-default flags=(attach_disconnected) {

to:

profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
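If you'd rather script the edit, the same change can be made with sed. A sketch, demonstrated on a sample copy so the effect is visible (on a real host you would run the sed against /etc/apparmor.d/lxc/lxc-default with sudo, then reload the AppArmor profile):

```shell
# Demonstrate the edit on a sample line rather than the real file.
printf '%s\n' 'profile lxc-container-default flags=(attach_disconnected) {' > /tmp/lxc-default.sample

# Add mediate_deleted to the flags list.
sed -i 's/flags=(attach_disconnected)/flags=(attach_disconnected,mediate_deleted)/' /tmp/lxc-default.sample

cat /tmp/lxc-default.sample
# profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
```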

Bridge Selection Tweak

Since we have our own specific bridges, we will disable the default bridge that LXC creates. Open the file /etc/default/lxc-net (/etc/default/lxc before Ubuntu 13.10) and change the following line:

USE_LXC_BRIDGE="true"

to:

USE_LXC_BRIDGE="false"

We will configure LXC to use the LAN bridge (brLAN) as the default. Open the file /etc/lxc/lxc.conf and set:

lxc.network.type = veth
lxc.network.link = brLAN
lxc.network.flags = up


At this point, you will be able to connect to this host using the Virtual Machine Manager GUI over SSH from a Linux machine with a desktop.

Add a storage location

Rather than move the LXC containers into the location used for KVM disks, I added a new storage location (using the GUI) that points to /var/lib/lxc and called it LXC_Containers.

Install an Ubuntu base system into a directory

sudo lxc-create -t ubuntu -n precise-template -- -r precise
  • precise-template will be the name of the new container
  • precise will be the Ubuntu version installed into the container

Note that at the end of the installation you are given the default username and password for the new container:

  • Username: ubuntu
  • Password: ubuntu

Add the template into the libvirt inventory

Create a file called precise-template.xml and give it the following content:

<domain type='lxc'>
  <name>precise-template</name>
  <memory>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <clock offset='utc'/>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/precise-template/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='brLAN'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>

Now run the following:

virsh -c lxc:/// define precise-template.xml

The container should now show in the Virtual Machine Manager inventory. From there you can start the container and access its text console.
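Each container needs its own definition file, so generating the XML from a name variable saves retyping it for every clone. A sketch, assuming the same layout as the template above (the name web01 and the /tmp output path are examples):

```shell
NAME=web01  # example container name; change per container

# Write a libvirt LXC definition for this container.
cat > /tmp/${NAME}.xml <<EOF
<domain type='lxc'>
  <name>${NAME}</name>
  <memory>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <clock offset='utc'/>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/${NAME}/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='brLAN'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
EOF

# On the host, you would then register it with:
#   virsh -c lxc:/// define /tmp/${NAME}.xml
```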

Some basic packages and configuration

Use the Virtual Machine Manager to start the container and connect to its text console.

apt-get update
apt-get install nano telnet

LDAP Authentication

To streamline connecting containers to the LDAP directory, I followed all the steps here but at the end of the process I used pam-auth-update to disable the LDAP module. This means that LDAP is there and ready to go but not enabled by default.

Ubuntu Saucy SSH Issue

Without this adjustment you will not be able to ssh into a container. In auth.log you will see the following:

sshd[13852]: Accepted password for ubuntu from port 59813 ssh2
sshd[13852]: pam_loginuid(sshd:session): set_loginuid failed
sshd[13852]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
sshd[13852]: error: PAM: pam_open_session(): Cannot make/remove an entry for the specified session

To fix, open /etc/pam.d/sshd and comment out the following line:

session    required     pam_loginuid.so
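The comment-out can be scripted with sed. A sketch, demonstrated on a sample copy of the line (on the real host, run the sed against /etc/pam.d/sshd with sudo):

```shell
# Demonstrate on a sample copy rather than the real /etc/pam.d/sshd.
printf '%s\n' 'session    required     pam_loginuid.so' > /tmp/pam-sshd.sample

# Prefix the pam_loginuid session line with '# '.
sed -i 's/^session[[:space:]]\+required[[:space:]]\+pam_loginuid\.so/# &/' /tmp/pam-sshd.sample

cat /tmp/pam-sshd.sample
# # session    required     pam_loginuid.so
```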

First Run Script

The following script detects that the container is doing its first startup since it was copied from the template (e.g. the ssh host keys have been deleted) and fixes up a few things (e.g. generates new ssh host keys).

Insert the following into /etc/rc.local before the exit 0 line:

# First startup after being deployed from template
if [ ! -e /etc/ssh/ssh_host_ecdsa_key ]; then
	/usr/sbin/dpkg-reconfigure openssh-server
fi
if [ $(echo "" | perl -w 2>&1 | grep -c locale) -gt 0 ]; then
	apt-get install --reinstall locales
	locale-gen en_AU.UTF-8
	update-locale --reset LANG=en_AU.UTF-8
fi


The interactive steps below run at user login rather than from rc.local; a script under /etc/profile.d is a suitable home for them:

# This function displays a warning to the user that they are about to be
# prompted for their sudo password.
SUDO_WARNING_DISPLAYED=0
function sudo_warning {
    if [ $SUDO_WARNING_DISPLAYED -lt 1 ]; then
        echo "Some system properties haven't yet been configured since this system was deployed.  To complete this configuration you will be prompted to re-enter your login password for sudo."
        SUDO_WARNING_DISPLAYED=1
    fi
}

## First startup after being deployed from template
# If we're not running as a sudo-enabled user, skip.
REBOOT_NEEDED=0
if [ $(groups | grep -c sudo) -gt 0 ]; then
    # Set a hostname
    CURRENT_HOSTNAME=$(cat /etc/hostname)
    if [ $(echo $CURRENT_HOSTNAME | grep -c template) -gt 0 ]; then
        sudo_warning
        echo -n "Please specify a new hostname: "
        read NEW_HOSTNAME
        if [ "$NEW_HOSTNAME" = "" ]; then
            echo "Not changing hostname"
        else
            sudo sed -i "s/${CURRENT_HOSTNAME}/${NEW_HOSTNAME}/ig" /etc/hostname
            sudo sed -i "s/${CURRENT_HOSTNAME}/${NEW_HOSTNAME}/ig" /etc/hosts
            REBOOT_NEEDED=1
        fi
    fi

    # Set a timezone
    if [ ! -e /etc/tzconfigured ]; then
        sudo_warning
        sudo dpkg-reconfigure tzdata
        sudo touch /etc/tzconfigured
    fi

    # Reboot if we made a change that needs it
    if [ $REBOOT_NEEDED -gt 0 ]; then
        sudo reboot
    fi
fi
Tidy up

Before we shut down the template and clone it to make all our new containers, let's just clean up some of the mess:

sudo apt-get clean
sudo rm /etc/ssh/ssh_host_*
sudo rm /etc/tzconfigured

First run

These steps should be completed the first time the container is run:

Enable LDAP Authentication (optional)

sudo pam-auth-update

Virtual Machine Manager Limitations

I did find a few limitations when using the Virtual Machine Manager. Most are minor and none are show-stoppers.

You can't create a container

The nice wizard that would guide you through creating a new VM doesn't work for containers. There is an error message on the first screen and I can't get any further. To work around this, use the .xml file template and virsh -c lxc:/// define <filename> from the console.

Note: Retest this. My virt-manager might have just been being a jerk.

You can't clone a container

Well, to be specific, you can clone a container, but it doesn't clone the storage directory; it just creates a new container that points at the same location as the original. While frustrating, just use the .xml/virsh define trick above until this bug is resolved.
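Until that bug is fixed, a manual clone is just a copy of the rootfs plus a fresh definition. A sketch of the steps, played out under a scratch directory so they can be seen working (on a real host, drop $ROOT, run with sudo, and use your actual container names):

```shell
ROOT=/tmp/lxc-clone-demo           # scratch prefix, for illustration only
SRC=precise-template
DST=web01                          # example name for the new container

# Stand-in for the template's rootfs (this already exists on a real host).
mkdir -p $ROOT/var/lib/lxc/$SRC/rootfs/etc/ssh
touch $ROOT/var/lib/lxc/$SRC/rootfs/etc/ssh/ssh_host_ecdsa_key

# 1. Copy the template's directory to the new container's location.
cp -a $ROOT/var/lib/lxc/$SRC $ROOT/var/lib/lxc/$DST

# 2. Delete the copied ssh host keys so the first-run script regenerates them.
rm -f $ROOT/var/lib/lxc/$DST/rootfs/etc/ssh/ssh_host_*

# 3. On the host, create a new .xml for $DST and register it:
#      virsh -c lxc:/// define web01.xml
```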

Note: Everything in this section is probably redundant if you're using libvirt to manage containers. It is left here for reference only.

Limiting CPU

There are two ways of limiting CPU in LXC. On a multi-core system, you can assign specific CPUs to a container by adding a line like one of these to the container's config file (/mnt/vm0/config or similar):

lxc.cgroup.cpuset.cpus = 0      # assigns the first CPU to the container
lxc.cgroup.cpuset.cpus = 0,2,3  # assigns the first, third, and fourth CPUs to the container

The alternative (this one makes more sense to me) is to use the scheduler. You can use share values to say 'I want this container to get three times the CPU of that container'. For example, add:

lxc.cgroup.cpu.shares = 2048

to the config to give a container double the default (1024).
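Shares only matter under contention, where each container gets its shares as a fraction of the total shares of all busy containers. A quick sketch of the arithmetic for two containers competing flat-out:

```shell
A=2048   # container with doubled shares
B=1024   # container with the default

TOTAL=$((A + B))

# Under full contention, each gets shares/total of the CPU.
echo "A gets $((100 * A / TOTAL))%, B gets $((100 * B / TOTAL))%"
# A gets 66%, B gets 33%
```

An idle container's unused share is redistributed, so shares cap nothing when the CPU isn't contended.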

Limiting RAM

To limit RAM, simply set:

lxc.cgroup.memory.limit_in_bytes = 256M

(replacing 256M with however much RAM you want to allow).

Limiting SWAP

To limit swap, set:

lxc.cgroup.memory.memsw.limit_in_bytes = 1G

Note that memsw.limit_in_bytes caps RAM plus swap combined, so it must be at least as large as the memory limit above.

Limiting Disk Space

Note: todo

Limiting Disk IO

Note: todo