
Setting up rsyslog and logrotate for the new Eucalyptus Console

The latest release of Eucalyptus has introduced a new user console. The user console is written in Python and uses the logging module, which can easily be set up to work with rsyslog. It even does so out of the box! Unfortunately, the user console sends a lot of verbose and not very useful information into /var/log/messages, which I don't like. /var/log/messages is the main log for my system, so I'd like to make sure that I can find important messages easily. The Eucalyptus user console also uses the Python ConfigParser module, so we can easily configure the console, including the logger. Unfortunately, the logging configuration does not seem to take effect, but we can fix this easily!

So, in this blog let's first fix the logging so that it is configurable (I'll file a bug for this soon), then write the verbose log information to a file other than /var/log/messages, and finally set up logrotate to rotate the log on a daily basis.

Before starting, make sure that you have installed and configured the Eucalyptus console following the instructions found in the Eucalyptus 3.2 User Console Guide. Make sure that the console is working by starting it up (service eucalyptus-console start) and verifying that you can log in (https://my-host-or-ip:8888). This way, if the configuration gets trashed you'll know that it was previously working. As always, I suggest you make backups of any files that are edited, just in case.

When configuring the Eucalyptus user console in /etc/eucalyptus-console/console.ini you might have noticed the logging section. If you attempt to change the formatting in this section, for example, and then check /var/log/messages, you'll notice that nothing changed. It turns out that the Python logging module is called directly in /usr/bin/euca-console-server with the default rsyslog settings, so the configuration is never honored. The configuration file also uses the StreamHandler from the logging module, but nothing in the console consumes that stream, so the information is not captured.

To fix these issues, we'll first need to stop the Eucalyptus console from using the SysLogHandler with the default settings. Open up /usr/bin/euca-console-server and either comment out or remove the following two lines:

handler = logging.handlers.SysLogHandler(address = '/dev/log')
logging.getLogger().addHandler(handler)

With these lines removed, the logger will be started using the configuration in the config file later on in the console’s execution.
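As a quick sketch, the lines can also be commented out with sed, assuming they appear in the file exactly as shown above (adjust the patterns if your copy differs; a .bak backup is kept):

# Sketch: comment out the hard-coded handler lines (patterns are assumptions)
sed -i.bak -e 's|^\(handler = logging\.handlers\.SysLogHandler\)|#\1|' \
    -e 's|^\(logging\.getLogger()\.addHandler(handler)\)|#\1|' \
    /usr/bin/euca-console-server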

Next we’ll configure the logging section in /etc/eucalyptus-console/console.ini to use the SysLogHandler when printing out the logging information. We’ll also make an edit to the formatting string to make it easier to filter out the lines from the Eucalyptus console. Make the logging section of your console.ini file look like the following:

##
# These sections are used to configure the logger. For info, see docs for the python logging package
##

[loggers]
keys=root

[handlers]
keys=eucahandler

[formatters]
keys=eucaformat

[logger_root]
level=INFO
handlers=eucahandler

[handler_eucahandler]
class=handlers.SysLogHandler
level=NOTSET
formatter=eucaformat
args=('/dev/log', handlers.SysLogHandler.LOG_SYSLOG)

[formatter_eucaformat]
format=eucalyptus-console: %(levelname)s %(message)s
datefmt=

##
# End logger config
##

The first two changes occur in the [handler_eucahandler] section. Here the handler is changed from the StreamHandler to the SysLogHandler so that logging messages are sent to the rsyslog process. The arguments are changed to include the device to send the logging information to, /dev/log, as well as which log facility to send the information to (see Python logging module – SysLogHandler for more information on the SysLogHandler).

The change to the format of the logs makes it easier to filter the log lines produced by the Eucalyptus console without needing to send information to one of the LOG_LOCAL* facilities, which could cause issues with other custom log setups (check out Python logging module – SysLogHandler). Since rsyslog automatically adds the time when lines are printed, remove the %(asctime)-15s section of the format line. Adding "eucalyptus-console:" will allow rsyslog to easily filter lines by looking for "eucalyptus-console" in them.
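With the new format in place, a console line in /var/log/messages should look something like the following (the message text here is purely hypothetical):

Dec 14 10:15:04 my-host eucalyptus-console: INFO user admin logged in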

Next, create a file in /etc/rsyslog.d/ with any name you'd like as long as it ends in .conf. In the example, /etc/rsyslog.d/eucalyptus-console.conf will be used. Add the following to it:

# Send the eucalyptus-console output to a separate file
:programname,isequal,"eucalyptus-console"	/var/log/eucalyptus-console.log
&~

This rsyslog configuration file matches any log lines whose programname column contains the "eucalyptus-console" string added above to the format string. If the string is found then rsyslog sends that line to /var/log/eucalyptus-console.log. The &~ tells rsyslog to stop evaluating rules for this line so that it only makes it into this file.
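For the new rule to take effect, restart rsyslog, and then restart the console so that it reconnects to the logger:

service rsyslog restart
service eucalyptus-console restart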

Finally, set up the logrotate configuration file for the Eucalyptus console. Create the file /etc/logrotate.d/eucalyptus-console with the following:

/var/log/eucalyptus-console.log
{
    rotate 5
    daily
    compress
    postrotate
	/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

This file tells logrotate that for the file /var/log/eucalyptus-console.log do the following:

  • rotate 5: Keep 5 rotated logs before the oldest is removed (with daily rotation, that is 5 days of logs)
  • daily: Rotate every day
  • compress: Compress the rotated log files
  • postrotate … endscript: Run the command after rotating the log

With this configuration, the logs will be kept long enough to be useful and then removed so that /var/log and the system disk do not fill up with unneeded logs.
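Before relying on the rotation, the new configuration can be dry-run with logrotate's debug flag, which prints what would happen without touching the log:

logrotate -d /etc/logrotate.d/eucalyptus-console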

So now you have the Eucalyptus console sending logging data to its own file and have that file rotated every day. This is really just the tip of the iceberg, though: since the Eucalyptus console uses rsyslog, we can also send logs to a remote server or search the logs for errors with a tool such as logstash or Graylog.

Add Eucalyptus Account and User Names to Your BASH Prompt

At Eucalyptus, IT tends to have a bunch of different accounts for the clouds that we run. Most of the work we do with these clouds happens from the Cloud Controller of each cloud, and it can become confusing to determine which user's credentials are currently sourced when you have 10 accounts and 15 users. We need a method for easily determining which account and user we are currently accessing the cloud as, so that we don't launch instances or create keypairs using the incorrect user and then have to do it again and clean up the mistake as well (i.e. doing twice the work).

So, one method that we’ve come up with is to put the account and user name of the currently sourced credentials directly into the BASH prompt. Now there is little doubt as to which user’s credentials we are currently using.

The following function will print out (<account>:<user>) for the currently sourced Eucalyptus credentials:

parse_euca_user() {
  # Only do a lookup if Eucalyptus credentials are currently sourced
  if [[ ! -z ${EC2_USER_ID} ]]; then
    # Cache the account name in ${HOME}/.my_ec2_user_id so euare-accountlist
    # is only called when the sourced account ID changes
    if [[ ! -e ${HOME}/.my_ec2_user_id || $(cat ${HOME}/.my_ec2_user_id | cut -d' ' -f1) != ${EC2_USER_ID} ]]; then
      EC2_ACCOUNT_NAME=$(euare-accountlist | grep $EC2_USER_ID | cut -d' ' -f1)
      EC2_USER_NAME=$(basename $EC2_CERT | cut -d'-' -f2)
      echo ${EC2_USER_ID} ${EC2_ACCOUNT_NAME} >${HOME}/.my_ec2_user_id
    else
      EC2_ACCOUNT_NAME=$(cat ${HOME}/.my_ec2_user_id | cut -d' ' -f2)
      EC2_USER_NAME=$(basename $EC2_CERT | cut -d'-' -f2)
    fi
  fi

  # Print nothing when no credentials are sourced
  if [[ ! -z ${EC2_ACCOUNT_NAME} ]]; then
    echo "(${EC2_ACCOUNT_NAME}:${EC2_USER_NAME})"
  else
    echo ""
  fi
}

Add the above function to your ${HOME}/.bashrc file. Next update your prompt by adding a call to the parse_euca_user function, such as the following:

export PS1="\u@\h:\W \$(parse_euca_user)\$ "
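Reload your shell configuration so the new prompt takes effect:

source ${HOME}/.bashrc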

Now you should see something like the following after running source eucarc on the eucalyptus account’s admin user.

root@prod-frontend:admin (eucalyptus:admin)$

Single System Test Cloud, Take 2

Testing Eucalyptus should be easy. In Single System Test Cloud, testing a cloud was made easier by removing the requirement of having at least two systems. But it still added unnecessary complexity by requiring a VM to run the front-end system. After learning from the experience of the first single system test cloud, it became apparent that there should be an easy way to run Eucalyptus without the use of a VM. And yes there is!

First, start off with a CentOS 6 installation. This installation can either be with or without a GUI environment though a GUI will help on a system such as a laptop.

After CentOS 6 is installed, set up a bridge. This bridge will not be attached to any of the physical devices of the system. It will be used by the NC for the instances as well as for communication between the CC and the NC. We will need a free subnet and an IP address that may be used. This subnet can be very small, but a /24 will be used in the example. The bridge will be set up to use the IP address 172.16.0.1 and a netmask of 255.255.255.0.

To setup the bridge on br0 with the above information, place the below into /etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=172.16.0.1
NETMASK=255.255.255.0
NETWORK=172.16.0.0

Note: If a laptop or another system that may get a different IP address is used, it could be a good idea to use a sub-interface. This way if the IP address changes on the system, the configuration will not need to change and the cloud should still function. Eucalyptus does not deal well with changing IP addresses on components.
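As a sketch of that idea, a hypothetical sub-interface configuration in /etc/sysconfig/network-scripts/ifcfg-eth0:0 might look like the following (the device name and addressing are only examples, pick values that fit your network):

# Hypothetical alias interface -- adjust DEVICE and addressing for your system
DEVICE=eth0:0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.1
NETMASK=255.255.255.0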

To prevent a possible issue with the Eucalyptus meta-data service, turn off Zeroconf by adding the following to /etc/sysconfig/network:

NOZEROCONF=true

To have the two new settings above take effect, restart the networking service:

service network restart

Note: If a GUI was installed with the CentOS installation there might be issues caused by Network Manager. To get around these I suggest that you add NM_CONTROLLED="no" to the configuration of the interface that Eucalyptus will use for its IP.

Next disable the system firewall and place SELinux in either permissive or disabled mode. To turn off the firewall use the following command:

system-config-firewall-tui

Deselect the firewall entry. Next edit /etc/selinux/config and change the SELINUX entry to either permissive or disabled. Finish the SELinux configuration by running the following:

setenforce 0
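You can verify the current SELinux mode with:

getenforce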

Now install and configure the NTP service. The NTP service will be set to start at boot and the resulting updated time will be synced to the hardware clock of the system. Run the following commands:

yum -y install ntp
chkconfig ntpd on
service ntpd start
ntpdate -u pool.ntp.org
hwclock --systohc

Eucalyptus is a difficult product to install the first time, so I strongly recommend taking some time to carefully read the Eucalyptus Installation Guide. It will make the steps below much easier to understand. Really.

To begin the Eucalyptus installation, install and set up the needed repositories.

yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/eucalyptus-release-3.1-1.el6.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/euca2ools/2.1/centos/6/x86_64/euca2ools-release-2.1-2.el6.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/epel-release-6-7.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/elrepo-release-6-4.el6.elrepo.noarch.rpm

Now install the Eucalyptus Cloud Controller (clc), Cluster Controller (cc), Storage Controller (sc), Walrus, and Node Controller (nc) on the system.

yum -y groupinstall eucalyptus-cloud-controller
yum -y install eucalyptus-nc eucalyptus-cc eucalyptus-sc eucalyptus-walrus

Since all of the Eucalyptus components are now installed, it is time to configure the system. Before the changes can be made to the configuration file there is some information that needs to be gathered.

Eucalyptus requires a list of public IP addresses that can be given to instances that are started. For this type of cloud five should be sufficient, but one IP will be needed for each instance that is run. These public IPs do not need to be publicly routed or even routed on your network. In the example below, the range "10.104.5.55-10.104.5.60" will be used for the list of public IPs.

Eucalyptus will create a private network that will be used for instance communication. This network subnet should be one that is not currently utilized on the local network. This network subnet should have at least 256 addresses in it (this is a /24 or a netmask of 255.255.255.0). In this example, the subnet that will be used is 172.31.0.0 with the netmask 255.255.255.0.

Eucalyptus will give each instance a DNS server to use when it boots. For this example we will use Google’s Public DNS server at 8.8.8.8.

Open up /etc/eucalyptus/eucalyptus.conf and change the following settings. If any of the example settings conflict with the local network, make sure to swap the values for ones that will work. Also make sure to remove any "#" characters that might be at the beginning of these settings.

CREATE_SC_LOOP_DEVICES=256
USE_VIRTIO_NET="1"
VNET_MODE="MANAGED-NOVLAN"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="eth0"
VNET_BRIDGE="br0"
VNET_PUBLICIPS="10.104.5.55-10.104.5.60"
VNET_SUBNET=172.31.0.0
VNET_NETMASK=255.255.255.0
VNET_ADDRSPERNET="16"
VNET_DNS=8.8.8.8

When the Eucalyptus NC service was installed, libvirtd was also installed. dnsmasq comes with libvirtd but causes issues with Eucalyptus. So, we're going to turn off dnsmasq and disable it from starting at boot.

service dnsmasq stop
chkconfig dnsmasq off

Now it is time to initialize the Eucalyptus DB and to start the components. Run the following command:

euca_conf --initialize

If the output of the above command includes the word "succeeded" then the database was successfully set up. Next, start the services.

service eucalyptus-cloud start
service eucalyptus-cc start
service eucalyptus-nc start

Check to see if the services are running by looking for the following ports in the output of netstat -ntplu: 8443, 8773, 8774, 8775. If all of these ports are found in the output then the services are running.
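For example, a one-liner along these lines will show just those ports (the exact output format depends on your netstat version):

netstat -ntplu | egrep ':(8443|8773|8774|8775)'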

Registration of the components can now take place. Register all of the components, except for the NC, on the same IP. The NC should be registered on the IP given to the br0 interface above. For this example, the system has been setup with the IP 10.104.5.1 so all components, except for the NC, will be registered to that IP.

/usr/sbin/euca_conf --register-walrus --partition walrus --host 10.104.5.1 --component walrus-single
/usr/sbin/euca_conf --register-cluster --partition cluster01 --host 10.104.5.1 --component cc-single
/usr/sbin/euca_conf --register-sc --partition cluster01 --host 10.104.5.1 --component sc-single
/usr/sbin/euca_conf --register-nodes "172.16.0.1"

Note: If the IP of the system that this is being setup on is not 10.104.5.1 then please replace the IP above with the correct IP.

Finally, the cloud should be running and registered so that credentials can now be downloaded. Run the following to get the cloud administrator’s credentials:

euca_conf --get-credentials admin.zip

Unzip the resulting admin.zip file into a directory. Next, run the following command inside of the directory where the admin.zip file was unzipped.

source eucarc

To see if the cloud has been able to find resources on the system run the following command:

euca-describe-availability-zones verbose

If the output does not contain 000 / 000 on every line then the cloud is operating successfully. Now an image should be uploaded to the cloud so that an instance may be run. I will leave this as an activity for the reader. Information on images can be found in the Eucalyptus Administration Guide.

Single System Test Cloud

UPDATE: I’ve posted an easier way to do this. Check out my other post Single System Test Cloud, Take 2 for the instructions.

When first trying out Eucalyptus, the requirement of having two systems at a minimum can become a blocker. What if you only want to test out running a couple VMs? Well for that use case a system such as the ECC might be just fine. But what if you are looking to try out the installation process or see what a Cloud Administrator needs to deal with? For this, it would be nice if a single machine could be used for some basic testing and if those tests are good then more resources could be purchased for a more proper proof-of-concept configuration. My coworker Graziano tried this out by using two VMs in his blog Developer Cloud but that requires that your CPU allows for nested virtualization.

For a single system configuration the Eucalyptus Front-end (CLC, Walrus, CC, SC) will be installed in a VM on the bare metal host that will run the Eucalyptus NC. A private network subnet will be setup that only the bare metal system and the VM can use to communicate. Two bridges will be used for the communication between the front-end (VM) and the NC (bare metal).

First start off with a stock CentOS 6 installation on your bare metal system. Install and setup KVM and libvirtd. Also make sure that you have enough room for a VM to run a Eucalyptus front-end (50GB minimum) along with any instances that you might wish to run (50GB minimum).

To allow us to use the MANAGED-NOVLAN networking mode we will be setting up two network bridges. One of these bridges will be our public network and the other will be our private network. The IP addresses and subnet should not be routed on the rest of the network. For example, if we were going to have the interface br0 use the IP address 172.16.1.7 with a netmask of 255.255.255.0 we would use the following configuration:

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
DELAY=0
NETWORK=172.16.1.0
NETMASK=255.255.255.0
IPADDR=172.16.1.7

To make sure that the Eucalyptus VM has access to the Internet we will need to have a NAT set up. First add the following iptables rules (Note: you may need to change the interfaces if your system is not set up in the same way):

/sbin/iptables -A FORWARD -i br0 -o em1 -j ACCEPT
/sbin/iptables -A FORWARD -i em1 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
/sbin/iptables -t nat -A POSTROUTING -o em1 -j MASQUERADE
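Note that iptables rules added from the command line are lost on reboot; on CentOS they can be saved to /etc/sysconfig/iptables so that they are restored at boot:

service iptables save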

Next the bare metal system needs to forward packets. To do this, edit /etc/sysctl.conf and set the following:

net.ipv4.ip_forward = 1
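To apply the forwarding setting immediately without a reboot, reload /etc/sysctl.conf:

sysctl -p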

To make sure that we have a fully functioning cloud with all of the available features, we need to shut off Zeroconf, which causes issues with the metadata service of Eucalyptus. To do this add the following to /etc/sysconfig/network:

NOZEROCONF=true

Now restart networking and the Eucalyptus VM will be able to access the Internet (that is, if it is currently accessible by the other machines on your network).

service network restart

Now a CentOS 6 VM will need to be installed on the bare metal system. When setting up the VM, both of the bridges will be used as interfaces for the VM. The defaults of the installer will work, but you will want to use a static address for the networking. Below is an example libvirt XML file:

<domain type='kvm'>
  <name>frontend</name>
  <memory unit="GiB">2</memory>
  <description>Front End</description>
  <cpu match='exact'>
    <model>core2duo</model>
    <feature policy='require' name='vmx'/>
  </cpu>
  <os>
    <type arch="x86_64">hvm</type>
    <boot dev='hd'/> 
  </os>
  <features>
    <acpi/>
  </features>
  <clock sync="localtime"/>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/media/extra/frontend.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/media/extra/CentOS-6.3-x86_64-minimal.iso'/>
      <target dev='hdc'/>
    </disk>
    <interface type='bridge'>
      <source bridge="br0"/>
      <mac address='00:16:3e:21:52:45'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>
      <source bridge="br1"/>
      <mac address='00:16:3e:21:52:46'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>

Note: You may need to change the MAC addresses above if these two already exist on your network. Using MAC addresses from the 00:16:3e:XX:XX:XX pool will be a safe bet as these are registered to be used by VMs. This blog post here has some more information if you are interested.
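Once the XML is adjusted, and assuming it is saved to a file such as frontend.xml (the filename is just an example), the VM can be defined and started with virsh:

virsh define frontend.xml
virsh start frontend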

After the VM’s OS is installed, follow the Eucalyptus Installation Guide to install all components (CLC, CC, SC, Walrus) on the VM. Install the Eucalyptus NC on the bare metal system.

When Eucalyptus is set up and running on the VM, add the following iptables rules to make sure that you are able to access the API and WebUI from the bare metal host.

iptables -t nat -I PREROUTING 1 -p tcp -i em1 --dport 8443 -j DNAT --to-destination **VM_IP**:8443
iptables -t nat -I PREROUTING 1 -p tcp -i em1 --dport 8773 -j DNAT --to-destination **VM_IP**:8773

Now a fully functioning cloud should be setup and ready for you to try out.

Get expert_recipe, mdraid, LVM, GPT and grub2 Playing Together on Ubuntu Lucid and Debian Squeeze

Hard drives are growing and growing. You can now get 3 TB hard drives and have a ton of storage in each server. But then you notice that your current preseed files are crashing when attempting to install grub2. You hit Alt-F4 to check for any errors in the logs and all you see are errors about grub2 not being able to embed in a GPT disk.

When a disk size gets up around 2 TB the Debian installer and partman will default to using the GPT partitioning scheme. Unfortunately older versions of partman do not properly set up a disk with GPT when the disks are part of a software RAID. Luckily the fix is not that difficult as it only requires a few additions (or possibly alterations if you already use some of these options in your preseed file).

What we need to add is the partition that older versions of partman do not properly create. These partitions are of the type bios_grub and only need to be 1MB in size. This partition gives grub2 a location to place its entire loader and allows the BIOS to properly find the grub boot information to load the boot loader. More information about the BIOS boot partition can be found on Wikipedia – BIOS Boot Partition.

To add the bios_grub partition in a preseed with partman we add the following to our expert_recipe:

             1 1 1 free                          \
                $iflabel{ gpt }                  \
                method{ biosgrub }               \
             .                                   \

This adds a 1MB partition as the top priority partition and if the disk label is set to “GPT” then the partition is flagged as biosgrub.

Since we’re creating another partition on the disk you will also need to increase the partition numbers used in your RAID array by 1. For example, if we originally were using this RAID setup in the preseed:

d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda1#/dev/sdb1     \
        .                                        \
        1 2 0 lvm - /dev/sda2#/dev/sdb2          \
        .

We would now need to edit the entry and increase the partitions by 1.

d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda2#/dev/sdb2     \
        .                                        \
        1 2 0 lvm - /dev/sda3#/dev/sdb3          \
        .

Completed, the disk setup in our preseed file looks like this:

d-i     partman-auto/disk string /dev/sda /dev/sdb
d-i     partman-auto/method string raid
d-i     partman-lvm/device_remove_lvm boolean true
d-i     partman-auto/purge_lvm_from_device boolean true
d-i     partman-md/device_remove_md boolean true
d-i     partman-md/confirm_nochanges boolean true
d-i     partman-lvm/confirm boolean true
d-i     partman-auto/choose_recipe select boot-root
d-i     partman-auto-lvm/new_vg_name string vg01
d-i     partman-auto-lvm/guided_size string 100%
d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda2#/dev/sdb2     \
        .                                        \
        1 2 0 lvm - /dev/sda3#/dev/sdb3          \
        .
d-i     partman-auto/expert_recipe string        \
           boot-root ::                          \
             1 1 1 free                          \
                $iflabel{ gpt }                  \
                method{ biosgrub }               \
             .                                   \
             256 10 256 raid                     \
                $lvmignore{ }                    \
                $primary{ }                      \
                method{ raid }                   \
             .                                   \
             1000 20 1000000 raid                \
                $lvmignore{ }                    \
                $primary{ }                      \
                method{ raid }                   \
             .                                   \
             150% 30 150% swap                   \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_swap }               \
                method{ swap }                   \
                format{ }                        \
             .                                   \
             20480 40 20480 ext4                 \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_root }               \
                method{ format }                 \
                format{ }                        \
                use_filesystem{ }                \
                filesystem{ ext4 }               \
                mountpoint{ / }                  \
             .                                   \
             1 50 -1 ext4                        \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_dummy }              \
             .                                    
d-i     mdadm/boot_degraded boolean true
d-i     partman-md/confirm boolean true
d-i     partman-partitioning/confirm_write_new_label boolean true
d-i     partman/choose_partition select Finish partitioning and write changes to disk
d-i     partman/confirm boolean true
d-i     partman-md/confirm_nooverwrite  boolean true
d-i     partman/confirm_nooverwrite boolean true

Looking for more info on using expert_recipe with preseed? Check out my other blog post that goes more in depth, Notes on using expert_recipe in Debian/Ubuntu Preseed Files.

Puppet module for euca2ools

I know that I've been lacking lately on the recipes front. To get started again I've completed a little project: creating a Puppet module for euca2ools. This module is quite simple as it only needs to set up the Eucalyptus euca2ools repository for the OS and then install the euca2ools package. It makes some assumptions, such as: if you have Puppet on a CentOS box then you most likely already have the EPEL repository in place (adding a check in case EPEL is not installed sounds like a great feature).

Currently it’s been tested with Ubuntu 10.04 and 12.04 (using the puppet packages from http://apt.puppetlabs.com/) and CentOS 5 (6 should work but hasn’t explicitly been tested). Check it out in the Eucalyptus Recipes project on GitHub.

There are a couple of ways to run the module. You will need to get the euca2ools folder from the Recipes Project GitHub repo (note that it's inside of the puppet directory of the repo). You then drop the euca2ools folder into /etc/puppet/modules/ or wherever you wish to keep your puppet modules. Finally, simply add an include euca2ools line on the puppet master or, if running in a standalone fashion as talked about on this blog before, create a file at /etc/puppet/manifests/init.pp with the following contents:

include euca2ools

Complex, I know. But it's quite simple. After this you can either wait for the puppet agent to contact the puppet master or, on a standalone box, run puppet apply /etc/puppet/manifests/init.pp. And with that the latest version of euca2ools is installed on your host and ready to go.

Check it out and let me know if you have any suggestions. I'm new to Puppet so any comments from experienced puppeteers would be helpful.

I’ve also added this module to a personal repository and uploaded it to the Puppet Forge. Check it out here: https://forge.puppetlabs.com/ahamilton55/euca2ools

Notes on using expert_recipe in Debian/Ubuntu Preseed Files

When working with IaaS, easily provisioning bare metal is always needed. So, Eucalyptus uses preseed files to set up Debian and Ubuntu servers for testing software, supporting customers, and educating new users. At times there are complex needs for how the servers are set up, and it is not always an easy task.

When first starting out with preseed and needing a complex partition setup, partman-auto/expert_recipe can look daunting. There can be many questions with regard to the somewhat cryptic setup of the recipes. The Debian documentation isn't very helpful upon first look either, but after understanding how to set up a recipe, it becomes quite easy. When RAID and LVM are added, d-i partman-auto/expert_recipe can create more complex disk setups and is a very powerful feature of any preseed setup.

A basic partitioning scheme on /dev/sda using the preseed partman-auto/expert_recipe directive is below:

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string regular
d-i partman-auto/expert_recipe string root :: 19000 50 20000 ext3 \
        $primary{ } $bootable{ } method{ format } \
        format{ } use_filesystem{ } filesystem{ ext3 } \
        mountpoint{ / } \
    . \
    2048 90 2048 linux-swap \
        $primary{ } method{ swap } format{ } \
    . \
    100 100 10000000000 ext3 \
        $primary{ } method{ format } format{ } \
        use_filesystem{ } filesystem{ ext3 } \
        mountpoint{ /srv/extra } \
    .
d-i partman-auto/choose_recipe select root
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select Finish partitioning and write changes to disk
d-i partman/confirm boolean true

The main piece that we'll focus on is the partman-auto/expert_recipe line. (Note: it might look like multiple lines but it is actually a single line with the newlines escaped.) In the above example three primary partitions for /, swap, and /srv/extra are created. The partman-auto/expert_recipe directive is broken down below.

d-i partman-auto/expert_recipe string root ::

The first part of this line tells the Debian installer that "expert_recipe" will be used with partman to partition the hard drive provided on the d-i partman-auto/disk line. Next the Debian installer is told that a string should be expected as the value for the directive. Finally, a recipe title of "root" is given to the recipe. The recipe title is used with the d-i partman-auto/choose_recipe select root directive to tell partman which recipe to use. The :: signals to the Debian installer that we are starting the recipe for the partitions.

Next we'll look at a single partition and how it is created.

19000 50 20000 ext3 \
        $primary{ } $bootable{ } method{ format } \
        format{ } use_filesystem{ } filesystem{ ext3 } \
        mountpoint{ / } \
    . \

The first piece of the above partition recipe consists of three numbers: the minimum size of the partition in megabytes (19000), the priority with which this partition gets its maximum size fulfilled, with lower numbers having a higher priority (50), and the maximum size of the partition, also in megabytes (20000). The next word refers to the format of the partition that will be created.

The next three lines tell partman that this partition should be primary, that the partition should be flagged as bootable, that this partition should be formatted, that the filesystem should be ext3, and finally that the mountpoint for this partition will be "/". The final line with a single "." tells partman that this is the end of the definition for this partition. If more text follows then partman knows that more partitions are being defined, but if a newline is read then it will know that the partition recipe is completed.

Unfortunately the expert_recipe part of partman can currently only handle a single disk for partition-based recipes. There are some hacks with sfdisk that can be used with preseed/late_command to add some basic functionality for other disks. If expert_recipe is used with an LVM setup then multiple disks can be used, as will be shown below.

Below is a more complicated setup utilizing a RAID 1 array on /dev/sda and /dev/sdb and a LVM on top of the created RAID array.

d-i     partman-auto/disk string /dev/sda /dev/sdb
d-i     partman-auto/method string raid
d-i     partman-lvm/device_remove_lvm boolean true
d-i     partman-md/device_remove_md boolean true
d-i     partman-lvm/confirm boolean true
d-i     partman-auto/choose_recipe select boot-root
d-i     partman-auto-lvm/new_vg_name string vg00
d-i     partman-auto/expert_recipe string        \
           boot-root ::                          \
             1024 30 1024 raid                   \
                $lvmignore{ }                    \
                $primary{ } method{ raid }       \
             .                                   \
             1000 35 100000000 raid              \
                $lvmignore{ }                    \
                $primary{ } method{ raid }       \
             .                                   \
             19000 50 20000 ext4                 \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ root }                  \
                method{ format }                 \
                format{ }                        \
                use_filesystem{ }                \
                filesystem{ ext4 }               \
                mountpoint{ / }                  \
             .                                   \
             2048 60 2048 swap                   \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ swap }                  \
                method{ swap }                   \
                format{ }                        \
            .                                    
d-i partman-auto-raid/recipe string \
    1 2 0 ext2 /boot                \
          /dev/sda1#/dev/sdb1       \
    .                               \
    1 2 0 lvm -                     \
          /dev/sda2#/dev/sdb2       \
    .                               
d-i     mdadm/boot_degraded boolean false
d-i     partman-md/confirm boolean true
d-i     partman-partitioning/confirm_write_new_label boolean true
d-i     partman/choose_partition select Finish partitioning and write changes to disk
d-i     partman/confirm boolean true
d-i     partman-md/confirm_nooverwrite  boolean true
d-i     partman/confirm_nooverwrite boolean true

The start of setting up the RAID array is signaled by the following lines:

d-i     partman-auto/method string raid
d-i     partman-md/confirm boolean true

The first part that partman will utilize is the partman-auto-raid/recipe directive. This string defines how the RAID array will be set up on /dev/sda and /dev/sdb. For example, we set up a RAID 1 array for an LVM using /dev/sda and /dev/sdb with the following:

1 2 0 lvm -                     \
          /dev/sda2#/dev/sdb2       \
    .

The first number represents the RAID level (1), the second number refers to the number of devices we are using in the RAID array (2), and the third number refers to the number of spares the RAID array will have available (0). Next the partition type of the RAID array is defined (lvm) and the "-" refers to the mount point of the array. Since an LVM is created on this RAID array there is no mount point, but see the /boot array in the full example above for one that has a mount point. The partitions on the disks that will be used for the array are referenced with each partition separated by a "#". Similar to the partition example above, the array definition is ended with a "." and any text that follows will be considered another array, with a newline telling partman that this recipe is completed.

partman-auto/expert_recipe is used to define the partitions being created for the RAID arrays. Above, two RAID arrays are being created: one for /boot and one for an LVM partition. The recipe then defines two logical volumes to be created on the LVM for "/" and swap.

Next, the definition for a RAID partition is given:

             1024 30 1024 raid                   \
                $lvmignore{ }                    \
                $primary{ } method{ raid }       \
             .                                   \

Above, a RAID array of 1GB with the highest priority and a partition type of "raid" is set up. Since this partition has the highest priority, it will be set up as /dev/sda1 and /dev/sdb1 and will be utilized as "/boot" by the OS given the partman-auto-raid/recipe directive explained above. The $lvmignore{ } flag is used to make sure that when partman is creating LVM logical volumes this partition is not created as a logical volume. Next the RAID partitions are defined to be primary and the method for using this partition will be with a RAID array.

To start off the definition for the LVM partition, the following is used to tell the Debian installer to set up an LVM:

d-i     partman-lvm/confirm boolean true
d-i     partman-auto-lvm/new_vg_name string vg00
d-i     partman-auto-lvm/guided_size string 30GB

Make sure that the "guided_size" value above is greater than or equal to the total size of all logical volumes created. To define a logical volume, the following is added to the recipe:

             19000 50 20000 ext4                 \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ root }                  \
                method{ format }                 \
                format{ }                        \
                use_filesystem{ }                \
                filesystem{ ext4 }               \
                mountpoint{ / }                  \
             .                                   \

Above, a logical volume between 19GB and 20GB with an ext4 filesystem will be created. The $defaultignore{ } flag is used to keep partman from using this partition when it is creating physical partitions on the disks. Next, partman is directed that this part of the recipe should be used when creating logical volumes with $lvmok{ }, and the logical volume is given the name "root" with lv_name{ root }. The rest of the flags are the same as in the earlier examples: they tell partman that the logical volume should be formatted and what the mount point should be.

The above complete examples can be placed into preseed files and tweaked to give the desired results. Hopefully this helps with using partman-auto/expert_recipe in either a standalone mode or when utilizing RAID and LVM.

Update: I’ve added full preseed file examples on GitHub. Check them out in my Blog Scripts repo.

Another Update: I’ve added another post about using preseed on Ubuntu Lucid and Debian Squeeze where the disk uses GPT (2TB and over disk sizes). Check it out: Get expert_recipe, mdraid, LVM, GPT, and grub2 Playing Together on Ubuntu Lucid and Debian Squeeze
