
Adding Non-free Drivers to a Debian Netboot initrd

When working on an IaaS product, there is a constant need for an easy and quick way to re-provision bare metal. At Eucalyptus we utilize Cobbler and a home-grown solution that allows us to set up servers automatically. PXE along with kickstart/preseed configurations creates an easy, consistent, and automated way to set up bare metal on the fly. Setting this up, though, is not always as easy as one would like for some operating systems and hardware configurations.

When netbooting Debian on a Dell server with a Broadcom NetXtreme II network card, a missing-firmware error will occur with the stock Debian netboot ramdisk. The NetXtreme II card uses non-free firmware, so Debian does not pack it by default with its stock images. The firmware needs to be added to the netboot ramdisk manually before the machine can successfully boot and access the network.

The Debian Wiki’s Netboot Firmware page contains some instructions on how to add the non-free firmware deb package to the stock ramdisk. Unfortunately, I was unable to get this to work, so I used another method to get the firmware into the netboot ramdisk; below is what I did. (NOTE: the example below uses the Debian Squeeze release, but the same instructions should work with other versions of Debian as well.)

  1. Download the netboot kernel and ramdisk from your favorite Debian mirror. For example, on the USC Mirror these files for 64-bit can be found at the following location:
  2. Create a temporary directory that you will extract the ramdisk to.
    mkdir /tmp/initrd
  3. Copy the ramdisk to the temporary location and then extract the ramdisk into the directory.
    cp $DOWNLOAD_DIR/initrd.gz /tmp/initrd
    cd /tmp/initrd
    gunzip <initrd.gz | cpio --extract --preserve --verbose

    Above $DOWNLOAD_DIR is the location the ramdisk was originally downloaded to.

  4. Remove the initrd.gz file from the temp directory.
    rm /tmp/initrd/initrd.gz
  5. In /tmp/initrd/lib create a firmware directory.
    mkdir /tmp/initrd/lib/firmware

    *NOTE: There may be a more proper place to put this directory according to the Debian Wiki – Firmware Locations page, but this location works in the ramdisk without any issues.

  6. Download the Debian non-free drivers tarball from the Debian package website and unpack it in another temporary directory.
    cd /tmp
    tar xzvf firmware-nonfree*.tar.gz
  7. Copy over the bnx2 folder contents from the non-free drivers tarball to the firmware directory created in step 5.
    cp -Rav /tmp/firmware-nonfree/bnx2/bnx2* /tmp/initrd/lib/firmware/
  8. Now repack the initrd with the newly added drivers.
    cd /tmp/initrd
    find . | cpio --create --format='newc' | gzip >../initrd.gz

A new ramdisk has now been created with the non-free drivers included. The /tmp/initrd.gz file can be placed on a TFTP or other boot server for use by properly configured systems using PXE. The Debian Squeeze installer can then be run interactively, or, with a preseed configuration, a fully automated install can occur.


Eucalyptus Recipes Project

Automation and configuration management are a big part of any successful cloud deployment. Whether on AWS, Eucalyptus or another cloud provider, having services that can be easily spun up and down with a consistent configuration is a must at cloud scale. The recipes project is looking to assist new cloud users with a first step.

The recipes project is attempting to be as vendor agnostic as possible by using both Puppet and Chef with the possible expansion to more options (Fabric, Ansible, etc) in the future. The project will be showing users basic techniques to get started with configuration management once their cloud is up and running. The project will also attempt to tie in the plentiful resources already available from these vibrant communities to extend the flexibility of deployments.

To help users with this, a repository has been created on GitHub. The simple structure will help users easily choose which solution they want to use (or why not try all of them?). The repository contains scripts to take you automatically from a bare instance to having the puppet agent or chef-solo installed and ready to go (Note: these original bootstrap scripts were taken from my earlier posts here and here for automating a puppet and chef installation respectively). After this you’ll also be able to use some of our pre-built puppet modules or chef cookbooks to get started with basic configuration management in the cloud.

The basic structure for the repo will look like the following:

|- bootstrap
  \- puppet
    \- debian
    |- centos
    |- rhel
    |- ubuntu
  |- chef
    \- debian
    |- centos
    |- rhel
    |- ubuntu
|- puppet
  \- apache
    \- manifests
      \- init.pp
    |- files
    |- templates
  |- nginx
|- chef
  \- apache
    \- recipes
    |- files
    |- templates
    |- definitions
    |- attributes
  |- nginx
    \- recipes
    |- files
    |- templates
    |- definitions
    |- attributes

As you can see, it will be easy to find the script or set of modules you are looking for. The bootstrap folder will take care of automating the installation of any clients that might be needed for the various configuration management tools. Each configuration management tool will have a directory containing basic setups that new users can apply to stand up common services. The recipes project is only looking to give some basic examples (at least at the start), so you can easily run a git clone inside of your instance and have the entire repo available in little time.
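With that layout, finding the right bootstrap script is just a path lookup. A small runnable sketch of the idea, recreating the bootstrap half of the tree above (the install.sh file name is illustrative — the real repo's script names may differ):

```shell
#!/usr/bin/env sh
set -e

REPO=$(mktemp -d)

# Recreate the bootstrap portion of the layout shown above.
for tool in puppet chef; do
    for os in debian centos rhel ubuntu; do
        mkdir -p "$REPO/bootstrap/$tool/$os"
        # install.sh is an illustrative name, not necessarily the repo's.
        touch "$REPO/bootstrap/$tool/$os/install.sh"
    done
done

# Locate the bootstrap script for puppet on Debian.
find "$REPO/bootstrap/puppet/debian" -name '*.sh'
```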

If you are interested in the project then come check us out! Let us know what you would like to see from us to make getting started with configuration management in Eucalyptus easier for you. We have a weekly meeting in #eucalyptus-meeting at 11:30 PT. You can also check out the Eucalyptus community mailing list for updates from time to time.

Automating a chef-solo Installation on a CentOS 5 Instance

Chef is a tool used for configuration management of bare metal and virtual systems. Chef has a client/server model as well as a standalone tool, and is very similar to the tools from Puppet Labs. (Want to automate a puppet agent installation? Check out my earlier blog post.)

When using the cloud and launching multiple instances with the same job, configuration management is a huge time saver. Configuration management systems will allow you to build the configuration once and use it to create exact replicas on as many instances as you need. Having the exact same configuration for every system doing the same job keeps errors and possible issues down to a minimum. Also, if your production systems are easily replicated, you can spin up smaller test environments to check your systems before pushing out the newer code to production (possibly even using Amazon for production and Eucalyptus internally for testing!).

For configuration of systems, Chef uses recipes and cookbooks to give users pre-made system configurations. A recipe is a single task, and a cookbook is a group of tasks, usually pertaining to a single application (e.g. apache, nginx, etc). A collection of pre-made cookbooks can be downloaded from a GitHub repository run by OpsCode, the creators of Chef. These cookbooks will help you perform a large number of installations and configurations, from setting up a web server to installing applications such as vim.

Below is a script that will automate the installation of chef-solo, a standalone version of the chef client, on a CentOS 5 system. You can run this script by hand or have it run automatically by passing it to the metadata service when running an instance. This script was tested on the Eucalyptus Community Cloud using a CentOS 5 image (emi-709D1676).

#!/usr/bin/env bash

YUM=`which yum`
RPM=`which rpm`
CURL=`which curl`
HOSTNAME=`which hostname`

# Placeholder values -- set these for your environment before running.
SYSTEM_NAME="chef-node.example.com"
DEFAULT_DIR="/root"
TMP_DIR="/tmp"
CHEF_DIR="/var/chef"

SHORT_NAME=`echo ${SYSTEM_NAME} | cut -d'.' -f1`



# Set the hostname of the system.
hostname ${SYSTEM_NAME}
if [ -z "`cat /etc/sysconfig/network | grep HOSTNAME`" ]; then
    echo "HOSTNAME=${SYSTEM_NAME}" >> /etc/sysconfig/network
else
    sed -i -e "s/\(HOSTNAME=\).*/\1${SYSTEM_NAME}/" /etc/sysconfig/network
fi

sed -i -e "s/\(localhost.localdomain\)/${SYSTEM_NAME} ${SHORT_NAME} \1/" /etc/hosts

${YUM} -y update

# Setup the required repos. EPEL, Aegisco, and rbel.
${CURL} -o /etc/yum.repos.d/aegisco.repo

${RPM} -Uhv

${CURL} -o ${DEFAULT_DIR}/epel-release-5-4.noarch.rpm
${RPM} -Uhv ${DEFAULT_DIR}/epel-release-5-4.noarch.rpm 

# Install ruby and required tools for building the system
${YUM} install -y ruby ruby-libs ruby-devel.x86_64 ruby-ri ruby-rdoc ruby-shadow gcc gcc-c++ automake autoconf make curl dmidecode

RUBY=`which ruby`
# Setup RubyGems
curl -o ${TMP_DIR}/rubygems-1.8.10.tgz 
tar xzvf ${TMP_DIR}/rubygems-1.8.10.tgz -C ${TMP_DIR}
${RUBY} ${TMP_DIR}/rubygems-1.8.10/setup.rb --no-format-executable

GEM=`which gem`
# Setup the chef ruby gem
${GEM} install chef --no-ri --no-rdoc

CHEF=`which chef-solo`
# Setup the basic configuration files needed
cat >${DEFAULT_DIR}/solo.rb <<EOF
file_cache_path "${CHEF_DIR}"
cookbook_path "${CHEF_DIR}/cookbooks"
EOF

cat >${DEFAULT_DIR}/node.json <<EOF
{
    "run_list": [ "recipe[apache2]" ]
}
EOF

# Setup up cookbooks directory for chef solo
mkdir -p ${CHEF_DIR}/cookbooks

# Download and untar the cookbooks provided by OpsCode on GitHub
${CURL} -o ${DEFAULT_DIR}/cookbooks.tgz
tar xzvf ${DEFAULT_DIR}/cookbooks.tgz -C ${DEFAULT_DIR}

# Add the apache2 cookbook to the chef solo cookbooks directory
cp -R ${DEFAULT_DIR}/opscode-cookbooks-*/apache2 ${CHEF_DIR}/cookbooks

# Run the node.rb JSON file to install apache2
${CHEF} -c ${DEFAULT_DIR}/solo.rb -j ${DEFAULT_DIR}/node.json 

This script will need a couple of changes towards the end to be used with chef-client and communicate with a Chef server. Hopefully this will be discussed in a future post.

This script uses chef-solo, the version of the chef client that does not require a Chef server, which makes testing out Chef much easier. Utilizing the GitHub cookbooks repository, we can install anything available in the cookbooks along with any customized configuration we might need. Please check out the chef-solo wiki for more details on how to extend the script above to do more for you.

With the addition of Amazon S3, Eucalyptus Walrus, or GitHub, this script can work with customized cookbooks to improve your infrastructure even more.

Not really interested in CentOS 5? Looking for the same for CentOS 6 or Debian? Check out my GitHub repo containing my blog scripts for two additional scripts in the chef_solo_install folder. (Note: The majority of the OpsCode cookbooks are built for Debian-based (probably more specifically Ubuntu) systems, so some won’t work with CentOS 6 out of the box. There are also some issues on CentOS 5 with cookbooks such as the nginx one.)

Cool post on the past, present, and future of euca2ools. Looks interesting and I’m hoping to help a bit on these improvements in the future!


For those who don’t know, I work on the euca2ools suite of command line tools, hosted on Launchpad, for interacting with Eucalyptus and Amazon Web Services clouds. As of late the project has stagnated somewhat, due in part to the sheer number of different tools it includes. Nearly every command one can send to a server that uses Amazon’s APIs should have at least one corresponding command line tool, making development of euca2ools’ code repetitive and error-prone.

Today this is going to end.

But before we get to that part, let’s chronicle how euca2ools got to where they are today.

The Past

Early euca2ools versions employed the popular boto Python library to do their heavy lifting. Each tool of this sort triggers a long chain of events:

  • The tool translates data from the command line into its internal data structures.
  • The tool translates its internal data into the form that…


Installing Fabric on CentOS 5

Fabric is a great tool for performing remote tasks that need to be done on a group of hosts. It allows a sysadmin to run commands both locally and remotely, copy and send files, and even execute commands using sudo on the remote end.

For our current POC configuration, Eucalyptus still recommends CentOS 5. Unfortunately, getting Fabric to install and work with CentOS 5 is a bit of a pain. I’ve finally figured out what is needed for Fabric to work, which will allow for future blog posts about utilizing Fabric with Eucalyptus.

1. Install the EPEL repository for CentOS 5 and install python26, python26-devel, gcc, and python-setuptools

# curl -o epel-release-5-4.noarch.rpm
# rpm -Uhv epel-release-5-4.noarch.rpm
# yum install -y python26 python26-devel gcc python-setuptools

2. Make setuptools available to python26

# cp -R /usr/lib/python2.4/site-packages/setuptools* /usr/lib/python2.6/site-packages/
# cp -R /usr/lib/python2.4/site-packages/* /usr/lib/python2.6/site-packages/

3. Download the latest tarball of the Fabric master branch from GitHub at

# curl -o fabric.tgz
# tar xzvf fabric.tgz
# cd fabric-fabric-

4. Run setup.py to install Fabric from the tarball.

# python26 setup.py install

Optional: Since setuptools was installed from the EPEL repository, it is built using the sets module, which is deprecated in Python 2.6. If you would like to get rid of the “DeprecationWarning” message when you run fab, use the following:

# sed -i 's#/usr/bin/python26#/usr/bin/python26 -W ignore::DeprecationWarning#' /usr/bin/fab

Note: Tip found at HACKTUX | Ignore Python Deprecation Warnings

Now that Fabric is installed on our CentOS 5 host, we can run a small test. Create the following fabfile.py, which will print out the /etc/redhat-release file when the release() task is run:

from fabric.api import local

def release():
    local("cat /etc/redhat-release")

You can now execute the above by running fab while in the same directory as the fabfile.py created above:

# fab -H localhost release
[localhost] Executing task 'release'
[localhost] run: cat /etc/redhat-release
[localhost] Login password: 
[localhost] out: CentOS release 5.8 (Final)
Disconnecting from localhost... done.

If you get something similar to the output above, then your Fabric installation is set up and ready to go. You can now use Fabric for any local or remote management tasks you might need to do, on a single machine or a group of hundreds. Fabric is also great for use with AWS or Eucalyptus instances, and I hope to get into a few use cases for using Fabric with the cloud in the future.
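Under the hood, Fabric's local() is essentially a thin wrapper around a shell subprocess. Here is a rough pure-Python sketch of the idea — this is not Fabric's actual implementation, just an illustration of what the release() task above is doing (the echoed release string stands in for reading /etc/redhat-release):

```python
import subprocess

def local(command):
    """Rough stand-in for fabric.api.local: echo the command, run it in a
    shell, and return its output as a string."""
    print("[localhost] run: %s" % command)
    return subprocess.check_output(command, shell=True).decode()

def release():
    # On a real CentOS 5 host this would be: local("cat /etc/redhat-release")
    return local("echo 'CentOS release 5.8 (Final)'")

if __name__ == "__main__":
    print("[localhost] out: %s" % release().strip())
```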

These instructions can be tweaked to get pip-python working with CentOS 5 as well. With pip-python you have a vast assortment of utilities and libraries available for Python as part of PyPI. If there is any interest in these steps, let me know and I’ll throw together a quick blog post about getting pip-python to work.

If you want to learn more about using fabric then check out the fabric website and the fabric tutorial.

Edit: Added a script for this setup. Find it on GitHub in my Blog Scripts repo under fabric_on_centos.

Automating a Puppet Agent Installation

Puppet is a very popular configuration management system. It allows users to deploy new infrastructure in a cloud or traditional enterprise environment quickly and efficiently. When an organization utilizes a configuration management system, it allows IT and DevOps groups to effectively replicate environments on the fly with little to no interaction with the new instance or system.

The basic building block for using puppet in the cloud is through the puppet agent. The puppet agent has two main actions:

  • Applying a manifest to the system it’s running on
  • Working with a puppet master to coordinate a new deployment

In this post we will be focusing on the first aspect of the puppet agent mentioned above by using it in a standalone configuration.

What will our script do in this post?

  • Update the instance
  • Setup the hostname
  • Install the puppet agent
  • Create a manifest for the puppet agent to work with that will install vim and Apache2
  • Apply the puppet manifest to the instance

All of these actions will be accomplished with a single command, euca-run-instances. With a simple deployment like this, we reduce complexity for new instance deployment and give more power to cloud users. This has the side effect of making IT’s task list much shorter, giving teams a quicker turnaround time for new systems, and ensuring that newly spun-up systems look exactly the same, which reduces the possibility of errors.

For this post I’m going to use the Eucalyptus Community Cloud and a Debian Squeeze image (emi-00781826). Below is the script that will do what was outlined above. Copy it from here (or GitHub – Puppet Agent Install) and save it to your local machine for use.

#!/usr/bin/env bash

# Placeholder value -- set this for your environment before running.
FULL_HOSTNAME="puppet-node.example.com"

SHORT_HOST=`echo ${FULL_HOSTNAME} | cut -d'.' -f1`
APTITUDE=`which aptitude`
APT_KEY=`which apt-key`

# Setup the hostname for the system. Puppet really relies on 
# the hostname so this must be done.
hostname ${FULL_HOSTNAME}

sed -i -e "s/\(localhost.localdomain\)/${SHORT_HOST} ${FULL_HOSTNAME} \1/" /etc/hosts

# Need to add in the aptitude workarounds for instances.
# * First disable dialog boxes for dpkg
# * Add the PPA for ec2-consistent-snapshot or else the update will hang.
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical

${APT_KEY} adv --keyserver --recv-keys BE09C571

# Update the instance and install the puppet agent
${APTITUDE} update
${APTITUDE} -y safe-upgrade
${APTITUDE} -y install puppet

PUPPET=`which puppet`

# Setup the puppet manifest in /root/my_manifest
cat >>/root/my_manifest.pp <<EOF
package {
    'apache2': ensure => installed
}

service {
    'apache2':
        ensure => true,
        enable => true,
        require => Package['apache2']
}

package {
    'vim': ensure => installed
}
EOF

# Apply the puppet manifest
$PUPPET apply /root/my_manifest.pp

# End of script cleanup.
export DEBIAN_FRONTEND=dialog

This script is pretty basic. First we take care of some basics, like setting the hostname on the instance, and update the instance to the latest software available. We then install the puppet agent onto the system and find where the new binary is located. Next we create a file containing the puppet manifest we wish to apply, and finally we apply the manifest to the instance.

To use the script with a new instance use the -f flag for euca-run-instances like so:

euca-run-instances -k <my_key> -t <instance_size> -f <path_to_script> emi-00781826

This is just a basic example of how you can utilize puppet inside of your Eucalyptus or EC2-compatible cloud with a metadata service.

(Looking to try this out with CentOS? There’s a script for it on GitHub – Puppet Agent Install. Try it out on the ECC with EMI emi-709D1676.)

Edit: I’ve added a CentOS 6 script for this as well now. Check it out at the GitHub repo above!

Basics of Instance Automation

Automation is a huge piece of any successful cloud deployment. When you have the ability to scale horizontally with seemingly “infinite” resources you need a quick and automatic method of setting up additional services. Infrastructure-as-a-Service cloud services, such as Amazon EC2 or Eucalyptus, have built-in functionality that allows for information to be passed to new instances and allows for complete system automation.

The two main technologies that assist the cloud user or cloud developer with an automation task are the following:

  • Metadata service
  • Automated user data execution

The metadata service allows the user to supply an instance with information regarding its setup, environment, and data provided by the user through the cloud service (for more information see Amazon EC2 Instance Metadata or the Eucalyptus Metadata Service). This ability to pass data to a new instance gives the user a simple method for automating the instance’s setup without ever needing to log in to the instance. The user data, served by the metadata service, can contain a script that, with the proper setup on the instance’s image, is executed automatically as the instance boots.

To achieve automatic execution of the user data from the metadata service, we can utilize a couple different methods. The first method involves running commands inside of the /etc/rc.local file of an instance’s image. The most basic version of an /etc/rc.local that will automate the execution of user data is the following:

curl -s -o /tmp/user-data 
sh /tmp/user-data

In the above code snippet, we first download the user data from the metadata service and place it into a temporary file. Then we execute the user data using a basic POSIX shell. This is only the most basic form; you can find a more sophisticated /etc/rc.local implementation in the Eucalyptus Starter Images.
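A slightly more defensive variant of that snippet only executes the payload when the download succeeded and the payload is non-empty. In the sketch below a local file:// URL stands in for the instance metadata user-data endpoint so it can run anywhere; on a real image you would pass the metadata address instead:

```shell
#!/bin/sh
set -e

# run_user_data: fetch user data from the given URL and execute it, but
# only if the download succeeded and the payload is non-empty.
run_user_data() {
    if curl -s -f -o /tmp/user-data "$1" && [ -s /tmp/user-data ]; then
        sh /tmp/user-data
    else
        echo "no usable user data at $1" >&2
        return 1
    fi
}

# Demonstration with a stand-in payload instead of the metadata service.
echo 'echo hello from user data' > /tmp/fake-user-data
run_user_data "file:///tmp/fake-user-data"
```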

An emerging standard method for downloading and utilizing the user data is CloudInit. CloudInit takes the above /etc/rc.local functionality and extends it greatly. It is definitely worth researching CloudInit’s features and using it with a Debian- or Ubuntu-based image, and hopefully in the near future Fedora and CentOS as well.
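To give a taste of what CloudInit adds over the raw /etc/rc.local approach, user data can be declarative cloud-config rather than a shell script. A minimal illustrative example (package names match the Debian-based posts above):

```yaml
#cloud-config
# Illustrative cloud-config user data: refresh the package index and
# install apache2 and vim on first boot.
package_update: true
packages:
  - apache2
  - vim
```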

This was simply a basic primer on how the metadata service of an IaaS cloud functions behind what the cloud user sees. In my next post, I’ll show how to use the metadata service to do some basic instance setup automation.