
Setting up rsyslog and logrotate for the new Eucalyptus Console

The latest release of Eucalyptus has introduced a new user console. The user console is written in Python and uses the logging module, which can easily be set up to work with rsyslog. It even uses it out of the box! Unfortunately, the user console sends a lot of verbose and mostly useless information into /var/log/messages, which I don’t like. /var/log/messages is the main log for my system, so I’d like to make sure that I can find important messages easily. The Eucalyptus user console also uses the Python ConfigParser module, so we can easily configure the console, including the logger. Unfortunately, the logging configuration does not seem to take effect either, but we can fix this easily!

So, in this blog let’s first fix the logging so that it is configurable (I’ll file a bug for this soon), then write the verbose log information to a file other than /var/log/messages, and finally set up logrotate to rotate the log on a daily basis.

Before starting, make sure that you have installed and configured the Eucalyptus console following the instructions found in the Eucalyptus 3.2 User Console Guide. Make sure that the console is working by starting it up (service eucalyptus-console start) and verifying that you can log in (https://my-host-or-ip:8888). This way, if the configuration gets trashed, you’ll know that it was previously working. As always, I suggest you make backups of any files that are edited, just in case.

When configuring the Eucalyptus user console in /etc/eucalyptus-console/console.ini you might have noticed the logging section. If you attempt to change, for example, the formatting in this section and then check /var/log/messages, you’ll notice that nothing changed. It turns out that the Python logging module is called directly in /usr/bin/euca-console-server with the default rsyslog settings, so the configuration is never honored. The configuration file also uses the StreamHandler from the logging module, but nothing in the console consumes that stream, so the information is not captured.

To fix these issues, we’ll first stop the Eucalyptus console from using the SysLogHandler with the default settings. Open up /usr/bin/euca-console-server and either comment out or remove the following two lines:

handler = logging.handlers.SysLogHandler(address = '/dev/log')
logging.getLogger().addHandler(handler)

With these lines removed, the logger will be started using the configuration in the config file later on in the console’s execution.
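
If you’d rather script this change (for example, across several hosts), something like the following sed invocation should do it. Treat it as a sketch: it assumes the lines look exactly as shown above, and it keeps a .bak backup of the original file:

# Comment out the hard-coded SysLogHandler lines (a .bak backup is kept)
sed -i.bak \
  -e 's|^\([[:space:]]*handler = logging.handlers.SysLogHandler.*\)|#\1|' \
  -e 's|^\([[:space:]]*logging.getLogger().addHandler(handler)\)|#\1|' \
  /usr/bin/euca-console-server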

Next we’ll configure the logging section in /etc/eucalyptus-console/console.ini to use the SysLogHandler when printing out the logging information. We’ll also make an edit to the formatting string to make it easier to filter out the lines from the Eucalyptus console. Make the logging section of your console.ini file look like the following:

##
# These sections are used to configure the logger. For info, see docs for the python logging package
##

[loggers]
keys=root

[handlers]
keys=eucahandler

[formatters]
keys=eucaformat

[logger_root]
level=INFO
handlers=eucahandler

[handler_eucahandler]
class=handlers.SysLogHandler
level=NOTSET
formatter=eucaformat
args=('/dev/log', handlers.SysLogHandler.LOG_SYSLOG)

[formatter_eucaformat]
format=eucalyptus-console: %(levelname)s %(message)s
datefmt=

##
# End logger config
##

The first two changes occur in the [handler_eucahandler] section. Here the handler is changed from the StreamHandler to the SysLogHandler so that logging messages are sent to the rsyslog process. The arguments are changed to include the device to send the logging information to, /dev/log, as well as which syslog facility to use (see Python logging module – SysLogHandler for more information on the SysLogHandler).

The change to the log format makes it easier to filter the lines produced by the Eucalyptus console without needing to send them to one of the LOG_LOCAL* facilities, which could interfere with other custom log setups (again, see Python logging module – SysLogHandler). Since rsyslog automatically adds the time when lines are written, the %(asctime)-15s field is removed from the format line. Prefixing each line with “eucalyptus-console:” will allow rsyslog to easily filter the console’s lines by looking for “eucalyptus-console”.

Now, create a file in /etc/rsyslog.d/ with any name you’d like, as long as it ends in .conf. In this example, /etc/rsyslog.d/eucalyptus-console.conf will be used. Then add the following to it:

# Send the eucalyptus-console output to a separate file
:programname,isequal,"eucalyptus-console"	/var/log/eucalyptus-console.log
&~

This rsyslog configuration file checks the programname of every log line sent through rsyslog for the “eucalyptus-console” string added to the format string above. If the string is found, the line is written to /var/log/eucalyptus-console.log. The &~ tells rsyslog to stop evaluating rules for that line, so it only ends up in this file.
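
Restart rsyslog so the new rule takes effect, then restart the console. A quick way to check the filter without waiting for console traffic is the logger utility, which lets you set the program name with -t:

service rsyslog restart
service eucalyptus-console restart

# Send a test line tagged as eucalyptus-console and confirm it lands in the new file
logger -t eucalyptus-console "rsyslog filter test"
tail -n 5 /var/log/eucalyptus-console.log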

Finally, setup the logrotate configuration file for the Eucalyptus console. Create the file /etc/logrotate.d/eucalyptus-console with the following:

/var/log/eucalyptus-console.log
{
    rotate 5
    daily
    compress
    postrotate
	/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

This file tells logrotate to do the following for /var/log/eucalyptus-console.log:

  • rotate 5: Keep five rotated logs before removing the oldest
  • daily: Rotate every day
  • compress: Compress the rotated log files
  • postrotate … endscript: Run the given command after rotating the log (here, a HUP so rsyslog reopens the new log file)

With this configuration, the logs will be kept long enough to be useful and then removed so that /var/log and the system disk do not fill up with unneeded logs.
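
To check the new configuration without waiting a day, logrotate can be run by hand; -d does a dry run that only prints what would happen, and -f forces an actual rotation:

# Dry run: show what logrotate would do with this config
logrotate -d /etc/logrotate.d/eucalyptus-console

# Force a rotation to verify the postrotate HUP works
logrotate -f /etc/logrotate.d/eucalyptus-console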

So now you have the Eucalyptus console sending its logging data to its own file and that file rotated every day. This is really just the tip of the iceberg: since the Eucalyptus console uses rsyslog, we can also send logs to a remote server or search the logs for errors with a tool such as Logstash or Graylog.

ahamilton55:

I’m definitely contributing back with both code and money. Has helped me plenty over the past year.

Originally posted on Greg DeKoenigsberg Speaks:

We’re big fans of the Cobbler project here at Eucalyptus. We think it’s the best tool in the open source world for bare metal provisioning.  We’ve invested in a gigantic QA environment for continual integration testing, and Cobbler is one of the linchpins of that environment. It’s the kind of tool that’s best appreciated by sysadmins who deal with *a lot* of systems.

I’m sort of attached to Cobbler personally, since I watched it grow out of Red Hat’s Emerging Technologies team several years ago. Now it’s grown past its Red Hat roots to become a truly independent project — and independent projects need support from time to time.

The Cobbler folks have set up an Indiegogo campaign to raise some funds for some much-needed infrastructure, and as proud Cobbler users, we are proud to help them out. Their goal is to raise $4000, and Eucalyptus will match every donation…


Add Eucalyptus Account and User Names to Your BASH Prompt

At Eucalyptus, IT tends to have a bunch of different accounts for the clouds that we run. Most of the work we do with these clouds happens from each cloud’s Cloud Controller, and with 10 accounts and 15 users it can get confusing figuring out which user’s credentials are currently sourced. We need an easy way to tell which account and user we are currently accessing the cloud as, so that we don’t launch instances or create keypairs as the wrong user and then have to redo the work and clean up the mistake as well (i.e., do everything twice).

So, one method that we’ve come up with is to put the account and user name of the currently sourced credentials directly into the BASH prompt. Now there is little doubt as to which user’s credentials we are currently using.

The following function will print out (<account>:<user>) for the currently sourced Eucalyptus credentials:

parse_euca_user() {
  # Print "(account:user)" for the currently sourced Eucalyptus credentials
  if [[ ! -z ${EC2_USER_ID} ]]; then
    # Cache the account lookup in ~/.my_ec2_user_id since euare-accountlist is slow
    if [[ ! -e ${HOME}/.my_ec2_user_id || $(cut -d' ' -f1 ${HOME}/.my_ec2_user_id) != ${EC2_USER_ID} ]]; then
      EC2_ACCOUNT_NAME=$(euare-accountlist | grep $EC2_USER_ID | cut -d' ' -f1)
      EC2_USER_NAME=$(basename $EC2_CERT | cut -d'-' -f2)
      echo ${EC2_USER_ID} ${EC2_ACCOUNT_NAME} >${HOME}/.my_ec2_user_id
    else
      # Cache hit: read the account name back and pull the user name from the cert path
      EC2_ACCOUNT_NAME=$(cut -d' ' -f2 ${HOME}/.my_ec2_user_id)
      EC2_USER_NAME=$(basename $EC2_CERT | cut -d'-' -f2)
    fi
  fi

  if [[ ! -z ${EC2_ACCOUNT_NAME} ]]; then
    echo "(${EC2_ACCOUNT_NAME}:${EC2_USER_NAME})"
  else
    echo ""
  fi
}

Add the above function to your ${HOME}/.bashrc file. Next, update your prompt by adding a call to parse_euca_user, like the following:

export PS1="\u@\h:\W \$(parse_euca_user)\$ "

Now you should see something like the following after running source eucarc as the eucalyptus account’s admin user:

root@prod-frontend:admin (eucalyptus:admin)$

Single System Test Cloud, Take 2

Testing Eucalyptus should be easy. In Single System Test Cloud, testing a cloud was made easier by removing the requirement of having at least two systems, but it still added unnecessary complexity by using a VM to run the front-end. After learning from the experience of the first single system test cloud, it became apparent that there should be an easy way to run Eucalyptus without the use of a VM. And yes, there is!

First, start off with a CentOS 6 installation. The installation can be with or without a GUI environment, though a GUI will help on a system such as a laptop.

After CentOS 6 is installed, set up a bridge. This bridge will not be attached to any of the physical devices of the system. It will be used by the NC for the instances, as well as for communication between the CC and the NC. We will need a free subnet and IP address that may be used. The subnet can be very small, but a /24 will be used in this example. The bridge will be set up with the IP address 172.16.0.1 and a netmask of 255.255.255.0.

To set up the bridge on br0 with the above information, place the following in /etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=172.16.0.1
NETMASK=255.255.255.0
NETWORK=172.16.0.0

Note: If a laptop or another system that may get a different IP address is used, it could be a good idea to use a sub-interface. This way, if the IP address of the system changes, the configuration will not need to change and the cloud should still function. Eucalyptus does not deal well with changing IP addresses on its components.
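
As a sketch of that idea, the cloud’s registration address could live on a sub-interface with its own fixed IP while the main interface keeps getting its address over DHCP. The device name and addresses below are only examples:

# /etc/sysconfig/network-scripts/ifcfg-eth0:0 (example device and addresses)
DEVICE=eth0:0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.1
NETMASK=255.255.255.0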

To prevent a possible issue with the Eucalyptus meta-data service, turn off Zeroconf by adding the following to /etc/sysconfig/network:

NOZEROCONF=true

To have the two new settings above take effect, restart the networking service:

service network restart

Note: If a GUI was installed with CentOS, there might be issues caused by NetworkManager. To get around these, I suggest adding NM_CONTROLLED="no" to the configuration of the interface that Eucalyptus will use for its IP.
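
For example, if eth0 is that interface (adjust the name to match your system):

echo 'NM_CONTROLLED="no"' >> /etc/sysconfig/network-scripts/ifcfg-eth0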

Next, disable the system firewall and either place SELinux in permissive mode or disable it. To turn off the firewall, use the following command:

system-config-firewall-tui

Deselect the firewall entry. Next, edit /etc/selinux/config and change the SELINUX entry to either permissive or disabled. Finish by putting SELinux into permissive mode for the running system:

setenforce 0
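
If you’d rather make the config file change from the command line, a one-liner like this should work (verify the file afterwards):

# Switch SELinux to permissive at boot, then confirm the running mode
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
getenforce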

Now install and configure the NTP service. NTP will be set to start at boot, and the updated time will be synced to the hardware clock of the system. Run the following commands:

yum -y install ntp
chkconfig ntpd on
service ntpd start
ntpdate -u pool.ntp.org
hwclock --systohc

Eucalyptus is a difficult product to install the first time, so I strongly recommend taking some time to read the Eucalyptus Installation Guide. Please take the time to read it carefully, as it will make the steps below much easier to understand. Really.

To begin the Eucalyptus installation, install and set up the needed repositories.

yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/eucalyptus-release-3.1-1.el6.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/euca2ools/2.1/centos/6/x86_64/euca2ools-release-2.1-2.el6.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/epel-release-6-7.noarch.rpm
yum -y install http://downloads.eucalyptus.com/software/eucalyptus/3.1/centos/6/x86_64/elrepo-release-6-4.el6.elrepo.noarch.rpm

Now install the Eucalyptus Cloud Controller (clc), Cluster Controller (cc), Storage Controller (sc), Walrus, and Node Controller (nc) on the system.

yum -y groupinstall eucalyptus-cloud-controller
yum -y install eucalyptus-nc eucalyptus-cc eucalyptus-sc eucalyptus-walrus

Since all of the Eucalyptus components are now installed, it is time to configure the system. Before the changes can be made to the configuration file there is some information that needs to be gathered.

Eucalyptus requires a list of public IP addresses that can be given to instances that are started. For this type of cloud five should be sufficient, but one IP is needed for each running instance. These public IPs do not need to be publicly routed, or even routed on your network. In the example below, the range "10.104.5.55-10.104.5.60" will be used for the list of public IPs.

Eucalyptus will create a private network that will be used for instance communication. This subnet should be one that is not currently used on the local network and should have at least 256 addresses in it (a /24, or a netmask of 255.255.255.0). In this example, the subnet 172.31.0.0 with the netmask 255.255.255.0 will be used.

Eucalyptus will give each instance a DNS server to use when it boots. For this example we will use Google’s Public DNS server at 8.8.8.8.

Open up /etc/eucalyptus/eucalyptus.conf and change the following settings. If any of the example values conflict with the local network, swap in values that will work. Also make sure to remove any “#” characters that might be at the beginning of these settings.

CREATE_SC_LOOP_DEVICES=256
USE_VIRTIO_NET="1"
VNET_MODE="MANAGED-NOVLAN"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="eth0"
VNET_BRIDGE="br0"
VNET_PUBLICIPS="10.104.5.55-10.104.5.60"
VNET_SUBNET=172.31.0.0
VNET_NETMASK=255.255.255.0
VNET_ADDRSPERNET="16"
VNET_DNS=8.8.8.8

When the Eucalyptus NC service was installed, libvirt was also installed. dnsmasq comes along with libvirt but causes issues with Eucalyptus, so we’re going to stop dnsmasq and disable it from starting at boot.

service dnsmasq stop
chkconfig dnsmasq off

Now it is time to initialize the Eucalyptus database and start the components. Run the following command:

euca_conf --initialize

If the output of the above command includes the word “succeeded”, the database was successfully set up. Next, start the services:

service eucalyptus-cloud start
service eucalyptus-cc start
service eucalyptus-nc start

Check that the services are running by looking for the following ports in the output of netstat -ntplu: 8443, 8773, 8774, 8775. If all of these ports are present, the services are running.
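
A quick grep pulls those ports out of the netstat output (the cloud components can take a minute or two to come up):

netstat -ntplu | grep -E ':(8443|8773|8774|8775)'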

Registration of the components can now take place. Register all of the components, except the NC, on the same IP. The NC should be registered on the IP given to the br0 interface above. For this example, the system has been set up with the IP 10.104.5.1, so all components except the NC will be registered to that IP.

/usr/sbin/euca_conf --register-walrus --partition walrus --host 10.104.5.1 --component walrus-single
/usr/sbin/euca_conf --register-cluster --partition cluster01 --host 10.104.5.1 --component cc-single
/usr/sbin/euca_conf --register-sc --partition cluster01 --host 10.104.5.1 --component sc-single
/usr/sbin/euca_conf --register-nodes "172.16.0.1"

Note: If the IP of the system being set up is not 10.104.5.1, replace the IP above with the correct one.

Finally, the cloud should be running and registered, so credentials can now be downloaded. Run the following to get the cloud administrator’s credentials:

euca_conf --get-credentials admin.zip

Unzip the resulting admin.zip file into a directory. Then, from inside that directory, run the following command:

source eucarc
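
Put together, the whole sequence might look like this (the directory name is just an example):

mkdir ~/admin-creds
unzip admin.zip -d ~/admin-creds
cd ~/admin-creds
source eucarc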

To see if the cloud has found resources on the system, run the following command:

euca-describe-availability-zones verbose

If the output does not show 000 / 000 on every line, the cloud is operating successfully. Now an image should be uploaded to the cloud so that an instance can be run; I will leave that as an exercise for the reader. Information on images can be found in the Eucalyptus Administration Guide.

Single System Test Cloud

UPDATE: I’ve posted an easier way to do this. Check out my other post Single System Test Cloud, Take 2 for the instructions.

When first trying out Eucalyptus, the requirement of having at least two systems can become a blocker. What if you only want to test running a couple of VMs? For that use case, a system such as the ECC might be just fine. But what if you want to try out the installation process or see what a Cloud Administrator needs to deal with? For this, it would be nice if a single machine could be used for some basic testing; if those tests go well, more resources could be purchased for a proper proof-of-concept configuration. My coworker Graziano tried this out using two VMs in his blog Developer Cloud, but that requires a CPU that allows nested virtualization.

For a single system configuration, the Eucalyptus front-end (CLC, Walrus, CC, SC) will be installed in a VM on the bare metal host that runs the Eucalyptus NC. A private network subnet will be set up that only the bare metal system and the VM use to communicate. Two bridges will be used for communication between the front-end (VM) and the NC (bare metal).

First, start off with a stock CentOS 6 installation on your bare metal system. Install and set up KVM and libvirtd. Also make sure that you have enough room for a VM running the Eucalyptus front-end (50GB minimum) along with any instances that you might wish to run (another 50GB minimum).

To use the MANAGED-NOVLAN networking mode, we will set up two network bridges. One of these bridges will be our public network and the other will be our private network. The IP addresses and subnet should not be routed on the rest of the network. For example, if we were going to have the interface br0 use the IP address 172.16.1.7 with a netmask of 255.255.255.0, we would use the following configuration:

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
DELAY=0
NETWORK=172.16.1.0
NETMASK=255.255.255.0
IPADDR=172.16.1.7

To make sure that the Eucalyptus VM has access to the Internet, we will need a NAT set up. First, add the following iptables rules (note: you may need to change the interfaces if your system is not set up the same way):

/sbin/iptables -A FORWARD -i br0 -o em1 -j ACCEPT
/sbin/iptables -A FORWARD -i em1 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
/sbin/iptables -t nat -A POSTROUTING -o em1 -j MASQUERADE
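
These rules only live in memory; on CentOS 6 they can be persisted across reboots with the iptables service:

service iptables save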

Next, the bare metal system needs to forward packets. Edit /etc/sysctl.conf and make the following change:

net.ipv4.ip_forward = 1
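
The setting can be applied immediately, without a reboot, by reloading the sysctl configuration:

sysctl -p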

To have a fully functioning cloud with all features available, we need to shut off Zeroconf, which causes issues with the metadata service of Eucalyptus. To do this, add the following to /etc/sysconfig/network:

NOZEROCONF=true

Now restart networking, and the Eucalyptus VM will be able to access the Internet (assuming it is accessible by the other machines on your network):

service network restart

Now a CentOS 6 VM needs to be installed on the bare metal system. When setting up the VM, both of the bridges will be used as interfaces. The installer defaults will work, though you will want to use a static address for the networking. Below is an example libvirt XML file:

<domain type='kvm'>
  <name>frontend</name>
  <memory unit="GiB">2</memory>
  <description>Front End</description>
  <cpu match='exact'>
    <model>core2duo</model>
    <feature policy='require' name='vmx'/>
  </cpu>
  <os>
    <type arch="x86_64">hvm</type>
    <boot dev='hd'/> 
  </os>
  <features>
    <acpi/>
  </features>
  <clock sync="localtime"/>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/media/extra/frontend.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/media/extra/CentOS-6.3-x86_64-minimal.iso'/>
      <target dev='hdc'/>
    </disk>
    <interface type='bridge'>
      <source bridge="br0"/>
      <mac address='00:16:3e:21:52:45'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>
      <source bridge="br1"/>
      <mac address='00:16:3e:21:52:46'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>

Note: You may need to change the MAC addresses above if they already exist on your network. Using MAC addresses from the 00:16:3e:XX:XX:XX pool is a safe bet, as that range is registered for use by VMs. This blog post has some more information if you are interested.
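
If the XML above is saved to a file such as frontend.xml (the name is arbitrary), the VM can be defined and started with virsh:

virsh define frontend.xml
virsh start frontend

# The installer is reachable over the VNC display defined in the XML
virsh vncdisplay frontend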

After the VM’s OS is installed, follow the Eucalyptus Installation Guide to install all components (CLC, CC, SC, Walrus) on the VM. Install the Eucalyptus NC on the bare metal system.

When Eucalyptus is set up and running on the VM, add the following iptables rules to make sure that you are able to access the API and WebUI through the bare metal host:

iptables -t nat -I PREROUTING 1 -p tcp -i em1 --dport 8443 -j DNAT --to-destination **VM_IP**:8443
iptables -t nat -I PREROUTING 1 -p tcp -i em1 --dport 8773 -j DNAT --to-destination **VM_IP**:8773

Now a fully functioning cloud should be setup and ready for you to try out.

Get expert_recipe, mdraid, LVM, GPT and grub2 Playing Together on Ubuntu Lucid and Debian Squeeze

Hard drives are growing and growing. You can now get 3 TB hard drives and have a ton of storage in each server. But then you notice that your current preseed files crash when attempting to install grub2. You hit “alt-f4” to check for any errors in the logs, and all you see are errors about grub2 not being able to embed in a GPT disk.

When a disk size gets up around 2 TB, the Debian installer and partman default to using the GPT partitioning scheme. Unfortunately, older versions of partman do not properly set up a GPT disk when the disks are part of a software RAID. Luckily the fix is not difficult, as it only requires a few additions (or possibly alterations, if you already use some of these options in your preseed file).

What we need to add is the partition that older versions of partman do not properly create. This partition is of the type bios_grub and only needs to be 1MB in size. It gives grub2 a location to place its entire loader and allows the BIOS to properly find the grub boot information to load the boot loader. More information about the BIOS boot partition can be found on Wikipedia – BIOS Boot Partition.

To add the bios_grub partition in a preseed with partman we add the following to our expert_recipe:

             1 1 1 free                          \
                $iflabel{ gpt }                  \
                method{ biosgrub }               \
             .                                   \

This adds a 1MB partition as the top-priority partition, and if the disk label is set to “gpt”, the partition is flagged as biosgrub.

Since we’re creating another partition on the disk, you will also need to increase the partition numbers used in your RAID array by 1. For example, if we originally were using this RAID setup in the preseed:

d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda1#/dev/sdb1     \
        .                                        \
        1 2 0 lvm - /dev/sda2#/dev/sdb2          \
        .

We now need to edit the entry and increase the partition numbers by 1:

d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda2#/dev/sdb2     \
        .                                        \
        1 2 0 lvm - /dev/sda3#/dev/sdb3          \
        .

Completed, the disk setup in our preseed file looks like this:

d-i     partman-auto/disk string /dev/sda /dev/sdb
d-i     partman-auto/method string raid
d-i     partman-lvm/device_remove_lvm boolean true
d-i     partman-auto/purge_lvm_from_device boolean true
d-i     partman-md/device_remove_md boolean true
d-i     partman-md/confirm_nochanges boolean true
d-i     partman-lvm/confirm boolean true
d-i     partman-auto/choose_recipe select boot-root
d-i     partman-auto-lvm/new_vg_name string vg01
d-i     partman-auto-lvm/guided_size string 100%
d-i     partman-auto-raid/recipe string          \
        1 2 0 ext3 /boot /dev/sda2#/dev/sdb2     \
        .                                        \
        1 2 0 lvm - /dev/sda3#/dev/sdb3          \
        .
d-i     partman-auto/expert_recipe string        \
           boot-root ::                          \
             1 1 1 free                          \
                $iflabel{ gpt }                  \
                method{ biosgrub }               \
             .                                   \
             256 10 256 raid                     \
                $lvmignore{ }                    \
                $primary{ }                      \
                method{ raid }                   \
             .                                   \
             1000 20 1000000 raid                \
                $lvmignore{ }                    \
                $primary{ }                      \
                method{ raid }                   \
             .                                   \
             150% 30 150% swap                   \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_swap }               \
                method{ swap }                   \
                format{ }                        \
             .                                   \
             20480 40 20480 ext4                 \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_root }               \
                method{ format }                 \
                format{ }                        \
                use_filesystem{ }                \
                filesystem{ ext4 }               \
                mountpoint{ / }                  \
             .                                   \
             1 50 -1 ext4                        \
                $defaultignore{ }                \
                $lvmok{ }                        \
                lv_name{ lv_dummy }              \
             .                                    
d-i     mdadm/boot_degraded boolean true
d-i     partman-md/confirm boolean true
d-i     partman-partitioning/confirm_write_new_label boolean true
d-i     partman/choose_partition select Finish partitioning and write changes to disk
d-i     partman/confirm boolean true
d-i     partman-md/confirm_nooverwrite  boolean true
d-i     partman/confirm_nooverwrite boolean true

Looking for more info on using expert_recipe with preseed? Check out my other blog post that goes more in depth, Notes on using expert_recipe in Debian/Ubuntu Preseed Files.

Puppet module for euca2ools

I know that I’ve been lacking lately on the recipes front. To get started again, I’ve completed a little project: a Puppet module for euca2ools. The module is quite simple, as it only needs to set up the Eucalyptus euca2ools repository for the OS and then install the euca2ools package. It makes some assumptions, such as: if you have puppet on a CentOS box, you most likely already have the EPEL repository in place (adding a check for this, just in case EPEL is not installed, sounds like a good feature).

Currently it has been tested with Ubuntu 10.04 and 12.04 (using the puppet packages from http://apt.puppetlabs.com/) and CentOS 5 (6 should work but hasn’t been explicitly tested). Check it out in the Eucalyptus Recipes project on GitHub.

There are a couple of ways to run the module. First, grab the euca2ools folder from the Recipes project GitHub repo (note that it’s inside the puppet directory of the repo). Drop the euca2ools folder into /etc/puppet/modules/, or wherever you keep your puppet modules. Finally, add an include euca2ools line on the puppet master, or, if running standalone as discussed on this blog before, create the file /etc/puppet/manifests/init.pp with the following contents:

include euca2ools

Complex, I know. But it really is that simple. After this you can either wait for the puppet agent to contact the puppet master or, on a standalone box, run puppet apply /etc/puppet/manifests/init.pp. With that, the latest version of euca2ools is installed on your host and ready to go.
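
For the standalone case, the whole setup boils down to a few commands. A sketch (the clone URL below is a guess at the recipes repo location; adjust the paths to your layout):

# Hypothetical clone URL -- use the actual Eucalyptus Recipes repo location
git clone https://github.com/eucalyptus/recipes.git
cp -r recipes/puppet/euca2ools /etc/puppet/modules/

echo "include euca2ools" > /etc/puppet/manifests/init.pp
puppet apply /etc/puppet/manifests/init.pp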

Check it out and let me know if you have any suggestions. I’m new to puppet, so any comments from experienced puppeteers would be helpful.

I’ve also added this module to a personal repository and uploaded it to the Puppet Forge. Check it out here: https://forge.puppetlabs.com/ahamilton55/euca2ools
