Zokahn -Personal Blog-


Running the strongman for ALS


Virtualization is not cloud!

There is a distinct difference between virtualization, as used in organizations for over a decade, and the newer evolution in IT called 'cloud'. In classic virtualization the focus is on the virtual machine itself. The compute resources, the virtualization hosts, are made to host multiple instances of a guest OS, possibly connected to predefined networks and storage volumes. Manual or automated effort is needed to connect the VM to the resources it needs, which have to exist or be created up front before the VM can utilize them.

In virtualization the VM can be tailored to the user's exact specifications; a VM can be made to scale up (add more compute resources to the single VM) to match a single application's needs. A VM is typically nurtured and upgraded with minor and major software upgrades. In virtualization the SLA typically targets the uptime and performance of the actual VM.

In cloud the focus is extended from the VM to the whole datacenter. All the datacenter facilities are presented to the user via self-service portals and/or an application programming interface (API). These services consist of software-defined entities that can be fully controlled by their user without any intervention from IT staff.
The user can create their own datacenter environment based on a predefined budget; the datacenter is implemented automatically using on-premises compute, network and storage resources, or the user is (seamlessly) connected to remote resources.

Cloud resources help the user to scale out (use more, but smaller, compute instances), as the virtual machines themselves cannot and should not be tailored to exact specifications. A cloud VM (instance) receives only the minor, security-related OS updates. Major release upgrades are performed by deleting an instance and redeploying a new instance based on a new OS image.
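This delete-and-redeploy workflow can be sketched with the OpenStack nova client. The instance name, image and flavor below are assumptions for illustration, not values from a real environment:

```shell
# Sketch of a cloud-style major upgrade: replace the instance, don't upgrade it.
# "web01", "rhel7-image" and "m1.small" are example names.
nova delete web01
nova boot --image rhel7-image --flavor m1.small web01
```

The instance is treated as disposable; anything worth keeping lives in the image or on external storage.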

The SLA in cloud typically targets the IT infrastructure: the ability to start a cloud instance (VM) and use its datacenter facilities. The uptime of a specific VM should not matter and is normally not part of the cloud SLA.


A bit about OpenStack processes

OpenStack can be overwhelming; it's big and complex. It can be hard to find the source of a problem, so in this article I want to share some ways to get extra information about the processes and their settings. OpenStack software is developed in Python. Once a process starts it will show up in the process list, and it starts to log information based on its configured logfile/dir location and verbosity. But what if the process fails to start? No messages, no hints on what went wrong.

Let's take a look at a typical OpenStack process. In this example we look at Glance, but almost all OpenStack components follow the same principles. In a typical setup Glance will show up a few times in the process list. Running "ps -ef | grep glance" on a controller node will uncover the processes. We are interested in the 'CMD' field, the last part of the process line:

/usr/bin/python /usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf --verbose

This line gives a lot of information: taken as a whole, it is the complete command that the 'openstack-glance-api' init script uses to start the API process. The command loads two config files, first glance-api-dist.conf and then glance-api.conf. It also puts the process in verbose logging mode.

The config file part is good information: glance-api-dist.conf holds the default and (most) mandatory configuration items and their values. Take a look at this file; these settings apply if you set nothing in the glance-api.conf file. Anything set in glance-api.conf will override these settings, simply because that file is loaded after the dist file.
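The "last file wins" behaviour can be illustrated with a small emulation. The file names and the workers option below are made up for the demonstration; in reality oslo.config does the parsing:

```shell
# Emulate how a later --config-file overrides an earlier one.
# /tmp/dist.conf plays the role of glance-api-dist.conf,
# /tmp/local.conf the role of glance-api.conf.
printf 'workers=1\n' > /tmp/dist.conf
printf 'workers=4\n' > /tmp/local.conf

# Read both files in order; the last value seen wins.
value=$(cat /tmp/dist.conf /tmp/local.conf | awk -F= '/^workers=/ {v=$2} END {print v}')
echo "effective workers=$value"
```

Here the value 4 from the second file ends up as the effective setting, just as a value in glance-api.conf shadows the dist default.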

Let's say you are having problems getting a process started... You have run "service openstack-glance-api start" and nothing happens: the OS reports [OK], but glance-api is not listed in the process list. Checking the logging in /var/log/glance shows nothing, no new log entries... What would you do? Start the process by hand! Just paste this on the command line:

/usr/bin/python /usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf --verbose

This will result in the following output:

2013-11-28 10:01:07.306 2147 WARNING glance.store.base [-] Failed to configure store correctly: Store s3 could not be configured correctly. Reason: Could not find s3_store_host in configuration options. Disabling add method.
2013-11-28 10:01:07.315 2147 ERROR glance.store.swift [-] Could not find swift_store_auth_address in configuration options.
2013-11-28 10:01:07.315 2147 WARNING glance.store.base [-] Failed to configure store correctly: Store swift could not be configured correctly. Reason: Could not find swift_store_auth_address in configuration options. Disabling add method.

If there is anything preventing the Glance API from starting, it will be listed in this output. This method also shows the exceptions that are not handled by the log facility developed by the OpenStack team. As said before, this works for Glance, but it will also work with most other OpenStack services.
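As a sketch, the same trick applied to another service would look like this. The nova paths and config file are assumptions; check your own ps output for the exact command and config files first:

```shell
# Start nova-api by hand with its config file to see startup errors directly
/usr/bin/python /usr/bin/nova-api --config-file /etc/nova/nova.conf
```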

Happy OpenStacking!


Personal record on the 10K ;-)


3 days, five countries, part 2


Three days, five countries, part 1




Running: Another 10K


Running… Wonderful!


RHN Satellite – Separating RHEL6.0, RHEL6.1 and RHEL6.2 versions

Red Hat Network Satellite works with "base channels". If you run satellite-sync regularly you will have the latest versions of all the RPMs released by Red Hat in that base channel. The channel can be updated with the satellite-sync command, but this command just updates the channel to hold all available RPMs in one place, without any reference to which update level they belong to.
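A regular sync of a base channel looks like this. The channel label is an assumption; list the labels on your own Satellite first:

```shell
# List the channels the Satellite knows about, then sync one base channel
satellite-sync --list-channels
satellite-sync -c rhel-x86_64-server-6
```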

If you use software that requires a certain update level (RHEL6.1 or RHEL6.2, for instance) you might have a hard time facilitating that. Satellite (the web UI or the sync tools) did not really have a way to generate cloned channels based on update level, until Satellite version 5.4 arrived! So if you have software that requires your machines to run a specific version of Red Hat Enterprise Linux: please, read on.

Note: before 5.4 it was already possible to sort packages based on update level; the community version of RHN Satellite (Spacewalk) developed this facility, and it was adopted in the Red Hat product.


The tool used (spacewalk-create-channel) is part of the spacewalk-remote-utils package found in the RHN Tools for RHEL channel. If you have trouble finding this package, make sure you sync this child channel along with your base channel. The tool has a man page that explains all the options. spacewalk-create-channel has package manifests that it uses to sort the existing base channel; it creates a clone channel with references to the already-synced packages. The tool is perfectly happy on disconnected Satellite servers.

The following is an example that creates a RHEL6.1 clone channel (username and server are placeholders):
spacewalk-create-channel -l <username> -s <server> -v 6 -r Server -u U1 -a x86_64 -d rhel6-1-channel -N "RHEL 6.1 channel"

-l <username>
The Satellite username (admin or personal account), this account should have enough authorization to create / clone channels.

-s <server>
The Satellite server that needs the new channel; if no server is listed, localhost will be used.

-v <version>
The version of the channel to create (e.g. 5, 4, 3, 2.1).

-r <release>
The release of the channel to create (e.g. AS, ES, WS, Server, Client, Desktop).

-u <Update level>
The update level of the channel to create (e.g. GOLD, U1, U2, U3, U4, U5, U6, U7, U8, U9), where GOLD stands for the initial release.

-a <Architecture>
The arch of the channel to create (e.g. i386, ia64, ppc, s390, s390x, x86_64).

-d <label>
The label of the destination channel. This will be created if not present.

-N <title>
If the destination channel is created, this title is used for its name. If not provided, the label will be used.

Please see the man page for more options. I would like to thank Thomas Cameron (http://people.redhat.com/tcameron/) for helping me discover this tool via his Red Hat Summit slides.