Archive for the ‘Red Hat and CentOS’ Category

SuperMicro ipmicfg utility on Linux

Sunday, December 15th, 2013

SuperMicro have a nice little utility called ipmicfg, which can be used to interact with the IPMI BMC from within your operating system. It can do all sorts of things, but it’s particularly useful if you want to change the IP address details on the IPMI card without rebooting your system and going into the BIOS setup.

To get started, download the latest version of ipmicfg from the SuperMicro FTP site (currently it’s ftp://ftp.supermicro.com/utility/IPMICFG/ipmicfg_1.14.3_20130725.zip).

Unzip this and you will find DOS, Linux and Windows versions of the ipmicfg tool, as well as a bit of documentation. I’m only really interested in the Linux version, so let’s go into that folder, where you will find 32-bit and 64-bit versions.

There are two binary files included – “ipmicfg-linux.x86_64” which is dynamically linked and “ipmicfg-linux.x86_64.static” which is statically linked. The dynamically linked version normally works fine for me.

As a quick example of how to use ipmicfg, let’s change the IPMI BMC IP address from being assigned via DHCP to being statically configured to 192.168.1.2 with a subnet mask of 255.255.255.0 and the default gateway set to 192.168.1.1:

./ipmicfg-linux.x86_64 -dhcp off
./ipmicfg-linux.x86_64 -m 192.168.1.2
./ipmicfg-linux.x86_64 -k 255.255.255.0
./ipmicfg-linux.x86_64 -g 192.168.1.1
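
If memory serves, running -m with no argument just prints the current IP and MAC address, so it makes for a quick check that the new settings have actually been applied:

./ipmicfg-linux.x86_64 -m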

When you run ipmicfg, you may see errors along the lines of:

[kcs] kcs_error_exit:
[kcs] kcs_error_exit:
[kcs] kcs_error:
[kcs] kcs_error_exit:

This essentially means that ipmicfg is having problems communicating with the IPMI BMC, and can normally be resolved by installing the IPMI drivers and loading them into the kernel. On CentOS you can do this with the following commands:

yum -y install OpenIPMI
service ipmi start
chkconfig ipmi on
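
Once the ipmi service is up, you can confirm that the kernel modules loaded and that a device node was created (it’s normally /dev/ipmi0, although that’s an assumption about your particular board):

lsmod | grep ipmi
ls -l /dev/ipmi0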

Building httpd 2.4.6

Sunday, July 28th, 2013

When trying to build an RPM from the Apache source tarball, rpmbuild bails with:

RPM build errors:
Installed (but unpackaged) file(s) found:
/usr/lib64/httpd/modules/mod_proxy_wstunnel.so

The problem is that the mod_proxy_wstunnel.so file has been omitted from the httpd.spec file used to build the RPM.

Extract the tarball, open up httpd.spec in your favourite text editor and scroll down until you find a section that looks like:

%dir %{_libdir}/httpd
%dir %{_libdir}/httpd/modules
%{_libdir}/httpd/modules/mod_access_compat.so
%{_libdir}/httpd/modules/mod_actions.so

%{_libdir}/httpd/modules/mod_vhost_alias.so
%{_libdir}/httpd/modules/mod_watchdog.so

%dir %{contentdir}

This should start at line 308. Add in the following line:

%{_libdir}/httpd/modules/mod_proxy_wstunnel.so

You can now run rpmbuild again. Since the spec file inside the original tarball is unchanged, you will either need to re-pack the tarball with the fixed spec or switch to using “rpmbuild -bb httpd.spec” instead of “rpmbuild -tb httpd-2.4.6.tar.bz2”.
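
If you’d rather script the change than edit the spec by hand, a sed one-liner along these lines should do the job (just a sketch: it assumes GNU sed and that the %files list contains the mod_actions.so entry shown above, after which the new entry is inserted):

sed -i '/mod_actions\.so/a %{_libdir}/httpd/modules/mod_proxy_wstunnel.so' httpd.spec
rpmbuild -bb httpd.spec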

EPEL NSD RPM and the missing PID file directory

Sunday, June 26th, 2011

NSD is a fantastic authoritative nameserver from NLnet Labs, developed in conjunction with the RIPE NCC to be highly scalable and secure, with no recursive features by design. In fact, it is such a good nameserver that it is used on three of the root nameservers (k.root-servers.net, h.root-servers.net and l.root-servers.net).

Thanks to the EPEL project run by the Fedora guys, you can quickly and easily install an up-to-date copy of NSD on CentOS/RHEL systems. The only problem that I have found so far is that the RPM doesn’t seem to create the directory for the PID file specified in /etc/nsd/nsd.conf, so the daemon won’t start out of the box.

Obviously it is easy enough to create the /var/run/nsd directory with mkdir, but remember to chown/chgrp this directory to the nsd user and group, otherwise “nsdc restart” will fail with errors in /var/log/messages along the lines of “failed to unlink pidfile /var/run/nsd/nsd.pid: Permission denied”.
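
For the record, the fix looks something like this (assuming the RPM has created the nsd user and group, which it should have done):

mkdir -p /var/run/nsd
chown nsd:nsd /var/run/nsd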

Using kill to display dd progress

Tuesday, November 9th, 2010

How long has that dd process been running for now? Is it even doing anything? How long is it going to take?

If you want dd to give you a progress update, find out its process ID (PID) and then send it the USR1 signal with:

kill -USR1 $PID

And dd will then print out the same records in/out, bytes copied, time taken and overall speed to STDERR as it would when it finishes.

It doesn’t matter if you are redirecting STDOUT (for example, to pipe the data stream to another machine via netcat, or to compress it with gzip), but make sure that you aren’t sending STDERR anywhere such as /dev/null.

Make sure you specify the “-USR1” argument to kill, as you don’t want to send SIGTERM to dd by mistake!
By default, kill will send SIGTERM (or SIGKILL if you use kill -9) to the specified process, but with “-USR1” you are telling kill to send a different signal, in this case one that causes dd to print the progress summary, so you aren’t actually going to “kill” the process.

You can even have the progress refresh every few seconds with a command such as

watch -n 10 kill -USR1 $PID

Just replace $PID with the PID of the running dd process (or set the PID environment variable to the process ID).
If you started dd in the background from the same shell, you can get its PID with the special $! variable; otherwise you’ll have to use ps to find it.
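
Putting it all together, here’s a harmless worked example you can try (dd is just copying zeros to /dev/null, and $! grabs the PID of the background job as mentioned above; hit Ctrl-C to stop watch once dd has finished):

dd if=/dev/zero of=/dev/null bs=1M count=102400 &
PID=$!
watch -n 10 kill -USR1 $PID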

The dreaded scrolling GRUB GRUB GRUB on startup

Thursday, September 16th, 2010

Rebooting a server is always stressful, particularly when you don’t have immediate physical access to it. Of course, when the server inevitably doesn’t come back up you need to either get directly on the local console or connect in via KVMoIP and one of the worst things you can see is “GRUB GRUB GRUB” scrolling past endlessly.

This is a sign that stage 1 of the GRUB bootloader, which is stored in the Master Boot Record (MBR), has somehow become corrupted, so GRUB can’t start. There is no way to even get into the GRUB command line to boot the system manually or troubleshoot further, as the problem is with stage 1 and not stage 2.

As I ran into this on a CentOS machine, I used a netinstall CD with the virtual media feature on an IP KVM attached to the server to boot into rescue mode and chroot into the operating system installed on the drive in question. I could then identify the /boot hard drive number and partition from /boot/grub/menu.lst ready to re-install GRUB and point it at the stage 2 files.

Simply run /sbin/grub to get to a version of the GRUB command prompt and then (assuming /boot/grub/menu.lst references root (hd0,0) for each of the menu options) just run:

root (hd0,0)
setup (hd0)

You should see a series of messages about looking for stage 1.5 and 2 files and then that it has successfully embedded. Congratulations, GRUB has now been re-installed and simply rebooting your machine should take you straight into your operating system as normal.
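
As an aside, if you’d rather not drive the GRUB shell interactively, running grub-install from inside the chroot should achieve the same result (assuming the boot disk is /dev/sda; substitute your own device):

/sbin/grub-install /dev/sda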

Python setuptools and get_python_version is not defined

Sunday, September 12th, 2010

If you run into the below error when using setuptools (setup.py), then it’s quite possible that you’re using an outdated version of Python’s setuptools. In particular, the python-setuptools package in the CentOS yum repository is too old.

Traceback (most recent call last):
  File "setup.py", line 19, in ?
    setup(**metadata)
  File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.4/site-packages/setuptools/command/bdist_rpm.py", line 28, in run
    _bdist_rpm.run(self)
  File "/usr/lib64/python2.4/distutils/command/bdist_rpm.py", line 377, in run
    self.move_file(rpm, self.dist_dir)
  File "/usr/lib/python2.4/site-packages/setuptools/command/bdist_rpm.py", line 20, in move_file
    getattr(self.distribution,'dist_files',[]).append(
NameError: global name 'get_python_version' is not defined

Luckily, this is quite easy to fix; simply remove the RPM and download the latest version from http://pypi.python.org/pypi/setuptools then just run it with “sh” as if it was a normal shell script.
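
In practice that boils down to something like the following (the egg filename is just a placeholder, as it depends on which version you download):

rpm -e python-setuptools
sh setuptools-*.egg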

Sysconfig ifcfg scripts and VLAN sub-interfaces

Monday, August 16th, 2010

If you are using the ifcfg scripts in /etc/sysconfig/network-scripts to bring up VLAN sub-interfaces on a NIC and you are getting messages such as:

Bringing up interface eth0.200: Device eth0.200 does not seem to be present, delaying initialization.

instead of

Bringing up interface eth0.200: Added VLAN with VID == 200 to IF -:eth0:-

as you would expect, then make sure that you have the vconfig RPM installed.
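
Installing it is a one-liner, and for completeness here’s roughly what a minimal VLAN sub-interface script might look like (the addresses are just examples):

yum -y install vconfig

# /etc/sysconfig/network-scripts/ifcfg-eth0.200
DEVICE=eth0.200
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
IPADDR=192.168.200.1
NETMASK=255.255.255.0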

HyperVM and yum update Transaction Check Errors

Monday, August 16th, 2010

If you’re having file conflict problems when running “yum update” on servers with the lxlabsupdate repository for HyperVM (or Kloxo) installed then there’s a simple resolution:

cd /var/cache/yum/lxlabsupdate/packages/
rpm -Uvh *.rpm --replacefiles --replacepkgs

This should fix errors such as:

file /usr/share/man/man1/pcregrep.1.gz from install of pcre-8.02-1.el5_5.1.x86_64 conflicts with file from package pcre-6.6-2.el5_1.7.i386
file /usr/share/man/man1/pcretest.1.gz from install of pcre-8.02-1.el5_5.1.x86_64 conflicts with file from package pcre-6.6-2.el5_1.7.i386

Restoring the contents of /dev

Sunday, July 18th, 2010

Have you ever deleted everything out of /dev by accident (or even on purpose)? Although it may seem that all is lost or that you have a lot of work ahead of you, it’s actually quite easy to restore on a modern Linux system such as CentOS 5 (or the RHEL equivalent).

The first thing you need to know is that CentOS and Red Hat use udevd, which means that the entries in /dev are dynamically created by the udev daemon, and restarting this daemon will force it to re-create everything in /dev, just as it would when you start your computer up. This daemon isn’t controlled in the normal way through the /etc/init.d scripts though; all you need to run is:

/sbin/start_udev

This will kill any running copies of udev and then start it back up, re-creating the /dev entries in the process. It seems to be quite safe to do on a production system, but it’s probably wise to only do this if you really have to: if you haven’t actually damaged the contents of /dev there’s nothing to gain, and some of your applications may not take kindly to the device nodes briefly disappearing.

This will have re-created most of your device nodes in /dev, but there are still a couple of important ones missing, namely those used by device-mapper and LVM. You can get these back with the following two commands:

dmsetup mknodes
vgmknodes

The first re-creates the entries under /dev/mapper, and the second re-creates the LVM volume group entries under /dev/, such as /dev/VolGroup00/ (the default volume group name on CentOS and Red Hat).
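
A quick sanity check afterwards (this assumes the default VolGroup00 naming mentioned above):

ls -l /dev/mapper/ /dev/VolGroup00/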

Hopefully this will save someone a real headache, or even an unnecessary rebuild/restore from backup. Just be more careful with rm next time! 😉

Intel VT Virtualisation Technology on Dell PowerEdge servers

Saturday, June 19th, 2010

Somewhat annoyingly, Dell seem to like to disable Intel VT (Virtualisation Technology, sometimes called VMX) in the BIOS on their PowerEdge servers, which means that you can’t use the Xen hypervisor to virtualise Microsoft Windows Server without changing this setting, and the change requires a reboot of the server to take effect.
You can use omreport from the Dell OpenManage Server Administrator software to check whether or not you have Intel Virtualisation Technology enabled.
If you haven’t got OpenManage Server Administrator installed, then you can enable the Dell yum repository for CentOS/Red Hat systems and install it with:

wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
yum -y install srvadmin-base
/opt/dell/srvadmin/sbin/srvadmin-services.sh start

Once you’ve got the Dell OpenManage Server Administrator services running, you can take a look at what processor is installed in your system and what the current BIOS settings are with:

omreport chassis processors
omreport chassis biossetup

The two attributes that you’re looking for are Processor Virtualization Technology (which needs to be enabled) and Demand-Based Power Management (which needs to be disabled).
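
The biossetup report is fairly long, so a quick grep should narrow it down to the two attributes in question (nothing clever, just filtering the output):

omreport chassis biossetup | grep -i -E 'virtualization|demand'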

If you need to change them, then you can do this with:

omconfig chassis biossetup attribute=cpuvt setting=enabled
omconfig chassis biossetup attribute=dbs setting=disabled

You can check the new settings by running omreport chassis biossetup again, and then once you’ve rebooted the server you can start taking advantage of the hardware virtualisation provided by Intel’s Virtualisation Technology.
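
If you’re running the Xen kernel, one way to confirm that hardware virtualisation is actually available after the reboot is to look at the xen_caps line, which should list hvm entries once VT is active (this is an assumption about your setup rather than part of the Dell tooling):

xm info | grep xen_caps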