Building Bareos DEBs on Ubuntu 16.04

March 16th, 2018

Following on from my previous post on how to build Bareos (Backup Archiving Recovery Open Sourced) RPM packages on CentOS 6 & 7, the following instructions will show you how to build .deb versions of the packages on Ubuntu 16.04.

Again, these instructions are based on Bareos version 17.2.5, so they would need to be adjusted appropriately for other versions, and I’m working exclusively with 64-bit (amd64) versions.

Before we start, let’s make sure that everything is up to date:

apt-get update
apt-get upgrade

Next, we’ll need to install all of the dependencies required to build the .deb packages. We’ll use the libfastlz and libfastlz-dev packages from the Bareos repositories:

apt-get install build-essential acl-dev autotools-dev bc chrpath debhelper libacl1-dev libcap-dev libjansson-dev liblzo2-dev libqt4-dev libreadline-dev libssl-dev libwrap0-dev libx11-dev libsqlite3-dev libmysqlclient-dev libpq-dev mtx ncurses-dev pkg-config po-debconf python-dev zlib1g-dev glusterfs-common librados-dev libcephfs-dev apache2-dev apache2 autoconf automake python-all python-setuptools
dpkg -i libfastlz_0.1-7.2_amd64.deb libfastlz-dev_0.1-7.2_amd64.deb

Now let’s download the Bareos source code from the various repositories on GitHub and extract it ready for building:

wget -qO - | tar zx
wget -qO - | tar zx
wget -qO - | tar zx

Before starting the build, we need to create a changelog file which contains information used by the build process. Use your favourite text editor to put the below into ~/bareos-Release-17.2.5/debian/changelog:

bareos (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release;

 -- Your Name <your@email.address>  Fri, 16 Mar 2018 10:58:00 +0000
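If you build packages for more than one Bareos version, the stanza above can be generated rather than hand-edited each time. Below is a minimal Python sketch of that idea (this helper is my own illustration, not part of the Bareos tooling, and the maintainer details are placeholders):

```python
# Render a debian/changelog stanza like the one above.
# Maintainer name and email are placeholders -- substitute your own.
from datetime import datetime, timezone
from email.utils import format_datetime

def changelog_entry(package, version, maintainer, email, when):
    """Return one Debian changelog stanza in the required format:
    a header line, an indented change list, and a trailer line with
    exactly one space before '--' and two spaces before the date."""
    return (
        f"{package} ({version}-0) stable; urgency=low\n"
        "\n"
        f"  * {package} {version} release;\n"
        "\n"
        f" -- {maintainer} <{email}>  {format_datetime(when)}\n"
    )

entry = changelog_entry(
    "bareos", "17.2.5", "Your Name", "your@email.address",
    datetime(2018, 3, 16, 10, 58, tzinfo=timezone.utc),
)
print(entry)
```

The same helper covers the bareos-webui and python-bareos changelogs later in this post by swapping the package name.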

Once that’s done, you can start the build process:

cd ~/bareos-Release-17.2.5/
fakeroot debian/rules binary

Now we just need to repeat this process for the bareos-webui package. Use your favourite text editor to create the ~/bareos-webui-Release-17.2.5/debian/changelog file containing the below:

bareos-webui (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release;

 -- Your Name <your@email.address>  Fri, 16 Mar 2018 10:58:00 +0000

Unlike the main bareos repository, the debian/rules file isn’t executable by default in the code from the bareos-webui repository, so we need to set that before we can start the build process:

cd ~/bareos-webui-Release-17.2.5/
chmod +x debian/rules
fakeroot debian/rules binary

Finally we need to build the python-bareos package. Use your favourite text editor to create the ~/python-bareos-Release-17.2.5/debian/changelog file containing the below:

python-bareos (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release;

 -- Your Name <your@email.address>  Fri, 16 Mar 2018 10:58:00 +0000

Then it’s just the usual commands to start the build process:

cd ~/python-bareos-Release-17.2.5/
fakeroot debian/rules binary

You should now have all of the .deb package files in your home directory which you can install locally or host in your own APT repository.

Building Bareos RPMs on CentOS 6 & 7

March 14th, 2018

Bareos (Backup Archiving Recovery Open Sourced) is a popular open source backup system originally forked from the Bacula project, but they only publicly publish packages for the first release of each major version; updates are reserved for paying customers. The source code is available on GitHub however, so you can pretty easily build your own packages, even if exactly how to do it doesn’t seem to be documented.

These instructions are based on Bareos version 17.2.5, so would need to be adjusted appropriately for other versions. I’m also working exclusively with 64-bit (x86_64) versions.

Before we start, let’s make sure that everything is up to date:

yum -y update

If you don’t already have the EPEL repository installed, then install it, as we’ll need it for the jansson-devel and libcmocka-devel build dependencies:

yum -y install epel-release

Now install everything needed to build the RPMs. We’ll use the libdroplet, libdroplet-devel, libfastlz and libfastlz-devel packages from the Bareos repositories.

On both CentOS 6 and CentOS 7:

yum -y install rpm-build wget autoconf automake httpd httpd-devel glusterfs-devel glusterfs-api-devel git-core gcc gcc-c++ glibc-devel ncurses-devel readline-devel libstdc++-devel zlib-devel openssl-devel libacl-devel lzo-devel sqlite-devel mysql-devel postgresql-devel libcap-devel mtx qt-devel libcmocka-devel python-devel python-setuptools libtermcap-devel tcp_wrappers redhat-lsb jansson-devel tcp_wrappers-devel

It’s a good idea to run the build under an unprivileged user. I’ve set up a dedicated user called “build” for this, but any normal user account will do.
Let’s set up the build environment and download the Bareos source code from the various repositories on GitHub:

useradd build
su - build
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
wget -O bareos-17.2.5.tar.gz
tar xf bareos-17.2.5.tar.gz
mv bareos-Release-17.2.5/ bareos-17.2.5
tar zcf bareos-17.2.5.tar.gz bareos-17.2.5
mv bareos-17.2.5/platforms/packaging/bareos.spec ~/rpmbuild/SPECS/
rm -rf bareos-17.2.5
mv bareos-17.2.5.tar.gz ~/rpmbuild/SOURCES/
wget -O bareos-webui-17.2.5.tar.gz
tar xf bareos-webui-17.2.5.tar.gz
mv bareos-webui-Release-17.2.5/ bareos-webui-17.2.5
tar zcf bareos-webui-17.2.5.tar.gz bareos-webui-17.2.5
mv bareos-webui-17.2.5/packaging/obs/bareos-webui.spec ~/rpmbuild/SPECS/
rm -rf bareos-webui-17.2.5
mv bareos-webui-17.2.5.tar.gz ~/rpmbuild/SOURCES/
wget -O python-bareos-17.2.5.tar.gz
tar xf python-bareos-17.2.5.tar.gz
mv python-bareos-Release-17.2.5/ python-bareos-17.2.5
tar zcf python-bareos-17.2.5.tar.gz python-bareos-17.2.5
mv python-bareos-17.2.5/packaging/python-bareos.spec ~/rpmbuild/SPECS/
rm -rf python-bareos-17.2.5
mv python-bareos-17.2.5.tar.gz ~/rpmbuild/SOURCES/

Edit the ~/rpmbuild/SPECS/bareos.spec file in your favourite text editor and set “Version” (line 8) to “17.2.5” as well as “Release” (line 9) to “0%{?dist}”.
You also need to search for “BuildRequires: libqt4-devel” (line 186) and replace it with “BuildRequires: qt-devel”.

By default, GlusterFS and Droplet support isn’t built on CentOS 6 for some reason, so if you want them then you need to edit “%define glusterfs 0” and “%define objectstorage 0” (lines 45 and 46) and set them to 1.

Now you’re ready to run the build itself. On CentOS 6:

rpmbuild -ba ~/rpmbuild/SPECS/bareos.spec --define "centos_version 600"

And on CentOS 7:

rpmbuild -ba ~/rpmbuild/SPECS/bareos.spec --define "centos_version 700"

Once this finishes, you should find a collection of several Bareos RPMs in ~/rpmbuild/RPMS/x86_64/. We need the bareos-common package installed in order to build the Web UI, so become root and install it.

On CentOS 6:

yum -y install /home/build/rpmbuild/RPMS/x86_64/bareos-common-17.2.5-0.el6.x86_64.rpm

On CentOS 7:

yum -y install /home/build/rpmbuild/RPMS/x86_64/bareos-common-17.2.5-0.el7.centos.x86_64.rpm

Next, edit the ~/rpmbuild/SPECS/bareos-webui.spec file in your favourite text editor and set “Version” (line 4) to “17.2.5”.

Now you’re ready to build the Web UI package:

rpmbuild -ba ~/rpmbuild/SPECS/bareos-webui.spec

Finally, edit ~/rpmbuild/SPECS/python-bareos.spec in your favourite text editor and set “Version” (line 21) to “17.2.5” as well as “Release” (line 22) to “0%{?dist}” and build the final package:

rpmbuild -ba ~/rpmbuild/SPECS/python-bareos.spec

You should now have the full complement of RPMs in ~/rpmbuild/RPMS/x86_64/ and ~/rpmbuild/RPMS/noarch/.

If you need to rebuild the RPMs for the same version of Bareos for any reason, then you should increment the value of Release in the relevant .spec file by 1 each time (e.g. “1%{?dist}”, “2%{?dist}” etc.).
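That Release bump is easy to script. Here’s a small Python sketch (my own illustration, assuming the “N%{?dist}” form used above) that increments the integer prefix of the first Release line in a spec file:

```python
# Bump the integer prefix of the first "Release:" line in a spec file,
# e.g. "Release: 0%{?dist}" -> "Release: 1%{?dist}".
import re

def bump_release(spec_text):
    def repl(match):
        # Keep the label and the %{?dist} suffix, increment only the number.
        return f"{match.group(1)}{int(match.group(2)) + 1}{match.group(3)}"
    return re.sub(r"^(Release:\s*)(\d+)(.*)$", repl, spec_text,
                  count=1, flags=re.M)

print(bump_release("Release: 0%{?dist}"))  # -> Release: 1%{?dist}
```

You’d read the spec file, pass its contents through `bump_release()`, and write it back before re-running rpmbuild.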

You can now GPG sign your RPMs if you want and then add them to your own central yum repository with createrepo or just directly install them locally with rpm.

WHMCS and Nominet EPP farce (again!)

September 10th, 2016

On the 14th of January 2016 Nominet notified EPP users that they would be upgrading their EPP system to require TLS version 1.1 or higher connections on the 8th of June 2016 in order to keep the EPP system secure.
From the 2nd of February 2016 the Nominet EPP testbed was updated to reflect these new requirements so that registrars and software developers could test that their systems will work once this change has been made to the production EPP platform.

In the past, WHMCS have ignored notices from Nominet and waited until after such changes were put live before making the appropriate changes to their software, so I raised a ticket with WHMCS on the 19th of February 2016 in order to check that WHMCS were aware of the planned change and would be making the appropriate changes.
As usual, WHMCS completely dismissed this ticket and point blank refused to investigate any potential impact which could affect their customers.

In this case, the original date of the 8th of June was pushed back to the 22nd of June before being rescheduled to the 16th of August in order to give registrars more time to update and test their systems against the EPP testbed.
However, despite having over 7 months notice from Nominet and 6 months notice from at least one customer, WHMCS made no effort to test their EPP implementation against the Nominet testbed until after Nominet had made the changes and WHMCS customers complained of problems.
The WHMCS module was updated 4 days *AFTER* the change, which fell on a weekend and so some customers may not have had the technical resource immediately on-hand to test and deploy the update.

To top it all off, both publicly and in tickets WHMCS arrogantly tried to blame Nominet for their usual dismal failings.
I tried to comment on the WHMCS blog and initially this comment was published, but despite WHMCS replying to it, the comment was moderated and my follow up response was deleted (as was another reply asking why they deleted the previous reply!). Three weeks later, the comment is still moderated…

It speaks volumes about the way WHMCS is run that they not only created the initial problem by arrogantly refusing to make the effort to test their module, but then tried to lay the blame on Nominet as well as actively suppressing any negative comments criticising them for the way that they treat their customers with such contempt.

CSF bugs and updates

June 25th, 2016

ConfigServer Security and Firewall (CSF) is a great program for managing iptables/netfilter firewall rules on Linux servers and performing automated blocks based on various things such as brute force login attempts. It’s well worth checking out, and I really shouldn’t complain given that it’s free, but sometimes I really do wonder if ConfigServer/Way to the Web actually do any testing at all before releasing new versions!

7 issues fixed in 6 bugfix releases (9.01 to 9.06) in 2 days! It’s a good job that the automatic update feature works properly…

ValueError in Django migration

May 15th, 2016

I’ve recently started developing in Python+Django again for a personal site that I’m working on and for far too long today I’ve been pulling my hair out when trying to write what should be a fairly simple migration using migrations.RunPython() to generate a default value to assign to a new OneToOneField column.

The rather confusing error message that I was receiving is:

ValueError: Cannot assign "<User: blah>": "Profile.account" must be a "User" instance.

This seems to be the same problem described in another blog post that I found, but nothing mentioned in that post worked.

I was hopeful when I came across a Django bug report suggesting that this was a known issue, but alas, whilst similar, it’s not exactly the same problem and that bug was closed over a year ago. Back to square one!

What was particularly confusing was that the same code seemed to work as expected when run in the Django shell:

from foo.models import Profile, Article
from django.contrib.auth.models import User
first_superuser = User.objects.all().filter(is_superuser=True).first()
default_profile = Profile(account=first_superuser, biography="")

After a lot of fiddling, I eventually realised that the way which I was importing the User model from the built in Django auth system was wrong for a migration and so I replaced:

from django.contrib.auth.models import User

with:

User = apps.get_model('auth', 'user')

And now the migration finally works as expected. 🙂
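For reference, the working data migration ended up looking roughly like the sketch below (the app label and model names are illustrative, taken from the shell session above rather than my exact code):

```python
# A data migration function for migrations.RunPython(). The key point:
# fetch models through apps.get_model() (the historical model registry)
# rather than importing django.contrib.auth.models.User directly, which
# is what triggered the ValueError.
def create_default_profile(apps, schema_editor):
    User = apps.get_model('auth', 'User')
    Profile = apps.get_model('foo', 'Profile')  # 'foo' is an example app label
    first_superuser = User.objects.filter(is_superuser=True).first()
    if first_superuser is not None:
        Profile.objects.create(account=first_superuser, biography="")

# Wired into the Migration class as:
#     operations = [migrations.RunPython(create_default_profile)]
```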

Cumulus attacks on Juniper (again)

November 12th, 2015

I have a lot of time for Cumulus Networks – I think they’re doing some very cool and unique things with their Cumulus Linux operating system for switches and they genuinely have something different to offer, but when they publish blog posts like the one they put out today, I lose a lot of respect for them.

This seems to be nothing more than a thinly veiled attack casting FUD (Fear, Uncertainty and Doubt) at a competitor – a knee-jerk reaction to a threat to their business. It actually reads pretty similarly to their blog post from when Juniper originally announced the OCX range. They’ve probably attacked other vendors in a similar manner.

For example, just by going to the main QFX5200 page on the Juniper web site, I find:

Open access to the standard Junos Linux kernel, enabled by the disaggregated version of the Junos software, allows users to install third-party Linux RPM packages and create guest containers and VMs with central resource management and programmable APIs.

Yes that still needs a little more detail, but it answers at least some of the questions and all it took was a couple of clicks! Imagine what you could find out by actually speaking to someone familiar with the details…

I have a few questions of my own for Cumulus Networks:

  1. Did Cumulus Networks actually attempt to find out the answers to any of these points yourselves? If so, were you unable to find the details, or did you just not like what you found so decided to feign ignorance?
  2. Will Cumulus Networks put their money where their mouth is and make sure that Cumulus Linux runs on the Juniper QFX5200 series of switches (assuming that Juniper are willing to co-operate)?
  3. Does Cumulus Linux currently run on any switches powered by the Broadcom StrataXGS Tomahawk chipset? It doesn’t seem to be listed anywhere on the Cumulus Linux HCL that you so helpfully linked to from your blog post.
  4. Does Cumulus Linux currently run on any switches which support 25G, 50G or 100G Ethernet ports? These also seem to be conspicuously absent from the Cumulus Linux HCL.
  5. When will Cumulus Networks offer a fully featured MPLS implementation on their Cumulus Linux control plane?

Upgrading to Junos 12.3 from before 10.4R2 on Juniper EX

October 19th, 2015

In the release notes for Junos 12.3 on Juniper EX series switches, it says:

Upgrading from Junos OS Release 10.4R2 or Earlier

To upgrade to Junos OS Release 12.3 from Junos OS Release 10.4R2 or earlier, first upgrade to Junos OS Release 11.4 by following the instructions in the Junos OS Release 11.4 release notes. See Upgrading from Junos OS Release 10.4R2 or Earlier or Upgrading from Junos OS Release 10.4R3 or Later in the Junos OS 11.4 Release Notes.

Unfortunately, Juniper don’t list any Junos releases older than 12.3R1 for the EX4200 (and possibly other EX series) on their download site.

After poking around the Juniper support site for a bit, I found technical bulletin TSB16151, which contains downloads for Junos 11.4R8-S1 on EX2200, EX3200, EX3300, EX4200, EX4500, EX6200, EX8200 and XRE-200.

With this and the jloader files from technical bulletin TSB15524, I was able to complete the upgrade successfully.

cPanel 54

October 16th, 2015

Yesterday cPanel laid out the upcoming changes in cPanel 11.54, or just cPanel 54 as it’s now known. Whilst light on any details, there are at least some interesting tidbits.

The new versioning system
This makes very little real world difference, but I can’t help but feel like they’re following Google Chrome and Mozilla Firefox in a race to have the largest possible version number!

X3 being retired
Finally! X3 is an absolutely horrible theme which provides a truly terrible experience for users and I’ll be glad to see the back of it at long last!

Paper Lantern becoming the only choice
Hopefully with Paper Lantern becoming the only cPanel user interface (and dropping the silly “Paper Lantern” name!), it will start to move away from just being a tarted up version of X3 with some nicer icons and towards a more friendly, usable interface which doesn’t just feel the need to dump everything on one page!

cPassword, OpenID Connect and 2FA
I’ve got mixed feelings about this – the new cPassword interface sounds like a great idea, but the OpenID Connect feature sounds like a security nightmare, particularly with the default service being hosted externally. At least we’re going to have the option of replacing it with our own backend (as well as being able to disable it altogether, hopefully!).
That said, two factor authentication is a great addition, although I suspect that we are going to see more support tickets as people lose their phones etc. and lock themselves out of their hosting!

IPv6 only
cPanel were massively behind the game when it came to adding full IPv6 support, so it’s good to see them adding the ability to run completely without IPv4 now, particularly given the recent IPv4 exhaustion at ARIN.

Nginx front end
Good to see cPanel finally starting to catch up with Odin Plesk on this one! Hopefully we’ll see support for more complex configurations in future versions.

Directory Syncing
This could be quite useful depending on how it’s implemented. I suspect that it will be some form of asynchronous rsync based system, possibly with FTP and/or inotify based hooks. Hopefully it won’t just be a periodic cron job task!

EasyApache 4
Hopefully EasyApache 4 will move towards using the operating system package management (RPM and YUM) for Apache and PHP, instead of insisting on needlessly compiling everything from scratch. This is one of my biggest pet peeves with cPanel at the moment – it adds needless complexity to system administration, makes simple tasks like adding an Apache module or PHP extension slow and laborious, and even makes installing cPanel pointlessly time consuming. If they have finally caught up with how the rest of the world has been working for the past decade (or more) then it will be great news!

Courier support finally being dropped
Dovecot beats Courier hands down, so it makes sense to stop supporting Courier and move everyone over to Dovecot. There really is little point in spending the extra development effort supporting two mail servers, so I’m a bit surprised that it has taken this long.
I wonder if we’ll continue to see support for both ProFTPD and Pure-FTPd as well as BIND/named, NSD and MyDNS in future or if they will also move those towards only supporting a single daemon.

OpenLiteSpeed OCSP stapling with Comodo PositiveSSL

September 13th, 2015

OpenLiteSpeed supports OCSP stapling, which helps web browsers check the revocation status of an SSL certificate without having to connect to the Certificate Authority’s OCSP servers and so can speed up the SSL connection process.

In order to enable OCSP stapling, first we need to construct the intermediate certificate chain which OpenLiteSpeed will use to cryptographically verify the response from the CA’s OCSP server.

Take the COMODORSADomainValidationSecureServerCA.crt and COMODORSAAddTrustCA.crt files provided by Comodo when your certificate was issued and concatenate them into a single file:

cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt > /etc/pki/tls/certs/PositiveSSL_chain.pem
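As a quick sanity check (just a sketch of my own, not a required step), you can confirm the chain file actually contains both intermediates by counting the PEM markers:

```python
# Count certificates in a PEM bundle by their BEGIN markers; the chain
# file built above should contain exactly two intermediates.
def count_pem_certs(pem_text):
    return pem_text.count("-----BEGIN CERTIFICATE-----")

# Example usage (path as created above):
# with open("/etc/pki/tls/certs/PositiveSSL_chain.pem") as f:
#     print(count_pem_certs(f.read()))  # expect 2
```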

Now log in to the OpenLiteSpeed WebAdmin console and perform the following steps:

  1. Click on “Configuration” on the navigation bar and then select “Listeners” from the drop down menu
  2. Click “View/Edit” on your HTTPS listener
  3. Click on the “SSL” tab
  4. Click “Edit” on the “OCSP Stapling” section
  5. Set “Enable OCSP Stapling” to “Yes”
  6. Set “OCSP Responder” to “”
  7. Set “OCSP CA Certificates” to the file containing the chained intermediate certificates created earlier (“/etc/pki/tls/certs/PositiveSSL_chain.pem” in my case).
  8. Click “Save”
  9. Perform a “Graceful Restart” of the OpenLiteSpeed server

If all has gone well, you now have OCSP stapling working. Click on “Actions” on the navigation bar and then select “Server Log Viewer” from the drop down menu, or look in /usr/local/lsws/logs/error.log, and check that you have a line saying “Enable OCSP Stapling successful!”.

You can also use the excellent SSL Server Test from Qualys’ SSL Labs to check many attributes of your server’s SSL setup, including whether or not OCSP stapling is working.

The Zimbra merry-go-round

August 21st, 2015

I’ve been a big fan of the Zimbra email collaboration system for many years, using it since version 4.5 or 5.0 (I can’t remember exactly). However, in recent years the product has been falling further and further behind competitors such as Microsoft Exchange, particularly in the all important area of redundancy and availability.

Email and collaboration are critical to modern businesses and so every effort needs to be taken in order to ensure that they are always available. Microsoft clearly recognise this as Exchange has had Database Availability Groups (DAG) since Exchange 2010 and before that had a number of other High Availability options.

Zimbra however still does not have this as of the current version (8.6). Zimbra were supposed to be addressing this with a 9.0 release scheduled for the second half of 2015, however now that has been pushed back to the first half of 2017 at the earliest!

Instead, we aren’t getting any more releases in 2015 and all we are getting in the first half of 2016 is version 8.7, which will start to bring back the chat feature that was previously dropped! Zimbra aren’t even providing the chat server to start with – just an XMPP client and you will have to run your own server until version 8.8 arrives in the second half of 2016! This will also bring some much needed anti-spam improvements (although it seems that this will be by integrating an as yet unspecified third party product) and two factor authentication. This seems a long time to wait for not a great deal of new functionality!

I can’t help but feel that this is in a large part due to the constant change of ownership of Zimbra. Back in 2007, Yahoo bought Zimbra for $350m, but then sold it on to VMware in 2010. The exact amount paid wasn’t disclosed, but it was generally reported to be around $100m.

VMware then sold Zimbra to Telligent in 2013, again for an undisclosed amount, who promptly renamed themselves to Zimbra Inc., with the products becoming Zimbra Collaboration (formerly Zimbra) and Zimbra Social (formerly Telligent).

Telligent then acquired Mezeo for their MezeoFile sync-and-share technology in 2014, with the MezeoFile product becoming Zimbra Sync and Share, which was then discontinued in 2015.

Shortly after discontinuing Zimbra Sync and Share, Zimbra made the woolly statement:

“As many of you know, Zimbra made a few strategic decisions over the past few months in order to ensure the company’s stability and achieve an increase in EBITDA”

Not long afterwards, the Zimbra Social product was sold off to a company called Verint and renamed back to Telligent, leaving Zimbra Inc. with just Zimbra Collaboration.

At this point I was naturally wondering if Zimbra Inc. was running out of money, and concerned as to what the future holds for Zimbra Collaboration and its customers given all these recent announcements, but I didn’t have to wait long: just a couple of days later, Zimbra was sold to Synacor for $24.5m. Strangely, this announcement seems to be missing from the Zimbra blog…

Back when Zimbra was owned by VMware, their answer to any questions about availability, redundancy or disaster recovery/business continuity was to run Zimbra inside a VMware environment and use their HA+DR technologies, but soon after being sold off to Telligent they started talking about a project called “Always ON”. This was mentioned in a number of blog posts throughout 2013, a couple of which went into some detail.

Sadly, over 2 years later we are still waiting for this new “Always ON” architecture and it seems that we have to wait at least another year and a half! I’m not holding my breath that things are going to get any better under the new ownership, but right now I’m just glad that my company didn’t buy into Zimbra Social or Zimbra Sync and Share like we considered!