Archive for the ‘Technical’ Category

ProCurve SSH – no matching cipher found

Monday, September 24th, 2018

I recently ran into a strange problem where I suddenly couldn’t SSH to any of our HPE ProCurve 2800 series (2824, 2848) devices from either macOS or Linux. I’m still not really sure what started this as OpenSSH definitely hasn’t been updated recently on the Linux client device at the very least, so I don’t see any reason for the list of ciphers supported on the client to have changed.

Anyway, the error message given by the OpenSSH client was:

Unable to negotiate with port 22: no matching cipher found. Their offer: des,3des-cbc

These ProCurves are pretty old and their SSH support is rather limited (1024 bit keys for example), so it’s not hugely surprising that their supported ciphers are also old and crappy.
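You can see which ciphers your own client is willing to offer (to compare against the switch's offer) with:

```shell
# List the ciphers supported by the local OpenSSH client
# (output varies by OpenSSH version)
ssh -Q cipher
```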
Luckily, with OpenSSH you can specify the cipher(s) that you want to use on the command line when you’re connecting:

ssh -c 3des-cbc <blah>

This fixed the issue and lets me connect, but isn’t particularly convenient. However, you can also specify this in your ~/.ssh/config file so that it is applied automatically:

Host <blah>
    Ciphers 3des-cbc

Just enter one or more hosts to match against (separated by spaces) and OpenSSH will automatically apply the specified options when connecting to any of them.
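As a concrete sketch (the hostnames are placeholders, and I'm writing to a scratch file here rather than the real ~/.ssh/config):

```shell
# Append a per-host cipher override (scratch file stands in for ~/.ssh/config)
cfg=/tmp/ssh_config_demo
cat >> "$cfg" <<'EOF'
Host procurve1.example.com procurve2.example.com
    Ciphers 3des-cbc
EOF

# Confirm the stanza is present
grep -A1 '^Host procurve' "$cfg"
```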

OpenLiteSpeed WordPress cache mysteriously not working

Monday, September 17th, 2018

The OpenLiteSpeed web server (OLS) and LiteSpeed Cache for WordPress (LSCWP) plugin provide a great way of both speeding up WordPress and handling large numbers of visitors.

OLS is an open source derivative of the LiteSpeed Web Server (LSWS), which delivers many of the key features, including the high performance LiteSpeed Cache (LSCache). Whilst it can’t read Apache configuration files like its bigger brother LSWS (and thus can’t be used with a hosting control panel like cPanel or Plesk), it’s great for working with a handful of sites configured manually.

Whilst the LSCWP plugin has a lot of useful features which can be used even without the LSCache from OLS/LSWS, the main selling point is the integration with LSCache to deliver blazing fast page load times along with massive scalability under load.

I recently ran into a bizarre issue where the cache just completely stopped working for no obvious reason, which led to hours of pulling apart WordPress and OLS to try and work out why.

Aside from pages loading very slowly, the main clue was that the X-Litespeed-Cache header was completely missing, although the X-Litespeed-Cache-Control header was present as normal. This would normally mean some kind of issue with the cache storage location (/usr/local/lsws/cachedata/ by default, unless overridden by the storagePath configuration option in the cache module settings).
I couldn’t see any issues with the cache storage location, but tried adjusting it elsewhere anyway without any luck.
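One quick way to see whether the cache is serving a page is to inspect the response headers, e.g. with curl -sI against your site. The snippet below greps a captured sample response to show what a healthy reply looks like; the header values are illustrative:

```shell
# With a working cache, a repeat request for a cacheable page should include
# "X-LiteSpeed-Cache: hit". Real check: curl -sI https://your-site.example/
# Here we grep a sample response to show what to look for:
printf 'HTTP/1.1 200 OK\nX-LiteSpeed-Cache: hit\nX-LiteSpeed-Cache-Control: public,max-age=604800\n' \
  | grep -ic '^x-litespeed-cache:'
```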

I verified that all of the settings for the cache module were configured as per those listed on the OLS wiki and eventually out of frustration deleted the whole cache module definition from the server configuration and added it back, at which point the cache started working again!

I’ve absolutely no idea why removing and re-adding the exact same configuration should make any difference whatsoever, but I have now verified identical behaviour on two different servers with completely independent configurations.

Rspamd, bayes expiry and Redis – ERR Number of keys can’t be greater than number of args

Wednesday, August 15th, 2018

I recently enabled the bayes expiry module (https://rspamd.com/doc/modules/bayes_expiry.html) in Rspamd and found myself staring at the following error in the Rspamd logs with no idea what it meant:

lua; bayes_expiry.lua:332: cannot perform expiry step: ERR Number of keys can’t be greater than number of args

The only reference that I could find to this online was a single post on the Rspamd mailing list, which had gone unanswered (https://groups.google.com/d/msg/rspamd/jFG-MTLzZw8/-8WIIY_ECAAJ).

It turns out that I had the “expire” setting defined twice in /etc/rspamd/local.d/classifier-bayes.conf. After fixing this, restarting Rspamd and waiting a couple of minutes, I started to see the expected log entries from the bayes expiry module executing its expiry processing step every minute:

lua; bayes_expiry.lua:368: finished expiry step 9995 (lazy): 1001 items checked, 163 significant (0 made persistent), 0 insignificant (0 ttls set), 0 common (0 discriminated), 838 infrequent (837 ttls set), 4 mean, 20 std
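For reference, the working classifier configuration ended up along these lines (the backend and TTL values are illustrative; the point is that expire appears exactly once):

```
# /etc/rspamd/local.d/classifier-bayes.conf
backend = "redis";
new_schema = true;   # the bayes expiry module requires the new statistics schema
expire = 8640000;    # TTL in seconds -- define this exactly once
```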

 

SQL Server 2008 R2 upgrade and INSTALLSHAREDDIR/INSTALLSHAREDWOWDIR

Monday, August 6th, 2018

I recently found myself trying to upgrade an old SQL Server 2008 R2 instance to SP3 so that it could in turn be upgraded to SQL Server 2014, but quickly ran into problems with the following error message:

The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.

At first I thought this would be pretty simple – fire up setup.exe from the command line and manually specify the INSTALLSHAREDWOWDIR and/or INSTALLSHAREDDIR options, but that would have been far too easy and unfortunately it seems that for whatever reason you can’t specify these options when the action is “patch” (update/upgrade) or “repair” – they only work for “install”, which wasn’t going to help me.

Much Googling later I had found plenty of people with similar issues, but most were struggling with initial installation (it seems that the SQL Server 2008 R2 installer was very buggy at first) rather than upgrading/updating or repairing an existing installation and so none of the fixes provided worked for me.

Eventually, after poking around the registry, I discovered that the installer is looking in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\0D1F366D0FE0E404F8C15EE4F1C15094 key for the INSTALLSHAREDDIR path as well as the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\C90BFAC020D87EA46811C836AD3C507F key for the INSTALLSHAREDWOWDIR path.

All of the values in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\C90BFAC020D87EA46811C836AD3C507F key (INSTALLSHAREDWOWDIR) were fine (“C:\Program Files (x86)\Microsoft SQL Server\”), but one of the values (91D3749D1F6219B4BBCA0498BC14CB84) in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\0D1F366D0FE0E404F8C15EE4F1C15094 key (INSTALLSHAREDDIR) was set to “C:\Program Files (x86)\Microsoft SQL Server\” instead of “C:\Program Files\Microsoft SQL Server\” and so the INSTALLSHAREDDIR path was conflicting with the INSTALLSHAREDWOWDIR path.

Updating the 91D3749D1F6219B4BBCA0498BC14CB84 value from “C:\Program Files (x86)\Microsoft SQL Server\” to “C:\Program Files\Microsoft SQL Server\” allowed the SP3 update to complete successfully, with no reboot required for the change to take effect, and I was then able to complete the upgrade to SQL Server 2014 successfully.
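If you'd rather check the value from a command prompt before changing it in regedit, reg.exe can query it (the key and value names below are the ones from my system and may well differ on yours):

```
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\0D1F366D0FE0E404F8C15EE4F1C15094" /v 91D3749D1F6219B4BBCA0498BC14CB84
```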

Decrypting an APFS volume from the Terminal

Monday, July 2nd, 2018

I’ve been playing about with a Hackintosh desktop running High Sierra, but ran into an interesting problem – the FileVault Preboot loader which asks you for the password to decrypt the APFS volume doesn’t recognise the USB keyboard by default.

Apparently there are ways to fix this by building the necessary drivers and inserting them into the Preboot volume, but as the drive in question is an m.2 NVMe disk, I didn’t have an easy way to put it into another computer which could mount APFS volumes.

I therefore decided that the quickest and simplest way to recover a working system was to temporarily decrypt the volume.
To do this, I booted the macOS installer from my UniBeast USB stick and launched the Terminal from Utilities->Terminal in the menu bar.

I found plenty of articles suggesting using “fdesetup” to manage FileVault, however this utility doesn’t seem to be included in the macOS installer, so instead I had to work out how to accomplish this with the “diskutil” utility.

As I’m using APFS, everything takes place using the commands under “diskutil apfs”, however for older HFS+ formatted disks, the same thing should still be possible using the equivalent CoreStorage commands under “diskutil cs” (although I haven’t tested this, so the steps may be a little bit different).

Now let’s take a look at the disks and volumes in this system:

diskutil apfs list

This gives you an ASCII tree view of your disks and their volumes along with various information about each of them.

Find the UUID (the 5 groups of letters and numbers separated by hyphens) for the volume that you want to decrypt – it will say “Encrypted: Yes (Locked)”.

Before we can decrypt the volume, first we need to unlock it:

diskutil apfs unlockVolume <UUID>

Enter your passphrase and the volume will be unlocked so that it can be accessed. However, this only unlocks the volume whilst the computer is running and won’t persist after a reboot.

To permanently decrypt the volume, run:

diskutil apfs decryptVolume <UUID>

This will start the decryption of the volume in the background.

You can run “diskutil apfs list” again to see the progress. Instead of the previous “Encrypted:” line, you should now see “Decryption Process: 1.0% (Unlocked)”.
Depending on the size of the volume in question, it could take quite some time to complete the decryption.

Once completed, the progress line in the output of “diskutil apfs list” will have been replaced with “Encrypted: no”. At this point it’s safe to boot back into normal macOS.

Building Bareos RPMs on Ubuntu 16.04

Friday, March 16th, 2018

Following on from my previous post on how to build Bareos (Backup Archiving Recovery Open Sourced) RPM packages on CentOS 6 & 7 (https://www.spheron1.uk/2018/03/14/building-bareos-rpms-on-centos-6-7/), the following instructions will show you how to build .deb versions of the packages on Ubuntu 16.04.

Again, these instructions are based on Bareos version 17.2.5, so would need to be adjusted appropriately for other versions and I’m working exclusively with 64-bit (amd64) versions.

Before we start, let’s make sure that everything is up to date:

apt-get update
apt-get upgrade

Before we start building anything, we’ll need to install all of the dependencies which are required in order to build the .deb packages. We’ll use the libfastlz and libfastlz-dev packages from the Bareos repositories:

apt-get install build-essential acl-dev autotools-dev bc chrpath debhelper libacl1-dev libcap-dev libjansson-dev liblzo2-dev libqt4-dev libreadline-dev libssl-dev libwrap0-dev libx11-dev libsqlite3-dev libmysqlclient-dev libpq-dev mtx ncurses-dev pkg-config po-debconf python-dev zlib1g-dev glusterfs-common librados-dev libcephfs-dev apache2-dev apache2 autoconf automake python-all python-setuptools
wget http://download.bareos.org/bareos/release/17.2/xUbuntu_16.04/amd64/libfastlz_0.1-7.2_amd64.deb
wget http://download.bareos.org/bareos/release/17.2/xUbuntu_16.04/amd64/libfastlz-dev_0.1-7.2_amd64.deb
dpkg -i libfastlz_0.1-7.2_amd64.deb libfastlz-dev_0.1-7.2_amd64.deb

Now let’s download the Bareos source code from the various repositories on GitHub and extract it ready for building:

wget https://github.com/bareos/bareos/archive/Release/17.2.5.tar.gz -qO - | tar zx
wget https://github.com/bareos/bareos-webui/archive/Release/17.2.5.tar.gz -qO - | tar zx
wget https://github.com/bareos/python-bareos/archive/Release/17.2.5.tar.gz -qO - | tar zx

Before starting the build, we need to create a changelog file which contains information used by the build process. Use your favourite text editor to put the below into ~/bareos-Release-17.2.5/debian/changelog:

bareos (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release; https://www.bareos.org/en/news/bareos-17-2-5-maintenance-version-released.html

 -- Your Name <your@email.address>  Thu, 16 Mar 2018 10:58:00 +0000

Once that’s done, you can start the build process:

cd ~/bareos-Release-17.2.5/
fakeroot debian/rules binary

Now we just need to repeat this process for the bareos-webui package. Use your favourite text editor to create the ~/bareos-webui-Release-17.2.5/debian/changelog file containing the below:

bareos-webui (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release; https://www.bareos.org/en/news/bareos-17-2-5-maintenance-version-released.html

 -- Your Name <your@email.address>  Thu, 16 Mar 2018 10:58:00 +0000

Unlike the main bareos repository, the debian/rules file isn’t executable by default in the code from the bareos-webui repository, so we need to set that before we can start the build process:

cd ~/bareos-webui-Release-17.2.5/
chmod +x debian/rules
fakeroot debian/rules binary

Finally we need to build the python-bareos package. Use your favourite text editor to create the ~/python-bareos-Release-17.2.5/debian/changelog file containing the below:

python-bareos (17.2.5-0) stable; urgency=low

  * Bareos 17.2.5 release; https://www.bareos.org/en/news/bareos-17-2-5-maintenance-version-released.html

 -- Your Name <your@email.address>  Thu, 16 Mar 2018 10:58:00 +0000

Then it’s just the usual commands to start the build process:

cd ~/python-bareos-Release-17.2.5/
fakeroot debian/rules binary

You should now have all of the .deb package files in your home directory which you can install locally or host in your own APT repository.

Building Bareos RPMs on CentOS 6 & 7

Wednesday, March 14th, 2018

Bareos (Backup Archiving Recovery Open Sourced) is a popular open source backup system originally forked from the Bacula project, but they only publicly publish packages for the first release of each major version; updates are reserved for paying customers. The source code is available on GitHub however, so you can pretty easily build your own packages, even if exactly how to do it doesn’t seem to be documented.

These instructions are based on Bareos version 17.2.5, so would need to be adjusted appropriately for other versions. I’m also working exclusively with 64-bit (x86_64) versions.

Before we start, let’s make sure that everything is up to date:

yum -y update

If you don’t already have the EPEL repository installed, then install it as we’ll need it for the jansson-devel and libcmocka-devel build dependencies:

yum -y install epel-release

Now install everything needed to build the RPMs. We’ll use the libdroplet, libdroplet-devel, libfastlz and libfastlz-devel packages from the Bareos repositories.

On CentOS 6:

yum -y install rpm-build wget autoconf automake httpd httpd-devel glusterfs-devel glusterfs-api-devel git-core gcc gcc-c++ glibc-devel ncurses-devel readline-devel libstdc++-devel zlib-devel openssl-devel libacl-devel lzo-devel sqlite-devel mysql-devel postgresql-devel libcap-devel mtx qt-devel libcmocka-devel python-devel python-setuptools libtermcap-devel tcp_wrappers redhat-lsb jansson-devel tcp_wrappers-devel http://download.bareos.org/bareos/release/17.2/CentOS_6/x86_64/libdroplet-3.0.git.1510141874.bc2a9a0-41.1.el6.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_6/x86_64/libdroplet-devel-3.0.git.1510141874.bc2a9a0-41.1.el6.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_6/x86_64/libfastlz-0.1-7.3.el6.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_6/x86_64/libfastlz-devel-0.1-7.3.el6.x86_64.rpm

On CentOS 7:

yum -y install rpm-build wget autoconf automake httpd httpd-devel glusterfs-devel glusterfs-api-devel git-core gcc gcc-c++ glibc-devel ncurses-devel readline-devel libstdc++-devel zlib-devel openssl-devel libacl-devel lzo-devel sqlite-devel mysql-devel postgresql-devel libcap-devel mtx qt-devel libcmocka-devel python-devel python-setuptools libtermcap-devel tcp_wrappers redhat-lsb jansson-devel tcp_wrappers-devel http://download.bareos.org/bareos/release/17.2/CentOS_7/x86_64/libdroplet-3.0.git.1510141874.bc2a9a0-41.1.el7.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_7/x86_64/libdroplet-devel-3.0.git.1510141874.bc2a9a0-41.1.el7.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_7/x86_64/libfastlz-0.1-7.3.el7.x86_64.rpm http://download.bareos.org/bareos/release/17.2/CentOS_7/x86_64/libfastlz-devel-0.1-7.3.el7.x86_64.rpm

It’s a good idea to run the build under an unprivileged user. I’ve set up a dedicated user called “build” for this, but any normal user account will do.
Let’s set up the build environment and download the Bareos source code from the various repositories on GitHub:

useradd build
su - build
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
wget https://github.com/bareos/bareos/archive/Release/17.2.5.tar.gz -O bareos-17.2.5.tar.gz
tar xf bareos-17.2.5.tar.gz
mv bareos-Release-17.2.5/ bareos-17.2.5
tar zcf bareos-17.2.5.tar.gz bareos-17.2.5
mv bareos-17.2.5/platforms/packaging/bareos.spec ~/rpmbuild/SPECS/
rm -rf bareos-17.2.5
mv bareos-17.2.5.tar.gz ~/rpmbuild/SOURCES/
wget https://github.com/bareos/bareos-webui/archive/Release/17.2.5.tar.gz -O bareos-webui-17.2.5.tar.gz
tar xf bareos-webui-17.2.5.tar.gz
mv bareos-webui-Release-17.2.5/ bareos-webui-17.2.5
tar zcf bareos-webui-17.2.5.tar.gz bareos-webui-17.2.5
mv bareos-webui-17.2.5/packaging/obs/bareos-webui.spec ~/rpmbuild/SPECS/
rm -rf bareos-webui-17.2.5
mv bareos-webui-17.2.5.tar.gz ~/rpmbuild/SOURCES/
wget https://github.com/bareos/python-bareos/archive/Release/17.2.5.tar.gz -O python-bareos-17.2.5.tar.gz
tar xf python-bareos-17.2.5.tar.gz
mv python-bareos-Release-17.2.5/ python-bareos-17.2.5
tar zcf python-bareos-17.2.5.tar.gz python-bareos-17.2.5
mv python-bareos-17.2.5/packaging/python-bareos.spec ~/rpmbuild/SPECS/
rm -rf python-bareos-17.2.5
mv python-bareos-17.2.5.tar.gz ~/rpmbuild/SOURCES/

Edit the ~/rpmbuild/SPECS/bareos.spec file in your favourite text editor and set “Version” (line 8) to “17.2.5” as well as “Release” (line 9) to “0%{?dist}”.
You also need to search for “BuildRequires: libqt4-devel” (line 186) and replace it with “BuildRequires: qt-devel”.

By default, GlusterFS and Droplet support isn’t built on CentOS 6 for some reason, so if you want them then you need to edit “%define glusterfs 0” and “%define objectstorage 0” (lines 45 and 46) and set them to 1.
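If you're scripting the build, those two edits can be made with sed; here's a hypothetical sketch run against a throwaway copy of the relevant lines to show the effect:

```shell
# Flip the two %define build switches in a copy of the spec file
spec=/tmp/bareos-demo.spec
printf '%%define glusterfs 0\n%%define objectstorage 0\n' > "$spec"
sed -i -e 's/^%define glusterfs 0$/%define glusterfs 1/' \
       -e 's/^%define objectstorage 0$/%define objectstorage 1/' "$spec"
cat "$spec"
```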

Now you’re ready to run the build itself. On CentOS 6:

rpmbuild -ba ~/rpmbuild/SPECS/bareos.spec --define "centos_version 600"

And on CentOS 7:

rpmbuild -ba ~/rpmbuild/SPECS/bareos.spec --define "centos_version 700"

Once this finishes, you should find a collection of several Bareos RPMs in ~/rpmbuild/RPMS/x86_64/. We need the bareos-common package installed to build the Web UI, so become root and install it.

On CentOS 6:

yum -y install /home/build/rpmbuild/RPMS/x86_64/bareos-common-17.2.5-0.el6.x86_64.rpm

On CentOS 7:

yum -y install /home/build/rpmbuild/RPMS/x86_64/bareos-common-17.2.5-0.el7.centos.x86_64.rpm

Next, edit the ~/rpmbuild/SPECS/bareos-webui.spec file in your favourite text editor and set “Version” (line 4) to “17.2.5”.

Now you’re ready to build the Web UI package:

rpmbuild -ba ~/rpmbuild/SPECS/bareos-webui.spec

Finally, edit ~/rpmbuild/SPECS/python-bareos.spec in your favourite text editor and set “Version” (line 21) to “17.2.5” as well as “Release” (line 22) to “0%{?dist}” and build the final package:

rpmbuild -ba ~/rpmbuild/SPECS/python-bareos.spec

You should now have the full complement of RPMs in ~/rpmbuild/RPMS/x86_64/ and ~/rpmbuild/RPMS/noarch/.

If you need to rebuild the RPMs for the same version of Bareos for any reason, then you should increment the value of Release in the relevant .spec file by 1 each time (e.g. “1%{?dist}”, “2%{?dist}” etc.).
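For example, a Release bump could be scripted like this (shown against a throwaway spec fragment rather than the real file):

```shell
# Increment Release from 0%{?dist} to 1%{?dist} in a sample spec fragment
spec=/tmp/release-demo.spec
printf 'Version: 17.2.5\nRelease: 0%%{?dist}\n' > "$spec"
sed -i 's/^Release: 0%/Release: 1%/' "$spec"
grep '^Release:' "$spec"
```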

You can now GPG sign your RPMs if you want and then add them to your own central yum repository with createrepo or just directly install them locally with rpm.

WHMCS and Nominet EPP farce (again!)

Saturday, September 10th, 2016

On the 14th of January 2016 Nominet notified EPP users that they would be upgrading their EPP system to require TLS version 1.1 or higher connections on the 8th of June 2016 in order to keep the EPP system secure.
From the 2nd of February 2016 the Nominet EPP testbed was updated to reflect these new requirements so that registrars and software developers could test that their systems would work once the change was made to the production EPP platform.

In the past, WHMCS have ignored notices from Nominet and waited until after such changes were put live before making the appropriate changes to their software, so I raised a ticket with WHMCS on the 19th of February 2016 in order to check that WHMCS were aware of the planned change and would be making the appropriate changes.
As usual, WHMCS completely dismissed this ticket and point blank refused to investigate any potential impact which could affect their customers.

In this case, the original date of the 8th of June was pushed back to the 22nd of June before being rescheduled to the 16th of August in order to give registrars more time to update and test their systems against the EPP testbed.
However, despite having over 7 months notice from Nominet and 6 months notice from at least one customer, WHMCS made no effort to test their EPP implementation against the Nominet testbed until after Nominet had made the changes and WHMCS customers complained of problems.
The WHMCS module was updated 4 days *AFTER* the change, which fell on a weekend and so some customers may not have had the technical resource immediately on-hand to test and deploy the update.

To top it all off, both publicly and in tickets WHMCS arrogantly tried to blame Nominet for their usual dismal failings.
I tried to comment on the WHMCS blog (http://blog.whmcs.com/?t=117270) and initially this comment was published, but despite WHMCS replying to it the comment was moderated and my follow up response was deleted (as was another reply asking why they deleted the previous reply!). Three weeks later, the comment is still moderated…

It speaks volumes about the way WHMCS is run that they not only created the initial problem by arrogantly refusing to make the effort to test their module, but then tried to lay the blame on Nominet as well as actively suppressing any negative comments criticising them for the way that they treat their customers with such contempt.

CSF bugs and updates

Saturday, June 25th, 2016

ConfigServer Security and Firewall (CSF) is a great program for managing iptables/netfilter firewall rules on Linux servers and performing automated blocks based on various things such as brute force login attempts (check it out at http://www.configserver.com/cp/csf.html) and I really shouldn’t complain given that it’s free, but sometimes I really do wonder if ConfigServer/Way to the Web actually do any testing at all before releasing new versions!

7 issues fixed in 6 bugfix releases (9.01 to 9.06) in 2 days! It’s a good job that the automatic update feature works properly…

ValueError in Django migration

Sunday, May 15th, 2016

I’ve recently started developing in Python+Django again for a personal site that I’m working on, and for far too long today I’ve been pulling my hair out trying to write what should be a fairly simple migration using migrations.RunPython() to generate a default value to assign to a new OneToOneField column.

The rather confusing error message that I was receiving is:

ValueError: Cannot assign "<User: blah>": "Profile.account" must be a "User" instance.

This seems to be the same problem described at http://stackoverflow.com/questions/29700001/valueerror-cannot-assign-user-issue-on-my-onetoonefield-relationship, but nothing mentioned in that post worked.

I was hopeful when I came across https://code.djangoproject.com/ticket/24282 that this was a known issue, but alas whilst similar it’s not exactly the same problem and that bug was closed over a year ago. Back to square one!

What was particularly confusing was that the same code seemed to work as expected when run in the Django shell:

from foo.models import Profile, Article
from django.contrib.auth.models import User
first_superuser = User.objects.all().filter(is_superuser=True).first()
default_profile = Profile(account=first_superuser, biography="")
default_profile.save()

After a lot of fiddling, I eventually realised that the way I was importing the User model from the built in Django auth system was wrong for a migration and so I replaced:

from django.contrib.auth.models import User

with:

User = apps.get_model('auth', 'user')

And now the migration finally works as expected. 🙂