Citrix XenServer XS62E015 update failing to apply

June 16th, 2014

If you’re using XenServer 6.2, you may have some problems installing the XS62E015 update from CTX140808.

You go through the normal update procedure – download from the CTX140808 Citrix knowledge base article, extract it and upload the XS62E015.xsupdate file to the pool, then apply the patch (UUID c8b9d332-30e4-4e5e-9a2a-8aaae6dee91a) to the pool, which promptly fails with:

The uploaded patch file is invalid. See attached log for more details.
log: error parsing patch precheck xml: expected one of these character sequence: “required”, found “error” line 4

It turns out that Citrix issued two updates for the same issue – one for Citrix XenServer 6.2 without SP1 installed and one for Citrix XenServer with SP1 installed. The slight snag is that both of these updates show up in the list of available updates in Citrix XenCenter and the knowledge base articles don’t mention that the two updates patch the same problem, but in two different versions of Citrix XenServer.

If you are running Citrix XenServer 6.2 with SP1 installed, then you need to install the XS62ESP1003 update from CTX140416 (UUID c208dc56-36c2-4e91-b8d7-0246575b1828). Once XS62ESP1003 has been installed, XS62E015 will also show up in the list of installed updates.
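If you’re not sure which variant your pool needs, you can check which updates are already applied from the CLI. The helper below is just a rough sketch (not anything from Citrix’s docs): you pipe the output of `xe patch-list params=name-label` into it, and it works on the assumption that any patch name containing “XS62ESP1” means Service Pack 1 (or one of its hotfixes) is present.

```shell
#!/bin/sh
# Sketch: pipe `xe patch-list params=name-label` into this to see which
# variant of the hotfix the pool needs. The XS62ESP1 match is an assumption:
# any SP1-era patch name implies SP1 itself is installed.
pick_update() {
  if grep -q 'XS62ESP1'; then
    echo 'XS62ESP1003.xsupdate'   # SP1 pools need the SP1 variant (CTX140416)
  else
    echo 'XS62E015.xsupdate'      # pre-SP1 pools take the original hotfix (CTX140808)
  fi
}
```

Usage would be something like `xe patch-list params=name-label | pick_update` on the pool master.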

Misleading companies – part 3

June 8th, 2014

The final part of my rant (see part 1 featuring ConnetU and part 2 featuring IX Reach as well as euNetworks) is all about C4L (A.K.A. Connexions4London), a company who are trying to take the practice of claiming to have PoPs which don’t really exist to a whole new level!

I should start this with a disclaimer. I have been a C4L customer, I have used their network services, I have experienced their technical support and their customer service. Equally, I have had both customers and friends who have used C4L’s services. Between these various experiences over a significant amount of time, I can safely say that I will never be a customer of C4L or knowingly use their services ever again.

Every time I have had anything to do with them, they have managed to reinforce my impression that they are an awful company with an old, unreliable, congested network, unhelpful staff, slow support and non-existent customer service. Suffice to say, I may be considered somewhat biased. Anyway…

On the front page of their web site, C4L state:

C4L is a Colocation, Connectivity, Cloud and Communications provider headquartered on the South Coast, providing access to over 100 UK data centres and more than 300 globally.

Now, this is not the only misleading thing on their web site, or even on the home page, but as the other items aren’t specifically relevant to this rant, I’ve decided to leave them out (for now).

These impressive sounding numbers of over 100 UK and 300 global data centres appear repeatedly on the C4L web site as well as in their email newsletters, press releases and other marketing literature.

The thing is that C4L don’t have anywhere near that many Points of Presence (PoPs). If you take a look at their network map, then you can very quickly see that the actual number is much smaller:

  • Telecity Reynolds House
  • Telecity Williams House
  • Telecity Kilburn House
  • “Manchester 1” (probably the M247 Ball Green data centre)
  • “Birmingham” (god knows)
  • “Derby” (Node4 Derby DC1 and DC2 – basically the same building)
  • “Enfield” (Virtus London1 Enfield)
  • “Milton Keynes” (Pulsant Milton Keynes, formerly BlueSquare MK)
  • “Slough” (Virtustream)
  • Telecity Sovereign House
  • Level(3) Goswell Road
  • Telehouse Metro
  • City Lifeline
  • InterXion
  • “Brick Lane” (probably Easynet)
  • Global Switch 2
  • Telehouse North
  • Telehouse East
  • Global Switch 1
  • Telecity Meridian Gate
  • Telecity Harbour Exchange 6&7
  • Telecity Harbour Exchange 8&9
  • Telecity Bonnington House
  • “City Reach” (Tutis Point City Reach, formerly owned by QiComm, now Docklands Data Centre Ltd – DDCL)
  • “Greenwich” (the former BIS Anchorage Point data centre, now owned by 6dg)
  • “Byfleet” (4D Data Centres Sirius II)
  • “Park Royal” (probably Telecity Powergate)
  • “Maidenhead” (Pulsant Maidenhead 1-3, formerly BlueSquare House and BlueSquare 2-3)
  • “Bournemouth” (C4L’s own data centre with their offices at County Gates)
  • Evoswitch Amsterdam
  • “Isle of Man” (Either Netcetera or Wi-Manx)

Now, that is still a fairly impressive PoP list, but at 31 data centres it is a lot less than the 100 that they are claiming in the UK, let alone the 300 that they are claiming globally – particularly when you consider that there’s only a single international data centre on there, and that is Evoswitch in Amsterdam!

This all comes back to my earlier question – what is the point of claiming more PoPs than you actually have? What do you stand to gain from it?

If you do somehow manage to deceive a customer into taking services from you at a datacentre where you don’t already have a PoP, then the likelihood is that you won’t be providing any kind of useful service – you’ll just be buying a single rack from the data centre, adding a markup onto it and passing it on to the customer. Now the customer has to go through you for access requests and remote hands, slowing them down and adding an unnecessary layer of complication.

Likewise with connectivity, you probably aren’t going to be able to build out a full scale PoP for a single customer, so you’re just buying a circuit from someone who does actually have a PoP in that data centre and then backhauling it to somewhere on your existing network. This adds a potential point of failure as well as complicating any troubleshooting as now three (or more!) parties are involved in any investigations.

In both instances, the customer could have got the same (or better) service from going direct and it would have cost them less. So all you have done is made it more expensive and less reliable for your customer!

There is no problem with reselling services, as long as you are adding some value to them. In these cases however, it seems that the only value being added is to the reseller’s bank account!

Misleading companies – part 2

June 8th, 2014

In part 1, I wrote about my experiences looking for IP transit services in Telehouse West and an annoying encounter with ConnetU over a phantom PoP.

Part 2 of this rant covers a second requirement from the same project – this time for a pair of 1Gbps layer 1 (wavelength) or layer 2 (MPLS pseudo-wire or VPLS) circuits between two data centres – London Data eXchange’s LDeX1 in North-West London and Telehouse West in the London docklands.

This shouldn’t be such a hard task as both data centres are carrier neutral (with the Telehouse docklands campus being one of the most highly connected places in the UK, if not the EU!) and so have pretty good carrier coverage.

With LDeX1 being the smaller, newer site, I decided to start with their carrier list, as there was a good chance that any carriers on this list would also be present at the Telehouse docklands campus.

I contacted several of the carriers on this list, however this rant is specifically about euNetworks and IX Reach as both of these companies replied that they aren’t actually present in LDeX1.

IX Reach

IX Reach specifically list LDeX1 on their own PoP list, as well as having put out a press release in August 2012 saying:

IX Reach, a layer 2 Ethernet carrier, is pleased to have become a partner of LDeX and added LDeX1 – a state of the art, network independent 22,000sq ft London data centre – to its PoP (Point of Presence) list and is able to offer their full range of services; capacity from 100Mbps to multiple 10Gbps over Point-to-Point/Multipoint connection, full colocation options and also a Direct Connect into the Amazon Web Services (AWS) Cloud platform.

Now, I don’t know about you, but to me, that says that the PoP is open and services are available to order, so I was somewhat surprised when I got a reply back from IX Reach saying:

We are at Telehouse West but would need at least a 10Gbps/10GE requirement to consider to PoP London LDeX1 which is also quite a distance away.

I sent them a link to the LDeX carrier list, as well as IX Reach’s own PoP list and press release, but all they came back with was:

I’m afraid our PoP list and the LDeX1 carrier list has not been updated, we are currently not in LDeX1.

Eh? Why would the IX Reach PoP list and the LDeX carrier list need to be “updated”? I pressed the point and asked for clarification. Had the LDeX1 PoP been closed for some reason?

As far as I’m aware, IX Reach were intending to PoP LDeX1, for some technical reason this did not happen.

All very vague and noncommittal. Suffice to say, IX Reach now join ConnetU on my list of companies who can’t be trusted to tell the truth!


euNetworks

euNetworks particularly annoyed me, as they strung me along and took quite a bit of time before eventually deciding that they weren’t going to be able to provide any services.

euNetworks had previously announced back in March 2013 that they were establishing a Point of Presence in LDeX1, and I had spoken to them at the time about the specific fibre route that they were using, as I was interested in the enhancements to redundancy and diversity that this might enable.

At the time, euNetworks had told me that they weren’t using the existing Geo (now Zayo) or Virgin Media fibre connections, but were instead digging their own fibre in, so I got in touch with them again on the 22nd of May 2014 to ask if this was live yet (as it had now been over a year since it was originally announced) and they eventually got back to me on the 27th of May 2014 to say:

Unfortunately euNetworks aren’t digging in to LDex, the dig is a large one which is financially big and the customer have taken a decision to not pop the building.

We will be able to provide a quote like from our 3rd party team.

This was somewhat disappointing, but at least it looked like they would be able to help in some way. I mean, they must have some kind of presence in LDeX1, otherwise they wouldn’t have announced that they were providing Amazon Web Services (AWS) Direct Connect services in LDeX1 only a couple of months ago, in April 2014, surely?

I followed this up on the 29th of May 2014, only to be told that they were still waiting for their partners. I then chased it again on the 5th of June 2014, only to be told that they weren’t going to be able to provide any kind of quote at all:

Unfortunately, we are not going able to provide this service. We are finding it hard to pin down a supplier for managed services from LDeX and consequently the pricing we are receiving is just not competitive.

Sigh! Two weeks of waiting, only to be told that they’re not going to be able to help at all. So much for an LDeX1 PoP!

So, there we go, two companies who can’t seem to decide whether they have opened a PoP or not, but apparently that doesn’t stop them putting out press releases about it and getting themselves featured on carrier lists…

Now, on to the third and final part of my rant – C4L!

Misleading companies – part 1

June 8th, 2014

Something that I’ve never understood is why some companies feel the need to lie on their websites. I’m not talking about ambiguous marketing twaddle about being “a leading provider” etc. but flat out deception. This seems to be particularly prevalent in the hosting/network/data centre industry for some reason.

This specific rant was triggered by a recent project which required an IP transit connection in Telehouse West.

This shouldn’t be such a hard task – Telehouse West is carrier neutral and the Telehouse docklands campus is one of the most highly connected places in the UK, if not the EU!

I did a bit of Googling for companies claiming to have network PoPs in Telehouse West. One of the companies which came up was ConnetU, who I’d heard of, but never actually had any dealings with before now.

On the front page of their web site, ConnetU state:

With over 15 London data centres on our network, we cater for a full complement of project requirements including location, cost and performance, from being in the heart of low-latency Internet to high-density computation.

and if you follow that link, then you get to a page talking about their London network that contains quotes such as:

Enterprise metro Ethernet spanning 15 London data centres

Now, if you click on the network map, then you actually find 19 data centres (although I didn’t bother to count them at the time):

Telecity Harbour Exchange (HEX) 6&7
Telecity Harbour Exchange (HEX) 8&9
Telecity Meridian Gate
Telecity Sovereign House
Telstra London Hosting Centre (LHC)
Tutis Point City Reach (formerly QiComm, now Docklands Data Centre Ltd – DDCL)
Global Switch 1
Global Switch 2
Telehouse North
Telehouse East
Telehouse West
Telehouse Metro
Level(3) Goswell Road
City Lifeline
InterXion London
Croydon (I’m guessing Pulsant’s Croydon data centre)
Greenwich South London (I’m guessing the former BIS Anchorage Point data centre, now owned by 6dg)
London Bridge (I’m guessing the former Safehosts data centre, now GSDV)
West Byfleet (I’m guessing 4D Data Centres Sirius II)

Now, Telehouse West is clearly listed on there as a network PoP with 10Gbps fibre/wavelength connections to Telehouse North and Telehouse East. If you click on the IP transit link, Telehouse West is listed again under “Available PoPs” with a link to its very own page detailing a few key facts about the data centre as well as a list of which services are available there (“IP transit” and “Interconnects”).

So, from all this, it’s fair to say that ConnetU are clearly advertising that they have a Point of Presence (PoP) at Telehouse West (THW) and are able to provide IP transit services there. So, I filled out the “Request Quote” form on the IP transit page, which again features Telehouse West on the drop-down “Delivery in” list.

Now, imagine my surprise when I received an email from ConnetU saying:

we’re still looking for a decent reason to PoP West as there hasn’t been much demand in there to date.

This seemed somewhat odd based on the description on their web site, so I queried this and they replied with:

We’ve been waiting for a good excuse to break-out West – it’s a fibre run away, which can be done fairly quickly upon order. However, enquiries to date have been so small they’re just not worthwhile.

So basically, they don’t have a PoP in Telehouse West, but they’re listing it on their site anyway and if they get a big enough order for it to be worth their while, they will have Telehouse run fibre from their existing PoPs in order to service it…

Not only is this completely misleading, but as someone looking for services in a particular data centre it’s also utterly frustrating as it makes my job so much harder and just wastes my time.

I don’t understand how companies think that they will benefit from this sort of behaviour. Surely it’s pretty obvious that all you are going to end up doing is annoying prospective customers: you won’t be able to provide the services because the requirement is too small for it to be worth your while building a PoP, or the price will be too high because you’ll pass all the costs of building the PoP on to the client, or the lead time will be too long because you need to order space, connectivity, equipment etc. and get it all set up.

I would certainly think twice before considering taking services from ConnetU at any of their other locations due to this experience. Why waste my time getting in touch with them again? Who knows how many of their other PoPs don’t really exist? Plus, do I want to do business with a company which behaves in such a misleading manner publicly?

Owing to the size of this rant, I have split it up into three parts so that it is easier to read. Part 2 focuses on IX Reach and euNetworks whilst part 3 is about C4L.

Twitter integration not working in WHMCS

May 14th, 2014

If you’re having problems with WHMCS’s Twitter integration feature on a CentOS/RHEL server, then chances are that you need to install the php-xml RPM.

The Twitter integration in WHMCS uses jQuery to make a POST request to announcements.php detailing the number of tweets to retrieve. This then in turn connects to the Twitter API, parses the response and returns it to the browser as HTML.

Something in this script requires one of the extensions provided by the php-xml RPM (dom, wddx, xmlreader, xmlwriter and xsl). If the module isn’t present, then the script fails silently and returns no tweets. You still get some HTML though – the Twitter icon, a single hyphen and the “Follow us on Twitter” button.
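A quick way to see which of those extensions are actually loaded is to look at `php -m`. The little helper below is only a sketch of that check – you pipe `php -m` output into it and ask about one of the extension names listed above:

```shell
#!/bin/sh
# Sketch: reads `php -m` output on stdin and succeeds if the named
# extension is loaded, e.g. `php -m | has_ext dom`.
has_ext() {
  grep -qix "$1"   # exact, case-insensitive match against one module name per line
}
```

If the extensions turn out to be missing, installing the php-xml RPM (`yum install php-xml`) and restarting Apache should sort it out.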

eBay and PayPal DNS hijacked by Syrian Electronic Army

February 1st, 2014

Earlier today, the nameservers on eBay and PayPal’s UK domains were changed in an apparent hijack.

It seems that the Syrian Electronic Army are now claiming responsibility for this on Twitter. They have posted screenshots of the eBay/PayPal MarkMonitor account where they were able to manage the domains in question, as well as seemingly having had access to the email account of Paul Whitted, Senior Manager at eBay’s Site Engineering Centre, judging by another screenshot.

Several hours before this broke in the news, I tried to get in touch with PayPal UK’s security team to report this to them, however after being passed between several people I was eventually told that the problems I was experiencing were because “PayPal doesn’t support Apple devices as they are less secure”. Thanks guys, really helpful, top notch work there!

I also emailed the eBay network team and their domain registrar, MarkMonitor, neither of whom bothered to get back to me.

For posterity, I’ve attached screenshots of both listings in Nominet’s whois records at the time of the attack.

Tags used by OWASP CRS ModSecurity rules

January 18th, 2014

I couldn’t find a definitive list of the tags used by the OWASP CRS ModSecurity rules, so after a bit of faffing around, here’s what I’ve come up with for the “base” rules in OWASP CRS version 2.2.9 (current at the time of writing).

I’ve tried to group them together as best I can:

Web Attack:

Protocol Violation:

OWASP_CRS/OWASP_CRS/MALICIOUS_IFRAME (not sure why this one has “OWASP_CRS” in it twice)

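If you want to pull the tags out of the rule files yourself, something like the function below does the job. It’s a sketch of the sort of one-liner I mean, and it assumes the `tag:'…'` action syntax used by the CRS 2.2.9 base rules:

```shell
#!/bin/sh
# Sketch: print the unique tags used in a set of ModSecurity rule files,
# e.g. `extract_tags modsecurity_crs_*.conf`. Assumes tags are written
# with the tag:'...' action syntax used in CRS 2.2.9.
extract_tags() {
  grep -hoE "tag:'[^']+'" "$@" \
    | sed -e "s/^tag:'//" -e "s/'\$//" \
    | sort -u
}
```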
XenConvert fails to import OVF to XenServer

January 11th, 2014

I recently ran into a problem when trying to P2V a server with the Citrix XenConvert tool.

XenConvert would successfully create the local VHD file for the disk image as well as the OVF file for the server and then copy them to XenServer, however at the very last hurdle (actually importing the OVF into XenCenter), it would then fail with a rather unhelpful “Failed to import the OVF Package” message.

The relevant part of the XenConvert.txt log would look something like this:

Physical to OVF Package stopped at <date/time>
Physical to OVF Package lasted <duration> seconds
Source is <path>.
Destination is <XenServer IP>.
OVF to XenServer started at <date/time>
Importing OVF Package…
Failed to import.
Failed to import the OVF Package.
OVF to XenServer stopped at <date/time>
OVF to XenServer lasted <duration> seconds
Physical to XenServer stopped at <date/time>
Physical to XenServer lasted <duration> seconds

It turns out that the OVF package has in actual fact been successfully imported into XenServer; it’s just that, for whatever reason, XenConvert fails to remove the “hidden” flag from the resulting Virtual Machine.

You can see the hidden Virtual Machine in XenCenter by selecting “Hidden Objects” from the “View” menu.

In order to make the virtual machine permanently visible, you need to switch to the console and find the UUID of the virtual machine using:

xe vm-list

Once you have the UUID, you can remove the “hidden” flag from the virtual machine with the following command:

xe vm-param-remove uuid=<UUID> param-name=other-config param-key=HideFromXenCenter

Obviously you need to replace “<UUID>” with the UUID that you found in the output of the vm-list command.
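If you have several virtual machines and don’t fancy eyeballing the vm-list output, a helper along these lines can dig the UUID out for you. It’s only a sketch and assumes the standard `xe vm-list` output format of a `uuid ( RO)` line followed by a `name-label` line:

```shell
#!/bin/sh
# Sketch: print the UUID for a given VM name-label, reading `xe vm-list`
# output on stdin, e.g. `xe vm-list | uuid_for_vm myvm`.
uuid_for_vm() {
  awk -v name="$1" '
    /^uuid/                         { uuid = $NF }        # remember the most recent uuid line
    /name-label/ && index($0, name) { print uuid; exit }  # first matching VM wins
  '
}
```

You can then feed the result straight into the vm-param-remove command above.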

You should now have a normal, fully functioning Virtual Machine which you can see without “Hidden Objects” enabled in XenCenter.

OpenVZ reboot loop

January 2nd, 2014

I recently encountered a strange reboot loop with a server running OpenVZ.

When booting the server, the host would come up and start the process of booting the containers, then at some point during the boot process it would reboot without any errors.

I managed to break the reboot loop by quickly SSHing in to the host node whilst it was starting the containers and disabling the “vz” service so that it wouldn’t be started at boot time:

chkconfig vz off

On the next reboot, this broke the reboot loop and allowed me to get in via SSH and look around the system properly.

The only hint that I had to go on was that whilst most of the containers brought up during the boot sequence said “starting container”, the container immediately prior to the unexpected reboot said “restoring” instead.

This prompted me to look in the “/vz/dump/” folder, where I found a file called “Dump.<CTID>” (where <CTID> was the ID of the container which was being restored instead of started).

Simply removing this dump file allowed the “vz” service to be started without causing a reboot and everything started working as normal.
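Rather than deleting the dump outright, it’s arguably safer to move it aside first in case you want to inspect it later. A minimal sketch (the `/vz/dump` default matches the standard OpenVZ layout, but the directory is a parameter so you can point it anywhere):

```shell
#!/bin/sh
# Sketch: move any stale restore dumps out of the way so the vz service
# can start cleanly, keeping .bak copies for later inspection.
stash_dumps() {
  dir=${1:-/vz/dump}
  for f in "$dir"/Dump.*; do
    [ -e "$f" ] || continue   # glob matched nothing
    mv "$f" "$f.bak"
  done
}
```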

The only thing left to do at this point is to remember to enable the “vz” service again so that it will start during the next system boot:

chkconfig vz on

cPanel breaking Dovecot in 11.40

January 2nd, 2014

Recently I’ve had a couple of cases where cPanel randomly breaks Dovecot with one of the cPanel 11.40.x updates.

In one of these cases, cPanel actually uninstalled the Dovecot RPM as part of the automated, overnight upcp process! In the other case, Dovecot was still running and accepting connections, but POP3/IMAP clients were getting messages that their passwords were wrong.

Reinstalling Dovecot if upcp has decided to remove it for some reason is quite simple – just use the cPanel script to check and repair their RPMs:

/scripts/check_cpanel_rpms --fix

Whilst the Dovecot RPM is now installed, chances are that Dovecot is still left in a broken state with any login attempt failing and messages like this in /var/log/maillog:

dovecot: auth: Fatal: execv(/usr/local/cpanel/bin/dovecot-wrap) failed: Permission denied

If you look at the ownership and permissions on /usr/local/cpanel/bin/dovecot-wrap, you’ll find that it’s root:root instead of root:dovecot and so you need to run the following in order to fix the ownership:

chgrp dovecot /usr/local/cpanel/bin/dovecot-wrap

At this point, you won’t be seeing any of the permission errors in the maillog, but you’ll still be seeing failed authentication attempts. Now you want to trick cPanel into thinking that the RPM has been removed so that it will try and re-install it. This should mean that the scripts from the RPM are executed without replacing any of the files:

rpm -e --nodeps --justdb dovecot
/scripts/check_cpanel_rpms --fix

If you are still having problems at this point, then try running the following to set the setuid flag for the owner on the script:

chmod u+s /usr/local/cpanel/bin/dovecot-wrap

Then you just need to re-run the above RPM trick and Dovecot should spring back into life with successful authentication attempts being logged into the maillog.
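For convenience, the ownership and setuid fixes can be rolled into one small function. This is just a sketch of the steps above – the path and the “dovecot” group are the stock cPanel ones, parameterised here so it’s easy to try out safely:

```shell
#!/bin/sh
# Sketch: restore the group ownership and setuid bit on the dovecot-wrap
# helper, per the steps above. Defaults match a stock cPanel install.
fix_wrap_perms() {
  wrap=${1:-/usr/local/cpanel/bin/dovecot-wrap}
  grp=${2:-dovecot}
  chgrp "$grp" "$wrap"
  chmod u+s "$wrap"
}
```

After running it, you would still repeat the RPM re-install trick above so that the package scripts get re-executed.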

According to cPanel support, this is a “known issue” which has somehow made its way through the EDGE, CURRENT and RELEASE tiers into the STABLE tier…