You don’t need ransomware to make me WannaCry about Windows.

Windows Servers.  What a load of old tosh.  The past three weeks or so have seen me tinkering unnecessarily with the blasted things because Microsoft has written an operating system so super sensitive to hardware changes – principally because of licensing – that merely upgrading the underlying virtualisation software triggers the operating system into thinking it has a new network card.  You can imagine the chaos something like that can cause!

It’s not just that which makes me despise Windows Server.  For similar reasons, if a dedicated server chassis dies and needs to be swapped out – you’d better have a spare, because any hardware changes will cause Windows to freak out.  Linux has no problem with such things, provided you’re using a modern distribution and reasonably up-to-date hardware.  Generally speaking, with maybe a very few exceptions, Linux Just Works(tm).

Don’t get me started on those people who are still running the now 15-year-old Windows 2003.. (though this article about Fasthosts running Windows 2003 for their backup platform made me laugh a lot more than it should have – and bury my face in my hands at the thought of leaving an obsolete OS in charge of managing critical customer backups).

The whole WCry situation around these parts has been, strangely, pretty good – indeed, a lot more people have taken an interest in their backups and in patching their systems, and this is only to be commended.  A good old major outbreak tends to kick people in the teeth and get them thinking about disaster recovery.

Just because I use MacOS and Linux doesn’t make me complacent – oh no.  Very recently Apple released updates to iOS, MacOS and WatchOS to fix a rather nasty exploit, as well as deliver general performance improvements.  It’s one of the reasons I went back to iOS – Apple has become very good at rolling out updates much faster, and more reliably on schedule, than the likes of Samsung.

The server on which this blog runs utilises something called KernelCare, which patches the kernel in real time for newly discovered exploits.  This has the advantages of:

  1. Not having to wait for the OS vendor to release a patch.
  2. Not having to reboot the machine.

In my testing of KernelCare, it has worked very well.  If you’re using it in a VPS, it must support full virtualisation – paravirtualisation won’t cut it.

Meanwhile, Microsoft should stick to producing office productivity software and gaming (Xbox One) – it’s what they’re good at.  I’ve completely lost faith in their desktop and server operating system divisions.

Memset win Best Dedicated Hosting in the 2016 ISPA Awards!

We won, in part, thanks to my superb support team.  Couldn’t ask for a better bunch – always striving to offer the best and fastest support responses in the industry.

ISPA Award Winners 2016

Best Dedicated Hosting winner: Memset

Memset’s response times, technical support options and resilience impressed the judges, as did having IPv6 as standard and their commitment to being carbon neutral.

EasyApache 4: Making cPanel/WHM more sysadmin friendly

One of the reasons for popping up to Edinburgh last week was to hear various representatives from cPanel/WHM talk about the many features of the cPanel/WHM ecosystem, as well as glimpse upcoming features designed to make everybody’s life a bit easier.

As a systems administrator of some 20 years (has it been that long?), I am most comfortable with a command line interface and a decent text editor.  cPanel/WHM provides a user-friendly web interface to many of the complex tasks that one would otherwise go through to configure a web hosting environment.  But I must admit to loving cPanel/WHM just as much as I love the command line, because it is quicker to set up a blog like this through cPanel/WHM than it would be to set up nginx, php-fpm and MySQL (or MariaDB, or Percona Server) from scratch.  That said, to get the very best out of cPanel/WHM, you should still know some Linux commands, because not everything can (or should) be handled through a web interface.

As cPanel/WHM development storms ahead, we’re getting to the point where cPanel/WHM is becoming more standardised so that you’ll be able to manage it just as you would any other kind of bare bones Linux box, with full LSB compliance (with configuration files and scripts in meaningful places) along with full API and command line support for most features.

With the forthcoming EasyApache 4, for example, you can set up Apache and PHP through RPMs rather than having to wait for cPanel/WHM to compile everything for you.  I cannot tell you how much faster it is to install everything through a Linux package management system.

EasyApache 4 is still considered beta, with plans for it to ship in the next major release of cPanel/WHM – version 58, which is about 12-16 weeks away.  Beta or not, EasyApache 4 is perfectly serviceable right now, and it will make it much easier for folk to run multiple versions of PHP (so older sites can run PHP 5.3/5.4 while WordPress and its ilk run PHP 7).  Of course, one would recommend deploying CloudLinux to provide a greater degree of segregation and security for the older, potentially more exploitable apps, but this feature in EasyApache 4 makes it possible for everyone to run multiple versions of PHP side by side.

There will still be a user interface to configure EasyApache profiles.  Indeed, I used it to specify the relevant Apache and PHP packages for this server.  The MultiPHP INI editor is a wonderful inclusion that makes it dead simple to go through all the php.ini options and set them to your liking.  The changes will be applied to whatever PHP handler is being used.

Full PHP-FPM support is one of the biggest and greatest features I’ve been waiting for in cPanel/WHM.  It should be fully supported in version 58, but I’m making great use of it right now with a bit of command line tinkering.  I’m running this blog (and the stats system) on PHP 7 with PHP-FPM.  It wasn’t difficult, and I’m loving the performance from having made the effort.  Having nginx would be a nice-to-have (as a web server rather than as a front-end proxy to Apache), but beggars can’t be choosers and Apache 2.4’s performance is pretty decent as it is.

Brings a whole new definition to the word “clean-up”

Info Insecurity

I shouldn’t laugh at a fellow web hosting company’s misfortune, but when I heard about the almighty muck-up at 123-reg, which inadvertently nuked customers’ virtual private servers (source: BBC) during routine maintenance, I couldn’t help but stifle a chuckle.

But on a more serious note, it highlights a couple of problems (not least of which is the need to be very, very sure about what you’re doing on the underlying host platform):

  • Virtualisation means multi-tenancy: a dedicated host server will be home to quite a few clients, all doing their own thing.  Unless you’re using some form of shared storage for the virtual server images, or can quickly hot-swap the drives into a standby chassis, then if the server goes TITSUP (see below), many people will be affected, and for quite some time!
  • Backups.  I can’t believe people aren’t making multiple backups, especially if they’re not paying the hosting provider for the privilege.  NEVER assume that your hosting provider is taking backups of your data.  There are many options available to ensure that you have sufficient coverage in the case of a failure.  Some hosting providers offer something (at cost), but it’s always recommended that you store backups away from the hosting platform, in a different datacentre, with at least one copy preferably held away from the hosting company altogether.  Why not use a third-party utility such as rclone to make sure you’re backing up valuable data to another service?  I’ve written a guide for cPanel server users here.
  • Redundancy.  If your business is truly that important, you’ll be looking at high availability options that can include, but are not limited to, load balancing (multiple web front ends, multiple DB and file backends).  If one or more servers go TITSUP (Total Inability To Support Usual Performance), others can take over.  Failover options are well worth investigating.  Note: it’s rarely cheap, but if you really value the uptime of your business, it’s a must.

I think the best attitude in this situation is to ask yourself what you would do WHEN these things go wrong – not IF.  Aside from all of the above, your web site may be affected by malware (especially if you’re running legacy versions of the server components, or if your CMS or web site is itself based around legacy components – make sure you keep it up-to-date!), denial of service attacks, or a combination of both.

Running a web site and managing your email is fun, fun, fun!

Using rclone to back up your cPanel backups to a remote destination

cPanel/WHM has a robust backup system that can create .tar.gz archives of your accounts, combining email, web files, databases, etc. into a single archive that can be used to restore the account in the case of emergency, or to move to another server.

What it isn’t so good at is putting them somewhere off the server, to ensure that if your server dies a horrible death (multiple hard drive failures, spontaneous combustion, human error, etc.) you can still restore all your accounts.  Much of the built-in remote backup support depends on third-party remote mounts, Amazon S3 or FTP servers.

Worry no more!  One of the directors of Memset, the company that employs me to do things, has created a multi-purpose transfer tool called rclone.  It can be set up to copy or sync data to a variety of destinations, including:

  • Google Drive
  • Amazon S3
  • Openstack Swift / Rackspace cloud files / Memset Memstore
  • Dropbox
  • Google Cloud Storage
  • Amazon Cloud Drive
  • Microsoft OneDrive
  • Hubic
  • Backblaze B2
  • Yandex Disk
  • The local filesystem

Since this site is hosted on a Memset server, it makes sense to back up my cPanel accounts to my Memstore account, an object storage system that uses the OpenStack Swift protocol.  While we have custom FTP and SFTP proxies, it’s important to note that they can’t accept a file greater than 5GB in size.  Thankfully, rclone speaks native Swift and can handle files beyond 5GB.

The following assumes a basic knowledge of Linux and access to SSH as root.

So the first thing to do is download a copy of rclone for your server.  Most people will be running 64-bit Linux, so download the archive for that architecture.  The next step is to unpack it and install the binary and manpage as per the instructions.  Skip the sudo parts – as root on a cPanel server, sudo isn’t needed, so:
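On this server the steps looked roughly like this – the exact filename changes with each rclone release (check the rclone downloads page), and the install paths here are simply my choice:

```shell
cd /tmp

# Grab the current 64-bit Linux build (the filename/version will vary -
# see rclone's download page for the current link)
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64

# Install the binary somewhere in root's PATH - no sudo required as root
cp rclone /usr/sbin/
chown root:root /usr/sbin/rclone
chmod 755 /usr/sbin/rclone

# Install the manpage
mkdir -p /usr/local/share/man/man1
cp rclone.1 /usr/local/share/man/man1/
mandb
```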

Now run:
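That is, kick off rclone’s interactive configuration tool:

```shell
rclone config
```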

You’ll see something like this:
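On a fresh install with no remotes defined yet, the menu looks something like this (the exact options vary between rclone versions):

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q>
```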

Press ‘n’ for a new remote.  You’ll then be prompted to give it a name – call it whatever you like; in this example I’ll be using the name ‘memstore’.  Once you’ve given it a name, you’ll be prompted for the storage type.  In our example, it’s OpenStack Swift (number 10):
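The answers I gave looked something like the following sketch.  The auth URL is the Memstore Swift endpoint as documented by rclone, the user/tenant split is explained further down, and ‘msdrakeab2’ is just the example account name used in this post:

```
name> memstore
Storage> 10                                  # swift - OpenStack Swift
user> cpanel                                 # the user part of the Memstore login
key> <the password for the "cpanel" user>
auth> https://auth.storage.memset.com/v1.0   # Memstore's auth endpoint
tenant> msdrakeab2                           # the account part of the login
region>                                      # leave blank for Memstore
storage_url>                                 # leave blank
```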

I’ve created a user within my Memstore/Memset account control panel called “cpanel”, which I’ll use to connect to the Memstore container “cpaneldemo” that will hold my backups.


I then assign read and write permissions for the user “cpanel” to the container “cpaneldemo”.


Now to configure rclone:

So the username is the right-hand part of the Memstore username, and the tenant is the left-hand part (e.g. msdrakeab2.cpanel becomes user = cpanel, tenant = msdrakeab2).  The key (or password) will be displayed in plain text at all times and is stored within the /root/.rclone.conf file.  Make sure that only root has permission to read this file – it should do by default, e.g.:
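A quick check (the file size and date will obviously differ on your server):

```shell
ls -l /root/.rclone.conf
# Expect permissions of -rw------- (600), owned by root:root.
# If they're any wider than that, tighten them up:
chmod 600 /root/.rclone.conf
```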

So we’re ready to rock and roll.  We don’t have any data in the container yet, but we can do a quick test to make sure we’re able to connect:
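Listing what’s on the remote is a cheap way to prove the credentials work:

```shell
# List the containers at the top level of the remote
rclone lsd memstore:

# List any files in the backup container (empty for now, so no output)
rclone ls memstore:cpaneldemo
```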

All seems to be working, so let’s manually move some backups to Memstore.  Memset configures cPanel backups to be dumped to /backup on your server, so the initial upload will look like this:
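Assuming the default /backup location, something like:

```shell
# Make the container mirror /backup.  Note that "sync" deletes remote
# files that no longer exist locally - use "rclone copy" instead if
# you'd rather nothing was ever deleted remotely.
rclone sync /backup memstore:cpaneldemo
```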

When we look at the contents of the container through the Memset account control panel, the freshly uploaded backup directories are all there.


How do I retrieve backups?

Very easily done.  Let’s say we want to grab the account called ‘mice’ that was backed up on the 31st.  In the cPanel backup hierarchy, it’ll look like this:
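(I’m using 2016-05-31 as an illustrative date here – substitute whichever dated backup directory you’re after.)

```
/backup/2016-05-31/accounts/mice.tar.gz
```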

So to get that back from Memstore, we’d do this:
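Again using the illustrative date from above – recent rclone versions are happy copying a single file like this:

```shell
rclone copy memstore:cpaneldemo/2016-05-31/accounts/mice.tar.gz /tmp
```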

where /tmp is the local directory where you want the file placed – it can be anywhere on the filesystem.   You can also leave out the file and have the entire contents of the ‘accounts’ directory transferred (although in this example, there is only one file in ‘accounts’):
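For example:

```shell
# Pull down the whole accounts directory for that day instead
rclone copy memstore:cpaneldemo/2016-05-31/accounts /tmp
```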

How do I automate the backups?

Simple, just add it as a cron job:
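Something like this in root’s crontab (crontab -e) will do – adjust the path to wherever you installed the rclone binary:

```
# m  h   dom mon dow  command
30   1   *   *   *    /usr/sbin/rclone sync /backup memstore:cpaneldemo >> /var/log/rclone.log 2>&1
```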

which will run at 1:30am and will dump the output to /var/log/rclone.log.

Other ideas

You could use rclone to create historical backups within Memstore – handy if you keep a set of daily backups that you’d like to retain for longer than cPanel keeps them on your server’s filesystem.  To do this, ensure you have a destination container to sync to.  So let’s create one:
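Creating a container with rclone is a one-liner:

```shell
rclone mkdir memstore:cpdev
```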

then sync the contents of one container to the other.  Note that all of this is done within Memstore – no data is transferred from your server to Memstore (or vice versa).

The following example demonstrates a sync of the existing data in the “cpaneldemo” container to the new container “cpdev”.  I could automate this by adding a cron job to sync data from “cpaneldemo” to “cpdev” on a weekly basis, for example.
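A sketch of that sync, using the container names above:

```shell
# Mirror one container into the other, entirely within Memstore
rclone sync memstore:cpaneldemo memstore:cpdev
```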