All posts by Phil

Installing ClusterControl with an existing cluster and Nginx

ClusterControl is a comprehensive tool for managing and monitoring MySQL and MongoDB clusters; more information can be found here: http://www.severalnines.com/clustercontrol

ClusterControl comes with some great scripts for automating deployment, but there are a number of hard-coded dependencies (like Apache) which make it rather painful to integrate with your existing systems.

Background info:

There are 3 components that make up ClusterControl:

ClusterControl UI: A CakePHP based GUI
CMON: A daemon process which monitors servers and writes to a MySQL database
CMON API: A PHP based API which reads data from the CMON database

ClusterControl UI and CMON API can be on the same or different hosts and the UI can talk to multiple instances of the CMON API (hence multiple CMON daemons).

The UI also makes AJAX calls to the API via a script in the path /access, which simply forwards the same request to the API with the addition of the API token.

There are several .htaccess files that rewrite requests to the UI, the access script and the API.

Installation process:

CMON can be installed by following this guide: http://support.severalnines.com/entries/20613923-installation-on-an-existing-cluster

The guide contains a section at the end to install the UI/API; this does install and configure Apache, but you can simply remove it again.
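
If you don't want Apache hanging around afterwards, removing it is quick; for example, something like the following should do it (this assumes a Debian/Ubuntu host, so adjust for your distribution):

sudo service apache2 stop
sudo apt-get remove --purge apache2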

Making it work with Nginx:

After a bit of tinkering I’ve put together a config which works nicely for both the ClusterControl UI and CMON API:

server {
        listen 1.2.3.4:80;
        server_name mydomain.com;

        access_log /var/log/nginx/mydomain.com-access.log;
        error_log /var/log/nginx/mydomain.com-error.log;

        root /var/www;
        index index.php;

        location ~ \.htaccess {
                deny all;
        }

        location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
        }

        # Handle requests to /clustercontrol
        location /clustercontrol {
                alias /var/www/clustercontrol/app/webroot;
                try_files $uri $uri/ /clustercontrol/app/webroot/index.php;
        }

        # Equivalent of $is_args but adds an & character
        set $is_args_amp "";
        if ($is_args != "") {
                set $is_args_amp "&";
        }

        # Handle requests to /clustercontrol/access
        location ~ "^/clustercontrol/access/(.*)$" {
                try_files $uri $uri/ /clustercontrol/app/webroot/access/index.php?url=$1$is_args_amp$args;
        }

        # Handle requests to /cmonapi
        location ~ "^/cmonapi/(.*)$" {
                try_files $uri $uri/ /cmonapi/index.php?url=$1$is_args_amp$args;
        }

}

Making the UI work is straightforward; however, the access script and the API require the partial URL plus the equivalent of mod_rewrite's [QSA] flag, because the ? in the try_files directives stops Nginx from auto-appending the query string from the request. Hence the regex captures and the explicitly appended args.
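
To make the rewrite concrete, here's how an example request maps under the config above (the URL and parameter names are made up):

# Incoming request:
#   GET /clustercontrol/access/foo/bar?baz=1
# The regex captures "foo/bar" into $1, $is_args_amp becomes "&", $args is "baz=1",
# so try_files rewrites the request to:
#   /clustercontrol/app/webroot/access/index.php?url=foo/bar&baz=1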

A small code change is also required in the library used by the access script, located at /clustercontrol/app/Lib/Access.php. The API token header uses an underscore rather than a dash, which causes Nginx to ignore it: by default Nginx drops request headers whose names contain underscores (see the underscores_in_headers directive).

Replace both instances of:

$headers[0] = "CMON_TOKEN: {$token}";

With:

$headers[0] = "CMON-TOKEN: {$token}";

Everything should now work!


Some observations while upgrading Percona XtraDB Cluster from 5.5 to 5.6

I followed the Percona guide to upgrade my XtraDB Cluster (MySQL + Galera) from 5.5 to 5.6; details can be found here: http://www.percona.com/doc/percona-xtradb-cluster/5.6/upgrading_guide_55_56.html

The upgrade process

In addition to the documented steps, I put the upgraded nodes into read-only mode, and after installing the new binaries I reconfigured the xinetd service used to run clustercheck:

server_args = clustercheck <password> 0 /var/log/mysqlchk.log 0

This sets 'available when donor' and 'available when read only' to off. By default the SST method in 5.6 is Xtrabackup, but my 5.5 cluster had been set up with rsync; more on this later.
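
For reference, the full xinetd service definition ends up looking something like the sketch below (based on the stock mysqlchk service shipped with clustercheck; the port and paths may differ on your system):

service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        server_args     = clustercheck <password> 0 /var/log/mysqlchk.log 0
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}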

The first two nodes upgraded just fine, but when I restarted the third node the entire cluster crashed. Restarting MySQL on all nodes got things back up and running again.

After upgrading

Everything was initially working fine, but I noticed some odd behaviour on one node: it crashed repeatedly while I was purging old data from one of the databases. The node would then fail to restart, giving an rsync error that was only resolved by performing a rolling restart of the other two nodes.

Switching from rsync to Xtrabackup

Rsync is clearly not the best option for a 5.6-based cluster. It worked fine for the last 18 months with the 5.5 cluster, but there are good reasons why Xtrabackup is the default in 5.6; the main benefit I can see is that Xtrabackup is non-blocking, so the donor node can continue to accept connections during SST.

Xtrabackup requires a username/password to log in and perform the transfer, so you have to create an account with permission to access all databases from each server in the cluster. I simply ran CREATE USER and GRANT statements on a single node and allowed the cluster to replicate the new account to the others.
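
As a rough sketch, the statements look something like this (the user name 'sstuser', host and privilege list are assumptions based on the Percona documentation, which suggests RELOAD, LOCK TABLES and REPLICATION CLIENT as the minimum; substitute your own credentials):

mysql -u root -p <<'SQL'
CREATE USER 'sstuser'@'%' IDENTIFIED BY 'password';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'%';
FLUSH PRIVILEGES;
SQL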

It’s important that the datadir variable is defined in my.cnf, or Xtrabackup doesn’t work.
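In other words, make sure the [mysqld] section contains an explicit datadir line (the path below is the typical default; adjust to your layout):

[mysqld]
datadir = /var/lib/mysql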

A test can be carried out from the shell before updating my.cnf:

innobackupex --host=<another node> --user=<user> --pass=<password> /tmp/

Finally my.cnf can be updated:

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=user:password

Checking the MySQL log should show SST now using Xtrabackup.


Passing backend server host name as a custom header with Nginx

When using load-balanced web servers it's often useful to know which server processed a request, and there's a simple, unobtrusive way to achieve this with the Nginx web server. Nginx conveniently populates the server's real host name into the $hostname variable, so you can simply output it as a custom header in the global config file:

http {
    ...
    # Show server host name as header
    add_header X-Backend-Server $hostname;
}
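
A quick way to verify the header is being set (the host name in the example output is obviously made up; yours will be the responding backend's):

$ curl -sI http://mydomain.com/ | grep X-Backend-Server
X-Backend-Server: web1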

Another problem with VMware Tools solved

The issues with 3.8+ kernels that I posted about previously have been resolved in newer versions of VMware Tools, but unfortunately the latest version, 9.6.1 (bundled with Fusion 6.0.2), brings a new problem with the file sync driver: deletions within files are reflected in the guest OS as truncation at the end of the file.

There is a very simple solution to this; it's reasonably well documented on the VMware forums, and the developer has responded to say a fix is in progress. I thought it would be useful to collate all the information into a single post.

The fix is simply to downgrade the tools to 9.6.0. However, a lot of people have had issues with this, as the default behaviour is to auto-upgrade, putting you right back on version 9.6.1.

Firstly you need to edit the vmx file of your VM (in some versions of VMware I think you can change this in the GUI, but I couldn't find the option in Fusion); I would suggest shutting the VM down completely before doing this. In the terminal, navigate into the VM folder and open the file in a text editor. Look for the setting "tools.upgrade.policy", which seems to default to "upgradeAtPowerCycle", make sure it is set to "manual", then save any changes.
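
The relevant line in the vmx file should end up looking like this:

tools.upgrade.policy = "manual"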

Finally, download the relevant tools package from https://softwareupdate.vmware.com/cds/vmw-desktop/fusion/6.0.1/1331545/packages/, untar/unzip it a few times and perform the standard tools installation process (the example below is for Linux guests):

wget https://softwareupdate.vmware.com/cds/vmw-desktop/fusion/6.0.1/1331545/packages/com.vmware.fusion.tools.linux.zip.tar
tar -xf com.vmware.fusion.tools.linux.zip.tar
unzip com.vmware.fusion.tools.linux.zip
sudo mkdir /mnt/iso
sudo mount -o loop payload/linux.iso /mnt/iso
cp /mnt/iso/VMwareTools-9.6.0-1294478.tar.gz .
tar xzf VMwareTools-9.6.0-1294478.tar.gz
cd vmware-tools-distrib/
sudo ./vmware-install.pl

Recursively remove/delete a directory in PHP using SPL components

File system management is not the most common use case for PHP, but while writing a command-line tool today I was surprised to find that PHP doesn't have a function to recursively remove a directory (I was expecting at least a flag for rmdir or unlink, but no, nothing).

I did a quick Google search, just to be sure, and found numerous other people asking the same question, along with a few rather long-winded recursive functional solutions floating around.

Not being much of a fan of functional programming (and also because there's really no excuse for it when PHP has had reasonable OOP and some fantastic stuff in the SPL for a long time now), I knocked together a much cleaner SPL-based solution for recursively deleting a directory:

function recursiveRmDir($dir)
{
    // Iterate depth-first (CHILD_FIRST) so each directory is emptied before
    // we try to remove it; SKIP_DOTS excludes the "." and ".." entries.
    $iterator = new \RecursiveIteratorIterator(
        new \RecursiveDirectoryIterator($dir, \FilesystemIterator::SKIP_DOTS),
        \RecursiveIteratorIterator::CHILD_FIRST
    );
    foreach ($iterator as $filename => $fileInfo) {
        if ($fileInfo->isDir()) {
            rmdir($filename);
        } else {
            unlink($filename);
        }
    }
}

I've obviously not added any checks to ensure that rmdir and unlink succeed, but that would be a simple addition; I really wanted to post this as an example of a nice, modern way to use PHP rather than relying on old functional components.


Patching VMware Tools to fix multiple installation errors on Ubuntu 13.04

Ubuntu 13.04 ships the 3.8.0 kernel, which has a few changes that VMware have not yet got round to fixing in the tools package. These patches were written for VMware Workstation 9, but I used them successfully with VMware Fusion 5, so there is evidently little or no difference between the tools packages of the two products. I did not write these patches myself; I'm writing this post to consolidate the fixes for multiple separate issues encountered when installing the tools on an Ubuntu 13.04 guest.

Issue 1 – Cannot find kernel headers

This is due to the location of version.h changing in the newer kernel and can be fixed with a simple symlink:

sudo ln -s /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h /usr/src/linux-headers-$(uname -r)/include/linux/version.h

Issue 2 – Build of vmci fails

The following errors will be encountered in vmware-config-tools.pl:

Using 2.6.x kernel build system.
make: Entering directory `/tmp/modconfig-9TKqy8/vmci-only'
/usr/bin/make -C /lib/modules/3.8.0-21-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
make[1]: Entering directory `/usr/src/linux-headers-3.8.0-21-generic'
  CC [M]  /tmp/modconfig-9TKqy8/vmci-only/linux/driver.o
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:127:4: error: implicit declaration of function ‘__devexit_p’ [-Werror=implicit-function-declaration]
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:127:4: error: initialiser element is not constant
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:127:4: error: (near initialisation for ‘vmci_driver.remove’)
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:1754:1: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘vmci_probe_device’
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:1982:1: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘vmci_remove_device’
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:119:12: warning: ‘vmci_probe_device’ used but never defined [enabled by default]
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:121:13: warning: ‘vmci_remove_device’ used but never defined [enabled by default]
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:2063:1: warning: ‘vmci_interrupt’ defined but not used [-Wunused-function]
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:2137:1: warning: ‘vmci_interrupt_bm’ defined but not used [-Wunused-function]
/tmp/modconfig-9TKqy8/vmci-only/linux/driver.c:1717:1: warning: ‘vmci_enable_msix’ defined but not used [-Wunused-function]
cc1: some warnings being treated as errors
make[2]: *** [/tmp/modconfig-9TKqy8/vmci-only/linux/driver.o] Error 1
make[1]: *** [_module_/tmp/modconfig-9TKqy8/vmci-only] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-21-generic'
make: *** [vmci.ko] Error 2
make: Leaving directory `/tmp/modconfig-9TKqy8/vmci-only'

The communication service is used in addition to the standard communication
between the guest and the host.  The rest of the software provided by VMware
Tools is designed to work independently of this feature.
If you wish to have the VMCI feature, you can install the driver by running
vmware-config-tools.pl again after making sure that gcc, binutils, make and the
kernel sources for your running kernel are installed on your machine. These
packages are available on your distribution's installation CD.
[ Press Enter key to continue ]

The solution to this issue involves patching driver.c to work with the 3.8 kernel:

cd ~
wget http://cdn.philbayfield.com/downloads/vmware-vmci.patch
cd /usr/lib/vmware-tools/modules/source/
sudo tar xf vmci.tar
cd vmci-only/
sudo patch -p1 < ~/vmware-vmci.patch
cd ..
sudo tar cf vmci.tar vmci-only

Issue 3 – Build of vmhgfs fails

The following errors will be encountered in vmware-config-tools.pl:

Using 2.6.x kernel build system.
make: Entering directory `/tmp/modconfig-LxeJ83/vmhgfs-only'
/usr/bin/make -C /lib/modules/3.8.0-21-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
make[1]: Entering directory `/usr/src/linux-headers-3.8.0-21-generic'
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/backdoor.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/backdoorGcc64.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/bdhandler.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/cpName.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/cpNameLinux.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/cpNameLite.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/dentry.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/dir.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/file.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/filesystem.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/fsutil.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/hgfsBd.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/hgfsEscape.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/hgfsUtil.o
  CC [M]  /tmp/modconfig-LxeJ83/vmhgfs-only/inode.o
/tmp/modconfig-LxeJ83/vmhgfs-only/inode.c: In function ‘HgfsTruncatePages’:
/tmp/modconfig-LxeJ83/vmhgfs-only/inode.c:888:4: error: implicit declaration of function ‘vmtruncate’ [-Werror=implicit-function-declaration]
cc1: some warnings being treated as errors
make[2]: *** [/tmp/modconfig-LxeJ83/vmhgfs-only/inode.o] Error 1
make[1]: *** [_module_/tmp/modconfig-LxeJ83/vmhgfs-only] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-21-generic'
make: *** [vmhgfs.ko] Error 2
make: Leaving directory `/tmp/modconfig-LxeJ83/vmhgfs-only'

The filesystem driver (vmhgfs module) is used only for the shared folder
feature. The rest of the software provided by VMware Tools is designed to work
independently of this feature.

If you wish to have the shared folders feature, you can install the driver by
running vmware-config-tools.pl again after making sure that gcc, binutils, make
and the kernel sources for your running kernel are installed on your machine.
These packages are available on your distribution's installation CD.
[ Press Enter key to continue ]

The solution to this issue involves patching compat_mm.h to work with the 3.8 kernel:

cd ~
wget http://cdn.philbayfield.com/downloads/vmware-vmhgfs.patch
cd /usr/lib/vmware-tools/modules/source/
sudo tar xf vmhgfs.tar
cd vmhgfs-only/shared/
sudo patch -p1 < ~/vmware-vmhgfs.patch
cd ../..
sudo tar cf vmhgfs.tar vmhgfs-only

Finally, after applying any (or all) of these patches, just run vmware-config-tools.pl again and everything should complete without issue.


Showing the correct client IP in logs and scripts when using Nginx behind a reverse proxy

I've noticed that when reverse proxies such as HAProxy are used to serve high-availability websites, many people struggle to get the real client IP address into both their server logs and their scripting languages (e.g. PHP).

This is actually really easy to do, but the methods don't seem to be well known: there are numerous forum posts on the subject, and many people resort to making code changes in their scripts and using custom logging settings on their web server.

With Nginx it's EASY! I'm sure you can do it with other web servers too, but Nginx is my web server of choice these days due to its low memory footprint, speed and features.

For the sake of an example I’ll explain how this works in the context of the hosting that this website is running on. This is a pair of HAProxy servers in front of a pair of Nginx servers with PHP configured via FPM.

Firstly you need to configure HAProxy to pass the client IP address, which is quite simply done by adding the “forwardfor” and “http-server-close” options to the backend, ensuring that the real client IP reaches the backend web servers via the X-Forwarded-For header. Here is the configuration I’m using for my HTTP backend:

backend http
    mode http
    option httpchk GET /index.html
    balance leastconn
    option http-server-close
    option forwardfor
    server web1 <web1_ip>:80 check inter 1000 fastinter 500 downinter 5000 rise 3 fall 1
    server web2 <web2_ip>:80 check inter 1000 fastinter 500 downinter 5000 rise 3 fall 1

Secondly you need to configure Nginx to use the correct source for the client IP address using the HttpRealIP module.

There are 2 main directives you should define:

  • set_real_ip_from specifies the address of a trusted server (e.g. the load balancer) whose address Nginx is allowed to replace with the real client IP
  • real_ip_header specifies the header from which Nginx should take the replacement address (here, X-Forwarded-For)

These are also global directives, so they can be placed in the main config in the 'http' section to take effect for all virtual hosts.

Here is an example for a configuration using 2 load balancers:

http {
...
    set_real_ip_from <load_balancer1_ip>;
    set_real_ip_from <load_balancer2_ip>;
    real_ip_header X-Forwarded-For;
...
}

After making these simple changes (5 lines of configuration in total for a dual load-balanced cluster), all HTTP logs will show the correct client IP, and scripting languages will see it in REMOTE_ADDR, without requiring a single code change.


PC Rebuild – Part 2

All the parts have finally arrived, including one or two things I forgot about. I had to order a few extra connectors, as the 1/2″ ID tubing was slightly less easy to route than I expected (coming from 3/8″ ID tubing currently). It's also worth mentioning that neither Aqua Computer's D5 pumps nor XSPC's twin D5 dual bay reservoir include O-rings for the pumps (they are usually included with pump tops, by the looks of things), so they had to be ordered separately, but they are easy enough to find.

A small amount of initial assembly is required. Firstly, mounting the fans to the radiator:

Then the pumps to the reservoir:

Finally the Aquaero block:

Fitting the Aquaero block is a little more complicated. You have to remove the circuit board, take out the screws holding the heat sink in place, add new thermal pads and then attach the block with the different screws provided. As the block is quite small, it's also a bit of a pain to attach the compression fittings; I used a 10mm extender on the left and a 45-degree angled fitting on the right to give clearance from the block.

I've got a fried ASUS P9X79 board that I've not yet got round to RMAing, which makes a handy jig for mounting the CPU block in the right place. This case is so big that the full ATX board looks tiny:

This is a full height radiator plus fans and there is still room above the board to see the cable management, a huge improvement over my current case.

Finally it's time to put everything in and add the tubing. At this point, after fiddling around getting everything in place, with my hands sore from tightening the compression fittings and from removing and refitting parts to make things easier, I got a bit slack with the pictures and just wanted to get things finished!

Here are a few during and after photos:

Fill with distilled water and away we go!

At the time of writing I've had the system running for about 48 hours. I only ordered 1 litre of coolant, but the loop took 1.5 litres to fill, so for now it's running on distilled water. I'm waiting on a new order of Mayhems Aurora fluid, which should arrive on Tuesday!


PC Rebuild – Part 1

Lately I've been trying to reduce the noise of my desktop PC, but the knock-on effect has been a loss of cooling performance. I'm also bored of the case (which looks pretty grubby now) and the system colours, as well as being fed up with the mess of cables inside (no cable management back then). As the case and the complete water loop are in their 6th year of use, they've had a good run and it's time for a change.

Initially I've decided against pulling my brand new GTX 670 card apart to add a water block; it runs very quietly anyway. For now I'll cool the CPU and the Aquaero fan controller with water, using a nice big rad that can support a GPU later, and run the fans at lower RPM and noise. I'm sticking with BitFenix fans, because they look great and don't make too much noise, and a bay reservoir with integrated pumps to save space.

Parts list
Corsair 800D full tower case

Cooling components

XSPC RX360 radiator

XSPC twin D5 dual bay reservoir

Aqua Computer D5 pumps with USB/Aquabus

EK Supremacy CPU block

Aquaero 5 block

Aqua Computer inline temperature sensors

1/2″ ID compression fittings and various other G1/4″ fittings

Mayhems X1 red fluid


An experiment in purchasing Twitter followers

There are many sites now offering Twitter followers for sale, for as little as $5 for 5,000 followers.

I was curious to see whether this might be a viable method of boosting the presence of a new website or online business and, more importantly, whether it would hold up under scrutiny.

I bought 4,000 followers from someone on Fiverr and expected my followers list to soon be filled with eggs! Once the order had been completed I was surprised to see that the vast majority of the profiles actually looked fairly genuine; all had pictures and a bio, which is a good start. Unfortunately this is where the illusion ends: as soon as you start to look at the follower accounts, a very familiar pattern emerges. Most of my latest 4,000 followers have zero followers of their own (some have 1 or 2), they're all following the same number of people and most of them have never tweeted. Look even closer and you can see that they all follow the exact same list of people!

What I find very interesting is that the list of people each of these accounts follows is obviously the client list of the person selling the followers, and it includes everything from new businesses and companies to verified users and minor celebrities. It looks like everyone is at it.

To conclude: it's clearly easy to identify fake followers, even those that look good on the surface. For public figures and B2C companies I can see that this might be a nice way to boost your figures and "look" popular; for a B2B company it seems like a bad idea, especially if your customers are likely to perform any kind of due diligence checks on you.
