Staging Servers on Demand

As our team of developers and designers grew, so did our need for more staging servers for testing new features. With multiple teams working on multiple features, having just two or three staging servers quickly became a problem, especially when some features occupied a server for a couple of days. We needed a way for every developer to get a staging server “on demand”.

We do use ngrok, but serving a feature from a developer’s machine ties that developer up. Ideally we want to deploy each feature to a separate server where it can be checked and approved at any time.

Since we already use AWS, and creating and shutting down instances there is quick and easy, we wrote a script that uses the aws command line tools to create and manage instances. The purpose of this script is to make the existing aws commands easy to use with little to no configuration. The only configuration required for the script to work is a named profile for the aws-cli (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles).

Once a profile is set up, a single command plus an instance id is enough to start, stop, restart or terminate an instance. We tag these instances with a specific name so we can later identify and filter them as staging instances, and we add a user tag so it’s easy to check who started which instance.

All the commands we execute follow the same pattern (the <profile> is the named profile set up beforehand):
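A minimal sketch of that shape (the subcommand and options vary per action; the profile name and instance id below are placeholders):

```shell
aws ec2 <command> --profile <profile> [options]

# For example, starting a stopped staging instance:
aws ec2 start-instances --profile staging --instance-ids i-0123456789abcdef0
```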

An example list of all instances command and output:
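The wrapper’s name below is illustrative, not the original script:

```shell
# Prints a table with the id, state and public IP of every staging
# instance started by the current user.
./staging-instances list
```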

What this one command does in the background is:

It calls describe-instances with the given PROFILE and filters on the AMI (STAGING_DYNAMIC_AMI, in this case, is the name of the AMI we create the instances from) and on the user tag, so we only get the instances created by whoever issued the command. The second part formats selected fields from the matching instances as a table.
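Roughly, the underlying invocation looks like this (we assume here that the AMI is matched via the image-id filter, and the variable names besides PROFILE and STAGING_DYNAMIC_AMI are illustrative):

```shell
aws ec2 describe-instances --profile "$PROFILE" \
    --filters "Name=image-id,Values=$STAGING_DYNAMIC_AMI" \
              "Name=tag:user,Values=$USER" \
    --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]' \
    --output table
```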

As we can see, the list command gives us the state, IP address and id of each instance. We’ve also set up a custom AMI that we create these instances from. The AMI is just an Ubuntu 14.04 image with MongoDB, MySQL, Java and Tomcat installed but not yet configured. Later, during the deployment process, we create and upload all the needed configuration files. For this we use Fabric, a Python library that lets us run shell operations locally and remotely, including sudo commands.

To use Fabric we need to create a Python module containing one or more functions, which we can then execute via the fab command-line tool. Each of these functions executes something on the remote host, be that restarting Apache or building and uploading a WAR file. When executing these functions we also specify the IP address of the host we want to run them on:
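Concretely, an invocation looks like this (the task name deploy is illustrative):

```shell
fab -H <server-ip> deploy
```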

Passing the IP address this way makes it available in the fabfile module’s environment dictionary, which we use when setting up our Apache config, ssh config and the external config for our Grails application, where we need to specify server paths and URIs.

Some of the commands that we can issue via fab:
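For illustration, assuming task names like these (the actual names in our fabfile may differ):

```shell
fab -H <server-ip> restart_tomcat   # restart the Tomcat service
fab -H <server-ip> upload_configs   # render and upload the config templates
fab -H <server-ip> deploy           # full build-and-deploy cycle
```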

For example, the command that restarts tomcat looks like:
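A sketch using Fabric 1.x (the service name and exact task body are assumptions, not the original fabfile):

```python
from fabric.api import hide, sudo

def restart_tomcat():
    # hide() suppresses fab's own chatter; see the note on verbosity below.
    with hide('running', 'warnings'):
        sudo('service tomcat7 restart')
```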

The fab tool is very verbose and by default prints almost everything it can. That’s why we use with hide() with the 'running' and 'warnings' arguments, which suppresses warning messages and the echo of each command being executed.

For all the configuration files we have preset template files to which we pass any needed variables (like the IP address of the server). For the templates we use yet another Python library, Jinja2, whose syntax is very similar to Django templates.

When we run the full deployment command, we first stop the Apache/Tomcat services, then render the config templates with the server’s IP address and upload them to the server.

Jinja2 is built around a central object called the template Environment, through which we load our templates. We create the object pointing at our templates directory, then load the desired template:
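A sketch, assuming the templates live in a templates/ directory; the file name is illustrative:

```python
from jinja2 import Environment, FileSystemLoader

# Point the Environment at the directory holding our template files.
env = Environment(loader=FileSystemLoader('templates'))
template = env.get_template('apache-vhost.conf.j2')
```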

To pass the variables to the template and render it:
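For example, with a minimal inline template (our real ones are full config files loaded from disk, and the IP comes from the fab host argument):

```python
from jinja2 import Template

template = Template('ServerName {{ ip_address }}')
rendered = template.render(ip_address='10.0.0.12')
print(rendered)  # ServerName 10.0.0.12
```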

After that we can create the file on our remote server:
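A sketch with Fabric 1.x, which runs on Python 2; the remote path is illustrative:

```python
from StringIO import StringIO  # Python 2, as used by Fabric 1.x

from fabric.api import put

def upload_apache_config(rendered):
    # put() accepts a file-like object as the local side,
    # so the rendered template never needs a temp file.
    put(StringIO(rendered), '/etc/apache2/sites-available/staging.conf',
        use_sudo=True)
```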

Anything inside run() executes on the remote server. If we want to execute something locally we do that with local() instead. For example, to build the WAR file locally we call:
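Roughly:

```python
from fabric.api import local

def build_war():
    # Runs on the machine invoking fab, not on the remote host.
    local('grails war')
```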

We create any other configs needed and upload them to the server, then build the WAR file locally, upload it to the remote server, and restart Tomcat.
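Putting the whole sequence together, a deploy task might look like this (the paths, service names and task body are assumptions, not the original fabfile):

```python
from fabric.api import local, put, sudo

def deploy():
    sudo('service apache2 stop')
    sudo('service tomcat7 stop')
    # ...render and upload the config templates here, as described above...
    local('grails war')
    put('target/app.war', '/var/lib/tomcat7/webapps/', use_sudo=True)
    sudo('service tomcat7 start')
    sudo('service apache2 start')
```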

With this setup, in about 20 minutes we can get a fully working staging server with our feature deployed and ready to be tested.
