Posts Tagged ‘deployment’
virtualenv is a tool for simplifying dependency management in Python applications. As the name suggests, virtualenv creates a virtual environment which makes it easy to install Python packages without needing root privileges to do so.
To use the packages installed in a virtual environment you run the activate script in the bin directory of the virtual environment. This is fine when you’re working on the command line, but you don’t want to have to remember this step when running the debug server, and it’s hard to get that to work when the site is deployed under mod_wsgi.
To make things easier you can add the appropriate directory from the virtual environment to Python’s path as part of manage.py, or the appropriate fcgi or wsgi control script.
import os
import site
import sys

root_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
site.addsitedir(os.path.join(root_dir, 've/lib/python%i.%i/site-packages' % (sys.version_info[0], sys.version_info[1])))
Just add the code above to the top of your manage.py file and the ve virtual environment will always be activated when you run the script.
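The same trick works for mod_wsgi: put the snippet at the top of your wsgi script before anything imports your project's dependencies. Here's a minimal sketch, assuming the virtual environment lives in a ve directory one level above the script; adjust the paths to match your own layout.

```python
import os
import site
import sys

# Hypothetical layout: the virtualenv is a 've' directory next to the
# directory containing this script.
root_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
site_packages = os.path.join(
    root_dir, 've/lib/python%i.%i/site-packages' % (sys.version_info[0], sys.version_info[1]))

# addsitedir appends the directory to sys.path and processes any .pth
# files in it, so packages installed in the virtualenv become importable.
site.addsitedir(site_packages)
```

After this runs, imports of packages installed in the virtualenv resolve normally, with no activate step required.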
In a previous post I discussed what you want from an automatic deployment system. In this post I’ll discuss how to use Fabric to solve the repeatability and scalability requirements that I set out.
Fabric is a tool which lets you write scripts to automate repetitive tasks. So far, so bash like. What sets Fabric apart is the tools it gives you to run commands on remote servers. Fabric allows you to run the same commands on multiple machines, and to move files between the hosts easily.
To get started with Fabric you’ll need to install it, but a simple sudo easy_install fabric should be enough to get you up and running. The Fabric website has excellent documentation, including a tutorial, but before I discuss how to integrate Fabric with your Django deployment process, let’s go over the basics.
A Fabric control file is a Python file named fabfile.py. In it, you define a series of functions, one for each command that you want to run on the remote servers.
from fabric.api import env
from fabric.context_managers import cd
from fabric.operations import sudo

env.hosts = ['host1', 'host2']

def update():
    with cd('/data/site'):
        sudo('svn up')
    sudo('/etc/init.d/apache2 graceful')
We’ve defined the function update which can be run by typing fab update in the same directory as fabfile.py. When run, Fabric will connect in turn to host1 and host2 and run svn up in /data/site and then restart Apache.
Typically I define two functions. An update command, like the one above, is used to update the site where it has previously been deployed. A deploy command is used to check out the site onto a new machine. Fabric lets you override the host list on the command line using the -H option. To deploy one of my sites on a new box I just have to type fab -H new-machine deploy and the box is set up for me.
Fabric helps you fulfil a few of the requirements for a perfect deployment system. It is scalable, as to extend your system to a new machine you only need to add the hostname to the env.hosts list. It is also repeatable, providing you put every command you need to run to update your site into your fabfile.
With an automated deployment system in place we can now move on to looking at dependency, settings and database change management, but those are subjects for a future post.
Recently I have been giving a lot of thought to how best to deploy websites, specifically Django powered sites. In future posts I’ll describe how I use some of the tools available to deploy websites, but in this post I want to set out the goals of any system that you use to deploy a website.
What do we mean when we say deployment? Clearly it involves getting your code onto a production server, but the process also needs to look after any dependencies of your code. Updates also sometimes require database changes, and these need to be managed and deployed with the appropriate code changes. If your website is more than just a hobby then it will also usually involve some sort of high availability set up.
The first requirement is repeatability. You might be able to follow a list of ten commands without making a mistake under normal circumstances, but when your site is broken and you need to get a fix deployed as soon as possible, following that list suddenly becomes a whole lot harder. For this reason, and to avoid the temptation to cut corners when deploying a change, automating as much as possible is key.
The second requirement is scalability. As your website grows, your deployment process needs to be able to grow with it. As you add a new server to your cluster you don’t want to have to spend a long time updating your deployment process, nor do you want the extra server to create extra work for you on every deployment.
Another requirement is speed. I’m usually very skeptical of anything that claims ‘written for speed’ as a key benefit; unless you know something is slow, it’s usually better to make it easier to maintain than quick. Here, though, an automatic process will inevitably be quicker than a manual one, and whether or not your deployment process results in downtime for your site, the upgrade is inevitably a risk, so ideally that window of risk will be kept to a minimum.
Database migrations are a tricky thing to get right. A deployment system must allow developers to track changes that they make to the database, and to make it easy to ensure these changes are applied at the right moment.
In future posts I will talk about how tools such as fabric, virtualenv, pip and south can be used to meet these requirements and ensure you never have a failed deployment again.