Andrew Wilkinson

Random Ramblings on Programming

Integrating Python and Javascript with PyV8

with 8 comments

A hobby project of mine would be made much easier if I could run the same code on the server as I run in the web browser. Projects like Node.js have made Javascript on the server a more realistic prospect, but I don’t want to give up on Python and Django, my preferred web development tools.

The obvious solution to this problem is to embed Javascript in Python and to call the key bits of Javascript code from Python. There are two major Javascript interpreters, Mozilla’s SpiderMonkey and Google’s V8. Unfortunately the python-spidermonkey project is dead and there’s no way of telling if it works with later versions of SpiderMonkey. The PyV8 project, by contrast, is still under active development.

Although PyV8 has a wiki page entitled How To Build, it’s not simple to get the project built. They recommend using prebuilt packages, but there are none for recent versions of Ubuntu. In this post I’ll describe how to build it on Ubuntu 11.10 and give a simple example of it in action.

The first step is to make sure you have the appropriate packages. There may be others that are required and not part of the default install, but these are what I had to install.

sudo aptitude install scons libboost-python-dev

Next you need to checkout both the V8 and PyV8 projects using the commands below.

svn checkout http://v8.googlecode.com/svn/trunk/ v8
svn checkout http://pyv8.googlecode.com/svn/trunk/ pyv8

The key step before building PyV8 is to set the V8_HOME environment variable to the directory where you checked out the V8 code. This allows PyV8 to patch V8 and build it as a static library rather than the default dynamic library. Once you’ve set that you can use the standard Python setup.py commands to build and install the library.

cd v8
export V8_HOME=`pwd`
cd ../pyv8
python setup.py build
sudo python setup.py install

In future I’ll write more detailed posts about how to use PyV8, but let’s start with a simple example. Mustache is a simple template language that is ideal when you want to create templates in Javascript. There’s actually a Python implementation of Mustache, but let’s pretend that it doesn’t exist.
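The heart of Mustache-style substitution is easy to picture with a toy Python sketch (an illustration of the idea only, not the real mustache.js or its Python port, which also handle sections, escaping and partials):

```python
import re

def render(template, context):
    """Replace each {{ name }} tag with the matching value from context."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda match: str(context.get(match.group(1), "")),
        template,
    )

print(render("Javascript in Python is {{ opinion }}", {"opinion": "cool"}))
# Javascript in Python is cool
```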

To start, import the PyV8 library and create a JSContext object. These are equivalent to sub-interpreters, so you can have several instances of your Javascript code running at once.

>>> import PyV8
>>> ctxt = PyV8.JSContext()

Before you can run any Javascript code you need to enter() the context. You should also exit() it when you are finished. JSContext objects can be used with with statements to automate this, but for a console session it’s simplest to call the methods explicitly. Next we call eval() to run our Javascript code, first to read in the Mustache library and then to set up our template as a variable.

>>> ctxt.enter()
>>> ctxt.eval(open("mustache.js").read())
>>> ctxt.eval("var template = 'Javascript in Python is {{ opinion }}';")

The final stage is to render the template by dynamically creating some Javascript code. The results of the expressions are returned as Python objects, so here rendered contains a Python string.

>>> import random
>>> opinion = random.choice(["cool", "great", "nice", "insane"])
>>> rendered = ctxt.eval("Mustache.to_html(template, { opinion: '%s' })" % (opinion, ))
>>> print rendered
Javascript in Python is nice

There’s much more to PyV8 than I’ve described in this post, including calling Python code from Javascript, but unfortunately the V8 and PyV8 documentation is a bit lacking. I will post more of my discoveries in future posts.


Photo of Scania 500/560/580/620 hp 16-litre Euro 3/4/5 V8 engine by Scania Group.


Written by Andrew Wilkinson

January 23, 2012 at 2:15 pm

Posted in python


Back Garden Weather in CouchDB (Part 4)

leave a comment »

In this series of posts I’m describing how I created a CouchDB CouchApp to display the weather data collected by the weather station in my back garden. In the previous post I showed you how to display a single day’s weather data. In this post we will look at processing the data to display it by month.

The data my weather station collects consists of a record every five minutes. This means that a 31-day month will consist of 8,928 records. Unless you have space to draw a graph almost nine thousand pixels wide there is no point in wasting valuable rendering time processing that much data. Reducing the data to one point per hour gives us a much more manageable 744 data points for a month. A full year’s worth of weather data consists of 105,120 records; even reduced to one point per hour that is still 8,760 points. When rendering a year’s worth of data it is clearly worth reducing the data even further, this time to one point per day.
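Those record counts all follow from the five-minute sampling interval, as a quick back-of-the-envelope check shows:

```python
RECORDS_PER_HOUR = 60 // 5            # one record every five minutes
records_per_day = RECORDS_PER_HOUR * 24

print(records_per_day * 31)           # 8928 records in a 31-day month
print(24 * 31)                        # 744 hourly points for that month
print(records_per_day * 365)          # 105120 records in a year
print(24 * 365)                       # 8760 hourly points for a year
```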

How do we use CouchDB to reduce the data to one point per hour? Fortunately CouchDB’s map/reduce architecture is perfect for this type of processing. CouchDB will also cache the results of the processing automatically so it only needs to be run once rather than requiring an expensive denormalisation process each time some new data is uploaded.

First we need to group the five-minute weather records together into groups for each hour. We could do this by taking the unix timestamp of each record and rounding it to the nearest hour. The problem with that approach is that the keys are included in the URLs, and if you can calculate unix timestamps in your head then your maths is better than mine! To make the URLs more friendly we’ll use a Javascript implementation of sprintf to build a human-friendly representation of the date and time, excluding the minute component.

function(doc) {
    // !code vendor/sprintf-0.6.js

    emit(sprintf("%4d-%02d-%02d %02d", doc.year, doc.month, doc.day, doc.hour), doc);
}
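The key format is easy to sanity-check in Python (a sketch of the string the map function emits, using the same field names as the weather documents):

```python
def hour_key(year, month, day, hour):
    # Equivalent of sprintf("%4d-%02d-%02d %02d", ...) in the map function
    return "%4d-%02d-%02d %02d" % (year, month, day, hour)

print(hour_key(2012, 1, 20, 9))   # 2012-01-20 09
```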

CouchDB will helpfully group documents with the same key, so all the records from the same hour will be passed to the reduce function. What you cannot guarantee, though, is that all the records will be passed in one go; instead you must ensure that your reduce function can operate on its own output. You can tell whether you are ‘rereducing’ the output of the reduce function by checking the third parameter to the function.

function(keys, values, rereduce) {
    var count = 0;

    var timestamp = values[0].timestamp;
    var status = values[0].status;
    var temp_in = 0;
    var temp_out = 0;
    var abs_pressure = 0;
    var rain = 0;
    var wind_ave = 0;
    var wind_gust = 0;

    var wind_dir = [];
    for(var i=0; i<8; i++) { wind_dir.push({ value: 0 }); }

To combine the multiple records it makes sense to average most of the values. The exceptions are the amount of rain, which should be summed; the wind direction, which should be a count of the gusts in each direction; and the wind gust speed, which should be the maximum value. Because your reduce function may be called more than once, calculating the average value is not straightforward. If you simply calculate the average of the values passed in then you will be calculating an average of averages, which is not the same as the average of the full original data. To work around this we calculate the average of the values and store it alongside the number of values it represents. Then, when we rereduce, we multiply each average by its count before dividing by the total count.
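The difference between a naive average of averages and the weighted version is easy to demonstrate in Python (a sketch of the principle with made-up readings, not the CouchDB reduce function itself):

```python
def reduce_temps(values, rereduce):
    """Average temperatures, weighting each partial average by its count on rereduce."""
    total = 0.0
    count = 0
    for value in values:
        vcount = value["count"] if rereduce else 1
        total += value["temp"] * vcount
        count += vcount
    return {"temp": total / count, "count": count}

# Two first-pass reductions over batches of raw readings...
a = reduce_temps([{"temp": 10.0}, {"temp": 20.0}, {"temp": 30.0}], False)
b = reduce_temps([{"temp": 40.0}], False)

# ...then a rereduce over the partial results.
combined = reduce_temps([a, b], True)
print(combined["temp"])               # 25.0, the true average of all four readings
print((a["temp"] + b["temp"]) / 2)    # 30.0, the wrong average-of-averages answer
```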

In the previous, simplified, code snippet we set up the variables that will hold the averages.

    for(var i=0; i<values.length; i++) {
        var vcount;
        if(rereduce) { vcount = values[i].count; } else { vcount = 1; }

We now loop through each of the values and work out how many weather records the value we’re processing represents. The initial pass will just represent a single record, but in the rereduce step it will be more.

        temp_in = temp_in + values[i].temp_in * vcount;
        temp_out = temp_out + values[i].temp_out * vcount;
        abs_pressure = abs_pressure + values[i].abs_pressure * vcount;

Here we build up the total values for temperature and pressure. Later we’ll divide these by the number of records to get the average. The next section adds the rain count up and selects the maximum wind gust.

        rain = rain + values[i].rain;

        wind_ave = wind_ave + values[i].wind_ave * vcount;
        if(values[i].wind_gust > wind_gust) { wind_gust = values[i].wind_gust; }

So far we’ve not really had to worry about the possibility of a rereduce, but for wind direction we need to take it into account. An individual record has a single wind direction, but for hourly records we want to store a count of the number of times each direction was recorded. If we’re rereducing we need to loop through all the directions and combine them.

        if(rereduce) {
            for(var j=0; j<8; j++) {
                wind_dir[j]["value"] += values[i].wind_dir[j]["value"];
            }
        } else if(values[i].wind_ave > 0 && values[i].wind_dir >= 0 && values[i].wind_dir < 16) {
            wind_dir[Math.floor(values[i].wind_dir/2)]["value"] += 1;
        }

        if(values[i].timestamp < timestamp) { timestamp = values[i].timestamp; }
        count = count + vcount;
    }
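The wind-direction bucketing looks like this in a Python sketch (the station reports direction as an integer from 0 to 15; halving into eight sectors mirrors the Math.floor(values[i].wind_dir/2) above):

```python
def count_gust(wind_dir_counts, raw_dir, wind_ave):
    """Record a gust in one of eight sectors, matching Math.floor(dir / 2)."""
    if wind_ave > 0 and 0 <= raw_dir < 16:
        wind_dir_counts[raw_dir // 2] += 1
    return wind_dir_counts

counts = [0] * 8
count_gust(counts, 0, 1.0)    # north -> sector 0
count_gust(counts, 15, 1.0)   # north-north-west -> sector 7
count_gust(counts, 3, 0.0)    # calm, so the direction is ignored
print(counts)                 # [1, 0, 0, 0, 0, 0, 0, 1]
```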

The final stage is to build the object that we’re going to return. This stage is very straightforward, we just need to divide the numbers we calculated before by the count of the number of records. This gives us the correct average for these values.

    return {
            "count": count,
            "status": status,
            "timestamp": timestamp,
            "temp_in": temp_in / count,
            "temp_out": temp_out / count,
            "abs_pressure": abs_pressure / count,
            "rain": rain,
            "wind_ave": wind_ave / count,
            "wind_gust": wind_gust,
            "wind_dir": wind_dir,
        };
}

Now we have averaged the weather data into hourly chunks we can use a list, like the one described in the previous post, to display the data.

In the next and final post in this series I’ll discuss the records page on the weather site.


Photo of Weather front by Paul Wordingham.

Written by Andrew Wilkinson

January 20, 2012 at 2:15 pm

Posted in couchdb


Back Garden Weather in CouchDB (Part 3)

with 2 comments

In this series I’m describing how I used a CouchDB CouchApp to display the weather data collected by a weather station in my back garden. In the first post I described CouchApps and how to get a copy of the site. In the second post we looked at how to import the data collected by PyWWS and how to render a basic page in a CouchApp. In this post we’ll extend the basic page to display real weather data.

Each document in the database is a record of the weather data at a particular point in time. As we want to display the data over a whole day we need to use a list function. List functions work similarly to the show function we saw in the previous post. Unlike show functions, list functions don’t have a document passed in; instead they can call a getRow function which returns the next row to process. When there are no rows left it returns null.

Show functions process an individual document and return a single object containing the processed data and any HTTP headers. Because a list function can process a potentially huge number of rows, they return data in a different way. Rather than returning a single object containing the whole response, list functions must return their response in chunks. First you call the start function, passing in any headers that you want to return. Then you call send one or more times to return parts of your response. A typical list function will look like the code below.

function (head, req) {
    start({ "headers": { "Content-Type": "text/html" }});

    send(header);
    while(row = getRow()) {
        data = /* process row */;
        send(data);
    }
    send(footer);
}
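If it helps, the same control flow maps naturally onto a Python generator (a sketch of the chunked-response idea, not CouchDB’s actual API):

```python
def render_rows(rows):
    """Yield the response in chunks: header, one chunk per row, then footer."""
    yield "<header>"
    for row in rows:
        yield "<row>%s</row>" % row
    yield "<footer>"

print("".join(render_rows(["a", "b"])))
# <header><row>a</row><row>b</row><footer>
```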

To process the weather data we can’t follow this simple format because we need to split each document up and display the different measurements separately. Let’s look at the code for creating the day page. The complete code is a bit too long to include in a blog post, so check out the first post in this series to find out how to get a complete copy of the code.

To start the function we load the templates and code that we need using the CouchApp macros. Next we return the appropriate Content-Type header, and then we create the object that we’ll pass to Mustache when we’ve processed everything.

function(head, req) {
    // !json templates.day
    // !json templates.head
    // !json templates.foot
    // !code vendor/couchapp/lib/mustache.js
    // !code vendor/sprintf-0.6.js
    // !code vendor/date_utils.js

    start({ "headers": { "Content-Type": "text/html" }});

    var stash = {
        head: templates.head,
        foot: templates.foot,
        date: req.query.startkey,
    };

Next we build a list of the documents that we’re processing so we can loop over the documents multiple times.

    var rows = [];
    while (row = getRow()) {
        rows.push(row.doc);
    }

To calculate maximum and minimum values we need to choose a starting value and then run through each piece of data to see whether it is higher or lower than the current record. As the data collector of the weather station is separate from the outside sensors, occasionally they lose their connection. This means that we can’t just pick the value in the first document as our starting value; instead we must choose the first document where the connection with the outside sensors was made.

    if(rows.length > 0) {
        for(var i=0; i<rows.length; i++) {
            if((rows[i].status & 64) == 0) {
                max_temp_out = rows[i].temp_out;
                min_temp_out = rows[i].temp_out;
                max_hum_out = rows[i].hum_out;
                min_hum_out = rows[i].hum_out;

                break;
            }
        }
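In Python terms the search for a usable starting record looks like this (a sketch with made-up records; bit 64 of status is set when contact with the outside sensors has been lost):

```python
SENSOR_LOST = 64   # status bit set when the outside sensors are disconnected

def first_valid(rows):
    """Return the first record whose outdoor readings can be trusted."""
    for row in rows:
        if (row["status"] & SENSOR_LOST) == 0:
            return row
    return None

rows = [
    {"status": 64, "temp_out": None},   # connection lost, no outdoor data
    {"status": 0, "temp_out": 4.5},     # first trustworthy reading
]
print(first_valid(rows)["temp_out"])    # 4.5
```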

Now we come to the meat of the function. We loop through all of the documents and process them into a series of arrays, one for each graph that we’ll draw on the final page.

        for(var i=0; i<rows.length; i++) {
            var temp_out = null;
            var hum_out = null;
            if((rows[i].status & 64) == 0) {
                temp_out = rows[i].temp_out;
                hum_out = rows[i].hum_out;

                total_rain = total_rain + rows[i].rain;
                rainfall.push({ "time": time_text, "rain": rows[i].rain });

                wind.push({ "time": time_text, "wind_ave": rows[i].wind_ave, "wind_gust": rows[i].wind_gust });

            }

            pressure.push({ "time": time_text, "pressure": rows[i].abs_pressure });

            temps.push({ "time": time_text, "temp_out": temp_out, "temp_in": rows[i].temp_in });

            humidity.push({ "time": time_text, "hum_in": rows[i].hum_in, "hum_out": hum_out });
        }
    }

Lastly we take the stash, which in a bit of code I’ve not included here has the data arrays added to it, and use it to render the day template.

    send(Mustache.to_html(templates.day, stash));

    return "";
}

Let’s look at a part of the day template. The page is a fairly standard use of the Google Chart Tools library. In this first snippet we render the maximum and minimum temperature values, and a blank div that we’ll fill with the chart.

<h3>Temperature</h3>

<p>Outside: <b>Maximum:</b> {{ max_temp_out }}<sup>o</sup>C <b>Minimum:</b> {{ min_temp_out }}<sup>o</sup>C</p>
<p>Inside: <b>Maximum:</b> {{ max_temp_in }}<sup>o</sup>C <b>Minimum:</b> {{ min_temp_in }}<sup>o</sup>C</p>

<div id="tempchart_div"></div>

In the following Javascript function we build a DataTable object that we pass to the library to draw a line chart. The {{#temps}} and {{/temps}} construction is the Mustache way of looping through the temps array. We use it to dynamically write out Javascript code containing the data we want to render.

function drawTempChart() {
    var data = new google.visualization.DataTable();
    data.addColumn('string', 'Time');
    data.addColumn('number', 'Outside');
    data.addColumn('number', 'Inside');

    data.addRows([
    {{#temps}}
        ['{{ time }}', {{ temp_out }}, {{ temp_in }}],
    {{/temps}}
        null]);

    var chart = new google.visualization.LineChart(document.getElementById('tempchart_div'));
    chart.draw(data, {width: 950, height: 240, title: 'Temperature'});
}
google.setOnLoadCallback(drawTempChart);

We now have a page that displays all the collected weather data for a single day. In the next post in this series we’ll look at how to use CouchDB’s map/reduce functions to process the data so we can display it by month and by year.


Photo of almost may by paul bica.

Written by Andrew Wilkinson

January 12, 2012 at 1:46 pm

Posted in couchdb


Back Garden Weather in CouchDB (Part 2)

with one comment

In my last post I described the new CouchDB-based website I have built to display the weather data collected from the weather station in my back garden. In this post I’ll describe how to import the data into CouchDB and the basics of rendering a page with a CouchApp.

PyWWS writes out the raw data it collects into a series of CSV files, one per day. These are stored in two nested directories, the first being the year, the second being year-month. To collect the data I use PyWWS’s live logging mode, which consists of a constantly running process talking to the data collector. Every five minutes it writes a new row into today’s CSV file. Another process then runs every five minutes to read the new row and import it into the database.

Because CouchDB stores its data using an append-only format you should aim to avoid unnecessary updates. The simplest way to write the import script would be to import each day’s data every five minutes. This would cause the database to balloon in size, so instead we query the database to find the last update time and import everything after that. Each update is stored as a separate document in the database, with the timestamp attribute containing the unix timestamp of the update.
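The incremental import then boils down to filtering by the last stored timestamp (a sketch with made-up rows; the real script parses PyWWS’s CSV files and posts the documents to CouchDB):

```python
def rows_to_import(csv_rows, last_timestamp):
    """Keep only the rows newer than the most recent document in the database."""
    return [row for row in csv_rows if row["timestamp"] > last_timestamp]

rows = [
    {"timestamp": 1325500000, "temp_out": 3.1},
    {"timestamp": 1325500300, "temp_out": 3.0},
    {"timestamp": 1325500600, "temp_out": 2.8},
]
new_rows = rows_to_import(rows, 1325500300)
print(len(new_rows))   # 1 row left to import
```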

The map code to get the most recent update is quite simple: we just need to emit the timestamp for each update. The timestamp is emitted as the key so that we can filter the range of updates. It is also emitted as the value so that we can use it in the reduce function.

function(doc) {
    emit(doc.timestamp, doc.timestamp);
}

The reduce function is a fairly simple way to calculate the maximum value of the keys. I’ve mostly included it here for completeness.

function(keys, values, rereduce) {
    if(values.length == 0) {
        return 0;
    }

    var m = values[0];

    for(var i=0; i<values.length; i++) {
        if(values[i] > m) { m = values[i]; }
    }

    return m;
}

You’ll find the import script that I use in the directory you cloned in the previous post, when you got a copy of the website.

So, we’ve got some data in our database. How do we display it on a webpage? First, let’s consider the basics of rendering a webpage.

CouchDB has two ways to display formatted data: show and list functions. Show functions allow you to format a single document, for example a blog post. List functions allow you to format a group of documents, such as the comments on a post. Because viewing a single piece of weather data is not interesting the weather site only uses list functions, but to get started let’s create a show function, as these are simpler.

CouchApp doesn’t come with a templating library, but a common choice is Mustache. The syntax is superficially like Django templates, but in reality it is far less powerful. For a simple website like this, Mustache is perfect.

In the show directory of your CouchApp create a new file, test.js. As with the map/reduce functions this file contains an anonymous function. In this case the function takes two parameters, the document and the request object, and returns an object containing the response body and any headers.

function (doc, req) {
    // !json templates.records
    // !json templates.head
    // !json templates.foot
    // !code vendor/couchapp/lib/mustache.js

The function begins with some magic comments. These are commands to CouchApp which include the referenced code or data in the function. This allows you to keep shared data separate from the functions that use it.

The first !json command causes the compiler to load the file templates/records.* and add it to a templates object, under the records attribute.

The !code command works similarly, but it loads the specified file and includes the code in your function. Here we load the Mustache library, but I have also used it to load a javascript implementation of sprintf. You might want to load some of your own common code using this method.

    var stash = {
        head: templates.head,
        foot: templates.foot
    };

    return { body: Mustache.to_html(templates.records, stash), headers: { "Content-Type": "text/html" } };
}

Firstly we build an object containing the data we want to use in our template. As Mustache doesn’t allow you to extend templates, we need to pass the header and footer HTML in as data.

As mentioned, the return type of a show function is an object containing the HTML and any HTTP headers. We only want to include the content type of the page, but you could return any HTTP header in a similar fashion. To generate the HTML we call the to_html function provided by Mustache, passing in the template and the data object we prepared earlier.

Now that we have data in our database and can create simple pages using a CouchApp, we can move on to showing real data. In the next post I will describe the list functions used to show summarized day and month weather information.


Photo of its raining..its pouring by samantha celera.

Written by Andrew Wilkinson

January 5, 2012 at 1:58 pm

Posted in couchdb


Back Garden Weather in CouchDB (Part 1)

with 7 comments

When she was younger my wife wanted to be a meteorologist. That didn’t pan out, but our recent move found us with a garden, which we’ve not had before. This gave me the opportunity to buy her a weather station. I didn’t just choose any old station though; I wanted one that did wind and rain as well as the usual temperature, pressure and humidity. And, the deciding factor, a USB interface with Linux support. Fortunately the excellent PyWWS supports a range of weather stations, including the one I bought.

I’m not going to go into how I mounted the system, or configured PyWWS. That’s all covered in the documentation. PyWWS can produce a static website, but as someone who earns his living building websites I wanted something a bit better. Continuing my experiments with CouchDB I decided to build the website as a CouchApp.

As well as allowing you to query your data with Javascript, CouchDB lets you serve webpages directly out of your database. If you visit welwynweather.co.uk you’ll notice that you’re redirected to a URL that contains arguments that look a lot like those used to query a view. That’s because that’s exactly what’s going on. Things become clearer when you discover that http://www.welwynweather.co.uk is an alias for http://db.welwynweather.co.uk/_design/weather/_rewrite/. Now you can see a more complete CouchDB URL, albeit without the database name. db.welwynweather.co.uk points to an Apache reverse proxy that routes requests through to CouchDB.

Over the next few posts I’ll detail how the CouchApp works, but to get started you can clone my app and poke it yourself. Once you’ve installed the couchapp command line client simply run couchapp clone http://db.welwynweather.co.uk/_design/weather. This will give you a directory, weather, that contains a number of subdirectories including templates and views which contain the complete website.

To deploy the site to your own server you need to create a database and then run couchapp push weather http://localhost:5984/welwynweather. Visiting http://localhost:5984/welwynweather/_design/weather/_rewrite/ should show you the site. You’ll need some data though, and you can use CouchDB replication to pull my data to your server. Using Futon simply set http://db.welwynweather.co.uk/ as the replication source and your database as the destination and you’ll quickly get a complete copy of the database.

When replicating my data you currently cannot use continuous replication. When replication completes CouchDB calls POST /_ensure_full_commit, but obviously I’ve disabled POST, PUT and DELETE on my server. This causes replication to fail and restart from the beginning. The data will already have been copied, but CouchDB will copy it again. If you have any ideas on how to avoid this, please answer my StackOverflow question.

The website consists of four main pages. When you visit you are redirected to a page that shows the weather for the current day. Clicking on the date at the top of the page lets you also view the weather by month and by year. The daily weather pages show as much detail as is recorded by the station; in my case this is an update every five minutes. The monthly page is much the same except that the values are averaged across each hour. The yearly page is a bit different as it shows a single point for each day. An average temperature for each day is not that useful, so we calculate the high and low for each day and display those.
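The daily high/low calculation for the yearly page is a simple fold over one day’s readings, sketched here in Python with made-up temperatures:

```python
def daily_extremes(temps):
    """Return the (low, high) pair for a single day's temperature readings."""
    low = high = temps[0]
    for temp in temps[1:]:
        low = min(low, temp)
        high = max(high, temp)
    return low, high

print(daily_extremes([3.2, 1.5, 7.8, 6.0]))   # (1.5, 7.8)
```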

The final page is the records page. This displays information like the highest and lowest temperature ever recorded and the heaviest rain by hour and by day. The previous three pages are all fully generated by the server. The records page is a bit different, though: calculating all the records in one step is complicated, so instead we use AJAX to load each record individually. This means we can focus on each record, keeping the code simple.

In the next post I’ll discuss how I import data into CouchDB and the basics of rendering a page in a CouchApp.


If you visit the site you may find that there is no recent weather data. This is because I run PyWWS on my MythTV box. Rather than running the PC all the time the weather data only updates when a programme is being recorded, or I’m watching TV.


Photo of Rain by Moyan Brenn.

Written by Andrew Wilkinson

December 2, 2011 at 12:00 pm

Posted in couchdb


Programming Documentary

leave a comment »

I’m a huge science and engineering documentary geek. I prefer watching documentaries over all other forms of television. It doesn’t really matter what the documentary is about, I’ll usually watch it. After getting ready for my wedding I had a bit of time before I had to walk down the aisle so I watched a documentary about pilots learning to land Marine One at the White House. There probably aren’t many people who would choose to spend that time that way.

Science documentaries have experienced a renaissance over the last few years, particularly on the BBC. The long-running Horizon series has been joined by a raft of other mini-series presented by Brian Cox, Alice Roberts, Marcus du Sautoy, Jim Al-Khalili and Michael Mosley. These cover a large part of the sciences, including Chemistry, Biology and Physics. Physics in particular is regularly on our screens. Whether it’s talking about quantum mechanics or astronomy or something else, it seems that Physics has never been more popular.

As someone who writes computer programmes for a living this makes me worry that your average man on the street may end up with a better understanding of quantum mechanics than they do of the computer on their desk, or in their pocket.

It wasn’t always like this. Back in 1981 the BBC ran the BBC Computer Literacy project, which attempted to teach the public to program using the BBC Micro through a ten part television series.

Clearly if a project like this were to be attempted today there would be no need for the BBC to partner with hardware manufacturers. People have access to many different programmable devices; they just don’t know how to program them.

Recent programmes that have focused on computers were Rory Cellan-Jones’ Secret History of Social Networking and The Virtual Revolution by Aleks Krotoski. Neither of these were technical documentaries; instead they focused on the business, cultural and sociological impacts of computers and the internet.

It’s not that the more technical aspects of computers don’t appear as part of other documentaries; recently Marcus Du Sautoy announced that he is filming an episode of Horizon on Artificial Intelligence. It won’t air until next spring, so it’s hard to comment, but I suspect it will focus on the outcomes of the software rather than the process of how computers can be made to appear intelligent.

Jim Al-Khalili’s recent series on the history of electricity, Shock and Awe, ends with a section on the development of the transistor. During it, and over a picture of the LHC, he says something rather interesting.

Our computers have become so powerful that they are helping us to understand the universe in all its complexity.

If you don’t understand computers it’s impossible to understand how almost all modern science is done. Researchers in all disciplines need to be proficient at programming in order to analyse their own data. Business is run on software, which is often customised to the individual requirements of the company. It boggles my mind that people can be so reliant on computers yet have so little idea of how they work.

So, what would my ideal programming documentary cover? The most obvious thing is the internet. A history of computer networking could begin with the development of the first computer networks and describe how TCP/IP divides data into packets and routes them between computers. It could move on to HTTP and HTML, both of which are fundamentally simple yet apply to our everyday lives. To bring things up to date it could focus on Google and Facebook and show people the inside of a data centre. I suspect that most people have no idea where their Google search results come from.

I doubt that there is much demand for an updated series as long as the 10-part original, but the soon-to-be-released Raspberry Pi machine would be an ideal way to recreate the tinkering appeal of the original BBC Micro. There’s something magical about seeing a program you’ve written appear on the TV in your living room, rather than on the screen of your main PC. An alternative would be to provide an interpreter as part of a website, so you can just type in a URL and start programming.

A documentary focusing on programming would have a difficulty that the original series never had: the fact that computing power is commonplace means that people are used to software created by large teams with dedicated designers. An individual with no experience can’t hope to come close to something like that. Fortunately computers are so much more powerful today that much of the complexity you once needed to cope with can be abstracted away. Libraries like VPython make it very simple to produce complicated and impressive 3D graphics.

I’m certainly not the only person who wants to help teach the masses to program, but realistically you need an organisation like the BBC to do something that might actually make a difference. Do I think that you could create a compelling and informative documentary that might inspire people to program, and give them a very basic understanding of how to do it? Definitely.


Photo of TV Camera man by Chris Waits.

Photo of The Large Hadron Collider/ATLAS at CERN by Image Editor.

Photo of Raspberry PI by Paul Downey.

Written by Andrew Wilkinson

November 25, 2011 at 1:00 pm

Posted in bbc


Sonos Review

leave a comment »

Recently I purchased a basic Sonos system, and after just a couple of weeks I’m already in love with it and have more music playing in my house than ever before.

For those of you who haven’t come across Sonos before, they produce a multi-room wireless music system. The system consists of a number of devices that connect to each other using a proprietary mesh network. You can buy Sonos devices with built-in speakers, or ones that connect to your own, as well as a dock for your iPhone and a bridge to join your existing network to the Sonos wireless network.

I purchased a Sonos Play:3, a Wireless Dock and ZoneBridge (all three links contain an affiliate id) so that’s what I’m reviewing here.

The Sonos Play:3 is a fairly small, unassuming, single-speaker block. It contains three individual speakers, while its larger brother, the Play:5 (affiliate link), contains five. The back has a power socket and a network port. The top has a mute button, as well as a volume up and down rocker. The other devices are similarly spartan, yet stylish, in their design with minimal on-device buttons.

First you need to plug the bridge into your network using the supplied ethernet cable. Then, after installing the PC software or their iPhone app, you can create a Sonos network. Just follow the on-screen prompts and press the ‘Join’ button on the device. For each of your other Sonos devices, plug them in, select “Add new device” in the software or in the app, press the ‘Join’ button (or Mute + Volume Up on the Play:3) and the new device will be found and added to the network.

The setup is supposed to be quick and straightforward, and for the first two devices it was. When I tried to add my Play:3 to the network, though, it repeatedly could not be found. The white light on the top of the device stopped flashing, indicating that it had connected, but the PC software still did not find it. It’s not clear what happened, but I may have plugged it in before the previous device had finished configuring. Doing a factory reset solved the issue.

The simplest thing to play on the Sonos system is internet radio. The controller comes preloaded with a huge range of radio stations. Just select the one you want and after a short pause it’ll come out of your speaker on the other side of the room. Not only is the process of listening to the radio incredibly simple, but the sound from such a small box is amazing. I’m not an audiophile, but it was loud, clear and had plenty of bass.

To play your own music collection you need to have it available on a Windows share. I already had this set up, so I just had to tell Sonos where to find it. After a short while it had crawled my complete collection and I could select by artist, album, track or genre right from my iPhone. As with the radio it’s quick to start playing and the sound quality is excellent.

It was at this point that I came across the first of the few bugs I’ve found in the Sonos system. Originally I had ripped my music in Ogg Vorbis format. Then, when I got my iPhone, I had to re-rip it as MP3, so some of my albums have both Ogg and MP3 files of the same music in the same directory. The Sonos player does not appear to like this, and although it can play both formats, neither would appear in the controller. Where only one copy exists the files were found with no problems.

I also had some difficulties when my network was heavily loaded. While upgrading one of my PCs to the latest Ubuntu and listening to some music, playback skipped heavily and eventually the Play:3 crashed. Another issue is that my music is stored on my MythTV box, which turns itself on and off to record TV. I forgot to lock the box, so it switched itself off mid-track. Somewhat annoyingly, the Play:3 stopped playing mid-track as well. I would have thought that the Sonos would have enough memory to cache at least the whole track, if not the whole playlist.

The iPhone dock is a very useful addition to my house, if only because I just have to slip my phone in and it starts charging. It is certainly much easier to connect than a cable, and much tidier too. Unfortunately you cannot stream music from your iPhone/iPod Touch unless it is placed in the dock. This is a limitation imposed by Apple rather than Sonos, so I have to forgive them. When it’s placed in the dock, any sound your device makes will be played through your speaker. This works great when you’re playing some music or a podcast through your phone, but I had a timer set on my phone, which was charging while I listened to some internet radio, and the alarm came out of the speaker instead. While surprising, this is just the dock working as expected, and you can turn off the autoplay feature.

I have my old iPhone 3G as well as a much newer iPhone 4S, and if I want to keep my MythTV box off I can dock the old phone, browse its music collection and select what to listen to from the 4S. This is the real power of the Sonos concept: all your music, everywhere in your house.

The criticisms I’ve made are small points, and despite having had my system for just two weeks I already can’t imagine life without it. I’m willing to forgive the somewhat high price and am saving my pennies to buy another couple of Play:5s or Play:3s to spread around the house.


Photo of Sonos S5 by Robert Wetzlmayr.
Photo of Play:3 and iPhone courtesy of Sonos.

Written by Andrew Wilkinson

November 18, 2011 at 2:08 pm

Posted in review
