Andrew Wilkinson

Random Ramblings on Programming

FitBit Ionic Review


Since I received my Pebble Steel back in 2014 I knew I never wanted to go back to using a normal watch. Having notifications and apps on my wrist was just too useful to me. I skipped the Pebble Time, but when the Time 2 was announced I happily put in a preorder. Unfortunately it was not to be: Pebble folded and was sold to FitBit. If Pebble wasn’t able to survive then, as an existing FitBit user, having FitBit as the buyer was probably the best option.

The idea of FitBit’s scale and expertise in building hardware, combined with Pebble’s excellent developer platform, was an enticing prospect. Rather than switch to an Apple Watch (or Android Wear, although that would have required a new phone) I decided to wait for the fruits of the combined company’s labour to be released.

I was getting a bit itchy, and my trusty Pebble Steel was showing its age, but eventually the FitBit Ionic was announced. A few days before the official release date my preorder arrived. It’s now been two weeks of wearing it nearly 24/7, so it seems like a reasonable time to post my thoughts.

First impressions of the hardware are excellent. Most reviews have criticised the looks, but I’m actually a fan. I like the way the bands transition into the watch itself, and sure, it does just look like a black square when the screen is off, but that’s the case for all current smartwatches. The buttons have a nice firmness to them, and the touchscreen is responsive. I have had some issues swiping to clear notifications, but I think that’s more to do with the touch targets in the software than the touchscreen itself, as I’ve not had issues elsewhere.

The key hardware concerns are the screen and battery life. The bottom line is that both are excellent. The screen is bright and clear, even in strong sunlight. I’ve not tested the battery life extensively because I’m wearing it essentially all day. I only take the Ionic off to shower, and it appears to only lose 15-20% per day, and a quick 15 minute charge per day is enough to keep it topped up.

The one big thing I miss from my Pebble is an always-on screen. If you do the lift-and-twist “I’m looking at my watch” gesture then the Ionic turns on reliably, but it’s rare that I actually do that. Looking at my watch tends to be a much more subtle movement, which it only recognises occasionally. I have found myself pressing a button to turn the screen on, which after having an always-on screen feels like a step backwards.

At the moment it’s probably too early to comment on the software side. The core features are all there and work well. Notifications from apps, texts and calls all work. I’ve been able to track various types of exercise, including bike rides which were tracked with the built in GPS and synced automatically to Strava. Heart rate monitoring and step count also appear reasonably accurate, as you would expect given FitBit’s history.

Unfortunately the key reason I bought the Ionic – that they had Pebble’s software team building the SDK – is not yet visible. There is a small set of watch faces (I’m a fan of the Cinemagraph), and some built-in apps, but as yet there’s no sign of any externally developed apps. It’s early days though, and hopefully a developer community will form soon.

So, would I recommend the FitBit Ionic? Yes, but more on potential than current execution. The hardware appears to be there, it just needs a bit more time for the software to mature and apps to be developed.

FitBit Ionic photograph by FitBit.


Written by Andrew Wilkinson

October 18, 2017 at 12:00 pm

Posted in review


Leading Without Deep Technical Knowledge


In my previous jobs, when I’ve been promoted to a leadership role it has been as a result of being the most experienced member of the team. Having a deep knowledge of the business, the code base and the technologies we’re using meant I was already an authority on most topics the team needed to discuss, and could weigh in on a discussion with a well-formed and considered opinion.

When I changed companies at the end of last year I came to Ocado Technology as a team lead for an existing team, using a technology stack I wasn’t familiar with. In fact Ocado are a Java-based company, and I had never used Java before, so not only was I unfamiliar with the frameworks and libraries used, I wasn’t even familiar with the language the code was written in!

Leading in a situation like this required a complete change in how I approached problems. When a stakeholder or the product owner approached me with a challenge, rather than immediately being able to respond with a rough solution, a vague estimate or a timeline, I needed to defer to my team, let them propose a solution and estimate it, and then fit it into our schedule. I might challenge them on some points, but it was their plan. I quickly needed to learn who knew the most about which systems, so I could get the right people involved in discussions early.

Previously, although I was able to give initial feedback on a potential project, I would still allow the team to discuss it, to propose alternate solutions and to estimate. The change is that now my contribution is much more about making sure the right people are talking, and helping to avoid misunderstandings when the business and my developers are accidentally talking at cross-purposes.

While this change has definitely pushed me out of my comfort zone, it has also given me space to focus on a different area of my leadership skills. Ocado prides itself on its values, one of which is its servant leadership philosophy. By not having the knowledge to make decisions myself I am forced to empower my team to make decisions on how they want to solve problems.

It’s not just a case of facilitating discussions though. I may not know the details of our code base, or the intricacies of every library, but my knowledge of software design patterns and systems architecture is valid whatever language is being used, and my opinions are as strong as ever. It is normal for developers to immediately jump to the simplest solution to a problem within the framework of the existing code. As an outsider my first instinct is usually to take a step back, ask why the system is designed like that, and propose a bigger solution that resolves some technical debt, rather than focussing only on the issue at hand.

This change in role has made me realise that even when I was the most experienced in the code, language or framework I should have made more of an effort to devolve the decision making process. Not to stop expressing my opinions, or involving myself in the discussions, but to explicitly encourage others to contribute, and make sure they are taking part in discussions. This has resulted in people being more bought in to solutions, and encouraged a much closer team with a greater feeling of ownership over our code. Being forced to make this change to my style has undoubtedly made me a better manager, and a better developer too.

Photo of this way or that by Robert Couse-Baker.

Written by Andrew Wilkinson

October 11, 2017 at 1:20 pm

Posted in management, ocado


Accessing FitBit Intraday Data


For Christmas my wife and I bought each other a new FitBit One device (Amazon affiliate link included). These are small fitness tracking devices that monitor the number of steps you take, how high you climb and how well you sleep. They’re great for providing motivation to walk that extra bit further, or to take the stairs rather than the lift.

I’ve had the device for less than a week, but already I’m feeling the benefit of the gamification. As well as monitoring your fitness it also provides you with goals, achievements and competitions against your friends. The big advantage of the FitBit One over the previous models is that it syncs to recent iPhones and iPads, as well as some Android phones. This means that your computer doesn’t need to be on, and often it will sync without you having to do anything. In the worst case you just have to open the FitBit app to update your stats on the website. Battery life seems good, at about a week.

The FitBit apps sync your data directly to FitBit’s website, which is great for seeing your progress quickly. They also provide an API for developers to build interesting ways to process the data captured by the device. One glaring omission from the API is any way to get access to the minute-by-minute data. For a fee of $50 per year you can become a Premium member, which allows you to do a CSV export of the raw data. Holding data collected by a user hostage is deeply suspect, and FitBit should be ashamed of themselves for making this a paid-for feature. I have no problem with the rest of the features in the Premium subscription being paid for, but your own raw data should be freely available.

The FitBit API does have the ability to give you the intraday data, but this is not part of the open API and instead is part of the ‘Partner API’. This does not require payment, but you do need to explain to FitBit why you need access to this API call and what you intend to do with it. I do not believe that they would give you access if your goal was to provide a free alternative to the Premium export function.

So, has the free software community provided a solution? A quick search revealed that the GitHub user Wadey had created a library that uses the URLs behind the graphs on the FitBit website to extract the intraday data. Unfortunately the library hadn’t been updated in the last three years, and a change to the FitBit website had broken it.

Fortunately the changes required to make it work are relatively straightforward, so a fixed version of the library is now available as andrewjw/python-fitbit. The old version of the library relied on you logging in to the FitBit website and extracting some values from the cookies. Instead I take your email address and password and fake a request to the log in page. This captures all of the cookies that are set, and will only break if the log in form elements change.

Another change I made was to extend the example script. The previous version just dumped the previous day’s values, which is not useful if you want to extract your entire history. My new version exports data for every day that you’ve been using your FitBit, and it incrementally updates your data dump if you run it irregularly.

If you’re using Windows you’ll need both Python and Git installed. Once you’ve done that, check out my repository. Lastly, in the newly checked out directory run python examples/ <email> <password> <dump directory>.

Photo of Jogging by Glenn Euloth.

Written by Andrew Wilkinson

December 30, 2012 at 1:22 pm

Losing Games


I’m not a quick game player. I don’t rush out and buy the latest games and complete them the same weekend. Currently I’m most of the way through both Alan Wake and L.A. Noire.

Alan Wake is a survival horror game where you’re fighting off hordes of people possessed by darkness. L.A. Noire is a detective story that has you solving crimes in 1940s Los Angeles. Both feature an over-the-shoulder third person camera, and both have excellent graphics. They also both have a film-like quality to the story. In Alan Wake the action is divided into six TV-style “episodes”, with a title sequence between each one. It also has a number of cut scenes, and narration by the title character sprinkled throughout the game helps to drive the story forward.

In L.A. Noire you are a detective trying to solve crimes and rise up the ranks of the police force. The game features cut scenes to introduce and close each case. During each case you head from location to location, interviewing suspects and witnesses. The big breakthrough in L.A. Noire is the facial animation. Rather than being animated by hand, the faces of characters were recorded directly from actors’ faces. This gives the faces a lifelike quality that has not been seen in games before.

Despite the extensive similarities between the games, my opinion of the two could hardly be more different. Alan Wake is one of the best games I’ve ever played, while L.A. Noire is really quite boring. I was trying to work out why I felt so differently about them when I read the following quote in Making Isometric Social Real-Time Games with HTML5, CSS3, and JavaScript by Mario Andres Pagella.

This recent surge in isometric real-time games was caused partly by Zynga’s incredible ability to “keep the positive things and get rid of the negative things” in this particular genre of games, and partly by a shift in consumer interests. They took away the frustration of figuring out why no one was “moving to your city” (in the case of SimCity) and replaced it with adding friends to be your growing neighbours.

The need for the faces of characters in L.A. Noire to be recorded from real actors limits one of the best things about games: their dynamic nature. Even if you get every question wrong you still solve the case and make progress. Initially you don’t really notice this, but I quickly found it meant that the questioning, the key game mechanic, became superfluous.

Alan Wake is a fairly standard game in that there’s really only one way to progress. This is well disguised though so you don’t notice. The atmosphere in the game forces you to keep moving and the story progresses at quite a pace.

Ultimately it’s not for me to criticise what games people want to play. FarmVille and the rest of Zynga’s games are enormously popular. What disappoints me most about L.A. Noire is that it is such a technically advanced game, yet falls down on such a simple piece of game mechanics. Alan Wake, on the other hand, succeeds mostly on story and atmosphere, and that’s the way it should be.

Photo of Alan Wake by jit.
Photo of LA Noire Screenshot 4 by The GameWay.

Written by Andrew Wilkinson

May 29, 2012 at 8:24 pm

Posted in gaming, Uncategorized


Scalable Collaborative Filtering With MongoDB


Many websites have some form of recommendation system. While it’s simple to create a recommendation system for small amounts of data, how do you create a system that scales to huge amounts of data?

How to calculate the similarity of two items is a complicated topic with many possible solutions. Which one is appropriate depends on your particular application. If you want to find out more I suggest reading the excellent Programming Collective Intelligence (Amazon affiliate link) by Toby Segaran.

We’ll take the simplest method for calculating similarity and just calculate the percentage of users who have visited both pages compared to the total number who have visited either. If we have Page 1, visited by users A, B and C, and Page 2, visited by A, C and D, then A and C visited both, while A, B, C and D visited either one, so the similarity is 50%.
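That worked example can be sketched in a few lines of plain Python, before we worry about scale:

```python
def similarity(users_a, users_b):
    """Users who visited both pages as a fraction of users who visited either."""
    both = users_a & users_b      # intersection: visited both pages
    either = users_a | users_b    # union: visited at least one page
    return len(both) / len(either)

page1 = {"A", "B", "C"}
page2 = {"A", "C", "D"}
# A and C visited both; A, B, C and D visited either, so 2 / 4 = 50%
```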

With thousands or millions of items and millions or billions of views calculating the similarity between items becomes a difficult problem. Fortunately MongoDB’s sharding and replication allow us to scale the calculations to cope with these large datasets.

First let’s create a set of views across a number of items. A view is stored as a single document in MongoDB. You would probably want to include extra information such as the time of the view, but for our purposes this is all that is required.

views = [
    { "user": "0", "item": "0" },
    { "user": "1", "item": "0" },
    { "user": "1", "item": "0" },
    { "user": "1", "item": "1" },
    { "user": "2", "item": "0" },
    { "user": "2", "item": "1" },
    { "user": "2", "item": "1" },
    { "user": "3", "item": "1" },
    { "user": "3", "item": "2" },
    { "user": "4", "item": "2" },
]

for view in views:
    db.views.insert_one(view)

The first step is to process this list of view events so we can take a single item and get a list of all the users that have viewed it. To make sure this scales over a large number of views we’ll use MongoDB’s map/reduce functionality.

def article_user_view_count():
    map_func = """
function () {
    var view = {};
    view[this.user] = 1;
    emit(this.item, view);
}
"""

In the map function we build a JavaScript object that represents a single view, where the key is the user id and the value is the number of times that user has viewed this item, and emit it using the item id as the key. MongoDB will group all the objects emitted with the same key and run the reduce function, shown below.

    reduce_func = """
function (key, values) {
    var view = values[0];

    for (var i = 1; i < values.length; i++) {
        for (var user in values[i]) {
            if (!view.hasOwnProperty(user)) { view[user] = 0; }

            view[user] = view[user] + values[i][user];
        }
    }

    return view;
}
"""

A reduce function takes two parameters, the key and a list of values. The values that are passed in can either be those emitted by the map function, or values returned from the reduce function. To help it scale not all of the original values will be processed at once, and the reduce function must be able to handle input from the map function or its own output. Here we output a value in the same format as the input so we don’t need to do anything special.
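The same combining logic can be written in plain Python to show why this property matters: reducing all the values at once gives the same result as reducing partial results in stages.

```python
def reduce_views(values):
    """Merge per-user view counts, mirroring the JavaScript reduce function."""
    view = dict(values[0])
    for extra in values[1:]:
        for user, count in extra.items():
            view[user] = view.get(user, 0) + count
    return view

mapped = [{"1": 2}, {"2": 1}, {"2": 1}]
all_at_once = reduce_views(mapped)
staged = reduce_views([reduce_views(mapped[:2]), mapped[2]])
# Both orderings produce {"1": 2, "2": 2}
```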

    db.views.map_reduce(Code(map_func), Code(reduce_func), out="item_user_view_count")

The final step is to run the functions we’ve just created and output the data into a new collection. Here we’re recalculating all the data each time this function is run. To scale properly you should filter the input based on the date the view occurred and merge it with the output collection, rather than replacing it as we are doing here.

Now we need to calculate a matrix of similarity values, linking each item with every other item. First let’s see how we can calculate the similarity of all items to one single item. Again we’ll use map/reduce to help spread the load of running this calculation. Here we’ll just use the map part of map/reduce, because each input document will be represented by a single output document.

def similarity(item):
    map_func = """
function () {
    if (this._id == "%s") { return; }

    var viewed_both = {};
    var viewed_any = %s;

    for (var user in this.value) {
        if (viewed_any.hasOwnProperty(user)) {
            viewed_both[user] = 1;
        }

        viewed_any[user] = 1;
    }

    emit("%s" + "_" + this._id,
         Object.keys(viewed_both).length / Object.keys(viewed_any).length);
}
""" % (item["_id"], json.dumps(item["value"]), item["_id"])

The input to our Python function is a document that was output by our previous map/reduce call. We build a new JavaScript function by interpolating some data from this document into a template. We loop through all the users who viewed the current document and work out whether they also viewed the one we’re comparing against. At the end of the function we emit the percentage of users who viewed both.

    reduce_func = """
function (key, values) {
    return values[0];
}
"""

Because we output unique ids in the map function this reduce function will only be called with a single value so we just return that.

    db.item_user_view_count.map_reduce(Code(map_func), Code(reduce_func), out=SON([("merge", "item_similarity")]))

The last step in this function is to run the map/reduce. As we’re running it multiple times, we need to merge the output rather than replacing it as we did before.

The final step is to loop through the output from our first map/reduce and call our second function for each item.

for doc in db.item_user_view_count.find():
    similarity(doc)

A key thing to realise is that you don’t need to calculate similarity data live. Once you have even a few hundred views per item, the similarity will remain fairly consistent. In this example we step through each item in turn and calculate its similarity with every other item. For a million-item database where each iteration of this loop takes one second, the similarity data will be updated once every 11 days.
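The 11-day figure is just the total runtime converted into days:

```python
items = 1_000_000           # one similarity() map/reduce per item
seconds_per_item = 1.0      # assumed cost of each loop iteration
days = items * seconds_per_item / (60 * 60 * 24)
# A full pass over the catalogue takes roughly 11.6 days
```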

I’m not claiming that you can take the code provided here and immediately have a massively scalable system. MongoDB provides an easy-to-use replication and sharding system, which is plugged in to its map/reduce framework. What you should take away is that by using map/reduce with sharding and replication to calculate the similarity between items, we can quickly build a system that scales well with an increasing number of items and views.

Photo of Book Addiction by Emily Carlin.

Written by Andrew Wilkinson

March 28, 2012 at 1:46 pm

Steve Jobs and the Lean Startup


On my 25 minute train journey to work each morning I like to pass the time by reading. The two most recent books I’ve read are The Lean Startup: How Constant Innovation Creates Radically Successful Businesses by Eric Ries and Steve Jobs by Walter Isaacson (both links contain an affiliate id). Although one is a biography and the other is a book on project management, they actually cover similar ground, and both are books that people working in technology should read.

Walter Isaacson’s book has been extensively reviewed and dissected so I’m not going to go into detail on it. The book is roughly divided into two halves. The first section is on the founding of Apple, Pixar and NeXT. This section serves as an inspirational guide to setting up your own company. The joy of building a great product and defying the odds against a company succeeding comes across very strongly. The later section, following Jobs’s return to Apple, is much more about the nuts and bolts of running a huge corporation. While it’s an interesting guide to how Apple got to where it is today, it lacks the excitement of the earlier chapters.

The Lean Startup could, rather unkindly, be described as a managerial technique book. It’s much more than that though: it’s a philosophy for how to run a company or a project. The book is very readable and engaging, with plenty of useful case studies to illustrate the points being made. The key message is to get your product out to customers as soon as possible, to measure as much as you can, and to learn from what your customers are doing and saying. As you learn you need to decide whether to persevere, or to pivot and change strategy.

There are many reasons why Steve Jobs was a great leader, a visionary and a terrible boss. One aspect was his unshakable belief that he knew what the customer wanted, even before they knew themselves. This is the antithesis of the Lean Startup methodology, which focuses on measurement and learning. Eric Ries stresses that a startup is not necessarily two guys working out of a garage. Huge multinational corporations can have speculative teams or projects inside them, that act much like start ups, so it wouldn’t be impossible for the Apple of today to act like a start up. Apple weren’t always huge though, and back in the 1970s they really were a start up.

One Apple trait the Lean Startup methodology doesn’t allow for is dramatic product launches. The Lean Startup is a way of working that relies on quick iteration and gradually building up your customer base. It’s hard to iterate quickly when building hardware, and early in Apple’s life they were struggling to find a market for their computers. The Apple I followed the trend of the time of build-it-yourself computers. Just a year later Apple released the Apple ][, which came with a case and was much more suitable for the average consumer. This represents a pivot on the part of Apple. They could have continued to focus on hobbyists, but instead they decided to change and aim for a bigger, but less technical, market.

Reading is a key part of becoming a better programmer. Whether it’s reading about the latest technology on a blog, the latest project management techniques, or the history of computers, reading will help you become better at your job. I’m not sure I’d recommend anyone try to recreate Steve Jobs’s management style, but as a history of Apple, Walter Isaacson’s book is inspirational and informative. The Lean Startup is considerably more practical, even if it won’t inspire you to set up a company in the first place.

Photo of Steve Jobs by Ben Stanfield.
Photo of Eric Ries – The Lean Startup, London Edition by Betsy Weber.

Written by Andrew Wilkinson

March 20, 2012 at 1:50 pm

Django ImportError Hiding


A little while ago I was asked what my biggest gripe with Django was. At the time I couldn’t think of a good answer, because since I started using Django in the pre-1.0 days most of the rough edges have been smoothed off. Yesterday though, I encountered an error that made me wish I had thought of it at the time.

The code that produced the error looked like this:

from django.db import models

class MyModel(models.Model):
    ...

    def save(self):
        ...
The error that was raised was AttributeError: 'NoneType' object has no attribute 'Model'. This means that rather than containing a module object, models was None. Clearly this is impossible as the class could not have been created if that was the case. Impossible or not, it was clearly happening.

Adding a print statement to the module showed that when it was imported the models variable did contain the expected module object. It also showed that the module was being imported more than once, something that should also be impossible.

After a wild goose chase investigating reasons why the module might be imported twice, I tracked it down to the load_app method in django/db/models/. The code there looks something like this:

    def load_app(self, app_name, can_postpone=False):
        try:
            models = import_module('.models', app_name)
        except ImportError:
            # Ignore exception
            ...
Now I’m being a bit harsh here, and the exception handler does contain a comment about working out whether it should reraise the exception. The issue is that it wasn’t raising the exception, and it’s really not clear why. It turns out that I had a misspelt module name in an import statement in a different module. This raised an ImportError, which was caught and hidden, and then Django repeatedly attempted to import the models as they were referenced in the models of other apps. The strange exception that was originally encountered is probably an artefact of Python’s garbage collection, although how exactly it occurred is still not clear to me.

There are a number of tickets (#6379, #14130 and probably others) on this topic. A common refrain in Python is that it’s easier to ask for forgiveness than permission, and I certainly agree with Django’s approach here and follow it most of the time.

I always follow the rule that try/except clauses should cover as little code as possible. Consider the following piece of code.


try:
    member = var.method1().method2().member
except AttributeError:
    # handle error
    pass

Which of the three attribute accesses are we actually trying to catch here? Handling exceptions like this is a useful way of implementing duck typing while following the easier-to-ask-forgiveness principle. What this code doesn’t make clear is which member or method is actually optional. A better way to write this would be:


intermediate = var.method1().method2()
try:
    member = intermediate.member
except AttributeError:
    # handle error
    pass

Now the code is very clear that the var variable may or may not have a member variable called member. If method1 or method2 does not exist then the exception is not masked, and is passed on. Now let’s consider that we want to allow the method1 attribute to be optional.

try:
    var.method1()
except AttributeError:
    # handle error
    pass

At first glance it’s obvious that method1 is optional, but actually we’re catching too much here. If there is a bug in method1 that causes an AttributeError to be raised, then it will be masked and the code will treat it as if method1 didn’t exist. A better piece of code would be:

try:
    method = var.method1
except AttributeError:
    # handle error
    pass
else:
    method()

ImportErrors are similar, because code can be executed during an import, and when an error occurs you can’t tell whether the original import failed or whether an import inside it failed. Unlike with an AttributeError, there is no easy way to rewrite the code to only catch the error you’re interested in. Python does provide some tools to divide the import process into steps, so you can tell whether the module exists before attempting to import it. In particular the imp.find_module function would be useful.
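For example, the existence check can be separated from the import itself, so a genuinely missing module is distinguished from an ImportError raised inside it. (imp.find_module has since been deprecated; this sketch uses its modern replacement, importlib.util.find_spec.)

```python
import importlib
import importlib.util

def import_if_exists(name):
    """Import name only if the module can actually be found."""
    if importlib.util.find_spec(name) is None:
        return None  # the module genuinely doesn't exist
    # Any ImportError raised from here is a bug *inside* the module,
    # and propagates instead of being silently swallowed.
    return importlib.import_module(name)
```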

Changing Django to avoid catching the wrong ImportErrors would greatly complicate the code. It would also introduce the danger that the algorithm used would not match the one used by Python. So, what’s the moral of this story? Never catch more exceptions than you intended to, and if you get some really odd errors in your Django site, watch out for ImportErrors.

Photo of Hidden Cat by Craig Grahford.

Written by Andrew Wilkinson

March 7, 2012 at 1:59 pm