Transitioning To A More Open Technology Stack

I’m currently working with some large Java monoliths which talk to each other over ActiveMQ. There are several aspects of the architecture that I’d like to change. New production environments (Kubernetes, etc.) remove the deployment overhead that once justified a monolith, and the easier testing and more modular architecture that smaller services bring mean that I think the expense of migrating will be well worth it. With such an established code base, though, the question I’m grappling with is how we can transition to a better, more open technology stack without rewriting from scratch and doing a big bang deployment.

Currently I’m toying with the idea of writing an ActiveMQ to Web Sockets bridge. Web Sockets emulate a direct TCP connection in a web browser, although a more typical use case is sending and receiving a stream of JSON encoded events. Although Web Sockets were created for use in browsers, most languages have libraries available that will let you connect to a server.

ActiveMQ natively supports connecting over Web Sockets, so why would I propose building a bridge application? In our case the messages being exchanged are binary encoded, so you can’t decode them unless you’re running Java and have the same library used to send the messages. By building an application to act as a bridge you get much more control over the Web Socket API than if you use the native ActiveMQ implementation, so you can tidy up the JSON representations you use and easily make any other improvements to the API that you want.

Spring is our current Java framework of choice, and it conveniently supports Web Sockets through its embedded HTTP server. Combining that with our shared library for connecting to ActiveMQ results in a Web Socket server in just a couple of hundred lines of code, and most of that is converting the message objects into a nice JSON representation.

In future posts I’ll talk about our progress migrating to a more open environment, but first let’s go through how to build the bridge. I’ve chosen a simple REST API.

  • GET /topic will return a list of topics.
  • GET /topic/{topic} returns a single message from the topic (not much use in reality, but useful for testing).
  • GET /topic/{topic} with a web socket upgrade (the standard Connection: Upgrade handshake) opens a connection to a topic, which lets you send and receive a stream of events.

The first step is to enable web sockets on the right URL.

@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {
    @Autowired
    private SocketHandler socketHandler;

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // Route web socket upgrade requests on /topic/{topic} to our handler.
        registry.addHandler(socketHandler, "/topic/{topic}")
            .setAllowedOrigins("*");
    }
}

Next up we set up the normal HTTP end points. Here I’m using two objects to manage the ActiveMQ connections and the JSON serialisation/deserialisation. If, like us, you have shared libraries to do your messaging for you then you can just plug those in, and there are so many JSON serialisers that you can just pick your favourite.
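
For completeness, here’s the sort of thing the JsonSerialiser could look like. This is a sketch only: the class name and the overloads are simply the ones assumed by the controller and socket handler below, and I’ve used Jackson, but any JSON library will do.

import java.io.IOException;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.stereotype.Component;

// A minimal Jackson-backed serialiser. Swap in your favourite JSON library.
@Component
public class JsonSerialiser {
    private final ObjectMapper mapper = new ObjectMapper();

    public String serialise(Object value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialise " + value, e);
        }
    }

    public String serialise(Object value, Class<?> type) {
        try {
            // Serialising via the declared type keeps subclasses of BaseMessage
            // to a single, consistent JSON shape.
            return mapper.writerFor(type).writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialise " + value, e);
        }
    }

    public <T> T deserialise(String json, Class<T> type) {
        try {
            return mapper.readValue(json, type);
        } catch (IOException e) {
            throw new RuntimeException("Failed to deserialise " + json, e);
        }
    }
}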

A key thing with the controller below is to restrict the HTTP method of the requests (and, for the single-topic endpoint, to exclude web socket upgrade requests) so we can use the same URLs as we registered for the web sockets without clashing.

@Controller
@RequestMapping("/topic")
public class TopicHandler {
    @Autowired
    private JmsConnectionManager jmsConnectionManager;

    @Autowired
    private JsonSerialiser jsonSerialiser;

    // GET /topic - list the available topics as JSON.
    @RequestMapping(method = RequestMethod.GET)
    public @ResponseBody String getTopics() {
        return jsonSerialiser.serialise(jmsConnectionManager.getTopics());
    }

    // GET /topic/{topic} - return a single message. The headers restriction
    // keeps web socket upgrade requests out of this mapping.
    @RequestMapping(value = "/{topic}", method = RequestMethod.GET, headers = "Connection!=Upgrade")
    public @ResponseBody String getTopic(@PathVariable("topic") String topic) {
        ActiveMqTopicController controller = jmsConnectionManager.getTopicController(topic);

        return jsonSerialiser.serialise(controller.getMessage(), BaseMessage.class);
    }
}

Lastly, we handle the web socket connections. There are three methods of TextWebSocketHandler that we need to override. handleTextMessage is called when a message is received from the client, while afterConnectionEstablished and afterConnectionClosed are called at the start and end of the connection. When the connection is established you need to connect to the JMS topic, and start streaming events.

@Component
public class SocketHandler extends TextWebSocketHandler {
    @Autowired
    private JmsConnectionManager jmsConnectionManager;

    @Autowired
    private JsonSerialiser jsonSerialiser;

    @Override
    public void handleTextMessage(WebSocketSession session, TextMessage message)
            throws InterruptedException {
        BaseMessage jmsMessage = jsonSerialiser.deserialise(message.getPayload(), BaseMessage.class);

        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.publishMessage(jmsMessage);
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.addListener(session);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus closeStatus) {
        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.removeListener(session);
    }

    private String getTopic(WebSocketSession session) {
        String path = session.getUri().getRawPath();

        String[] components = path.split("/");

        return components[components.length - 1];
    }
}
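
The JmsConnectionManager and ActiveMqTopicController come from our shared messaging libraries, so I won’t reproduce them here, but to give an idea of what the controller does, here’s a rough sketch. It assumes a javax.jms.Session and Topic are already available and uses plain JMS text messages; the real implementation deals with our binary encoding and also provides the getMessage() call used by the GET endpoint.

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.springframework.web.socket.WebSocketSession;

// Sketch of a per-topic controller: it subscribes to the JMS topic, fans
// incoming messages out to every registered web socket session, and publishes
// messages received from web socket clients back onto the topic.
public class ActiveMqTopicController implements MessageListener {
    private final Set<WebSocketSession> sessions = ConcurrentHashMap.newKeySet();
    private final Session jmsSession;
    private final MessageProducer producer;
    private final JsonSerialiser jsonSerialiser;

    public ActiveMqTopicController(Session jmsSession, Topic topic,
                                   JsonSerialiser jsonSerialiser) throws JMSException {
        this.jmsSession = jmsSession;
        this.producer = jmsSession.createProducer(topic);
        this.jsonSerialiser = jsonSerialiser;
        jmsSession.createConsumer(topic).setMessageListener(this);
    }

    public void addListener(WebSocketSession session) {
        sessions.add(session);
    }

    public void removeListener(WebSocketSession session) {
        sessions.remove(session);
    }

    public void publishMessage(BaseMessage message) throws JMSException {
        // The real code uses our binary codec; a JSON text message keeps the sketch simple.
        producer.send(jmsSession.createTextMessage(jsonSerialiser.serialise(message, BaseMessage.class)));
    }

    @Override
    public void onMessage(Message message) {
        try {
            // With text messages the payload is already JSON, so forward it as-is.
            String json = ((TextMessage) message).getText();
            for (WebSocketSession session : sessions) {
                session.sendMessage(new org.springframework.web.socket.TextMessage(json));
            }
        } catch (JMSException | IOException e) {
            throw new RuntimeException("Failed to forward a message to the web socket clients", e);
        }
    }
}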

With this fairly simple code in place, it’s dead easy to start integrating other languages, or single page apps running in a web browser, into your previously closed message based system.
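
As an example, here’s a minimal sketch of a standalone Java client for the bridge. The URL and topic name are placeholders, and it uses Spring’s standard web socket client rather than anything from our shared libraries; a browser client would look much the same, just using the native WebSocket object.

import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.TextWebSocketHandler;

// A throwaway client that connects to one topic and prints every event it receives.
public class BridgeClient {
    public static void main(String[] args) throws Exception {
        StandardWebSocketClient client = new StandardWebSocketClient();

        client.doHandshake(new TextWebSocketHandler() {
            @Override
            public void handleTextMessage(WebSocketSession session, TextMessage message) {
                // Each JMS message arrives as a JSON encoded text frame.
                System.out.println(message.getPayload());
            }
        }, "ws://localhost:8080/topic/example").get();

        // Keep the JVM alive while events stream in.
        Thread.currentThread().join();
    }
}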


Photo of Snowy Postbox by Gordon Fu.

Network Booting A Raspberry Pi MythTV Frontend

When we moved house earlier in the year I wanted to simplify our home theatre setup. With my son starting to grow up, in a normal house he’d be able to turn on the TV and watch his favourite shows without needing us to do it for him, but with the overcomplicated setup we had it would take him several years longer to learn the right sequence of buttons.

I’ve been a MythTV user for well over ten years, and all our TV watching is done through it. At this stage, with our history of recorded shows and a carefully curated list of recording rules, switching would be a big pain, so I wanted to try and simplify the user experience, even if it meant complicating the setup somewhat.

I had previously tried to reduce the standby power consumption by using an Eon Power Down Plug, which monitors the master socket and switches off the slave sockets when the master enters standby mode. This worked great: when the TV was off my Xbox and surround speakers would be switched off automatically. The downside is that if I want to use the speakers to listen to music (they are also connected to a Sonos Connect) then either the TV needs to be on, or I need to swap the plugs over. Lastly, because I was running a combined frontend and backend it wasn’t connected to the smart plug (otherwise it wouldn’t be able to turn on to record). If you turned the TV off the frontend would still be on, preventing the backend from shutting down for several hours, until it went into idle mode.

I decided to solve these problems by using a Raspberry Pi 3 as a separate frontend, and switching the plugs around. As they run Linux, and have hardware decoding of MPEG-2 and H.264, they work great as MythTV frontends.

A common issue with Raspberry Pis is that if you don’t shut them down correctly their SD cards become corrupt. If I connected the Pi to the slave plug socket as planned then it would be uncleanly shut down every time the TV was switched off, risking regular corruption. Fortunately Raspberry Pis support network booting, which means you can have the root filesystem mounted from somewhere else, and you don’t even need the SD card at all. I already had a Synology NAS, which I love, and it’s a perfect host for the filesystem.

Sadly the network boot code that is built into the Pi’s ROM (and therefore isn’t updatable) is very picky and buggy. My router’s DHCP server doesn’t support the options required to make the Pi boot, so I switched to using the DHCP server on the Synology. While you can’t set the right options in the web frontend, you can edit the config files directly to make it work. The bug in the Pi’s firmware is that the DHCP responses must arrive at just the right time: too quick or too slow and the Pi will fail to boot. One of the aspects I like most about my Synology is that it has a very low power suspend mode. When it is in this mode it takes a little while to wake up and respond to network events. Waking up takes too long for the Pi, which would give up waiting for a response. While I wouldn’t have been happy about it, I could have disabled the low power mode to make the Pi work. Unfortunately the second time the Pi boots the DHCP server responds too quickly (the first time it has to check whether the IP address it is about to hand out is in use). This response is too quick for the Pi, which again will fail to boot.

The other option is to use an SD card with a kernel and a few supporting files on it to start the boot, and then use Linux’s built-in NFS root filesystem support. While this does require an SD card, it’s read only and after the kernel has been loaded the card will be accessed very rarely, if ever, so the risk of corruption is minimal. After running with this set up for a few months, and being switched off several times per day we’ve not had a single corruption of the SD card so far.

Setting this up is pretty straightforward: I just extracted a Minibian tarball to my NAS and shared it via NFS. Next I copied the contents of /boot to my SD card, and modified cmdline.txt to include the following:

root=/dev/nfs nfsroot=192.168.1.72:/volume1/pi/minibian rw ip=dhcp

With this added it boots up reliably and can be shut down uncleanly with little or no risk of corruption.
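
For reference, the NFS share itself just needs to give the Pi read-write access to that directory. Whether you set it up through the DSM web interface or by editing /etc/exports directly, the resulting export looks something like the line below; the path and subnet here match the cmdline.txt above, so adjust them for your own network, and no_root_squash is needed because the Pi accesses its root filesystem as root.

/volume1/pi/minibian 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)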

Next up is making the MythTV frontend start up automatically. This was done by adding the following to /etc/rc.local:

modprobe rc_rc6_mce
/usr/bin/ir-keytable -c -p RC-5,RC-6 -w /etc/rc_keymaps/rc6_mce
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
su -c "/home/andrew/autostart.sh" andrew &

The first two lines are required to set up my MCE IR receiver. The third line ensures that the Pi’s performance remains consistent and the CPU isn’t throttled down while you’re in the middle of an episode of Strictly. The final line triggers another script that actually runs the frontend, run as me rather than root.

#!/bin/bash

/home/andrew/wake_speakers &
startx /home/andrew/start_myth > ~/mythtv.log 2>&1

I’ll cover the first line in another post, but it just turns on the surround speakers and makes sure they’re in the right mode. The second line starts X and runs my custom start script. That final script looks like this:

#!/bin/bash
QT_QPA_PLATFORM=xcb /usr/bin/mythfrontend -O libCECEnabled=0

While I managed to solve my key issues, making it easier to switch everything on and off, letting me listen to music without the TV being on, and still having most devices switched fully off, I still have a few issues to solve. The main two are that boot-up is not as fast as I would like, and the backend doesn’t cope well with the frontend exiting uncleanly (it waits 2.5 hours before turning off). I will cover these issues, and some others I had to solve, in a future post.


Photo of Network cables – mess 😀 by jerry john.

Links to Amazon contain an affiliate code.

Accessing FitBit Intraday Data

For Christmas my wife and I bought each other a new FitBit One device (Amazon affiliate link included). These are small fitness tracking devices that monitor the number of steps you take, how high you climb and how well you sleep. They’re great for providing motivation to walk that extra bit further, or to take the stairs rather than the lift.

I’ve had the device for less than a week, but already I’m feeling the benefit of the gamification on FitBit.com. As well as monitoring your fitness it also provides you with goals, achievements and competitions against your friends. The big advantage of the FitBit One over the previous models is that it syncs to recent iPhones and iPads, as well as some Android phones. This means that your computer doesn’t need to be on, and often it will sync without you having to do anything. In the worst case you just have to open the FitBit app to update your stats on the website. Battery life seems good, at about a week.

The FitBit apps sync your data directly to FitBit.com, which is great for seeing your progress quickly. FitBit also provides an API for developers to build interesting ways to process the data captured by the device. One glaring omission from the API is any way to get access to the minute by minute data. For a fee of $50 per year you can become a Premium member, which allows you to do a CSV export of the raw data. Holding the data collected by a user hostage is deeply suspect, and FitBit should be ashamed of themselves for making this a paid-for feature. I have no problem with the rest of the features in the Premium subscription being paid for, but your own raw data should be freely available.

The FitBit API does have the ability to give you the intraday data, but this is not part of the open API and instead is part of the ‘Partner API’. This does not require payment, but you do need to explain to FitBit why you need access to this API call and what you intend to do with it. I do not believe that they would give you access if your goal was to provide a free alternative to the Premium export function.

So, has the free software community provided a solution? A quick search revealed that the GitHub user Wadey had created a library that uses the URLs behind the graphs on the FitBit website to extract the intraday data. Unfortunately the library hadn’t been updated in the last three years and a change to the FitBit website had broken it.

Fortunately the changes required to make it work are relatively straightforward, so a fixed version of the library is now available as andrewjw/python-fitbit. The old version of the library relied on you logging in to FitBit.com and extracting some values from the cookies. Instead I take your email address and password and fake a request to the log in page. This captures all of the cookies that are set, and will only break if the log in form elements change.

Another change I made was to extend the example dump.py script. The previous version just dumped the previous day’s values, which is not useful if you want to extract your entire history. In my new version it exports data for every day that you’ve been using your FitBit. It also incrementally updates your data dump if you run it irregularly.

If you’re using Windows you’ll need both Python and Git installed. Once you’ve done that check out my repository at github.com/andrewjw/python-fitbit. Lastly, in the newly checked out directory run python examples/dump.py <email> <password> <dump directory>.


Photo of Jogging by Glenn Euloth.

Losing Games

I’m not a quick game player. I don’t rush out and buy the latest games and complete them the same weekend. Currently I’m most of the way through both Alan Wake and L.A. Noire.

Alan Wake is a survival horror game where you’re fighting off hordes of people possessed by darkness. L.A. Noire is a detective story that has you solving crimes in 1940s Los Angeles. Both feature an over-the-shoulder third person camera, and both have excellent graphics. They also both have a film-like quality to the story. In Alan Wake the action is divided into six TV-style “episodes”, with a title sequence between each one. It also has a number of cut scenes and narration by the title character sprinkled throughout the game which help to drive the story forward.

In L.A. Noire you are a detective trying to solve crimes and rise up the ranks of the police force. The game features cut scenes to introduce and close each case. During each case you head from location to location, interviewing suspects and witnesses. The big breakthrough in L.A. Noire is the facial animation. Rather than being animated by hand, the faces of characters were recorded directly from actors’ faces. This gives the faces a lifelike quality that has not been seen in games before.

Despite the extensive similarities between the games, my opinion of the two could hardly be more different. Alan Wake is one of the best games I’ve ever played, while L.A. Noire is really quite boring. I was trying to work out why I felt so differently about them when I read the following quote in Making Isometric Social Real-Time Games with HTML5, CSS3, and JavaScript by Mario Andres Pagella.

This recent surge in isometric real-time games was caused partly by Zynga’s incredible ability to “keep the positive things and get rid of the negative things” in this particular genre of games, and partly by a shift in consumer interests. They took away the frustration of figuring out why no one was “moving to your city” (in the case of SimCity) and replaced it with adding friends to be your growing neighbours.

The need for the faces of characters in L.A. Noire to be recorded from real actors limits one of the best things about games: their dynamic nature. Even if you get every question wrong you still solve the case and make progress. Initially you don’t really notice this, but I quickly found it meant that the questioning, the key game mechanic, became superfluous.

Alan Wake is a fairly standard game in that there’s really only one way to progress. This is well disguised though so you don’t notice. The atmosphere in the game forces you to keep moving and the story progresses at quite a pace.

Ultimately it’s not for me to criticise what games people want to play. FarmVille and the rest of Zynga’s games are enormously popular. What disappoints me most about L.A. Noire is that it is such a technically advanced game, yet it falls down on such a simple piece of game mechanics. Alan Wake, on the other hand, succeeds mostly on story and atmosphere, and that’s the way it should be.


Photo of Alan Wake by jit.
Photo of LA Noire Screenshot 4 by The GameWay.