Exposing Docker Ports After The Fact

Docker is a great tool for running your applications in a consistent and repeatable environment. One issue that I’ve come across occasionally is getting data into and out of the environment when it’s running.

In this post I want to talk about exposing ports that are published by applications running inside a container. When you start the container it’s pretty easy to configure the ports you want to expose using the --publish or -p parameter. It’s followed by the external port number on the host, a colon, and the internal port number inside the container. For example:

docker run --publish 80:8080 myapp

This publishes port 8080 from inside the container as port 80 on the host.

This works great if you know which ports you want to expose before you run the container. Once it’s running, if you decide you need access to another port, you can’t expose it. Unless, that is, you cheat.

socat is a very useful command line tool which lets you create tunnels to forward ports. It has many other features, such as forwarding Unix sockets to TCP sockets, but here we just need to forward a port from an existing container into a new container, and then expose that port to the host.

Fortunately a Docker image whose only job is to run socat already exists, so we just need to pass the right options to forward the remote port and expose it to the host.

I was trying to expose port 61616 from a container called activemq, so I ran the following command:

docker run -p 61616:61616 alpine/socat tcp-listen:61616,reuseaddr,fork tcp:activemq:61616

Let’s break the command down.

docker run -p 61616:61616

This runs the container and publishes container port 61616 as port 61616 on the host.

alpine/socat

This runs a container from the alpine/socat image.

tcp-listen:61616,reuseaddr,fork

This is the first parameter passed to socat. It tells it to listen on port 61616, allow the listening address to be reused (reuseaddr), and fork a child process for each incoming connection so it can handle more than one (fork).

tcp:activemq:61616

This specifies that when an incoming connection arrives it should be forwarded to port 61616 on the container named activemq. For that name to resolve, the socat container needs to be able to reach the activemq container, which generally means putting both containers on the same user-defined Docker network (or linking them).

So to summarise, you can run the following command to expose a port from a container that’s already running.

docker run -p hostport:cport alpine/socat tcp-listen:cport,reuseaddr,fork tcp:remotehost:remoteport

Photo of Almacenaje Colorido by Mireia mim.


Transitioning To A More Open Technology Stack

I’m currently working with some large Java monoliths which talk to each other over ActiveMQ. There are several aspects of the architecture that I’d like to change. New production environments (Kubernetes, etc.) remove much of the deployment overhead that used to justify monoliths, and the benefits of easier testing and a more modular architecture mean that I think the expense of migrating to smaller services will be well worth it. With such an established code base, though, the question I’m grappling with is how we can transition to a better, more open technology stack without needing to rewrite from scratch and do a big bang deployment.

Currently I’m toying with the idea of writing an ActiveMQ to Web Sockets bridge. Web Sockets are a way of emulating a direct TCP connection in a web browser, although a more common use case is to send and receive a stream of JSON encoded events. Although Web Sockets were created for use in browsers, most languages have libraries available which will allow you to connect to a server.

ActiveMQ natively supports connecting over Web Sockets, so why would I propose building a bridge application? In our case the messages being exchanged are binary encoded, so you can’t decode them unless you’re running Java and have the same library that was used to send them. By building an application to act as a bridge you get much more control over the Web Socket API than you do with the native ActiveMQ implementation, so you can tidy up the JSON representations you use and easily make any other improvements to the API that you want.

Spring is our current Java Framework of choice, which conveniently has a built-in HTTP server which supports Web Sockets. Combining that with our shared library for connecting to ActiveMQ results in a Web Socket server in just a couple of hundred lines of code, and most of that is actually converting the message objects into a nice JSON representation.

In future posts I’ll talk about our progress migrating to a more open environment, but first let’s go through how to build the bridge. I’ve chosen a simple REST API.

  • GET /topic will return a list of topics.
  • GET /topic/{topic} returns a single message from the topic (not much use in reality, but useful for testing).
  • GET /topic/{topic} with a Connection: Upgrade header opens a web socket connection to the topic, which lets you send and receive a stream of events.

The first step is to enable web sockets on the right URL.

@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {
    @Autowired
    private SocketHandler socketHandler;

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(socketHandler, "/topic/{topic}")
            .setAllowedOrigins("*");
    }
}

Next up we set up the normal HTTP end points. Here I’m using two objects to manage the ActiveMQ connections and the JSON serialisation/deserialisation. If, like us, you have shared libraries to do your messaging for you then you can just plug those in, and there are so many JSON serialisers that you can simply pick your favourite.
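
For reference, the rough shapes I’m assuming for those two helpers are below. The names and signatures are purely illustrative, and your own shared libraries will differ. (ActiveMqTopicController is sketched after the handler code further down.)

import java.util.List;

// Illustrative shapes only, standing in for our shared messaging and JSON libraries.
interface JsonSerialiser {
    String serialise(Object value);                // e.g. the list of topic names
    <T> String serialise(T value, Class<T> type);  // a single message
    <T> T deserialise(String json, Class<T> type); // incoming web socket payloads
}

interface JmsConnectionManager {
    List<String> getTopics();                                 // names of the topics to expose
    ActiveMqTopicController getTopicController(String topic); // one controller per topic
}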

A key thing with this class is to constrain the request mappings (note the Connection!=Upgrade header condition) so we can use the same URL as we registered for the web sockets without the two clashing.

@Controller
@RequestMapping("/topic")
public class TopicHandler {
    @Autowired
    private JmsConnectionManager jmsConnectionManager;

    @Autowired
    private JsonSerialiser jsonSerialiser;

    @RequestMapping(method = RequestMethod.GET)
    public @ResponseBody String getTopics() {
        return jsonSerialiser.serialise(jmsConnectionManager.getTopics());
    }

    @RequestMapping(value="/{topic}", method = RequestMethod.GET, headers = "Connection!=Upgrade")
    public @ResponseBody String getTopic(@PathVariable("topic") String topic) {
        ActiveMqTopicController controller = jmsConnectionManager.getTopicController(topic);

        return jsonSerialiser.serialise(controller.getMessage(), BaseMessage.class);
    }
}

Lastly, we handle the web socket connections. There are three methods of TextWebSocketHandler that we need to override. handleTextMessage is called when a message is received from the client, while afterConnectionEstablished and afterConnectionClosed are called at the start and end of the connection. When the connection is established you need to connect to the JMS topic, and start streaming events.

@Component
public class SocketHandler extends TextWebSocketHandler {
    @Autowired
    private JmsConnectionManager jmsConnectionManager;

    @Autowired
    private JsonSerialiser jsonSerialiser;

    public SocketHandler() {
    }

    @Override
    public void handleTextMessage(WebSocketSession session, TextMessage message)
            throws InterruptedException {
        BaseMessage jmsMessage = jsonSerialiser.deserialise(message.getPayload(), BaseMessage.class);

        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.publishMessage(jmsMessage);
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.addListener(session);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus closeStatus) {
        ActiveMqTopicController tc = jmsConnectionManager.getTopicController(getTopic(session));
        tc.removeListener(session);
    }

    private String getTopic(WebSocketSession session) {
        String path = session.getUri().getRawPath();

        String[] components = path.split("/");

        return components[components.length - 1];
    }
}
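
All of the actual bridging work is delegated to ActiveMqTopicController, which in our case comes from our shared messaging library. As a rough illustration of the idea, a minimal version might look something like the sketch below. It assumes plain JMS text messages, takes an already-serialised JSON string in publishMessage, and omits the single-message getMessage() used by the REST controller; our real implementation deals with the binary-encoded BaseMessage objects, so treat the names and signatures as placeholders.

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;

// Illustrative only: forwards every message on a JMS topic to the registered web socket sessions.
public class ActiveMqTopicController {

    private final Set<WebSocketSession> listeners = new CopyOnWriteArraySet<>();
    private final Session jmsSession;
    private final MessageProducer producer;

    public ActiveMqTopicController(Session jmsSession, Topic topic) throws JMSException {
        this.jmsSession = jmsSession;
        this.producer = jmsSession.createProducer(topic);

        // NB: a real implementation needs separate sessions for producing and consuming,
        // as JMS sessions are not thread-safe.
        MessageConsumer consumer = jmsSession.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                // Assumes text messages; this is where a real bridge would decode the
                // binary payload and convert it to its JSON representation.
                String payload = ((javax.jms.TextMessage) message).getText();
                for (WebSocketSession listener : listeners) {
                    try {
                        listener.sendMessage(new TextMessage(payload));
                    } catch (IOException e) {
                        listeners.remove(listener); // drop sessions we can no longer write to
                    }
                }
            } catch (JMSException e) {
                // ignore messages we can't decode
            }
        });
    }

    public void addListener(WebSocketSession session) {
        listeners.add(session);
    }

    public void removeListener(WebSocketSession session) {
        listeners.remove(session);
    }

    public void publishMessage(String json) throws JMSException {
        producer.send(jmsSession.createTextMessage(json));
    }
}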

With this fairly simple code in place, it’s dead easy to start integrating other languages, or single page apps running in a web browser, into your previously closed message-based system.
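
For example, here’s a minimal standalone client using the java.net.http.WebSocket API that ships with Java 11+. It subscribes to a topic through the bridge without needing the ActiveMQ client or our shared message library; the host, port, topic name and JSON payload are just placeholders. A browser or another language would connect in much the same way.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;

public class BridgeClient {
    public static void main(String[] args) throws Exception {
        // Placeholder address: point this at wherever the bridge is running.
        URI uri = URI.create("ws://localhost:8080/topic/example-topic");

        WebSocket ws = HttpClient.newHttpClient()
            .newWebSocketBuilder()
            .buildAsync(uri, new WebSocket.Listener() {
                @Override
                public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                    System.out.println("Received: " + data);
                    return WebSocket.Listener.super.onText(webSocket, data, last);
                }
            })
            .join();

        // Publish a message as JSON (the exact shape depends on your message types).
        ws.sendText("{\"type\": \"example\", \"body\": \"hello\"}", true);

        TimeUnit.SECONDS.sleep(30); // keep the process alive long enough to see some events
    }
}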


Photo of Snowy Postbox by Gordon Fu.

Using A Raspberry Pi To Switch On Surround Sound Speakers

In a previous post, I talked about network booting a Raspberry Pi MythTV frontend. One issue that I had to solve was how to turn on my Onkyo surround sound speakers, but only if they are not already turned on.

I already had an MCE remote and receiver which can both transmit and receive, so it is perfect for controlling MythTV and switching the speakers on. There are plenty of tutorials out there, but the basic principle is to use irrecord to record the signals from the speaker’s remote control, so the Raspberry Pi can replay them to switch it on when the Pi starts up. In my case, I needed two keys, the power button and VCR/DVR input button. Once you’ve recorded the right signals, you can use irsend to repeat them.

Initially, I had it set up to always send the power button signal on boot. This had the unfortunate side-effect of switching the speakers off if they were already on, for example, if I had been listening to music through Sonos before deciding to watch TV.

To prevent this from happening I needed to determine whether the speakers were already on. Fortunately, Raspberry Pis come with some useful tools for finding out what the connected HDMI device supports. These tools are tvservice, which dumps the EDID information, and edidparser, which turns the EDID into human-readable text.

You can use them as follows:

tvservice -d /tmp/edid.dump

edidparser /tmp/edid.dump > /tmp/edid.txt

This gives you a nice text file containing all of the resolutions and audio formats supported by the connected HDMI device. I took one output when the speakers were on, and one when they were off, and by diffing them I got this set of changes.

-HDMI:EDID found audio format 2 channels PCM, sample rate: 32|44|48 kHz, sample size: 16|20|24 bits
+HDMI:EDID found audio format 2 channels PCM, sample rate: 32|44|48|88|96|176|192 kHz, sample size: 16|20|24 bits
+HDMI:EDID found audio format 6 channels PCM, sample rate: 32|44|48|88|96|176|192 kHz, sample size: 16|20|24 bits
+HDMI:EDID found audio format 8 channels AC3, sample rate: 32|44|48 kHz, bitrate: 640 kbps
+HDMI:EDID found audio format 8 channels DTS, sample rate: 44|48 kHz, bitrate: 1536 kbps
+HDMI:EDID found audio format 6 channels One Bit Audio, sample rate: 44 kHz, codec define: 0
+HDMI:EDID found audio format 8 channels Dobly Digital+, sample rate: 44|48 kHz, codec define: 0
+HDMI:EDID found audio format 8 channels DTS-HD, sample rate: 44|48|88|96|176|192 kHz, codec define: 1
+HDMI:EDID found audio format 8 channels MLP, sample rate: 48|96|192 kHz, codec define: 0

Pretty obvious really – when the speakers are on they support a much greater range of audio formats!

Putting all this together I ended up with the following script. It grabs the EDID data, converts it into text, and if it doesn’t mention DTS-HD it turns the speakers on.

#!/bin/bash

# Dump the EDID from the connected HDMI device and convert it to text
tvservice -d /tmp/edid.dump
edidparser /tmp/edid.dump > /tmp/edid.txt

# If the high-end audio formats aren't listed, the speakers are off, so turn them on
if ! grep -q DTS-HD /tmp/edid.txt; then
    irsend SEND_ONCE speaker KEY_POWER
fi

Photo of Speaker by Ryann Gibbens.

Introducing A New Language

At work, there is a discussion going on at the moment about introducing Kotlin into our tech stack. We’re a JVM based team, with the majority of our code written in Java and a few apps in Scala. I don’t intend to discuss the pros and cons of any particular language in this post, as I don’t have enough experience of them to decide yet (more on that to come as the discussion evolves). Instead, I wanted to talk about how you can decide when to introduce a new language.

Programmers, myself included, have a habit of being attracted to anything new and shiny. That might be a new library, a new framework or a new language. Whatever it is, the hype will suggest that you can do more, with less code and fewer bugs. The reality often turns out to be a little different, and by the time you have implemented a substantial production system then you’ve probably pushed up against the limits, and found areas where it’s hard to do what you want, or where there are bugs or reliability problems. It’s only natural to look for better tools that can make your life easier.

If you maintain a large, long-lived code base then introducing anything new is something that has to be considered carefully. This is particularly true of a new language. While a new library or framework can have its own learning curve, a new language means the team has to relearn how to do the fundamentals from scratch. A new language brings with it a new set of idioms, styles and best practices. That kind of knowledge is built up by a team over many years, and is very expensive both in time and mistakes to relearn.

Clearly, if you need to start writing code in a radically different environment then you’ll need to pick a new language. If, like us, you mostly write Java server applications and you want to start writing modern web-based frontends for your applications, then you need to add JavaScript, or one of the many languages that compile to JavaScript, to your tech stack.

The discussion that we’re having about Java, Scala and Kotlin is nowhere near as clear-cut, however. Fundamentally, choosing one over the others wouldn’t let us write a new type of app that we couldn’t write before, because they all run in the same environment. Scala is functional, which is a substantial change in idiom, while Kotlin is a more traditional object-oriented language, though considerably more concise than Java.

To help decide, it makes sense to write a new application in the potential new language, or perhaps rewrite an existing application. Only with some personal experience can you hope to make a decision that’s not just based on hype, or other people’s experiences. The key is to treat this code as a throw-away exercise. If you commit to putting the new app into production, then you’re not investigating the language, you’re committing to adding it to your tech stack before you’ve investigated it.

As well as the technical merits, you should also look into the training requirements for the team. Hopefully there are good online tutorials, or training courses available for your potential technology, but these will need to be collated and shared, and everyone given time to complete them. If you’re switching languages then you can’t afford to leave anyone behind, so training for the entire team is essential.

Whatever you feel is the best language to choose, you need to be bold and decisive in your decision making. If you decide to use a new language for an existing environment then you need to commit not only to writing all new code in it, but also to fairly quickly porting all your existing code over as well. Having multiple solutions to the same problem (be it the language you write your server-side or browser-side apps in, or a library or framework) creates massive amounts of duplicated code, duplicated effort and expensive context switching for developers.

Time and again I’ve seen introducing the new shiny solution create a mountain of technical debt because old code is not ported to the new solution, but instead gets left behind in the vague hope that one day it will get updated. New technology and ways of working can have a huge benefit, but never underestimate the cost, and importance, of going all the way.


Photo of code.close() by Ruiwen Chua.

Network Booting A Raspberry Pi MythTV Frontend

When we moved house earlier in the year I wanted to simplify our home theatre setup. With my son starting to grow up, in a normal house he’d be able to turn on the TV and watch his favourite shows without needing us to do it for him, but with the overcomplicated setup that we had it would have taken him several years longer to learn the right sequence of buttons.

I’ve been a MythTV user for well over ten years, and all our TV watching is done through it. At this stage, with our history of recorded shows and a carefully curated list of recording rules, switching would be a big pain, so I wanted to try and simplify the user experience, even if it meant complicating the setup somewhat.

I had previously tried to reduce the standby power consumption by using an Eon Power Down Plug, which monitors the master socket and switches off the slave sockets when the master enters standby mode. This worked great: when the TV was off, my Xbox and surround speakers would be switched off automatically. The downside is that if I want to use the speakers to listen to music (they are also connected to a Sonos Connect) then either the TV needs to be on, or I need to change the plug over. Lastly, because I was running a combined frontend and backend, it wasn’t connected to the smart plug (otherwise it wouldn’t have been able to turn on to record). If you turned the TV off the frontend would still be on, preventing the backend from shutting down for several hours, until it went into idle mode.

I decided to solve these problems by using a Raspberry Pi 3 as a separate frontend, and switching the plugs around. As they run Linux, and have hardware decoding of MPEG2 and H.264, Raspberry Pis work great as MythTV frontends.

A common issue with Raspberry Pis is that if you don’t shut them down correctly their SD cards can become corrupt. If I connected the Pi to the slave plug socket as planned, it would be uncleanly shut down every time the TV was switched off, risking regular corruption. Fortunately Raspberry Pis support network booting, which means you can have the root filesystem mounted from somewhere else and don’t need an SD card at all. I already had a Synology NAS, which I love, and it makes a perfect host for the filesystem.

Sadly the network boot code that is built into the Pi’s ROM (and therefore isn’t updatable) is very particular and buggy. My router’s DHCP server doesn’t support the options required to make the Pi boot, so I switched to using a DHCP server on the Synology. While you can’t set the right options in the web frontend, you can edit the config files directly to make it work. The trouble with the Pi’s firmware is that the DHCP responses must arrive at just the right time: too quick or too slow and the Pi will fail to boot. One of the aspects I like the most about my Synology is that it has a very low power suspend mode. When it is in this mode it takes a little while to wake up and respond to network events. Waking up takes too long for the Pi, which gives up waiting for a response. While I wouldn’t have been happy about it, I could have disabled the low power mode to make the Pi work. Unfortunately, the second time the Pi boots the DHCP server responds too quickly (the first time, it has to check whether the IP address it is about to hand out is already in use). This response is too quick for the Pi, which again fails to boot.

The other option is to use an SD card with a kernel and a few supporting files on it to start the boot, and then use Linux’s built-in NFS root filesystem support. While this does require an SD card, it’s read-only, and after the kernel has been loaded the card is accessed very rarely, if ever, so the risk of corruption is minimal. After running with this setup for a few months, with the Pi being switched off several times a day, we’ve not had a single corruption of the SD card.

Setting this up is pretty straightforward: I just extracted a Minibian tarball onto my NAS and shared it via NFS. Next I copied the contents of /boot to my SD card, and modified cmdline.txt to include the following:

root=/dev/nfs nfsroot=192.168.1.72:/volume1/pi/minibian rw ip=dhcp

With this added it boots up reliably and can be shut down uncleanly with little or no risk of corruption.

Next up is making the MythTV frontend start automatically. This was done by adding the following to /etc/rc.local:

modprobe rc_rc6_mce
/usr/bin/ir-keytable -c -p RC-5,RC-6 -w /etc/rc_keymaps/rc6_mce
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
su -c "/home/andrew/autostart.sh" andrew &

The first two lines are required to set up my MCE IR receiver. The third line is needed to ensure that the Pi’s performance remains consistent and the CPU isn’t throttled down while you’re in the middle of an episode of Strictly. The final line just triggers another script that actually runs the frontend, run as me rather than as root.

#!/bin/bash

/home/andrew/wake_speakers &
startx /home/andrew/start_myth > ~/mythtv.log 2>&1

I’ll cover the first line in another post, but it just turns on the surround speakers and makes sure they’re in the right mode. The second line starts X, and runs my custom start script. This final script looks like this:

#!/bin/bash
QT_QPA_PLATFORM=xcb /usr/bin/mythfrontend -O libCECEnabled=0

While I managed to solve my key issues of making everything easier to switch on and off, and being able to listen to music without the TV being on while still having most devices switched fully off, I still have a few issues to solve. The main two are that boot-up speed is not as fast as I would like, and that the backend doesn’t cope well with the frontend exiting uncleanly (it waits 2.5 hours before turning off). I will cover these issues, and some others I had to solve, in a future post.


Photo of Network cables – mess 😀 by jerry john.


FitBit Ionic Review

Since I received my Pebble Steel back in 2014 I knew I never wanted to go back to using a normal watch. Having notifications and apps on my wrist was just too useful to me. I skipped the Pebble Time, but when the Time 2 was announced I happily put in a preorder. Unfortunately it was not to be: Pebble folded and was sold to FitBit. If Pebble wasn’t able to survive, then as an existing FitBit user having them as the buyer was probably the best option.

The idea of FitBit’s scale and expertise in building hardware, combined with Pebble’s excellent developer platform, was an enticing prospect. Rather than switch to an Apple Watch (or Android Wear, although that would have required a new phone) I decided to wait for the fruits of the combined company’s labour to be released.

I was getting a bit itchy, and my trusty Pebble Steel was showing its age, but eventually the FitBit Ionic was announced. A few days before the official release date my preorder arrived. It’s now been two weeks of wearing it nearly 24/7, so it seems like a reasonable time to post my thoughts.

First impressions of the hardware are excellent. Most reviews have criticised the looks, but I’m actually a fan. I like the way the bands transition into the watch itself, and sure it does just look like a black square when the screen is off, but that’s the case for all current smart watches. The buttons have a nice firmness to them, and the touchscreen is responsive. I have had some issues swiping to clear notifications, but I think that’s more to do with the touch targets in the software rather than the touchscreen, as I’ve not had issues elsewhere.

The key hardware concerns are the screen and battery life. The bottom line is that both are excellent. The screen is bright and clear, even in strong sunlight. I’ve not tested the battery life extensively because I’m wearing it essentially all day. I only take the Ionic off to shower, and it appears to only lose 15-20% per day, and a quick 15 minute charge per day is enough to keep it topped up.

The one big element I miss from my Pebble is the fact that the screen is not always on. If you do the lift-and-twist “I’m looking at my watch” gesture then it does turn on reliably, but it’s rare that I actually do that. Looking at my watch tends to be a much more subtle movement, and then it only recognises it occasionally. I have found myself pressing a button to turn the screen on, which after having an always on screen feels like a step backwards.

At the moment it’s probably too early to comment on the software side. The core features are all there and work well. Notifications from apps, texts and calls all work. I’ve been able to track various types of exercise, including bike rides which were tracked with the built in GPS and synced automatically to Strava. Heart rate monitoring and step count also appear reasonably accurate, as you would expect given FitBit’s history.

Unfortunately the key reason I bought the Ionic – that they had Pebble’s software team building the SDK – is not yet visible. There’s a small set of watch faces (I’m a fan of the Cinemagraph), and some built-in apps, but as yet there’s no sign of any externally developed apps. It’s early days though, and hopefully a developer community will form soon.

So, would I recommend the FitBit Ionic? Yes, but more on potential than current execution. The hardware appears to be there, it just needs a bit more time for the software to mature and apps to be developed.


FitBit Ionic photograph by FitBit.

Leading Without Deep Technical Knowledge

In my previous jobs, when I’ve been promoted to a leadership role it has been as a result of being the most experienced member of the team. Having a deep knowledge of the business, the code base and the technologies we were using meant I was already an authority on most topics the team needed to discuss, and could weigh in on a discussion with a well-formed and considered opinion.

When I changed companies at the end of last year I came to Ocado Technology as a team lead for an existing team, using a technology stack I wasn’t familiar with. In fact Ocado is a Java-based company, and I had never used Java before, so not only was I unfamiliar with the frameworks and libraries in use, I wasn’t even familiar with the language the code was written in!

Leading in a situation like this required a complete change in how I approached problems. When a stakeholder or the product owner came to me with a challenge, rather than immediately being able to respond with a rough solution, a vague estimate or a timeline, I needed to defer to my team, let them propose a solution and estimate it, and then fit it into our schedule. I might challenge them on some points, but it was their plan. I quickly needed to learn who knew the most about which systems, so I could get the right people involved in discussions early.

Previously, although I was able to give initial feedback on a potential project, I would still allow the team to discuss it, to propose alternative solutions and to estimate. The change is that now my contribution is much more about making sure the right people are talking, and helping to avoid misunderstandings when the business and my developers are accidentally talking at cross purposes.

While this change has definitely pushed me out of my comfort zone, it has also given me space to focus on a different area of my leadership skills. Ocado prides itself on its values, one of which is its servant leadership philosophy. By not having the knowledge to make decisions myself I am forced to empower my team to make decisions on how they want to solve problems.

It’s not just a case of facilitating discussions, though. I may not know the details of our code base, or the intricacies of a library, but my knowledge of software design patterns and systems architecture is valid whatever language is being used, and my opinions are as strong as ever. It is normal for developers to immediately jump to the simplest solution to a problem within the framework of the existing code. As an outsider, my first instinct is usually to take a step back, ask why the system is designed like that, and propose a bigger solution that resolves some technical debt, rather than focussing only on the issue at hand.

This change in role has made me realise that even when I was the most experienced in the code, language or framework, I should have made more of an effort to devolve the decision-making process. Not to stop expressing my opinions, or to remove myself from discussions, but to explicitly encourage others to contribute, and make sure they are taking part. This has resulted in people being more bought in to solutions, and has encouraged a much closer team with a greater feeling of ownership over our code. Being forced to make this change to my style has undoubtedly made me a better manager, and a better developer too.


Photo of this way or that by Robert Couse-Baker.