Deaf fish in a hearing pond, Part 2

This post is Part 2 in a series. You can read Part 1 here.

Functional Requirements (Hardware)

The first thing I did was outline the functional hardware requirements necessary to roll my own communications setup.

  • The total combined setup must be as compact and as lightweight as possible, preferably less than 4 pounds total.
  • The devices in the setup must be able to discover and communicate with one another wirelessly, with as little on-the-spot configuration as possible.
  • Each device in the setup must be able to go from sleep to app as quickly as possible.
  • Each device must have a physical keyboard with an intuitive layout (don’t put the punctuation keys in stupid places; I’m looking at you, Dell).
  • The entire setup must use only off-the-shelf parts and devices.

Functional Requirements (Software)

Next, I outlined the software requirements.

  • The user interface must be as obvious and intuitive as possible, with no verbal or written explanation needed for how to get going with it.
  • Must be able to go fullscreen and hide distracting OS elements.
  • Must be as close to 100% keyboard driven as possible, with the exception of a minimal number of touch buttons, ideally just one.
  • Must make as much use of the screen real estate as possible for the text elements, which should be large and readable.
  • Connection should happen automatically with no user input.
  • Must be able to run on any hardware, at least during prototyping.


The prototyping process for the software was the first thing I wanted to tackle, so for development purposes I was going to use whatever hardware I already had on hand.

I happened to already have a Windows 8 tablet and a couple of Bluetooth keyboards, so I designated the tablet as the testing machine. My iMac is the development machine and will simulate the other side of the conversation for now.

I also have a hilariously tiny HooToo TripMate Nano travel router and an Anker Astro E5 16,000 mAh portable power pack, which are both already permanent residents of my messenger bag. When connected to the Astro E5 for power, the TripMate Nano provides a wireless LAN that can run for a ridiculously long time, and the whole thing can be ignored at the bottom of my bag for all practical purposes. I can tether the Nano to my iPhone if I need WAN access for any device connected to the Nano.

I decided to use that setup as the connectivity backbone during the prototyping process–both communication devices would simply be connected to the Nano’s LAN to simulate an active device connection. Later on, this can be replaced by a direct Bluetooth connection or something similar.

First Prototype

For funsies, I did the first software prototype in Unity3D because it was a convenient way to do some research I needed to do for my day job anyway–specifically, getting to grips with Unity’s new UI system and experimenting with networking. I like to catch as many birds with a single stone as possible!

A weekend and some evenings later, I had a functional prototype that I tested on OS X and Windows. I deployed it to the tablet, fired it up, and had a number of pointless conversations with myself.


It worked great. The only problem I had with it was that Unity is overkill for something like this and my poor Dell tablet quickly heated up to the point where I could’ve probably fried up some eggs and sausage patties on it. But, that aside, the basic idea seemed sound, I’d learned some new Unity UI and networking tricks, and now it was time to move on to a serious prototype.

Second Prototype

The second prototype was done as a web app, to be served over the LAN hiding in my messenger bag. This way, I don’t have to screw around with building app packages for a bunch of devices during the prototype cycle; all I need is for each device to support Google Chrome. From there, I just create a Chrome app shortcut for the page on each device and set it to run fullscreen. Then, tapping that shortcut’s icon on the start screen pops the app up in fullscreen, all ready to go.

Chrome has fairly robust WebRTC support, which means you can directly connect two different machines and transfer information between them without it having to go through a server first. The only thing handled on a server is the initial connection setup, and after that it’s peer-to-peer data exchange between connected clients.

So, the second prototype uses WebRTC to pass data between clients. I whipped up a basic web app using Macaw and Atom, and tested that on multiple machines.
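I won’t walk through the prototype’s actual source here, but the connection flow amounts to something like this sketch. The function names and the JSON signaling envelope are hypothetical stand-ins; `RTCPeerConnection` and `RTCDataChannel` are the browser APIs Chrome provides:

```javascript
// Sketch of the WebRTC data path: a signaling channel relays the initial
// offer/answer and ICE candidates, then chat text flows peer-to-peer over
// an RTCDataChannel. Names here are illustrative, not the prototype's code.

function makeSignal(type, payload) {
  // Minimal envelope the signaling server relays between the two clients.
  return JSON.stringify({ type, payload });
}

async function startChat(sendSignal) {
  // Browser-only from here down: RTCPeerConnection is a WebRTC API.
  const pc = new RTCPeerConnection({ iceServers: [] }); // same LAN, so no STUN/TURN
  const channel = pc.createDataChannel('chat');
  channel.onmessage = (e) => console.log('peer says:', e.data);

  // Relay ICE candidates to the other client as they're discovered.
  pc.onicecandidate = (e) => {
    if (e.candidate) sendSignal(makeSignal('candidate', e.candidate));
  };

  // Create and send the offer; the answering side does the mirror image.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal(makeSignal('offer', pc.localDescription));
  return { pc, channel };
}
```

Once the channel opens on both ends, `channel.send(text)` is all the chat needs; nothing after the handshake touches the server.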

There’s an intro screen that tries to get the point across, with one button to start the chat.


That button takes you to the actual chat screen, which again tries to explain itself as succinctly as possible. At this point, the WebRTC connection is automatically made and the chat is initiated.


Here are a couple shots of the web app in action on the 8″ tablet, which has a Microsoft Wedge keyboard connected to it via Bluetooth. Any device on the portable LAN can serve this app if a lightweight HTTP server like Mongoose is running:

[Photo: the web app running on the 8″ tablet with the Wedge keyboard]

When a connection is made between both clients, the avatars are swapped out for little webcam thumbnails. That’s why you can see me taking the photo in both thumbnails.

[Photo: both clients connected, with the avatars replaced by webcam thumbnails]

The second prototype is much less demanding on the tablet than the first, and it works just as well as the first prototype did. It doesn’t turn the Venue 8 Pro into a George Foreman Tablet either, so eggs and sausages everywhere can now breathe a sigh of relief.

Next Steps

The next thing I want to do, since it’s all working now, is source a couple of cheap tablets and keyboard folios, then test this setup out in the wild. I’ll take Mrs E out for supper or coffee and we’ll see if any problems occur, then address them as needed. I’ll post the results in Part 3.

Update: Part 3 is up now! You can read it here.
