
Deaf fish in a hearing pond, Part 3

This post is Part 3 in a series. You can read Part 2 here.

Server Hardware

I’m going to start off by saying that the Raspberry Pi 2 Model B is the cutest little thing ever, and at only $35 it’s an amazing little computer. I’ve got one hooked up to my 16,000 mAh Anker portable battery along with the TripMate Nano travel router. I was expecting it to be larger, since it looks bigger in Internet photos, but it’s tiny. I installed the Pi in a small black plastic enclosure, and the whole thing is barely the size of my wallet.

Raspberry Pi

I can’t remember the last time I was this excited by a piece of hardware.

I set things up so that I could remote into the Pi using SSH from my phone, as well as copy files over from the Mac via SFTP. I’m still getting to grips with Linux as a desktop OS, but so far it hasn’t given me much trouble. In fact, I kind of enjoy managing things from a terminal/command prompt, since it brings back nostalgic memories of when I first started messing around with computers.

There was a brief hiccup where I couldn’t find the Pi on the network, but I fixed that by adding a crontab entry that pings the router once a minute; after that, I could reliably make an SSH connection to administer it. I then installed Node.js on the Pi, whipped up a quickie test socket server using Socket.IO, and did some test connections from other machines. Success!
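For reference, a bare-bones Socket.IO relay along the lines of that test looks something like the sketch below. This is a reconstruction rather than my actual test code, and the port and event names are placeholders:

```javascript
// Minimal Socket.IO relay server (sketch, not the actual test code).
// Assumes Node.js with the socket.io package installed: npm install socket.io
var io = require('socket.io')(3000); // port 3000 is an arbitrary choice

io.on('connection', function (socket) {
  console.log('client connected:', socket.id);

  // Relay whatever one client sends to every other connected client.
  socket.on('message', function (text) {
    socket.broadcast.emit('message', text);
  });

  socket.on('disconnect', function () {
    console.log('client disconnected:', socket.id);
  });
});
```

A client just connects to the Pi’s address and listens for the same event, which was enough to confirm that machines on the TripMate’s LAN could reach the server.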

The Client Hardware

I mentioned in the previous post that I was going to source a couple of cheap tablets and keyboard folios to serve as the client hardware. I did a lot of window shopping and review reading over several days before making a decision. The big thing with cheap tablets is that below a certain price point, “you get what you pay for” becomes a distressing truth, so there’s a lot of chaff to sift through. I went for a pair of HP Stream 7 Signature Edition tablets, which were on sale for $79.

Tablets

The keyboard folios were a little harder to source. Reliability is very important to me: it just won’t do for the keyboards to spontaneously disconnect every 5 minutes or misbehave in the middle of a conversation, and so many keyboard folio vendors treat the keyboard as a cheap thrown-in extra rather than an important part of the package. The other thing is that a lot of those cheap folios put size before layout, so you end up with stupid issues like having to press Fn+L for an apostrophe, and nothing’s where you expect it to be.

I decided on 7″ Zagg Auto-Fit folios for them. They’re rigid clamshells with a spring-loaded top cover that holds the tablet in place, which makes the things look like little bitty baby laptops. The key layout is good (much better than on my Dell keyboard folio), and the build quality is solid. Not exactly premium, but good.

I changed the Bluetooth power management settings on both tablets so that Windows never turns off the radio to save power, and I also allowed the Bluetooth radio to wake up the device. This does two things: pressing a key wakes a sleeping tablet, and the keyboard no longer gets disconnected because the radio has powered down.

Software

I went through three prototype iterations for the client application. The first was mocked up in Unity 3D, since it was a good opportunity to get to grips with the new Unity UI system for my day job while I was at it. The second was done as a WebRTC web app displayed in a fullscreen instance of Google Chrome, because I’m a sucker for bleeding-edge web stuff and wanted to take it for a test drive.

For the third prototype, I finally got serious and fired up Visual Studio to do a proper native Windows app. The last time I did any Windows desktop application development was in 2010, I believe, and I kinda like how far things have come since then.

I started out doing it as a Modern UI app (formerly known as a Metro app), but I changed my mind in a hurry when I saw what a gongshow that was shaping up to be. I’m not really sure what Microsoft was thinking there, but maybe they’ll get it right with Windows 10. Once I switched targets to a desktop application, things started to shape up beautifully.

Dev setup!

I’ve now got the tablets running native Windows desktop apps that talk to the Raspberry Pi server. In the shot above, I’m running Visual Studio on my big XPS 18 tablet, I have the Raspberry Pi temporarily plugged into one of my work monitors, and the tablets are running deployed instances of the client application. At one point, I was tapping away on 4 keyboards!

The setup is getting closer and closer to what I’d consider stable enough to use in a production environment. There are a few more software and hardware tweaks I want to make, and then I’ll move on to more extensive field testing.

Deaf fish in a hearing pond, Part 2

This post is Part 2 in a series. You can read Part 1 here.

Functional Requirements (Hardware)

The first thing I did was outline the functional hardware requirements necessary to roll my own communications setup.

  • The total combined setup must be as compact and as lightweight as possible, preferably less than 4 pounds total.
  • Each device in the setup must be able to discover and communicate with the others wirelessly, with as little on-the-spot configuration as possible.
  • Each device in the setup must be able to go from sleep to app as quickly as possible.
  • Each device must have a physical keyboard with an intuitive layout (don’t put the punctuation keys in stupid places; I’m looking at you, Dell).
  • The entire setup must use only off-the-shelf parts and devices.

Functional Requirements (Software)

Next, I outlined the software requirements.

  • The user interface must be as obvious and intuitive as possible, with no verbal or written explanation needed for how to get going with it.
  • Must be able to go fullscreen and hide distracting OS elements.
  • Must be as close to 100% keyboard driven as possible, with the exception of a minimal number of touch buttons, ideally just one.
  • Must make as much use of the screen real estate as possible for the text elements, which should be large and readable.
  • Connection should happen automatically with no user input.
  • Must be able to run on any hardware, at least during prototyping.

Inventory

Software prototyping was the first thing I wanted to tackle, so for development purposes I planned to use whatever hardware I already had on hand.

I happened to already have a Windows 8 tablet and a couple of Bluetooth keyboards, so I designated the tablet as the testing machine. My iMac is the development machine and will simulate the other side of the conversation for now.

I also have a hilariously tiny HooToo TripMate Nano travel router and an Anker Astro E5 16,000 mAh portable power pack, which are both already permanent residents of my messenger bag. When connected to the Astro E5 for power, the TripMate Nano provides a wireless LAN that can run for a ridiculously long time, and the whole thing can be ignored at the bottom of my bag for all practical purposes. I can tether the Nano to my iPhone if I need WAN access for any device connected to the Nano.

I decided to use that setup as the connectivity backbone during the prototyping process–both communication devices would simply be connected to the Nano’s LAN to simulate an active device connection. Later on, this can be replaced by a direct Bluetooth connection or something similar.

First Prototype

For funsies, I did the first software prototype in Unity3D because it was a convenient way to do some research I needed to do for my day job anyway–specifically, getting to grips with Unity’s new UI system and experimenting with networking. I like to catch as many birds with a single stone as possible!

A weekend and some evenings later, I had a functional prototype that I tested on OS X and Windows. I deployed it to the tablet, fired it up, and had a number of pointless conversations with myself.

First prototype screenshot

It worked great. The only problem was that Unity is overkill for something like this, and my poor Dell tablet quickly heated up to the point where I probably could’ve fried up some eggs and sausage patties on it. That aside, the basic idea seemed sound, I’d learned some new Unity UI and networking tricks, and it was time to move on to a serious prototype.

Second Prototype

The second prototype was done as a web app, served over the LAN hiding in my messenger bag. This way, I don’t have to screw around with building app packages for a bunch of devices during the prototype cycle; all I need is for each device to support Google Chrome. From there, I just create an app shortcut for the page on each device and set it to run fullscreen. When I tap that shortcut’s icon on the Start screen, the app pops up fullscreen, ready to go.

Chrome has fairly robust WebRTC support, which means you can directly connect two different machines and transfer information between them without the data having to pass through a server. The only thing a server handles is the initial connection setup; after that, it’s peer-to-peer data exchange between the connected clients.
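To give a rough idea of the shape of that, here’s a sketch of a browser-side data channel setup. It isn’t my actual prototype code; sendToPeer() and onSignal() are hypothetical stand-ins for the signaling layer (e.g. a tiny Socket.IO relay) that only carries the offer/answer and ICE candidates between the two clients:

```javascript
// Rough sketch of a WebRTC data channel between two browsers.
// sendToPeer()/onSignal() are hypothetical signaling helpers.
const pc = new RTCPeerConnection();
let channel;

function wireUp(ch) {
  channel = ch;
  channel.onmessage = (e) => console.log('peer typed:', e.data);
}

// Callee side: the channel arrives via ondatachannel.
pc.ondatachannel = (e) => wireUp(e.channel);

// Trickle ICE candidates to the other side as they are discovered.
pc.onicecandidate = (e) => {
  if (e.candidate) sendToPeer({ candidate: e.candidate });
};

// Caller side: create the channel and an offer, ship the offer over signaling.
async function startCall() {
  wireUp(pc.createDataChannel('chat'));
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ sdp: pc.localDescription });
}

// Both sides: handle signaling messages relayed from the other client.
onSignal(async (msg) => {
  if (msg.sdp) {
    await pc.setRemoteDescription(msg.sdp);
    if (msg.sdp.type === 'offer') {
      await pc.setLocalDescription(await pc.createAnswer());
      sendToPeer({ sdp: pc.localDescription });
    }
  } else if (msg.candidate) {
    await pc.addIceCandidate(msg.candidate);
  }
});

// Once the channel is open, each keystroke can be sent the moment it happens:
//   channel.send(typedCharacter);
```

Once the channel reports open, every keystroke gets pushed through channel.send() the instant it happens, which is what gives that realtime, watching-each-other-type feel.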

So, the second prototype uses WebRTC to pass data between clients. I whipped up a basic web app using Macaw and Atom, and tested that on multiple machines.

There’s an intro screen that tries to get the point across, with one button to start the chat.

Prototype 2: intro screen

That button takes you to the actual chat screen, which again tries to explain itself as succinctly as possible. At this point, the WebRTC connection is automatically made and the chat is initiated.

Prototype 2: chat screen

Here are a couple of shots of the web app in action on the 8″ tablet, which has a Microsoft Wedge keyboard connected to it via Bluetooth. Any device on the portable LAN can serve the app, as long as a lightweight HTTP server like Mongoose is running:

The web app running on the 8″ tablet

When a connection is made between both clients, the avatars are swapped out with little webcam thumbnails. That’s why you can see me taking the photo in both thumbnails.

Webcam thumbnails after both clients connect
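The thumbnail swap itself is simple browser territory. Roughly speaking (the element IDs below are made up for illustration, not the prototype’s actual markup), it’s getUserMedia plus a visibility toggle:

```javascript
// Sketch of swapping a placeholder avatar for a live webcam thumbnail
// once the peer connection is up. Element IDs are hypothetical.
async function showWebcamThumbnail() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 160, height: 120 }, // small, thumbnail-sized capture
    audio: false,
  });

  const thumb = document.getElementById('local-thumbnail'); // a <video> element
  thumb.srcObject = stream;
  await thumb.play();

  // Hide the static avatar, reveal the live thumbnail.
  document.getElementById('local-avatar').hidden = true;
  thumb.hidden = false;

  // Adding the same stream to the RTCPeerConnection is what lets the
  // other side show a remote thumbnail too, e.g.:
  //   stream.getTracks().forEach((track) => pc.addTrack(track, stream));
}
```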

The second prototype is much less demanding on the tablet than the first, and it works just as well as the first prototype did. It doesn’t turn the Venue 8 Pro into a George Foreman Tablet either, so eggs and sausages everywhere can now breathe a sigh of relief.

Next Steps

The next thing I want to do, since it’s all working now, is source a couple of cheap tablets and keyboard folios, then test this setup out in the wild. I’ll take Mrs E out for supper or coffee and we’ll see if any problems occur, then address them as needed. I’ll post the results in Part 3.

Update: Part 3 is up now! You can read it here.

Deaf fish in a hearing pond, Part 1

Introduction

As a deaf person who works in a predominantly hearing environment, I have a keen interest in anything that helps me break down communication barriers and engage with hearing people on as close to an equal footing as possible. So, every once in a while, I survey the market for accessibility aids for deaf people to see what’s new.

These accessibility aids range from physical devices to services and software that help deaf people communicate with hearing people. The common thread I notice during each review is that pretty much everything I come across has one or more show-stopping problems. However, along with the show-stoppers, there’s usually at least one good idea behind each of those otherwise hilariously flawed accessibility aids.

I was particularly intrigued by the UbiDuo 2, a device that lets people chat face to face in realtime.

UbiDuo 2

It’s a clamshell device that unfolds and splits into two separate units that communicate over a wireless ZigBee connection. Each person types into their unit, and the other can see each keystroke happening in realtime. This makes for a much more fluid conversational experience than, say, paper and pen.

The problem with IM clients and conversing on paper is that there’s an inherent lag in the communication process. Using an IM application, one person types, and the other person twiddles their thumbs waiting for the typing party to finish and send. With a notepad, one person twiddles their thumbs waiting for the other party to finish scribbling and hand over the notepad. It’s just…clunky.

With something like the UbiDuo 2, there’s no wait. You’re watching each keystroke happen and seeing the other person’s thoughts being composed in realtime. It may not sound like a big difference, but it is. If you’re a hearing person, imagine how silly things would be if you could only communicate with other hearing people by dictating your message into a tape recorder and passing the recorder to the other party, who then listens to the tape and records their response. That’s the one brilliant point in the UbiDuo’s favor.

In the flaws column, there are a number of significant issues that limit its practicality for me. The unit is about the width and length of a 15″ laptop and weighs 4 pounds, and then there’s the $1,995.00 MSRP. I just can’t bring myself to drop that kind of dough on a unitasker that won’t fit into my messenger bag along with the rest of my stuff, and I’m not about to load myself down with extra bags like some kind of tourist.

That got me to thinking, and since I like challenges, I decided to see if I could roll my own functionally equivalent setup for a fraction of the size and cost, and have some fun with it in the process. I’ll be documenting the journey here as I go along.

Update: Part 2 is up now! You can read it here.