Interaction Designer
Typing Signs

Typing Signs is a series of ongoing projects to improve access to Sign Languages in both analogue and digital worlds.

 
 

The Why

Sign Languages like ASL (American Sign Language) are used by Deaf communities. ASL is a language distinct from English, with its own grammatical structures and with words that have no English equivalent (much as French has “déjà vu” and Japanese has “komorebi”).

Unfortunately, many sign languages lack official recognition in their countries. This limits the educational resources a person has access to, especially in remote areas without strong Deaf communities. Only 2% of the deaf population has access to education in sign language.

Written forms of signed languages are currently unintuitive and difficult to produce, and writing also reduces the expressiveness of a language. However, the ability to write in one's native language is important: it allows information to be mentally processed without the cognitive load of translating between two different languages.

The ability to write, print, and search ASL would help it propagate in everyday life. Beyond books, written signs can serve as an invisible encoding for digital technologies. This could help public kiosks recognize signing, and it could be used to create animated signing avatars for mediums like cartoons and video games. This in turn supports cultural awareness of, access to, and education in the language.


Process

The purpose of the Typing Signs project is to improve access to and the prevalence of Sign Languages like ASL. This endeavor has involved many discussions, experiments, and learnings.

Throughout the design process, I’ve moved cities, transitioning from remote user research to local engagements. I’ve also transitioned from designing for the masses to focusing narrowly on individuals. 

Whether I’m researching, designing, or coding up a prototype, the iterative design cycle remains the same: prototypes are rapidly shared, and the feedback offers critical insight for designing and planning future pilots. These experiments include apps, dictionaries, and investigations into designs built on computer vision and machine learning.

 

1. Early Learnings

When I began learning ASL, I whipped together a ‘Daily ASL’ Twitter bot to help me learn a new sign every day. The social interactions on this account were intriguing. (Built in Python on Heroku, with access to over 2,000 Sign-with-Robert GIFs.)

The 'ASL Words' iOS app was created so signs could be searched offline and on the go. Though other students have found it useful, many (including myself) sometimes have trouble with SignWriting's abstract, low-visual-fidelity notation. Such insights informed further projects. (Built with Xcode & Swift; the app presents 10,000 signs from the Sutton SignWriting database, both in the app and in OS-level Spotlight search.)
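For flavor, here is a minimal sketch of how a sign entry might be registered with OS-level Spotlight via the Core Spotlight framework; the SignEntry type, identifiers, and fields are illustrative, not the app's actual code.

```swift
import CoreSpotlight
import UniformTypeIdentifiers
import UIKit

// Illustrative record – the real app draws these from the Sutton SignWriting database.
struct SignEntry {
    let id: String          // e.g. "asl-hello"
    let gloss: String       // English gloss, e.g. "HELLO"
    let definition: String
    let thumbnail: UIImage? // rendered SignWriting symbol
}

/// Registers a batch of signs with Spotlight so they appear in OS-level search.
func indexInSpotlight(_ signs: [SignEntry]) {
    let items = signs.map { sign -> CSSearchableItem in
        let attributes = CSSearchableItemAttributeSet(contentType: .text)
        attributes.title = sign.gloss
        attributes.contentDescription = sign.definition
        attributes.thumbnailData = sign.thumbnail?.pngData()

        return CSSearchableItem(uniqueIdentifier: sign.id,
                                domainIdentifier: "signwriting",
                                attributeSet: attributes)
    }

    CSSearchableIndex.default().indexSearchableItems(items) { error in
        if let error = error {
            print("Spotlight indexing failed: \(error)")
        }
    }
}
```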

As the number of tools grew, I created a 'Typing Signs' website to house them. The portal held only functional prototypes (that is, ones not reliant on Wizard-of-Oz or other in-person user research methods).

There are multiple online dictionaries for searching ASL signs. After constantly having to search terms across several sites, I created a web-based search index: an ASL Lookup via English Gloss. (Glossing is writing one language in another, like using English words to try to write ASL.)

(The interface had to respond instantly to sustain a sense of exploration. The JS search frameworks available at the time showed results with delays of several hundred milliseconds, so a custom search function was written to allow instant lookup across over 12,000 ASL glosses.)
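The original lookup was built for the web, but the underlying idea translates to any language. As a rough sketch (shown here in Swift to match the later iOS work, with assumed types and sizes): precompute a prefix index in memory so each keystroke costs a dictionary hit rather than a scan of all 12,000 glosses.

```swift
/// A tiny in-memory prefix index: every prefix of every gloss maps to the
/// glosses that start with it, so a lookup per keystroke is a single dictionary hit.
struct GlossIndex {
    private var buckets: [String: [String]] = [:]
    private let maxPrefix: Int

    init(glosses: [String], maxPrefix: Int = 8) {
        self.maxPrefix = maxPrefix
        for gloss in glosses {
            let lowered = gloss.lowercased()
            guard !lowered.isEmpty else { continue }
            for length in 1...min(maxPrefix, lowered.count) {
                buckets[String(lowered.prefix(length)), default: []].append(gloss)
            }
        }
    }

    func search(_ query: String, limit: Int = 20) -> [String] {
        let lowered = query.lowercased()
        guard !lowered.isEmpty else { return [] }
        let candidates = buckets[String(lowered.prefix(maxPrefix))] ?? []
        return Array(candidates.lazy
            .filter { $0.lowercased().hasPrefix(lowered) }
            .prefix(limit))
    }
}

// Usage: build once at launch, then query on every keystroke.
let index = GlossIndex(glosses: ["HELLO", "HELP", "HOUSE", "THANK-YOU"])
print(index.search("he"))   // ["HELLO", "HELP"]
```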

Currently, ASL dictionaries can only be searched using English, which is a significant obstacle to language acquisition. In most written languages you can see a word you don't know and easily look it up; there's no equivalent for ASL right now.

Therefore, an experimental keyboard was created to perform 'ASL Lookup by Sign'. This experiment proved it's possible to look up signs by handshape, motion, and location, though future explorations are required to make the interaction more intuitive. It also showed that ranking results by the popularity of individual signs would increase the usability of such a tool.
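Conceptually, that lookup boils down to filtering sign records by the features selected so far and ranking what remains by how common each sign is. A hedged sketch, with made-up fields and data rather than the keyboard's actual model:

```swift
// Illustrative only: the real keyboard's feature set and data differ.
struct SignRecord {
    let gloss: String
    let handshapes: Set<String>   // e.g. ["flat-B", "1"]
    let locations: Set<String>    // e.g. ["chin", "neutral-space"]
    let motions: Set<String>      // e.g. ["arc-forward"]
    let popularity: Double        // relative frequency of use, 0...1
}

/// Returns signs matching every feature the user has selected so far,
/// ranked so more popular signs surface first.
func lookupBySign(handshape: String?, location: String?, motion: String?,
                  in dictionary: [SignRecord]) -> [SignRecord] {
    dictionary
        .filter { record in
            (handshape.map { record.handshapes.contains($0) } ?? true) &&
            (location.map { record.locations.contains($0) } ?? true) &&
            (motion.map { record.motions.contains($0) } ?? true)
        }
        .sorted { $0.popularity > $1.popularity }
}
```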

 

2. Explorations of Horizons

As a thought experiment, I wondered whether it would be possible to look up signs on the go by actually signing them. Exploratory tests proved this is possible. (First leveraging deep neural networks, computer vision, and the iPhone 7's A10 chip; later tested with CoreML.)
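A minimal sketch of the shape of those tests, using Vision with a CoreML classifier; SignClassifier is a placeholder name for an Xcode-generated model class, not the model actually trained:

```swift
import Vision
import CoreML

/// Classifies a single camera frame with a (hypothetical) sign-classification model.
/// "SignClassifier" is a placeholder, not the model used in the original tests.
func classifySign(in pixelBuffer: CVPixelBuffer,
                  completion: @escaping (String?) -> Void) {
    guard let coreMLModel = try? SignClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)   // e.g. the gloss of the most likely sign
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```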

An iMessage app was created for sharing animated stickers (on iPhone & Apple Watch). The app also allowed messages to be composed using fingerspelling. (Created by modeling in Maya, animating and rendering in Cinema 4D, and coding in Swift.)
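A rough sketch of how the sticker side might look with Apple's Messages framework; asset names and descriptions here are illustrative:

```swift
import Messages

final class SignStickersViewController: MSMessagesAppViewController {

    /// Loads a bundled animated sticker and drops it into the active conversation.
    /// The asset name (e.g. "hello") is illustrative.
    func sendSticker(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "gif"),
              let sticker = try? MSSticker(contentsOfFileURL: url,
                                           localizedDescription: "ASL sign: \(name)") else {
            return
        }
        activeConversation?.insert(sticker) { error in
            if let error = error {
                print("Could not insert sticker: \(error)")
            }
        }
    }
}
```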

The increased fidelity (in animations and visuals) resulted in a stark increase in users compared to the apps depicting SignWriting. However, the manual creation process meant there were only a few signs, and users were always requesting more. Procedural animation could be an opportunity to bridge the gap between quantity and quality.

A way to display SignWriting throughout all of iOS was also found. This means SignWriting can be displayed in Safari, notifications, alarms, texts, calendars, reminders, files, etc. It works at the operating-system level and can be configured on anyone's device (no jailbreak needed).

Another fun technical exploration determined it would be quick and easy to translate English words to ASL (via text, speech, or camera). However, further reflection made clear there weren't enough real benefits to outweigh the many ways this could be abused. The learning: just because we can doesn't mean we should.

 
 

Whilst designing for groups and pursuing technical or aesthetic explorations was fun, I've grown to realize that in this space, designing with a single person (or two) in mind is much more powerful.

Solutions for a single person are often sharable with others, so by focusing intently on a single friend or family member, we can still create something that benefits everyone.

 

3. Typing ASL

The ASL Text app was created so that someone learning ASL could take notes and send the occasional message to someone else.

(This was first prototyped on the web. The TestFlight app uses the SignWriting database for definitions and symbols. The signs are procedurally colored and drawn using CoreGraphics & ImageMagick. Glyphs, a font tool, packages everything as a color font.)
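As an illustration of the procedural coloring step (the palette and categories here are assumptions, not the shipping app's), a monochrome SignWriting symbol can be tinted by its category before the layers are composited into a sign:

```swift
import UIKit

/// Illustrative only: tints a monochrome SignWriting symbol by its category
/// (hands, movement, face, etc.) before layers are composited into a sign.
func tintedSymbol(_ symbol: UIImage, category: String) -> UIImage {
    // Assumed palette – the shipping app's colors differ.
    let palette: [String: UIColor] = [
        "hand": .systemOrange,
        "movement": .systemBlue,
        "face": .systemGreen
    ]
    let color = palette[category, default: .label]

    let renderer = UIGraphicsImageRenderer(size: symbol.size)
    return renderer.image { context in
        let rect = CGRect(origin: .zero, size: symbol.size)
        // Draw the symbol, then fill only where it has alpha (standard tinting trick).
        symbol.draw(in: rect)
        context.cgContext.setBlendMode(.sourceIn)
        color.setFill()
        context.cgContext.fill(rect)
    }
}
```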

Signs are suggested based on the probability they'll be used, and anonymized data helps predict the next likely signs in a sentence.
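A minimal sketch of the prediction idea, using simple bigram counts over anonymized sentences of glosses; this illustrates the approach rather than the app's actual model:

```swift
/// A minimal next-sign predictor built from bigram counts over (anonymized)
/// sentences of glosses. Not the app's actual model – just the idea.
struct SignPredictor {
    private var bigramCounts: [String: [String: Int]] = [:]

    init(sentences: [[String]]) {
        for sentence in sentences {
            for (current, next) in zip(sentence, sentence.dropFirst()) {
                bigramCounts[current, default: [:]][next, default: 0] += 1
            }
        }
    }

    /// The most probable signs to follow `sign`, ordered by observed frequency.
    func suggestions(after sign: String, limit: Int = 3) -> [String] {
        guard let counts = bigramCounts[sign] else { return [] }
        return counts.sorted { $0.value > $1.value }
                     .prefix(limit)
                     .map { $0.key }
    }
}

// Usage with toy data:
let predictor = SignPredictor(sentences: [["ME", "GO", "STORE"], ["ME", "GO", "HOME"]])
print(predictor.suggestions(after: "GO"))   // ["STORE", "HOME"] (ties in either order)
```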

Whilst ASL Text is currently great for taking simple notes or journaling, sessions with users show there's still a lot of room to improve:

  • There are only 10,000 signs and no way yet to add your own.

  • Needing to use English gloss to look up words continually pulls writers out of the ASL frame of mind.

  • Though the procedural coloring of SignWriting helps, understandability remains an issue for the many people new to SignWriting.


Next Steps

A major next step would be making it easy for Deaf artists to create their own fonts. One of my favorite comic books, Hawkeye (2012, by Matt Fraction & David Aja), features an issue told almost entirely in ASL. Seeing ASL in popular media is very rare, though it shouldn't be. Introducing tools for Deaf creatives to make alternatives to the SignWriting font could increase ASL's prevalence in pop culture.

An easier way to type ASL also has invisible benefits: by allowing a computer to understand ASL, we can create tools that more easily produce ASL animations for TV and games.

(All projects, source code, fonts, etc. created here are intended to be open to the community. Tutorials are being written so anything created here can be learnt and reproduced.)

(Hawkeye using ASL in a comic book)

(Animated mouse signing, from the game Moss)


Design Beacon

Principles of Human-Centered Design and Interaction Design have largely remained the same over the last 30 years. However, as with the advent of the printing press and the Graphical User Interface (GUI), the mediums our services reside in have changed drastically. This decade has seen the rise of smartphones, crowdsourcing, and big data (artifacts not present in earlier attempts at writing signed languages).

Leaps in Deep Learning and mobile processors in the last half decade are precursors to the astronomically different services we can now design and create. In this case, it is now possible to design for an evolving, multi-dialect, 4-dimensional language like ASL. We can do things like accurately match searches to individual user-intent, process vastly more feedback, transcribe emotions and signs from camera feeds in real time, and much more. 


Overall

Sign Languages are 4-dimensional languages with conceptual origins; they open up dimensions of thinking that aren't easily found in phonetic languages such as English. Typing Signs is an ongoing series of projects to ease access to signed languages, in an endeavor to open up new worlds.

As Ludwig Wittgenstein put it: "The limits of my language mean the limits of my world."
