Typing Signs

Typing Signs is a series of ongoing projects to improve access to Sign Languages in both analogue and digital worlds.

 
 

The Why

Sign Languages like American Sign Language (ASL) are created by Deaf communities. ASL is distinct from English: it has its own grammatical structures and words with no English equivalent (much like the French “déjà vu” or the Japanese “komorebi”).

Unfortunately, many sign languages lack official recognition in their countries. This reduces the educational resources a person has access to, especially in remote areas without strong communities. Only 2% of the deaf population have access to education in sign language. 

Written forms of signed languages are currently unintuitive and difficult to produce. Writing also reduces the expressiveness of a language. However, the ability to write in one's native language is important, as it allows information to be mentally processed without the cognitive load of translating between two different ones. 

The ability to write, print, and search ASL would facilitate its propagation in everyday life. In addition to books, written signs can serve as an invisible encoding for digital technologies. This could help public kiosks recognize signing. It could also be useful in creating animated avatars for mediums like cartoons and video games. This in turn facilitates cultural awareness, access, and education in the language.


Process

The purpose of the Typing Signs project is to improve access and prevalence of Sign Languages like ASL. This endeavor has involved many discussions, experiments, and learnings. 

Throughout this initiative, I've moved cities, transitioned from remote user research to local engagements, and narrowed the design focus to working with individuals.

Whether it’s researching, designing, or coding up a prototype – the iterative design cycle remains the same. Prototypes are rapidly shared to offer critical insight into designing and planning future pilots. These experiments include apps, dictionaries, and investigations into computer vision and machine learning as design materials. 

 

1. Early Experiments & Learnings

When I began learning ASL, I whipped up a ‘Daily ASL’ Twitter bot to help me learn a new sign every day. Social interactions on this account were intriguing. (Built in Python on Heroku, with access to over 2,000 Sign-with-Robert gifs.)

The 'ASL Words' iOS app was created so signs could be searched offline and on the go. Though other students have found it useful, many have trouble with the abstract, low-visual-fidelity look of SignWriting. Such insights informed further projects. (Built with Xcode & Swift to present 10,000 signs from the Sutton SignWriting database, both in the app and in OS-level Spotlight search.)
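The Spotlight integration leans on Apple's Core Spotlight framework. As a rough sketch (not the app's actual code; the gloss and image data below are placeholders), a single sign entry can be indexed like this:

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Index one sign so it shows up in OS-level Spotlight search (iOS 14+ API).
func indexSign(gloss: String, signWritingPNG: Data) {
    let attributes = CSSearchableItemAttributeSet(contentType: .image)
    attributes.title = gloss
    attributes.contentDescription = "ASL sign for \(gloss), shown in SignWriting"
    attributes.thumbnailData = signWritingPNG

    let item = CSSearchableItem(uniqueIdentifier: "sign-\(gloss)",
                                domainIdentifier: "signs",
                                attributeSet: attributes)

    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error { print("Indexing failed: \(error)") }
    }
}
```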

Website: Typing Signs

As the number of tools grew, the 'Typing Signs' website was created to house them. The portal held functional prototypes (that is, ones not reliant on wizard-of-oz or other in-person user research methods).

There are multiple online dictionaries for searching ASL signs. After constantly having to search terms across several sites, a web-based search index was created: an ASL lookup via English gloss. (Glossing is writing one language in another, like using English words to try and write ASL.)

(We like to talk a lot about pixel precision, but in this case, millisecond precision matters. The interface had to respond instantly in order to facilitate a sense of exploration. All existing JS frameworks at the time showed search results with delays of several hundred milliseconds. Hence a custom search function was created to allow instant lookup across nearly 50,000 depictions of ASL signs.)
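The production lookup was custom JavaScript, but the underlying idea is framework-agnostic: precompute a prefix index once, so every keystroke becomes a single dictionary lookup instead of a scan over ~50,000 entries. A minimal sketch of that idea (shown in Swift with hypothetical types, not the site's actual code):

```swift
struct SignEntry {
    let gloss: String      // English gloss, e.g. "HELLO"
    let imageName: String  // SignWriting depiction to display
}

// Build once when the dictionary loads: map every prefix of every gloss
// to the entries that match it.
func buildPrefixIndex(_ entries: [SignEntry]) -> [String: [SignEntry]] {
    var index: [String: [SignEntry]] = [:]
    for entry in entries {
        let gloss = entry.gloss.lowercased()
        for end in gloss.indices {
            index[String(gloss[...end]), default: []].append(entry)
        }
    }
    return index
}

// Per keystroke: one hash lookup, so results can appear within a frame or two.
func search(_ query: String, in index: [String: [SignEntry]]) -> [SignEntry] {
    index[query.lowercased()] ?? []
}
```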

Currently, ASL dictionaries can only be searched using English. This is a significant obstacle to language acquisition. In major written languages, you can see a word you don't know and easily look it up; there's no equivalent right now in ASL.

Therefore, an experimental keyboard was created to perform 'ASL Lookup by Sign'. This experiment proved it's possible to look up signs by handshapes, motions, and locations. Future explorations are required to increase intuitiveness. It was also found that knowing the popularity of individual signs would help increase the usability of such a tool.
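To give a flavor of what 'lookup by sign' means in data terms, here is a hypothetical sketch (not the prototype's actual model): each sign carries handshape, location, and motion tags, and the keyboard filters on whichever features the user has selected so far.

```swift
struct SignRecord {
    let gloss: String
    let handshapes: Set<String>   // e.g. "flat-B", "index"
    let locations: Set<String>    // e.g. "chin", "chest"
    let motions: Set<String>      // e.g. "circular", "downward"
}

struct SignQuery {
    var handshape: String?
    var location: String?
    var motion: String?
}

// Only the features the user has chosen constrain the results;
// unset features match everything.
func lookup(_ query: SignQuery, in signs: [SignRecord]) -> [SignRecord] {
    signs.filter { sign in
        (query.handshape.map { sign.handshapes.contains($0) } ?? true) &&
        (query.location.map { sign.locations.contains($0) } ?? true) &&
        (query.motion.map { sign.motions.contains($0) } ?? true)
    }
}
```

Popularity data, once available, would simply become the sort order of the filtered results.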

 

2. Explorations of Horizons

As a thought experiment, we wondered about the possibility of looking up signs on the go by actually, physically signing. Exploratory tests proved this is possible. (First leveraging deep neural networks, computer vision, and the iPhone's A-series chip; later tested with CoreML.)
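A rough sketch of what the CoreML side of such a test can look like (not the actual code): Apple's Vision framework runs a trained classifier, here passed in as a generic MLModel, over a single camera frame.

```swift
import Vision
import CoreML
import CoreGraphics

// Run a (hypothetical) handshape classifier over a single camera frame.
func classifyHandshape(in frame: CGImage, using model: MLModel) throws {
    let visionModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation],
              let top = observations.first else { return }
        print("Handshape: \(top.identifier), confidence: \(top.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])
}
```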

 

An iMessage app was created for sharing animated stickers (on iPhone & Apple Watch). People could send the stickers as-is or finger-spell custom messages. (Created by modeling in Maya, animating and rendering in Cinema 4D, and coding in Swift.)
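On the code side, Apple's Messages framework does most of the heavy lifting. A stripped-down sketch (the file name below is a placeholder; the real app bundles each rendered sign animation):

```swift
import Messages

// A minimal sticker browser: load a bundled GIF as an MSSticker
// and hand it to the browser view.
class SignStickerViewController: MSStickerBrowserViewController {
    private var stickers: [MSSticker] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        if let url = Bundle.main.url(forResource: "hello", withExtension: "gif"),
           let sticker = try? MSSticker(contentsOfFileURL: url,
                                        localizedDescription: "HELLO in ASL") {
            stickers.append(sticker)
        }
    }

    override func numberOfStickers(in stickerBrowserView: MSStickerBrowserView) -> Int {
        stickers.count
    }

    override func stickerBrowserView(_ stickerBrowserView: MSStickerBrowserView,
                                     stickerAt index: Int) -> MSSticker {
        stickers[index]
    }
}
```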

The increased fidelity (in animations and visuals) resulted in a stark increase in users compared to apps depicting SignWriting. However, the manual creation process meant there were only a few signs – with users always requesting more. Procedural animation could be an opportunity to bridge the gap between quantity and quality.

An official method for displaying SignWriting throughout all of iOS was also created. This means SignWriting can be displayed in Safari, notifications, alarms, texts, calendars, reminders, files, etc. This works at the operating-system level and can be enabled on anyone's device (no jailbreak needed).

A way to convert signing videos into understandable images was also explored. These visual transcripts make it much easier to scan and study ASL in motion. They also allow ASL videos to be printed as study materials.
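One simple way to produce such a transcript (a naive sketch of the general idea, not necessarily how this exploration worked) is to sample frames from the video at a fixed interval and lay them out as a strip:

```swift
import AVFoundation
import UIKit

// Sample one frame every `interval` seconds from a signing video.
func transcriptFrames(from videoURL: URL, every interval: Double) throws -> [UIImage] {
    let asset = AVAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true

    var frames: [UIImage] = []
    var time = 0.0
    let duration = CMTimeGetSeconds(asset.duration)
    while time < duration {
        let cgImage = try generator.copyCGImage(
            at: CMTime(seconds: time, preferredTimescale: 600),
            actualTime: nil)
        frames.append(UIImage(cgImage: cgImage))
        time += interval
    }
    return frames
}
```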

Another fun technical exploration determined it'd be incredibly quick and easy to perform translation of English words to ASL (via text, speech, or camera). However, further reflection determined there weren't enough real benefits to outweigh the many ways this could be abused. The lesson: just because we can doesn't mean we should.

 

We’ve learnt that standard commercial Human-Centered Design practices are excellent processes for targeting mass audiences, but need to be considered carefully with specific audiences. In this space, designing with a specific person, or group, can be incredibly powerful.

 

3. Typing ASL

 

The beginnings of a scalable, 3D-rendered typeface.

The ASL Text app was created so that someone learning ASL could take notes and send an occasional message to someone else.

(This was first prototyped on the web. The TestFlight app uses the SignWriting database for definitions and symbols. The signs are procedurally colored and drawn using CoreGraphics & ImageMagick. Glyphs, a font tool, packages everything as a color font.)
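As an illustration of the procedural coloring step (a sketch of the general technique, not the app's exact pipeline), a SignWriting symbol stored as an alpha mask can be tinted with CoreGraphics like this:

```swift
import UIKit

// Tint a SignWriting symbol: fill with the category color,
// then keep that fill only where the symbol's alpha channel is opaque.
func tintedSymbol(_ symbol: UIImage, color: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: symbol.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: symbol.size)
        color.setFill()
        UIRectFill(rect)
        symbol.draw(in: rect, blendMode: .destinationIn, alpha: 1.0)
    }
}
```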

Signs are predicted based on the probability they'll be used. Anonymized data helps predict the next possible signs in a sentence.
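The prediction model itself isn't detailed here, so treat this as an assumption: one simple approach is to count how often one sign follows another across anonymized sentences, then suggest the most frequent followers.

```swift
// Count sign-to-sign transitions across anonymized sentences (arrays of glosses).
func transitionCounts(from sentences: [[String]]) -> [String: [String: Int]] {
    var counts: [String: [String: Int]] = [:]
    for sentence in sentences {
        for (current, next) in zip(sentence, sentence.dropFirst()) {
            counts[current, default: [:]][next, default: 0] += 1
        }
    }
    return counts
}

// Suggest the most likely next signs after the one just typed.
func predictNext(after sign: String,
                 counts: [String: [String: Int]],
                 limit: Int = 3) -> [String] {
    guard let followers = counts[sign] else { return [] }
    return followers.sorted { $0.value > $1.value }.prefix(limit).map { $0.key }
}
```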

Whilst ASL Text is currently great for taking simple notes or journaling, sessions and feedback show that there's still a lot of room to improve:

  • There are only 10,000 signs and no way yet to add your own.

  • Needing to use English gloss to look up words continuously pulls writers out of the ASL frame of mind.

  • Though coloring the SignWriting symbols has improved legibility, it remains an issue for many people new to SignWriting. 3D rendering is currently being investigated as a solution.


Next Steps

A major next step would be making it easy for Deaf artists to create their own fonts. One of my favorite comic books, Hawkeye (2012, by Matt Fraction & David Aja), contains an issue told largely in ASL. Seeing ASL in popular media is very rare, though it shouldn't be. Introducing tools for Deaf creatives to make alternatives to the SignWriting font could increase ASL's prevalence in pop culture.

An easier way to type ASL also has invisible benefits. By allowing ASL to be searchable, we can create tools that more easily produce ASL animations in TV and games.

(All projects, source code, fonts, etc. created here are intended to be open to the community. Tutorials are being written so anything created here can be learnt and reproduced. If you are Deaf and would like more information, get in touch.)

(Hawkeye using ASL in a comic book)

(Animated mouse signing, from the game Moss)


Design Beacon

Principles of Human-Centered Design and Interaction Design have largely remained the same over the last 30 years. However, as with the advent of the printing press and the Graphical User Interface (GUI), the mediums our services reside in have changed drastically. This decade has seen the rise of smartphones, crowdsourcing, and big data (artifacts not present in earlier attempts at writing signed languages).

Leaps in Deep Learning and mobile processors in the last half decade are precursors to the astronomically different services we can now design and create. In this case, it is now possible to design for an evolving, multi-dialect, 4-dimensional language like ASL. We can do things like accurately match searches to individual user-intent, process vastly more feedback, transcribe visual emotions and signs in real time, and much more. 


Overall

Sign Languages are 4-dimensional languages with conceptual origins; they open up dimensions of thinking that aren't easily found in phonetic languages such as English. Typing Signs is an ongoing series of projects to ease access to signed languages, in an endeavor to open up new worlds.

As Ludwig Wittgenstein puts it: "The limits of my language are the limits of my world."
