
yassinchabeb/voice-IT


VOICE-IT

Playing around with Android's Speech-to-text & Text-to-Speech; Setting up a Wake-up-word other than OK Google, and trying to match converted text to a given Ontology

Demo

WHY, OH WHY

Well, one of our clients wanted a hands-free Virtual Assistant to handle orders like 'Send me a bunch of PRODUCT-X next week', 'Send the same as last week' or 'Where's my stuff?!', kind of like Amazon's Echo or Google Home.

Our idea was to create a 3D-printed object hiding a Raspberry Pi inside, which would handle voice interactions, eventually make REST calls to CRM services (or whatever), and then translate the response text back to speech.

As a first prototype, we set up an Android app that does all of this: Android's API provides both Speech-to-Text and Text-to-Speech out of the box, a speaker and a microphone are already integrated, and we wouldn't have to deal with hardware from the start.

HOW!??!

So, we've created an Android app with two steps/actions/screens: first, an always-on Wake-up-word receiver (à la "OK GOOGLE"), and second, a command receiver that matches recognized text to a set of given actions.
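The two-step flow above can be modeled as a tiny state machine: stay in a wake-word state until the trigger phrase is heard, then treat the next recognized utterance as a command. This is a minimal sketch of that logic only; the class, method names, and the "hey robot" trigger are illustrative, not the app's actual API (on Android the text chunks would come from `SpeechRecognizer` callbacks).

```java
import java.util.Locale;

// Simplified model of the app's two-step flow: an always-on wake-word
// listener, then a one-shot command listener. Illustrative names only.
public class VoiceFlow {
    enum State { LISTENING_FOR_WAKE_WORD, LISTENING_FOR_COMMAND }

    private static final String WAKE_WORD = "hey robot"; // assumed wake word
    private State state = State.LISTENING_FOR_WAKE_WORD;

    /** Feed one chunk of recognized speech; returns a command once one is captured. */
    public String onSpeechRecognized(String text) {
        String normalized = text.toLowerCase(Locale.ROOT).trim();
        switch (state) {
            case LISTENING_FOR_WAKE_WORD:
                if (normalized.contains(WAKE_WORD)) {
                    state = State.LISTENING_FOR_COMMAND;
                }
                return null; // nothing to execute yet
            case LISTENING_FOR_COMMAND:
                state = State.LISTENING_FOR_WAKE_WORD; // back to waiting
                return normalized; // hand the command over for matching
        }
        return null;
    }
}
```

Keeping the wake-word check as a dumb `contains` on the recognizer's output is what lets the trigger be something other than "OK Google", at the cost of running speech recognition continuously.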

For our small prototype, we set out to send commands to a small drone in the form of "Hey robot -> do a long jump! / go forward a bit / turn right and spin!"

Actions were defined in an Ontology file created with Protégé, Stanford's ontology editor.

Once text is recognized via Android's Speech API, we (very basically) search for a matching action in the Ontology, which in turn replies with an API method. That API method is then executed, and in our current implementation it makes the drone move around!
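The "very basic" search described above amounts to scanning the utterance for known action keywords and returning the API method the ontology binds them to. Here is a minimal sketch of that idea, with a flat map standing in for the real ontology lookup; the keywords and method names are assumptions for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Very basic matcher: first known keyword found in the recognized text wins.
public class ActionMatcher {
    // keyword -> API method name (as the ontology would map them); illustrative values
    private final Map<String, String> actions = new LinkedHashMap<>();

    public ActionMatcher() {
        actions.put("long jump", "drone.longJump");
        actions.put("forward", "drone.moveForward");
        actions.put("turn right", "drone.turnRight");
        actions.put("spin", "drone.spin");
    }

    /** Returns the API method for the first keyword found, or null if none matches. */
    public String match(String utterance) {
        String text = utterance.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> e : actions.entrySet()) {
            if (text.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return null; // no known action recognized
    }
}
```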

Links

A basic Architecture diagram

Archi!

What's next??!

Well, there's a bunch of things we would like to do:

  • On Android, avoid the Speech recognizer dialog with its button. We tried, but it's buggy!
  • Improve the time between 'recognized Wake-up-word' and 'ready to receive commands'
  • Recognize natural speech, not just single words. Options:
    • Parse a list of actions instead of one-at-a-time commands (Combo!)
    • Parse text and recognize actions and parameters via NLP (Natural Language Processing)
  • Make everything work on a Raspberry Pi!
    • Speech-to-text (offline ideally)
    • Text-to-speech
  • Use Apache Jena to parse and search through the Ontology
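On that last point: Apache Jena would load the Protégé-built OWL file into a triple store and answer SPARQL queries against it. As a sketch of what that lookup would do, here plain collections stand in for the triple store so the search shape is visible without the Jena dependency; the `hasApiMethod` property and the action names are assumptions, not the actual ontology's vocabulary.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a Jena-backed ontology: a list of (subject, predicate, object)
// triples and a lookup from an action to its bound API method.
public class OntologySearch {
    private final List<String[]> triples = new ArrayList<>(); // each entry: {s, p, o}

    public void add(String subject, String predicate, String object) {
        triples.add(new String[] { subject, predicate, object });
    }

    /** Returns the API method an action is bound to via an assumed 'hasApiMethod' property. */
    public String apiMethodFor(String action) {
        for (String[] t : triples) {
            if (t[0].equals(action) && t[1].equals("hasApiMethod")) {
                return t[2];
            }
        }
        return null; // action not present in the ontology
    }
}
```

With Jena, the loop above would become a one-line SPARQL `SELECT` and the triples would come from the OWL file instead of being added by hand.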