
The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Watson, the "intelligent virtual assistant" from IBM, to build a chatbot. For those of you wondering what a chatbot is: it's a service, powered by rules, that you interact with via a chat interface.

Why use the UMA-8?

A chatbot requires a microphone to capture audio, and you could get away with a random two-dollar plug-in mic. That's fine if you are sitting at the computer, but using one of these programs as a "far field" hands-free assistant calls for a more sophisticated microphone. The UMA-8 has:

  • Beam-forming running across an array of 7 microphones to improve voice detection and eliminate extraneous noises.

  • Echo cancellation and noise reduction to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with the Raspberry Pi. If you wish, however, you can use the miniDSP plugin to tweak the processing parameters of the microphone array (recommended for advanced users only!).

[Photo: UMA-8 with Raspberry Pi]

What you need:

  • 1 x Raspberry Pi 2 or 3 along with a USB power supply for your Pi
  • 1 x Keyboard
  • 1 x HDMI Display
  • An internet connection
  • 1 x pair of Headphones (if your HDMI display doesn’t have inbuilt speakers)
  • 1 x miniDSP UMA-8 USB Microphone Array
  • An IBM BlueMix account (see below for registration details)

As for skill-set required to put this demo together, you'll need:

  • Some basic experience with the Raspberry Pi platform
  • Some basic Bash skills (cd, ls, etc.)
  • Basic Nano text editor skills (opening and saving)
  • An eager mind, ready to learn how to make a Watson chat-bot!

Connecting Everything Up

First up, let’s build our bot! Take your Raspberry Pi and connect up the keyboard, HDMI display and headphones (if you need them). Next, plug the UMA-8 USB microphone array into a spare USB port. That’s all there is to it: the UMA-8 is plug and play with the Raspberry Pi.

Preparing the Pi

Before powering on the Pi, you’ll need to download Raspbian Jessie LITE from the Raspberry Pi Foundation. Ensure you get the LITE version, which doesn’t include a desktop. Burn the downloaded image onto a micro-SD card, plug the micro-SD card into the Pi, and plug in the power. You will see your monitor or TV come up with a login prompt.
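
If you’re preparing the card from a Linux or Mac command line, here’s a minimal sketch of the burn step using dd (the device name /dev/sdX is a placeholder; double-check yours with lsblk first, as writing to the wrong device will destroy its contents):

# Identify the SD card device first, e.g. with: lsblk
unzip raspbian-jessie-lite.zip
sudo dd if=raspbian-jessie-lite.img of=/dev/sdX bs=4M
sync

A GUI tool such as Etcher will do the same job if you’d rather not use the command line.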

To log in, use the default Raspberry Pi login details.

Username: pi

Password: raspberry

If you have an Ethernet (wired) internet connection available, you're all set. Otherwise, you’ll need to set up the Wifi. Follow this guide to configure the Wifi on your Pi.
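
If you’d rather do it straight from the console, here is a minimal sketch for Raspbian Jessie (the network name and passphrase below are placeholders):

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

# Append a network block like this, save, then reboot:
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}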

Downloading and Installing NodeJS

Once you’ve logged into the Pi, it’s time to start installing the brains of our bot. First up, we need to install NodeJS, which we can easily download via the package manager. Run these commands to get NodeJS downloaded and installed:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
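
To confirm the installation worked, check the reported versions (the setup script above installs the 6.x line, so you should see a v6.x version string):

node -v
npm -v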

Getting the Bot’s Brains Connected Up

Next up, we need to install the application that NodeJS will run to allow our bot to hear (via the UMA-8) and speak (via the speakers in your display, or headphones). First download the repository which contains the code:

git clone https://github.com/minidsp/uma8-watson-chatbot mic-array

cd mic-array

Now, let’s install all of the extra bits and bobs this application requires to run:

sudo apt-get install libasound2-dev
npm install

Once the installation is complete, you’ll be ready to jump into IBM Watson.

Configuring IBM Watson

Now that our Pi is completely ready to go, let’s put it aside for a moment: we need to configure our bot’s brain so he knows how we’d like him to respond to our questions! If you haven’t already, sign up for IBM BlueMix. Once you’re logged in, navigate to the BlueMix Services Dashboard.

Click the big “Create Watson service” button in the middle of the screen.

[Screenshot: the “Create Watson service” button]

Next, you’ll be presented with a catalogue of all the services IBM BlueMix has to offer. Let’s skip down to the “Watson” section in the sidebar.

[Screenshot: the Watson section of the BlueMix catalogue]

We now need to add all 3 of these Watson services, one at a time:

  • Conversation
  • Speech to Text
  • Text to Speech

[Screenshot: the Conversation, Speech to Text, and Text to Speech service tiles]

First up, let’s add the Conversation service. Click the tile (outlined in red above). Once the page has loaded, we can leave the default settings and choose the “Create” button in the bottom left corner.

Lastly, we need to get the “Service credentials”. These details will be used by our app to connect to Watson.

[Screenshot: the Conversation service page]

Click the “View credentials” button to expand the tile and then copy the lines which contain the “username” and “password” into a text document on your computer. We’ll need these later, so make sure that they’re clearly labelled as “Conversation Creds” or similar.

[Screenshot: the expanded Service credentials tile]

Once you’ve got those details stored on your computer, we can move on to adding the other two services. Repeat this process and copy the credentials for the other two services too. NOTE: You will need all 3 pairs of credentials in order for the app to work.

[Screenshot: dashboard showing all three Watson services]

Programming our Bot with Watson Conversation

Now that your BlueMix account is set up and you have all of the required services created, it’s time to write the logic for your chat bot. IBM have written an excellent blog to explain the basics and get you started. Check it out at this link.

Once you have created a bot you’re happy with, you’ll need to note down its workspace ID so you can tell your Pi which bot to use. To get your workspace ID, navigate to the Conversation homepage by clicking the “Watson Conversation” link in the top-left of the bot configuration page.

[Screenshot: the Watson Conversation homepage]

Next, select the three dots in the top-right corner of the workspace tile you’d like to use and choose “View details”.

[Screenshot: the workspace tile menu]

Finally, copy down the “Workspace ID”. There’s a small, handy button next to the workspace ID which allows you to copy it to your clipboard.

[Screenshot: the Workspace ID field]

Telling our Pi how to talk with Watson

Now that we have finished retrieving our Watson credentials, we need to tell our bot how to log in to Watson. Jump back in front of your Pi, and ensure you’re in the mic-array directory which contains the app. We need to edit the “settings.js” file to include these details. Open the file with this command:

sudo nano settings.js

Now, replace each of the settings outlined in this file with the settings you recorded earlier. You will also notice the workspace ID setting under the “Conversation” section; type in the workspace ID we saved earlier.

You’ll also see a setting to include a name for your bot. By default, his name is Duncan and he will respond whenever you address him by his name. You can change these names if you like!
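
We won’t reproduce the exact file here, but the details you gathered map to entries along these lines. This is a hypothetical sketch only; the real field names in settings.js may differ, so follow the comments in the file itself:

// Hypothetical sketch of settings.js – actual field names may differ.
module.exports = {
    botName: 'Duncan',                    // the name the bot responds to
    conversation: {
        username: '<Conversation username>',
        password: '<Conversation password>',
        workspaceId: '<Workspace ID>'     // saved from the Conversation workspace
    },
    speechToText: {
        username: '<Speech to Text username>',
        password: '<Speech to Text password>'
    },
    textToSpeech: {
        username: '<Text to Speech username>',
        password: '<Text to Speech password>'
    }
};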

Once your app is configured, it’s ready to run!

node app.js

 

Wrapping up

That's it for this app note! Have fun, and please let us know about your UMA-8 and Watson Chatbot experience in our forum.



The miniDSP UMA-16 and UMA-8 microphone arrays are low-cost USB microphone arrays and a perfect fit for software developers looking for a plug-and-play USB audio interface. Thanks to their raw/unprocessed multichannel audio capability, one can test and develop beamforming or AEC algorithms in a very short time within Matlab. In this app note we'll showcase an example of how to discover the UMA-16/UMA-8 within Matlab. For non-commercial applications, we recommend Matlab Home (only 99 USD at the time of writing) as a very powerful and affordable solution to get started!

Important note: While this app note showcases how to discover the UMA-16 within the Matlab environment, we'd like to highlight that our team will not be providing any tech support for custom Matlab applications. Please contact Matlab technical support or the Matlab community.

Why use the UMA-16 or UMA-8?

As a software, firmware, or hardware engineer, building a microphone array that easily interfaces with your development environment isn't always straightforward. System design, schematics, layout, and prototyping will seriously slow down your ability to focus on what may matter the most: software and product development. That is actually the reason why miniDSP engineered the UMA-8 and UMA-16! As our team struggled to find ways to quickly develop proofs of concept for new mic arrangements, we built the UMA-16: a two-board design with a brain (XMOS + SHARC doing PDM to PCM to USB conversion) and a low-cost 2-layer MEMS PCB stacked on top. Want to modify the arrangement? Sure, we've provided all schematics to get you started!

[Photo: UMA-16 with camera]

Discovering the UMA-8/16 with Matlab

Plug the supplied USB cable into the miniUSB port on the UMA-16, and plug the other end into a spare USB port on your PC/Mac. Plug in the 12 V external power supply.

Inside Matlab, you can define your recording interface using the audioDeviceReader object. More info on this object can be found here.

fs = 48000;               % sample rate in Hz
audioFrameLength = 1024;  % samples returned per read

% Define the recording interface: 16 channels from the UMA-16,
% discovered through the miniDSP ASIO driver.
deviceReader = audioDeviceReader( ...
    'Device', 'miniDSP ASIO Driver', ...
    'Driver', 'ASIO', ...
    'SampleRate', fs, ...
    'NumChannels', 16, ...
    'OutputDataType', 'double', ...
    'SamplesPerFrame', audioFrameLength);

Notice how the UMA-8/16 is discovered as "miniDSP ASIO Driver". Make sure that you use ASIO for your definition, as a standard WDM recording object won't work.
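
As a quick sanity check, here's a minimal capture sketch that grabs about five seconds of 16-channel audio into a matrix, frame by frame (the reader returns one audioFrameLength-by-16 block per call):

duration = 5;                                   % seconds to record
numFrames = ceil(duration * fs / audioFrameLength);
recording = zeros(numFrames * audioFrameLength, 16);
for k = 1:numFrames
    idx = (k-1)*audioFrameLength + (1:audioFrameLength);
    recording(idx, :) = deviceReader();         % one 1024-by-16 frame
end
release(deviceReader);                          % free the device when finished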

That's all – you now have real-time audio within your environment! Lucky for you, one miniDSP community member (Flo96) was kind enough to share his starting point for a beamformer. Check out this link for more details, and make sure to thank Flo96 for his effort!

Now, it's time to get started on your coding. :)


Some great links worth reading

To get you started in your development, here are a few links to some great resources. Please send us more examples/comments so we can populate this list!

While this example isn't a 100% match (a linear rather than rectangular/circular array), the concepts of live/recorded processing are very much applicable and could be a great starting point for your project.

[Figure: audio array DOA estimation example]

Similar to the above example, here is a great app note with examples on how to make use of the real-time audio libraries within Matlab.

[Figure: audio stream processing development workflow]

A great starting point for understanding how geometry affects the performance of your array is plotting the grating lobe diagram, which shows the positions of the peaks of the narrowband array pattern. The array pattern depends only upon the geometry of the array and not upon the types of elements which make up the array. Visible and nonvisible grating lobes are displayed as open circles. (A sketch of how to generate this diagram for the UMA-16 follows the figure below.)

[Figure: grating lobe diagram for a uniform rectangular array]
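
If you'd like to try this for the UMA-16 geometry, here's a minimal sketch using the Phased Array System Toolbox. The 42 mm element spacing is our assumption for illustration; check it against your board and the published schematics:

c = 343;                                        % speed of sound in m/s
fc = 4000;                                      % frequency of interest in Hz
array = phased.URA('Size', [4 4], 'ElementSpacing', [0.042 0.042]);
plotGratingLobeDiagram(array, fc, [0; 0], c);   % diagram at broadside steering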

The URA (uniform rectangular array) is the structure of the UMA-16, so you may as well learn to use this toolbox! Check out the examples here, and do note that the indexing is different from the miniDSP channel assignment.

[Figure: URA element indexing]

Wrapping up

That's it for this app note! Please send us your feedback on how we can improve this app note for future users. Hopefully an open source project will spring up from the community effort! Have fun!



The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Amazon's Alexa "intelligent personal assistant" and a Raspberry Pi.

This app note is based heavily on the article published by Amazon for their Alexa sample application.

Why use the UMA-8?

In the Amazon example article for the Raspberry Pi, it's suggested that a cheap USB microphone be used. (The Raspberry Pi does not have an inbuilt microphone.) This is not an optimum solution. Instead:

  • The UMA-8 has beam-forming running across the 7 microphones, which improves voice detection.

  • The UMA-8 also has echo cancellation and noise reduction, to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with the Raspberry Pi and Alexa, just plug it into your Pi and follow the instructions!

1. Getting connected

Connect your Raspberry Pi to a keyboard, mouse, and an HDMI monitor or TV. For the UMA-8, just plug it into one of the USB ports. (It is powered over USB).

When installing the Alexa application, you will have a choice on whether to output audio from the 3.5mm analog jack or over HDMI. We selected HDMI and connected the Pi directly to the second input of a nanoAVR HD so the audio output from the Pi goes through a home theater system with room EQ. We bet Alexa never sounded so good :)

[Photo: UMA-8, Raspberry Pi, and nanoAVR HD]

Before powering on the Pi, download Raspbian from the Raspberry Pi Foundation and burn it to a micro-SD card. Plug the micro-SD card into the Pi and plug in the power. You will see your monitor or TV come up with the Raspbian desktop.

At this point, you may like to explore a little. If you are using a Raspberry Pi 3 with inbuilt Wifi, use the Settings (top right of screen) to join your wireless network. You may also want to change your keyboard layout to U.S., as it defaults to a U.K. layout.

2. Setting Up

All of the steps are documented in the Amazon article. You will need to create an Amazon developer account and "register" your Raspberry Pi. Then, using the terminal in Raspbian, download and install the sample application.

You will need to edit a text file to enter the credentials Amazon generated for your Pi. When you do so, be aware that for the field "Product ID" you must use the value that Amazon labels "Device Type ID." (If you use "Security Profile ID", it won't work and you will have to start again.)
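
For reference, in the version of the sample app current at the time of writing, these credentials were entered as variables in the install script, along these lines (a sketch only – the exact file and variable names may differ in the version you download):

# Sketch of the credentials section of the sample app's install script.
# ProductID takes the value Amazon labels "Device Type ID".
ProductID="my_uma8_pi"
ClientID="amzn1.application-oa2-client.xxxxxxxxxxxx"
ClientSecret="xxxxxxxxxxxx"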

With the credentials entered, run the installer. Select Yes to the prompts asking if you want to download required software. You will also need to select audio output from the 3.5mm jack or HDMI. (We used HDMI.) The installer may take half an hour or more to run.

[Screenshot: Amazon developer account]

3. Running

To start the Alexa client, follow the steps in the Amazon article. Be sure to follow all steps in the instructions exactly!

You will need to open three terminal windows and execute a command in each of them. After entering the second command, wait until a window pops up asking you to authenticate. Clicking the Yes button will open a browser window – you will need to log into your Amazon developer account and then click "Okay."

The third command starts the "wake word" service, so you can say "Alexa" to wake up the device. You don't have to use this. Instead, you can just click the "Listen" button on the screen.
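
For reference, at the time of writing the three commands from the Amazon article looked like this, one per terminal window (paths assume the default checkout location; follow the article for the exact, current versions):

# Terminal 1: start the companion service
cd ~/Desktop/alexa-avs-sample-app/samples/companionService && npm start

# Terminal 2: start the sample app client (authenticate when prompted)
cd ~/Desktop/alexa-avs-sample-app/samples/javaclient && mvn exec:exec

# Terminal 3: start the wake word agent
cd ~/Desktop/alexa-avs-sample-app/samples/wakeWordAgent/src && ./wakeWordAgent -e sensory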

Once you have completed all steps, just say "Alexa". You should get a beep in response, after which you can ask it any question you like.

(The LEDs around the edge of the board will indicate the direction of beam-forming.)

To do more, use the Alexa app on your iOS or Android device to browse for and add "skills."

Limitations

There are some things to be aware of:

  • If you are outside the USA:

    • You will have difficulty getting the companion Alexa app, as it's only available in the US Apple App Store and Google Play Store. There are workarounds, at least for the iOS version, like here.

    • You will not be able to set your location to outside of the USA.

  • Some things are not supported with this sample/Raspberry Pi version, like Amazon Music. You can still add skills with the Alexa app.

Wrapping up

That's it for this app note! Have fun, and please let us know about your UMA-8 and Raspberry Pi/Alexa experience in our forum.