The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Google Assistant and a Raspberry Pi.

This app note is based on this article published by Google. You will need to follow that article closely.

Google Assistant with the UMA-8 and a Raspberry Pi

Why use the UMA-8?[Top]

On Google's hardware setup page, it's suggested that a cheap USB microphone be used. (The Raspberry Pi does not have an inbuilt microphone.) This is not an optimal solution. Instead:

  • The UMA-8 has beam-forming running across the 7 microphones, which improves voice detection.

  • The UMA-8 also has echo cancellation and noise reduction, to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

1. Getting connected[Top]

Connect your Raspberry Pi to a keyboard, mouse, and an HDMI monitor or TV. Plug the UMA-8 into one of the USB ports. (It is powered over USB.)

(Google's hardware setup page provides a number of different setup methods, including "headless." For simplicity we suggest starting with a connected monitor, keyboard and mouse. Later on you can try a headless version.)

Install Raspbian onto a micro-SD card and insert it into the Pi. The Google page linked above suggests using NOOBS. We just downloaded Raspbian Stretch with Desktop and burnt it onto an SD card.
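
If you're burning the card from a Linux machine, a GUI tool like Etcher works well; the command-line equivalent is dd. Here's a sketch (the image file name and /dev/sdX are placeholders; check the device with lsblk first, as dd will overwrite whatever you point it at):

unzip raspbian-stretch-desktop.zip        # use the file name you actually downloaded
sudo dd if=raspbian-stretch-desktop.img of=/dev/sdX bs=4M status=progress
sync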

Power on the Raspberry Pi. After a short time you should see a desktop appear on your monitor. Click on the Raspbian icon at the top left, select Preferences then Raspberry Pi Configuration. Set your timezone on the Localization tab. You may at this time also wish to enable SSH.
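
If you prefer the terminal, SSH can also be enabled with sudo raspi-config (under Interfacing Options) or directly via systemd:

  sudo systemctl enable ssh
  sudo systemctl start ssh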

You may like to explore a little further. If you are using a Raspberry Pi 3 with inbuilt Wifi, use the Settings (top right of screen) to join your wireless network. You may also want to change your keyboard layout to U.S., as it defaults to a U.K. layout.

2. Setting up audio[Top]

We connected our Raspberry Pi to the second input of a nanoAVR HD so the audio output from the Pi goes through a home theater system with room EQ. We bet Google Assistant never sounded so good :)

Photograph of UMA-8, Raspberry Pi, and nanoAVR HD

The instructions on the Google page Configure and Test the Audio aren't quite in the right order, so follow the sequence below. Open a terminal on the Pi and list your output audio devices by typing

  aplay -l

Here is our output. Note that the Pi's built-in audio output is card 0 and device 0 (in our setup this is routed over HDMI):

**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: UAC20 [miniDSP micArray XVSM UAC2.0], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Now list your input audio devices by typing

  arecord -l

Here is our output. Note that the UMA-8 is card 1 and device 0:

**** List of CAPTURE Hardware Devices ****
card 1: UAC20 [miniDSP micArray XVSM UAC2.0], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Now you will need to create or edit the file .asoundrc in your home directory /home/pi so that it looks like this (note that the card and device numbers for pcm.mic and pcm.speaker are as detected by the above commands):

pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}
pcm.mic {
  type plug
  slave {
    pcm "hw:1,0"
  }
}
pcm.speaker {
  type plug
  slave {
    pcm "hw:0,0"
  }
}
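
You can sanity-check the new configuration by addressing the two named PCMs directly (a quick sketch; the temporary file name is arbitrary):

  arecord -D mic -f S16_LE -r 16000 -d 3 /tmp/mic-test.wav
  aplay -D speaker /tmp/mic-test.wav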

To test that the UMA-8 is working, enter this and then speak aloud near the UMA-8:

  arecord --format=S16_LE --duration=5 --rate=16000 --file-type=raw out.raw

To play the file back, enter this:

  aplay --format=S16_LE --rate=16000 out.raw
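
If playback is silent, check where the Pi is routing its built-in audio output. On Raspbian the routing can be forced with amixer (numid=3 is the routing control of the bcm2835 audio device):

  amixer cset numid=3 2    # 0 = auto, 1 = 3.5mm jack, 2 = HDMI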

3. Setting up Google Assistant[Top]

You will now need to Configure a Developer Project and Account Settings. (Follow the instructions on that page. There are a lot of steps, so take your time and make sure you get everything done. Also, you might want to create a separate Google account for this purpose using e.g. a new gmail address.)

Now it's time to install the application. Follow the instructions on the Download the Library and Run the Sample page.
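
For reference, the installation on that page boils down to something like the following at the time of writing (treat Google's page as authoritative, as the package names and steps may change):

sudo apt-get update
sudo apt-get install python3-dev python3-venv
python3 -m venv env
env/bin/python -m pip install --upgrade pip setuptools
source env/bin/activate
python -m pip install --upgrade google-assistant-library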

4. Running[Top]

Now you can start the sample application (finally!). In a terminal window, enter

  google-assistant-demo

Then say "Hey Google" followed by a question, like this: "Hey Google, where am I?" or "OK Google, what is my name?".

By default, the UMA-8 is configured for automatic direction detection. You can alter the behavior with the switches on the board as follows:

  • SW1: enables and disables microphone beam-forming.
  • SW3: manually selects the direction of beam-forming (if beam-forming is turned on and automatic detection is turned off).
  • SW4: enables and disables automatic detection of microphone direction.

(The LEDs around the edge of the board will indicate the direction of beam-forming.)

After some experience with the setup, you may wish to tweak the audio processing parameters of the UMA-8. To do this, you will need to unplug it from your Raspberry Pi and plug it into your Windows or Mac computer. Start the miniDSP micArray configuration program and click the Connect button to change parameters. For example, to turn on the automatic gain control and increase mic signal levels, try settings like this:

miniDSP UMA-8 microphone array AGC settings

Wrapping up[Top]

That's it for this app note! Have fun, and please let us know about your UMA-8 and Raspberry Pi with Google Assistant experience in our forum.


 

The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Cortana, the "intelligent virtual assistant" from Microsoft.

Why use the UMA-8?[Top]

By default, Microsoft Cortana uses the inbuilt microphone in your computer. This is fine if you are sitting at the computer, but to use Cortana as a "far field" hands-free assistant, a more sophisticated microphone is needed. The UMA-8 has:

  • Beam-forming running across an array of 7 microphones to improve voice detection and eliminate extraneous noises.

  • Echo cancellation and noise reduction to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with Cortana. If you wish, however, you can use the miniDSP plugin to tweak the processing parameters of the microphone array (recommended for advanced users only!).

 

Using the UMA-8 with Cortana[Top]

Plug the supplied USB cable into the micro-USB port on the UMA-8, and plug the other end into a spare USB port on your Windows 10 computer. Open the Control Panel and go to Manage Audio Devices. Select the Recording tab and set the miniDSP micArray as the default device:

Selecting the UMA-8 microphone array for Cortana

Out of the box, Cortana on Windows 10 does not respond to hands-free voice commands ("Hey Cortana"); you will need to enable this feature. This is described in this article on CNET:

And that's it! You should now be able to say "Hey Cortana" from anywhere in the room to activate it. Check out Microsoft's help page for more details on how Cortana can help you.

Cortana on Windows 10

Further experimentation[Top]

By default, the UMA-8 is configured for automatic direction detection. You can alter the behavior with the switches on the board as follows:

  • SW1: enable and disable microphone beam-forming.
  • SW3: manually select direction of beam-forming.
  • SW4: enable and disable automatic direction detection.

Note that the LEDs around the edge of the board (on the underside, if the switches are on the top) will indicate the direction of beam-forming.

After some experience with the setup, you may wish to tweak the audio processing parameters of the UMA-8. Start the miniDSP micArray configuration program and click the Connect button to change parameters. For example, to turn on the automatic gain control and increase mic signal levels, try settings like this:

miniDSP UMA-8 microphone array AGC settings

Wrapping up[Top]

That's it for this app note! Have fun, and please let us know about your UMA-8 and Cortana experience in our forum.


The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Watson, the "intelligent virtual assistant" from IBM, to build a chatbot. For those of you wondering what a chatbot is: it's a service, powered by rules, that you interact with via a chat interface.

Why use the UMA-8?[Top]

A chatbot requires a microphone to capture audio, and you could certainly get by with a cheap plug-in mic. This is fine if you are sitting at the computer, but to use the bot as a "far field" hands-free assistant, a more sophisticated microphone is needed. The UMA-8 has:

  • Beam-forming running across an array of 7 microphones to improve voice detection and eliminate extraneous noises.

  • Echo cancellation and noise reduction to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with the Raspberry Pi. If you wish, however, you can use the miniDSP plugin to tweak the processing parameters of the microphone array (recommended for advanced users only!).

Photograph of the UMA-8 and Raspberry Pi

What you need:

  • 1 x Raspberry Pi 2 or 3 along with a USB power supply for your Pi
  • 1 x Keyboard
  • 1 x HDMI Display
  • An internet connection
  • 1 x pair of Headphones (if your HDMI display doesn’t have inbuilt speakers)
  • 1 x miniDSP UMA-8 USB Microphone Array
  • An IBM BlueMix account (see below for registration details)

As for skill-set required to put this demo together, you'll need:

  • Some basic experience with the Raspberry Pi platform
  • Some basic Bash skills (cd, ls, etc.)
  • Basic Nano text editor skills (opening and saving)
  • An eager mind, ready to learn how to make a Watson chat-bot!

Connecting Everything Up

First up, let's build our bot! Take your Raspberry Pi and connect up the keyboard, HDMI display and headphones (if you need them). Next, plug the UMA-8 USB microphone array into a spare USB port. That's all there is to it, as the UMA-8 is plug-and-play with the Raspberry Pi.

Preparing the Pi

Before powering on the Pi, you'll need to download Raspbian Jessie LITE from the Raspberry Pi Foundation. Ensure you get the LITE version, which doesn't include a desktop. Burn the downloaded image onto a micro-SD card. Plug the micro-SD card into the Pi and plug in the power. You will see your monitor or TV come up with a login prompt.

To log in, use the default Raspberry Pi login details.

Username: pi

Password: raspberry

If you have an Ethernet (wired) internet connection available, you're all set. Otherwise, you'll need to set up the Wifi. Follow this guide to configure the Wifi on your Pi.
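
The gist of that guide is to add your network details to wpa_supplicant.conf and have the Wifi interface re-read it. A sketch, assuming the interface is wlan0 (replace the ssid and psk values with your own):

sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf <<'EOF'
network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
EOF
sudo wpa_cli -i wlan0 reconfigure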

Downloading and Installing NodeJS

Once you've hacked your way into the Pi, it's time to start installing the brains of our bot. First up, we need to install NodeJS, which we can easily download from the inbuilt package manager. Run these commands to get NodeJS downloaded and installed:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
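
You can confirm the installation by checking the versions (the setup_6.x script above installs the NodeJS 6.x line):

node -v     # should report something in the v6.x series
npm -v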

Getting the Bot’s Brains Connected Up

Next up, we need to install the application that NodeJS will run to allow our bot to hear (via the UMA-8) and speak (via the speakers in your display, or headphones). First download the repository which contains the code:

git clone https://github.com/minidsp/uma8-watson-chatbot mic-array

cd mic-array

Now, let’s install all of the extra bits and bobs this application requires to run:

sudo apt-get install libasound2-dev
npm install

Once the installation is complete, you'll be ready to jump into IBM Watson.

Configuring IBM Watson

Now that our Pi is completely ready to go, let's put it aside for a second: we need to configure our bot's brain so he knows how we'd like him to respond to our questions! If you haven't already, sign up for IBM BlueMix. Once you're logged in, navigate to the BlueMix Services Dashboard.

Click the big “Create Watson service” button in the middle of the screen.

Screenshot: the "Create Watson service" button

Next, you'll be presented with a catalogue of all the services IBM BlueMix has to offer. Let's skip down to the "Watson" section in the sidebar.

Screenshot: the Watson section of the BlueMix catalogue

We now need to add all 3 of these Watson services, one at a time:

  • Conversation
  • Speech to Text
  • Text to Speech

Screenshot: the Watson service tiles

First up, let’s install the Conversation service. Click the tile (outlined in red above). Once the page has loaded, we can leave the default settings and choose the “Create” button in the bottom left corner.

Lastly, we need to get the "Service credentials". These details are used by our app to connect to Watson.

Screenshot: the Service credentials section

Click the “View credentials” button to expand the tile and then copy the lines which contain the “username” and “password” into a text document on your computer. We’ll need these later, so make sure that they’re clearly labelled as “Conversation Creds” or similar.

Screenshot: viewing and copying the credentials

Once you've got those details stored on your computer, we can move on to adding the other two services. Repeat this process and copy the credentials for the other two services too. NOTE: You will need all 3 pairs of credentials in order for the app to work.

Screenshot: all three Watson services created

Programming our Bot with Watson Conversation

Now that your BlueMix account is set up and you have all of the required services created, it's time to write the logic for your chat bot. IBM have written an excellent blog post to explain the basics and get you started. Check it out at this link.

Once you have created a bot you're happy with, you'll need to note down its workspace ID so you can tell your Pi which bot to use. To get your workspace ID, navigate to the conversation homepage by clicking the "Watson Conversation" link in the top-left of the bot configuration page.

Screenshot: the Watson Conversation link

Next, select the three dots in the top-right corner of the workspace tile you'd like to use and choose "View details".

Screenshot: the "View details" menu option

Finally, copy down the “Workspace ID”. There’s a small, handy button next to the workspace ID which allows you to copy it to your clipboard.

Screenshot: copying the Workspace ID

Telling our Pi how to talk with Watson

Now that we have retrieved our Watson credentials, we need to give our bot its login details. Jump back in front of your Pi, and ensure you're in the mic-array directory which contains the app. We need to edit the "settings.js" file to include these details. Open the file with this command:

sudo nano settings.js

Now, replace each of the settings outlined in this file with the settings you recorded earlier. You will also notice the workspace ID setting under the "Conversation" section; type in the workspace ID you saved earlier.

You'll also see a setting to include a name for your bot. By default, his name is Duncan and he will respond whenever you address him by name. You can change these names if you like!

Once your app is configured, it’s ready to run!

node app.js
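
If you'd like the bot to survive you closing the terminal, one simple approach (our suggestion, not part of the chatbot app itself) is to background it and log its output to a file:

cd ~/mic-array
nohup node app.js > bot.log 2>&1 &
tail -f bot.log     # watch the output; Ctrl+C stops the tail, not the bot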

Further experimentation[Top]

By default, the UMA-8 is configured for automatic direction detection. You can alter the behavior with the switches on the board as follows:

  • SW1: enable and disable microphone beam-forming.
  • SW3: manually select direction of beam-forming.
  • SW4: enable and disable automatic direction detection.

Note that the LEDs around the edge of the board (on the underside, if the switches are on the top) will indicate the direction of beam-forming.

After some experience with the setup, you may wish to tweak the audio processing parameters of the UMA-8. Start the miniDSP micArray configuration program and click the Connect button to change parameters. For example, to turn on the automatic gain control and increase mic signal levels, try settings like this:

miniDSP UMA-8 microphone array AGC settings

Wrapping up[Top]

That's it for this app note! Have fun, and please let us know about your UMA-8 and Watson Chatbot experience in our forum.


The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Siri, the "intelligent virtual assistant" from Apple, running on macOS.

Note: this app note applies only to desktops and laptops running macOS. The UMA-8 typically requires more power than can be supplied by phones and tablets.

Why use the UMA-8?[Top]

By default, Siri uses the inbuilt microphone in your computer. This is fine if you are sitting at the computer, but to use Siri as a "far field" hands-free assistant, a more sophisticated microphone is needed. The UMA-8 has:

  • Beam-forming running across an array of 7 microphones to improve voice detection and eliminate extraneous noises.

  • Echo cancellation and noise reduction to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with Siri. If you wish, however, you can use the miniDSP plugin to tweak the processing parameters of the microphone array (recommended for advanced users only!).

Using the UMA-8 with Siri[Top]

Plug the supplied USB cable into the micro-USB port on the UMA-8, and plug the other end into a spare USB port on your Mac. Open System Preferences and then Sound, and select the miniDSP micArray as the input source:

Selecting the UMA-8 microphone array for Siri

To enable Siri to respond to hands-free voice commands ("Hey Siri") on the Mac, you will need to (a) set up a keyboard shortcut and (b) map the keyboard shortcut to a voice command. This is described in this article on Macworld:

And that's it! You should now be able to say "Hey Siri" from anywhere in the room to activate it. Check out Apple's help page for more details on Siri.

The Siri waveform on macOS Sierra

Further experimentation[Top]

By default, the UMA-8 is configured for automatic direction detection. You can alter the behavior with the switches on the board as follows:

  • SW1: enable and disable microphone beam-forming.
  • SW3: manually select direction of beam-forming.
  • SW4: enable and disable automatic direction detection.

Note that the LEDs around the edge of the board (on the underside, if the switches are on the top) will indicate the direction of beam-forming.

After some experience with the setup, you may wish to tweak the audio processing parameters of the UMA-8. Start the miniDSP micArray configuration program and click the Connect button to change parameters. For example, to turn on the automatic gain control and increase mic signal levels, try settings like this:

miniDSP UMA-8 microphone array AGC settings

Wrapping up[Top]

That's it for this app note! Have fun, and please let us know about your UMA-8 and Siri experience in our forum.


The miniDSP UMA-8 microphone array, with onboard direction detection, echo cancellation, and noise reduction, has a wide variety of applications. In this app note we'll run through its use with Amazon's Alexa "intelligent personal assistant" and a Raspberry Pi.

This app note is based heavily on the article published by Amazon for their Alexa sample application.

Why use the UMA-8?[Top]

In the Amazon example article for the Raspberry Pi, it's suggested that a cheap USB microphone be used. (The Raspberry Pi does not have an inbuilt microphone.) This is not an optimal solution. Instead:

  • The UMA-8 has beam-forming running across the 7 microphones, which improves voice detection.

  • The UMA-8 also has echo cancellation and noise reduction, to reduce the effects of non-voice sounds (like music playing) and noise (traffic, kitchen noises etc).

The UMA-8 is "plug and play" – you do not have to configure anything to make it work with the Raspberry Pi and Alexa, just plug it into your Pi and follow the instructions!

1. Getting connected[Top]

Connect your Raspberry Pi to a keyboard, mouse, and an HDMI monitor or TV. Plug the UMA-8 into one of the USB ports. (It is powered over USB.)

When installing the Alexa application, you will have a choice of whether to output audio from the 3.5mm analog jack or over HDMI. We selected HDMI and connected the Pi directly to the second input of a nanoAVR HD so the audio output from the Pi goes through a home theater system with room EQ. We bet Alexa never sounded so good :)

Photograph of UMA-8, Raspberry Pi, and nanoAVR HD

Before powering on the Pi, download Raspbian from the Raspberry Pi Foundation and burn it to a micro-SD card. Plug the micro-SD card into the Pi and plug in the power. You will see your monitor or TV come up with the Raspbian desktop.

At this point, you may like to explore a little. If you are using a Raspberry Pi 3 with inbuilt Wifi, use the Settings (top right of screen) to join your wireless network. You may also want to change your keyboard layout to U.S., as it defaults to a U.K. layout.

2. Setting Up[Top]

All of the steps are documented in the Amazon article. You will need to create an Amazon developer account and "register" your Raspberry Pi. Then, using the terminal in Raspbian, download and install the sample application.

You will need to edit a text file to enter the credentials Amazon generated for your Pi. When you do so, be aware that for the field "Product ID" you must use the value that Amazon labels "Device Type ID." (If you use "Security Profile ID", it won't work and you will have to start again.)

With the credentials entered, run the installer. Select Yes to the prompts asking if you want to download required software. You will also need to select audio output from the 3.5mm jack or HDMI. (We used HDMI.) The installer may take half an hour or more to run.
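
For reference, the overall sequence at the time of writing looks roughly like this (a sketch; follow the Amazon article for the current repository and steps):

git clone https://github.com/alexa/alexa-avs-sample-app.git
cd alexa-avs-sample-app
nano automated_install.sh     # enter the credentials from your developer account
. automated_install.sh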

Screenshot: Amazon developer account

3. Running[Top]

To start the Alexa client, follow the steps in the Amazon article. Be sure to follow all steps in the instructions exactly!

You will need to open three terminal windows and execute a command in each of them. After entering the second command, wait until a window pops up asking you to authenticate. Clicking the Yes button will open a browser window – you will need to log into your Amazon developer account and then click "Okay."

The third command starts the "wake word" service, so you can say "Alexa" to wake up the device. You don't have to use this. Instead, you can just click the "Listen" button on the screen.
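
For reference, at the time of writing the three commands look roughly like this, assuming you cloned the sample app into your home directory (again, treat the Amazon article as authoritative):

# Terminal 1: the companion service
cd ~/alexa-avs-sample-app/samples/companionService && npm start

# Terminal 2: the sample client (triggers the authentication pop-up)
cd ~/alexa-avs-sample-app/samples/javaclient && mvn exec:exec

# Terminal 3: the optional wake word agent (-e sensory also works)
cd ~/alexa-avs-sample-app/samples/wakeWordAgent/src && ./wakeWordAgent -e kitt_ai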

Once you have completed all steps, just say "Alexa". You should get a beep in response, after which you can ask it any question you like.

By default, the UMA-8 is configured for automatic direction detection. You can alter the behavior with the switches on the board as follows:

  • SW1: enables and disables microphone beam-forming.
  • SW3: manually selects the direction of beam-forming (if beam-forming is turned on and automatic detection is turned off).
  • SW4: enables and disables automatic detection of microphone direction.

(The LEDs around the edge of the board will indicate the direction of beam-forming.)

After some experience with the setup, you may wish to tweak the audio processing parameters of the UMA-8. To do this, you will need to unplug it from your Raspberry Pi and plug it into your Windows or Mac computer. Start the miniDSP micArray configuration program and click the Connect button to change parameters. For example, to turn on the automatic gain control and increase mic signal levels, try settings like this:

miniDSP UMA-8 microphone array AGC settings

To do more, use the Alexa app on your iOS or Android device to browse for and add "skills."

Limitations[Top]

There are some things to be aware of:

  • If you are outside the USA:

    • You will have difficulty getting the companion Alexa app, as it's only available in the US Apple App Store and Google Play Store. There are workarounds, at least for the iOS version, like here.

    • You will not be able to set your location to outside of the USA.

  • Some things are not supported with this sample/Raspberry Pi version, like Amazon Music. You can still add skills with the Alexa app.

Wrapping up[Top]

That's it for this app note! Have fun, and please let us know about your UMA-8 and Raspberry Pi/Alexa experience in our forum.