Increasingly, there are setups where multiple microphone devices are available to a regular or mobile computing device like a laptop or smartphone. Examples include:
- A headset audio adaptor (USB or Bluetooth connected) that has an integrated mic but is used along with a headset that has its own microphone
- A wired or Bluetooth headset with an integrated mic connected to a computer that has its own mic, or to a computer that is also connected to a webcam or similar peripheral with its own mic
- One or more stereo-microphone setups connected to a computer, whether a single-piece (two-element) stereo microphone or a pair of mono microphones
All these setups can yield a multiple-microphone array, enabling more accurate voice capture and improved background-noise rejection. This matters for telecommunications, but it is just as important for voice-recognition applications like voice-driven personal assistants (Siri, Cortana, Google Now) or voice-to-text transcription.
Such a setup would require that the microphone array be created at the operating-system level rather than in hardware: the OS would enumerate all connected and active microphone devices and establish a software-defined microphone array from them.
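As a rough illustration of that enumerate-and-group step, here is a minimal Python sketch. The device names, the `MicDevice` type and the `build_software_array` helper are all hypothetical stand-ins for whatever the OS audio stack would actually expose:

```python
from dataclasses import dataclass

# Hypothetical representation of one enumerated audio input device.
@dataclass
class MicDevice:
    name: str
    channels: int
    active: bool  # currently connected and enabled

def build_software_array(devices):
    """Group every active input device into one logical microphone array."""
    elements = [d for d in devices if d.active and d.channels > 0]
    # Total capture channels available to the software-defined array.
    total_channels = sum(d.channels for d in elements)
    return elements, total_channels

# Example: a laptop's internal stereo mic, a webcam mic, and a Bluetooth
# headset that happens to be disconnected at the moment.
devices = [
    MicDevice("Internal Microphone", 2, True),
    MicDevice("USB Webcam Mic", 1, True),
    MicDevice("Bluetooth Headset Mic", 1, False),
]
elements, total = build_software_array(devices)
print([d.name for d in elements], total)
# → ['Internal Microphone', 'USB Webcam Mic'] 3
```

The key design point is that the array is rebuilt from whatever is active right now, so devices can come and go without the rest of the audio pipeline caring.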
The software would then have to learn about the microphone-array setup, including how close the mics are to each other and their pick-up characteristics. This is what creates the "voice focus" needed to gain any benefit from the microphone array.
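One common way that "voice focus" is achieved is delay-and-sum beamforming: estimate how much later the voice arrives at each mic, align the signals, and average them so the voice adds up while uncorrelated noise partially cancels. The sketch below simulates this with numpy on synthetic data; the sample rate, delay and noise levels are illustrative assumptions, and a broadband random signal stands in for the voice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1600  # 100 ms at a 16 kHz sample rate

# Broadband stand-in for a voice signal; mic 2 hears it 5 samples later
# because it is slightly further from the talker (a geometry assumption).
voice = rng.standard_normal(n)
delay = 5
mic1 = voice + 0.5 * rng.standard_normal(n)
mic2 = np.roll(voice, delay) + 0.5 * rng.standard_normal(n)

# Estimate the inter-mic delay by cross-correlation -- this is the software
# "learning" the proximity of the mics -- then align and average.
corr = np.correlate(mic2, mic1, mode="full")
est_delay = corr.argmax() - (n - 1)
aligned = np.roll(mic2, -est_delay)
focused = (mic1 + aligned) / 2

def noise_power(x):
    return np.mean((x - voice) ** 2)

# Averaging two mics roughly halves the uncorrelated noise power.
print(est_delay, noise_power(focused) < noise_power(mic1))
```

Real implementations have to do this per frequency band and track moving talkers, but the underlying idea is the same: the array only helps once the relative delays between mics are known.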
In some cases this may be achieved automatically, but there are situations where the operator may have to adjust the settings manually, such as when the microphones have different characteristics like pick-up ranges and sensitivities.
Another factor that affects this kind of setup is whether a microphone device will be active at all times during the usage session. A headset, for example, may be connected to or disconnected from a tablet or laptop on an ad-hoc basis. Similarly, the user may move around with their headset while using the microphone-equipped tablet or laptop, an activity more feasible with a Bluetooth headset or adaptor. In these cases the software has to re-define the microphone array on the fly so it still "catches" the user's voice.
In the case of a user moving around between microphones, the requirement is to readjust the microphone array in real time, identifying the key sounds to keep the focus on.
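A crude version of that real-time readjustment can be sketched as per-frame mic selection: every few milliseconds, refocus on whichever mic currently hears the voice loudest. The simulation below models a user walking from mic A towards mic B; the frame size, gain ramps and short-term energy cue are all toy assumptions, not a production algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
frames, frame_len = 50, 320  # fifty 20 ms frames at 16 kHz

# Toy proximity model: the voice fades on mic A and grows on mic B
# as the user walks across the room.
voice = rng.standard_normal((frames, frame_len))
gain_a = np.linspace(1.0, 0.1, frames)[:, None]
gain_b = np.linspace(0.1, 1.0, frames)[:, None]
noise = 0.05
mic_a = gain_a * voice + noise * rng.standard_normal((frames, frame_len))
mic_b = gain_b * voice + noise * rng.standard_normal((frames, frame_len))

# Per frame, use short-term energy as a crude proximity cue and
# refocus the array on the louder mic (0 = mic A, 1 = mic B).
energy_a = (mic_a ** 2).mean(axis=1)
energy_b = (mic_b ** 2).mean(axis=1)
selected = np.where(energy_a >= energy_b, 0, 1)

print(selected[0], selected[-1])  # → 0 1 (starts on mic A, ends on mic B)
```

A real system would blend or beamform rather than hard-switch, and would gate on voice activity rather than raw energy, but this shows why the array definition cannot be a one-off calibration.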
These are issues that may limit the idea of creating a software-defined microphone array, especially for voice recognition or telecommunications. Let's not forget that a software-defined microphone array will also demand computing resources, which can be very limiting on less powerful setups.
But once the concept of software-defined microphone arrays is proven and can be implemented at the operating-system level, it could be a path towards letting users gain the benefits of a microphone array from a combination of existing microphone-equipped devices.