The Audio Product Education Institute (APEI), an initiative of the Audio Engineering Society (AES), presents a new webinar in its Voice and DSP education pillar, “Demystifying Beamformers,” addressing microphone beamforming in theory and practice, along with the latest optimizations for voice recognition platforms. The knowledge-sharing online session, featuring three industry experts with extensive experience developing cutting-edge voice-based products, will be held Wednesday, July 13, at 12:00 p.m. ET.
Beamforming is a signal processing technique extensively explored in radio frequency and acoustics applications. Thanks to advances in DSP, beamforming is now widely used for loudspeaker directivity control and for processing the signals from multiple omnidirectional microphones to optimize the performance of speakerphones and voice recognition systems. By giving a voice recognition AI system the cleanest possible voice signal to work with, engineers can ensure more accurate recognition and greater reliability for voice interface applications.
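To make the idea concrete, the simplest form of this processing is a delay-and-sum beamformer, which time-aligns and averages the signals from several omnidirectional microphones so that sound arriving from a chosen direction adds coherently while sound from other directions partially cancels. The sketch below is a minimal Python/NumPy illustration under a far-field plane-wave assumption; the function and variable names are our own and are not drawn from the webinar materials.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_direction, fs, c=343.0):
    """Minimal delay-and-sum beamformer (illustrative, far-field model).

    mic_signals:    (num_mics, num_samples) time-domain samples
    mic_positions:  (num_mics, 3) microphone coordinates in meters
    look_direction: unit vector pointing from the array toward the talker
    """
    num_mics, num_samples = mic_signals.shape
    # Under a plane-wave assumption, mics closer to the talker receive the
    # wavefront earlier; the projection onto look_direction gives that lead.
    lead = mic_positions @ look_direction / c          # seconds
    delays = lead - lead.min()                         # delay early mics so all channels align
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # A pure delay is a linear phase shift in the frequency domain.
    aligned = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
    # Averaging the aligned channels reinforces the look direction and
    # attenuates diffuse noise, cleaning up the signal handed to the recognizer.
    return np.fft.irfft(aligned.mean(axis=0), n=num_samples)
```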
The performance of a beamformer in voice capture depends on how well the beam pattern of the microphone array can be optimized and on the signal-to-noise ratio (SNR) of the array. In this webinar, attendees will learn about beamformer functionality, including how the number of microphones and the array geometry affect performance, the importance of microphone matching and SNR, and how to create steerable arrays. The presentation will also address the technical challenges of beamforming and how new MEMS microphones can help deliver consistent directionality across the audible range together with good SNR.
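As a rough illustration of why microphone count and geometry matter, the short Python/NumPy sketch below evaluates the beam pattern of a uniform linear array with delay-and-sum steering: more microphones narrow the main lobe, while spacing the mics more than half a wavelength apart introduces grating lobes. The array parameters and function name are illustrative assumptions, not values from the webinar.

```python
import numpy as np

def ula_beam_pattern(num_mics, spacing, freq, steer_angle_deg, c=343.0):
    """Beam pattern of a steered delay-and-sum uniform linear array (sketch)."""
    angles = np.linspace(-90.0, 90.0, 361)       # candidate arrival angles (degrees)
    k = 2.0 * np.pi * freq / c                   # acoustic wavenumber
    mic_x = np.arange(num_mics) * spacing        # mic positions along a line (meters)
    # Steering weights: phase-align the array for a wave from steer_angle_deg.
    steer = np.exp(-1j * k * mic_x * np.sin(np.radians(steer_angle_deg)))
    # Response of the weighted array to a plane wave from each candidate angle.
    arrival = np.exp(1j * k * np.outer(np.sin(np.radians(angles)), mic_x))
    response = np.abs(arrival @ steer) / num_mics   # normalized to 1.0 at the steered angle
    return angles, response

# Example: an 8-mic array with 3 cm spacing steered 20 degrees off broadside at 2 kHz.
angles, pattern = ula_beam_pattern(num_mics=8, spacing=0.03, freq=2000.0, steer_angle_deg=20.0)
```

Evaluating the same array at lower frequencies shows the beam broadening considerably, which is one reason array geometry has to be chosen with the target bandwidth in mind.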
Following an introduction to beamformers by Paul Beckmann (co-founder and CTO of DSP Concepts), Sahil Gupta (co-founder and Product Lead at Soundskrit) will discuss the frequency dependency of broadside beamformers and the SNR challenge of differential endfire beamformers when trying to create dipoles. In the third presentation, Arash Radmoghadam (director of engineering and applied machine learning at Fluent.ai) will discuss improving voice activation and recognition with beamforming.
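For context on the dipole point: a first-order differential endfire beam can be formed by subtracting two closely spaced omnidirectional mics, which yields a figure-eight (dipole) pattern, but the raw difference rolls off at roughly 6 dB per octave toward low frequencies, and the equalization needed to flatten it also amplifies microphone self-noise. That trade-off is the SNR challenge referenced above. The sketch below is a simplified Python/NumPy illustration under those assumptions, not code from the presenters.

```python
import numpy as np

def two_mic_dipole(front, back, spacing, fs, c=343.0):
    """First-order differential (dipole) beam from two closely spaced omnis (sketch).

    front, back: time-domain signals from two mics spaced `spacing` meters
                 apart along the endfire axis.
    """
    n = front.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Subtracting the mics gives the cos(theta) dipole pattern, but its on-axis
    # magnitude is 2*|sin(pi*f*spacing/c)|, i.e. it falls toward low frequencies.
    diff = np.fft.rfft(front - back)
    # Equalize back toward a flat on-axis response; the regularization floor caps
    # the boost near DC, where it would otherwise amplify mic self-noise without limit.
    eq = 1.0 / np.maximum(2.0 * np.abs(np.sin(np.pi * freqs * spacing / c)), 0.1)
    return np.fft.irfft(diff * eq, n=n)
```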
The webinar will be hosted by Dave Lindberg (DB Enterprises, Hong Kong), APEI's Supply Chain and Sourcing pillar co-chair. Attendees will be able to submit questions to the presenters, which will be addressed in the last 30 minutes of this session.
The Audio Product Education Institute’s Voice and DSP education pillar is sponsored by DSP Concepts and underscores the AES’s commitment to providing its membership and the industry at large with information on real-world solutions for audio product development. You can register for the webinar here.