Implementing AI Algorithms with Audio and Video
Jun 15, 2023
ED39
Education Session
CTS: 1
CTS-D: 1
W314AB
Audio
Conferencing and Collaboration
Over the last few years, AI algorithms have found their way into collaboration devices and applications. In video cameras, algorithms that classify people and objects have become essential to automatic camera tracking. In audio devices, AI is used to distinguish noise from meaningful sound and thereby improve noise reduction. Combining audio and video information in one AI algorithm improves the behavior of existing features and enables new ones. Audio information sharpens head detection and helps distinguish active from passive participants, improving automatic camera steering. Video information improves the identification of meaningful audio and the elimination of unwanted sounds. Suppressing the voices of, or not steering the camera toward, people identified as being outside the meeting are just two of the new features made possible by combining audio and video information in the AI algorithms. This session introduces some of the improved and new functionality this combination makes possible and offers ideas for finding further opportunities to apply it in future designs and integrations.
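As a rough illustration of the kind of fusion described above (a minimal sketch only, not part of the session material; the names `Participant` and `fuse_audio_video` and the scoring weights are hypothetical), the example below scores visible participants by how well their head position agrees with a microphone array's direction-of-arrival estimate before deciding whom the camera should frame.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Participant:
    """A person detected by the camera's (hypothetical) vision model."""
    name: str
    azimuth_deg: float        # horizontal angle of the detected head, from camera center
    face_confidence: float    # 0..1 detection score
    lips_moving: bool         # simple visual voice-activity cue


def fuse_audio_video(participants: List[Participant],
                     audio_doa_deg: Optional[float],
                     doa_tolerance_deg: float = 15.0) -> Optional[Participant]:
    """Pick the participant to frame by combining video and audio evidence.

    audio_doa_deg is the direction-of-arrival of speech estimated by the
    microphone array (None if no speech is detected). A participant whose
    head position agrees with the audio direction and whose lips are moving
    is treated as the active talker; people visible but far from the audio
    direction are treated as passive and are not tracked.
    """
    best, best_score = None, 0.0
    for p in participants:
        score = p.face_confidence
        if audio_doa_deg is not None:
            # Reward agreement between the visual head position and the
            # acoustic direction of the detected speech.
            if abs(p.azimuth_deg - audio_doa_deg) <= doa_tolerance_deg:
                score += 1.0
            else:
                score -= 0.5  # likely a passive participant or a bystander
        if p.lips_moving:
            score += 0.5
        if score > best_score:
            best, best_score = p, score
    return best


if __name__ == "__main__":
    room = [
        Participant("presenter", azimuth_deg=-20.0, face_confidence=0.9, lips_moving=True),
        Participant("listener", azimuth_deg=30.0, face_confidence=0.8, lips_moving=False),
    ]
    # Microphone array reports speech arriving from roughly -18 degrees.
    target = fuse_audio_video(room, audio_doa_deg=-18.0)
    print(f"Steer camera toward: {target.name if target else 'nobody'}")
```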
Learning Objectives
- Describe how AI can be used to improve the video or audio experience in UC installations.
- Discuss how combining audio and video information in one AI algorithm improves the user experience beyond what can be achieved by handling the two separately.
- Conduct installations that take advantage of both audio and video information for AI decisions on which image or audio is relevant in a UC meeting.