
Audeering




audEERING GmbH is a German company that provides software for artificial intelligence based on automatic audio analysis. The company maintains the openSMILE toolkit, an open-source audio analysis package that serves as a worldwide standard in speech feature extraction and automatic emotion recognition. audEERING contributes to the research area of affective computing and was founded by Björn Schuller in 2012 as a spin-off of the Machine Intelligence and Signal Processing research group at the Technische Universität München (TUM) in Munich, Germany.

audEERING GmbH
Type: GmbH
Founded: 2012
Headquarters: Gilching, Germany
Founders: Dagmar Schuller (CEO), Prof. Björn Schuller (CSO), Dr. Florian Eyben (CTO), Dr. Martin Wöllmer (CIO)
Products: openSMILE
Website: www.audeering.com

Overview

audEERING mainly develops software for paralinguistic information retrieval and Emotion AI. Unlike automatic speech recognition (ASR), which extracts the spoken content from a speech signal, Emotion AI aims to automatically recognize the characteristics of a given speech segment. Such characteristics of human speech include a speaker's emotion,[1] personality, level of interest, and health states such as depression, intoxication, or vocal pathological disorders.

Early applications of Emotion AI and paralinguistic information retrieval include market research tools for customer satisfaction analysis, call center quality monitoring, voice coaching, targeted advertising, emotion-sensitive speech dialog systems,[2] humanoid robots, voice interfaces for gaming, and clinical voice analysis for detecting diseases such as Parkinson's disease and depression. Another possible application is speech analysis to support early diagnosis of autism.

Since 2013, audEERING has maintained the audio analysis toolkit openSMILE,[3] an open-source tool for audio feature extraction and pattern recognition that has become a standard within the affective computing research community and has served as benchmark software in several research competitions, such as the annual Interspeech Computational Paralinguistics Challenge.[4] Between 2013 and 2018, openSMILE was downloaded more than 150,000 times. openSMILE and its extension openEAR have been cited in about 1,400 scientific publications.[5][6]
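As a rough illustration of the kind of pipeline openSMILE implements: frame the audio, compute per-frame low-level descriptors (LLDs), then summarize them with statistical functionals into one fixed-length vector per segment. This sketch uses plain NumPy rather than openSMILE itself; the frame sizes and the two descriptors chosen (log energy, zero-crossing rate) are illustrative stand-ins for the much larger descriptor sets openSMILE extracts.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, len(signal) - frame_len) // hop
    return np.stack([signal[i * hop: i * hop + frame_len] for i in range(n)])

def low_level_descriptors(frames):
    """Per-frame log energy and zero-crossing rate, two classic LLDs."""
    energy = np.log(np.mean(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.stack([energy, zcr], axis=1)

def functionals(llds):
    """Summarize each LLD contour over the whole segment (mean, std):
    the step that turns variable-length audio into a fixed-size vector."""
    return np.concatenate([llds.mean(axis=0), llds.std(axis=0)])

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)   # 1 s of noise at 16 kHz, stand-in input
feats = functionals(low_level_descriptors(frame_signal(audio)))
print(feats.shape)                   # fixed-length vector: 2 LLDs x 2 functionals
```

The fixed-length output is what makes such features usable as input to standard classifiers, regardless of how long the original utterance was.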

Technology

Research activities of audEERING include the development and optimization of various signal processing and pattern recognition technologies such as voice activity detection,[7] audio feature extraction, automatic emotion recognition, and auditory scene analysis. The founders of audEERING have published pioneering work introducing Long Short-Term Memory (LSTM) deep learning methods for automatic speech and emotion recognition.[8][9]
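To make the voice activity detection task concrete, here is a toy energy-threshold baseline: mark frames whose log energy comes within a fixed margin of the loudest frame. This is not audEERING's method (the cited work replaces exactly this kind of hand-tuned rule with a trained LSTM classifier); the threshold and frame parameters are illustrative.

```python
import numpy as np

def energy_vad(signal, sr=16000, frame_ms=25, hop_ms=10, threshold_db=-30.0):
    """Flag frames whose log energy exceeds (max frame energy + threshold_db).
    A toy baseline for voice activity detection, not a trained model."""
    frame_len = sr * frame_ms // 1000
    hop = sr * hop_ms // 1000
    n = 1 + max(0, len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(n)])
    log_e = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return log_e > log_e.max() + threshold_db   # True = voice-like frame

sr = 16000
t = np.arange(sr) / sr
speechy = 0.5 * np.sin(2 * np.pi * 220 * t)                    # loud tone as "speech"
quiet = 0.005 * np.random.default_rng(1).standard_normal(sr)   # near-silence
decision = energy_vad(np.concatenate([quiet, speechy]))
print(decision[:5], decision[-5:])   # silence frames False, loud frames True
```

A learned detector improves on this rule mainly in realistic conditions (music, babble, film soundtracks) where loudness alone does not separate speech from background.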

audEERING has contributed to the Geneva Minimalistic Acoustic Parameter Set (GeMAPS),[10] a baseline standard for audio research related to the human voice. Together with Prof. Klaus Scherer (director of the Swiss Center for Affective Sciences in Geneva), audEERING has developed an automatic emotion recognition technology based on appraisal theory that enables the modeling and detection of 50 different emotion classes.[11]

In 2018, audEERING received substantial funding from the Danish audio device manufacturer GN, which acquired a minority shareholding in audEERING and supports the company as a strategic investor.[12]

Awards

In 2010, openSMILE received an award in the ACM Multimedia Open Source Competition. In a report published by Gartner, Inc. in 2017, audEERING was named a "Vendor to Watch" as one of the most relevant providers of next-generation artificial intelligence based on affective computing.[13] Together with GfK, audEERING developed "MarketBuilder Voice", an emotion recognition tool for market research, which won the BVM Prize for Market Research Innovation 2017.[14] audEERING was named "Innovator of the Year" 2017, winning the Digital Marketing Innovation World Cup.[15] In 2018, audEERING won the Bavarian Innovation Award, presented by the Bavarian Minister of Economics Hubert Aiwanger, the Bavarian Chamber of Industry and Commerce, and the working group of the Bavarian Chamber of Crafts.[16]

References

  1. B. Schuller, B. Vlasenko, F. Eyben, M. Wöllmer, A. Stuhlsatz, A. Wendemuth, G. Rigoll, "Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies (Extended Abstract)," in Proc. of ACII 2015, Xi'an, China, invited for the Special Session on Most Influential Articles in IEEE Transactions on Affective Computing.
  2. M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, M. Wöllmer, "Building Autonomous Sensitive Artificial Listeners (Extended Abstract)," in Proc. of ACII 2015, Xi'an, China, invited for the Special Session on Most Influential Articles in IEEE Transactions on Affective Computing.
  3. F. Eyben, M. Wöllmer, B. Schuller: "openSMILE - The Munich Versatile and Fast Open-Source Audio Feature Extractor", in Proc. ACM Multimedia (MM), ACM, Florence, Italy, pp. 1459-1462, October 2010.
  4. B. Schuller, S. Steidl, A. Batliner, P. B. Marschik, H. Baumeister, F. Dong, … & C. Einspieler: "The Interspeech 2018 Computational Paralinguistics Challenge: Atypical & self-assessed affect, crying & heart beats", Proc. of INTERSPEECH 2018, Hyderabad, India, 2018.
  5. Google Scholar page on openSMILE
  6. Google Scholar page on openEAR
  7. F. Eyben, F. Weninger, S. Squartini, B. Schuller, "Real-life voice activity detection with LSTM Recurrent Neural Networks and an application to Hollywood movies," in Proc. of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 483-487, 26-31 May 2013.
  8. M. Wöllmer, F. Eyben, B. Schuller, G. Rigoll, "Recognition of Spontaneous Conversational Speech using Long Short-Term Memory Phoneme Predictions," In Proc. of Interspeech 2010, ISCA, pp. 1946-1949, Makuhari, Japan, 2010.
  9. M. Wöllmer, F. Eyben, S. Reiter, B. Schuller, C. Cox, E. Douglas-Cowie, R. Cowie, "Abandoning emotion classes-towards continuous emotion recognition with modelling of long-range dependencies," In Proc. of Interspeech 2008, ISCA, pp. 597-600, Brisbane, Australia, 2008.
  10. F. Eyben, K. Scherer, B. Schuller, J. Sundberg, E. Andre, C. Busso, L. Devillers, J. Epps, P. Laukka, S. Narayanan, K. Truong, "The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing", in IEEE Transactions on Affective Computing, 2015.
  11. "audEERING lässt Sprachassistenten Emotionen lesen" ("audEERING lets voice assistants read emotions"), article on Home&Smart.
  12. GN Store Nord partners with German AI technology company focused on sound analyses News post on gn.com
  13. Market Trends: How AI and Affective Computing Deliver More Personalized Interactions With Devices, Report published by Gartner, Inc.
  14. "Marktforschung mittels Sprachanalyse" ("Market research by means of voice analysis"), news post on gfk-verein.org.
  15. The Digital Marketing 2017 Innovation World Cup Winners Article on mynewsdesk.com
  16. Innovationspreis Bayern Article on innovationspreis-bayern.de
