26 February 2026

On-demand SDK Webinar: A new era for audio AI data

Our biggest SDK update ever: Dynamic scenes, large-scale data workflows and better AI outcomes.

Watch the on-demand webinar showcasing major advancements in the Treble SDK.

In this session, we introduce dynamic acoustic scene simulation, scalable synthetic audio data workflows, and new capabilities designed to improve AI performance and accelerate product development.

Register here to get the recording and technical deep dives.

On the agenda for this webinar:

  • Audio that moves - finally
    Synthetic audio has traditionally been static, unable to represent the fluid nature of sound in real life. People move. Devices move. Conversations shift direction. Acoustic conditions change continuously. In this webinar, Treble will introduce world-first dynamic scene modeling, enabling simulation of moving sound sources, moving listeners, and time-varying spatial relationships. This unlocks more realistic training data and enables virtual prototyping of audio devices operating in dynamic environments - a fundamental expansion of what synthetic audio can represent.
  • Curated audio data, directly accessible
    Treble is evolving from a simulation SDK into a full Audio AI Data Platform. We will show how teams can directly access large-scale, curated audio and acoustics datasets and work with them alongside simulation and data generation. The session will demonstrate how data can be processed and tailored to specific use cases, with metadata treated as a first-class object that can be customized and enriched.
  • Research spotlight: why better data wins
    Treble will present new research showing measurable improvements in downstream ML performance when training on Treble data, including gains in speech enhancement, speech recognition, direction-of-arrival, and distance estimation. We will discuss why, as model architectures converge, data quality and diversity become decisive for audio AI performance.
  • New data-centric ML workflows
    We will introduce new data-centric workflows for handling and processing large audio datasets at scale, improving collaboration between data, simulation, and ML teams, and integrating Treble seamlessly with existing training infrastructure. With the new “collections” feature, you can now combine audio data from multiple projects, filter datasets based on acoustic parameters, and enrich datasets with metadata such as the distance between source-receiver pairs, line of sight, and acoustic parameters like T20 and C50.
  • Own-voice modeling and enhanced device handling
    We will showcase new capabilities for modeling challenging device-level audio scenarios, including devices with both microphones and speakers, and complex near-field and own-voice interactions. The session will demonstrate how detailed geometry and acoustic configurations can be handled efficiently, improving both training data quality and realism in virtual device prototyping. This update enables AEC simulations for smart speakers and speakerphones, along with own-voice modeling for wearable acoustic devices.
  • Automated scene generation
    We will introduce new tools for automated virtual audio scene generation, enabling teams to construct nuanced audio scenes with multiple talkers and noise sources at scale by controlling SNR, speech overlap, turn-taking, and environmental complexity, while incorporating real speech data where needed.
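The metadata-driven filtering described in the data-centric workflows above can be sketched in plain Python. This is an illustrative sketch only, not the actual Treble SDK “collections” API; the record fields (`distance_m`, `t20_s`, `c50_db`, `line_of_sight`) simply mirror the metadata the webinar mentions.

```python
# Hypothetical records carrying the kinds of metadata mentioned above:
# source-receiver distance, line of sight, and acoustic parameters (T20, C50).
records = [
    {"id": "ir_001", "distance_m": 1.2, "t20_s": 0.45, "c50_db": 8.1, "line_of_sight": True},
    {"id": "ir_002", "distance_m": 4.8, "t20_s": 1.10, "c50_db": 1.3, "line_of_sight": False},
    {"id": "ir_003", "distance_m": 2.5, "t20_s": 0.60, "c50_db": 5.7, "line_of_sight": True},
]

def filter_records(records, max_distance_m=None, max_t20_s=None, require_los=False):
    """Keep only records that satisfy the given acoustic-parameter constraints."""
    out = []
    for r in records:
        if max_distance_m is not None and r["distance_m"] > max_distance_m:
            continue
        if max_t20_s is not None and r["t20_s"] > max_t20_s:
            continue
        if require_los and not r["line_of_sight"]:
            continue
        out.append(r)
    return out

# Select near, relatively dry, line-of-sight impulse responses.
near_dry = filter_records(records, max_distance_m=3.0, max_t20_s=0.8, require_los=True)
print([r["id"] for r in near_dry])  # -> ['ir_001', 'ir_003']
```

In a real workflow this kind of filter would run over a full collection spanning multiple projects, with the metadata attached by the platform rather than hand-written.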
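Controlling SNR during scene construction, as described above, amounts to scaling the noise so that its average power sits a target number of decibels below the speech power before summing. A minimal sketch in plain Python (this is the standard textbook formulation, not Treble's implementation):

```python
import math

def power(x):
    """Mean power of a sample sequence."""
    return sum(v * v for v in x) / len(x)

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Scale noise so that 10*log10(power(speech)/power(noise)) equals target_snr_db."""
    gain = math.sqrt(power(speech) / (power(noise) * 10 ** (target_snr_db / 10)))
    return [gain * n for n in noise]

def mix_at_snr(speech, noise, target_snr_db):
    """Mix speech with noise at the requested signal-to-noise ratio."""
    scaled = scale_noise_to_snr(speech, noise, target_snr_db)
    return [s + n for s, n in zip(speech, scaled)]

# Example: mix two tones (stand-ins for speech and noise) at 10 dB SNR.
speech = [math.sin(0.05 * i) for i in range(2000)]
noise = [math.sin(0.31 * i + 0.7) for i in range(2000)]
mixture = mix_at_snr(speech, noise, 10.0)

scaled = scale_noise_to_snr(speech, noise, 10.0)
achieved = 10 * math.log10(power(speech) / power(scaled))
print(round(achieved, 6))  # -> 10.0
```

Speech overlap and turn-taking would layer on top of this by time-shifting and gating multiple talker signals before the same SNR-controlled summation.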

Industry guests

Industry leaders in audio technology and voice AI will join us to discuss how Treble elevates their products and accelerates their research and development efforts.

Jabra GN

In the audio industry, product development, testing, and recording across different scenarios require many measurements, which are often costly and time-consuming. Jabra GN's approach is to use Treble software to replace selected measurements, enable deeper investigation of existing products, support the design of new ones, and generate synthetic data to improve audio algorithms. As an initial step, Treble was validated through simulations of rooms with varying sizes and reverberation times, as well as simulations of videobars and speakerphones, which showed very strong agreement with real measurements. Treble is now used to further develop products and audio algorithms, and the presentation therefore covers room and device simulations across scenarios, microphone array design, and synthetic data generation.

Hugging Face

Hugging Face will present an overview of their audio ecosystem, including educational resources, leaderboards, and Transformer-based ASR and TTS models, and explain why high-quality data is critical for building reliable audio systems. They will highlight the Treble10 collaboration through the public datasets released on Hugging Face and demonstrate how Treble enables training and evaluation in realistic acoustic conditions with reverb, noise, and movement, while new Treble SDK features simplify development workflows through Hugging Face tooling.

ai-coustics

ai-coustics will demonstrate why speech enhancement models require training data that closely matches real-world acoustic conditions, and how high-quality synthetic data makes it possible to introduce controlled variability in speaker positions, noise sources, and room characteristics. They will show how this enables systematic scene construction, such as placing multiple speakers at different distances from a microphone to train more specialized and robust models.

Meet the speakers

CEO - Treble Technologies

Dr. Finnur Pind

Dr. Finnur Pind is the CEO and co-founder of Treble, a company revolutionizing sound simulation and acoustic design through cutting-edge technology. With a background in acoustical engineering and years of experience driving innovation, Finnur has been at the forefront of developing tools that enable designers, engineers, and architects to create better-sounding environments and products. Under his leadership, Treble has become a global leader in cloud-based sound solutions, pushing the boundaries of precision and collaboration in the industry.

Senior Product Manager for the Treble SDK - Treble Technologies

Dr. Daniel Gert Nielsen

Dr. Daniel Gert Nielsen is a specialist in numerical vibro-acoustics, with a PhD focused on loudspeaker modeling and optimization. His expertise spans acoustic simulation for communication devices and synthetic data generation for machine learning applications. With a strong background in numerical methods and audio technology, he plays a key role in shaping advanced acoustic modeling solutions at Treble.

Senior Simulation Specialist - Treble Technologies

Dr. Sarabeth Mullins

Dr. Sarabeth S Mullins is a specialist in room acoustics, with her dissertation focused on the acoustics of Notre-Dame de Paris. Her expertise includes large-scale room acoustic modeling, interactive acoustics for VR/AR, and human–acoustic interactions. With an interdisciplinary research background and experience in simulation workflows, dataset development, and signal processing, she contributes to high-fidelity acoustic modeling at Treble.

Senior Audio Research Engineer - Treble Technologies

Dr. Georg Götz

Dr. Georg Götz is a leading expert in acoustics and audio signal processing, with a PhD specializing in data-driven room acoustics modeling. His work focuses on large-scale data acquisition, multi-exponential sound energy decay, and 6DoF late reverberation rendering, contributing to more accurate and immersive spatial audio simulations. With extensive experience bridging theoretical research and real-world applications, Dr. Götz plays a key role in advancing next-generation acoustic modeling at Treble.

Audio ML Engineer - Hugging Face

Dr. Eric Bezzam

Eric Bezzam is an audio ML engineer at Hugging Face. He received his PhD from EPFL, and previously worked at Snips, Sonos, DSP Concepts, and Fraunhofer IDMT. He was one of the main developers of pyroomacoustics.

Head of Research - ai-coustics

Dr. Tim Janke

Tim Janke is Head of Research at ai-coustics. He holds a Ph.D. in Machine Learning from TU Darmstadt and has published at NeurIPS. With 10+ deep learning papers, he’s an expert in generative audio and Voice AI, driving breakthroughs that are shaping the future of audio technology at ai-coustics.

Simulation Specialist - GN

Dr. Yauheni Belahurau

Dr. Yauheni Belahurau works as a Simulation Specialist at GN in the Audio Technology department. Prior to joining GN, he completed his PhD at the Technical University of Denmark (DTU), at the Centre for Acoustic-Mechanical Microsystems, followed by a postdoctoral fellowship. His research and professional interests include the design of acoustic metamaterials and transducers, computational optimization, and advanced numerical methods for vibroacoustic problems. His work focuses on leveraging simulation-driven approaches to support product development and the optimization of audio systems.

Recent posts

17 February 2026

The Next Major Evolution of the Treble SDK

The release is centered around two main areas. First, a substantial expansion of our device modeling capabilities. Second, a new framework for large scale data workflows and acoustic scene generation. Together, these changes further position the Treble SDK as a core infrastructure layer for modern audio algorithm development and virtual prototyping.

03 February 2026

Meet Treble at ISE 2026

We will be at ISE 2026 in Barcelona on February 3-6, where we will showcase the Treble Web Application for high-fidelity room acoustic simulations, giving visitors a hands-on look at our streamlined, cloud-based approach to acoustic prediction and auralization.

28 January 2026

Synthetic realism in speech enhancement: Training models for the real world with ai-coustics

Tim Janke, Co-founder and Head of Research at ai-coustics, examines why voice AI systems struggle outside controlled environments and how training data is often the limiting factor. The article introduces synthetic realism as a data driven approach to teaching models what truly matters in real acoustic scenes, from distance and geometry to room acoustics. It also shows how physics based simulation with the Treble SDK enables spatially accurate training data that translates into more reliable speech enhancement and voice agent behavior in production.