The Next Major Evolution of the Treble SDK
By Daniel Gert Nielsen, Senior Product Manager, Treble SDK
On the 26th of February, we are releasing the most significant update to the Treble SDK since launch. This release strengthens three core areas: the simulation capabilities of the SDK, structured data workflows, and scene generation.
We are introducing dynamic source and receiver types with trajectory-based movement and time-varying orientation, supporting mono and spatial configurations including DRTF rendering. We are also launching a new source-receiver device type and a redesigned device import workflow.
Together, these upgrades enable:
- Realistic modeling of moving talkers, head movement, and rotating devices
- Own-voice simulation for head-worn devices and acoustic echo cancellation (AEC) for smart speakers and speakerphones
- Improved geometry validation and performance benchmarking during import
These improvements reduce friction in virtual prototyping while significantly increasing physical realism and scalability.
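To make the trajectory concept concrete, the sketch below shows one way a moving talker with time-varying orientation could be described as interpolated waypoints. This is a minimal, self-contained Python illustration; the names `Waypoint` and `sample_trajectory` are hypothetical and do not reflect the Treble SDK's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Waypoint:
    time_s: float          # timestamp in seconds
    position: np.ndarray   # [x, y, z] in metres
    yaw_deg: float         # horizontal facing angle (time-varying orientation)

def sample_trajectory(waypoints: List[Waypoint], t: float) -> Tuple[np.ndarray, float]:
    """Linearly interpolate position and yaw along a trajectory at time t."""
    times = np.array([w.time_s for w in waypoints])
    t = float(np.clip(t, times[0], times[-1]))
    i = min(int(np.searchsorted(times, t, side="right")) - 1, len(waypoints) - 2)
    a, b = waypoints[i], waypoints[i + 1]
    alpha = (t - a.time_s) / (b.time_s - a.time_s)
    position = (1 - alpha) * a.position + alpha * b.position
    yaw = (1 - alpha) * a.yaw_deg + alpha * b.yaw_deg
    return position, yaw

# A talker walking 3 m along the x-axis over 4 s while turning 90 degrees.
walk = [
    Waypoint(0.0, np.array([1.0, 2.0, 1.6]), 0.0),
    Waypoint(4.0, np.array([4.0, 2.0, 1.6]), 90.0),
]
print(sample_trajectory(walk, 2.0))  # -> (array([2.5, 2. , 1.6]), 45.0)
```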
We are introducing collections, a feature that turns simulation outputs into structured, reusable datasets.
Collections allow users to aggregate impulse responses across projects and enrich them with metadata, making large scale acoustic data easier to manage and reuse.
With collections, you can:
- Aggregate and tag impulse responses with geometric, acoustic, and simulation-specific metadata
- Include parameters such as source-receiver distance, line of sight, T20, and C50
- Instantly filter datasets based on defined acoustic or spatial constraints
This replaces manual file handling with scalable, reproducible data workflows for audio ML development. It also enables curated acoustic datasets to be packaged and made commercially available without running new simulations.
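To illustrate the kind of metadata-driven filtering collections enable, here is a minimal sketch in plain Python. The record structure and helper function are hypothetical stand-ins, not the Treble SDK's collections API; they only show how constraints on parameters such as source-receiver distance or T20 can select a subset of a dataset without re-running simulations.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class IRRecord:
    path: str                                 # impulse response audio file
    meta: Dict = field(default_factory=dict)  # tags: distance, T20, C50, ...

def filter_collection(records: List[IRRecord],
                      **constraints: Callable) -> List[IRRecord]:
    """Keep records whose metadata satisfies every (key, predicate) constraint."""
    return [r for r in records
            if all(pred(r.meta.get(key)) for key, pred in constraints.items())]

collection = [
    IRRecord("room_a/ir_001.wav",
             {"distance_m": 1.2, "t20_s": 0.45, "c50_db": 8.1, "line_of_sight": True}),
    IRRecord("room_b/ir_017.wav",
             {"distance_m": 4.8, "t20_s": 0.90, "c50_db": 1.3, "line_of_sight": False}),
]

# Near-field, direct-path impulse responses with moderate reverberation.
subset = filter_collection(
    collection,
    distance_m=lambda d: d < 2.0,
    t20_s=lambda t: t < 0.6,
    line_of_sight=lambda los: los,
)
print([r.path for r in subset])  # -> ['room_a/ir_001.wav']
```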
Scene generation is now a native capability in the Treble SDK.
Users can construct complete acoustic environments by combining curated impulse responses with speech and noise signals, placing devices directly within the scene and modeling realistic spatial interactions.
Scene generation combines high-fidelity simulated impulse responses with speech and noise to support:
- Multi-talker conversational scenarios that include devices
- Bulk scene generation driven by conversation rules to create realistic audio scenes
- Microphone noise, sample delay, and lumped circuits for more realistic device behavior
- Scalable generation of structured test scenes
This shifts synthetic workflows from isolated impulse responses to fully simulated acoustic environments.
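Conceptually, scene generation boils down to convolving dry signals with simulated impulse responses, mixing the talkers, and adding noise at a controlled level. The sketch below illustrates that pipeline with NumPy and SciPy; it is a simplified stand-in, not the Treble SDK's scene generation API, and it omits effects such as sample delay and lumped circuits mentioned above.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_scene(dry_signals, impulse_responses, noise, snr_db):
    """Convolve each dry talker signal with its impulse response, sum the
    reverberant talkers, and add noise scaled to a target SNR in dB."""
    wets = [fftconvolve(s, ir) for s, ir in zip(dry_signals, impulse_responses)]
    n = max(len(w) for w in wets)
    mix = np.zeros(n)
    for w in wets:
        mix[: len(w)] += w
    noise = np.resize(noise, n)  # loop or crop the noise to the scene length
    gain = np.sqrt(np.mean(mix**2) /
                   (np.mean(noise**2) * 10 ** (snr_db / 10) + 1e-12))
    return mix + gain * noise

# Toy example: two talkers, synthetic impulse responses (direct path plus one
# echo), and white microphone noise mixed at 20 dB SNR.
fs = 16_000
rng = np.random.default_rng(0)
talkers = [rng.standard_normal(fs), rng.standard_normal(fs)]  # stand-ins for speech
irs = [np.r_[1.0, np.zeros(99), 0.3], np.r_[1.0, np.zeros(199), 0.2]]
scene = render_scene(talkers, irs, rng.standard_normal(fs), snr_db=20)
print(scene.shape)  # (16200,) -- speech length plus impulse response tail
```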
During the webinar, we will present internal and collaborative research on speech enhancement, ASR, and source localization workflows.
Using publicly available models such as SpatialNet, we demonstrate that replacing simplified synthetic data with wave-based Treble simulations leads to measurable, significant performance gains, including up to a 30% reduction in word error rate.
This highlights the direct impact that physically accurate, wave-based acoustic data can have on real-world model performance.
A Foundation for the Next Phase of Audio R&D
The Treble SDK was built to address fundamental limitations in the current state of acoustic simulation. As outlined in our broader strategy documents, modern audio development requires accuracy, scalability, and seamless integration into automated workflows.
With this release, we are strengthening all three dimensions:
- More realistic device modeling
- More structured and scalable data handling
- Integrated audio scene generation
For teams developing the next generation of speech enhancement, spatial audio, automotive systems, and adaptive audio algorithms, simulation must move from a niche tool to core infrastructure. That is the direction we are building toward.
During the webinar, we will be joined by Hugging Face, Jabra, and Ai-coustics, who will present real-world use cases built on top of the Treble SDK.
We look forward to sharing these updates in detail and to continuing the dialogue with our users as we refine and extend the platform.
More information on the webinar is available here.
You can also sign up for the webinar by filling out the form below.
Dr. Daniel Gert Nielsen
Senior Product Manager for the Treble SDK
Dr. Daniel Gert Nielsen is a specialist in numerical vibro-acoustics, with a PhD focused on loudspeaker modeling and optimization. His expertise spans acoustic simulation for communication devices and synthetic data generation for machine learning applications. With a strong background in numerical methods and audio technology, he plays a key role in shaping advanced acoustic modeling solutions at Treble.
