28 May 2025

Webinar: Audio device simulations for virtual prototyping and algorithm design

Watch on demand today

Webinar Details

Whether you're working in audio machine learning, product design, or signal processing, this session is tailored to help you extract real-world acoustic performance data from your CAD models. It will help you guide early decision-making around audio hardware designs, develop algorithms from an early stage, and test products that have not yet been built.

The webinar is available to watch on demand now. Please sign up below.

Duration: 50 minutes + Live Q&A

Agenda

  • Introduction: The Problem We’re Solving
    Go from a CAD file to comprehensive acoustic performance data in minutes or hours, not weeks or months.
  • Feature Spotlight: How are device-specific impulse responses in complex environments generated in the Treble SDK?
    A deep dive into the new SDK feature: upload CAD files to Treble, generate device-specific impulse responses (IRs), and render your device into any acoustic environment.
  • Virtual Prototyping at Scale
    From AR glasses to smart speakers to hearing aids: test, iterate, and evaluate acoustic performance before you build.
  • Machine Learning Data Augmentation
    Programmatically generate millions of real-world, device-specific audio samples for ML training; a minimal rendering sketch follows this agenda.
  • Roadmap Preview
    We will go over the roadmap for the DRTF-from-CAD feature.
  • Live Q&A
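
The Feature Spotlight and Machine Learning Data Augmentation items above describe a workflow many teams end up scripting: simulate device-specific impulse responses, then render dry recordings through those IRs to build device-aware training data. The minimal Python sketch below shows only the downstream rendering/augmentation step. It assumes the simulated IRs have already been exported to WAV files; the folder names, mono down-mix, and peak normalization are illustrative choices, not the Treble SDK API.

# Minimal sketch: render dry speech through simulated device IRs for ML data augmentation.
# Assumes IRs were exported as WAV files (one file per scene, one channel per device microphone).
from pathlib import Path

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

DRY_SPEECH_DIR = Path("dry_speech")   # hypothetical folder of clean recordings
DEVICE_IR_DIR = Path("device_irs")    # hypothetical folder of simulated device IRs
OUTPUT_DIR = Path("augmented")
OUTPUT_DIR.mkdir(exist_ok=True)

for speech_path in sorted(DRY_SPEECH_DIR.glob("*.wav")):
    speech, fs_speech = sf.read(speech_path)
    if speech.ndim > 1:
        speech = speech.mean(axis=1)  # mix to mono for simplicity
    for ir_path in sorted(DEVICE_IR_DIR.glob("*.wav")):
        ir, fs_ir = sf.read(ir_path)
        if fs_ir != fs_speech:
            continue  # in practice, resample instead of skipping
        if ir.ndim == 1:
            ir = ir[:, np.newaxis]    # treat as a single device microphone
        # One output channel per device microphone: the speech as "heard" by the device in that scene.
        wet = np.stack(
            [fftconvolve(speech, ir[:, m], mode="full") for m in range(ir.shape[1])],
            axis=1,
        )
        wet /= max(np.max(np.abs(wet)), 1e-9)  # simple peak normalization
        sf.write(OUTPUT_DIR / f"{speech_path.stem}__{ir_path.stem}.wav", wet, fs_speech)

Scaling this loop across thousands of IRs and utterances, plus the usual extras such as noise mixing and level variation, is essentially what the data-augmentation agenda item refers to.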

Watch on demand

Sign up to watch the webinar on demand.

Meet the speakers

Dr. Daniel Gert Nielsen

Product Manager for the Treble SDK, Treble Technologies

Dr. Daniel Gert Nielsen is a specialist in numerical vibro-acoustics, with a PhD focused on loudspeaker modeling and optimization. His expertise spans acoustic simulation for communication devices and synthetic data generation for machine learning applications. With a strong background in numerical methods and audio technology, he plays a key role in shaping advanced acoustic modeling solutions at Treble.

Dr. Solvi Thrastarson

Principal Simulation Specialist & Team Lead

Dr. Solvi Thrastarson is a principal simulation specialist at Treble, with deep expertise in wave physics and numerical modeling. Holding a PhD in seismology, his academic background centers on the propagation of complex wave phenomena and the development of high-fidelity simulation techniques. Dr. Thrastarson’s work integrates advanced finite element methods and large-scale optimization strategies, driving the accuracy and performance of Treble’s wave-based acoustic engine.

Recent posts

26 February 2026

Webinar: A new era for audio AI data

Join our webinar on February 26th unveiling new Treble SDK capabilities for advanced voice processing and audio AI, including dynamic scene simulation, a new source-receiver device for own-voice and echo-cancellation scenarios, and powerful tools for large-scale synthetic data generation. Industry leaders in audio technology and voice AI will join to discuss how these advancements elevate product performance and accelerate research and development.
03 February 2026

Meet Treble at ISE 2026

We will be at ISE 2026 in Barcelona on February 3-6, where we will showcase the Treble Web Application for high-fidelity room acoustic simulations, giving visitors a hands-on look at our streamlined cloud-based approach to acoustic prediction and auralization.
28 January 2026

Synthetic realism in speech enhancement: Training models for the real world with ai-coustics

Tim Janke, Co-founder and Head of Research at ai-coustics, examines why voice AI systems struggle outside controlled environments and how training data is often the limiting factor. The article introduces synthetic realism as a data-driven approach to teaching models what truly matters in real acoustic scenes, from distance and geometry to room acoustics. It also shows how physics-based simulation with the Treble SDK enables spatially accurate training data that translates into more reliable speech enhancement and voice agent behavior in production.