06 June 2023

Treble wins ICASSP 2023 best Startup award

The International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2023 and the 2nd Signal Processing Society Entrepreneurship Forum announced Treble as the winner of this year’s Startup Fair.

Treble’s innovative cloud-based sound simulation platform and solid pitch secured the top spot in this year’s competition. Our cutting-edge solutions showcased an exemplary application of signal processing technology.

Our cloud-based synthetic data generation tool enables audio teams to work faster and produce better machine learning models. It represents a new approach to training and testing audio machine learning models, enabling accelerated AI development, superior data quality, and the ability for users to create custom datasets with deep synthetic complexity and variety.


Another win for Treble!

Finnur Pind was honored by IEEE SPS President, Professor Athina Petropulu, who awarded the prize and certificate.

