Recently, I came across an announcement of a partnership between GPU Audio and Audio Modeling, an Italian developer that specializes in component modeling synthesis. With the majority of virtual instrument developers using sampling rather than synthesis to achieve acoustic instrument sounds, physical modeling has so far taken a back seat. This might be attributed to the heavy CPU load involved in modeling. I wrote an article on The Fundamentals of Physical Modeling Synthesis a few years ago that readers might find useful.
Since latency has been a major stumbling block for the technology in the past, I was interested to hear how it is being addressed. I reached out to Bill Collins of GPU Audio for some clarification.
PHIL MANTIONE: Thanks for agreeing to this interview, Bill. Could you talk a bit about your background and how you came to be involved with GPU Audio?
BILL COLLINS: I’ve spent two decades in the audio and music industry as a musician, audio engineer, and composer. I’ve held roles in sales, marketing, education, and composition for music libraries like Universal Production Music. In spring 2022, I came across GPU Audio through a colleague’s LinkedIn post, was intrigued by their work with Nvidia, and later applied for the Marketing Manager position when it became available. I was hired to lead their Partner and Product Marketing team in December 2022.
PM: So as the name implies, the essence of your company is the use of the GPU (Graphics Processing Unit) for audio, which has traditionally been handled by the CPU (Central Processing Unit). Can you explain the difference and why it’s important? Why is a GPU so much faster than a CPU, and if it is, why not run the whole computer on GPU architecture? Is architecture even the right word here?
BC: Without getting too technical, GPUs handle tasks concurrently, while CPUs process data sequentially. CPUs typically feature 8 to 24 cores, whereas GPUs commonly utilize hundreds to thousands of cores. This allows GPUs to efficiently process massive amounts of data, a crucial capability in the current era marked by immersive audio, artificial intelligence, and machine learning advancements.
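(To make that contrast concrete, here is a minimal sketch of my own, not GPU Audio’s code: the same gain operation written sequentially for the CPU and as a CUDA kernel that assigns one thread per sample, so the whole buffer is scaled concurrently. Buffer size and names are purely illustrative.)

```cuda
#include <cuda_runtime.h>
#include <vector>

// CPU version: one core walks the buffer one sample at a time.
void gain_cpu(float* buf, int n, float g) {
    for (int i = 0; i < n; ++i) buf[i] *= g;
}

// GPU version: each thread scales exactly one sample, so the whole
// buffer is processed concurrently across thousands of cores.
__global__ void gain_gpu(float* buf, int n, float g) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= g;
}

int main() {
    const int n = 1 << 20;                    // ~1M samples (illustrative)
    std::vector<float> host(n, 1.0f);
    gain_cpu(host.data(), n, 0.5f);           // sequential pass on the CPU

    float* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaMemcpy(d_buf, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    gain_gpu<<<(n + 255) / 256, 256>>>(d_buf, n, 0.5f);  // parallel pass
    cudaDeviceSynchronize();
    cudaFree(d_buf);
    return 0;
}
```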
PM: So has GPU Audio come up with some real solutions for handling the latency issue? I realize it’s likely proprietary, but can you shed some light on the basic approach?
BC: Latency is obviously a concern when it comes to processing audio. The foundation of GPU Audio’s patented technology is built around a device called the Scheduler. Think of it as a lightning-fast multitasker that assigns data to be processed and delivered in near real-time, offering low latency while processing large track counts with effects.
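(GPU Audio’s Scheduler is patented and proprietary, so the sketch below is purely hypothetical and illustrates only the general idea: using CUDA streams, many track buffers can be queued for processing at once, letting the device overlap their work rather than finishing one track before starting the next. Track count, block size, and the stand-in effect are my own assumptions.)

```cuda
#include <cuda_runtime.h>

// Stand-in "effect": in a real engine this would be an EQ, reverb, etc.
__global__ void process_track(float* buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= 0.9f;
}

int main() {
    const int kTracks = 64;       // hypothetical session size
    const int kBlock  = 512;      // samples per audio block (illustrative)
    float* d_bufs[kTracks];
    cudaStream_t streams[kTracks];

    for (int t = 0; t < kTracks; ++t) {
        cudaMalloc(&d_bufs[t], kBlock * sizeof(float));
        cudaStreamCreate(&streams[t]);
    }

    // Queue every track's block on its own stream; the device is free to
    // overlap them instead of processing the session track by track.
    for (int t = 0; t < kTracks; ++t)
        process_track<<<1, kBlock, 0, streams[t]>>>(d_bufs[t], kBlock);

    cudaDeviceSynchronize();      // wait for all tracks before delivery

    for (int t = 0; t < kTracks; ++t) {
        cudaStreamDestroy(streams[t]);
        cudaFree(d_bufs[t]);
    }
    return 0;
}
```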
PM: I understand your business model has shifted from developer to a sort of consulting partnership approach: connecting with companies that want to use GPU Audio’s methods to shift to GPU-based architecture. Other than Audio Modeling, are there other partnerships on the horizon?
GPU AUDIO PARTNERSHIPS
BC: At the time of this interview, the partnerships with Vienna Symphonic Library and Audio Modeling have been publicly announced. However, there are additional partnerships that, regrettably, I cannot currently reveal. We intend to unveil some of these new partnerships during the NAMM Show in January 2024.
PM: I can see where the extra computing power needed for convolution reverbs, such as those in the Vienna Symphonic Library, would be useful. Do you envision composers who write full orchestral scores shifting away from sample-based instruments to physical modeling anytime soon?
BC: This is hard to say. It’s worth noting that a composer might have made a significant investment in their sample library, which could discourage them from changing their workflow. Nevertheless, the allure of physical modeling, whether it’s the extensive performance control it offers or its minimal storage requirements, is undeniably attractive to contemporary composers and producers.
PM: Is the GPU approach also feasible or practical for hardware devices?
BC: Yes. GPU Audio’s technology will work with embedded GPUs. This opens up the possibility for smartphones, tablets, wearables, automobiles, and more. As a musician, I get particularly excited thinking about this tech integrated with guitars, pedals, synths, and mixers. The convergence of audio processing capabilities with compact and portable devices has the potential to reshape how musicians interact with and integrate technology into their musical workflows.
PM: I can’t help wondering if AI will somehow be part of the GPU/Physical Modeling paradigm. Can you offer any insights on that?
BC: I’ve been part of some exciting discussions about this topic, and I can confidently predict the emergence of groundbreaking products in the market, fueled by AI, in the coming years. It’s also important to note that AI processing is driven by GPUs — so I think it’s safe to say GPU Audio will play a role in driving these innovations in audio and music tech.
PM: I truly appreciate you sharing your thoughts on Waveinformer, Bill. Please stay in touch as things develop.