Ever wanted to hear a saxophone bark? Nvidia just made the ‘world’s most flexible sound machine’ that uses AI to blend music, voices and sounds


  • Nvidia has announced its new Fugatto generative AI audio tool
  • It can create and mix audio in all kinds of ways, but isn’t out yet
  • Fugatto promises to create unique sounds, audio mixes, speech, and more

Nvidia has announced a new generative AI audio tool called Fugatto, which it’s describing as the “world’s most flexible sound machine” – capable of producing all kinds of music, speech, and other audio, and even unique sounds that have never been heard before.

Fugatto, which is short for Foundational Generative Audio Transformer Opus 1, can work with text prompts and audio samples. You can simply describe what you want to hear, or get the AI model to modify or combine existing audio clips.

For example, you can have the sound of a train transform into a lush orchestral arrangement, or mix a banjo melody with the sounds of rainfall. You can hear the sound of a saxophone barking, or a flute meowing, just by typing in a prompt.

Fugatto can also isolate vocals from tracks, and change the vocal delivery style, as well as generate speech from scratch. Feed in an existing melody, and you can have it played on whatever instrument you like, in any kind of style.

The bad news – it’s not available yet

So how can you try out this impressive new AI technology? You can’t, for the time being: you’ll have to make do with Nvidia’s promo video and a website of samples. There’s no word yet on when Fugatto will be available for public testing.

Some of the samples published by Nvidia include a female voice barking, a factory machine screaming, a typewriter whispering, and a cello shouting in anger – demonstrating the wide variety of audio effects that are possible.

Nvidia has also demonstrated how the AI engine is able to produce spoken word clips, which can then be delivered with a range of different emotions (from angry to happy) and even with different accents applied.

“We wanted to create a model that understands and generates sound like humans do,” says Nvidia’s Rafael Valle, one of the Fugatto team. “Fugatto is our first step toward a future where unsupervised multitask learning in audio synthesis and transformation emerges from data and model scale.”
