Google built a musical instrument that uses A.I. -- and released the plans so you can make your own

By Awadh Jamal (Ajakai)
Google on Tuesday revealed a synthesizer that uses artificial intelligence to create unusual new sounds from recordings of real instruments. But there's a catch: you won't be able to buy it.

Despite Google's recent hardware push under executive Rick Osterloh, this thing isn't like the Pixel phone or the Home speaker, which bring in revenue for Google. It's a research project. It came about because researchers at Google wanted to see what would happen if they built a dedicated hardware version of a software synthesizer they had previously developed. They're publishing the hardware designs and the underlying software on GitHub so people can assemble their own versions.

Nor is the project a sign that Google intends to push into the music gear business alongside companies like Korg and Roland. Really, Google is just showing what it can do with the AI software it has developed.


Instead, it shows what's technically possible -- and that other companies aren't pushing the envelope in this way. It also shows that AI isn't just a frightening, out-of-control technology that steals jobs; it can also support the human creative process.

"In the '60s your thing might have been having a soldering iron, and now we're saying that we can do something with machine learning that's equally hacky," Google senior staff research scientist Doug Eck told CNBC during a demonstration of the project at Google headquarters last week.

Part of Google Brain's AI research


At the heart of the software synthesizer is an AI system called NSynth -- the N stands for neural, as in neural network -- which is trained on hundreds of thousands of short recordings of single musical notes played on different instruments. That training data makes it possible for the software to generate the sounds of other notes, longer notes, and even notes that blend the sounds of multiple instruments.

NSynth came out last spring. It's one of the foundational technologies from Magenta, an effort from the Google Brain AI research group to push the envelope in the area of creating art and music with AI. Other Magenta projects include AI Duet, which lets you play piano alongside a computer that riffs on what you play, and sketch-rnn, an AI model for drawing pictures based on human drawings.

In developing NSynth, Google Brain worked together with DeepMind, an AI research firm under Google's parent company, Alphabet. The researchers even released a data set of musical notes and an open-source version of the NSynth software for others to work with.

With the virtual synth, you choose a pair of instruments and then move a slider toward one or the other to create a combination of the two. Then, using the keys on your computer keyboard, you can play notes in that unusual hybrid sound.
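
In practice, that slider is just an interpolation weight between the two instruments' learned representations. The sketch below is a minimal illustration of that idea, not the actual Magenta code: the encode and decode functions are trivial placeholders standing in for NSynth's neural encoder and decoder, and all names and shapes are made up.

```python
import numpy as np

# Trivial placeholders standing in for NSynth's neural encoder and decoder.
# In the real project these are deep networks; here they just downsample and
# upsample so the interpolation idea can run end to end.
def encode(note_audio: np.ndarray) -> np.ndarray:
    return note_audio.reshape(-1, 64).mean(axis=1)   # fake "embedding"

def decode(embedding: np.ndarray) -> np.ndarray:
    return np.repeat(embedding, 64)                  # fake audio reconstruction

def blend_instruments(note_a: np.ndarray, note_b: np.ndarray,
                      slider: float) -> np.ndarray:
    """slider=0.0 sounds like instrument A, slider=1.0 like instrument B;
    values in between interpolate the two embeddings, which is where the
    hybrid timbres come from."""
    mixed = (1.0 - slider) * encode(note_a) + slider * encode(note_b)
    return decode(mixed)

# Example: two fake notes of 16,384 samples each, blended halfway.
flute = np.random.randn(16384)
organ = np.random.randn(16384)
hybrid = blend_instruments(flute, organ, slider=0.5)
```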

It's interesting, but its capabilities are limited.

The hardware synth prototype, which goes by the name NSynth Super, provides several physical knobs to turn and a slick touch display to drag a finger across, making it more accommodating for live performers who are used to tweaking hardware boxes on the fly. There are controls for adjusting how much of a note to play, along with qualities known as attack, decay, sustain and release. And it lets you play sounds that combine four instruments at once, not two. It pushes the limits of what's possible with NSynth.
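
The arithmetic behind those controls is simple even if the sound model is not. Below is a rough sketch, assuming a standard ADSR amplitude envelope and a bilinear mix of four instrument embeddings driven by a touch position -- an illustration of the concepts, not the NSynth Super's actual firmware.

```python
import numpy as np

def bilinear_mix(corners, x: float, y: float) -> np.ndarray:
    """Blend four corner embeddings according to a touch position (x, y) in [0, 1]^2.

    One plausible way a touch surface could mix four instruments at once;
    the real device's mapping may differ.
    """
    a, b, c, d = corners  # e.g. top-left, top-right, bottom-left, bottom-right
    top = (1 - x) * a + x * b
    bottom = (1 - x) * c + x * d
    return (1 - y) * top + y * bottom

def adsr_envelope(n_samples: int, sample_rate: int, attack: float,
                  decay: float, sustain: float, release: float) -> np.ndarray:
    """Standard attack/decay/sustain/release amplitude envelope.

    attack, decay and release are durations in seconds; sustain is a level
    between 0 and 1 held for whatever time remains.
    """
    a = int(attack * sample_rate)
    d = int(decay * sample_rate)
    r = int(release * sample_rate)
    s = max(n_samples - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack: ramp up
        np.linspace(1.0, sustain, d, endpoint=False),  # decay: fall to sustain
        np.full(s, sustain),                           # sustain: hold
        np.linspace(sustain, 0.0, r),                  # release: fade out
    ])
    return env[:n_samples]

# Example: mix four made-up 256-dimensional "instrument" embeddings at the
# center of the touch surface, then shape a one-second note at 16 kHz.
corners = [np.random.randn(256) for _ in range(4)]
center_blend = bilinear_mix(corners, x=0.5, y=0.5)

note = np.random.randn(16000)
shaped = note * adsr_envelope(len(note), 16000,
                              attack=0.05, decay=0.1, sustain=0.7, release=0.2)
```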