Generating realistic audio requires modeling information represented at different scales. For example, just as music builds complex musical phrases from individual notes, speech combines temporally local structures, such as phonemes or syllables, into words and sentences. Creating well-structured and coherent audio sequences at all these scales is a challenge that has been addressed by coupling audio with transcriptions that can guide the generative process, be it text transcripts for speech synthesis or MIDI representations for piano. However, this approach breaks when trying to model untranscribed aspects of audio, such as speaker characteristics necessary to help people with speech impairments recover their voice, or stylistic components of a piano performance.
In “AudioLM: a Language Modeling Approach to Audio Generation”, we propose a new framework for audio generation that learns to generate realistic speech and piano music by listening to audio only. Audio generated by AudioLM demonstrates long-term consistency (e.g., syntax in speech, melody in music) and high fidelity, outperforming previous systems and pushing the frontiers of audio generation with applications in speech synthesis or computer-assisted music. Following our AI Principles, we have also developed a model to identify synthetic audio generated by AudioLM.
From Text to Audio Language Models
In recent years, language models trained on very large text corpora have demonstrated their exceptional generative abilities, from open-ended dialogue to machine translation and even commonsense reasoning. They have further shown their ability to model signals other than text, such as natural images. The key intuition behind AudioLM is to leverage such advances in language modeling to generate audio without being trained on annotated data.
However, some challenges must be addressed when moving from text to audio language models. First, one must cope with the fact that the data rate for audio is significantly higher, leading to much longer sequences: whereas a written sentence can be represented by a few dozen characters, its audio waveform typically contains hundreds of thousands of values. Second, there is a one-to-many relationship between text and audio. This means that the same sentence can be rendered by different speakers with different speaking styles, emotional content and recording conditions.
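To make this length gap concrete, here is a back-of-the-envelope comparison; the sampling rate and sentence duration below are assumed values for illustration, not figures from the paper.

```python
# Rough sequence-length comparison between a transcript and its waveform.
# The sampling rate and duration are assumptions chosen for illustration.
sample_rate_hz = 16_000            # a common sampling rate for speech
sentence_duration_s = 10           # an assumed sentence length
waveform_samples = sample_rate_hz * sentence_duration_s
transcript_chars = 60              # "a few dozen characters"

print(waveform_samples)                      # 160,000 values
print(waveform_samples // transcript_chars)  # ~2,600x longer than the text
```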
To overcome both challenges, AudioLM leverages two kinds of audio tokens. First, semantic tokens are extracted from w2v-BERT, a self-supervised audio model. These tokens capture both local dependencies (e.g., phonetics in speech, local melody in piano music) and global long-term structure (e.g., language syntax and semantic content in speech, harmony and rhythm in piano music), while heavily downsampling the audio signal to allow for modeling long sequences.
However, audio reconstructed from these tokens demonstrates poor fidelity. To overcome this limitation, in addition to semantic tokens, we rely on acoustic tokens produced by a SoundStream neural codec, which capture the details of the audio waveform (such as speaker characteristics or recording conditions) and allow for high-quality synthesis. Training a system to generate both semantic and acoustic tokens leads simultaneously to high audio quality and long-term consistency.
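As a rough illustration of the semantic tokenizer, the sketch below quantizes frame-level self-supervised embeddings against a k-means codebook to obtain discrete token ids; the function names, array shapes and codebook size are assumptions for illustration, not a released API.

```python
# Minimal sketch of semantic tokenization: frame-level embeddings
# (e.g., from w2v-BERT) are mapped to their nearest k-means centroid.
import numpy as np

def semantic_tokens(features: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """features: (num_frames, dim); centroids: (num_clusters, dim).
    Returns a (num_frames,) array of integer token ids."""
    # ||f - c||^2 = ||f||^2 + ||c||^2 - 2 f.c, computed without a huge
    # broadcasted intermediate tensor.
    d = ((features ** 2).sum(1, keepdims=True)
         + (centroids ** 2).sum(1)
         - 2.0 * features @ centroids.T)
    return d.argmin(axis=1)

# Example: 100 embedding frames against an assumed 512-entry codebook.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 256)).astype(np.float32)
codebook = rng.standard_normal((512, 256)).astype(np.float32)
tokens = semantic_tokens(feats, codebook)  # e.g., array([313,  42, ...])
```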
Training an Audio-Only Language Model
AudioLM is a pure audio model trained without any text or symbolic representation of music. AudioLM models an audio sequence hierarchically, from semantic tokens up to fine acoustic tokens, by chaining several Transformer models, one for each stage. Each stage is trained for next-token prediction based on past tokens, as one would train a text language model. The first stage performs this task on semantic tokens to model the high-level structure of the audio sequence.
In the second stage, we concatenate the entire semantic token sequence, along with the past coarse acoustic tokens, and feed both as conditioning to the coarse acoustic model, which then predicts the future tokens. This step models acoustic properties such as speaker characteristics in speech or timbre in music.
In the third stage, we process the coarse acoustic tokens with the fine acoustic model, which adds even more detail to the final audio. Finally, we feed the acoustic tokens to the SoundStream decoder to reconstruct a waveform.
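The sketch below mirrors this three-stage chaining at the level of token sequences. Each `LanguageModel` stands in for a decoder-only Transformer; the flat token layout, the call interface and the sequence handling are simplifying assumptions rather than the actual implementation.

```python
# Schematic three-stage generation: semantic tokens first, then coarse
# acoustic tokens conditioned on them, then fine acoustic tokens.
from typing import Callable, List

TokenSeq = List[int]
# (conditioning prefix, number of new tokens) -> newly generated tokens
LanguageModel = Callable[[TokenSeq, int], TokenSeq]

def generate_audio_tokens(
    semantic_lm: LanguageModel,
    coarse_lm: LanguageModel,
    fine_lm: LanguageModel,
    semantic_prompt: TokenSeq,
    coarse_prompt: TokenSeq,
    n_semantic: int,
    n_coarse: int,
    n_fine: int,
) -> TokenSeq:
    # Stage 1: extend the semantic tokens (high-level structure).
    semantic = semantic_prompt + semantic_lm(semantic_prompt, n_semantic)

    # Stage 2: the full semantic sequence plus the past coarse acoustic
    # tokens condition the prediction of future coarse acoustic tokens
    # (speaker characteristics, timbre, ...).
    coarse = coarse_prompt + coarse_lm(semantic + coarse_prompt, n_coarse)

    # Stage 3: refine the coarse acoustic tokens with fine acoustic
    # tokens; the combined result goes to the SoundStream decoder.
    fine = fine_lm(coarse, n_fine)
    return coarse + fine
```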
After training, one can condition AudioLM on a few seconds of audio, which enables it to generate consistent continuations. To showcase the general applicability of the AudioLM framework, we consider two tasks from different audio domains (a minimal prompting sketch follows the list):
- Speech continuation, where the model is expected to retain the speaker characteristics, prosody and recording conditions of the prompt while producing new content that is syntactically correct and semantically consistent.
- Piano continuation, where the model is expected to generate piano music that is coherent with the prompt in terms of melody, harmony and rhythm.
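The snippet below shows how a prompt would flow through `generate_audio_tokens` from the previous sketch, with trivial stubs standing in for the trained Transformers; all token ids and lengths are made up for illustration.

```python
# Illustrative only: constant-output stubs replace the trained models so
# the pipeline above can be exercised end to end.
def stub_lm(prefix, n):
    return [0] * n  # a real model would sample n tokens given the prefix

# Token ids as if obtained by tokenizing a few seconds of prompt audio.
semantic_prompt = [12, 7, 7, 93]   # made-up semantic token ids
coarse_prompt = [5, 61, 61]        # made-up coarse acoustic token ids

acoustic = generate_audio_tokens(
    stub_lm, stub_lm, stub_lm,
    semantic_prompt, coarse_prompt,
    n_semantic=8, n_coarse=8, n_fine=8)
# `acoustic` would then be fed to the SoundStream decoder for a waveform.
```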
In the video below, you can listen to examples where the model is asked to continue either speech or music and generates new content that was not seen during training. As you listen, note that everything you hear after the gray vertical line was generated by AudioLM and that the model has never seen any text or musical transcription, but rather just learned from raw audio. We release more samples on this webpage.
To validate our results, we asked human raters to listen to short audio clips and decide whether they heard an original recording of human speech or a synthetic continuation generated by AudioLM. Based on the ratings collected, we observed a 51.2% success rate, which is not statistically significantly different from the 50% success rate achieved when assigning labels at random. This means that speech generated by AudioLM is hard to distinguish from real speech for the average listener.
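One way to check such a claim is an exact binomial test against chance; the sample size below is an assumption for illustration, since the post does not state how many ratings were collected.

```python
# Test whether a 51.2% rater success rate differs from 50% chance.
# The number of ratings is assumed; the post does not report it.
from scipy.stats import binomtest

n_ratings = 1000                      # assumed sample size
n_correct = round(0.512 * n_ratings)  # 512 correct identifications
result = binomtest(n_correct, n_ratings, p=0.5)
print(result.pvalue)  # well above 0.05 here: consistent with chance
```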
Our work on AudioLM is for research purposes and we have no plans to release it more broadly at this time. In alignment with our AI Principles, we sought to understand and mitigate the possibility that people could misinterpret the short speech samples synthesized by AudioLM as real speech. For this purpose, we trained a classifier that can detect synthetic speech generated by AudioLM with very high accuracy (98.6%). This shows that despite being (almost) indistinguishable to some listeners, continuations generated by AudioLM are very easy to detect with a simple audio classifier. This is a crucial first step to help protect against the potential misuse of AudioLM, with future efforts potentially exploring technologies such as audio “watermarking”.
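The post does not describe the detector's architecture, so the sketch below is only a generic stand-in for the idea: a lightweight classifier trained on spectral features to separate real from synthetic clips. The feature choice, model and toy data are all assumptions, not the detector described above.

```python
# Generic real-vs-synthetic audio classifier sketch, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(waveform: np.ndarray, frame: int = 512) -> np.ndarray:
    """Mean log-power spectrum over fixed-size frames, one vector per clip."""
    n = (len(waveform) // frame) * frame
    frames = waveform[:n].reshape(-1, frame)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-8).mean(axis=0)

# Toy data: random "clips" with random real (0) / synthetic (1) labels.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(16_000) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([spectral_features(c) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))  # with real data, held-out accuracy is what matters
```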
We introduce AudioLM, a language modeling approach to audio generation that provides both long-term coherence and high audio quality. Experiments on speech generation show not only that AudioLM can generate syntactically and semantically coherent speech without any text, but also that continuations produced by the model are almost indistinguishable from real speech by humans. Moreover, AudioLM goes well beyond speech and can model arbitrary audio signals such as piano music. This encourages future extensions to other types of audio (e.g., multilingual speech, polyphonic music, and audio events) as well as integrating AudioLM into an encoder-decoder framework for conditioned tasks such as text-to-speech or speech-to-speech translation.
The work described here was authored by Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi and Neil Zeghidour. We are grateful for all discussions and feedback on this work that we received from our colleagues at Google.