pip install TTS

Released: Dec 12, 2023

Deep learning for Text to Speech by Coqui.

Project links

  • Discussions
  • Documentation
  • Open issues

View statistics for this project via Libraries.io, or by using our public dataset on Google BigQuery.

License: Mozilla Public License 2.0 (MPL 2.0) (MPL-2.0)

Author: Eren Gölge

Requires: Python >=3.9.0, <3.12

Maintainers

  • coqui

Classifiers

  • Intended Audience :: Science/Research
  • License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
  • Operating System :: POSIX :: Linux
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Topic :: Multimedia :: Sound/Audio
  • Topic :: Multimedia :: Sound/Audio :: Speech
  • Topic :: Scientific/Engineering :: Artificial Intelligence
  • Topic :: Software Development
  • Topic :: Software Development :: Libraries :: Python Modules

Project description

🐸Coqui.ai News

  • 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
  • 📣 ⓍTTS fine-tuning code is out. Check the example recipes.
  • 📣 ⓍTTS can now stream with <200ms latency.
  • 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released. Blog Post, Demo, Docs
  • 📣 🐶Bark is now available for inference with unconstrained voice cloning. Docs
  • 📣 You can use ~1100 Fairseq models with 🐸TTS.
  • 📣 🐸TTS now supports 🐢Tortoise with faster inference. Docs
  • 📣 Voice generation with prompts - Prompt to Voice - is live on Coqui Studio! Blog Post
  • 📣 Voice generation with fusion - Voice fusion - is live on Coqui Studio.
  • 📣 Voice cloning is live on Coqui Studio.


🐸TTS is a library for advanced Text-to-Speech generation.

🚀 Pretrained models in +1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.


💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

🔗 Links and Resources

🥇 TTS Performance


Underlined "TTS*" and "Judy*" are internal 🐸TTS models that are not released open-source. They are here to show the potential. Models prefixed with a dot (.Jofish, .Abe, and .Janice) are real human voices.

Features

  • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
  • Speaker Encoder to compute speaker embeddings efficiently.
  • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on the terminal and Tensorboard.
  • Support for Multi-speaker TTS.
  • Efficient, flexible, lightweight but feature-complete Trainer API.
  • Released and ready-to-use models.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Utilities to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Model Implementations

Spectrogram models.

  • Tacotron: paper
  • Tacotron2: paper
  • Glow-TTS: paper
  • Speedy-Speech: paper
  • Align-TTS: paper
  • FastPitch: paper
  • FastSpeech: paper
  • FastSpeech2: paper
  • SC-GlowTTS: paper
  • Capacitron: paper
  • OverFlow: paper
  • Neural HMM TTS: paper
  • Delightful TTS: paper

End-to-End Models

  • VITS: paper
  • 🐸 YourTTS: paper
  • 🐢 Tortoise: orig. repo
  • 🐶 Bark: orig. repo

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog
  • Dynamic Convolutional Attention: paper
  • Alignment Network: paper

Speaker Encoder

  • GE2E: paper
  • Angular Loss: paper

Vocoders

  • MelGAN: paper
  • MultiBandMelGAN: paper
  • ParallelWaveGAN: paper
  • GAN-TTS discriminators: paper
  • WaveRNN: origin
  • WaveGrad: paper
  • HiFiGAN: paper
  • UnivNet: paper

Voice Conversion

  • FreeVC: paper

You can also help us implement more models.

Installation

🐸TTS is tested on Ubuntu 18.04 with Python >= 3.9, < 3.12.

If you are only interested in synthesizing speech with the released 🐸TTS models, installing from PyPI is the easiest option.
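For example:

```shell
pip install TTS
```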

If you plan to code or train models, clone 🐸TTS and install it locally.
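A sketch of a local, editable install (the extras names are assumptions from the project's packaging):

```shell
git clone https://github.com/coqui-ai/TTS
cd TTS
pip install -e .[all,dev,notebooks]  # pick the relevant extras
```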

If you are on Ubuntu (Debian), you can also run the following commands for installation.
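The Make-based route might look like this (targets assumed from the project's Makefile):

```shell
make system-deps  # intended for Ubuntu (Debian)
make install
```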

If you are on Windows, 👑@GuyPaddock wrote installation instructions here.

Docker Image

You can also try TTS without installing it by using the Docker image. Simply run the following command to start a container.

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
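A sketch of the Docker workflow (the image name, port, and server commands are assumptions from the published docs):

```shell
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
# Then, inside the container:
python3 TTS/server/server.py --list_models                         # list available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # start the server
```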

Synthesizing speech by 🐸TTS

🐍 Python API

You can run a multi-speaker and multi-lingual model, run a single-speaker model, or perform voice conversion, as shown below.
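A minimal sketch of the Python API (the model name and file paths are placeholders):

```python
import torch
from TTS.api import TTS

# Use a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a multi-lingual, multi-speaker model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Clone the voice in speaker_wav and synthesize English speech.
tts.tts_to_file(
    text="Hello world!",
    speaker_wav="my/cloning/audio.wav",
    language="en",
    file_path="output.wav",
)
```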

Converting the voice in source_wav to the voice of target_wav

Example voice cloning together with the voice conversion model.

This way, you can clone voices by using any model in 🐸TTS.
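Sketches of both flows described above (model names are assumptions; paths are placeholders):

```python
from TTS.api import TTS

# Voice conversion: re-voice source_wav with the voice of target_wav.
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24",
          progress_bar=False)
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="output.wav",
)

# Voice cloning via TTS + voice conversion: the output of any TTS model
# is converted to the voice in speaker_wav.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav",
)
```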

Example text to speech using Fairseq models in ~1100 languages 🤯.

For Fairseq models, use the following name format: tts_models/<lang-iso_code>/fairseq/vits. You can find the language ISO codes here and learn about the Fairseq models here.
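Since the Fairseq checkpoints all share one naming scheme, the model id can be built from the ISO code alone. A small sketch (the fairseq_model_name helper is hypothetical; the id format comes from the text above):

```python
def fairseq_model_name(lang_iso: str) -> str:
    """Build the 🐸TTS model id for a Fairseq VITS checkpoint
    from an ISO 639-3 language code."""
    return f"tts_models/{lang_iso}/fairseq/vits"

print(fairseq_model_name("deu"))  # tts_models/deu/fairseq/vits

# The id is then used like any other model name, e.g.:
#   from TTS.api import TTS
#   TTS(fairseq_model_name("deu")).tts_to_file(
#       "Hallo Welt!", file_path="output.wav")
```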

Command-line tts

Synthesize speech on command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any models, it uses the LJSpeech-based English model.

Single Speaker Models

List provided models:
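A sketch, assuming the current tts CLI:

```shell
tts --list_models
```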

Get model info (for both tts_models and vocoder_models):

Query by type/name: --model_info_by_name uses the name as it appears in the output of --list_models.

For example:

Query by type/idx: --model_info_by_idx uses the corresponding index from --list_models.

Query model info by full name:
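Sketches of the query commands (flag names are assumptions from the tts CLI):

```shell
# By type/name, as printed by --list_models:
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2

# By type/idx:
tts --model_info_by_idx tts_models/3

# By full name:
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```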

Run TTS with default models:

Run TTS and pipe out the generated TTS wav file data:
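For example (flags assumed from the tts CLI; aplay is one possible audio sink):

```shell
# Default model:
tts --text "Text for TTS" --out_path output/path/speech.wav

# Pipe the generated wav data to a player:
tts --text "$text" --pipe_out --out_path output/path/speech.wav | aplay
```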

Run a TTS model with its default vocoder model:

Run with specific TTS and vocoder models from the list:
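Sketches of both runs (model names are placeholders or assumptions):

```shell
# TTS model with its default vocoder:
tts --text "Text for TTS" \
    --model_name "<model_type>/<language>/<dataset>/<model_name>" \
    --out_path output/path/speech.wav

# Specific TTS and vocoder models:
tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/glow-tts" \
    --vocoder_name "vocoder_models/en/ljspeech/univnet" \
    --out_path output/path/speech.wav
```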

Run your own TTS model (Using Griffin-Lim Vocoder):

Run your own TTS and Vocoder models:
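Sketches of both runs (paths are placeholders):

```shell
# Griffin-Lim vocoder (no separate vocoder model needed):
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav

# With your own vocoder model:
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --vocoder_path path/to/vocoder.pth \
    --vocoder_config_path path/to/vocoder_config.json \
    --out_path output/path/speech.wav
```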

Multi-speaker Models

List the available speakers and choose a <speaker_id> among them:

Run the multi-speaker TTS model with the target speaker ID:
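Sketches (flag names are assumptions from the tts CLI):

```shell
# List the available speakers:
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs

# Synthesize with a target speaker:
tts --text "Text." \
    --model_name "<language>/<dataset>/<model_name>" \
    --speaker_idx <speaker_id> \
    --out_path output/path/speech.wav
```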

Run your own multi-speaker TTS model:
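A sketch (paths are placeholders):

```shell
tts --text "Text." \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --speakers_file_path path/to/speaker.json \
    --speaker_idx <speaker_id> \
    --out_path output/path/speech.wav
```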

Voice Conversion Models
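A sketch of a voice-conversion run (flag names are assumptions from the tts CLI):

```shell
tts --model_name "<language>/<dataset>/<model_name>" \
    --source_wav <path/to/speaker/wav> \
    --target_wav <path/to/reference/wav> \
    --out_path output/path/speech.wav
```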

Release history

Releases run from Jan 25, 2021 through the latest, 0.22.0, on Dec 12, 2023. Versions 0.0.9, 0.0.9.1, and 0.0.9.2, along with the 0.0.9a9 and 0.0.9a10 pre-releases, were yanked.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Uploaded Dec 12, 2023 Source

Built Distributions

Uploaded Dec 12, 2023 CPython 3.11

Uploaded Dec 12, 2023 CPython 3.10

Uploaded Dec 12, 2023 CPython 3.9

Hashes are provided for TTS-0.22.0.tar.gz and for the built wheels: tts-0.22.0-cp311-cp311-manylinux1_x86_64.whl, tts-0.22.0-cp310-cp310-manylinux1_x86_64.whl, and tts-0.22.0-cp39-cp39-manylinux1_x86_64.whl.

Speech-to-Text Client Libraries

This page shows how to get started with the Cloud Client Libraries for the Speech-to-Text API. Client libraries make it easier to access Google Cloud APIs from a supported language. Although you can use Google Cloud APIs directly by making raw requests to the server, client libraries provide simplifications that significantly reduce the amount of code you need to write.

Read more about the Cloud Client Libraries and the older Google API Client Libraries in Client libraries explained .

Install the client library

If you are using .NET Core command-line interface tools to install your dependencies, run the following command:
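For example (the package id Google.Cloud.Speech.V1 is an assumption from the current client-library naming):

```shell
dotnet add package Google.Cloud.Speech.V1
```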

For more information, see Setting Up a C# Development Environment .

For more information, see Setting Up a Go Development Environment .

If you are using Maven , add the following to your pom.xml file. For more information about BOMs, see The Google Cloud Platform Libraries BOM .

If you are using Gradle , add the following to your dependencies:

If you are using sbt , add the following to your dependencies:

If you're using Visual Studio Code, IntelliJ, or Eclipse, you can add client libraries to your project using the following IDE plugins:

  • Cloud Code for VS Code
  • Cloud Code for IntelliJ
  • Cloud Tools for Eclipse

The plugins provide additional functionality, such as key management for service accounts. Refer to each plugin's documentation for details.

For more information, see Setting Up a Java Development Environment .

For more information, see Setting Up a Node.js Development Environment .

For more information, see Using PHP on Google Cloud .

For more information, see Setting Up a Python Development Environment .

For more information, see Setting Up a Ruby Development Environment .

Set up authentication

For production environments, the way you set up ADC depends on the service and context. For more information, see Set up Application Default Credentials .

For a local development environment, you can set up ADC with the credentials that are associated with your Google Account:

Install and initialize the gcloud CLI .

When you initialize the gcloud CLI, be sure to specify a Google Cloud project in which you have permission to access the resources your application needs.

Create your credential file:
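The credential file is created with the gcloud CLI:

```shell
gcloud auth application-default login
```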

A sign-in screen appears. After you sign in, your credentials are stored in the local credential file used by ADC .

Use the client library

The following example shows how to use the client library.
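As a sketch, a minimal synchronous transcription in Python might look like this (the sample bucket URI and configuration values are assumptions following the public quickstart):

```python
# Minimal synchronous transcription with the Cloud Speech-to-Text
# Python client library (google-cloud-speech).
from google.cloud import speech

client = speech.SpeechClient()

# Audio hosted in Cloud Storage; local bytes could be passed as `content` instead.
audio = speech.RecognitionAudio(
    uri="gs://cloud-samples-data/speech/brooklyn_bridge.raw"
)
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Print the most likely transcript for each utterance.
    print(result.alternatives[0].transcript)
```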

Additional resources

The following list contains links to more resources related to the client library for C#:

  • API reference
  • Client libraries best practices
  • Issue tracker
  • google-cloud-speech on Stack Overflow
  • Source code

The following list contains links to more resources related to the client library for Go:

The following list contains links to more resources related to the client library for Java:

The following list contains links to more resources related to the client library for Node.js:

The following list contains links to more resources related to the client library for PHP:

The following list contains links to more resources related to the client library for Python:

The following list contains links to more resources related to the client library for Ruby:

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License , and code samples are licensed under the Apache 2.0 License . For details, see the Google Developers Site Policies . Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2024-03-27 UTC.
