Speech Recognition & Synthesis


Speech Recognition & Synthesis, formerly known as Speech Services,[3] is a screen reader application developed by Google for its Android operating system. It powers applications to read the text on the screen aloud, with support for many languages. Text-to-Speech may be used by apps such as Google Play Books for reading books aloud, by Google Translate for reading translations aloud so users can hear how words are pronounced, by Google TalkBack and other spoken-feedback accessibility applications, and by third-party apps. Users must install voice data for each language.
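
Apps that speak text typically do not address the engine directly; they go through Android's standard android.speech.tts.TextToSpeech framework, which routes requests to the installed engine. The following Kotlin sketch shows minimal use of that framework; the Speaker helper class, locale, and utterance ID are illustrative assumptions, not part of the platform or of this article.

    import android.content.Context
    import android.speech.tts.TextToSpeech
    import java.util.Locale

    // Minimal sketch of the standard Android TextToSpeech framework; the
    // Speaker class name, locale, and utterance ID are illustrative.
    class Speaker(context: Context) : TextToSpeech.OnInitListener {
        private val tts = TextToSpeech(context, this)
        private var ready = false

        override fun onInit(status: Int) {
            if (status == TextToSpeech.SUCCESS) {
                // setLanguage reports LANG_MISSING_DATA when the voice data
                // for the requested locale has not been installed.
                val result = tts.setLanguage(Locale.US)
                ready = result != TextToSpeech.LANG_MISSING_DATA &&
                        result != TextToSpeech.LANG_NOT_SUPPORTED
            }
        }

        fun speak(text: String) {
            if (ready) tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "example-utterance")
        }

        fun release() = tts.shutdown()
    }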

Speech Recognition & Synthesis
Developer(s): Google
Initial release: 10 October 2013
Stable release: 20250331.02/p0 (Build 742452802) / 15 April 2025[1][2]
Operating system: Android
Type: Screen reader

Supported languages

  • Afrikaans (South Africa)
  • Albanian (Albania)
  • Amharic (Ethiopia)
  • Arabic (Saudi Arabia)
  • Assamese (India)
  • Basque (Spain)
  • Bengali (Bangladesh)
  • Bengali (India)
  • Bodo (India)
  • Bosnian (Bosnia and Herzegovina)
  • Bulgarian (Bulgaria)
  • Burmese (Myanmar)
  • Cantonese (Hong Kong)
  • Catalan (Spain)
  • Chinese (China)
  • Chinese (Taiwan)
  • Croatian (Croatia)
  • Czech (Czech Republic)
  • Danish (Denmark)
  • Dogri (India)
  • Dutch (Belgium)
  • Dutch (Netherlands)
  • English (Australia)
  • English (Nigeria)
  • English (India)
  • English (United Kingdom)
  • English (United States)
  • Estonian (Estonia)
  • Filipino (Philippines)
  • Finnish (Finland)
  • French (Canada)
  • French (France)
  • Galician (Spain)
  • German (Germany)
  • Greek (Greece)
  • Gujarati (India)
  • Hausa (Nigeria)
  • Hebrew (Israel)
  • Hindi (India)
  • Hungarian (Hungary)
  • Icelandic (Iceland)
  • Indonesian (Indonesia)
  • Italian (Italy)
  • Japanese (Japan)
  • Javanese (Indonesia)
  • Kannada (India)
  • Kashmiri (India)
  • Khmer (Cambodia)
  • Konkani (India)
  • Korean (South Korea)
  • Latin (Vatican City)
  • Latvian (Latvia)
  • Lithuanian (Lithuania)
  • Maithili (India)
  • Malay (Malaysia)
  • Malayalam (India)
  • Manipuri (India)
  • Marathi (India)
  • Nepali (Nepal)
  • Norwegian (Norway)
  • Odia (India)
  • Polish (Poland)
  • Portuguese (Brazil)
  • Portuguese (Portugal)
  • Punjabi (India)
  • Romanian (Romania)
  • Russian (Russia)
  • Sanskrit (India)
  • Santali (India)
  • Serbian (Serbia)
  • Sindhi (India)
  • Sinhala (Sri Lanka)
  • Slovak (Slovakia)
  • Slovenian (Slovenia)
  • Spanish (Spain)
  • Spanish (United States)
  • Sundanese (Indonesia)
  • Swahili (Kenya)
  • Swedish (Sweden)
  • Tamil (India)
  • Telugu (India)
  • Thai (Thailand)
  • Turkish (Turkey)
  • Ukrainian (Ukraine)
  • Urdu (Pakistan)
  • Urdu (India)
  • Vietnamese (Vietnam)
  • Welsh (United Kingdom)

History


Some app developers have adapted their Android Auto apps to include Text-to-Speech; Hyundai did so in 2015.[4] Apps such as textPlus and WhatsApp use Text-to-Speech to read notifications aloud and provide voice-reply functionality.

Google Cloud Text-to-Speech is powered by WaveNet,[5] software created by the UK-based AI company DeepMind, which Google acquired in 2014.[6] Google positions the service to distinguish it from competing offerings by Amazon and Microsoft.[7]

Most voice synthesizers (including Apple's Siri) use concatenative synthesis,[5] in which a program stores individual phonemes and then pieces them together to form words and sentences. WaveNet synthesizes speech with human-like emphasis and inflection on syllables, phonemes, and words. Unlike most other text-to-speech systems, a WaveNet model creates raw audio waveforms from scratch. The model uses a neural network that has been trained using a large volume of speech samples. During training, the network extracts the underlying structure of the speech, such as which tones follow each other and what a realistic speech waveform looks like. When given a text input, the trained WaveNet model can generate the corresponding speech waveforms from scratch, one sample at a time, with up to 24,000 samples per second and smooth transitions between the individual sounds.[5]
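
WaveNet-based voices are exposed to developers through the Google Cloud Text-to-Speech API, where a voice is selected by name. The Kotlin sketch below uses the official Java client library for that API; the voice name "en-US-Wavenet-D" and the output path are illustrative assumptions, while the 24 kHz sample rate matches the output rate described above.

    import com.google.cloud.texttospeech.v1.AudioConfig
    import com.google.cloud.texttospeech.v1.AudioEncoding
    import com.google.cloud.texttospeech.v1.SynthesisInput
    import com.google.cloud.texttospeech.v1.TextToSpeechClient
    import com.google.cloud.texttospeech.v1.VoiceSelectionParams
    import java.nio.file.Files
    import java.nio.file.Paths

    fun main() {
        // Requires Google Cloud credentials configured in the environment.
        TextToSpeechClient.create().use { client ->
            val input = SynthesisInput.newBuilder()
                .setText("WaveNet generates raw audio one sample at a time.")
                .build()
            // WaveNet voices are selected by name; "en-US-Wavenet-D" is one example.
            val voice = VoiceSelectionParams.newBuilder()
                .setLanguageCode("en-US")
                .setName("en-US-Wavenet-D")
                .build()
            val audioConfig = AudioConfig.newBuilder()
                .setAudioEncoding(AudioEncoding.LINEAR16)
                .setSampleRateHertz(24000) // matches the 24 kHz output described above
                .build()
            val response = client.synthesizeSpeech(input, voice, audioConfig)
            // LINEAR16 responses include a WAV header, so the bytes can be written directly.
            Files.write(Paths.get("output.wav"), response.audioContent.toByteArray())
        }
    }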

The service was renamed Speech Recognition & Synthesis in 2023.[citation needed]

