In 1876 the telephone was invented, letting us carry sound over a wire; a year later the phonograph made it possible to record and play back sound. In 1952, Bell Labs built a speech recognition system that could recognize spoken digits, and in 1985 IBM demonstrated software that could recognize about 1,000 words. Recognition of sound has been advancing ever since. On the application side, voice assistants, automated answering, and voice input are now commonplace, thanks largely to third-party packages. In Python, libraries and services such as TensorFlow, Keras, Librosa, Kaldi, and various speech-to-text APIs make speech recognition and audio processing much easier.

This article builds a simple voice dialogue using gtts and speech_recognition. The idea is straightforward: convert speech to text, run it through some simple logic to produce a reply, then convert that reply back to speech. Let's get to it!
gtts
gtts converts text to speech. In some regions it has to be used behind a VPN, because it calls Google's servers.
For details, see the official gtts documentation:
Let's start with a short piece of code.
import os
from gtts import gTTS

# Convert a string to speech, save it as an mp3, then play it
def speak(audioString):
    print(audioString)
    tts = gTTS(text=audioString, lang='en')
    tts.save("audio.mp3")
    os.system("audio.mp3")  # opens the file with the system's default player

speak("Hi Runsen, what can I do for you?")
Run the code above and it generates an mp3 file; when it plays you will hear "Hi Runsen, what can I do for you?". The mp3 pops up and plays automatically.
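Note that os.system("audio.mp3") relies on the operating system's file association (it typically only works this way on Windows). If you want playback to behave the same on macOS or Linux, one option is the third-party playsound package. This is a minimal sketch under that assumption; playsound is not used in the original code.

from gtts import gTTS
from playsound import playsound  # assumption: pip install playsound

def speak(audioString):
    print(audioString)
    tts = gTTS(text=audioString, lang='en')
    tts.save("audio.mp3")
    playsound("audio.mp3")  # blocks until playback finishes, works cross-platform

speak("Hi Runsen, what can I do for you?")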
speech_recognition
speech_recognition is a library for performing speech recognition, with support for several engines and APIs, both online and offline.
See the official speech_recognition documentation for details.
Installing speech_recognition may fail (typically on the PyAudio dependency); the workaround is to download and install the matching .whl package from that site.
The official documentation includes example code for recognizing speech input from a microphone.
Below, speech_recognition records what you say through the microphone. I use recognize_google here; speech_recognition provides many similar interfaces for other engines (an offline alternative is sketched after the code).
import time
import speech_recognition as sr

# Record what you say with the microphone and return the recognized text
def recordAudio():
    print("Recording your speech from the microphone")
    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)

    data = ""
    try:
        # Send the audio to Google's free Web Speech API
        data = r.recognize_google(audio)
        print("You said: " + data)
    except sr.UnknownValueError:
        print("Google Speech Recognition could not understand audio")
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))
    return data

if __name__ == '__main__':
    time.sleep(2)
    while True:
        data = recordAudio()
        print(data)
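recognize_google() is only one of the engines the library wraps. If you cannot reach Google's servers, speech_recognition also supports offline recognition through CMU Sphinx via recognize_sphinx(). The sketch below assumes the pocketsphinx package is installed; it is not part of the original article's code.

import speech_recognition as sr

def recordAudioOffline():
    # Same recording flow as recordAudio(), but recognition runs locally
    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)
    try:
        # recognize_sphinx() needs no network or VPN (requires pocketsphinx)
        return r.recognize_sphinx(audio)
    except sr.UnknownValueError:
        print("Sphinx could not understand audio")
    except sr.RequestError as e:
        print("Sphinx error; {0}".format(e))
    return ""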
Below is the result from my (rather improvised) spoken English.
Conversation
Above, we recorded speech with the microphone and got the corresponding text. The next step is plain string handling: for example, if the text contains "how are you", answer "I am fine", and then pass "I am fine" to gtts to turn the text back into speech.
# @Author:Runsen
# -*- coding: UTF-8 -*-
import speech_recognition as sr
from time import ctime
import time
import os
from gtts import gTTS

# Speak the AI's reply
def speak(audioString):
    print(audioString)
    tts = gTTS(text=audioString, lang='en')
    tts.save("audio.mp3")
    os.system("audio.mp3")

# Record what you say with the microphone
def recordAudio():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)

    data = ""
    try:
        data = r.recognize_google(audio)
        print("You said: " + data)
    except sr.UnknownValueError:
        print("Google Speech Recognition could not understand audio")
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))
    return data

# Built-in conversation skills (the rule-based logic)
def jarvis():
    while True:
        data = recordAudio()
        print(data)
        if "how are you" in data:
            speak("I am fine")
        if "time" in data:
            speak(ctime())
        if "where is" in data:
            data = data.split(" ")
            location = data[2]  # third word, e.g. "where is China" -> "China"
            speak("Hold on Runsen, I will show you where " + location + " is.")
            # Open Google Maps (macOS-only: "open -a Safari" launches the URL in Safari)
            os.system("open -a Safari https://www.google.com/maps/place/" + location + "/&")
        if "bye" in data:
            speak("bye bye")
            break

if __name__ == '__main__':
    # Initialize
    time.sleep(2)
    speak("Hi Runsen, what can I do for you?")
    # Run the assistant
    jarvis()
When I say "how are you?", an mp3 of "I am fine" plays.
When I say "where is China?", an mp3 of "Hold on Runsen, I will show you where China is." plays,
and Google Maps also opens at China.
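One caveat: the map is opened with os.system("open -a Safari ..."), which is a macOS-only command. A sketch of a portable replacement using Python's standard-library webbrowser module (this is a substitute, not part of the article's code) could look like this:

import webbrowser

def show_on_map(location):
    # Opens the URL in the system's default browser on Windows, macOS, and Linux
    webbrowser.open("https://www.google.com/maps/place/" + location)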
The GitHub repository for this project:
Summary
That concludes this walkthrough of a simple voice dialogue built in about 30 lines of Python. For more Python learning material and resources on voice dialogue, see the other related articles on W3Cschool!