VoiceInteraction
Rua Alves Redol, 9
1000-029 Lisboa
Portugal
Website: http://www.voiceinteraction.pt
VoiceInteraction, founded in 2008 and based in Lisboa (Portugal), is a company specialised in the development of speech technologies. It develops voice synthesis and speech recognition engines for web and telephony applications.
Supported languages: Portuguese · Portuguese (Brazil)
This applies to the Debian Lenny version; for other Linux distributions, read the official Audimus installation manual. To configure the apt client, edit /etc/apt/sources.list and add one of the following sets of lines.
For Asterisk 1.Y.X:
deb http://services.voiceinteraction.pt/repo/Debian/5.0 engines 3rdparty Dixi Audimus
deb http://services.voiceinteraction.pt/repo/Debian/5.0 asterisk.1.Y.X Dixi Audimus
Refresh the local package database with:
# apt-get update
Then install the package audimus-asterisk-xx-xx (where xx-xx is the requested language):
# apt-get install audimus-asterisk-es-es
The following extra packages will be installed:
audimus audimus-config audimus-model-es-es-monophones-g2p-phonemodels audimus-model-es-es-monophones-mlp-telephone audimus-model-es-es-monophones-task-asterisk
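To confirm which Audimus packages actually ended up on the system, a standard Debian query works (nothing VoiceInteraction-specific here):
# dpkg -l 'audimus*'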
Activate the license with:
# audimus_activate_license
Please enter your Audimus license:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Your License was successfully activated!
To enable speech recognition, change the main speech parameter. The “speech” parameter can take four values: “yes”, “automatic”, “no” or “emulation” (with “emulation”, no errors are generated if you enable speech grammars).
…
speech=automatic
speechprovider=verbio
…
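For a first test without triggering recognition errors, the same excerpt can be switched to emulation mode (only the value changes; the surrounding lines, elided with “…” above, stay as they are):
…
speech=emulation
speechprovider=verbio
…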
In the VoiceXML browser configuration file:
############################
# ASR server configuration #
############################
client.rec.resource.0.cacheDir   VXIString   /tmp/cacheContent
client.rec.resource.0.format     VXIString   txt
client.rec.resource.0.syntax     VXIString   doctype
You need to restart the VXI browser and Asterisk for all the changes to take effect.
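A sketch of the restart using the standard Debian Lenny init scripts (the VXI script name is an assumption and may differ on your installation):
# /etc/init.d/vxi restart
# /etc/init.d/asterisk restart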
Log file from the ASR engine:
# tail -f /var/log/VI/VI.log
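If recognition does not start, a quick scan of the same log for errors can help (a plain grep, nothing engine-specific):
# grep -i error /var/log/VI/VI.log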
The following VoiceXML example uses speech recognition with the built-in grammar ‘digits’.
<?xml version="1.0" encoding="iso-8859-1"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml" xml:lang="en-GB">
  <form>
    <property name="inputmodes" value="voice"/>
    <property name="timeout" value="30s"/>
    <field name="text" type="digits">
      <catch event="noinput nomatch">
        <reprompt/>
      </catch>
      <prompt>
        Speak to me:
      </prompt>
    </field>
    <filled>
      <prompt>
        You said: <value expr="text"/>
      </prompt>
      <clear namelist="text"/>
    </filled>
  </form>
</vxml>
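To try this document from Asterisk, a minimal dialplan sketch, assuming the VXI browser registers a Vxml() dialplan application and that the file is served at a hypothetical URL (the extension number, application name and URL are assumptions, not part of the original setup):
; extensions.conf (hypothetical extension and URL)
exten => 5000,1,Answer()
exten => 5000,n,Vxml(http://your-server/digits.vxml)
exten => 5000,n,Hangup()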