
Beginner’s guide to Tasker, part 8: AutoVoice


A month and a half ago, I posted a guide to AutoVoice. At that point I didn’t think I was going to end up using it myself, but I was wrong. AutoVoice is becoming more popular every minute, and has even gained several new features since the original guide, so I decided to take the experience I’ve had with it since then and write a more comprehensive, updated guide, making it part of the Beginner’s Guide in the process.

Note: AutoVoice is a plug-in app for Tasker. This means that in order to be able to use it, you absolutely have to know how to use Tasker. I say this because with the increasing popularity of AutoVoice recently, a lot of people are looking into Tasker for the first time specifically for using AutoVoice, and I cannot stress enough that you need to know Tasker before you can start using AutoVoice. This guide deals with AutoVoice itself, and won’t explain every term and concept that’s a standard part of Tasker. For a guide to the basics of Tasker, follow this link. Trust me when I say there are no shortcuts here, and you cannot expect to start creating amazing home automation systems and voice assistants right away if you’ve never touched Tasker before, because Tasker itself is a bigger part of an AutoVoice setup than AutoVoice is. 

Note 2: This guide is being released alongside a major AutoVoice update, and covers the new version. It’s important that you update in order to see all the features referred to here.

What is AutoVoice?

AutoVoice is designed to let you use voice recognition to bring spoken commands into Tasker, where you can then use those commands for various things. Tasker does come with a similar feature out of the box, called Get Voice. Get Voice is an action that pops up a voice input box, waits for you to speak, stores the result in the variable %VOICE, and that’s it. To actually use the information in that variable, you need complex systems of if/then conditions, and sometimes you also have to split and process the data in the variable before it’s of any use to you. It works; it’s just one of those many Tasker features where only the basic functionality is in place.

AutoVoice, on the other hand, is a standalone app whose only purpose is to give users as many voice control features in Tasker as possible. The basic concept of how it works is different and easier to manage, and you get more variables for accessing the information (making data processing easier, or less likely to be needed at all), Bluetooth headset support, direct shortcut support, a separate log system, a feature for chaining commands together, and much more. Bottom line: if you took Get Voice and expanded it, you’d get AutoVoice.


Download: Google Play

How AutoVoice works: Action/profile relationship

One of the fundamental differences between Get Voice and AutoVoice is what it does with the data after you speak into the microphone. Get Voice is basically designed to be part of a task, where you use the Get Voice action to input data at some point in the task, and you then normally use that data later in the same task. In many ways, Get Voice is the voice equivalent of popping up a keyboard to ask you to input something.

AutoVoice, on the other hand, is designed more with voice assistant functionality in mind. It assumes that you’re not using it to just input some text somewhere instead of using a keyboard, but that you’re using it to give voice commands to your phone. Because of this, each response to a specific command is its own profile in Tasker. These profiles consist of a standard Tasker task that contains everything you want to happen when a specific command is recognized, as well as the AutoVoice Recognized context, which makes sure the profile runs when specific words or phrases are detected in the voice input. The AutoVoice Recognized context has a so-called command filter, a field where you specify the word or phrase that should trigger the profile. If the command filter is “hello”, the profile will trigger when the word “hello” is used in a voice command.

Triggering the voice input box that you speak to is done completely separately from these profiles, either using a Tasker action, a direct shortcut (part of the Android shortcut system and available in launchers and similar apps), or a direct response to clicking a Bluetooth headset button (the options for this can be found in the main AutoVoice app, and are named “AutoRecognize BT”). This way you have a standard way (or ways) of triggering the voice input box, and then however many AutoVoice Recognized profiles you have will all be able to trigger based on what you say.

Example

As an example, let’s say you want to create a task that runs when you tell AutoVoice “hello”. You would then start by creating a new profile, and selecting the AutoVoice Recognized context from the State-Plugin section. AutoVoice Recognized is simply the context that’s used for making profiles active when AutoVoice hears a specific phrase. In the configuration screen for this, you should first enable Event Behavior. This option should basically be enabled on every single profile, as it makes it so that the profile acts as an Event profile as per standard Tasker behavior. Next, you would go into Command Filter, enter “hello”, and save out of everything. For the task attached to this context, you would put in everything you’d want Tasker to do when you tell AutoVoice “hello”. For the sake of the example, let’s say you put in a Say action with “hello to you too”.

Now you have a profile that responds to any mention of “hello” when you use voice commands. This includes “hello”, “hello how are you”, and “I just wanted to say hello to you”; in other words, the command filter just has to be part of the spoken phrase.
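If it helps to see that rule spelled out, here’s a minimal Kotlin sketch of how a plain (non-regex) command filter behaves. This is just an illustration of the rule as described, not AutoVoice’s actual code:

```kotlin
fun main() {
    val commandFilter = "hello"
    val spokenCommands = listOf(
        "hello",
        "hello how are you",
        "I just wanted to say hello to you",
        "goodbye"
    )
    // A plain command filter matches any spoken command that contains it,
    // so the first three commands trigger the profile, the last doesn't.
    for (spoken in spokenCommands) {
        val triggers = spoken.contains(commandFilter, ignoreCase = true)
        println("\"$spoken\" -> triggers: $triggers")
    }
}
```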

Finally, you need to set up a way to trigger voice recognition. As mentioned earlier, you have three options: a Tasker action, a direct shortcut, and Bluetooth headset buttons. The Tasker action is called AutoVoice Recognize (not to be confused with the AutoVoice Recognized context) and can be found in the Plugin section of the action library. Simply running a task with that action in it will pop open a speech input box so you can start giving voice commands, and the same goes for the other two methods of invoking the speech box. The trick is to use these options to give yourself easy ways of activating the box. I have a shortcut on my lock screen, an AutoRemote command, and an anywhere-accessible touch screen gesture using GMD GestureControl, all of which I can use to trigger it on my phone.

Accessing voice data in the task

The above example used a static response that simply triggered on a keyword, but a lot of the time you’ll want to use whatever you say as part of the task. A simple example would be a task that responds to “Call me [Name]”, where it would respond with “I will call you [Name]”. In that case you’ll need to actually get your hands on the name that was spoken, and AutoVoice gives you several options to do that.

The first is with the variable %avcomm, which contains the entire command that was recognized. If you say “Hello how are you today”, %avcomm will contain “Hello how are you today”.

The second variable you can use is %avcommnofilter. This is essentially the variable from above minus the command filter that was specified. If you say “Hello how are you today” and the command filter is “hello”, %avcommnofilter will contain “how are you today”.

The third option is an array, so knowledge of how to use arrays is useful. Each word you speak will be given a separate variable as part of an array, %avword. This means that if you say “Hello how are you today”, %avword1 will be “Hello”, %avword2 will be “how”, %avword3 will be “are”, %avword4 will be “you”, and %avword5 will be “today”. This array adapts to however many words the command consists of.
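To tie the three options together, here’s a small Kotlin sketch of how they relate to a single spoken command. This is my own illustration based on the behavior described above; the exact splitting rules are an assumption:

```kotlin
fun main() {
    val commandFilter = "hello"
    val avcomm = "Hello how are you today"  // %avcomm: the full recognized command
    println("%avcomm = $avcomm")

    // %avcommnofilter: the command minus the command filter
    val avcommnofilter = avcomm
        .replaceFirst(commandFilter, "", ignoreCase = true)
        .trim()
    println("%avcommnofilter = $avcommnofilter")  // "how are you today"

    // %avword array: one variable per spoken word, indexed from 1
    avcomm.split(" ").forEachIndexed { i, word ->
        println("%avword${i + 1} = $word")
    }
}
```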


The final option is to use regex to create your own variables based on the input command. This is done by replacing dynamic parts of the command filter with (?<variablename>.+), which will create %variablename with a value of whatever it replaced. As an example, a regex command filter “say hello to (?<name>.+)” and a spoken command “say hello to Bob” will create %name with the value “Bob”. This gets increasingly useful the more embedded in the command the information you need is, for instance with a command filter along the lines of “turn (?<room>.+) lights to (?<level>.+) percent”. You can of course combine this with other regex features, like wildcards, to tailor what it triggers on.
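Since Tasker runs on Android, the (?<name>.+) syntax is Java’s named-group regex flavor, and the extraction amounts to something like this Kotlin sketch (illustration only):

```kotlin
import java.util.regex.Pattern

fun main() {
    // The same named-group filter as in the example above.
    val filter = Pattern.compile("say hello to (?<name>.+)")
    val spoken = "say hello to Bob"

    val matcher = filter.matcher(spoken)
    if (matcher.find()) {
        // In Tasker, this value would land in the local variable %name.
        println("%name = ${matcher.group("name")}")  // %name = Bob
    }
}
```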

To properly go through how to use this method in various situations for anyone who’s not familiar with regex would require a regex guide in itself, and I won’t do that. An alternative to this method that doesn’t require regex knowledge is available here. The regex method is cleaner, the alternative is easier to understand, so I’ll leave it at that.

As you can see, all of these variables are local variables, hence the lower case letters. That means they’re only available in the entry task for any profile triggered by the command, and not anywhere else. You can of course use standard Tasker methods for copying them into global variables if you need to store the information or use it elsewhere.

Command ID system

One of AutoVoice’s big advantages over Get Voice is the command ID system. This system allows you to better control when AutoVoice profiles can trigger, by making them dependent on certain command IDs.

A command ID is simply a number that you can specify in order to link profiles together. You can set the current command ID either by using the Set Last Command ID action in a task and running it, or by filling in the Command Id field in an AutoVoice Recognized context and having that profile activate. You can then specify the same number under “Last Command Id” in an AutoVoice Recognized context in order to limit that context to only becoming active if the given command ID is set.
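The gating logic is simple enough to sketch. The following Kotlin snippet is just a model of the concept, not AutoVoice’s implementation; the comments map my made-up fields to the fields described above:

```kotlin
// Illustration only: a minimal model of the command ID gate.
var lastCommandId: Int? = null

data class Profile(
    val commandFilter: String,
    val requiredLastCommandId: Int? = null, // the "Last Command Id" field
    val commandIdToSet: Int? = null         // the "Command Id" field
)

fun tryTrigger(profile: Profile, spoken: String): Boolean {
    // A profile with a "Last Command Id" only fires if that ID
    // is the one most recently set.
    if (profile.requiredLastCommandId != null &&
        profile.requiredLastCommandId != lastCommandId
    ) return false
    if (!spoken.contains(profile.commandFilter, ignoreCase = true)) return false
    // A triggered profile with a "Command Id" sets it for later profiles.
    profile.commandIdToSet?.let { lastCommandId = it }
    return true
}

fun main() {
    val emailConfirm = Profile("send email", commandIdToSet = 1)
    val emailYes = Profile("yes", requiredLastCommandId = 1)

    println(tryTrigger(emailYes, "yes"))            // false: no command ID set yet
    println(tryTrigger(emailConfirm, "send email")) // true: sets command ID 1
    println(tryTrigger(emailYes, "yes"))            // true: the gate is now open
}
```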

Example: Command ID with context

Let’s say you create a profile that sends an email based on your voice command. You want to make sure it gets things right, so you first have it read the email back to you and ask if that’s what you want to do. You then create profiles for “yes” and “no”, and everything works fine.

A bit later on you create a similar profile for Twitter, and you add new profiles that use “yes” and “no” as the trigger. The first time you use it, you discover that when you say “yes”, it triggers both the yes-profile for Twitter and the one for email! If you say no, it triggers both no-profiles!

To fix this, you set up the “yes” and “no” profiles for email to have 1 in the Last Command Id field. You then go into the first email profile (the one that asks you for confirmation) and set the Command Id field to 1. Next up, you do the same with the Twitter profiles, but use the number 2 instead.

This will limit the yes/no profiles so that they can only run after their “parent profiles”.

Example 2: Command ID with action

The above example was for using a context for setting the command ID. However, you can also do it with an action, which can be useful in a lot of cases.

One example is if you want to have different variations of a profile trigger based on how you initiated the voice input box. Let’s say you have a profile for starting AutoVoice Recognize with a Bluetooth headset button, as well as a home screen shortcut to a task that starts it directly. When you use the home screen shortcut, you want it to respond to you using discreet Flash messages on the screen, but when you use the Bluetooth headset, you want it to respond using voice with the Say action.

You could then use the AutoVoice Set Cmd Id action to set a command ID as part of the task that initiates the voice box, using different IDs for the Bluetooth profile and the home screen shortcut task. You can then make two separate profiles, one for the Flash response and one for the Say response, and simply use the respective command ID to limit when they can run. That way you can have two separate versions of the same profile, both with the same command filter, and control which one triggers based on how you initiated voice input.

A more advanced example is to use AutoRemote to initiate voice input on your device from your PC, and then use the same method to decide whether something is done locally (when the voice input box is triggered locally) or remotely (when it’s triggered from AutoRemote). One way of using this would be to have a Google search feature that would search on your Android device if the voice recognition was initiated there, or on your PC if it was initiated from there.

Please also read this post about a certain situation where the Set Cmd Id action is a fix for an issue with context-set command IDs.

Rec Failed and No Match

AutoVoice comes with two contexts that are important to know how to use: AutoVoice Rec Failed and AutoVoice No Match.

AutoVoice Rec Failed is a context that will make a profile activate when voice recognition fails. This means it failed to find any language in what you were saying, or didn’t hear you say anything at all. Don’t confuse this with misinterpreting what you said; in that case the recognition didn’t fail, it simply didn’t recognize it correctly. The purpose of this is to create profiles that, for instance, ask you to repeat yourself if the recognition fails. Command IDs are supported for this context.

AutoVoice No Match, on the other hand, is a context that allows you to create a profile that triggers when it did detect some language in what you said, but couldn’t find a profile that matched it. An example would be if you said “hello” to trigger a profile with command filter “hello”, but it heard it as “hell of”. This feature is new to AutoVoice, and is what a lot of people thought the above feature was for. It does not support command IDs at the time of this writing, but you can find an option in AutoVoice’s advanced settings section to set how long AutoVoice will wait for a profile to activate before deciding nothing was matched.

Bluetooth headset support

AutoVoice supports Bluetooth headsets, which is another bonus over Get Voice. You will find Bluetooth related options all throughout AutoVoice, and most of them are fairly self-explanatory. When configuring the AutoVoice Recognize action, you can decide whether or not it will use the headset. To use the headset’s buttons to trigger Tasker tasks, use the AutoVoice BT Pressed contexts that come with AutoVoice to create standard profiles that are triggered when the buttons are pressed.

Please note that Bluetooth headsets are often very different, and you have no guarantee that a particular headset will work with the button contexts. You basically just have to try.

AutoVoice Continuous

AutoVoice Continuous allows you to start a background process that will listen for voice commands continuously. This means that when it’s active, you should be able to just start speaking without triggering the voice input box first.

To use it, run the AutoVoice Continuous action from the plugin category: run it with the check mark checked to start listening, and again with it unchecked to stop. A toggleable task works well here.

This feature has been improved a lot recently, but it’s still to be considered a work in progress. It lacks some important features, and it might not work well or at all on some devices, Android versions, ROMs, etc. It also drains the battery quite quickly, and is best used connected to a charger.

Useful tips

This section covers things that are important to know when using AutoVoice, but hard to fit into their own sections.

Direct home screen shortcut

The latest version of AutoVoice supports adding a direct shortcut that initiates voice recognition, using the standard Android shortcut system compatible with most launchers. The advantage is that it’s often a lot faster than doing it via a task; the disadvantage is that you can’t do things like set a command ID with an action when you do it.

Limiting how many suggested interpretations are used

Voice recognition involves a bit of guesswork on the server side, using common phrases and language models to try to interpret what you said. Normally, the Google system that AutoVoice uses to convert voice into text will return up to five possible results, and AutoVoice profiles can trigger on any of these suggestions. A single profile will only trigger on one result, but other profiles can trigger on the same or a different result. Sometimes this causes issues, because secondary interpretations trigger profiles you didn’t intend.

As an example, I had a profile that triggered on the word “goodbye”, and another that triggered on “goodnight”. One time both profiles triggered when I said “goodbye”, because even though the first suggested interpretation was indeed correct, one of the other four suggestions was “goodnight”. Because of this, both profiles triggered.

If you have issues with this happening, you can change how many suggested interpretations are considered in the options for each AutoVoice Recognized context. The option is called Precision.
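Put together, the matching amounts to something like this rough Kotlin sketch (the suggestion list here is made up for illustration):

```kotlin
fun main() {
    // Hypothetical list of what the recognizer might return for "goodbye".
    val suggestions = listOf("goodbye", "good bye", "goodnight", "good buy", "good by")
    val precision = 5  // the Precision option caps how many suggestions are considered

    for (filter in listOf("goodbye", "goodnight")) {
        val triggers = suggestions.take(precision)
            .any { it.contains(filter, ignoreCase = true) }
        println("profile \"$filter\" triggers: $triggers")
    }
    // With precision = 5, both profiles fire; with precision = 1,
    // only the "goodbye" profile would.
}
```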

It should also be noted that even though multiple suggestions can trigger a profile, only the one that actually triggers it will populate the AutoVoice variables.

Using regex for command filters

Command filters support regular expressions. This isn’t a regex tutorial so I won’t go into detail on that, but this post shows one example of regex being used, in that case to allow for multiple command filters in a single profile.
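As a taste of what regex buys you here, an alternation lets a single command filter accept several phrases. A quick sketch (the filter in the linked post may well look different):

```kotlin
import java.util.regex.Pattern

fun main() {
    // One command filter that accepts several trigger phrases.
    val filter = Pattern.compile("hello|hi there|good morning")
    for (spoken in listOf("hi there everyone", "good morning phone", "goodbye")) {
        println("\"$spoken\" -> matches: ${filter.matcher(spoken).find()}")
    }
}
```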

Trigger multiple profiles with the same command: Home automation example

The key to being able to use dynamic commands in AutoVoice is to pick your command filters carefully. If you can use simpler command filters without the risk of accidentally triggering the profiles, do it. This allows you to dynamically trigger multiple profiles with one command by simply working multiple command filters into the command.

An example is Doug Gregory’s now famous AutoVoice home automation video, seen below:

This video makes it seem as though the phone is a genius, understanding everything he says no matter how he phrases it, even allowing for multiple tasks per command. It looks difficult to set up, but it actually uses very simple logic.

The system assumes that the user is not brain dead. If you mention a specific lamp, it’s extremely likely that you want to do the exact opposite of what’s currently happening. For instance, if the lamp is on, it’s fairly likely that you’ll tell the system to turn it off, as few people would stand there saying “turn on the lamp” when it’s already on. As such, it doesn’t actually need to pay attention to whether or not you say “on”, “off”, or any synonyms (kill, disable, enable, activate). It only needs to toggle the lamp whenever a reference to it is being made.

So, let’s say you want to control the “bar lights”. All you then do is create an AutoVoice Recognized profile, and specify “bar lights” in the Command Filter field. This is then tied to an action to toggle the bar lights using whatever home automation system is being used.

The result is that the bar lights will seem to react to extremely dynamic commands, like “I don’t want to see the bar lights anymore, please make them go away”. In reality, it simply picks up “bar lights”, and sends a command to toggle those. Unless you do something like tell it to turn them on when they’re actually on, or for instance mention that you need to buy new bar lights, it will seem like the system is more intelligent than it is.

By creating profiles like that for more appliances, you can mention multiple in one sentence and have them trigger. If you create similar profiles for the kitchen and living room lights, you could for instance say “turn off the bar lights and the living room lights, and turn on the kitchen lights”, and all three individual profiles would trigger, send toggle commands, and it would look like magic.
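Under the hood, this is just a contains-check plus a toggle per appliance. Here’s a minimal Kotlin sketch of the idea; the device names and the toggle mechanism are stand-ins for whatever home automation system is in use:

```kotlin
fun main() {
    // Stand-in state for three appliances; in a real setup the toggle
    // would be a command sent to your home automation system.
    val lights = mutableMapOf(
        "bar lights" to true,
        "living room lights" to true,
        "kitchen lights" to false
    )
    val spoken = "turn off the bar lights and the living room lights, " +
        "and turn on the kitchen lights"

    // Each appliance acts like its own profile: any mention toggles it,
    // regardless of how the rest of the sentence is phrased.
    for (name in lights.keys.toList()) {
        if (spoken.contains(name, ignoreCase = true)) {
            lights[name] = !lights.getValue(name)
            println("$name -> ${if (lights.getValue(name)) "on" else "off"}")
        }
    }
}
```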

Of course, there are examples in the video that are a bit more complicated, like telling the system he’s home or to shut down. Those are then essentially standalone profiles with tasks that do everything in one go, rather than activate lots of individual profiles. By simply using Off commands rather than Toggle commands, it’s then possible to make sure that it all turns off regardless of current configuration, instead of essentially just inverting it.

Multi-context profiles as an alternative to command IDs

At the end of the day, AutoVoice Recognized contexts are just contexts like any other. That means you can use multiple contexts in a single profile, and by doing so, control which profiles trigger. An example would be to pair an AutoVoice Recognized context with a standard WiFi Connected context in order to limit where that profile can become active.

For instance, you could set up several profiles that all trigger on the word “leaving”, but have other contexts as well. One could have a WiFi Connected context for your home WiFi network and shut down your home automation system when run, allowing you to say “I’m leaving” when you leave the house. Another could have your office WiFi as a second context, so that saying “I’m leaving” there would disable office mode, text your wife, or something like that.

Event behavior should practically always be checked

Tasker doesn’t allow plug-ins to add Event contexts, so the workaround is to make a context that activates and deactivates quickly. That is what the event behavior option does. There might be cases where you wouldn’t check it, which is why it’s an option at all, but for 99.9% of cases it should be checked.

Do note that since this turns the profile into a state context that quickly activates and deactivates, you may need to disable the restore settings option for the profile (by long pressing on the profile name and then clicking the settings icon) to prevent it from instantly reverting some settings.

This behavior also means that exit tasks currently don’t really have any business being in AutoVoice profiles. If you need to make a profile that can be activated or deactivated with voice, you need to create a toggleable task or use some other method for making that happen.

Integrating AutoVoice with other services

I have several articles in the AutoVoice section of the Tasker content portal that deal with using other services from AutoVoice. This includes using Google Navigation, Google Now, and voice dialing. Make sure to follow the site to get more individual AutoVoice articles!

Read the Google Play description

I’ve been amazed many times at how some people never bother reading the description of an app, and I’ve seen people get angry at the developer for not properly informing about something that is written extremely clearly in the app description. As such, I find myself having to point out that reading the Google Play description really is something you should do.

Frequently Asked Questions

This is a collection of the most frequently asked questions and problems that both I and the AutoVoice developer have come across.

How do I change the recognition language?

AutoVoice uses Google’s services for voice recognition, so to change the language, go into the Google Search app, then into the settings for it, Voice, and then Language.

AutoVoice has taken over my phone dialer, how do I fix it?

Some Bluetooth headsets have their buttons mapped to dial back the last number. If you then choose to always use AutoVoice for this, you’re not telling the device to use AutoVoice for that button, you’re telling it to use AutoVoice as the dialer!

To fix it, head into system settings, then Application Manager, and find AutoVoice. Tap the Clear defaults button, and it’s fixed!

AutoVoice doesn’t work with my Bluetooth headset!

Send the developer that specific headset model for free, and he might be able to add support for it. Seriously, some things you can’t guess your way to adding support for.

I tried changing settings with AutoVoice but it doesn’t work!

See the section called “Event behavior should practically always be checked” further up.

How can I make AutoVoice work offline?

Ask Google; AutoVoice uses its voice recognition system.

AutoVoice doesn’t understand what I’m saying!

Try changing the language as described above. For English there are multiple variants supported, so try them all if you’re having problems.

I get connection errors even though I’m connected to the net!

This is a known issue with some versions of the Google Search app, which also affects AutoVoice. Google will need to fix it.

AutoVoice cuts off my commands!

From the Google Play description:

This version is limited to commands with 4 characters. If you want to unlock complete commands, you can do it in-app or buy the separate unlock key here: http://goo.gl/g8cKH

Dear developer, how do I make a task that does X, Y, and Z?

The AutoVoice developer is responsible for AutoVoice, not Tasker. This is a distinction I see a lot of people missing completely. Like I said at the beginning, you need to know how to use Tasker in order to use AutoVoice. At the end of the day, most of AutoVoice’s functionality relates to triggering profiles, and what the tasks for those profiles do is in most cases a matter of using Tasker, not AutoVoice.

So, on behalf of everyone who wants the dev to be able to work on adding more features to the app instead of answering support requests for something that’s not actually related to AutoVoice, please make sure that if you do contact him directly, it’s actually about AutoVoice and not Tasker.

Remember, it’s not the responsibility of a cup holder manufacturer to help you troubleshoot problems with the car the cup holder is attached to.

Usage examples

Below are a few videos showing my own use of AutoVoice in various situations, to give you an idea of what you can use AutoVoice for. The reason why I’m not going through each of these in detail is that the AutoVoice part of these examples is pretty much the same for each, and nothing out of the ordinary. What separates them is what the task does once the profile triggers, and that’s all Tasker, not AutoVoice.

I cannot emphasize this distinction enough: even though AutoVoice opens the door to creating a lot of new and interesting Tasker profiles, most of the job in creating something useful with AutoVoice is knowing what to do with the Tasker part of the equation.

Good luck!





Andreas Ødegård

Andreas Ødegård is more interested in aftermarket (and user created) software and hardware than chasing the latest gadgets. His day job as a teacher keeps him interested in education tech and takes up most of his time.
