Sample Actions for the Google Assistant

Leon Nicholls
6 min read · Jul 19, 2018

Today we’re launching a handful of new sample Actions that are integrated with Dialogflow. These sample Actions are the fastest and easiest way for you to create an Action for the Google Assistant, and to learn more about the powerful features of the platform! You get a working Dialogflow agent with intents and full source code for the fulfillment, which you can then edit for your own Action on the Assistant.

We have 7 sample Actions that show developers how to use Actions on Google features such as rich responses, SSML, persistence, built-in intents and media playback.

Several of the sample Actions can send daily updates and notifications, or can be added to routines for habitual use. You can also preview the user experience of each sample type by playing the audio recording.

Once you have selected the sample Action you want to create, click on the ‘Add to Dialogflow’ button. You’ll be taken to the Dialogflow page below, where you can name your agent and get it up and running within a few clicks.

Dialogflow Agent

When the ‘Add to Dialogflow’ process completes, you’ll get a working agent with intents and full source code for the fulfillment. To run the agent as an Action, just follow the instructions in the fulfillment code editor.

The agent intents are designed to follow our best practices for Actions:

  • Error handling with no-input and fallback logic
  • Support for help
  • Support for repeating prompts (see the example below)
  • Support for navigating back
  • Exit and cancel prompts
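
For example, here's a minimal sketch of what the repeat support can look like, assuming each intent handler caches the prompt it just spoke in conv.data (the lastResponse property and handler name are assumptions, not necessarily the samples' exact implementation):

// Hypothetical repeat handler: replay the previous prompt.
// Assumes each handler saves its SSML before asking, e.g.:
//   conv.data.lastResponse = ssml;
//   conv.ask(ssml);
app.intent('Repeat', (conv) => {
  conv.ask(conv.data.lastResponse);
});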

When applicable, we also support built-in intents to make the Action more discoverable by users. For example, the horoscope sample is configured for the ‘actions.intent.GET_HOROSCOPE’ built-in intent.

Other built-in intents you can use for your own Actions are listed in our reference docs.

The agent is configured to use Cloud Functions for Firebase for the fulfillment. The source code in the Dialogflow fulfillment editor provides step-by-step instructions on how to customize the Action behavior. For example, you can change the Action name used in the welcome prompt by modifying the ‘name’ variable value (the value below is just a placeholder):
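
// The Action name used in the welcome prompts; replace the
// placeholder with your own Action's name
const name = 'My Adventure';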

The source code provides some default data, but you should replace it with your own data to make the Action truly yours.

To try out the Action, you don’t need to understand the rest of the source code. However, if you want to customize more aspects of the Action or even add your own features, then you can dig further.

Audio

Each of the Action types comes with its own sounds and effects. We use various sounds, including some from our AoG Sound Library, for introducing the Actions and for providing feedback during the conversation. Here are the sounds we use for the choose-your-own-adventure Action:

const introSound =
  'https://storage.googleapis.com/actionsresources/intro4norm.ogg';
const backgroundSound =
  'https://actions.google.com/sounds/v1/ambiences/ambient_hum_air_conditioner.ogg';
const endSound =
  'https://storage.googleapis.com/actionsresources/outro4norm.ogg';
const reactionSound =
  'https://actions.google.com/sounds/v1/doors/wood_door_open_close.ogg';

The sounds are included in the conversation responses by using the SSML <audio> tag. The responses also include background sounds using the SSML <par> and <media> tags, which are unique to Actions on Google and play sounds in parallel. For example, here is a code snippet for generating the SSML markup in a response to the user:

conv.ask(`<speak>
  <par>
    <media xml:id="reactionSound">
      <audio src="${reactionSound}"/>
    </media>
    <media xml:id="intro" begin="reactionSound.end+2.0s">
      <speak>${options.prompt1}<break time="500ms"/></speak>
    </media>
  </par>
</speak>`);
conv.ask(`<speak>
  <par>
    <media xml:id="data">
      <speak>${options.prompt2}</speak>
    </media>
    <media xml:id="more" begin="data.end+1.0s">
      <speak>${options.prompt3}</speak>
    </media>
    <media xml:id="backgroundSound" begin="data.begin-0.0s"
        end="more.end-0.0s" fadeOutDur="1.0s" soundLevel="-5dB">
      <audio src="${backgroundSound}"/>
    </media>
  </par>
</speak>`);

Prompts

Each Action type includes various kinds of prompts that follow our conversational design guidelines. Each prompt type can have multiple alternatives:

const prompts = {
  'welcome': [
    `Welcome to ${name}.`,
    `Hi! It's time for ${name}.`
  ],
  'welcome_back': [
    `Welcome back to ${name}.`,
    `Hi again. Welcome back to ${name}.`
  ],
  // ...
};

A prompt is selected randomly to make the conversation more natural. The source code includes a utility to get a random prompt without sequential repeats.
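
Here's a minimal sketch of how such a utility might work (the helper names match the snippets below, but the body is an assumption rather than the samples' exact code):

// Hypothetical sketch: pick a random item from an array
const getRandomItem = (items) =>
  items[Math.floor(Math.random() * items.length)];

// Pick a random prompt of the given type, avoiding an immediate
// repeat of the prompt used last time for that type
const getRandomPrompt = (conv, type) => {
  conv.data.lastPrompts = conv.data.lastPrompts || {};
  const available = prompts[type];
  let prompt = getRandomItem(available);
  if (available.length > 1) {
    while (prompt === conv.data.lastPrompts[type]) {
      prompt = getRandomItem(available);
    }
  }
  conv.data.lastPrompts[type] = prompt;
  return prompt;
};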

Intent Handling

The fulfillment uses a very useful feature added in v2 of our AoG client library, called middleware, which lets you perform common tasks on every intent invocation. We use it to do some logging, to determine whether the request input is by voice, and to reset a variable that tracks the number of fallbacks:

app.middleware((conv) => {
  console.log(`Intent=${conv.intent}`);
  console.log(`Type=${conv.input.type}`);
  // Determine if the user input is by voice
  conv.voice = conv.input.type === 'VOICE';
  if (!(conv.intent === 'Default Fallback Intent' ||
      conv.intent === 'No-input')) {
    // Reset the fallback counter for error handling
    conv.data.fallbackCount = 0;
  }
});

For the welcome intent, the code uses the convenient conv.user.last.seen value to determine whether the user has used the Action before. Different prompts are also used depending on whether the user input is by voice:

app.intent('Default Welcome Intent', (conv) => {
  console.log(`Welcome: ${conv.user.last.seen}`);
  reset(conv);
  const response = selectOption(conv, null);
  const config = {
    intro: true,
    prompt1: conv.user.last.seen ?
      getRandomItem(prompts.welcome_back) :
      getRandomItem(prompts.welcome),
    prompt2: getRandomPrompt(conv, 'intro'),
    prompt3: !conv.voice ? `${response.description}
      ${response.optionsText}` : `${response.description} <break
      time="500ms"/>${response.options}`
  };
  makeSsml(conv, config);
  // Add suggestions to continue the conversation
  conv.ask(response.suggestions);
});

Suggestion chips are used to make it easy for users to select a response.
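
In the v2 client library, chips are created with the Suggestions class; for example:

// Suggestion chips are only shown on devices with screens
const {Suggestions} = require('actions-on-google');

conv.ask(new Suggestions(['More', 'Help', 'Quit']));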

The client library's intent handling has a convenient feature: more than one intent can be handled by the same handler. Here is how the ‘More/Yes/Next/Again’ intents are handled:

app.intent(['More', 'Yes', 'Next', 'Again'], (conv) => {
  console.log(`More: fallbackCount=${conv.data.fallbackCount}`);
  const response = selectOption(conv, null);
  const config = {
    prompt1: getRandomPrompt(conv, 'confirmation'),
    prompt2: '',
    prompt3: !conv.voice ? `${response.description}
      ${response.optionsText}` : `${response.description} <break
      time="500ms"/>${response.options}`
  };
  makeSsml(conv, config);
  conv.ask(response.suggestions);
});

Error Handling

As part of our error-handling strategy, both no-inputs and no-match fallbacks are explicitly handled in the fulfillment logic. For no-inputs, the handler tracks how many times the user hasn't responded and uses reprompts to get the conversation back on track:

app.intent('No-input', (conv) => {
  const repromptCount =
    parseInt(conv.arguments.get('REPROMPT_COUNT'));
  console.log(`No-input: repromptCount=${repromptCount}`);
  if (repromptCount === 0) {
    conv.ask(getRandomPrompt(conv, 'no_input1'));
  } else if (repromptCount === 1) {
    let options = [];
    if (data[conv.data.current].options) {
      for (let key of Object.keys(data[conv.data.current].options)) {
        options.push(data[conv.data.current].options[key]);
      }
    }
    conv.ask(`${getRandomPrompt(conv, 'no_input2')}
      ${getRandomPrompt(conv, 'options')}
      ${makeOxfordCommaList(options)}.`);
  } else if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    // Last no-input allowed; close conversation
    conv.close(getRandomPrompt(conv, 'no_input3'));
  }
});
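
No-match fallbacks follow a similar escalation pattern, using the fallbackCount variable that the middleware above resets. Here's a minimal sketch (the prompt keys and the three-strikes limit are assumptions based on the middleware logic, not the samples' exact code):

app.intent('Default Fallback Intent', (conv) => {
  conv.data.fallbackCount++;
  console.log(`Fallback: count=${conv.data.fallbackCount}`);
  if (conv.data.fallbackCount < 3) {
    // First and second no-match: reprompt the user
    conv.ask(getRandomPrompt(conv, 'fallback'));
  } else {
    // Third no-match in a row: end the conversation gracefully
    conv.close(getRandomPrompt(conv, 'fallback_final'));
  }
});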

Daily updates and routines

Most of the sample Actions are designed to support the automatic discovery flow for daily updates and routines, where the Assistant takes care of showing the opt-in prompt to the user.

To enable daily updates or routines, go to the Actions console for your project, select the ‘Actions’ menu option, select ‘One-shot’ from the list of Actions, and then enable daily updates or routines under ‘User engagement’. The Assistant will then automatically suggest adding your Action to daily updates or routines after users have invoked it.

Testing your Action

After you have modified the fulfillment code, save and deploy it as a Cloud Function by clicking on the ‘Deploy’ button at the bottom of the Dialogflow fulfillment editor.

Once the function is deployed, click on ‘Integrations/Google Assistant/Test’ to launch the Actions simulator. Now you can test your new Action.

When you are ready to publish the Action, go to the Actions console and provide some additional information for the Actions directory and submit your Action for review.

Get Started

Try these new sample Actions for yourself on our Actions on Google samples page.

We look forward to seeing how you use these to create your own Actions!

Want More? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.
