Make Alexa speak before executing action with Python SDK

The use case is similar to things that already work out there. I want Alexa to say something and then execute an action. For example, with the Spotify integration I can ask Alexa to play a playlist:

Me: "Alexa, play rock playlist"
Alexa says: "I will play the rock classics playlist in Spotify"
Alexa proceeds to play the playlist.

Notice that Alexa spoke BEFORE actually sending the command to the Spotify API asking to play the playlist. So my question is: how can I achieve the same behavior with the Python SDK?
class GetNewFactHandler(AbstractRequestHandler):
    """Handler for Skill Launch and GetNewFact Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return (is_request_type("LaunchRequest")(handler_input) or
                is_intent_name("GetNewFactIntent")(handler_input))

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In GetNewFactHandler")
        print("Running action here")
        speak = "I will run the action"
        return handler_input.response_builder.speak(speak).response
I have something similar to the code above. But it always executes the action before Alexa says it will execute it. Is there any way to pass a callback to the response builder? I'm very new to the SDK, sorry if it is too obvious.

You can use a Progressive Response.

Your skill can send progressive responses to keep the user engaged while your skill prepares a full response to the user's request. A progressive response is interstitial SSML content (including text-to-speech and short audio) that Alexa plays while waiting for your full skill response.

To note: Alexa is not calling the API after saying the speech; it is calling the API while responding to the user, to make it look smooth.

Python code example:
import time

from ask_sdk_model.services.directive import (
    Header, SendDirectiveRequest, SpeakDirective)

def get_progressive_response(handler_input):
    # type: (HandlerInput) -> None
    request_id_holder = handler_input.request_envelope.request.request_id
    directive_header = Header(request_id=request_id_holder)
    speech = SpeakDirective(speech="Ok, give me a minute")
    directive_request = SendDirectiveRequest(
        header=directive_header, directive=speech)

    directive_service_client = handler_input.service_client_factory.get_directive_service()
    directive_service_client.enqueue(directive_request)
    time.sleep(5)  # stand-in for the slow work your skill actually does
    return
class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speech_text = "Hello World!"

        get_progressive_response(handler_input)

        handler_input.response_builder.speak(speech_text).set_card(
            SimpleCard("Hello World", speech_text)).set_should_end_session(False)
        return handler_input.response_builder.response
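One caveat: handler_input.service_client_factory is only available when the skill is configured with an API client. A minimal sketch of that wiring (handler registration trimmed to the one handler above):

from ask_sdk_core.skill_builder import CustomSkillBuilder
from ask_sdk_core.api_client import DefaultApiClient

# CustomSkillBuilder (rather than the plain SkillBuilder) accepts an api_client,
# which service_client_factory needs in order to call the Directive Service.
sb = CustomSkillBuilder(api_client=DefaultApiClient())
sb.add_request_handler(HelloWorldIntentHandler())
lambda_handler = sb.lambda_handler()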

Related

Alexa app does not recognize custom intent, but Alexa web simulator does

I'm trying to develop my own Alexa skill. When I try it out using the Alexa web simulator, it works fine and recognizes my intent, but when I try it using my mobile app (I'm signed in with the same account as in the web console), it does not recognize my intent. My code is the following:
# -*- coding: utf-8 -*-

# This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK for Python.
# Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management,
# session persistence, api calls, and more.
# This sample is built using the handler classes approach in skill builder.
import logging

import ask_sdk_core.utils as ask_utils

from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.dispatch_components import AbstractExceptionHandler
from ask_sdk_core.handler_input import HandlerInput

from ask_sdk_model import Response

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Welcome, you can say Hello or Help. Which would you like to try?"

        return (
            handler_input.response_builder
                .speak(speak_output)
                .ask(speak_output)
                .response
        )


class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello World!"

        return (
            handler_input.response_builder
                .speak(speak_output)
                # .ask("add a reprompt if you want to keep the session open for the user to respond")
                .response
        )


class HelpIntentHandler(AbstractRequestHandler):
    """Handler for Help Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("AMAZON.HelpIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "You can say hello to me! How can I help?"

        return (
            handler_input.response_builder
                .speak(speak_output)
                .ask(speak_output)
                .response
        )


class CancelOrStopIntentHandler(AbstractRequestHandler):
    """Single handler for Cancel and Stop Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return (ask_utils.is_intent_name("AMAZON.CancelIntent")(handler_input) or
                ask_utils.is_intent_name("AMAZON.StopIntent")(handler_input))

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Goodbye!"

        return (
            handler_input.response_builder
                .speak(speak_output)
                .response
        )


class FallbackIntentHandler(AbstractRequestHandler):
    """Single handler for Fallback Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In FallbackIntentHandler")
        speech = "Hmm, I'm not sure. You can say Hello or Help. What would you like to do?"
        reprompt = "I didn't catch that. What can I help you with?"

        return handler_input.response_builder.speak(speech).ask(reprompt).response


class SessionEndedRequestHandler(AbstractRequestHandler):
    """Handler for Session End."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("SessionEndedRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        # Any cleanup logic goes here.
        return handler_input.response_builder.response


class IntentReflectorHandler(AbstractRequestHandler):
    """The intent reflector is used for interaction model testing and debugging.
    It will simply repeat the intent the user said. You can create custom handlers
    for your intents by defining them above, then also adding them to the request
    handler chain below.
    """

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("IntentRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        intent_name = ask_utils.get_intent_name(handler_input)
        speak_output = "You just triggered " + intent_name + "."

        return (
            handler_input.response_builder
                .speak(speak_output)
                # .ask("add a reprompt if you want to keep the session open for the user to respond")
                .response
        )


class CatchAllExceptionHandler(AbstractExceptionHandler):
    """Generic error handling to capture any syntax or routing errors. If you receive an error
    stating the request handler chain is not found, you have not implemented a handler for
    the intent being invoked or included it in the skill builder below.
    """

    def can_handle(self, handler_input, exception):
        # type: (HandlerInput, Exception) -> bool
        return True

    def handle(self, handler_input, exception):
        # type: (HandlerInput, Exception) -> Response
        logger.error(exception, exc_info=True)
        speak_output = "Sorry, I had trouble doing what you asked. Please try again."

        return (
            handler_input.response_builder
                .speak(speak_output)
                .ask(speak_output)
                .response
        )


class CajaVecinaHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return ask_utils.is_intent_name("CajaVecina")(handler_input)

    def handle(self, handler_input):
        logger.info("Calling caja vecina")
        logger.info(handler_input.request_envelope.context)
        # isGeoSupported = context.System.device.supportedInterfaces.Geolocation;
        # geoObject = context.Geolocation;
        speak_output = "La caja vecina más cercana es la del paso los trapenses."

        return (
            handler_input.response_builder
                .speak(speak_output)
                .ask(speak_output)
                .response
        )


# The SkillBuilder object acts as the entry point for your skill, routing all request and response
# payloads to the handlers above. Make sure any new handlers or interceptors you've
# defined are included below. The order matters - they're processed top to bottom.
sb = SkillBuilder()

sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(HelloWorldIntentHandler())
sb.add_request_handler(HelpIntentHandler())
sb.add_request_handler(CancelOrStopIntentHandler())
sb.add_request_handler(FallbackIntentHandler())
sb.add_request_handler(SessionEndedRequestHandler())
sb.add_request_handler(CajaVecinaHandler())
sb.add_request_handler(IntentReflectorHandler())  # make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers

sb.add_exception_handler(CatchAllExceptionHandler())

lambda_handler = sb.lambda_handler()
The intent triggers when I say "caja vecina": when I type it in the web simulator it works fine, but when I do it using the mobile app, it does not recognize the intent. The skill does appear under "my skills" in the app.
What does your interaction model look like? Often, when what you type in the simulator works but what you say does not, the problem is with Alexa recognizing the speech. Try adding more sample utterances to your interaction model for that intent.

For example, suppose I have an intent with just one sample utterance, "hotdog". If I type "hotdog" in the web simulator, chances are Alexa will do the right thing, because "hotdog" exactly matches the sample. But if I say the phrase when using the app or a device, Alexa may actually be hearing "hot dog" with a space. In that case, adding "hot dog" as a sample will catch that and correct the issue.

How can I add a queue function in my Discord music bot?

I don't know how to add a queue function here; when I play another song while one is already playing, it gives me an error: "Already playing audio".
This is inside a cog, btw.
Here is my play command code:
@commands.command()
async def play(self, ctx, *, url):
    if ctx.voice_client is None:
        voice_channel = ctx.author.voice.channel
        if ctx.author.voice is None:
            await ctx.send("`You are not in a voice channel!`")
        if ctx.author.voice:
            await voice_channel.connect()
    else:
        pass
    FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    YDL_OPTIONS = {'format': 'bestaudio', 'default_search': 'auto'}
    vc = ctx.voice_client
    with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
        info = ydl.extract_info(url, download=False)
        if 'entries' in info:
            url2 = info['entries'][0]['formats'][0]['url']
            title = info['entries'][0]['title']
        elif 'formats' in info:
            url2 = info['formats'][0]['url']
            title = info['title']
        source = await discord.FFmpegOpusAudio.from_probe(url2, **FFMPEG_OPTIONS)
        if ctx.author.voice is None:
            await ctx.send("`You are not in a voice channel`")
        else:
            await ctx.send(f"`Now Playing: {title}`")
            vc.play(source)
Before anything: you seem to have answered your own question, "put a queue". Googling "how to make a queue in Python" would have shown you the way to Python's queue standard library.

Now what you need to do is identify what type of queue you want.

For now, assume there are three types of queue. One of them is a queue where you stack everything together, then take the top item first and work your way to the bottom. If you want to visualize this, a stack of plates might help: when you wash plates, you stack them on top of each other, and when you want one, you won't start from the bottom, as it's hard to get the bottom plate without disturbing the top ones. This is what we call a LIFO (last in, first out) queue.

Another type is the FIFO (first in, first out) queue. To visualize this, imagine a line of people in front of a shop for a limited game/figurine/item: whoever arrives first is the first to get the item and leave, letting the second person step up. The last person to arrive won't be the first to leave with the item.

There is also a type called a priority queue, which is what the name suggests. Imagine the same line as before, except it doesn't matter when you arrive: some VIP people get their turn first even if you arrived earlier than them. When you put items into this type of queue, you pass an iterable whose first element is a number giving the priority. So if we put (1, "hello"), then (30, "nope"), then (15, "wahahah"), the first get() from this queue gives (1, "hello"), the second get() gives (15, "wahahah"), and the last gives (30, "nope"). It serves values from lower priority numbers to higher.

Without a doubt, what you are looking for is a FIFO queue, first in first out: the song that you queue first is played first, followed by the other songs.

Python's queue standard library has a Queue class which offers this. Similarly, for a LIFO queue there is queue.LifoQueue. You can cap the size of these queues by passing a maxsize kwarg; it defaults to 0, which means infinitely many items. You would want to set a limit if you are low on RAM. A quick demonstration of all three types follows below.
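A minimal sketch of the three behaviors side by side, using the values from the example above:

import queue

fifo = queue.Queue()          # first in, first out
lifo = queue.LifoQueue()      # last in, first out
prio = queue.PriorityQueue()  # lowest priority number comes out first

for q in (fifo, lifo, prio):
    q.put((30, "nope"))
    q.put((1, "hello"))
    q.put((15, "wahahah"))

print(fifo.get())  # (30, 'nope')    - arrival order
print(lifo.get())  # (15, 'wahahah') - most recently added
print(prio.get())  # (1, 'hello')    - lowest priority number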
Now that you know what queues are and the basic types, you are wondering how you could implement this.

For this, stop looking at the code and write down the steps you want. Or alternatively, visualize what a user will do and what you want your bot to do.
++ The user executes the play command.
++ The bot checks whether a `Queue` for the guild.id exists.
++ If it does, add the song to this queue and notify the user: "Hey, I have added your song to the current queue."
-- If it doesn't, create a new Queue for that guild's ID and insert the current song. Don't play it yet, otherwise it would be hard to write the code in a uniform manner.
++ Now that we have a queue, or at least we know the song has been added, we check whether our voice client is currently playing something. Thankfully, `discord.VoiceClient` offers an `is_playing()` method, which is exactly what we need here. If it isn't playing, we play the song and then start a `task` in the background. This is not `ext.tasks` but an `asyncio.Task`. Think of it as launching a ball into space and never expecting it to return anything, and you move on with your life. That is the case until you await it: awaiting an `asyncio.Task` object makes it execute on the spot, and it blocks pretty horribly. We don't want that, so we just make it a background task and let it do its thing in the background.
++ After a song finishes, check whether our queue for guild.id has any songs left. If it doesn't, leave the VC and notify the user; if it does, play the next one and create a background task again.
From here on, everything I say is "my way of approaching this"; I don't believe it is the best way, but it works.

So here is the pseudo-logic of our looping function:
async def check_play(self, ctx: commands.Context):
    get our current voice client
    while our voice client exists and it is playing:
        sleep for 1 second asynchronously
    dispatch an event that tells our bot that a track has ended
That last line touches an internal function, bot.dispatch. To explain briefly, this is the function responsible for 'dispatching' "events" in your bot: on_message, on_raw_message, on_reaction_add, on_raw_reaction_add, on_ready, etc. This means you can create custom events and dispatch them yourself.

In case the above paragraph wasn't ample visualization, here is a small example to help:
bot = commands.Bot(...)

@bot.listen()
async def on_rude(eek: str):
    print("Wow the rude person said", eek)

@bot.command()
async def rude(ctx, *, arg):
    bot.dispatch('rude', arg)

# on using: ?rude what the hell??
# it will print: Wow the rude person said what the hell??
So now we can use this to our advantage: how about dispatching an on_track_end event that checks whether our queue has more songs? If it does, we play the next one; if it doesn't, we say we ran out of songs and leave the VC.

Now, combining all of that, here is the code.
# utils/models.py
from queue import Queue

class Playlist:
    def __init__(self, id: int):
        self.id = id
        self.queue: Queue = Queue(maxsize=0)  # maxsize <= 0 means infinite size

    def add_song(self, song):
        self.queue.put(song)

    def get_song(self):
        return self.queue.get()

    def empty_playlist(self):
        # queue.Queue has no clear() method; clear the underlying deque instead
        self.queue.queue.clear()

    @property
    def is_empty(self):
        return self.queue.empty()

    @property
    def track_count(self):
        return self.queue.qsize()
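A quick sanity check of how this wrapper behaves (the guild id and the song tuple here are made up for illustration):

playlist = Playlist(123456789)                       # hypothetical guild id
playlist.add_song(("<audio source>", "Some Title"))  # the cog below stores (source, title) tuples
print(playlist.track_count)  # 1
print(playlist.is_empty)     # False
print(playlist.get_song())   # ('<audio source>', 'Some Title')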
# main.py
import asyncio
import functools
from typing import Dict

import discord
from discord.ext import commands
import youtube_dl

from utils.models import Playlist

class Music(commands.Cog):
    def __init__(self, bot):
        self.bot = bot
        self.playlists: Dict[int, Playlist] = {}

    async def check_play(self, ctx: commands.Context):
        client = ctx.voice_client
        while client and client.is_playing():
            await asyncio.sleep(1)
        self.bot.dispatch("track_end", ctx)

    @commands.command()
    async def play(self, ctx: commands.Context, *, url: str):
        if ctx.voice_client is None:
            voice_channel = ctx.author.voice.channel
            if ctx.author.voice is None:
                await ctx.send("`You are not in a voice channel!`")
            if ctx.author.voice:
                await voice_channel.connect()
        FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
        YDL_OPTIONS = {'format': 'bestaudio', 'default_search': 'auto'}
        with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
            info = ydl.extract_info(url, download=False)
            if 'entries' in info:
                url2 = info['entries'][0]['formats'][0]['url']
                title = info['entries'][0]['title']
            elif 'formats' in info:
                url2 = info['formats'][0]['url']
                title = info['title']
            source = await discord.FFmpegOpusAudio.from_probe(url2, **FFMPEG_OPTIONS)
            self.bot.dispatch("play_command", ctx, source, title)

    @commands.Cog.listener()
    async def on_play_command(self, ctx: commands.Context, song, title: str):
        playlist = self.playlists.get(ctx.guild.id, Playlist(ctx.guild.id))
        self.playlists[ctx.guild.id] = playlist
        to_add = (song, title)
        playlist.add_song(to_add)
        await ctx.send(f"`Added {title} to the playlist.`")
        if not ctx.voice_client.is_playing():
            self.bot.dispatch("track_end", ctx)

    @commands.Cog.listener()
    async def on_track_end(self, ctx: commands.Context):
        playlist = self.playlists.get(ctx.guild.id)
        if playlist and not playlist.is_empty:
            song, title = playlist.get_song()
        else:
            await ctx.send("No more songs in the playlist")
            return await ctx.guild.voice_client.disconnect()
        await ctx.send(f"Now playing: {title}")
        ctx.guild.voice_client.play(song, after=functools.partial(lambda x: self.bot.loop.create_task(self.check_play(ctx))))
        # for the above, instead of functools.partial you could also create_task on the next line; I just find using the `after` kwarg much better

def setup(bot):
    bot.add_cog(Music(bot))
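For completeness, a minimal sketch of wiring the cog into a bot (assuming discord.py 1.x, where add_cog is synchronous; the prefix and token are placeholders):

import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="?")  # "?" prefix is an assumption
bot.add_cog(Music(bot))                 # or bot.load_extension("main"), which calls setup(bot)
bot.run("YOUR_TOKEN")                   # placeholder token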

Include Video In Alexa Flask-Ask Response

I just want to know how I can play a video as the response for an intent using Flask-Ask. I'm using this:
@ask.intent('AllYourBaseIntent')
def all_your_base():
    return statement('All your base are belong to us') \
        .simple_card(title='CATS says...', content='Make your time')
I guess this is not the right way.

How can I get Alexa to send authentication to android app before executing voice command

So this is what I am trying to do: when I ask Alexa to turn on the light (for example), it then needs to send an authentication code to an Android app (which I need to develop as well); then, when the user confirms the code, the light needs to turn on.

Can this be done, and are there any tutorials that explain it, or is there a better way of securing voice commands that perform certain functions?

Any help will be appreciated.

I followed this tutorial, so I am thinking that when asked to turn on the light, Alexa then asks for a PIN or password and only then sends the command to the GPIO pin.

Below is the code from the tutorial:
import logging
import os

from flask import Flask
from flask_ask import Ask, request, session, question, statement
import RPi.GPIO as GPIO

app = Flask(__name__)
ask = Ask(app, "/")
logging.getLogger('flask_ask').setLevel(logging.DEBUG)

STATUSON = ["on", "switch on", "enable", "power on", "activate", "turn on"]
# all values that are defined as synonyms in the slot type
STATUSOFF = ["off", "switch off", "deactivate", "turn off", "disable"]

@ask.launch
def launch():
    speech_text = 'Welcome to the Raspberry Pi alexa automation.'
    return question(speech_text).reprompt(speech_text).simple_card(speech_text)

@ask.intent('LightIntent', mapping={'status': 'status'})
# HERE TO ASK FOR CODE THEN IF CORRECT CONTINUE TO TURN LIGHT ON
def Gpio_Intent(status, room):
    GPIO.setwarnings(False)
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(17, GPIO.OUT)
    if status in STATUSON:
        GPIO.output(17, GPIO.HIGH)
        return statement('Light was turned on')
    elif status in STATUSOFF:
        GPIO.output(17, GPIO.LOW)
        return statement('Light was turned off')
    else:
        return statement('Sorry, this command is not possible.')

@ask.intent('AMAZON.HelpIntent')
def help():
    speech_text = 'You can say hello to me!'
    return question(speech_text).reprompt(speech_text).simple_card('HelloWorld', speech_text)

@ask.session_ended
def session_ended():
    return "{}", 200

if __name__ == '__main__':
    if 'ASK_VERIFY_REQUESTS' in os.environ:
        verify = str(os.environ.get('ASK_VERIFY_REQUESTS', '')).lower()
        if verify == 'false':
            app.config['ASK_VERIFY_REQUESTS'] = False
    app.run(debug=True)
Thanks
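Update: to make the "ask for a PIN first" idea concrete, here is a sketch of the flow I have in mind on top of the tutorial code above (the hard-coded PIN, the PinIntent name, and the pin slot are all made-up placeholders, not part of the tutorial):

PIN_CODE = "1234"  # assumption: a fixed PIN, for illustration only

@ask.intent('LightIntent', mapping={'status': 'status'})
def Gpio_Intent(status, room):
    # Remember what was requested and ask for the PIN instead of acting right away.
    session.attributes['pending_status'] = status
    return question('Please say your PIN to confirm.')

@ask.intent('PinIntent', mapping={'pin': 'pin'})
def Pin_Intent(pin):
    status = session.attributes.get('pending_status')
    if pin != PIN_CODE or status is None:
        return statement('Sorry, I could not verify that request.')
    GPIO.setwarnings(False)
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(17, GPIO.OUT)
    GPIO.output(17, GPIO.HIGH if status in STATUSON else GPIO.LOW)
    return statement('Light was turned ' + ('on' if status in STATUSON else 'off'))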

Errors with multi-turn dialog in Alexa skills

I have created a skill with the name "BuyDog", and its invocation name is "dog app".

So that should mean I can use the intents defined inside only after the invocation name is heard. (Is that correct?)

Then I have defined the intents with slots as:

"what is {dog} price."
"Tell me the price of {dog}."

where the slot {dog} is of slot type "DogType". I have marked this slot as required to fulfill the intent.

Then I added the endpoint to an AWS Lambda function, where I used the blueprint code of the factskills project in Node.js and made a few minor changes just to see it working.
const GET_DOG_PRICE_MESSAGE = "Here's your pricing: ";
const data = [
    'You need to pay $2000.',
    'You need to pay Rs2000.',
    'You need to pay $5000.',
    'You need to pay INR 3000.',
];

const handlers = {
    //some handlers.......................
    'DogIntent': function () {
        const factArr = data;
        const factIndex = Math.floor(Math.random() * factArr.length);
        const randomFact = factArr[factIndex];
        const speechOutput = GET_DOG_PRICE_MESSAGE + randomFact;
        this.emit(':tell', speechOutput); // speak the response (alexa-sdk v1)
    },
    //some handlers.......................
};
As per the above code, I was expecting that when

I say: "Alexa, open dog app"

it should just be ready to listen to the intent "what is {dog} price." and the other one. Instead, it says a random string from the data[] array in the Node.js code. I was expecting that response only after the intent was spoken, since the slot is required for the intent to complete.

And when

I say: "open the dog app and tell me the price of XXXX."

it asks "which breed" (that is my defined question), but then it just works and shows the pricing.

Alexa says: "Here's your pricing: You need to pay $5000."

(or another value from the data array) for any XXXX (i.e., dog type or not).

Why is Alexa not confirming whether the word is in the slot's value set?

And when

I say: "open the dog bark."

I expected Alexa not to understand the question, but it gave me a fact about barking. WHY? How did that happen? Does Alexa have a default set of skills, like search Google/Amazon, etc.?

I am so confused. Please help me understand what is going on.
Without your full code to see exactly what is happening and provide code answers, I hope an explanation of your problems/questions will point you in the right direction.

1. Launching the Skill

I say: "Alexa open dog app"
It should just be ready to listen to the intent...

You are expecting Alexa to just listen, but actually, Alexa opens your skill and expects you to have a generic welcome response at this point. Alexa will send a LaunchRequest to your Lambda. This is different from an IntentRequest, so you can detect it by checking request.type, usually with:

this.event.request.type === 'LaunchRequest'

I suggest you add some logging to your Lambda and use CloudWatch to see the incoming request from Alexa:

console.log("ALEXA REQUEST= " + JSON.stringify(event))
2. Slot Value Recognition

I say: "open the dog app and Tell me the price of XXXX."
Why is alexa not confirming the word is in slot set or not?

Alexa does not limit a slot to the slot values set in the slotType. The values you give the slotType are used as a guide, but other values are also accepted.

It is up to you, in your Lambda function, to validate those slot values and make sure they are set to a value you accept. There are many ways to do this, so just start by detecting what the slot has been filled with, usually found with:

this.event.request.intent.slots.{slotName}.value;

If you choose to set up synonyms in the slotType, then Alexa will also provide her recommended slot value resolutions. For example, you could include "Rotty" as a synonym for "Rottweiler": Alexa will fill the slot with "Rotty" but also suggest that you resolve it to "Rottweiler".

var resolutionsArray = this.event.request.intent.slots.{slotName}.resolutions.resolutionsPerAuthority;

Again, use console.log and CloudWatch to view the slot values that Alexa accepts and fills.
3. Purposefully Failing to Launch the Skill

I say: "open the dog bark".
I expected alexa to not understand the question but it gave me a fact about barking.

You must be doing this outside of your skill, where Alexa will take any input and try to match it to an enabled skill, or handle it with her best guess at her default abilities.

Alexa does have default built-in abilities (not skills, really) to answer general questions and just be fun and friendly. You can see what she can do on her own here: Alexa - Things To Try

So my guess is that Alexa figured you were asking something about dog barks, and so provided an answer. You can ask her "What is a dog bark" and see if she responds with the exact same answer as "open the dog bark", just to confirm this suspicion.
To really understand developing an Alexa skill you should spend the time to get very familiar with this documentation:
Alexa Request and Response JSON Formats
You didn't post a lot of your code, so it's hard to tell exactly what you meant, but usually, to handle incomplete dialogs, you can add an incomplete-event handler like this:

const IncompleteDogsIntentHandler = {
    // Occurs when the required slots are not filled
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'IntentRequest'
            && handlerInput.requestEnvelope.request.intent.name === 'DogIntent'
            && handlerInput.requestEnvelope.request.dialogState !== 'COMPLETED';
    },
    async handle(handlerInput) {
        // Delegate the dialog back to Alexa so she prompts for the missing slots
        return handlerInput.responseBuilder
            .addDelegateDirective(handlerInput.requestEnvelope.request.intent)
            .getResponse();
    }
};

You add this handler right above your actual handler, usually in the index.js file of your Lambda.

This might not fix all your issues, but it will help you handle the event when a user doesn't mention a dog.
