How to set a default action in Rasa

With vanilla Rasa NLU, Rasa Core simply uses the highest-probability intent or entity value. In other words, even if the probability of an intent is low, as long as it is the highest of all the options it is still taken by Rasa Core as the intent the user is conveying. How do I make Rasa Core perform a default action when the probability of the top intent returned by the NLU is below a certain threshold, say 5%?

We can achieve this by adding the FallbackPolicy to the policies file, for example:
policies:
- name: "FallbackPolicy"
  nlu_threshold: 0.1
  core_threshold: 0.1
  fallback_action_name: "fallback_action"

This feature was added recently and is called Fallback Policy.
See documentation here: https://core.rasa.com/patterns.html?highlight=fallback%20policy#fallback-default-actions

Simply, you can do it in two steps.
Step 1: In the domain.yml file:
actions:
- action_default_fallback
Step 2: In the action.py file:
from rasa_core_sdk import Action

class ActionDefaultFallback(Action):
    def name(self):
        return "action_default_fallback"

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Sorry, I couldn't understand.")
        return []
Now, whenever the intent classification confidence is below the configured threshold, this default action will execute.


Exceeded soft memory limit of 512 MB with 532 MB after servicing 3 requests total. Consider setting a larger instance class in app.yaml

We are on the Google App Engine standard environment, F2 instance (generation 1 - Python 2.7). We have a reporting module that follows this flow.
A worker task is initiated in a queue:
task = taskqueue.add(
    url='/backendreport',
    target='worker',
    queue_name='generate-reports',
    params={
        "task_data": task_data
    })
In the worker class, we query Google Datastore and write the data to a Google Sheet. We paginate through the records to find additional report elements. When we find an additional page, we call the same task again to spawn another write, so it can fetch the next set of report elements and write them to the Google Sheet.
In backendreport.py we have the following code:
class BackendReport():
    # Query Google Datastore to find the records (paginated)
    result = self.service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_Id,
        range=range_name,
        valueInputOption=value_input_option,
        body=resource_body).execute()
    # If pagination finds additional records
    task = taskqueue.add(
        url='/backendreport',
        target='worker',
        queue_name='generate-reports',
        params={
            "task_data": task_data
        })
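In outline, each task handles one page and then re-enqueues itself for the next page. A simplified sketch of that flow (entity and helper names below are placeholders, not our actual code):
from google.appengine.api import taskqueue
from google.appengine.ext import ndb

PAGE_SIZE = 500  # placeholder

def handle_report_page(task_data, cursor_urlsafe=None):
    cursor = ndb.Cursor(urlsafe=cursor_urlsafe) if cursor_urlsafe else None
    # Fetch one page of report records from Datastore.
    rows, next_cursor, more = ReportRow.query().fetch_page(
        PAGE_SIZE, start_cursor=cursor)
    write_rows_to_sheet(rows)  # placeholder for the values().update() call above
    if more and next_cursor:
        # Spawn a fresh task for the next page instead of looping in this request.
        taskqueue.add(
            url='/backendreport',
            target='worker',
            queue_name='generate-reports',
            params={'task_data': task_data,
                    'cursor': next_cursor.urlsafe()})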
We run the same BackendReport (with pagination) as a front end job (not as a task). The pagination works without any error - meaning we fetch each page of records and display to the front end. But when we execute the tasks iteratively it fails with the soft memory limit issue. We were under the impression that every time a task is called (for each pagination) it should act independently and there shouldn't be any memory constraints. What are we doing wrong here?
Why doesn't GCP spin up a different instance automatically when the soft memory limit is reached (our instance class is F2)?
The error message says soft memory limit of 512 MB reached after servicing 3 requests total - does this mean that the backendreport module spun up 3 requests, i.e. that there were 3 task calls (/backendreport)?
Why doesn't GCP spin a different instance when the soft memory limit is reached
One of the primary factors App Engine uses to decide when to spin up a new instance is max_concurrent_requests. You can check out all of the automatic_scaling params you can configure here:
https://cloud.google.com/appengine/docs/standard/python/config/appref#scaling_elements
does this mean that the backendreport module spun up 3 requests - does it mean there were 3 task calls (/backendreport)?
I think so. To be sure, you can open up the Logs viewer, find the log where this was printed and filter your logs by that instance-id to see all the requests it handled that led to that point.
You're creating multiple tasks in Cloud Tasks, but there's no limitation on the dispatching queue, so as the queue tries to dispatch multiple tasks at the same time it hits the memory limit. The limit you want to set is indeed max_concurrent_requests, but not for the instances in app.yaml; it should be set for queue dispatching in queue.yaml, so that only one task is dispatched at a time:
queue:
- name: generate-reports
  rate: 1/s
  max_concurrent_requests: 1

build mode - activate logs with a certain level

I'm using QX 5.0.2.
In Build mode, is there a way to (re)activate logs and only show warning+error (log level?)?
The environment key 'qx.debug' = true seems to reactivate logs, but I can't find the environment key for the log level (to set it, for example, to warning).
Thanks in advance.
I think what you're looking for is the static addFilter method, documented at http://www.qooxdoo.org/devel/api/#qx.log.Logger. It lets you filter messages by the class from which they originate, by log level, and on an all-appenders basis or per appender.
Derrell

Creating a cluster before sending a job to dataproc programmatically

I'm trying to schedule a PySpark job. I followed the GCP documentation and ended up deploying a little Python script to App Engine which does the following:
authenticate using a service account
submit a job to a cluster
The problem is, I need the cluster to be up and running, otherwise the job won't be sent (duh!), but I don't want the cluster to always be up and running, especially since my job needs to run once a month.
I wanted to add the creation of a cluster in my Python script, but the call is asynchronous (it makes an HTTP request) and thus my job is submitted after the cluster creation call but before the cluster is really up and running.
How could I do this?
I'd like something cleaner than just waiting for a few minutes in my script!
Thanks
EDIT: Here's what my code looks like so far:
To launch the job
class EnqueueTaskHandler(webapp2.RequestHandler):
    def get(self):
        task = taskqueue.add(
            url='/run',
            target='worker')
        self.response.write(
            'Task {} enqueued, ETA {}.'.format(task.name, task.eta))

app = webapp2.WSGIApplication([('/launch', EnqueueTaskHandler)], debug=True)
The job
class CronEventHandler(webapp2.RequestHandler):

    def create_cluster(self, dataproc, project, zone, region, cluster_name):
        zone_uri = 'https://www.googleapis.com/compute/v1/projects/{}/zones/{}'.format(project, zone)
        cluster_data = {...}
        dataproc.projects().regions().clusters().create(
            projectId=project,
            region=region,
            body=cluster_data).execute()

    def wait_for_cluster(self, dataproc, project, region, clustername):
        print('Waiting for cluster to run...')
        while True:
            result = dataproc.projects().regions().clusters().get(
                projectId=project,
                region=region,
                clusterName=clustername).execute()
            # Handle exceptions
            if result['status']['state'] != 'RUNNING':
                time.sleep(60)
            else:
                return result

    def wait_for_job(self, dataproc, project, region, job_id):
        print('Waiting for job to finish...')
        while True:
            result = dataproc.projects().regions().jobs().get(
                projectId=project,
                region=region,
                jobId=job_id).execute()
            # Handle exceptions
            print(result['status']['state'])
            if result['status']['state'] == 'ERROR' or result['status']['state'] == 'DONE':
                return result
            else:
                time.sleep(60)

    def submit_job(self, dataproc, project, region, clusterName):
        job = {...}
        result = dataproc.projects().regions().jobs().submit(
            projectId=project, region=region, body=job).execute()
        return result['reference']['jobId']

    def post(self):
        dataproc = googleapiclient.discovery.build('dataproc', 'v1')
        project = '...'
        region = "..."
        zone = "..."
        clusterName = '...'
        self.create_cluster(dataproc, project, zone, region, clusterName)
        self.wait_for_cluster(dataproc, project, region, clusterName)
        job_id = self.submit_job(dataproc, project, region, clusterName)
        self.wait_for_job(dataproc, project, region, job_id)
        dataproc.projects().regions().clusters().delete(
            projectId=project, region=region, clusterName=clusterName).execute()
        self.response.write("JOB SENT")

app = webapp2.WSGIApplication([('/run', CronEventHandler)], debug=True)
Everything works until the deletion of the cluster. At this point I get a "DeadlineExceededError: The overall deadline for responding to the HTTP request was exceeded." Any idea?
In addition to general polling, either through list or get requests on the Cluster or on the Operation returned by the CreateCluster request, for single-use clusters like this you can also consider using the Dataproc Workflows API, and possibly its InstantiateInline interface if you don't want to use full-fledged workflow templates. In this API you use a single request to specify cluster settings along with the jobs to submit; the jobs automatically run as soon as the cluster is ready, after which the cluster is deleted automatically.
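A rough sketch of what an inline workflow instantiation might look like with the same discovery-based client (the body fields below follow the v1 WorkflowTemplate resource as I understand it; project, region, cluster config and GCS paths are placeholders, so double-check against the API reference):
import googleapiclient.discovery

dataproc = googleapiclient.discovery.build('dataproc', 'v1')
project = 'my-project'   # placeholder
region = 'europe-west1'  # placeholder

template = {
    'placement': {
        'managedCluster': {
            'clusterName': 'monthly-report-cluster',  # placeholder
            'config': {},  # same settings you currently pass as cluster_data['config']
        }
    },
    'jobs': [{
        'stepId': 'monthly-report',
        'pysparkJob': {'mainPythonFileUri': 'gs://my-bucket/job.py'},  # placeholder
    }],
}

# One request: the cluster is created, the job runs, and the cluster is deleted.
operation = dataproc.projects().regions().workflowTemplates().instantiateInline(
    parent='projects/{}/regions/{}'.format(project, region),
    body=template).execute()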
You can use the Google Cloud Dataproc API to create, delete and list clusters.
The list operation can be (repeatedly) performed after create and delete operations to confirm that they completed successfully, since it provides the ClusterStatus of the clusters in the results with the relevant State information:
UNKNOWN: The cluster state is unknown.
CREATING: The cluster is being created and set up. It is not ready for use.
RUNNING: The cluster is currently running and healthy. It is ready for use.
ERROR: The cluster encountered an error. It is not ready for use.
DELETING: The cluster is being deleted. It cannot be used.
UPDATING: The cluster is being updated. It continues to accept and process jobs.
To prevent plain waiting between the (repeated) list invocations (in general not a good thing to do on GAE) you can enqueue delayed tasks in a push task queue (with the relevant context information) allowing you to perform such list operations at a later time. For example, in python, see taskqueue.add():
countdown -- Time in seconds into the future that this task should run or be leased. Defaults to zero. Do not specify this argument if you specified an eta.
eta -- A datetime.datetime that specifies the absolute earliest time at which the task should run. You cannot specify this argument if the countdown argument is specified. This argument can be time zone-aware or time zone-naive, or set to a time in the past. If the argument is set to None, the default value is now. For pull tasks, no worker can lease the task before the time indicated by the eta argument.
If, at task execution time, the result indicates that the operation of interest is still in progress, simply enqueue another such delayed task; effectively polling, but without an actual wait/sleep.
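For illustration, a hedged sketch of that pattern (the handler paths and parameter names are made up for the example):
from google.appengine.api import taskqueue

def check_cluster_task(dataproc, project, region, cluster_name):
    result = dataproc.projects().regions().clusters().get(
        projectId=project, region=region, clusterName=cluster_name).execute()
    state = result['status']['state']
    if state == 'RUNNING':
        # Cluster is ready: hand off to the job-submission handler.
        taskqueue.add(url='/submit-job', params={'cluster': cluster_name})
    elif state == 'ERROR':
        raise RuntimeError('cluster creation failed')
    else:
        # Still CREATING (or similar): check again in 60 seconds via a delayed
        # task, without sleeping inside this request.
        taskqueue.add(url='/check-cluster',
                      params={'cluster': cluster_name},
                      countdown=60)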

What response times can be expected from GAE/NDB?

We are currently building a small and simple central HTTP service that maps "external identities" (like a facebook id) to an "internal (uu)id", unique across all our services to help with analytics.
The first prototype in "our stack" (flask+postgresql) was done within a day. But since we want the service to (almost) never fail and scale automagically, we decided to use Google App Engine.
After a week of reading, trying, and benchmarking, this question emerges:
What response times are considered "normal" on App Engine (with NDB)?
We are getting response times that are consistently above 500ms on average and well above 1s at the 90th percentile.
I've attached a stripped-down version of our code below, hoping somebody can point out the obvious flaw. We really like the autoscaling and the distributed storage, but we cannot imagine that 500ms really is the expected performance in our case. The SQL-based prototype responded much faster (consistently), hosted on a single Heroku dyno using the free, cache-less PostgreSQL (even with an ORM).
We tried both synchronous and asynchronous variants of the code below and looked at the Appstats profile. It's always the RPC calls (both memcache and datastore) that take long (50ms-100ms), made worse by the fact that there are always multiple calls (e.g. mc.get() + ds.get() + ds.set() on a write). We also tried deferring as much as possible to the task queue, without noticeable gains.
import json
import uuid

from google.appengine.ext import ndb

import webapp2
from webapp2_extras.routes import RedirectRoute


def _parse_request(request):
    if request.content_type == 'application/json':
        try:
            body_json = json.loads(request.body)
            provider_name = body_json.get('provider_name', None)
            provider_user_id = body_json.get('provider_user_id', None)
        except ValueError:
            return webapp2.abort(400, detail='invalid json')
    else:
        provider_name = request.params.get('provider_name', None)
        provider_user_id = request.params.get('provider_user_id', None)
    return provider_name, provider_user_id


class Provider(ndb.Model):
    name = ndb.StringProperty(required=True)


class Identity(ndb.Model):
    user = ndb.KeyProperty(kind='GlobalUser')


class GlobalUser(ndb.Model):
    uuid = ndb.StringProperty(required=True)

    @property
    def identities(self):
        return Identity.query(Identity.user == self.key).fetch()


class ResolveHandler(webapp2.RequestHandler):

    @ndb.toplevel
    def post(self):
        provider_name, provider_user_id = _parse_request(self.request)
        if not provider_name or not provider_user_id:
            return self.abort(400, detail='missing provider_name and/or provider_user_id')
        identity = ndb.Key(Provider, provider_name, Identity, provider_user_id).get()
        if identity:
            user_uuid = identity.user.id()
        else:
            user_uuid = uuid.uuid4().hex
            GlobalUser(
                id=user_uuid,
                uuid=user_uuid
            ).put_async()
            Identity(
                parent=ndb.Key(Provider, provider_name),
                id=provider_user_id,
                user=ndb.Key(GlobalUser, user_uuid)
            ).put_async()
        return webapp2.Response(
            status='200 OK',
            content_type='application/json',
            body=json.dumps({
                'provider_name': provider_name,
                'provider_user_id': provider_user_id,
                'uuid': user_uuid
            })
        )


app = webapp2.WSGIApplication([
    RedirectRoute('/v1/resolve', ResolveHandler, 'resolve', strict_slash=True)
], debug=False)
For completeness' sake, the (almost default) app.yaml:
application: GAE_APP_IDENTIFIER
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: .*
  script: main.app

libraries:
- name: webapp2
  version: 2.5.2
- name: webob
  version: 1.2.3

inbound_services:
- warmup
In my experience, RPC performance fluctuates by orders of magnitude, between 5ms and 100ms for a datastore get. I suspect it's related to GAE datacenter load. Sometimes it gets better, sometimes it gets worse.
Your operation looks very simple. I expect that with 3 requests, it should take about 20ms, but it could be up to 300ms. A sustained average of 500ms sounds very high though.
ndb does local caching when fetching objects by ID. That should kick in if you're accessing the same users, and those requests should be much faster.
I assume you're doing perf testing on the production and not dev_appserver. dev_appserver performance is not representative.
Not sure how many iterations you've tested, but you might want to try a larger number to see if 500ms is really your average.
When you're blocked on simple RPC calls, there's not much optimizing you can do.
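Regarding the local caching point above: ndb's per-model cache settings look roughly like this (these are, as far as I know, already the defaults; shown here just to make the knobs explicit):
class GlobalUser(ndb.Model):
    _use_cache = True     # per-request in-context cache
    _use_memcache = True  # shared memcache layer in front of the datastore
    uuid = ndb.StringProperty(required=True)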
The first obvious point I see: do you really need a transaction on every request?
I believe that unless most of your requests create new entities, it's better to do .get_by_id() outside of a transaction, and if the entity is not found, then start a transaction, or even better, defer the creation of the entity:
from google.appengine.ext.deferred import defer

def request_handler(key, data):
    entity = key.get()
    if entity:
        return 'ok'
    else:
        defer(_deferred_create, key, data)
        return 'ok'

def _deferred_create(key, data):
    @ndb.transactional
    def _tx():
        entity = key.get()
        if not entity:
            entity = CreateEntity(data)  # placeholder for constructing the new entity
            entity.put()
    _tx()
That should give much better response time for user facing requests.
The second (and only other) optimization I see is to use ndb.put_multi() to minimize RPC calls.
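For example, the two writes in the handler above could go out together, roughly like this (a sketch based on the posted handler):
ndb.put_multi_async([
    GlobalUser(id=user_uuid, uuid=user_uuid),
    Identity(parent=ndb.Key(Provider, provider_name),
             id=provider_user_id,
             user=ndb.Key(GlobalUser, user_uuid)),
])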
P.S. Not 100% sure, but you can try to disable multithreading (threadsafe: no) to get more stable response times.

How do I get the values from the counter after I processed all the records with Google AppEngine MapReduce?

How do I get the values from the counter after I processed all the records with Google AppEngine MapReduce?
Or am I missing the use case for counters here?
Sample Code from http://code.google.com/p/appengine-mapreduce/wiki/UserGuidePython
How would I retrieve the value of counter counter1 when the mapreduce is done?
app.yaml
handlers:
- url: /mapreduce(/.*)?
  script: mapreduce/main.py
  login: admin
mapreduce/main.py
from mapreduce import operation as op

def process(entity):
    yield op.counters.Increment("counter1")
mapreduce.yaml
mapreduce:
- name: <Some descriptive name for UI>
  mapper:
    input_reader: mapreduce.input_readers.DatastoreInputReader
    handler: main.process
    params:
    - name: entity_kind
      default: <your entity name, e.g. main.MyEntity>
OK. Since you're posting the code from the docs, I'll assume you haven't tried to run a mapper yet. The results from the counter should show up on the admin page for the mapreduce session. There must be a way to access the values from within the mapper as well, but I am not familiar enough with the Python version of the API to tell you how. I know it is doable on the Java side.
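If it helps, here is a rough, unverified sketch of what reading the counter from the finished mapreduce state might look like in Python (the module and attribute names are my guesses from the library source, so treat them as assumptions):
from mapreduce import model

def read_counter1(mapreduce_id):
    # mapreduce_id is the id returned when the mapper was started.
    state = model.MapreduceState.get_by_job_id(mapreduce_id)
    return state.counters_map.get("counter1") if state else None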
