How to import a Python library into an Alexa Skill

I am simply trying to import the random library into my Alexa Skill, but I'm getting "There was a problem with the requested skill's response" every time I try to use it in my Lambda function's code (e.g. index = random.randrange(len(array))). I've seen answers ranging from simply putting the library into your requirements.txt to using a zip file to do it. Nothing has worked and/or makes sense. Thanks for the help.

The random library is part of the Python standard library. You just need to add import random at the top of your script with the other import statements. For non-standard-library packages, you add them to requirements.txt and then import them in the script, just like you'd do with Node and the package.json file.
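For example, if the skill needed a third-party package (requests here is purely a hypothetical example, not from the question), the Alexa-hosted flow would look roughly like this:
# requirements.txt
requests>=2.28

# lambda_function.py
import requests  # installed from requirements.txt when the skill is deployed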
Here's my edited version of the LaunchRequest handler from the default Python Hello World template in the Alexa Developer Console. The import random was added to the script after import logging, up around line 8.
class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        rnum = str(random.randint(0, 9))
        speak_output = "Welcome, your random number is " + rnum

        return (
            handler_input.response_builder
            .speak(speak_output)
            .ask(speak_output)
            .response
        )
Because I'm combining the number with a string to produce the speech output, I have to cast the integer to a string, but that's about it.
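As an aside, for the question's original use case of picking a random element out of a list, the standard library's random.choice does in one step what indexing with randrange does in two:
import random

array = ["red", "green", "blue"]  # sample data, not from the original skill
index = random.randrange(len(array))  # the question's approach
element = random.choice(array)  # equivalent one-step idiom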
Beyond that, there's a "CloudWatch Logs" button at the top of the editor in the Alexa Developer Console. Click that and check the latest log stream for the output of whatever error might be occurring.
For example, I don't write Python often and I'm so used to JavaScript's type coercion that I forgot to cast the random number to a string before concatenating it. I know... bad habit. That error was clearly indicated in the CloudWatch logs. I fixed it and everything worked like a charm.

Related

How to read in files with a specific file ending at compile time in Nim?

I am working on a desktop application using Nim's webgui package, which sort of works like Electron in that it renders a GUI using HTML + CSS + JS. However, instead of bundling its own browser and having a backend in Node, it uses the browser supplied by the OS (Epiphany under Linux/GNOME, Edge under Windows, Safari under iOS) and allows writing the backend in Nim.
In that context I am basically writing an SPA in Angular and need to load the HTML, JS and CSS files into my binary at compile time.
Reading from a known absolute filepath is not an issue; you can use Nim's staticRead proc for that.
However, I would like to avoid having to adjust the filenames in my application code all the time, e.g. when a new build of the SPA changes a file name from main.a72efbfe86fbcbc6.js to main.b72efbfe86fbcbc6.js.
There are iterators in std/os that you can use at runtime, walkFiles and walkPattern, but these fail when used at compile time!
import std/[os, sequtils, strformat, strutils]
const resourceFolder = "/home/philipp/dev/imagestable/html" # Put into config file
const applicationFiles = toSeq(walkFiles(fmt"{resourceFolder}/*"))
/home/philipp/.choosenim/toolchains/nim-#devel/lib/pure/os.nim(2121, 11) Error: cannot 'importc' variable at compile time; glob
How do I get around this?
Thanks to enthus1ast from Nim's Discord server I arrived at an answer: using the collect macro with the walkDir iterator.
The walkDir iterator does not make use of anything that is only available at runtime and thus can be safely used at compile time. With the collect macro you can iterate over all the files in a specific directory and collect their paths into a compile-time seq!
Basically you write a collect block, which is a simple for loop whose body evaluates to a value on each iteration; the collect macro gathers those values into a seq at the end.
The end result looks pretty much like this:
import std/[sequtils, sugar, strutils, strformat, os]
import webgui

const resourceFolder = "/home/philipp/dev/imagestable/html"

proc getFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  result = collect:
    for path in walkDir(folder):
      if path.path.endsWith(fmt".{fileEnding}"): path.path

proc readFilesWithEnding(folder: string, fileEnding: string): seq[string] {.compileTime.} =
  result = getFilesWithEnding(folder, fileEnding).mapIt(staticRead(it))
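Used like this (a minimal sketch; the "js" extension is just an example), both the lookup and the reads happen at compile time, so the file contents are baked into the binary:
const jsFiles = readFilesWithEnding(resourceFolder, "js")  # embedded contents of every *.js file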

Sentinel import inside Terraform Cloud confusion: key "find_resources" doesn't support function calls

I'm using a Sentinel policy inside a Terraform Cloud workspace. My policy is rather simple:
import "tfplan/v2" as tfplan
allBDs = tfplan.find_resources("aci_bridge_domain")
violatingBDs = tfplan.filter_attribute_does_not_match_regex(allBDs,
"description", "^demo(.+)", true)
main = rule {
length(violatingBDs["messages"]) is 0
}
Unfortunately, it fails when invoked with this message:
An error occurred: 1 error occurred:
* ./allowed-terraform-version.sentinel:3:10: key "find_resources" doesn't support function calls
The documentation and source for find_resources expect a string, yet the Sentinel interpreter seems to think I'm invoking a method of tfplan. It's quite unclear why that is, and the documentation doesn't really help.
Any ideas?
OK, I found the issue. find_resources is not part of the tfplan/v2 import at all; it's a helper function defined in HashiCorp's tfplan-functions module, so calling it on tfplan fails. If I paste in the code for find_resources and its dependencies (to_string, evaluate_attribute), then everything works as expected.
So I have a simple import problem and need to figure out how to properly import https://raw.githubusercontent.com/hashicorp/terraform-guides/master/governance/third-generation/common-functions/tfplan-functions/tfplan-functions.sentinel
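For reference, the third-generation terraform-guides policies are designed to be consumed as a named Sentinel module rather than pasted in. A minimal sketch, assuming the helper file is vendored alongside the policy set (the paths are assumptions):
# sentinel.hcl
module "tfplan-functions" {
    source = "./common-functions/tfplan-functions/tfplan-functions.sentinel"
}

# the policy then imports the module, not tfplan/v2, for the helpers
import "tfplan/v2" as tfplan
import "tfplan-functions" as plan

allBDs = plan.find_resources("aci_bridge_domain")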

Flask on PythonAnywhere doesn't find file names with Spanish characters

I'm trying to build a website that has one page for downloading some resources, and it just so happens that my local Flask version finds any file name perfectly well (when using send_from_directory()), but once deployed on PythonAnywhere it just doesn't work for filenames that have Spanish accented characters such as á.
I guess it has something to do with Unicode, but I couldn't find how to fix it (the logs at PythonAnywhere don't seem to show anything, since Flask simply delivers a "Not found" page to the user).
...and I'd really like to have those accents in the names of the files people download (they're Anki decks, some of them intended for educational purposes, and it just feels wrong to deliver bad orthography in the deck name).
My code looks like this:
@app.route('/anki/d/<file>')
def d_anki(file):
    if file == "verbscat":
        ankideck = "[Rusca] Temps Verbals Catalans.apkg"
    elif file == "irregular":
        ankideck = "[Rusca] Verbs Irregulars Anglès.apkg"
    # ...
    else:
        return f"The file {file} wasn't found."
    return send_from_directory("./static/anki/", ankideck, as_attachment=True, cache_timeout=0)
(then I link to this URL in a button with <a href="/anki/d/irregular" ...>)
Oh, I just realized I can choose a different name for the downloaded file by adding attachment_filename="Whatever I want to call it" to the parameters of send_from_directory.
So I guess we can live with this workaround (keeping the actual files under simple non-accented names and attaching the proper accented name afterwards).
if file == "irregular":
ankideck = "irregular.apkg"
name = "[Rusca] Verbs Irregulars Anglès.apkg"
# ...
return send_from_directory("./static/anki/", ankideck, as_attachment=True, attachment_filename=name cache_timeout=0)
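A minimal sketch of the same workaround with the lookup pulled into a dict (the verbscat file name is an assumption; attachment_filename and cache_timeout match the Flask version used in the question, while newer Flask renames them to download_name and max_age):
# map URL slug -> (stored ASCII filename, accented download name)
DECKS = {
    "verbscat": ("verbscat.apkg", "[Rusca] Temps Verbals Catalans.apkg"),
    "irregular": ("irregular.apkg", "[Rusca] Verbs Irregulars Anglès.apkg"),
}

@app.route('/anki/d/<file>')
def d_anki(file):
    if file not in DECKS:
        return f"The file {file} wasn't found."
    ankideck, name = DECKS[file]
    return send_from_directory("./static/anki/", ankideck, as_attachment=True,
                               attachment_filename=name, cache_timeout=0)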

How to view JSON logs of a managed VM in the Log Viewer?

I'm trying to get JSON formatted logs on a Compute Engine VM instance to appear in the Log Viewer of the Google Developer Console. According to this documentation it should be possible to do so:
Applications using App Engine Managed VMs should write custom log files to the VM's log directory at /var/log/app_engine/custom_logs. These files are automatically collected and made available in the Logs Viewer.
Custom log files must have the suffix .log or .log.json. If the suffix is .log.json, the logs must be in JSON format with one JSON object per line. If the suffix is .log, log entries are treated as plain text.
This doesn't seem to be working for me: logs ending with .log are visible in the Log Viewer, but displayed as plain text. Logs ending with .log.json aren't visible at all.
It also contradicts another recent article that states that file names must end in .log and their contents are treated as plain text.
As far as I can tell Google uses fluentd to index the log files into the Log Viewer. In the GitHub repository I cannot find any evidence that .log.json files are being indexed.
Does anyone know how to get this working? Or is the documentation out-of-date and has this feature been removed for some reason?
Here is one way to generate JSON logs for the Managed VMs logviewer:
The desired JSON format
The goal is to create a single-line JSON object for each log line, containing:
{
  "message": "Error occurred!",
  "severity": "ERROR",
  "timestamp": {
    "seconds": 1437712034000,
    "nanos": 905
  }
}
(information sourced from Google: https://code.google.com/p/googleappengine/issues/detail?id=11678#c5)
Using python-json-logger
See: https://github.com/madzak/python-json-logger
import datetime
import logging
import time

def get_timestamp_dict(when=None):
    """Converts a datetime.datetime to a timestamp dict for the log record.

    Requires special handling to preserve microseconds.

    Args:
        when:
            A datetime.datetime instance. If None, the timestamp for 'now'
            will be used.

    Returns:
        A dict with 'seconds' (integer time since the epoch in milliseconds)
        and 'nanos' keys, matching the format expected by the Logs Viewer.
    """
    if when is None:
        when = datetime.datetime.utcnow()
    ms_since_epoch = float(time.mktime(when.utctimetuple()) * 1000.0)
    return {
        'seconds': int(ms_since_epoch),
        'nanos': int(when.microsecond / 1000.0),
    }

def setup_json_logger(suffix=''):
    try:
        from pythonjsonlogger import jsonlogger

        class GoogleJsonFormatter(jsonlogger.JsonFormatter):
            FORMAT_STRING = "{message}"

            def add_fields(self, log_record, record, message_dict):
                super(GoogleJsonFormatter, self).add_fields(log_record,
                                                            record,
                                                            message_dict)
                log_record['severity'] = record.levelname
                log_record['timestamp'] = get_timestamp_dict()
                log_record['message'] = self.FORMAT_STRING.format(
                    message=record.message,
                    filename=record.filename,
                )

        formatter = GoogleJsonFormatter()
        log_path = '/var/log/app_engine/custom_logs/worker' + suffix + '.log.json'
        make_sure_path_exists(log_path)  # helper defined elsewhere in the original app
        file_handler = logging.FileHandler(log_path)
        file_handler.setFormatter(formatter)
        logging.getLogger().addHandler(file_handler)
    except OSError:
        logging.warn("Custom log path not found for production logging")
    except ImportError:
        logging.warn("JSON Formatting not available")
To use it, simply call setup_json_logger; you may also want to change the worker part of the log name for your own service.
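For instance (hypothetical suffix and message), after calling it once at startup, anything logged through the root logger lands in the .log.json file as one JSON object per line:
setup_json_logger('-backend')  # writes to /var/log/app_engine/custom_logs/worker-backend.log.json
logging.getLogger().error("Error occurred!")  # emitted with severity ERROR and a timestamp dict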
I am currently working on a NodeJS app running on a managed VM and I am also trying to get my logs to be printed in the Google Developer Console. I created my log files in the /var/log/app_engine directory as described in the documentation. Unfortunately this doesn't seem to be working for me, even for the .log files.
Could you describe where your logs are created? Also, is your managed VM configured as "Managed by Google" or "Managed by User"? Thanks!

go-endpoints: Invalid date format and method not found

Hi, I have some problems with the Go endpoints and the Dart client library.
I use the Go library https://github.com/crhym3/go-endpoints and the Dart generator https://github.com/dart-lang/discovery_api_dart_client_generator
The easy examples work fine, but they never show how to use time.Time.
In my project, I have a struct with a field:
Created time.Time `json:"created"`
The output in the explorer looks like this:
"created": "2014-12-08T20:42:54.299127593Z",
When I use it in the Dart client library, I get the error
FormatException: Invalid date format 2014-12-08T20:53:56.346129718Z
Should I really format every time field in the Go app (see Format Timestamp in outgoing JSON in Golang?)?
My research suggests that Dart accepts something like:
t.Format(time.RFC3339) >> 2014-12-08T20:53:56Z
Second problem: if I comment out the Created field or leave it blank, I get another error:
The null object does not have a method 'map'.
NoSuchMethodError: method not found: 'map' Receiver: null Arguments:
[Closure: (dynamic) => dynamic]
But I can't figure out which object is null. I'm not sure if I'm using the Dart client correctly:
import 'package:http/browser_client.dart' as http;
...
var nameValue = querySelector('#name').value;
var json = {'name': nameValue};
LaylistApi api = new LaylistApi(new http.BrowserClient());
api.create(new NewLayListReq.fromJson(json)).then((LayList l) {
  print(l);
}).catchError((e) {
  querySelector('#err-message').innerHtml = e.toString();
});
Does anyone know of a larger project on GitHub with Go endpoints and Dart?
Thanks for any advice.
UPDATE [2014-12-11]:
I fixed the
NoSuchMethodError
with the correct discovery URL https://constant-wonder-789.appspot.com/_ah/api/discovery/v1/apis/greeting/v1/rest
The problem with the time FormatException is still open, but I'm one step further. If I create a new item, it doesn't work, but if I load the items from the datastore and send them back, it works.
I guess this can be fixed by implementing the json.Marshaler interface (thanks, Alex). I will update my source soon.
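A minimal sketch of that idea (the wrapper type name is an assumption): a type based on time.Time whose MarshalJSON truncates to whole seconds, producing the RFC 3339 form the Dart client parses:
package main

import (
	"encoding/json"
	"time"
)

// Timestamp wraps time.Time so the JSON output drops fractional seconds,
// e.g. "2014-12-08T20:53:56Z" instead of "2014-12-08T20:53:56.346129718Z".
type Timestamp time.Time

func (t Timestamp) MarshalJSON() ([]byte, error) {
	return json.Marshal(time.Time(t).UTC().Format(time.RFC3339))
}

The struct field from the question would then be declared as Created Timestamp `json:"created"`.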
See my example:
http://constant-wonder-789.appspot.com/
The full source code:
https://github.com/cloosli/greeting-example
