ask-cli api or lambda does not exist - alexa

I am wondering why ask-cli does not work when I try to use the lambda or api commands it lists. Does anybody know the reason? I could run ask deploy and can see my skill up and running on developer.amazon.com.
> ask -v
1.1.2
> ask --help
Usage: ask [options] [command]
Command Line Interface for Alexa Skill Management API
Commands:
  init [options]        initialize the ask-cli with your Amazon developer account credentials
  deploy [options]      deploy a skill to your developer account
  new [options]         create a new skill project on your computer
  clone [options]       clone an existing skill project on your computer
  simulate [options]    simulate a user using your skill
  lambda                list of AWS Lambda commands
  api                   list of Alexa Skill Management API commands
  help [cmd]            display help for [cmd]
> ask lambda
ask-lambda(1) does not exist, try --help
> ask api
ask-api(1) does not exist, try --help

After installing ask-cli via npm, everything works as expected:
> npm install -g ask-cli
I had previously used brew to install ask-cli.
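In other words, if you hit this with a brew-installed copy, removing it and reinstalling through npm should fix the missing subcommands (the brew formula name ask-cli is my assumption; check brew list):
> brew uninstall ask-cli
> npm install -g ask-cli
> ask api
> ask lambda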

Related

How can we execute a Jupyter notebook Python script automatically in SageMaker?

I used Terraform to create a SageMaker notebook instance and deploy a Jupyter notebook Python script to create and deploy a regression model.
I was able to run the script and create the model successfully via the AWS console manually. However, I could not find a way to get it executed automatically. I even tried executing the script via shell commands through the notebook instance's lifecycle configuration. However, it did not work as expected. Any other ideas?
Figured this out. I passed the below script to the notebook instance as its lifecycle configuration:
#!/bin/sh
sudo -u ec2-user -i <<'EOF'
# Activate the notebook instance's built-in python3 conda environment
source activate python3
# runipy runs a notebook from the command line
pip install runipy
# Run the notebook in the background so the lifecycle script can return
nohup runipy <<path_to_the_jupyter_notebook>> &
source deactivate
EOF
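If you would rather wire this up from the command line than from Terraform, a sketch along these lines could register the script as an on-start lifecycle configuration; the config name, instance name, and role ARN are hypothetical placeholders:
# Save the script above as on-start.sh, then register it (the API expects
# base64-encoded content; -w0 disables line wrapping on GNU coreutils):
aws sagemaker create-notebook-instance-lifecycle-config \
    --notebook-instance-lifecycle-config-name run-notebook-on-start \
    --on-start Content="$(base64 -w0 on-start.sh)"
# Reference the config when creating the notebook instance:
aws sagemaker create-notebook-instance \
    --notebook-instance-name my-notebook \
    --instance-type ml.t2.medium \
    --role-arn arn:aws:iam::123456789012:role/MySageMakerRole \
    --lifecycle-config-name run-notebook-on-start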

gcloud: how to download the app via cli

I deployed an app with gcloud preview app deploy.
Is there a way to download it to another local machine?
How can I get the files? I tried it via ssh with no success (I can't access the Docker dir).
UPDATE:
I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir
but it doesn't download any files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
I am coming back to Google App Engine after two years, and I see that they have made lots of improvements and added tons of features. But sadly, their documentation sometimes leaves much to be desired.
I used to download the code of an uploaded version with appcfg.py using the following command:
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course they have now consolidated everything into the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:
Go to the console and into Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click on the top right to open the built-in console window.
This should load the Cloud Shell at the bottom; if you check, appcfg.py is available for you to use in this VM.
Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Now once you have the code in the desired folder, in order to download it to your local machine you can open the Docker code editor.
Here I assumed that if I right-clicked and exported the desired folder it would work, but instead it gave me the following error message:
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}
So, I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode
Go to the Docker code editor, export, and download.
Of course there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after two years.
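One more option: newer Cloud Shell sessions ship a small download helper, so the code editor can be skipped entirely. Assuming the cloudshell command is available in your session (check with cloudshell help):
# Archive the app directory as before, then trigger a browser download:
zip -r mycode.zip mycode
cloudshell download mycode.zip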
Currently, the best way to do this is to pull the files out of Docker.
Put the instance into self-managed mode so that you can SSH into it:
$ gcloud preview app modules set-managed-by default --version 1 --self
Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1
Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]
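When you are finished, the instance can be handed back to Google's management; assuming --google is the counterpart of the --self flag used above:
$ gcloud preview app modules set-managed-by default --version 1 --google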
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the window that opens (Build details), click the source link and the download of your compressed code begins.
As simple as that.
HTH.
As of Feb 2021, you can install appengine-sdk using pip
pip install appengine-sdk
Once installed, appcfg can be used to download the app code.
python -m appcfg download_app -A app_id [ -V version ] out-dir
Nothing else worked for me; I finally found the source code this way. Simply go to Google Cloud Storage, choose the bucket whose name starts with us.artifacts...., select containers > images, and download the latest file (look at the created date). Unzip the downloaded file; it will contain all the deployed source code of the App Engine app.
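A hedged command-line sketch of that route, assuming the usual us.artifacts.PROJECT_ID.appspot.com bucket naming and that gsutil is installed; the digest below is a placeholder:
# List the stored image layers with sizes and creation times:
gsutil ls -l gs://us.artifacts.PROJECT_ID.appspot.com/containers/images/
# Fetch the newest layer; the layers are gzipped tarballs:
gsutil cp gs://us.artifacts.PROJECT_ID.appspot.com/containers/images/sha256:LATEST_DIGEST ./layer.tgz
mkdir app-src && tar -xzf layer.tgz -C app-src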

Unable to clone app engine project in pycharm

I am trying to clone an App Engine project written in Python into PyCharm.
My version of git is 1.9.
I have the latest version of PyCharm.
I have run gcloud auth login so that I can authenticate using my Google account. When I try to clone the repository at https://source.developers.google.com/p/APP-ENGINE-PROJECT,
I get a dialog box requesting me to enter a username and password.
I enter my Gmail account but I can't log in. It tells me it can't connect to the repository.
Please help.
Do the following:
Create a directory where you want your local Git repository to be located and navigate to it.
$ mkdir directory
$ cd directory
Run the gcloud auth login command. This command gets the credentials required to access your Cloud Repository from the Google Cloud Platform.
$ gcloud auth login
Run the gcloud init command. This command creates the local Git repository and adds your Cloud Repository as the Git origin remote.
$ gcloud init project_id
The gcloud init command creates a directory named project_id/default in the current directory. The default directory contains your local Git repository.
Run PyCharm and do one of the following:
On the Welcome screen, click Open
On the main menu, choose File | Open.
In the Select Path dialog box, select the directory named project_id/default
PyCharm will connect to the repository automatically.
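On newer Cloud SDK versions, where gcloud init no longer clones repositories, the same checkout can presumably be done with the source repos command group (default is the repository name App Engine's push-to-deploy uses):
$ gcloud auth login
$ gcloud source repos clone default my_local_dir --project=PROJECT_ID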

gcloud auth login with Docker does not work as it says in documentation

I've followed the Docker instructions from here exactly: https://cloud.google.com/sdk/#install-docker (click Alternative Methods to find Docker instructions).
But when I run:
docker run -t -i --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list
I get:
ERROR: (gcloud.compute.instances.list) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials, or if you have already logged in with a
different account:
$ gcloud config set account <account name>
to select an already authenticated account to use.
It doesn't look like it's picking up that I already authenticated. Any ideas?
The link doesn't point to anything about Docker; could you give the correct link?
The error output is pointing you to follow the two-step Google verification. If you run:
gcloud auth login
the verification process will start, and then you will be able to manipulate your Google Cloud project.
However, this page [1] could guide you in installing Docker on Google Cloud.
[1] - http://docs.docker.com/installation/google/
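For reference, the pattern those instructions describe is to authenticate once in a named container so the credentials persist in its volume, then mount that volume in later runs; a sketch using the names from the question:
# Authenticate once; the OAuth flow stores credentials inside gcloud-config:
docker run -t -i --name gcloud-config google/cloud-sdk gcloud auth login
# Subsequent runs reuse those credentials via --volumes-from:
docker run -t -i --volumes-from gcloud-config google/cloud-sdk \
    gcloud compute instances list --project PROJECT_ID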

Binding database to app in cloudbees when deploying via jenkins

I have now managed to deploy a web app on run#cloud. I do have the CloudBees Deployer plugin on Jenkins; however, I am looking for a way to use the Bees SDK to bind a database to the deployed app. I was wondering how to go about it.
Currently, I deploy it via Jenkins as a post-build action.
You can configure the Bees SDK in DEV#cloud with a script like this (assuming that you have uploaded a build secret zip file containing your ~/.bees/bees.config using the environment variable ${SECRET} - please see the Build Secret Plugin).
Run this as an "Execute Shell" task within Jenkins, and then you can use the Bees SDK in the normal way to bind the database (or any resource) to your app, e.g.
bees app:bind -a acme/test -db mydb
See the Database Guide for more details.
Jenkins Execute Shell Script:
# Download and unpack the Bees SDK into the workspace if it is not there yet
if [[ ! -d "${WORKSPACE}/bees-sdks" ]]
then
  mkdir ${WORKSPACE}/bees-sdks
fi
cd ${WORKSPACE}/bees-sdks
curl -o cloudbees-sdk-1.5.0-bin.zip http://cloudbees-downloads.s3.amazonaws.com/sdk/cloudbees-sdk-1.5.0-bin.zip
unzip -o cloudbees-sdk-1.5.0-bin.zip
rm cloudbees-sdk-1.5.0-bin.zip
# Put the SDK on the PATH for the rest of the build
PATH=${WORKSPACE}/bees-sdks/cloudbees-sdk-1.5.0:$PATH; export PATH
# Install the bees.config uploaded via the Build Secret Plugin
if [[ ! -d ~/.bees ]]
then
  mkdir ~/.bees
fi
cp ${SECRET}/bees.config ~/.bees/bees.config
I've done an online example here that illustrates how this all works. Sorry this is a little more complicated than we would like: we are working to make it smoother and I will update this answer shortly once the changes go live.
