I have manually deployed a number of Alexa skills using a Lambda backend and understand the manual process; however, I am new to using the ASK CLI v2.
I believe I have followed all of the steps in the guide as far as getting both the ASK and AWS CLIs set up. I have set my roles in AWS.
I am currently just trying to get used to the process and running
ask new
changing the invocation and then running
ask deploy
Everything seems to run correctly until
Skill code built successfully.
Code for region default built to C:\location\projectName.ask\lambda\build.zip successfully with build flow nodejs-npm.
==================== Deploy Skill Infrastructure ====================
/ Deploy Alexa skill infrastructure for region "default"
→ No IAM role exists. Creating an IAM role...
And then we just wait... forever.
The AWS CLI profile has IAMFullAccess to create roles as needed.
What am I missing?
So it ended up being an issue somewhere between the permissions on my AWS role and the configuration. I changed which role I was using and re-configured ask and aws.
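If anyone else hits the same hang, it may help to first confirm exactly which AWS identity the configured profile resolves to, and that it really can touch IAM, before re-running ask deploy. A quick sketch (the profile name ask-cli-default is only an assumption; use whatever ask configure set up):
# Which identity does the profile actually resolve to?
aws sts get-caller-identity --profile ask-cli-default
# Can that identity read IAM at all? A failure here points at the role/policy rather than at the ASK CLI.
aws iam list-roles --max-items 1 --profile ask-cli-default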
I am not exactly sure where things were fixed, because I immediately ran into another error that ended up being a bit of a rabbit hole, which I will describe here because it is common enough and could be seen while troubleshooting my original issue.
The issue I ran into was that even when the deploy happened successfully, I could not test with the code that made it to my Lambda. In CloudWatch it presented as
"Runtime.ImportModuleError: Error: Cannot find module './dispatcher/error/mapper/GenericErrorMapper'"
This ended up being a bug in PowerShell's .zip compression on Windows when the archive is unpacked on Linux.
I had to run
Install-Module Microsoft.PowerShell.Archive -MinimumVersion 1.2.3.0 -Repository PSGallery -Force
https://github.com/PowerShell/PowerShell/issues/2140
This fixed my final issue.
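To confirm that the updated module is actually the one PowerShell picks up before re-running ask deploy, something like this should report version 1.2.3.0 or newer:
# List every installed copy of the Archive module and its version
Get-Module Microsoft.PowerShell.Archive -ListAvailable | Select-Object Name, Version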
Problem
I created an app and deployed it via AWS Amplify. The app works, but every time I try to do an operation which uses my database I get an error. The peculiar thing is that when I am developing on localhost and connecting to the database, everything works.
Debugging
I checked whether the environment variables are set correctly and they are. When checking the cloud logs, I can see this error: code: 'ER_GET_CONNECTION_TIMEOUT'.
Could this be a problem with the security group or something else? There are no problems connecting from my local IP. There is only one inbound rule specified.
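For reference, the inbound rules can also be listed with the AWS CLI (the group ID below is a placeholder for the security group attached to the database):
# Show the inbound rules of the database's security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissions'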
I am not really well versed in all the IAM management stuff, so there is a good chance that I have messed this up. Any hints or help are very welcome. Thanks in advance.
If you use amplify mock function ... to test a Lambda, I believe it runs using the permissions of the amplify-cli user and not necessarily the Lambda's actual permissions.
Try amplify env checkout prod so your local environment is pointing to the 'production' environment on AWS. Test the front-end (carefully, knowing you're making changes in production) and see if that works.
You'll probably need to log out of the front-end website and log back in using a production user.
If that fails, then I suspect something is different between your dev & prod environments. Look at your environment variables. Make sure you didn't hard-code any table names ending in -dev instead of -${process.env.ENV}, etc.
If the above test does work, then consider the differences between the production and development environments. If everything is managed by Amplify, then they should be the same. If you have some pre-existing resources, then you'll need to examine the permissions your Amplify resources have to talk to those pre-existing resources. Did you grab an ARN from somewhere in your dev environment and not from prod? etc.
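One quick way to compare the two environments is to dump each deployed function's environment variables and diff them; a sketch (the function name below is a placeholder for whatever Amplify generated):
# Print the environment variables the deployed Lambda actually sees
aws lambda get-function-configuration --function-name myFunction-prod --query 'Environment.Variables'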
I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
Next step in my deployment process is to add keycloak. From what I've read the best option seems to be Google App Engine.
The accepted answer in this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed those two commands: gcloud init and gcloud app deploy.
I had my doubts about this setup, and they were backed up by the error I eventually got:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to mix those two applications with the right services.
I understand keycloak is a stateful application and hence cannot live on Cloud Run (by the way, the intention is for keycloak to use the same database instance, shared with the application).
So does anyone know a more sensible setup, or what I can move in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks #gaefan) - it explains enough about the error itself. So I just commented out the service: my-keycloak-service line in the app.yaml (thus leaving gcloud to implicitly mark it as the default service) and the deployment continued.
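For illustration, the relevant part of the app.yaml ended up looking roughly like this (the runtime settings are placeholders for a Dockerfile-based deployment; the point is only the commented-out service line):
runtime: custom
env: flex
# service: my-keycloak-service   # commented out so App Engine treats this deployment as the required 'default' service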
Eventually keycloak didn't connect to the database, but if I don't manage to adjust the configuration, that will probably be the subject of a different question.
On the point of project structure and functionality:
First off, thanks #NoCommandLine and #guillaume-blaquiere for your input!
#NoCommandLine the application on Cloud Run is sort of a headless, REST-API-enabled backend. Most of the API calls are secured by keycloak. A next step in the deployment process would be to port an existing UI (React) client onto Firebase Hosting (or another suitable service - I'm still not completely sure which approach is best), and in order for users to work with this client properly they must first do an SSO through keycloak.
I'm quite new to GCP and the number and variety of available options is still overwhelming to me - one must get familiar with the nuances, but I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!
I've setup Google App Engine to run my AdonisJS API for my website. I update the code using the CLI for google cloud services ("gcloud app deploy"). I get a success message from the terminal, and I have checked both the cloud build and version number, and both are the most recent deployment. However, when I try to use my website, I get an error due to the API using old code and trying to access table columns from my database that no longer exist. I have downloaded the most recent cloud build file and checked the codebase within it and the updated code is there. I have also tried deploying multiple times, and it still is using the old code. Does anyone know why this is happening and/or how to fix this?
If you need more information, let me know. Thanks
ANSWER:
Fixed this a while ago, but wanted to update here just in case others ran into this. I discovered that when deploying to GAE through the command line, my build command wasn't running prior to the deploy since my script had an error, so it was uploading updated code, but not an updated build. So just make sure to run the build command prior to uploading to GAE and everything should work.
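In other words, chain the build and the deploy so a failing build cannot be silently skipped; a rough sketch, assuming the build step is exposed as an npm script:
# Deploy only if the build succeeds
npm run build && gcloud app deploy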
In console.cloud.google.com, go to your GAE project and check which version of your project is running, i.e. which one is receiving traffic.
Clear your cache.
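The serving version can also be checked from the CLI; the TRAFFIC_SPLIT column shows which version is actually receiving traffic (the service name default is an assumption):
gcloud app versions list --service=default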
Known issue:
Installing google-cloud-sdk (linux package or from tarball) has a quirk where you cannot create projects from the command line before accepting the terms of service.
Steps to reproduce:
Download the SDK, untar it, move the folder to your home directory, and add the google-cloud-sdk root directory to PATH using install.sh
Initialize and log in with: $ gcloud init
Create a project from the CLI: $ gcloud projects create --set-as-default
This will spit out an error like:
ERROR: (gcloud.projects.create) Operation [cp.5641973328385684887] failed: 9: Callers must accept Terms of Service
I hazard a guess that accepting the terms of service has not been built into the command-line initialization yet. Omitting such a fundamental step from an installation process should be illegal, with consequences ranging from death by a thousand key-strokes to 'build an operating system in headfuck'... but that's just me...
We find the solution in the unholiest of places: the Google Cloud control interface (cloud console).
Go to your cloud console
Create a project by selecting 'select a project' (top-left, next to "Google Cloud Platform") and then 'create project' (top-right in the popup window).
This will prompt the terms of service agreement and you may carry on after agreeing to the terms of service.
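With the terms accepted in the console, the original command should now go through from the CLI (the project ID below is a placeholder):
gcloud projects create my-project-id --set-as-default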
I hope this helps whoever else stumbles upon this most infuriating of errors.
Live long and prosper
Bitshift
Let's not be so dramatic with "death by a thousand key-strokes". This is a security measure that should be implemented. Security is not always convenient but can save your checking/credit account a lot of grief.
Imagine this theoretical scenario. You provide me with a service account that has the roles to create a project. I create a new project. This project is created under your Google Billing Account. I know what I am doing with Google IAM so I remove you from the new project and make myself the Project Owner. Now you have no access to the new project but your credit card is paying the bills for my project. I think you would then be screaming "death by a million key-strokes".
There are two types of projects:
Independent projects not part of an organization.
Projects that are part of an organization.
If you are part of a Google Cloud Organization, you can easily create projects up to your quota limit (default is 5). No prompting, accepting TOS, etc. Using the CLI to create a new project is effortless.
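For example, something along these lines should just work (both IDs below are placeholders):
gcloud projects create my-new-project --organization=123456789012 --set-as-default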
If you are not part of a Google Cloud Organization, then you are basically creating a new account: you need to set up account billing, accept the terms of service, etc. This means that you should not use the CLI to create a new project, as the CLI does not prompt you for the items that a new project requires. Why? The CLI should be using a service account, and the service account is not the IAM member that owns the account. This forces you to log into the Google Cloud Console using your user credentials to create the new project.
For anyone getting this message when trying to create a dialogflow agent:
Go to https://console.cloud.google.com, login and accept the displayed terms and conditions.
Afterwards it worked for me...
Using Eclipse, I am experiencing an error when trying to deploy a rather basic web app with JAX-RS and JAXB. It runs okay locally, but when trying it on the remote servers I get the message shown below...
'Deploying to Google' has encountered a problem / This application does not exist
My appengine-web.xml illustrates that I am using the same name in the XML as what's specified in the project properties...
The output window shows...
------------ Deploying frontend ------------
Preparing to deploy:
Created staging directory at: '/var/folders/n8/6by626014jbfc0dwmxnb0ly00000gn/T/appcfg2754901216637807129.tmp'
Scanning for jsp files.
Scanning files on local disk.
Initiating update.
com.google.appengine.tools.admin.HttpIoException: Error posting to URL: https://appengine.google.com/api/appversion/create?app_id=hillingarincident&version=0&
404 Not Found
This application does not exist (app_id=u'hillingarincident').
Debugging information may be found in /private/var/folders/n8/6by626014jbfc0dwmxnb0ly00000gn/T/appengine-deploy447984481661870877.log
The referenced debug logs show...
Unable to update:
com.google.appengine.tools.admin.HttpIoException: Error posting to URL: https://appengine.google.com/api/appversion/create?app_id=hillingarincident&version=0&
404 Not Found
This application does not exist (app_id=u'hillingarincident').
at com.google.appengine.tools.admin.AbstractServerConnection.send1(AbstractServerConnection.java:293)
at com.google.appengine.tools.admin.AbstractServerConnection.send(AbstractServerConnection.java:253)
at com.google.appengine.tools.admin.AbstractServerConnection.post(AbstractServerConnection.java:232)
at com.google.appengine.tools.admin.AppVersionUpload.send(AppVersionUpload.java:644)
at com.google.appengine.tools.admin.AppVersionUpload.beginTransaction(AppVersionUpload.java:449)
at com.google.appengine.tools.admin.AppVersionUpload.doUpload(AppVersionUpload.java:124)
at com.google.appengine.tools.admin.AppAdminImpl.doUpdate(AppAdminImpl.java:371)
at com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:53)
at com.google.appengine.eclipse.core.proxy.AppEngineBridgeImpl.deploy(AppEngineBridgeImpl.java:433)
at com.google.appengine.eclipse.core.deploy.DeployProjectJob.runInWorkspace(DeployProjectJob.java:148)
at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
Any answers will be appreciated. At one point my browser was not logged in to the target Google account, so I swapped to the correct one a little later; Google does render the application name as expected.
Okay, this was simple in the end! Eclipse performs an auto-login to the Google account; unfortunately, I created the Eclipse project whilst logged in to one Google account and then tried to specify the application name afterwards.
You'll see in the bottom-right (or bottom-left in some versions) a Google icon with the name of the user that you are logged in as. If that's not the account where your application is defined, then simply log out of that account and log in as the correct Google account.
Now there's no error :-)
I know this question is super old but I had this issue all day and finally I found a solution. Maybe it will help someone out in the future.
After you create a project in Google Cloud Platform, you must go to google cloud shell in your project and run the command
gcloud beta app create
After you run this command, you will be prompted to choose a region. Then go back to Eclipse and try deploying again. It worked for me.
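If you prefer to skip the interactive prompt, the region can be passed directly (the region below is only an example, and newer gcloud versions also accept the command without the beta prefix):
gcloud app create --region=us-central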
There is more than one thing that can cause this problem. For me, I had this problem when I created the project using Maven, but I didn't have the same issue when I created the project directly from the Google plugin.
There might be another issue: when you register with Google App Engine, you receive an email indicating your activation. If you have not received that email yet, this problem could occur too.
Another thing to try is using the Gmail account that is registered with Google App Engine, to avoid any such errors.