Connection to AWS database only works in Development - database

Problem
I created an app and deployed it via AWS Amplify. The app works, but every time I try to do an operation which uses my database I get an error. The peculiar thing is that when I am developing on localhost and connecting to the database, everything works.
Debugging
I checked whether the environment variables are set correctly and they are. When checking the cloud logs, I can see this error: code: 'ER_GET_CONNECTION_TIMEOUT'.
Could this be a problem with the security group or something else? There are no problems connecting from my local ip. There is only one inbound rule specified:
I am not really well versed in all the IAM management stuff, so there is a good chance that I have messed this up. Any hints or help are very welcome. Thanks in advance.

If you amplify mock function .... test a Lambda, I believe it runs using the permissions of the amplify-cli user and not necessarily the Lambda's actual permissions.
Try amplify env checkout prod so your local environment is pointing to the 'production' environment on AWS. Test the front-end (carefully, knowing you're making changes in production) and see if that works.
You'll probably need to log out of the front-end website and log back in using a production user.
If that fails, then I suspect something is different between your dev & prod environments. Look at your environment variables. Make sure you didn't hard-code any table names -dev instead of -${process.env.ENV} etc.
If the above test does work, then consider the differences between production and development environments. If everything is managed by Amplify, then they should be the same. If you have some pre-existing resources, then you'll need to examine the permissions resources have to talk to those resources. Did you grab an ARN from somewhere in your dev and not from prod? etc.

Related

How to mix Cloud Run and App Engine deployments in one project?

I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
Next step in my deployment process is to add keycloak. From what I've read the best option seems to be Google App Engine.
The approved answer in this question gave me some good insight of what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed the said two commands: gcloud init and gcloud app deploy.
I had my doubts about this set up and they were backed up by the error I got eventually:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my set up breaks the overall structure of the project but I'm not sure how to mix those two application with the right services.
I understand keycloak is a stateful application, hence cannot live on Cloud Run (by the way the intention is for keycloak to use the same database instance shared with the application).
So does any one know a more sensible set up, or what can I move in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks #gaefan) - about the error itself it explains enough. So I just commented out the service: my-keycloak-service line in the app.yaml (thus leaving gcloud to implicitly mark it as the default one) and the deployment continued.
Eventually keycloak didn't connect to the database but if I don't manage to adjust the configurations that would probably be a subject to a different question.
On the point of project structure and functionality:
First off, thanks #NoCommandLine and #guillaume-blaquiere for your input!
#NoCommandLine the application on Cloud Run is sort of a headless REST API enabled backend. Most of the API calls are secured by keycloak. A next step in the deployment process would be to port an existing UI (React) client on the Firebase hosting (or on another suitable service - I'm still not completely sure which approach is best) and in order for the users to work with this client properly they must make an SSO through keycloak first.
I'm quite new to GCP and the number and variants of the available options are still overwhelming to me - one must get familiar with the nuances but I guess it takes time. So I'm still taking suggestions on how to adjust my project structure to fit better the services stack. Thanks!

Is it possible to use local storage on heroku?

I am working on a MERN stack project where I am using local storage to store some data and then use it on the front-end. It is working fine on localhost but I need to deploy my app on Heroku. I was wondering how I would manage this local storage part on Heroku.
Before everything answer is NO.
You can use it but the file will disappear.
Source: https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted
You mean on file storage it's ok. Heroku doesn't have permanent file storage.
It means files out of git repo will be deleted on every sleep.
Please try and notify me about:
Custom ftp server connection are NOT possible!
Direct FTP access to Heroku?
Helping: try free plan for aws, azure, gcloud.
Go to heroku app setting tab on web panel.
Open global env vars popup.
You must read :
https://devcenter.heroku.com/articles/sftptogo#configuring-dns-for-custom-subdomain
It is little problem if you wanna 0$ cost.
It is some kind of a sandbox. Every trick will be limited.
You need to open and verify credit card, it is free on Azure, AWS or gcloud.
Take a look for free quotes and get access data.
Heroku is aws oriented by my opinion maybe it is best for this choose to take aws.
Please notify if you have some success.
Good luck!

Testing my Google App Engine Flex Locally (without deploying)

I need to test my wordpress install which I have set up already and deployed. I have to debug, so waiting 10-15 mins for it to deploy to test one thing isn't going to work.
All they mention in their docs: https://cloud.google.com/appengine/docs/flexible/php/testing-and-deploying-your-app#running_locally
Running locally
"To test your application's functionality before deploying, run your application in your local environment with the development tools that you usually use."
That's it. How can I actually serve my wordpress application? My tools I "usually use" are xampp...very confused.
Can someone help me run my flex env locally to test it?
You may want to take a look at this for the initial tests for your PHP application. You would have to install composer on your shell for it, if you haven't done it already.
Then, for the WordPress application, follow the steps described here to test the Cloud SQL instance that is associated to the app. There is also the possibility to test all the updates you want to apply to the WordPress side. Skip the deploying part until you confirm all your changes work for you, so that you don't have to wait all that time for a deployment.

Debugging GAE microservices locally but without using localhost

I would like to debug my Google App Engine (GAE) app locally but without using localhost. Since my application is made up of microservices, the urls in a production environment would be along the lines of:
https://my-service.myapp.appspot.com/
But code in one service can call another service and that means that the urls are hardcoded. I could of course use a mechanism in code to determine whether the app is running locally or on GAE and use urls that are different although I don't see how a local url would handle the since the only way to run an app locally is to use localhost. Hence:
http://localhost:8080/some-service
Notice that "some-service" maps to a servlet, whereas "my-service" is a name assigned to a service when the app is uploaded. These are really two different things.
The only possible solution I was able to find was to use a reverse proxy which would map one url to a different one. Still, it isn't clear whether the GAE development SDK even supports this.
Personally I chose to detect the local development vs GAE environment and build my inter-services URLs accordingly. I feel it was a well-worthy effort, I've been (re)using it a lot. No reverse proxy or any other additional ops necessary, it just works.
Granted, I'm using Python, so I'm not 100% sure a complete similar Java solution exists. But maybe it can point you in the right direction.
To build the per-service URLs I used modules.get_hostname() (the implementation is presented in Resolve Discovery path on App Engine Module). I believe the Java equivalent would be getInstanceHostname() from com.google.appengine.api.modules.
This method, when executed on the local server, automatically provides the particular port the server listens to for each service.
BTW, all my services for an app are executed by a single development server process, which listens on multiple ports (this is, I guess, how it can provide the modules.get_hostname() info). See Running multiple services using dev_appserver.py on different ports. This is part I'm unsure about: if/how the java local dev server can simultaneously run multiple services. Apparently this used to be supported some time ago (when services were still called modules):
Serving multiple GAE modules from one development server?
GAE modules on development server
This can be accomplished with the following steps:
Create an entry in the hosts file
Run the App Engine Dev server from a Terminal using certain options
Use IntelliJ with Remote debugging to attach the App Engine Dev server.
To edit the hosts file on a Mac, edit the file /etc/hosts and supply the domain that corresponds to your service. Example:
127.0.0.1 my-service.myapp.com
After you save this, you need to restart your computer for the changes to take place.
Run the App Engine Dev server manually:
dev_appserver.sh --address=0.0.0.0 --jvm_flag=-Xdebug \
  --jvm_flag=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 \
  [path_to_exploded_war_directory]
In IntelliJ, create a debug configuration. Use the Remote template to create this configuration. Set the host to the url you set in the hosts file and set the port to 8000.
You can set a breakpoint and run the app in IntelliJ. IntelliJ will attach to the running instance of App Engine Dev server.
Because you are using a port during debugging and no port is actually used when the app is uploaded to the GAE during production, you need to add code that identifies when the app is running locally and when it's running on GAE. This can be done as follows:
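import com.google.appengine.api.utils.SystemProperty;

private String mServiceUrl = "my-service.my-app.appspot.com";
// ...
// Append the port only when the app is not running on production App Engine.
if (SystemProperty.environment.value() != SystemProperty.Environment.Value.Production) {
    mServiceUrl += ":8000";
}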
See https://cloud.google.com/appengine/docs/standard/java/tools/using-local-server
An improved solution is to avoid including the port altogether and not having to use code to determine whether your app is running locally or on the production server. One way to do this is to use Charles (an application for monitoring and interacting with requests) and use a feature called Remote Mapping which lets you map one url to another. When enabled, you could map something like:
https://my-service.my-app.appspot.com/
to
https://localhost:8080
You would then enable the option to include the original host, so that this gets delivered to the local dev server. As far as your code is concerned it only sees:
https://my-service.my-app.appspot.com/
although the IP address will be 127.0.0.1:8080 when remote mapping is enabled. Using https on localhost, however, does require that you enable SSL certificates for Charles.
For a complete overview on how to setup and debug microservices for a GAE Java app in IntelliJ, see:
https://github.com/JohannBlake/gae-microservices
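A check for the dev-server command above, added here as an example and not part of the original answer: `--address=0.0.0.0` binds the HTTP port on every interface and JDWP accepts connections without authentication, so it is worth confirming which addresses ports 8000 and 8080 actually listen on. The `ss -ltnH` output is a constructed sample, and `exposed_listeners` is a hypothetical helper name.

```python
import re

# Ports taken from the answer: 8000 is the JDWP debug port, 8080 the dev server's HTTP port.
WATCHED_PORTS = {8000: "JDWP debugger", 8080: "dev server HTTP"}
LOOPBACK = re.compile(r"^(127\.\d+\.\d+\.\d+|\[?::1\]?)$")

# Constructed sample of `ss -ltnH` output (not captured from a real machine).
SAMPLE_SS = """\
LISTEN 0 50   0.0.0.0:8000    0.0.0.0:*
LISTEN 0 50   0.0.0.0:8080    0.0.0.0:*
LISTEN 0 128  127.0.0.1:5432  0.0.0.0:*
LISTEN 0 50   [::1]:8000      [::]:*
"""


def exposed_listeners(ss_output, watched=WATCHED_PORTS):
    """Return (port, address, label) for watched ports bound beyond loopback."""
    findings = []
    for line in ss_output.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[0] != "LISTEN":
            continue
        # Local address is the 4th column; split on the last ':' so IPv6 works.
        addr, _, port = fields[3].rpartition(":")
        addr = addr.split("%")[0]  # drop an interface suffix such as %lo
        if not port.isdigit() or int(port) not in watched:
            continue
        if not LOOPBACK.match(addr):
            findings.append((int(port), addr, watched[int(port)]))
    return findings


if __name__ == "__main__":
    for port, addr, label in exposed_listeners(SAMPLE_SS):
        print(f"{label} port {port} listens on {addr}: reachable from other hosts")
```

An empty result means both ports are bound to loopback only, which is enough for the hosts-file mapping to 127.0.0.1 described in the answer.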

GAE push to deploy when multiple devs

By deploy, I assume they mean code changes are pushed to production?
If I had 2 or three devs working with me on a project, what is to stop them from pushing changes that break the production site?
What checks and balances do we have to avoid such errors? Do you set up a staging and production environment in GAE, having someone manually verify everything appears to work before making it live?
If you want to limit individual devs doing uncontrolled pushes to the code.google.com (from which the deploys happen), then arrange for one and only one local repository be the local repo-of-record, and only configure that one to know about the source.google.com 'origin' server. Integrations are pulled into that repo, and (when you're ready) push to deploy from there.
Here's how it works under the covers. When you use gcloud to set up a project, it modifies default/.git/config to know about source.google.com, and to use an authentication helper that ties in to oauth (re-using the token that gcloud auth login will store locally) to authenticate. To limit deployment, make this the integration repo, and configure dev repos to push to it.
If you set up a separate staging version of the app, it's a policy decision on your side about whether to use the same scheme, or let developers deploy to staging individually. The mechanism you'd use to configure this is all .git/config wiring.
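One way to check the arrangement described above, added here as an example and not taken from the original answer: scan each repository's .git/config and report any repo other than the repo-of-record that has a remote pointing at source.google.com. The repository names, paths and URLs are constructed placeholders, and `deploy_remotes` and `audit` are hypothetical helper names.

```python
import re
from urllib.parse import urlparse

DEPLOY_HOST = "source.google.com"  # deploy origin named in the answer

# Constructed .git/config contents; repo names and URLs are placeholders.
CONFIGS = {
    "integration": '[remote "origin"]\n\turl = https://source.google.com/p/example-project/\n',
    "dev-alice": '[remote "origin"]\n\turl = /srv/git/integration.git\n',
    "dev-bob": ('[remote "origin"]\n\turl = /srv/git/integration.git\n'
                '[remote "deploy"]\n\tpushurl = https://source.google.com/p/example-project/\n'),
}

SECTION = re.compile(r'^\[remote "(?P<name>[^"]+)"\]\s*$')
KEYVAL = re.compile(r'^\s*(?P<key>url|pushurl)\s*=\s*(?P<val>\S+)\s*$', re.I)


def deploy_remotes(config_text, host=DEPLOY_HOST):
    """Return names of remotes whose url or pushurl points at the deploy host."""
    hits, current = [], None
    for line in config_text.splitlines():
        if line.lstrip().startswith("["):
            # Any section header resets state; only [remote "..."] is tracked.
            m = SECTION.match(line.strip())
            current = m.group("name") if m else None
            continue
        m = KEYVAL.match(line)
        if current and m:
            h = urlparse(m.group("val")).hostname or ""
            if (h == host or h.endswith("." + host)) and current not in hits:
                hits.append(current)
    return hits


def audit(configs, allowed_repo="integration"):
    """Map every repo except the repo-of-record to deploy remotes it should not have."""
    return {repo: r for repo, text in configs.items()
            if repo != allowed_repo and (r := deploy_remotes(text))}


if __name__ == "__main__":
    for repo, remotes in audit(CONFIGS).items():
        print(f"{repo}: can push to the deploy origin via {', '.join(remotes)}")
```

The check covers both `url` and `pushurl`, since a push-only URL is enough to trigger a deploy from a developer's clone.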
