I am working on a MERN stack project where I am using local storage to store some data and then use it on the front-end. It works fine on localhost, but I need to deploy my app to Heroku. I was wondering how I would manage this local storage part on Heroku.
Before anything else: the answer is no.
You can write files, but they will disappear.
Source: https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted
To be clear, this is about file storage: Heroku does not have permanent file storage.
That means any files outside your Git repository are deleted every time the dyno restarts or sleeps.
Also note that a custom FTP server connection is NOT possible; see this related question: Direct FTP access to Heroku?
As a workaround, try the free tier of AWS, Azure, or Google Cloud for file storage. Then go to your app's Settings tab in the Heroku web panel and open the Config Vars section to store the access credentials as environment variables.
You should also read:
https://devcenter.heroku.com/articles/sftptogo#configuring-dns-for-custom-subdomain
It is a bit of a problem if you want a $0 cost setup. The free tiers are something of a sandbox, and every workaround will be limited.
On Azure, AWS, or Google Cloud you need to register and verify a credit card, but the free tier itself costs nothing.
Take a look at the free quotas and get your access credentials.
In my opinion Heroku is AWS-oriented, so AWS is probably the best choice here.
Please let me know if you have any success.
Good luck!
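If you do end up on AWS, a minimal sketch of the idea might look like this (assuming the @aws-sdk/client-s3 package; the S3_BUCKET config var and the saveFile helper are hypothetical names, not part of any existing code):

// Minimal sketch: persist data to S3 instead of the dyno's ephemeral disk.
// Assumes AWS credentials and S3_BUCKET are set as Heroku config vars.
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: process.env.AWS_REGION || "us-east-1" });

// Hypothetical helper: anything written here survives dyno restarts,
// unlike files written to the local filesystem.
async function saveFile(key, body) {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: body,
    })
  );
}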
Problem
I created an app and deployed it via AWS Amplify. The app works, but every time I try to do an operation which uses my database I get an error. The peculiar thing is that when I am developing on localhost and connecting to the database, everything works.
Debugging
I checked whether the environment variables are set correctly and they are. When checking the cloud logs, I can see this error: code: 'ER_GET_CONNECTION_TIMEOUT'.
Could this be a problem with the security group or something else? There are no problems connecting from my local IP. There is only one inbound rule specified:
I am not really well versed in all the IAM management stuff, so there is a good chance that I have messed this up. Any hints or help are very welcome. Thanks in advance.
If you run amplify mock function .... to test a Lambda, I believe it runs using the permissions of the amplify-cli user and not necessarily the Lambda's actual permissions.
Try amplify env checkout prod so your local environment is pointing to the 'production' environment on AWS. Test the front-end (carefully, knowing you're making changes in production) and see if that works.
You'll probably need to log out of the front-end website and log back in using a production user.
If that fails, then I suspect something is different between your dev and prod environments. Look at your environment variables. Make sure you didn't hard-code any table names ending in -dev instead of -${process.env.ENV}, etc.
If the above test does work, then consider the differences between your production and development environments. If everything is managed by Amplify, then they should be the same. If you have some pre-existing resources, then you'll need to examine the permissions your Amplify resources have to talk to those pre-existing resources. Did you grab an ARN from somewhere in dev and not from prod? etc.
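As a minimal sketch of that last point, assuming a Node.js Lambda reading from an Amplify-managed DynamoDB table (the "Todo" base name is hypothetical):

// Hypothetical Node.js Lambda: derive the table name from the Amplify
// environment instead of hard-coding a '-dev' suffix.
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, GetCommand } = require("@aws-sdk/lib-dynamodb");

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Wrong: const TABLE_NAME = "Todo-dev";   // breaks in prod
const TABLE_NAME = `Todo-${process.env.ENV}`; // Todo-dev, Todo-prod, ...

exports.handler = async (event) => {
  const { Item } = await docClient.send(
    new GetCommand({ TableName: TABLE_NAME, Key: { id: event.id } })
  );
  return Item;
};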
I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
The next step in my deployment process is to add Keycloak. From what I've read, the best option seems to be Google App Engine.
The accepted answer in this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed the two commands mentioned: gcloud init and gcloud app deploy.
I had my doubts about this setup, and they were confirmed by the error I eventually got:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to combine those two applications with the right services.
I understand Keycloak is a stateful application, hence it cannot live on Cloud Run (by the way, the intention is for Keycloak to use the same database instance shared with the application).
So does anyone know a more sensible setup, or what I can change in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks #gaefan) - it explains enough about the error itself. So I just commented out the service: my-keycloak-service line in the app.yaml (thus leaving gcloud to implicitly mark it as the default service) and the deployment continued.
In the end Keycloak didn't connect to the database, but if I don't manage to adjust the configuration, that will probably be the subject of a different question.
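For reference, a sketch of what the relevant part of the app.yaml might look like after that change (assuming the Dockerfile-based flexible environment; the commented-out service name is the one from above):

# app.yaml sketch: the first deployment to a new App Engine application
# must be the 'default' service, so the explicit service line is omitted.
# service: my-keycloak-service
runtime: custom
env: flex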
On the point of project structure and functionality:
First off, thanks #NoCommandLine and #guillaume-blaquiere for your input!
#NoCommandLine the application on Cloud Run is a headless, REST API enabled backend. Most of the API calls are secured by Keycloak. A next step in the deployment process would be to port an existing UI (React) client to Firebase Hosting (or to another suitable service - I'm still not completely sure which approach is best), and in order for the users to work with this client properly they must first go through SSO with Keycloak.
I'm quite new to GCP and the number and variety of the available options is still overwhelming to me - one must get familiar with the nuances, but I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!
I have a website that is currently running under GAE... unfortunately, neither I nor anyone on the team has access to the local environment that it was created from... Is it possible to create a local environment, or at least get a copy of the application files and database, from an existing GAE installation?
What you need is the application source code, not the "local environment".
Ideally this source code would be in a version control system (i.e. Git, SVN); Google Cloud Platform provides free Git repositories for your projects, so you might try looking there first. There's also a tool, for both Java and Python, that allows you to download the source of a deployed version, provided you are authenticated as either the developer who uploaded it or a project owner. EDIT: as stated by Dan Cornilescu, this feature can be disabled.
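For the Python runtime, the legacy download command looked roughly like this sketch (appcfg.py shipped with the old standalone App Engine SDK; the app ID, version, and output directory below are hypothetical):

# Downloads the deployed source of version 1 of the app, provided the
# code-download feature has not been disabled for the project.
appcfg.py download_app -A your-app-id -V 1 ./downloaded-source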
As for the database info, there are plenty of tools available to "export" your GAE datastore info; just consider that for your project it might be easier to do the queries manually than to actually set up those tools.
Thanks for the help... But unfortunately, this code is not in Git. Furthermore, being new to Google hosting, I wasn't clear on my setup... My web instance is actually running within Compute Engine, not App Engine. Be that as it may, with some additional searching I was able to find out how to browse my filesystem: under the Compute Engine section of the Google Cloud Platform interface, the VM Instances page shows your instance along with a connect option (a drop-down box beside the instance) that can open a browser window onto the instance's file system. In addition to this, I found this link https://www.youtube.com/watch?v=9ssfE6ODpak that shows how to configure the FileZilla FTP client to access your server instance - very helpful. From there, I was able to download all of my site files from the /var/www directory. Now, onto extracting my data... Thanks again!
The application (JSF + Hibernate) has been deployed using the vmc commands described on the Cloud Foundry site, and I am able to see the welcome page. A PostgreSQL service is bound to the application, but the application is not able to connect to the database.
I have also read about accessing VCAP_SERVICES from Java, but I don't know much about it, or how it is created.
Cloud Foundry uses auto-reconfiguration if you have exactly one service (either MySQL or PostgreSQL) bound to your application. That means you don't need to touch your code at all!
Please review the following article on our docs site:
http://docs.cloudfoundry.com/frameworks/java/spring/spring.html#using-cloud-foundry-services-in-spring-applications
If you still have issues, go ahead and upload a war file of your app and we can take a look.
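For background, VCAP_SERVICES is an environment variable that Cloud Foundry injects into your application's container; with a bound PostgreSQL service it contains JSON shaped roughly like the sketch below (the exact service label and credential fields vary by provider, and all values here are hypothetical):

{
  "postgresql-9.0": [
    {
      "name": "my-postgres-service",
      "credentials": {
        "hostname": "10.0.0.5",
        "port": 5432,
        "name": "dbname",
        "user": "dbuser",
        "password": "secret"
      }
    }
  ]
}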
I am developing an app locally in Google App Engine. I have built a small datastore for development purposes. Rebuilding it after every power cycle on my Mac got tedious so I made it permanent. Now I run my app locally with the following command:
/usr/local/bin/dev_appserver.py "--datastore_path=./permanent.datastore" appengine_prototype
Life is good. I have decided to deploy my app so I can test http post commands from a different machine. When I tried to register my current application id (example), I found that it was unavailable (shocker!). So I registered a different application id and planned to change my local application id to match. However, when I changed the
application: *app-id*
line in my app.yaml file, my app stopped recognizing my permanent datastore.
So, how can I change my application id to the one I registered, maintain the connection to the permanent datastore and then push the whole shebang online? I tried running the app twice locally, first with the permanent datastore referenced in the command and then without, hoping that the default temporary datastore would inherit from the previous permanent datastore. That didn't work. Do I need to start by copying the permanent datastore to the default temporary datastore? How would I do that? Any help would be much appreciated.
Thanks,
Dessie
If your intention is to eventually push your local data to your live environment anyway, then your best bet is to:
1. use bulkloader.py to back up your local data (while still using oldid in your config)
2. then change your config to your newid
3. then use bulkloader.py to push your data to your new development server (run with --datastore_path=./permanent.datastore2 or something)
4. then use bulkloader.py to push your data to the GAE production server
Details of bulkloader.py can be found in the docs and an example here
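A rough sketch of those steps with the legacy SDK tools (the remote_api endpoint must be enabled in your app.yaml, and all IDs, URLs, and filenames here are hypothetical; check the bulkloader docs for the exact flags your SDK version supports):

# 1. Dump the local datastore while app.yaml still reads 'application: oldid'
bulkloader.py --dump --app_id=oldid --url=http://localhost:8080/_ah/remote_api --filename=local-data.bin

# 2-3. Change app.yaml to 'application: newid', restart the dev server against
#      the new datastore file, then restore the dump into it
bulkloader.py --restore --app_id=newid --url=http://localhost:8080/_ah/remote_api --filename=local-data.bin

# 4. Restore the same dump into the production app
bulkloader.py --restore --app_id=newid --url=https://newid.appspot.com/_ah/remote_api --filename=local-data.bin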