I am using the imagekit.io CDN in front of Cloud Storage, rendering with the next/image component. I had next/image working before seeing online that it is not compatible with the GAE standard environment. This concerns me: even though it is working, I wonder whether there is some inefficiency, because Next.js cannot cache images outside of /tmp. next/image is something of a magic black box to me.
next.config.js
const nextConfig = {
  experimental: {
    externalDir: true
  },
  reactStrictMode: true,
  images: {
    domains: ["ik.imagekit.io"]
  },
  distDir: "build"
};

module.exports = nextConfig;
Moving to flex is not an option right now. I need to know if I should move off next/image.
TL;DR: Google App Engine Standard is not a great option for Next.js.
As you've pointed out from the tutorial:
Next.js features like the Image component with optimization and Incremental Static Regeneration require the Next.js cache folder to be read and written at runtime. But our Standard Environment has read and write access only to the /tmp directory.
It is not suitable to use the next/image component on App Engine Standard, due to the limitation that you cannot write to the filesystem at runtime. This conclusion is based on the official documentation of both Google App Engine Standard and Next.js:
Next.js (next/image):
Images are optimized dynamically upon request and stored in the <distDir>/cache/images directory. The optimized image file will be served for subsequent requests until the expiration is reached.
Google App Engine Standard:
All files in this directory are stored in the instance's RAM, therefore writing to /tmp takes up system memory. In addition, files in the /tmp directory are only available to the app instance that created the files. When the instance is deleted, the temporary files are deleted.
This seems to be a common struggle among developers using Next.js components that require writing to the /cache folder while running on App Engine Standard, as shown in this GitHub discussion:
Currently, the image optimiser caches images locally, in the server's filesystem, typically under .next/cache/images. While this is fine on some platforms, it's not uncommon for managed deployment targets to restrict write access to the filesystem (e.g. Google App Engine only provides write access to an in-memory /tmp folder)
...
Currently, the only option we have to use Next image optimization within Google App Engine is to use a custom server and override the handler for _next/image with a handler that does exactly the same as the image optimizer
And on this GitHub issue:
Did you have a solution for this one? Considering moving our marketing site to AWS temporarily over this. Can't use next/image on GCP GAE.
If you really need to deploy to GCP GAE, you can set your application to a flex environment.
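One more angle worth noting, since the question already fronts images with ImageKit (this is not from the quoted threads, but it uses the documented loader prop of next/image): a custom loader hands resizing off to the CDN, so the built-in optimizer never runs and nothing is written to <distDir>/cache/images. A minimal sketch; "your_imagekit_id" and the component are placeholders:

  // components/Hero.js -- sketch only; "your_imagekit_id" is a placeholder
  import Image from "next/image";

  // ImageKit does the resizing via the tr= query string, so Next.js never
  // invokes its on-disk image cache for these images.
  const imageKitLoader = ({ src, width, quality }) => {
    const params = [`w-${width}`];
    if (quality) params.push(`q-${quality}`);
    return `https://ik.imagekit.io/your_imagekit_id${src}?tr=${params.join(",")}`;
  };

  export default function Hero() {
    return (
      <Image
        loader={imageKitLoader}
        src="/hero.jpg"
        alt="hero"
        width={1200}
        height={600}
      />
    );
  }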
See also:
Anyone managed to use next/Image on GAE?
Related
My website changes every day: I run a news site that publishes new stories daily. I want Google to index my site as often as possible, and I need to autogenerate the sitemap.
I use Google App Engine (with Node.js) to run the site. With GAE, I do not have write access to the root directory, so to publish the sitemap I would need to re-deploy my whole site after generating the map. That is an unnecessarily complex step.
I have searched far and wide and cannot see how to save my sitemap. So I considered using a static one with a dynamically generated child stored in another location where I do have write access. But Google says it wants all linked sitemaps in the same directory, so that appears to be a dead end.
Can I use "App Deploy" in such a way that only the sitemap is uploaded? Any other possibilities? I appreciate any and all suggestions. It seems unlikely that Google didn't provide some way to solve this.
For a site where new URLs are created regularly (news, blog, etc.), don't 'store' your sitemap. It should be generated on demand, i.e. your app should include code that builds the content whenever <your_website>/sitemap.xml is requested.
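A minimal sketch of that idea in Express; getRecentStories and the URL shape are hypothetical stand-ins for however your app lists its published pages:

  const express = require("express");
  const app = express();

  app.get("/sitemap.xml", async (req, res) => {
    // Hypothetical data-access call; replace with your own story listing.
    const stories = await getRecentStories();
    const urls = stories
      .map((s) => `<url><loc>https://example.com/stories/${s.slug}</loc></url>`)
      .join("");
    res.type("application/xml");
    res.send(
      '<?xml version="1.0" encoding="UTF-8"?>' +
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' +
        urls +
        "</urlset>"
    );
  });

  app.listen(process.env.PORT || 8080);

Nothing is written to disk, so GAE's read-only filesystem never comes into play.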
Separately, you should note that gcloud app deploy doesn't always deploy all your files. It usually deploys only files that have changed. You can easily confirm this by running the deploy command, changing a single file, and then running the deploy command again. You will see the logs say something like Uploading 1 files to Google Cloud Storage and the deploy will be faster. Change X files, deploy again, and the message will indicate that only X files are being deployed.
However, I'm not sure what it uses to compute the diff. It may compare against the files currently in your staging bucket, and if the files in the staging bucket have been deleted (they have a default lifespan of 15 days) it will deploy all the files again (but as I said, I'm not sure of this).
Our site is being made available with the following structure:
Static Blob Container Azure > CDN > Cloudflare > User.
The React app build is published to an Azure Static Blob Container and accessed through an Azure CDN. When we access the app via the CDN URL, we never have a cache problem. We also use Cloudflare to manage DNS and, supposedly, to improve caching. But when we access the app through Cloudflare, we have a serious cache problem: users who have visited the site before get extremely old versions.
Even after turning off all the cache options available in Cloudflare's dashboard, and although its graphs show that cache consumption has dropped, the bug persists. We have been unable to identify where in the structure above the problem lies.
The problem is that a CDN uses multiple nodes to serve content. The proper way to 'solve' this is to append a version to the filename or path; that way, whenever you need to change something, the CDN will fetch the latest version. Just using a plain app.js is not enough.
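As a sketch of that with webpack (the build tool and entry name are assumptions; Create React App does the equivalent out of the box):

  // webpack.config.js
  const path = require("path");

  module.exports = {
    entry: "./src/index.js",
    output: {
      // [contenthash] changes whenever the bundle contents change, so every
      // release produces a URL the CDN has never seen (and so never cached).
      filename: "bundle.[contenthash].js",
      path: path.resolve(__dirname, "build"),
    },
  };

The HTML file that references the hashed bundle should itself be served with a no-cache (or very short) policy, since its own name cannot change.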
More info:
How to force the browser to reload cached CSS and JavaScript files
https://stackoverflow.com/a/34604256/1384539
Excuse the naivety here; I'm new to GAE and haven't been able to find much in the available literature / answers about the deployed filesystem's public status...
My question is quite simple:
Assuming a standard app.yaml config, are the files that get pushed to GAE with gcloud app deploy publicly inaccessible unless exposed by (in the example of Node.js) an Express endpoint?
I want to make sure sensitive data like key files (for reference in code) in our deployed bundle are not exposed, and that the local filesystem of a deployment is only accessible privately by the code itself.
Unless you use the static file and directory directives in app.yaml (static_files / static_dir), no deployed data is made visible to outside users by Google.
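To illustrate with a minimal app.yaml sketch (runtime and paths are assumptions): only what a static handler explicitly maps is publicly reachable; everything else is served only if your own code serves it:

  runtime: nodejs20
  handlers:
    # Publicly served: anything under ./public is exposed as-is.
    - url: /assets
      static_dir: public
    # Everything else is routed to the app; files such as key files are
    # readable by your code but never reachable from the outside.
    - url: /.*
      script: auto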
Authenticated admin users (yourself) can see all files deployed to the server in the admin console, unless you disable "code downloads" (which was available on legacy App Engine but seems to have been removed now).
I am new to the development world. I am trying to understand how the Heroku filesystem works.
I did an Express project using multer to upload images.
In production, everything worked well including fetching the images from my static folder.
However, when I did it with React (frontend = React & backend = Express), the images are not displayed even though the console shows no errors.
According to my research, Heroku says:
Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted
and that I should use a dedicated file storage service such as AWS S3 (for static files).
How does this apply to my React project since I didn't use it in the Express project?
In production, everything worked well including fetching the images from my static folder.
In fact, it probably didn't.
Files can be saved to Heroku's ephemeral filesystem, and even loaded, but as you have seen, this isn't permanent. Whenever your dyno restarts, the filesystem resets. This happens frequently (at least once per day).
Express vs. React is irrelevant. You should always use something like S3 for user uploads on Heroku.
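A sketch of the S3 approach with multer and the multer-s3 storage engine (the bucket name and region are placeholders; AWS credentials are assumed to come from the environment):

  const express = require("express");
  const multer = require("multer");
  const multerS3 = require("multer-s3");
  const { S3Client } = require("@aws-sdk/client-s3");

  const app = express();
  const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

  const upload = multer({
    storage: multerS3({
      s3,
      bucket: "my-app-uploads", // placeholder bucket name
      key: (req, file, cb) => cb(null, `${Date.now()}-${file.originalname}`),
    }),
  });

  // The upload streams straight to S3; nothing touches the dyno's disk,
  // so a restart cannot lose it.
  app.post("/upload", upload.single("image"), (req, res) => {
    res.json({ url: req.file.location }); // multer-s3 sets .location to the S3 URL
  });

  app.listen(process.env.PORT || 3000);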
Is there a way to update selected files when using the App Engine Flexible env?
I'm facing an issue: whenever I make a small change to the app.yaml file, testing it requires deploying the whole application, which takes ~5 minutes.
Is there a way to update only the config file? Or is there a way to test these files locally?
Thanks!
The safe/blanket answer would be no, as the flex env docker image would need to be updated regardless of how small the changes are; see How can I speed up Rails Docker deployments on Google Cloud Platform?
However, there might be something to try (YMMV).
From App Engine Flexible Environment:
You always have root access to Compute Engine VM instances. SSH access to VM instances in the flexible environment is disabled by default. If you choose, you can enable root access to your app's VM instances.
So you might be able to log in as root on your GAE instance VM and try to manually modify a particular app artifact. Of course, you'd need to locate the artifact first.
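If you want to try it, the relevant gcloud commands look like this (the service and version names are placeholders):

  # List the instances behind a given service/version.
  gcloud app instances list --service=default --version=v1

  # Put one instance into debug mode and SSH in (flexible environment only).
  gcloud app instances enable-debug --service=default --version=v1 INSTANCE_ID
  gcloud app instances ssh --service=default --version=v1 INSTANCE_ID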
Some artifacts might not even be present in the VM image itself (those used exclusively by the GAE infrastructure, queue definitions, for example), but it should be possible to update those without updating the docker image, since they aren't part of the flex env service itself.
Other artifacts might be read-only, and it might not be possible to make them read-write.
Even if possible, such manual changes would be volatile: they would not survive an instance reload (which would use the unmodified docker image), and a reload might be required for some changes to take effect.
Lots of "might"s and lots of risk (manual fiddling with the app code could negatively impact its functionality); it's up to you to decide whether a try is worthwhile.
Update: it seems this is actually documented and supported; see Accessing Google App Engine Python App code in production