Can I delete container images from Google Cloud Storage artifacts bucket? - google-app-engine

I have a Google App Engine app, which connects to Google Cloud Storage.
I noticed that the amount of data stored was unreasonably high (4.01 GB, when it should be 100MB or so).
So, I looked at how much each bucket was storing, and I found that an automatically created bucket called us.artifacts... was taking up most of the space.
I looked inside, and all it has is one folder: containers/images/.
From what I've Googled, it seems like these images come from Google Cloud Build.
My question is, can I delete them without compromising my entire application?

I have solved this problem by applying a deletion rule. Here's how to do it:
Open the project in Google Cloud console
Open the storage management (search for "Storage" for example).
In the Browser tab, select the bucket us.artifacts....
Now, open the Lifecycle section.
Click on Add a rule and provide the following conditions:
In the action, select Delete object
In the conditions, select Age and enter, for example, 3 days
Click on Create to confirm the creation
Now all objects older than 3 days will be automatically deleted. It might take a few minutes for this new rule to be applied by Google Cloud.
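If you prefer to set this up from the command line, roughly the same rule can be applied with gsutil (a sketch; the bucket name is a placeholder for your own artifacts bucket):
lifecycle.json:
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 3}}
  ]
}
gsutil lifecycle set lifecycle.json gs://us.artifacts.YOUR_PROJECT_ID.appspot.com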

For those of you seeing this later on, I ended up deleting the folder, and everything was fine.
When I ran Google Cloud Build again, it added items back into the bucket, which I had to delete later on.
As @HarshitG mentioned, this can be set up to happen automatically via deletion rules in Cloud Storage. As for myself, I added a deletion step to my deployment GitHub Action.
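For example, a cleanup step at the end of a GitHub Actions deploy job might look roughly like this (a sketch; it assumes gcloud/gsutil is already authenticated in the workflow, and PROJECT_ID is a placeholder):
- name: Clean up App Engine build artifacts
  run: gsutil -m rm -r gs://us.artifacts.PROJECT_ID.appspot.com/containers/images || true
The || true keeps the step from failing on a run where the folder does not exist yet.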

Here is the reference to the documentation: link
Built container images are stored in the app-engine folder in Container Registry. You can download these images to keep or run elsewhere. Once deployment is complete, App Engine no longer needs the container images. Note that they are not automatically deleted, so to avoid reaching your storage quota, you can safely delete any images you don't need. For more information about managing images in Container Registry, see the Container Registry documentation.
This can be automated by adding a lifecycle rule, as @HarshitG mentioned.

You can add a lifecycle rule to the bucket on console.cloud.google.com.
Access the bucket with the artifacts (the default is "us.artifacts.yourAppName.appspot.com").
Go to "Lifecycle".
Click on "Add a rule".
Select the "Delete object" action and press "Continue".
Choose the condition for the deletion; I chose "Age" and entered three days, so objects are automatically deleted once they are 3 days old.
Click on "Create". The rule is active from now on, so you no longer need to visit the bucket every day to clean it.

Same issue. Thanks for the update, Caleb.
I'm having the same issue, but I don't have an App running; I just have:
Firebase Auth
Firestore
Firebase Functions
Cloud Storage
Not sure why I have 4GB stored in those containers, and I'm not sure if I should delete them or if that would break my functions.
UPDATE:
I deleted the container folder and all still works. Not sure if those are backups or whatnot, but I can't find anything online or in the docs. I will post here if something happens. As soon as a cloud function ran, the folder had 33 files again.

I recommend against setting up lifecycle rules on your Storage buckets. It will likely lead to breaking subsequent updates to the function (as described in Cloud Function build error - failed to get OS from config file for image).
If you are interested in cleaning up container images, you should instead delete the container images stored in Google Container Registry: https://console.cloud.google.com/gcr. Deleting container images in GCR repos will automatically clean up the objects stored in your Cloud Storage.
https://issuetracker.google.com/issues/186832976 has relevant information from a Google Cloud Functions engineer.
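If you prefer the gcloud CLI over the console, the cleanup can be done along these lines (PROJECT_ID, IMAGE and DIGEST are placeholders):
gcloud container images list --repository=gcr.io/PROJECT_ID
gcloud container images list-tags gcr.io/PROJECT_ID/IMAGE
gcloud container images delete gcr.io/PROJECT_ID/IMAGE@sha256:DIGEST --force-delete-tags
As noted above, deleting the image from GCR also cleans up the corresponding objects in the Cloud Storage artifacts bucket.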

Related

Delete old react build files in S3

I am deploying my React app to AWS S3 using AWS CodeBuild and caching it through AWS CloudFront, but the bucket size has grown to more than 10GB within a month due to frequent deployments.
I tried deleting the old build files while deploying, but that causes issues for users who have the old code cached in their browser: the cached pages request the previous build's files, which no longer exist, so they get 404s.
I tried setting no-cache for the index.html file, but that does not resolve the issue.
Has anyone else faced this issue?
@Nilanth, here is what I do for a similar case:
My stack is also a React app (not business critical; it is used to offer article selection for the main content management flow). The app is built from CodeCommit via CodeBuild to an S3 bucket using CodePipeline and a buildspec.yml file, and the build is triggered by a commit to the repository. I faced a similar problem: CloudFront didn't serve the newest JS files to the browser (HTML), so it started to feel like a cache issue.
I ended up with a reasonably good solution like this:
Update the CloudFront cache settings (edit the behaviour and set it to "Use legacy cache settings") and set the min/max TTLs to 0. This helps with the cache, so users get the newest versions immediately.
For the JS/CSS file issue, I added "aws cli remove" command lines to the buildspec.yml file, like:
aws s3 rm s3://<s3_bucket>/static/js/ --recursive
aws s3 rm s3://<s3_bucket>/static/css/ --recursive
Set those as pre_build commands (see the buildspec sketch below).
Note: by removing the JS files, your application cannot be used until the new ones are available again in the /js and /css folders. If your application is business critical, you should think beyond this, since there will be 30-60 seconds during which the app cannot be used. And if the build fails, there are no js/css assets at all; in that case you can re-trigger an old build from CodeBuild. Doing this properly for a business-critical app will require some extra DevOps effort.
To allow the "remove" executions against the S3 bucket, you need to give CodeBuild additional permissions. Go to the build projects, check the environment's service role, then go to IAM / Roles, pick the correct role name, and grant additional S3 permissions (e.g. AmazonS3FullAccess, which is more than enough).
I am not sure this is entirely correct from the CloudFront side, but it seems to avoid the caching problem and also keeps the bucket size small.
-MM
There are many elements there that could throw a 404; you'll need to check them one by one to find the root cause(s).
First, I'd try the bucket itself: use <s3-bucket-url>/index.html and see if the file (in this case index.html) exists.
Second, CloudFront: I'll assume the distribution is configured correctly (i.e. the / path redirects to /index.html). Also, every time you edit the bucket files, create an invalidation to speed up propagation (see the example below).
Third, you'll need to tell your users to hard reload the page or use incognito mode, especially if your site is under constant development.
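The invalidation mentioned above can be created from the CLI, for example (the distribution id is a placeholder):
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"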

App Engine: Static Files Not Updating on Deploy

I pushed an HTML static file containing an Angular SPA as the catch-all handler for my custom domain with these settings:
- url: /(api|activate|associate|c|close_fb|combine|import|password|sitemap)($|/.*)
  script: gae.php
- url: /.*
  static_files: public/static/app/v248/es/app.html
  upload: public/static/app/v248/es/app.html
  expiration: "1h"
That worked fine, but if I push a new app.html it doesn't update. I've tried changing the local path, deploying a new app version, even replacing the catch-all handler with a custom PHP endpoint, but it doesn't work; the response is still the first version of app.html I uploaded.
Other people have had the same problem (CSS File Not Updating on Deploy (Google AppEngine)), and it looks like it is related to Google CDN cache but, as far as I know, there isn't any way to flush it.
There is a way to flush static files cached by your app on Google Cloud.
Head to your Google Cloud Console and open your project. Under the left hamburger menu, head to Storage -> Cloud Storage -> Browser. There you should find at least one bucket: your-project-name.appspot.com. Under the Lifecycle column, click on the link for your-project-name.appspot.com. Delete any existing rules, since they may conflict with the one you will create now.
Create a new rule by clicking on the 'Add A Rule' button. For the action, select "Set storage to nearline". For the object conditions, choose only the 'Number of newer versions' option and set it to 1. Click on the 'Continue' button and then click 'Create'.
This new rule will take up to 24 hours to take effect, but at least for my project it took only a few minutes. Once it is up and running, the version of the files being served by your app under your-project-name.appspot.com will always be the latest deployed, solving the problem. Also, if you are routinely editing your static files, you should remove any expiration element from handlers related to those static files and the default_expiration element from the app.yaml file, which will help avoid unintended caching by other servers.
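For reference, the same rule expressed as a lifecycle configuration that you could apply with gsutil might look like this (a sketch; the bucket name is a placeholder):
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"numNewerVersions": 1}
    }
  ]
}
gsutil lifecycle set lifecycle.json gs://your-project-name.appspot.com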
When you change static files in an App Engine application, the changes will not be available immediately, due to cache, as you already imagined. The cache in Google Cloud cannot be manually flushed, so instead I would recommend changing the expiration time to a shorter period (by default it is 10 minutes) if you want to test how it works, and later setting an appropriate expiration time according to your requirements.
Bear in mind that you can change the static cache expiration time both for all static files or for just the ones you choose, just by setting the proper element in the app.yaml file.
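For example, in app.yaml you can set a short cache period globally and/or override it per handler (the paths here are placeholders):
default_expiration: "10m"

handlers:
- url: /static
  static_dir: public/static
  expiration: "1m"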
2020 Update:
For my application I found that App Engine started failing to detect my latest app deployments once I reached 50 Versions in my Versions list.
See (Burger Menu) -> App Engine -> Versions
After deleting a bunch of old versions, the next deploy picked up my latest changes immediately. Not sure if this is specific to my account or billing settings, but that solved it for me.
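Old versions can also be listed and removed from the command line, for example:
gcloud app versions list
gcloud app versions delete VERSION_ID_1 VERSION_ID_2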
I had my static files in a separate service on Google Cloud Platform. My problem was that I hadn't executed
gcloud app deploy dispatch.yaml
Once executed, everything was fine. I hope it helps.
Another problem that could be causing this is caching in Google's frontend, which depends on the cache header returned by your application. In my case, I opened Firefox's inspector on the Network tab and saw that the stale file had a cache-control setting of 43200 seconds, i.e. 12 hours.
This was for a file returned from Flask, so I fixed this by explicitly specifying max-age in the Flask response from my GAE code:
return flask.send_from_directory(directory, filename, max_age=600)
This causes intermediate caches such as Google's frontend to only cache the file for a period of 600 seconds (10 minutes).
Unfortunately, once a file has been cached there is no way to flush it, so you will have to wait out the 12 hours. But it will solve the problem the next time.

GAE still serving image from google cloud storage after calling delete_serving_url and deleting file

Current procedure to serve image is as follows:
Store image on google cloud storage
Get blob_key: google.appengine.ext.blobstore.create_gs_key(filename)
Get url: google.appengine.api.images.get_serving_url(blob_key,size=250,secure_url=True)
To remove the image, after retrieving the blob_key:
Delete serving url:
google.appengine.api.images.delete_serving_url(blob_key)
Delete the Google Cloud Storage file: cloudstorage.delete(filename)
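Put together, the removal flow is roughly the following (a sketch; filename is assumed to be in the /gs/<bucket>/<object> form expected by create_gs_key):
from google.appengine.api import images
from google.appengine.ext import blobstore
import cloudstorage

def remove_image(filename):
    blob_key = blobstore.create_gs_key(filename)
    images.delete_serving_url(blob_key)
    # cloudstorage.delete() takes the '/<bucket>/<object>' form, without the '/gs' prefix
    cloudstorage.delete(filename[len('/gs'):])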
Issue
The issue is that the URL is still serving for an undefined amount of time, even though the underlying image no longer exists on Google Cloud Storage. Most of the time the URL returns 404 within ~24 hours, but I have also seen one image still serving now (~2 weeks).
What are the expectations about the promptness of the delete_serving_url call? Any alternatives to delete the url faster?
I can address one of your two questions. Unfortunately, it's the less helpful one. :/
What are the expectations about the promptness of the delete_serving_url call?
Looking at the Java documentation for getServingUrl, they clearly spell out to expect it to take 24 hours, as you observed. I'm not sure why the Python documentation leaves this point out.
If you wish to stop serving the URL, delete the underlying blob key. This takes up to 24 hours to take effect.
The documentation doesn't explain why one of your images would still be serving after 2 weeks.
It is also interesting to note that they don't reference deleteServingUrl as part of the process to stop serving a blob. That suggests to me that step (1) in your process to "delete the image" is unnecessary.

Manually add entity to empty Google App Engine DataStore

From the tutorial, which I confirmed by creating a simple project, the index.yaml file is auto-generated when a query is run. What I further observe is that until then the admin console (http://localhost:8080/_ah/admin/datastore) does not show the data-store.
My problem is this: I have a project for which data/entities are to be added manually through the datastore admin console. The website is only used to display/retrieve data, not to add data to the data-store.
How do I get my data-store to appear on the console so I can add data?
Yes, I have tried retrieving from the empty data-store through the browser, just to get the index.yaml to populate, etc. But that does not work.
The easiest way is probably just to create a small Python script inside your project folder and create your entities in that script. Assign it to a URL handler that you'll use once, then disable (see the sketch below).
You can even do it from the python shell. It's very useful for debugging, but you'll need to set it up once.
http://alex.cloudware.it/2012/02/your-app-engine-app-in-python-shell.html
In order to do the same on production, use the remote_api:
https://developers.google.com/appengine/articles/remote_api
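A rough sketch of such a one-off seeding handler (the model, property and route are hypothetical):
import webapp2
from google.appengine.ext import ndb

class Article(ndb.Model):
    title = ndb.StringProperty()

class SeedHandler(webapp2.RequestHandler):
    def get(self):
        # creating one entity makes its kind show up in the datastore viewer
        Article(title='First article').put()
        self.response.write('seeded')

app = webapp2.WSGIApplication([('/seed', SeedHandler)])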
This is a very strange question.
The automatic creation of index.yaml only happens locally, and is simply to help you create that file and upload it to AppEngine. There is no automatic creation or update of that file once it's on the server: and as the documentation explains, no queries can be run unless the relevant index already exists in index.yaml.
Since you need indexes to run queries, you must create that file locally - either manually, or by running the relevant queries against your development datastore - then upload it along with your app.
However, this has nothing at all to do with whether the datastore viewer appears in the admin. Online, it will always show, but only entity kinds that actually have an instance in the store will be shown. The datastore viewer knows nothing about your models, it only knows about kinds that exist in the datastore.
On your development server you can use the interactive console to create/instantiate/save an entity, which should cause the entity class to appear in the datastore interface, like so:
from google.appengine.ext import ndb

class YourEntityModel(ndb.Model):
    pass

YourEntityModel().put()

App Engine upload mechanism for changed data: does it upload the whole app or a delta?

I upload my_app to App Engine with:
appcfg.py update c:\my_app ...
If I have already uploaded my_app and then make a minor change to a file,
does it upload the whole project to App Engine and overwrite the whole previous project?
Or does it upload only the relevant change and overwrite the relevant part?
And what is the case for this command:
bulkloader.py --restore --filename=my_kind.dump ...
Did you try it?
update uploads the whole application each time. There's no concept of a delta. Normally, when you upload a new version I would suggest changing the version setting - that way you can keep up to 10 previous versions of your app on the site, and only set the new one to be the default once you are sure it is working.
If you upload without changing the version, AppEngine actually creates a new version before deleting the old one, so you need a spare slot in your versions list.
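For example, the version can be set in app.yaml or, if I recall correctly, overridden on the command line with appcfg.py's -V/--version option (the version name is a placeholder):
# in app.yaml
version: release-2
# or when uploading
appcfg.py update -V release-2 c:\my_app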
I don't understand your question about the bulkloader. Are you asking if that does a delta? No, it can't, because it sends the data serially via the remote API - there's no way for it to know in advance which rows in your data file already exist in the datastore.
