Trying to access my AWS Glacier vaults returns an empty list, even though I have existing vaults in the region

I was trying to delete a Glacier vault, which turned out to be impossible to do from the console because the vault was not empty. I learned that I had to use the AWS CLI to first delete all the archives from the vault, so I first tried to list all my vaults in Ireland (EU) using the command:
aws glacier list-vaults --account-id - --region us-west-1
This returns:
{
    "VaultList": []
}
To be sure that I was logged in properly, I tried another command, 'aws s3 ls', which correctly returned all my S3 buckets.
I am, in fact, using the AWS CLI for the first time, so I might be missing something trivial here, but I feel rather stuck at the moment. Does anyone have an idea what I might be doing wrong?
Thanks!

The solution is probably to fix the --region parameter: the command queries us-west-1, but the Ireland (EU) region is eu-west-1.
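Assuming your vaults really are in the Ireland region, the corrected command would be:
aws glacier list-vaults --account-id - --region eu-west-1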

Related

Salesforce CI/CD Pipeline with GitHub Actions

Please help me with this use case. Thanks in advance for any help.
I am creating a Salesforce project in VS Code. I have cloned the repository and pushed it to GitHub. I have three branches in the repo, named Feature, Developer and Master. Feature is the base branch. Whenever I change or write code, it is pushed to Feature on deployment.
Now I want a Dev org attached to the Developer branch as well, so that whenever tested code is pushed from Feature to the Developer branch, or I pull the code into Developer from Feature, all of that code is deployed to the attached org.
And similarly when pushing code from Developer to Master.
I wrote the workflow and nearly got it working. On creating a pull request the workflow ran, but it showed an error during build and deploy: decryption was failing with something like "can't read the directory". Finally, when I removed the encrypt/decrypt keys, the authorization step stopped passing and kept showing the same error: OAuth client secret of personal connected app?: sh: 1: read: Illegal option -s.
The YouTube video I followed confused me while encrypting the server key. The author got a hash key and hash IV from somewhere and generated a new hash key and hash IV to produce a server.key.enc.
Which tutorial are you following? I don't recognise the steps about decrypting keys; it might be an old or overcomplicated method.
There's a cool blog post at https://tigerfacesystems.com/blog/sfdx-continuous-integration and/or you can look at what Salesforce does themselves in the LWC recipes repository (you know, the one most of the LWC documentation points to): https://github.com/trailheadapps/lwc-recipes/blob/main/.github/workflows/ci.yml
You'd have to log in with sfdx to all the target orgs you need, use the "sfdx force:org:display" and "Sfdx Auth Url" trick, save each org's value as a different GitHub variable, and create similar scripts.
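A rough sketch of that trick (the org alias DevOrg and the secret name DEV_ORG_AUTH_URL are placeholders I'm assuming, not anything from the tutorial):
# locally, for an org you have already authorised, print its "Sfdx Auth Url"
sfdx force:org:display --verbose -u DevOrg
# copy that value into a GitHub secret, e.g. DEV_ORG_AUTH_URL, then in the
# workflow step that runs for the Developer branch:
echo "$DEV_ORG_AUTH_URL" > ./auth-url.txt
sfdx auth:sfdxurl:store -f ./auth-url.txt -a DevOrg
sfdx force:source:deploy -p force-app -u DevOrg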

Can I delete container images from Google Cloud Storage artifacts bucket?

I have a Google App Engine app, which connects to Google Cloud Storage.
I noticed that the amount of data stored was unreasonably high (4.01 GB, when it should be 100MB or so).
So, I looked at how much each bucket was storing, and I found that there was an automatically created bucket called us.artifacts. that was taking up most of the space.
I looked inside, and all it has is one folder: containers/images/.
From what I've Googled, it seems like these images come from Google Cloud Build.
My question is, can I delete them without compromising my entire application?
I have solved this problem by applying a deletion rule. Here's how to do it:
Open the project in Google Cloud console
Open the storage management (search for "Storage" for example).
In the Browser tab, select the container us.artifacts....
Now, open the Lifecycle section.
Click on Add a rule and provide the following conditions:
In the action, select Delete object
In the conditions, select Age and enter for example 3 days
Click on Create to confirm the creation
Now all objects older than 3 days will be automatically deleted. It might take a few minutes for this new rule to be applied by Google Cloud.
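If you prefer the command line, the same rule can be applied with gsutil (a sketch; the bucket name below is a placeholder for your actual us.artifacts bucket):
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 3 } }
  ]
}
EOF
# apply the lifecycle configuration to the artifacts bucket
gsutil lifecycle set lifecycle.json gs://us.artifacts.your-project.appspot.com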
For those of you seeing this later on, I ended up deleting the folder, and everything was fine.
When I ran Google Cloud Build again, it added items back into the bucket, which I had to delete later on.
As @HarshitG mentioned, this can be set up to happen automatically via deletion rules in Cloud Storage. As for myself, I added a deletion step to my deployment GitHub action.
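Such a step might look roughly like this (a sketch, assuming the workflow is already authenticated with gcloud and the artifacts bucket has the default name; the || true just keeps the step from failing when the folder is already empty):
gsutil -m rm -r gs://us.artifacts.your-project.appspot.com/containers/images || true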
Here is the reference to the documentation: link
Built container images are stored in the app-engine folder in Container Registry. You can download these images to keep or run elsewhere. Once deployment is complete, App Engine no longer needs the container images. Note that they are not automatically deleted, so to avoid reaching your storage quota, you can safely delete any images you don't need. For more information about managing images in Container Registry, see the Container Registry documentation.
This can be automated by adding a lifecycle rule, as @HarshitG mentioned.
You can add a trigger to your lifecycle rules on console.cloud.google.com.
Access the bucket with artifacts (default is "us.artifacts.yourAppName.appspot.com")
Go to "Life cycle".
Click on "Add a rule".
Check "Object delete" and press "Continue".
Choose the filter for deletion; I chose "Age" and set it to three days, so any object older than 3 days is automatically deleted.
Click on "Create" and the rule is active; you no longer need to visit the bucket every day to clean it.
Same issue. Thanks for the update, Caleb.
I'm having the same issue, but I don't have an App running; I just have:
Firebase Auth
Firestore
Firebase Functions
Cloud Storage
Not sure why I have 4GB stored in those containers, and I'm not sure if I should delete them or if that would break my functions.
UPDATE:
I deleted the container folder and everything still works. Not sure if those are backups or whatnot, but I can't find anything online or in the docs. I will post here if something happens. As soon as a Cloud Function ran, the folder had 33 files again.
I recommend against setting up a lifecycle rule on your Storage buckets. It will likely break subsequent updates to the function (as described in Cloud Function build error - failed to get OS from config file for image).
If you are interested in cleaning up container images, you should instead delete the container images stored in Google Container Registry: https://console.cloud.google.com/gcr. Deleting container images in GCR repos will automatically clean up the objects stored in your Cloud Storage bucket.
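This can also be done from the command line with gcloud (a sketch; my-project and the image path are placeholders, and DIGEST stands for whatever sha256 digest list-tags prints for the image you want to remove):
# list the repositories and images in the project
gcloud container images list --repository=us.gcr.io/my-project
# list the tags/digests for one image
gcloud container images list-tags us.gcr.io/my-project/some-image
# delete a specific image by digest, removing its tags as well
gcloud container images delete us.gcr.io/my-project/some-image@sha256:DIGEST --force-delete-tags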
https://issuetracker.google.com/issues/186832976 has relevant information from a Google Cloud Functions engineer.

403 forbidden error when changing s3 bucket files

I am trying to host my website with Amazon S3 static website hosting.
I created a bucket, set up the permissions and the bucket policy, etc.
At first it was returning a 403 Forbidden error when I tried to access my endpoint.
After leaving it for a weekend I went back to have another go and it was working.
Now I tried to delete the contents of the bucket and add some different files (basically the same, just a few changes in some paragraphs).
And once again it is giving me a 403 Forbidden error. My question is: is there a waiting period or something when a bucket or its contents are changed?
Or is it just me doing something wrong? I didn't change my policy or permissions, so I don't see why it has gone back to giving me a "403 forbidden" message again.
I have looked at previous questions and also aws documentation but couldn’t find anything specific to this.
Appreciate any information.
You need to make all the files public. You can make them public using either the S3 console or the AWS CLI. It does not take any time to apply the public setting; it takes effect instantly.
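For example, with the AWS CLI (a sketch; my-site-bucket is a placeholder, and on newer buckets you may also need to relax the Block Public Access settings for ACLs to work at all):
# make a single object public
aws s3api put-object-acl --bucket my-site-bucket --key index.html --acl public-read
# or re-copy everything in place with a public-read ACL
aws s3 cp s3://my-site-bucket/ s3://my-site-bucket/ --recursive --acl public-read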

Mongodb data corruption from heroku app cause & prevention

I have a free Heroku plan and a Node.js app on the Heroku server. The app is built with MEAN.js, so the code for the MongoDB connections is exactly what you would find in the configuration files. I use a free MongoLab Mongo database to store the data.
Occasionally (depending on how much I interact with or change the code, I believe), the MongoDB data is corrupted. I believe this to be true because I use a script to register names, and I can always log into them for a while until I receive a no user/pass error. If I get this error and immediately create a new user, the new user can successfully be logged in and out. All of the user data is still in the database. I also have a few other CRUD modules that use different collections in the same database, and I (so far) have not seen anything happen to that data, or to any of the data besides the passwords.
I don't know where my error could be coming from, or what code is relevant, as I haven't touched the config files at all and to my knowledge haven't written any code that looks at user passwords at all. Also, my user object is occasionally empty (user = "") in the markup, but that bug was introduced after the original one, I believe while I was trying to find out what was going on. Again, I don't have any clue, so I included it just in case. Thanks!
After a lot of trial and error, I found the cause of my problem.
After I create these users, I go into my MongoLab account and manually edit their roles based on which module I'm working on (I'm doing role-based authentication). It is when I edit the data by hand that the passwords become corrupted. I don't know why, but I've pinpointed the problem to there. I've messed with some other data, with similar results.

TransformationError on blob via get_serving_url (app engine)

TransformationError
This error keeps coming up for a specific image.
There are no problems with other images and I'm wondering what the reason for this exception could be.
From Google:
"Error while attempting to transform the image."
Update:
On the development server it works fine; it only fails live.
Thanks
Without more information I'd say either the image is corrupted, or it's in a format that cannot be used with get_serving_url (an animated GIF, for example).
I fought this error forever, and in case anyone else hits the dreaded TransformationError, please note that you need to make sure your app has owner permissions on the files you want to generate a URL for.
It'll look something like this in your IAM tab:
App Engine app default service account
your-project-name-here@appspot.gserviceaccount.com
In IAM, on that member, scroll down to Storage and grant "Storage Object Admin" to that user. That is, as long as you have your storage bucket under the same project... if not, I'm not sure how...
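The same grant can also be made from the command line (a sketch; substitute your own service account and bucket names):
gsutil iam ch serviceAccount:your-project-name-here@appspot.gserviceaccount.com:roles/storage.objectAdmin gs://your-bucket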
This TransformationError exception seems to show up for permission errors, so it is a bit misleading.
I was getting this error because I had used the Bucket Policy Only permissions on a bucket in a different project.
However, after changing this back to object-level permissions and giving my App Engine app (from the other project) access, I was able to perform the App Engine Standard Images operation (google.appengine.api.images.get_serving_url) that I was trying to implement.
Make sure that you set your permissions correctly either in the Console UI or via gsutil like so:
gsutil acl ch -u my-project-a@appspot.gserviceaccount.com:OWNER gs://my-project-b