Multi-object deletion in Google Cloud Storage with Go - google-app-engine

I'm using Go to interact with Cloud Storage.
I can't use gsutil from App Engine, so deleting with the rm command is not an option.
I can delete one object with DeleteObject, or iterate over a range of objects and delete each one, but I'm looking for another solution, something like DeleteMulti in the Datastore API.
Do you have a better solution for multi-object deletion?

Each object that is deleted requires one call to GCS. Iterating over each object and calling delete is the easiest and likely best solution. If you need faster performance, you may want to send multiple delete requests to GCS at a time using multiple threads.
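For illustration, here is a minimal sketch of that fan-out approach using goroutines, assuming the cloud.google.com/go/storage client; the bucket name, prefix, and worker count are placeholders:
package main

import (
	"context"
	"log"
	"sync"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	bucket := client.Bucket("example-bucket")
	names := make(chan string)

	// A small pool of workers, each issuing individual Delete calls.
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for name := range names {
				if err := bucket.Object(name).Delete(ctx); err != nil {
					log.Printf("delete %s: %v", name, err)
				}
			}
		}()
	}

	// Feed object names from a listing into the workers.
	it := bucket.Objects(ctx, &storage.Query{Prefix: "tmp/"})
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		names <- attrs.Name
	}
	close(names)
	wg.Wait()
}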
If this is a significant performance issue for your app, there is, however, another way, which I hesitate to mention because it adds significant complexity and doesn't buy much extra performance. GCS supports batching calls together into a single connection. It likely won't be much faster than sending delete requests over several threads, but it does behave more like a DeleteMulti call.
Effectively, batch calls work by sending a multipart HTTP request to the /batch path, each part of which represents an HTTP call. A request to delete several objects would look like this:
POST /batch HTTP/1.1
Host: www.googleapis.com
Content-Length: content_length
Content-Type: multipart/mixed; boundary="===============7330845974216740156=="
Authorization: Bearer oauth2_token

--===============7330845974216740156==
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: <b29c5de2-0db4-490b-b421-6a51b598bd22+1>

DELETE /storage/v1/b/example-bucket/o/obj1 HTTP/1.1
accept: application/json

--===============7330845974216740156==
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: <b29c5de2-0db4-490b-b421-6a51b598bd22+2>

DELETE /storage/v1/b/example-bucket/o/obj2 HTTP/1.1
accept: application/json

--===============7330845974216740156==--
There's more documentation on it here: https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch
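If you do want to try this from Go, here is a minimal sketch that assembles the multipart body by hand. It assumes the per-API batch endpoint https://www.googleapis.com/batch/storage/v1 (the older global /batch path is what the raw example above shows) and an OAuth token in an environment variable; both are assumptions, not confirmed details:
package main

import (
	"bytes"
	"fmt"
	"log"
	"mime/multipart"
	"net/http"
	"net/textproto"
	"net/url"
	"os"
)

// buildBatchDelete assembles a multipart/mixed body in which each part is an
// embedded HTTP DELETE for one object, mirroring the raw request above.
func buildBatchDelete(bucket string, objects []string) (*bytes.Buffer, string) {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	for i, obj := range objects {
		h := textproto.MIMEHeader{}
		h.Set("Content-Type", "application/http")
		h.Set("Content-Transfer-Encoding", "binary")
		h.Set("Content-ID", fmt.Sprintf("<batch+%d>", i+1))
		part, _ := w.CreatePart(h)
		// Object names must be URL-encoded in the embedded request path.
		fmt.Fprintf(part, "DELETE /storage/v1/b/%s/o/%s HTTP/1.1\r\naccept: application/json\r\n\r\n",
			bucket, url.PathEscape(obj))
	}
	w.Close()
	return &body, w.Boundary()
}

func main() {
	body, boundary := buildBatchDelete("example-bucket", []string{"obj1", "obj2"})
	req, err := http.NewRequest(http.MethodPost, "https://www.googleapis.com/batch/storage/v1", body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "multipart/mixed; boundary="+boundary)
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GCS_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the body is itself multipart: one response per part
}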
But, again, I recommend just sending individual delete requests. Batch calls are not atomic, which means some deletes might succeed while others fail. In the event one of the batch delete operations fails, you'll need to parse the batch response message to figure out which call failed so that you can retry it, which is very likely not worth the effort for the gain you'll get back.

Related

WireMock keep-alive

I want to know whether WireMock supports keep-alive by default, or whether I need to set it explicitly somewhere.
I have set up a basic standalone WireMock which accepts requests and generates a JSON payload. However, when I curl -v the service, I do not see a Connection: keep-alive header in the response.
Also, when I make calls to the mock service using the Apache Benchmark tool with the -k option, I see a lot of connections getting created and destroyed, which suggests WireMock is not supporting keep-alive by default.
I assume there must be some setting on the WireMock server to tell it to support persistent connections by default?
Regards,
Adi
You could have that header returned by updating your mapping as such:
"response": {
    "headers": {
        "Connection": "keep-alive",
        "Content-Type": "application/json"
    }
}
This, however, doesn't mean that the connection is actually persistent, or that persistent connections are supported by WireMock. There is a "persistent" option for mappings, but per the WireMock docs, it:
Indicates that the stub mapping should be persisted immediately on create/update/delete and survive resets to default.
https://wiremock.org/docs/api/#tag/Stub-Mappings/paths/~1__admin~1mappings/get

OneDrive multipart upload error: HTTP 400 Bad Request

When I upload a file to OneDrive with the following request:
HTTP POST https://apis.live.net/v5.0/{folderid}/files?access_token={ACCESS_TOKEN}
Content-Type: multipart/form-data; boundary={boundary}
--{boundary}
Content-Disposition: form-data; name="file"; filename="{filename}"
Content-Type: application/octet-stream
{File content goes here}
--{boundary}
I am following the guide from https://msdn.microsoft.com/en-us/library/office/dn659726.aspx
It always gives me the error "java.lang.Exception: HTTP 400. Bad Request".
Could the OneDrive team or anyone advise me on what is going wrong?
Thanks and Best Regards,
Ronald
It seems your request is malformed. I don't know how OneDrive works, but after a quick look at your link, did you try removing the 'HTTP' before the 'POST' line?
Also, is your file content properly sent?
From the URL, https://apis.live.net/v5.0/{folderid}/files?access_token={ACCESS_TOKEN}, it looks like you are using the deprecated Live Connect API. I would recommend using the supported APIs located at https://api.onedrive.com, with the upload method described at https://dev.onedrive.com/items/upload_put.htm, where the request does not need the multipart MIME schema:
PUT .../drive/root:/{parent-path}/{filename}:/content
Content-Type: text/plain
The contents of the file goes here.
Get more information about these APIs at https://dev.onedrive.com. If the updated upload method is still causing you trouble, please make sure to include the full HTTP response headers and body.
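For illustration, a minimal sketch of that simple PUT upload in Go; the file path, item path, and token variable are placeholders, not confirmed details of your setup:
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("report.txt") // local file to upload (placeholder)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}

	// PUT the raw bytes to the :/content endpoint; no multipart body needed.
	endpoint := "https://api.onedrive.com/v1.0/drive/root:/Documents/report.txt:/content"
	req, err := http.NewRequest(http.MethodPut, endpoint, f)
	if err != nil {
		log.Fatal(err)
	}
	req.ContentLength = fi.Size()
	req.Header.Set("Authorization", "Bearer "+os.Getenv("ONEDRIVE_TOKEN"))
	req.Header.Set("Content-Type", "text/plain")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 200/201 with the new item's metadata
}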

Returning gzipped response bodies on App Engine

When caching items in App Engine's memcache, I use gzip compression to save space and get below the 1 MB limit for some files.
Since I also put rendered pages into memcache, I thought it would be nice, and much quicker, to directly return the gzipped body to the client if it accepts gzip encoding.
Unfortunately, the request's Accept-Encoding only has the value identity (using the App Engine dev server with Go), which to me means I have to return the body as-is (i.e. plain HTML).
Is one not supposed to gzip content oneself? Or could I always return gzipped content with the appropriate headers, and the App Engine infrastructure would decompress it when the client does not support compression?
After all, I hope to get even better response times by caching the response in its final output state.
For caching the response, if your response is public (same copy for all users), you can make use of Google's edge cache by setting the proper HTTP headers, for example:
Cache-Control: public,max-age=86400
Expires: Sat, 16 May 2015 07:23:15 +0000
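For illustration, a minimal sketch of a plain net/http handler emitting those headers; the body and max-age value are placeholders:
package main

import (
	"fmt"
	"net/http"
	"time"
)

// handler serves a publicly cacheable page; with these headers Google's
// edge cache can serve repeat requests without hitting your app.
func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Cache-Control", "public, max-age=86400")
	w.Header().Set("Expires", time.Now().UTC().Add(24*time.Hour).Format(http.TimeFormat))
	fmt.Fprint(w, "<html>...</html>") // the cached, rendered body
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}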
About compression: as far as I know, Google automatically compresses the content of the HTTP response whenever possible. There is no need to handle this manually.

413 - Request Entity Too Large

I can upload small drafts OK using the metadata endpoint (https://www.googleapis.com/gmail/v1/users/me/drafts), e.g.:
{"message":{"raw":"TUlNRS1WZXJzaW9uOiAxLjANClgtTWFpbGVyOiBNYWlsQmVlLk5FVCA3LjAuNC4zMjgNClRvOiBjaHJpcy53b29kQG5vdGFibHlnb29kLmNvbQ0KU3ViamVjdDogdGVzdCENCkNvbnRlbnQtVHlwZTogbXVsdGlwYXJ0L21peGVkOw0KCWJvdW5kYXJ5PSItLS0tPV9OZXh0UGFydF8wMDBfQUFEQV9FOUMzOEZCNy5BMjRFQjI2OSINCg0KDQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5DQpDb250ZW50LVR5cGU6IHRleHQvcGxhaW4NCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IHF1b3RlZC1wcmludGFibGUNCg0KVGVzdCBjb250ZW50DQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5DQpDb250ZW50LVR5cGU6IGFwcGxpY2F0aW9uL29jdGV0LXN0cmVhbTsNCgluYW1lPSJUcmFjZS5sb2ciDQpDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50Ow0KCWZpbGVuYW1lPSJUcmFjZS5sb2ciDQpDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiBiYXNlNjQNCg0KVTNWdUlESTVJRXBoYmlBeU1ERXlJREV5T2pJek9qTTBMalkzTmlBNklDTXhNREExTXpvZ1YxTkJSVU5QVGs1QlFrOVNWRVZFT2lCVA0KYjJaMGQyRnlaU0JqWVhWelpXUWdZMjl1Ym1WamRHbHZiaUJoWW05eWRDNGdJRWc2TURVMU9USWdSam9uU0VOVFRsUlRiMk5yWlhSZg0KVTJWdVpDY2dRVG9uYzJWdVpDY2dWRG9uVTI5amEyVjBQVFUwTUM0Z1JtbHVhWE5vWldRZ2NtVjBjbmxwYm1jdUp5QU5DbE4xYmlBeQ0KT1NCS1lXNGdNakF4TWlBeE1qb3lNem96TkM0Mk9UQWdPaUFqTVRBd09Eb2dSWEp5YjNJNklDQklPakExTlRreUlFWTZKMU5sYm1SVg0KYzJWeVJHVm1hVzVsWkVoVVZGQk5aWE56WVdkbFFtOWtlU2NnVkRvblJYSnliM0lnYzJWdVpHbHVaeUIxYzJWeUlHUmxabWx1WldRZw0KWm1sc1pTQmpiMjUwWlc1MGN5QjBieUJqYkdsbGJuUWdLRUp2WkhrcExpQlNaWFIxY200OUxURXVKeUFOQ2xOMWJpQXlPU0JLWVc0Zw0KTWpBeE1pQXhNam95TXpvek5DNDJPVElnT2lBak1UQXdOVE02SUZkVFFVVkRUMDVPUVVKUFVsUkZSRG9nVTI5bWRIZGhjbVVnWTJGMQ0KYzJWa0lHTnZibTVsWTNScGIyNGdZV0p2Y25RdUlDQklPakExTlRreUlFWTZKMGhEVTA1VVUyOWphMlYwWDFObGJtUW5JRUU2SjNObA0KYm1RbklGUTZKMU52WTJ0bGREMDFOREF1SUVacGJtbHphR1ZrSUhKbGRISjVhVzVuTGljZ0RRcFRkVzRnTWprZ1NtRnVJREl3TVRJZw0KTVRJNk1qTTZNelF1TmprMElEb2dJekV3TURnNklFVnljbTl5T2lBZ1JYSnliM0lnY21WeGRXVnpkR2x1WnlCQ1lYTnBZeUJCZFhSbw0KWlc1MGFXTmhkR2x2Ymk0TkNsTjFiaUF5T1NCS1lXNGdNakF4TWlBeE1qb3lOVG96TVM0NE1EWWdPaUFqTVRBeE5Ub2dVMmgxZEdSdg0KZDI0NklDQkdjbVZsSUZCeWIzaDVJRk5sY25acFkyVWdjM1J2Y0hCbFpDNE5DZz09DQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5LS0NCg"}}
However, when I try a larger file that's still within the 35MB limit (e.g. an 11MB file), I get the following HTTP WebException:
The remote server returned an error: (413) Request Entity Too Large.
Is this a bug in the new API, or is this down to the fact I should be using the media endpoint instead for this kind of thing? If so, can anybody provide an example of how to do this using the .NET Client?
You need to use the /upload "media upload" path to upload anything over a few MB. The URL and POST format are slightly different:
You'd do:
POST https://www.googleapis.com/upload/gmail/v1/users/userId/drafts
add an HTTP header like "Content-Type: multipart/related; boundary=\"part_boundary\""
POST body looks more like:
--part_boundary
Content-Type: application/json; charset=UTF-8
{
}
--part_boundary
Content-Type: message/rfc822
From: script@example.org
To: user@example.com
Subject: test
body here
--part_boundary--
See the Gmail API documentation on uploads for more info (it links on to the general media-upload guide for Google APIs).
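For illustration, a minimal sketch of assembling that multipart/related request in Go (not the .NET client the question asks about, but the request shape is the same); the uploadType=multipart query parameter, token variable, and addresses are assumptions:
package main

import (
	"bytes"
	"fmt"
	"log"
	"mime/multipart"
	"net/http"
	"net/textproto"
	"os"
)

func main() {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)

	// Part 1: draft metadata as JSON (empty here, as in the example above).
	metaHdr := textproto.MIMEHeader{}
	metaHdr.Set("Content-Type", "application/json; charset=UTF-8")
	meta, _ := w.CreatePart(metaHdr)
	meta.Write([]byte("{}"))

	// Part 2: the raw RFC 822 message.
	msgHdr := textproto.MIMEHeader{}
	msgHdr.Set("Content-Type", "message/rfc822")
	msg, _ := w.CreatePart(msgHdr)
	msg.Write([]byte("From: script@example.org\r\nTo: user@example.com\r\nSubject: test\r\n\r\nbody here\r\n"))
	w.Close()

	endpoint := "https://www.googleapis.com/upload/gmail/v1/users/me/drafts?uploadType=multipart"
	req, err := http.NewRequest(http.MethodPost, endpoint, &body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GMAIL_TOKEN"))
	req.Header.Set("Content-Type", "multipart/related; boundary="+w.Boundary())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}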

Headers and caching in REST service call from Silverlight

I've been developing a small Silverlight client which talks to a REST service built using the WCF Web API.
When the service is called using GET, it kicks off a long-running process that generates a resource, so the service returns 'Accepted' and a URI in a Location header pointing to where the resource will be found:
Server: ASP.NET Development Server/10.0.0.0
Date: Fri, 18 Nov 2011 09:00:17 GMT
X-AspNet-Version: 4.0.30319
Content-Length: 3
Location: http://localhost:52878/myservice?fileid=f68201f6-9d77-4818-820e-e5e796e9710a
Cache-Control: public, max-age=21600
Expires: 21600
Content-Type: text/plain
Connection: Close
Now, in my Silverlight client, I need to access this header information; however, using the BrowserHttp stack, this is not possible, so I've switched to ClientHttp, which makes it possible for me to access the header information returned.
However, the ClientHttp stack doesn't support content caching:
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/10/14/silverlight-3-s-new-client-networking-stack.aspx
which is causing me trouble. I want the same resource to be returned for 6 hours before a new one is generated.
Is there a way to get the best of both: being able to access the header info AND have content caching?
TIA
Søren
Stop using a header to return the information needed by the client code.
If you include the required information in the entity body, either raw or encoded in some message format (e.g. XML or JSON), then you can continue to use BrowserHttp and benefit from its caching.
Using the headers is the correct way to convey this information. That's why it's in the standard.
I don't do Silverlight, but what I take from that post is that you will now need to implement the caching yourself. Using BrowserHttp leverages the browser's caching mechanism; with ClientHttp you are dropping closer to the metal and will have to implement caching on your own.
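Purely as an illustration of the idea (sketched in Go rather than Silverlight), here is a minimal time-bounded response cache keyed by URL; the 6-hour TTL mirrors the question, and all names are illustrative:
package main

import (
	"io"
	"net/http"
	"sync"
	"time"
)

// cachedResponse holds a fetched body and when it expires.
type cachedResponse struct {
	body    []byte
	expires time.Time
}

// responseCache is a minimal time-bounded cache keyed by URL.
// It holds the lock across the fetch for simplicity; a real client
// would want finer-grained locking.
type responseCache struct {
	mu      sync.Mutex
	entries map[string]cachedResponse
	ttl     time.Duration
}

// Get returns the cached body for u, refetching once the TTL has passed.
func (c *responseCache) Get(u string) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[u]; ok && time.Now().Before(e.expires) {
		return e.body, nil
	}
	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	c.entries[u] = cachedResponse{body: body, expires: time.Now().Add(c.ttl)}
	return body, nil
}

func main() {
	cache := &responseCache{entries: map[string]cachedResponse{}, ttl: 6 * time.Hour}
	if _, err := cache.Get("http://localhost:52878/myservice"); err != nil {
		// handle fetch error
	}
}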
