What is the most efficient way to get each email's "read status" after listing messages for some search query?
As mentioned in the "include extra field" question for messages#list, there are field options in Google's Try API, but it doesn't return results with those fields. Maybe this is a bug, or the Gmail team didn't add these fields to the response for efficiency reasons.
Assuming we cannot get any extra fields (including labels) from the messages#list API, what is the best way to get only the read status for the list of messages obtained from it? I want to avoid loading anything other than the read status, which we get when using "minimal" with the get API.
If you only need the message.id and read/unread status, you can do that without ever calling messages.get(), and that would be most efficient. Simply make two list() calls, one with "is:unread" and the other with "is:read", and that'll provide the info you need.
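For example, a minimal sketch with the browser gapi client (assuming it's already loaded and authorized for Gmail; the query value is illustrative):

// Two list() calls, one per read status; each returns only message ids.
const query = 'subject:invoice'; // stand-in for your search query
const unread = await gapi.client.gmail.users.messages.list({
  userId: 'me',
  q: query + ' is:unread'
});
const read = await gapi.client.gmail.users.messages.list({
  userId: 'me',
  q: query + ' is:read'
});
// Every id in unread.result.messages is unread, and likewise for read.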
Alternatively, if you need more than just read/unread status after doing a messages.list(), pass those message.ids to a (batched) messages.get() call with format=MINIMAL (or METADATA or whatnot). You should be able to do that quite efficiently and quickly.
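For the batched variant, a sketch (again assuming an authorized gapi client; listResponse stands for the result of your earlier messages.list() call):

// Batch one messages.get() per id so they go out in a single HTTP request.
const ids = (listResponse.result.messages || []).map((m) => m.id);
const batch = gapi.client.newBatch();
ids.forEach((id) => {
  batch.add(gapi.client.gmail.users.messages.get({
    userId: 'me',
    id: id,
    format: 'minimal'
  }));
});
const responses = await batch;
// responses.result maps batch entry ids to the individual get() results;
// each message's labelIds will contain "UNREAD" when it is unread.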
I haven't found resources online to solve my problem.
I'm creating an app with React Native that fetches and shows news articles from my database.
At the top of the page, there are some buttons with filters inside, for example:
one button "energy",
one button "politics"
one button "people"
one button "china"
etc...
Every time I press one of those buttons, the corresponding filter is stored in an array "selectedFilters", and I want to query my database to show only the articles matching those filters.
Multiple filters can be selected at the same time.
I know one way of doing it, with a POST request:
await fetch('http://187.345.32.33:3000/fetch-articles', {
  method: 'POST',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
But the fact is, I've read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically what I should use is a GET request.
But I don't know how to send an array with a GET request.
I read online that I can pass multiple parameters in my URL (for example: arr[0]=selectedFilters[0]&arr[1]=...), but the fact is I never know in advance how many items will be in my array.
And also I'm not sure if I could write it exactly the same way as my POST request above, but with GET:
await fetch('http://187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
or if I can only pass items in the URL, but does this work?
await fetch(`http://187.345.32.33:3000/fetch-articles?arr[0]=${selectedFilters[0]}`, {
Or, even better, whether something like this could work:
await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
Thanks for your help
You should definitely use a GET request if your purpose is to fetch the data.
One way of passing the array through the URL is to use a map function to create a comma-separated string with all the filters. This way you don't need to know in advance how many elements are in the array. The server can then read the string from the URL and split it on the commas.
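For example, a rough sketch of that idea (reusing the endpoint from the question; the server-side split is pseudocode):

// Build a comma-separated, URL-encoded string from however many
// filters are currently selected.
const filterParam = selectedFilters.map(encodeURIComponent).join(',');

const response = await fetch(
  `http://187.345.32.33:3000/fetch-articles?filters=${filterParam}`
);
const articles = await response.json();

// Server side (e.g. with Express): req.query.filters.split(',')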
One more method you can try is to keep a filters array on the server side for the session. You can then use a POST/PUT request to modify that array as the user adds or removes filters. Finally, you can use a plain GET request to fetch the news, since the server will already have the filters for that session, as shown in the sketch below.
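A sketch of that session-based approach (the /session-filters endpoint is hypothetical, not something your server already has):

// Keep the server-side array in sync whenever a filter is toggled.
await fetch('http://187.345.32.33:3000/session-filters', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ filters: selectedFilters })
});

// Then fetch the news with a plain GET; the server already knows
// which filters apply to this session.
const response = await fetch('http://187.345.32.33:3000/fetch-articles');
const articles = await response.json();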
But the fact is, I've read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically what I should use is a GET request.
Yes, you do read that everywhere. It's wrong (or at best incomplete).
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” (Fielding, 2009)
It may help to remember that on the HTML web, POST was the only supported method for requesting changes to resources, and the web was catastrophically successful.
For requests that are effectively read only, we should prefer to use GET, because general purpose HTTP components can leverage the fact that GET is safe (for example, we can automatically retry a safe request if the response is lost on an unreliable network).
I'm not sure if I could write it exactly the same way as my POST request above, but with GET
Not quite exactly the same way.
A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain. -- RFC 9110
The right idea is to think about this in the framing of HTML forms; in HTML, the same collection of input controls can be used with both GET and POST. The difference is what the browser does with the information.
Very roughly, a GET form is used when you want to put the key value pairs described by the submitted form into the query part of the request target. So something roughly like
await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
  method: 'GET'
});
Although we would normally want to be using a URI Template to generate the request URI, rather than worrying about escaping everything correctly "by hand".
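In JavaScript specifically, the built-in URL and URLSearchParams objects will do that escaping for you; a sketch using the question's endpoint:

// searchParams.set() percent-encodes the value, so nothing needs to
// be escaped "by hand".
const url = new URL('http://187.345.32.33:3000/fetch-articles');
url.searchParams.set('filters', selectedFilters.join(','));

const response = await fetch(url); // GET is fetch's default method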
However, there's no rule that says general-purpose HTTP components need to support infinitely long URIs (for instance, Internet Explorer used to have a limit just over 2,000 characters).
To work around these limits, you might choose to support POST. It's a tradeoff: you lose the benefits of safe semantics and general-purpose cache invalidation, but you gain that it works in extreme cases.
In order to reduce response time, i.e. to shorten the time the user waits for data when rendering views, I'm trying to determine what's best when interacting with a REST API. I'll be getting an array of items with 5-7 fields each, e.g. name, title, imgUrl. I can either make one big call and traverse the response to get the data I need, or make 5-7 requests to get exactly the information I need.
There are two issues with making the large call.
A LOT of data is returned with each item. I tested retrieving 3 items and it took about 899 ms.
The fields I need cannot simply be referenced by a key. Each item is returned as an array of fields. Each field is an object, and I can only determine which fields I need by traversing each object and reading its field_id. It's returned like this:
item: [
  {
    ...
    field_id: 3423423,
    ...
  },
  {
    ...
    field_id: 343434,
    ...
  },
  ...
]
Alternatively, I can send one request with an item_id and a field_id and get just the field I need, but I'll have to make 7 of these calls. Which is better?
I recently had to make a similar decision and I ended up adding methods to the back-end that packaged the data up in a format that was more suitable for consumption. If you are able to do that, I would recommend that approach.
I suspect that the API is out of your control. In that case, I would probably go with multiple async calls so that you can provide feedback while the data is retrieved. Using async calls and promises, you can let all the individual pieces be retrieved in the background in whatever order they come in, and then assemble them from there.
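As a sketch of that idea (the per-field endpoint path, the field ids, and itemId are assumptions based on the question's description):

// Fire all 7 field requests in parallel and assemble the results
// once every one has resolved.
const itemId = 42; // hypothetical item id from your earlier list call
const fieldIds = [3423423, 343434 /* , ...the other field ids */];

const fields = await Promise.all(
  fieldIds.map(async (fieldId) => {
    const res = await fetch(`/api/items/${itemId}/fields/${fieldId}`);
    return res.json();
  })
);
// `fields` holds one field object per id, in request order; attach
// .then() handlers instead if you want to render each piece as it arrives.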
I'm using hood.ie for a web app I'm making. I like its simplicity; however, there's something I'm not too sure about.
When retrieving data from CouchDB there is a method, findAll, which as an example looks like:
hoodie.store.findAll('todo')
.done(function(allTodos) {
//do something with allTodos
})
What I was wondering about/don't really like is the fact that I'm getting all the items of type todo and then filtering them down once I have them, e.g. to todos with today's date.
Instead of getting all of them, is it possible to get just the ones I actually want?
I know there is a find method, but that requires an id, which I won't have.
Or do I simply not need to worry about this? Is the call to get all the data not that expensive? (If I had 1000+ records, I feel it may be.)
Any guidance would be appreciated.
Thanks.
You don't need to worry about it.
Hoodie stores all data in your browser, from where it also retrieves it; it does not send any requests to CouchDB in the background when you call hoodie.store.findAll('todo').
In future, this particular call will become more efficient, as Hoodie will use indexing by object type, but unless you have thousands of objects per user, you shouldn't even see the difference.
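So filtering in memory after findAll() is fine; a sketch (the dueDate field is an assumption about how your todos are shaped):

hoodie.store.findAll('todo')
  .done(function (allTodos) {
    // All objects are already local, so this filter costs no network I/O.
    var today = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"
    var todaysTodos = allTodos.filter(function (todo) {
      return todo.dueDate === today; // assumes each todo has a dueDate
    });
    // ...do something with todaysTodos
  });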
I am currently using AJAX to autocomplete emails and would like to find out the best way to do this without too many read operations. Thanks!
The best way to do this kind of operation is to use the following approach.
Use full text search:
https://cloud.google.com/appengine/docs/java/search/
When creating a document to search on, you could tokenize the email id. For example, if you have foobar@baz.com, you could tokenize it to f, fo, foo, foobar, ... and save it into a text field.
Then use index.search to query for the results.
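The tokenizing step itself is simple; a sketch in JavaScript (the document creation and index.search calls use the App Engine Search SDK and aren't shown here):

// Generate prefix tokens for the local part of an email address, e.g.
// "foobar@baz.com" -> ["f", "fo", "foo", "foob", "fooba", "foobar"].
function tokenizePrefixes(email) {
  var local = email.split('@')[0].toLowerCase();
  var tokens = [];
  for (var i = 1; i <= local.length; i++) {
    tokens.push(local.slice(0, i));
  }
  return tokens;
}
// Store tokens.join(' ') in the document's text field so a query for
// any prefix matches the document.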
Then every successful lookup can be cached for, say, 2 hours (you can change that per your requirements).
Any time you update the model (add/update/remove entries), delete the memcache entries or flush the memcache, preferably using the datastore callbacks.
https://cloud.google.com/appengine/docs/java/datastore/callbacks
Please note that the tokenizing and document-adding could be processed in a task queue to fit into the "GAE way of doing things".
Also, as a footnote, you could try implementing a client-side caching mechanism using HTTP cache-control headers + ETags. I have not implemented such a solution, so others could pitch in with how their experience implementing one was.
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en
I am working on an ISAPI filter to strip certain content out of responses. I need to collect the whole body of the response before I do the processing, as the content I'm stripping could straddle send-buffer boundaries.
To do this, I'd like to buffer the response content with each SF_NOTIFY_SEND_RAW_DATA notification until I get to the last one, then send the translated data. I would like to know the best way to determine which SF_NOTIFY_SEND_RAW_DATA notification is actually the last. If I wait until the SF_NOTIFY_END_OF_REQUEST notification, then I don't know how to send the data I've buffered.
One approach would be to use the content length. This would require detecting the end of the headers. It would also require assuming the Content-Length header is correct (is that guaranteed?). Since HTTP doesn't even require a Content-Length header, I'm not even sure it will always be there. It seems like there should be an easier way.
I'm assuming the response is not chunked, so I am not handling dechunking before I make the response change. Also, when I modify the response body, its size will not change, so I do not need to go back and update the Content-Length.
I eventually found some good discussions via Google.
This post answers my questions, as well as raising issues a more complicated filter would have to address: http://groups.google.com/group/microsoft.public.platformsdk.internet.server.isapi-dev/browse_thread/thread/85a5e75f342fad2b/cbb638f9a85c9e03?q=HTTP_FILTER_RAW_DATA&_done=%2Fgroups%3Fq%3DHTTP_FILTER_RAW_DATA%26start%3D20%26&_doneTitle=Back+to+Search&&d&pli=1
The filter I have is buffering the full response into its own buffer, then using SF_NOTIFY_END_OF_REQUEST to send the contents. The modification it makes does not change the size, and I've precluded the possibility that the response is chunked, so in my case the filter is relatively simple.