NSURLCache cachedResponseForRequest no data - nsurlcache

Why is responseCache nil? I run this POST and really want to get the responseObject from the cache. How can I get responseCache?
manager.requestSerializer = [AFJSONRequestSerializer serializer];
manager.responseSerializer = [AFJSONResponseSerializer serializer];
manager.requestSerializer.cachePolicy = NSURLRequestReturnCacheDataElseLoad;
[manager POST:URL parameters:paramdic progress:^(NSProgress * _Nonnull uploadProgress) {
} success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
    NSData *data = [NSJSONSerialization dataWithJSONObject:responseObject options:NSJSONWritingPrettyPrinted error:nil];
    NSURLCache *cache = [NSURLCache sharedURLCache];
    NSCachedURLResponse *responseCache = [cache cachedResponseForRequest:task.originalRequest];
    NSCachedURLResponse *response = [[NSCachedURLResponse alloc] initWithResponse:task.response data:data userInfo:nil storagePolicy:NSURLCacheStorageAllowed];
    [cache storeCachedResponse:response forRequest:task.originalRequest];
} failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
    NSLog(@"%@", error);
}];

There are three reasons it is nil at that point:
POST requests are not cached by any iOS/OS X networking code because they are not guaranteed to be idempotent (i.e. they can have side effects, such as storing data on the server). The only way a POST request will ever get stored in an NSURLCache is if you explicitly add it.
POST requests are not cached because NSURLCache uses the URL as the lookup key. Because the URL does not (cannot) include the POST body, a cached response for one POST would be returned for a different POST request to the same URL, which is almost certainly not what you want. So if you do add it, you'll have to add custom rewriting of the URL on its way into the cache, plus custom lookup code, to make the URLs unique enough based on specific POST body fields or whatever.
The cache is highly asynchronous, so cached data would not necessarily be available when the request's completion handler runs even if this were a GET request.
This is not necessarily a complete set of reasons. :-)
The cache is intended to reduce network traffic. You shouldn't generally consult it yourself. The normal lookup path used by NSURLSession et al performs checks for certain protocol caching policies (e.g. response expiration) that would not be performed by merely asking the cache if it has a response for a particular key.
If you need a general mechanism for storing a single response for later use by your app (rather than keeping it in memory), you should do so in your own internal dictionary (or, if the response is large, by using a download task and moving the file into a temporary folder in your app's sandbox that you purge on every launch).

Related

Is there a way to source stream from query parameters with akka-http?

I know how to create a source stream from an entity via a POST request, but I want to be able to also create a source stream from the query parameters of a GET request.
I know I can get query parameters into a case class via an as[] directive, but it seems like a miss to have to wrap that in a Source in order to stream it.
The query parameters that are part of the URL are not "streamed" from the client, rather they are part of the request line. Therefore, when you have an HttpRequest object in your memory you have already consumed enough space to hold the query parameters. This means that you lose any back-pressure benefits from using a Source. I recommend analyzing why you want to create a Source in the first place...
If you absolutely have to create a Source out of the parameters then you can use the parameterSeq Directive:
val route =
  parameterSeq { params: Seq[(String, String)] =>
    // Wrap the already-materialized parameters in a Source
    val parameterSource: Source[(String, String), _] = Source(params)
    // ... consume parameterSource here, then complete the request
    complete("ok")
  }

Provide a callback URL in Google Cloud Storage signed URL

When uploading to GCS (Google Cloud Storage) using the BlobStore's createUploadURL function, I can provide a callback together with header data that will be POSTed to the callback URL.
There doesn't seem to be a way to do that with GCS's signed URLs.
I know there is Object Change Notification but that won't allow the user to provide upload specific information in the header of a POST, the way it is possible with createUploadURL's callback.
My feeling is, if createUploadURL can do it, there must be a way to do it with signed URLs, but I can't find any documentation on it. I was wondering if anyone may know how createUploadURL achieves that callback-calling behavior.
PS: I'm trying to move away from createUploadURL because of the __BlobInfo__ entities it creates, which for my specific use case I do not need, and somehow seem to be indelible and are wasting storage space.
Update: It worked! Here is how:
Short Answer: It cannot be done with PUT, but can be done with POST
Long Answer:
If you look at the signed-URL page, in front of HTTP_Verb, under Description, there is a subtle note that this page is only relevant to GET, HEAD, PUT, and DELETE, but POST is a completely different game. I had missed this, but it turned out to be very important.
There is a whole page of HTTP Headers that does not list an important header that can be used with POST; that header is success_action_redirect, as voscausa correctly answered.
In the POST page Google "strongly recommends" using PUT, unless dealing with form data. However, POST has a few nice features that PUT does not have. They may worry that POST gives us too many strings to hang ourselves with.
But I'd say it is totally worth dropping createUploadURL, and writing your own code to redirect to a callback. Here is how:
Code:
If you are working in Python, voscausa's code is very helpful.
I'm using apejs to write javascript in a Java app, so my code looks like this:
var json = {}; // holds the fields that will be handed to the client
var exp = new Date();
exp.setTime(exp.getTime() + 1000 * 60 * 100); // 100 minutes
json['GoogleAccessId'] = String(appIdentity.getServiceAccountName());
json['key'] = keyGenerator();
json['bucket'] = bucket;
json['Expires'] = exp.toISOString();
json['success_action_redirect'] = "https://" + request.getServerName() + "/test2/";
json['uri'] = 'https://' + bucket + '.storage.googleapis.com/';
var policy = {
  'expiration': json.Expires,
  'conditions': [
    ["starts-with", "$key", json.key],
    {'Expires': json.Expires},
    {'bucket': json.bucket},
    {"success_action_redirect": json.success_action_redirect}
  ]
};
var plain = StringToBytes(JSON.stringify(policy));
json['policy'] = String(Base64.encodeBase64String(plain));
var result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
json['signature'] = String(Base64.encodeBase64String(result.getSignature()));
The code above first provides the relevant fields.
It then creates a policy object, stringifies it, and converts it into a byte array (you can use .getBytes in Java; I had to write a function for JavaScript, shown at the end).
A base64-encoded version of this array populates the policy field.
The policy is then signed using the appidentity package. Finally, the signature is base64 encoded, and we are done.
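For anyone doing this without apejs, a roughly equivalent sketch in plain Java might look like the following. It is only a sketch under assumptions: the bucket, key, expiration and redirect values are placeholders, and it reproduces only a subset of the conditions used above (commons-codec is assumed to be on the classpath, as in the snippet):

import com.google.appengine.api.appidentity.AppIdentityService;
import com.google.appengine.api.appidentity.AppIdentityServiceFactory;
import org.apache.commons.codec.binary.Base64;
import java.nio.charset.StandardCharsets;

// Placeholder values -- substitute your own bucket, object name, expiration and callback.
String bucket = "my-bucket";
String key = "some/object/name";
String expires = "2015-01-01T00:00:00.000Z";
String redirect = "https://example.com/test2/";

AppIdentityService appIdentity = AppIdentityServiceFactory.getAppIdentityService();
String googleAccessId = appIdentity.getServiceAccountName();

// Build the policy document (a subset of the conditions in the javascript version above).
String policy = "{\"expiration\": \"" + expires + "\", \"conditions\": ["
        + "[\"starts-with\", \"$key\", \"" + key + "\"],"
        + "{\"bucket\": \"" + bucket + "\"},"
        + "{\"success_action_redirect\": \"" + redirect + "\"}]}";

// The policy is base64 encoded, and that encoded form is what gets signed.
String encodedPolicy = Base64.encodeBase64String(policy.getBytes(StandardCharsets.UTF_8));
AppIdentityService.SigningResult result =
        appIdentity.signForApp(encodedPolicy.getBytes(StandardCharsets.UTF_8));
String signature = Base64.encodeBase64String(result.getSignature());
// googleAccessId, key, bucket, expires, success_action_redirect, encodedPolicy and
// signature are the values the client then puts into the upload form.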
On the client side, all members of the json object will be added to the Form, except the uri which is the form's address.
var formData = new FormData(document.forms.namedItem('upload'));
var blob = new Blob([thedata], {type: 'application/json'});
var keys = ['GoogleAccessId', 'key', 'bucket', 'Expires', 'success_action_redirect', 'policy', 'signature'];
for (field in keys)
  formData.append(keys[field], url[keys[field]]);
formData.append('file', blob);
var rest = new XMLHttpRequest();
rest.open('POST', url.uri);
rest.onload = callback_function;
rest.send(formData);
If you do not provide a redirect, the response status will be 204 for success. But if you do redirect, the status will be 200. If you get a 403 or 400, something about the signature or policy is probably wrong. Look at the responseText; it is often helpful.
A few things to note:
Both POST and PUT have a signature field, but these mean slightly different things. In the case of POST, it is a signature of the policy.
PUT has a base URL which contains the key (object name), but the URL used for POST may only include the bucket name.
PUT requires expiration as seconds from UNIX epoch, but POST wants it as an ISO string.
A PUT signature should be URL encoded (Java: by wrapping it with a URLEncoder.encode call). But for POST, Base64 encoding suffices.
By extension, for POST do Base64.encodeBase64String(result.getSignature()), and do not use the Base64.encodeBase64URLSafeString function
You cannot pass extra headers with the POST; only those listed in the POST page are allowed.
If you provide a URL for success_action_redirect, it will receive a GET with the key, bucket and eTag.
The other benefit of using POST is that you can provide size limits. With PUT, however, if a file breaches your size restriction, you can only delete it after it has been fully uploaded, even if it is multiple terabytes.
What is wrong with createUploadURL?
The method above is a manual createUploadURL.
But:
You don't get those __BlobInfo__ objects which create many indexes and are indelible. This irritates me as it wastes a lot of space (which reminds me of a separate issue: issue 4231. Please go give it a star)
You can provide your own object name, which helps create folders in your bucket.
You can provide different expiration dates for each link.
For the very very few javascript app-engineers:
function StringToBytes(sz) {
  return sz.split('').map(function(x) { return x.charCodeAt(0); });
}
You can include success_action_redirect in a policy document when you use the GCS POST Object API.
Docs here: https://cloud.google.com/storage/docs/xml-api/post-object
Python example here: https://github.com/voscausa/appengine-gcs-upload
Example callback result:
def ok(self):
    """ GCS upload success callback """
    logging.debug('GCS upload result : %s' % self.request.query_string)
    bucket = self.request.get('bucket', default_value='')
    key = self.request.get('key', default_value='')
    key_parts = key.rsplit('/', 1)
    folder = key_parts[0] if len(key_parts) > 1 else None
A solution I am using is to turn on Object Change Notifications. Any time an object is added, a POST is sent to a URL - in my case, a servlet in my project.
In the doPost() I get all the info about the object added to GCS, and from there I can do whatever I need.
This worked great in my App Engine project.

What are the best practices for pre-fetching in backbone.js?

What's the best way to pre-fetch the next page in backbone.js?
Is there a built-in mechanism to do that, or do I have to take care of it myself by making Ajax calls and storing the results?
Also, is there a way to preload an entire page like in jQuery Mobile (http://jquerymobile.com/demos/1.2.0/docs/pages/page-cache.html)?
There is no built-in support for such a thing. It depends on your use case, but you could do a number of things.
1) Use setTimeout() to wait a short time before fetching the data you might be needing shortly. (Probably not a good solution.)
2) Set up an event handler to fetch the data on a specific event, or something similar:
$('#my-button').on('hover', function() {
  // fetch data
});
To fetch the data you can use the fetch() function on a backbone model or collection, which will return a jqXHR (or you can use a straight $.ajax() call). You can then wait and see if it failed or passed:
var fetch = myModel.fetch();
fetch.done(function(data) {
  // Do something with data, maybe store it in a variable that you can later use
})
.fail(function(jqXHR) {
  // Handle the failed ajax call
  // Use the jqXHR to get the response text and/or response status to do something useful
});
No built-in support, but it's actually easy to add. Please refer to the concept of a View Manager, which is able to handle both "view-keeping" tasks and transitions.
In short, the concept is: the view manager is a component responsible for switching from one application view to another. It disposes of the existing view, so it prevents zombies and memory leaks. It can also handle transitions between view switches.
Here is how I handle the loading of pages into an "endless scrolling" list.
Make your backend pagination-aware
First of all you need a DB backend which is capable of handling page-load requests.
As an example, refer to my git modelloader project, which provides a small CoffeeScript-based framework integrated into a Node.js/Mongoose environment.
Model Loader on GIT contains additional information and samples.
Here are the most important points:
Your backend should support the following pagination features:
Each request returns only a partial response, limited to, for example, 20 records (the size of a page).
By default, the last JSON record entry returned by a request contains additional technical and meta information about the request, which consumers need in order to implement paging:
{
  _maxRec: "3",
  _limit: "20",
  _offset: "0"
}
_maxRec lists the total number of records in the collection.
_limit lists the maximum number of records returned per request.
_offset tells you which set of records was passed back; e.g. an _offset of 200 with a limit of 20 means the result list skipped the first 200 records and presents records 201-220.
The backend should support the following pagination control parameters:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=<number>
Use offset to skip a number of records; for example, with a limit of 20 you would send a first request with offset=0, then offset=20, then offset=40, etc., until you reach _maxRec.
In order to reduce DB activity, you should provide a possibility to skip the _maxRec calculation for subsequent requests:
http(s)://<yourDomainName>/<versionId>/<collection>?maxRec=<number>
By passing in a maxRec parameter (normally the one obtained from an earlier paging request), the request handler will bypass the database count-objects statement, which results in one DB activity less (a performance optimization). The passed-in number is passed back via the _maxRec entry. Normally a consumer will fetch _maxRec in the first request and pass it back for subsequent requests, resulting in faster data-access requests.
Fire off Backbone model requests when necessary
So now you have to implement, on the Backbone side, the firing of page-loading requests when necessary.
In the example below we assume a Backbone.View which has a list loaded into a jquery.tinyscrollbar-based HTML element. The list contains the first 20 records, loaded initially via the URL:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=0
In this case the view would listen to the following scrolling events:
events:
  'mousewheel': 'checkScroll'
  'mousemove': 'checkScroll'
The goal is that as soon as the user has scrolled down to the bottom of the scrollable list (e.g. reaches a point 30px above the list's end), a request is fired to load the next 20 entries. The following code sample describes the necessary steps:
checkScroll: () =>
  # We calculate the actual scroll point within the list
  ttop = $(".thumb").css("top")
  ttop = ttop.substring(0, ttop.length - 2)
  theight = $(".thumb").css("height")
  theight = theight.substring(0, theight.length - 2)
  triggerPoint = 30 # 30px from the bottom
  fh = parseFloat(ttop) + parseFloat(theight) + triggerPoint
  # The if will turn true if the end of the scrollable list
  # is below 30 pixel, as well as no other loading request is
  # actually ongoing
  if fh > @scrollbarheight && !@isLoading && @endLessScrolling
    # Make sure that during the fetch phase no other request intercepts
    @isLoading = true
    log.debug "BaseListView.checkscroll " + ttop + "/" + theight + "/" + fh
    # So let's increase the offset by the limit
    # BTW these two variables (limit, offset) will be retrieved
    # and updated by the render method when it's getting back
    # the response of the request (not shown here)
    skipRec = @offset + @limit
    # Now set the model URL and trigger the fetch
    @model.url = @baseURL + "?offset=" + skipRec + "&maxRec=" + @maxRec
    @model.fetch
      update: true
      remove: false
      merge: false
      add: true
      success: (collection, response, options) =>
        # Set isLoading to false, as well as
        # the URL to the original one
        @isLoading = false
        @model.url = @baseURL
      error: (collection, xhr, options) =>
        @isLoading = false
        @model.url = @baseURL
The render method of the view gets the response back and updates the scrollable list, which grows in size and allows the user to continue scrolling down through the newly loaded entries.
This loads all the data nicely in a paged manner.

Returning multiple items with Servlet

Good day, I'm working on a Servlet that must return a PDF file and the message log for the processing done with that file.
So far I'm passing a boolean which I evaluate and return either the log or the file, depending on the user selection, as follows:
//If user Checked the Download PDF
if (isDownload) {
byte[] oContent = lel;
response.setContentType("application/pdf");
response.addHeader("Content-disposition", "attachment;filename=test.pdf");
out = response.getOutputStream();
out.write(oContent);
} //If user Unchecked Download PDF and only wants to see logs
else {
System.out.println("idCompany: "+company);
System.out.println("code: "+code);
System.out.println("date: "+dateValid);
System.out.println("account: "+acct);
System.out.println("documentType: "+type);
String result = readFile("/home/gianksp/Desktop/Documentos/Logs/log.txt");
System.setOut(System.out);
// Get the printwriter object from response to write the required json object to the output stream
PrintWriter outl = response.getWriter();
// Assuming your json object is **jsonObject**, perform the following, it will return your json object
outl.print(result);
outl.flush();
}
Is there an efficient way to return both items at the same time?
Thank you very much
The HTTP protocol doesn't allow you to send more than one HTTP response per HTTP request. With this restriction in mind you can think of the following alternatives:
Let the client fire two HTTP requests, for example by specifying an onclick event handler, or, if you returned an HTML page in the first response, you could fire another request on window.load or page.ready;
Provide your user with an opportunity to choose what he'd like to download and act in the servlet accordingly: if he chose PDF, return the PDF; if he chose the log, return the log; and if he chose both, pack them in an archive and return it (see the sketch after this list).
Note that the first variant is both clumsy and not user-friendly, and as far as I'm concerned should be avoided at all costs. A page where the user controls what he gets is a much better alternative.
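A minimal sketch of the "pack them in an archive" option, assuming the PDF bytes and the log text are already in memory (the names pdfBytes and logText are made up for this example):

import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Stream both artifacts back in a single ZIP response.
response.setContentType("application/zip");
response.addHeader("Content-Disposition", "attachment; filename=result.zip");
ZipOutputStream zos = new ZipOutputStream(response.getOutputStream());
zos.putNextEntry(new ZipEntry("test.pdf"));
zos.write(pdfBytes);                   // pdfBytes: the PDF content, e.g. the oContent array above
zos.closeEntry();
zos.putNextEntry(new ZipEntry("log.txt"));
zos.write(logText.getBytes("UTF-8")); // logText: the log, e.g. the result of readFile(...)
zos.closeEntry();
zos.close();

The client then gets one download containing both files.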
You could wrap them in a DTO object or place them in the session to reference from a JSP.

AppEngine - Optimize read/write count on POST request

I need to optimize the read/write count for a POST request that I'm using.
Some info about the request:
The user sends a JSON array of ~100 items
The servlet needs to check whether any of the received items is newer than its counterpart in the datastore, using a single long attribute
I'm using JDO
What I currently do is (pseudocode):
foreach(item : json.items) {
    storedItem = persistenceManager.getObjectById(item.key);
    if(item.long > storedItem.long) {
        // Update storedItem
    }
}
Which obviously results in ~100 read requests per request.
What is the best way to reduce the read count for this logic? Using a JDO query? I read that using "IN" queries simply results in multiple queries executed one after another, so I don't think that would help me :(
There is also PersistenceManager.getObjectsById(Collection). Does that help in any way? I can't find any documentation on how many requests this will issue.
I think you can use the query below to do a batch get:
Query q = pm.newQuery("select from " + Content.class.getName() + " where contentKey == :contentKeys");
Something like the above query would return all the objects you need.
And you can handle the rest from there.
The best bet is
pm.getObjectsById(ids);
since that is intended for getting multiple objects in a single call (particularly since you have the ids, hence keys). Certainly current code (2.0.1 and later) ought to do a single datastore call for getEntities(). See this issue
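For completeness, a rough sketch of how the original loop might be reworked around getObjectsById; the Item/StoredItem names simply mirror the pseudocode above and are assumptions, not actual classes from the question:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Build the object ids for all ~100 incoming items, then fetch them in one batch
// instead of calling getObjectById once per item.
List<Object> ids = new ArrayList<Object>();
for (Item item : items) {                        // items: the parsed JSON array (assumption)
    ids.add(pm.newObjectIdInstance(StoredItem.class, item.key));
}
@SuppressWarnings("unchecked")
Collection<StoredItem> storedItems = (Collection<StoredItem>) pm.getObjectsById(ids);
for (StoredItem storedItem : storedItems) {
    // compare the stored long attribute with the incoming value and update if newer
}

Whether this collapses into a single datastore batch get depends on the plugin version, as noted above.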
