I have a Backbone application in which I am creating a new model and saving it. Here's a snippet of the code, with debugging, in CoffeeScript:
newListing = new Listing
console.log "New?", newListing.isNew()
newListing.save creation, {
wait: true
success: (model, response) =>
console.log "SAVED", model
console.log "RESPONSE", response
}
For debugging, I have also overridden Backbone.sync:
oldSync = Backbone.sync
Backbone.sync = (method, model) ->
console.log "Syncing:", method, model
oldSync(arguments...)
Usually, this works fine. I get this in the console:
New? true
Syncing: create > Listing
SAVED > Listing
RESPONSE > Object
And in the Network Inspector I see:
listings POST 200 application/json
I also see a POST request logged in my (Rails) application log.
However, after creating a few Listings, I start to see the following behavior on the console:
New? true
Syncing: create > Listing
SAVED > Listing
RESPONSE [> Object, > Object, > Object, > Object, > Object, > Object]
where each Object is a Listing already saved in the database. The Network Inspector and my application log also indicate that Backbone performed a GET request to /listings. Furthermore, the Listing in the third line that has been "SAVED" is a client-side representation of the listing (without some additional details that the server usually inserts).
I have not been able to find any pattern to this behavior; sometimes, Backbone insists on sending GET after GET, and after a refresh it starts working. Sometimes it acts up until I restart my application server. Happy to explore any suggestions!
[Edit]
Alright, after some sleuthing, it appears that after I call fetch() on a Listings collection, Listing#save() starts doing this. The issue only appears on my laptop, which is running Chrome Dev (v19). On other browsers and older versions of Chrome it works fine.
Clearly this is an application-specific issue. My guess is that the issue is server-side: perhaps your server-side code is returning all of the Listings after a creation rather than just the one that was created. On successful creation your server should respond with 200 OK (201 Created is also conventional) and the representation of the created entity in the response body.
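One quick way to confirm this from the client (a sketch in plain JavaScript rather than the question's CoffeeScript; Listing and creation are the names from the question) is to check whether the create response is a single object or an array:

    var newListing = new Listing();
    newListing.save(creation, {
      wait: true,
      success: function (model, response) {
        // A well-behaved create endpoint returns the single created object;
        // an array here means the server echoed back a whole collection.
        if (Array.isArray(response)) {
          console.warn("Create returned a collection, not one Listing:", response);
        }
      }
    });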
Related
I work on a full-stack application that is composed of:
Django (Django Rest Framework)
React
PostgreSQL database
Redis
Celery
It is deployed with Docker. The whole application works well and has no bugs that cannot be traced.
However, when I try to delete a Project item from the database (this is domain-specific), I get a 500 error and no specific trace.
I first noticed this bug on the deployed application. While inspecting the Network tab in Developer Tools I found the request and saw the 500 return code; however, nothing was returned in the Response tab.
Something should have been returned, though. The code is as follows:
class ProjectCRUD(GenericAPIView):
# [...]
def delete(self, request, pk):
try:
# [...] code that deletes all referenced values and current project
except ProtectedError as e:
return JsonResponse({
"error": "Project still referenced",
"details": str(e)
}, status=400)
except Exception as e:
return JsonResponse({"error": "Wrong project id"}, status=status.HTTP_400_BAD_REQUEST)
return JsonResponse({
'message': f'Project with id {project_id} was deleted successfully!'
}, status=status.HTTP_204_NO_CONTENT)
# [...]
This "Wrong project id" assumption is by all means bad and this will be refactored as soon as this bug is also found. This code makes sure that if exception is raised, it is caught, something is returned with at least some amount of information given. If exception is not caught, return 204.
So I go to the application, create a new project, and try to delete it: a 500 error appears, with nothing in the Network tab's response.
The next step is trying things locally. I start a local server using python manage.py runserver. This doesn't go through Docker, because Redis and Celery are not used for this feature. I create a new project, try to delete it, and the console logs 204, which means it passed.
I start Docker and repeat the process. Everything works; 204 is returned.
Next I check the Docker logs of the deployed application. This is where it starts to get really weird. The backend logs show 204, as they did locally, and the frontend logs show 204 as well. However, the client (i.e. the browser) displays a 500 error in the Network tab.
From searching I concluded that the bug happens somewhere between Frontend and client.
My questions are:
Any idea why this is happening?
Where should I look next in order to catch this bug?
So the whole application works as expected except for this feature.
Thanks.
Problem
We have an Outlook add-in that uses Outlook Office365 REST API to update mail item properties. One example is writing some custom metadata to the SingleValueExtendedProperties field of a mail-item.
Since yesterday we started noticing that the values that we write on the extended properties are not getting synced to the exchange server.
Steps to reproduce
PATCH on the Messages resource to update item properties with some data
Method: PATCH
URL: https://outlook.office.com/api/v2.0/me/messages('<messageId>')
Request Body:
{
"SingleValueExtendedProperties": [
{
"PropertyId": "{propertyId}",
"Value": “{\”color\":\"green\"}"
}
]
}
Use the GET API to fetch the latest values.
Method: GET
URL: https://outlook.office.com/api/v2.0/me/messages('<messageId>')?$expand=SingleValueExtendedProperties($filter=PropertyId eq '{propertyId}')
Observation
The PATCH call ran through successfully, but the GET call didn't return the latest values. The update that we tried yesterday also has not yet synced.
Environment
Client: Microsoft Outlook for Mac
Build Version: 16.42 (20101102)
Note
We have already started using the Microsoft Graph API in the new software that we are building since that’s the recommended one for interacting with mail/events. We still rely on Office365 API in our current systems to fetch/update data.
Is there a known issue that could be causing this? Is anyone else experiencing this?
While reading about the File API, and wanting to write data directly from an indexedDB database to the client disk instead of first building and holding a large blob in RAM to download to disk, I've run into a few basic items I'm not understanding.
In the MDN documents these two statements are found:
In Gecko, privileged code can create File objects representing any local file without user interaction.
If you want to use the DOM File API in chrome code, you can do so without restriction. In fact, you get one bonus feature: you can create File objects specifying the path of the file on the user's computer. This only works from privileged code, so web content can't do it.
Where exactly does one write chrome code and/or Gecko privileged code? Is this beyond a web extension? I've read and experimented with extensions, so I'm not asking specifically about how to access them.
I'm not concerned about a 'normal' web page and server accessing the client disk. I know that it's not permitted in order to protect the individual.
I'm interested in what can be done offline through the browser -- with the aid of web extensions and/or a separate profile granting special permissions, but without node.js, electron, etc. -- by an individual who knowingly wants to use the browser to do what perhaps should have been built into the OS rather than the browser.
Put another way, if I want to use the browser just to run my javascript code to perform tasks all offline on my own machine, where is the privileged code written that gives access to these types of APIs that aren't subject to the security issues of a normal web page?
Is it still JavaScript, or C++, in these areas?
Thank you.
This old question provides a link to an extension that uses the File API to write to disk in a way that appears to bypass the creation of a large blob of data. It's six years old, but appears to contain what is needed, at least to get started.
I'm not referring to their trying to get around using indexedDB, but just that using this type of extension could allow for writing each object from the database directly to the client disk without first having to generate a large blob to download.
Attempt at employing Andrew Swan's suggestion
I'm trying to put the pieces together but have reached a point where I am not sure how to continue. I wrote the code below in the background script of an extension. In attempting to employ Andrew Swan's suggestion, the plan is to initiate a GET request for a text/csv file, which is intercepted and replaced by data extracted from the database and written to the GET request by the stream filter.
First, make a GET request to a bogus url and listen for response, as follows:
let request = new XMLHttpRequest();
request.open("GET", url);
request.setRequestHeader("Content-Type", "text/csv");
// Attach the handler before sending so no state change is missed.
request.onreadystatechange = () =>
{
portFromCS.postMessage( { 'func' : 'disp_result', 'args' : { 'msg' : "request.status :", 'value' : request.status + ' : ' + request.statusText } } );
};
request.send(null);
Second, intercept the request and write to the GET, as follows:
browser.webRequest.onBeforeRequest.addListener(
listener,
{ urls : ["<all_urls>"] },
["blocking"] );
function listener( details )
{
let filter = browser.webRequest.filterResponseData(details.requestId);
let decoder = new TextDecoder("utf-8");
let encoder = new TextEncoder();
filter.onstart = event =>
{
let str = decoder.decode(event.data, {stream: true});
str = '' +
'HTTP/1.1 200 OK \r\n' +
'Content-Length: 17 \r\n' +
'Content-Type: text/csv \r\n\r\n\r\n' +
'This is a string.';
filter.write(encoder.encode(str));
filter.disconnect();
};
}
The message sent from the background script in the request.onreadystatechange function is received in the content script, and the request.status is '0'.
The filter.onstart is used because the ondata event will never fire since the url is bogus. Also, that means there will be no converting of data from the url, but only the writing of new data through the filter.
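For reference, when the url is real, the usual pass-through shape (reusing the filter, decoder, and encoder from the listener above, per the MDN filterResponseData example) looks like this:

    filter.ondata = event =>
    {
      // Decode the chunk, then write it back out unchanged.
      let str = decoder.decode(event.data, {stream: true});
      filter.write(encoder.encode(str));
    };
    filter.onstop = event =>
    {
      filter.disconnect();
    };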
The str data is written and received by the request, but only as responseText and not as a response header. The request.status remains '0' instead of '200'.
It seems that the response headers can't be changed except in onHeadersReceived, which, it appears, will never fire for a bogus url. However, I tried this on a real url and, even though the event fired, an error of "webRequest.HttpHeaders is not a function" was thrown. I had "responseHeaders" in the webRequest extraInfoSpec at that time.
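For reference, the documented way to modify response headers is from onHeadersReceived with "blocking" and "responseHeaders" in the extraInfoSpec. A sketch of that standard pattern (not the code that threw above; "export.csv" is a made-up filename):

    browser.webRequest.onHeadersReceived.addListener(
      details =>
      {
        // Append a header and return the modified set.
        details.responseHeaders.push({
          name: "Content-Disposition",
          value: 'attachment; filename="export.csv"'
        });
        return { responseHeaders: details.responseHeaders };
      },
      { urls : ["<all_urls>"] },
      ["blocking", "responseHeaders"] );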
My questions are:
Can a response header be written to set the request.status to '200' and then start writing the database data through an async function in small blocks as retrieved?
Can the Content-Disposition section of the response headers be set such that it automatically starts the download of the response text, allows the user to select the file name and save location, and stays "open" while writing to the file continues as the data is extracted from the database and passed to the GET request through filter.write()?
Thank you.
Conclusion
It was a good idea but I don't think it is possible for at least two reasons.
One is that webRequest doesn't appear to intercept a downloads.download() function at all, or any download event; so you can't intercept a download, and an event with a Content-Disposition of 'attachment' is needed to even try to write to it with a stream. I could intercept a forced click on an anchor tag href, but no other events fired beyond onBeforeRequest.
The other is that a response header can't be modified until an onHeadersReceived event, which means the fake URL has to return something; you can't just cancel it in onBeforeRequest. So this wouldn't work offline. But even if you let it process online against an existing URL that returns a response header, it won't accept a modification. I tried repeatedly to modify the response header and it just won't work. I tried an XMLHttpRequest GET and could intercept the events that fire, but couldn't modify the response header; so I couldn't set Content-Disposition to 'attachment', with or without a file name, to start a download. I can write to the request, but that's no good unless what is written is going to be downloaded; it would be fine if the written content were going to a web page.
Also, if you redirect the URL along the way to anything other than a webRequest-acceptable URL, the later events won't be interceptable. So, if you redirect to an object URL in onBeforeRequest, you won't intercept the response-headers stage in webRequest, but you can view them in the onreadystatechange event of the XMLHttpRequest.
So the upshot is that it appears the response headers cannot be modified, even though the MDN Web Docs say it is possible. And this idea of using a webRequest stream filter to stream data generated on the client or extracted from an indexedDB database, as opposed to building one large blob for download, won't work, because you can't intercept a download or change the response headers to trigger a download into which to write via the stream filter.
It was an interesting idea, though. I still wonder whether or not the download would remain 'open', so to speak, while the data was being written on the client and passed in blocks or chunks. Perhaps if the part of the response headers that states how data is to be passed and received were also modified, it would work.
For now, I am no longer pursuing this approach. One of the Web Docs or bug reports stated that it is planned to allow a data URL to be intercepted. Perhaps, for an offline download to the client, that would be preferable to a fake URL.
If anyone gets this to work, please let us know. Thank you.
A couple of terms:
"Gecko" is the rendering engine on which Firefox (and a few other applications like Thunderbird) is built
"Chrome" in this context means the browser user interface and features, as opposed to the contents of a web page being displayed by the browser.
In Firefox, much of the browser chrome is implemented in Javascript. The code that implements the user interface needs to be able to do things that normal web pages cannot do (such as reading and writing the local filesystem). Therefore, this code runs with different privileges than Javascript that runs as part of a web page. The terms "privileged code", "chrome privileged code", "Gecko privileged code" are all different ways to describe the same thing: Javascript code that is built in to the browser and has access to capabilities that web pages do not have.
Prior to the Firefox Quantum (version 57) release, Firefox extensions were allowed to run privileged Javascript code. As you might imagine, this was fraught with problems for security, performance, and stability, among other things. With WebExtensions, extensions now run with the same level of privilege as regular web content (ie, they do not execute with elevated privileges). Some browser features are exposed to extensions through extension APIs.
So, if you're interested in what you can do from an extension, any documents on MDN that reference privileged code are effectively irrelevant. There are not currently any APIs available to WebExtensions that would allow you to directly access the filesystem, but there is an open bug to add some of this capability. (That bug has existed for quite some time, but I suspect there will be progress relatively soon...)
I am implementing a single-page application (SPA) using AngularJS and MongoDB, and I am making REST calls with promises. The REST calls work fine in Chrome and Mozilla Firefox, which I use for development, but one REST call is not working in IE-11: it gives me a 500 Internal Server Error.
I am not able to find the exact line of the REST call, because no line number is shown, but I can share sample code of the call:
Rh.all('apicall').get('dbname/_aggrs/'+ ar_dep +'?avars=' + query).then(function (d) {
console.log("response data");
});
The call above never prints to the console, because it breaks in IE-11; the same REST call works fine in other browsers.
If I pass a direct path, without variables, then it works in IE-11.
The working REST call is below:
Rh.all('apicall').get('dbname').then(function (d) {
console.log("response data");
});
[Screenshots: the Network tab in the IE-11 console and in Chrome]
I am updating my question, because I found a difference in how the URL is parsed when it reaches RESTHeart.
IN CHROME:
Rh.all('apicall').get('dbname/_aggrs/'+ ar_dep +'?avars=' + query)
After parsing
localhost:8080/apicall/dbname/_aggrs/rout?avars={%22routes%22:%22US%22}
In the query object I have routes: "US". So in Chrome the double quotes are encoded as %22.
IN IE-11
Rh.all('apicall').get('dbname/_aggrs/'+ ar_dep +'?avars=' + query)
After parsing
localhost:8080/apicall/dbname/_aggrs/rout?avars={"routes":"US"}
In IE-11, the double quotes are not encoded to %22; the URL is sent with the raw string.
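(A possible workaround sketch, assuming query holds the JSON string shown above: percent-encode it explicitly so every browser sends the same URL, rather than relying on the browser's automatic encoding.)

    // encodeURIComponent encodes quotes, braces, and colons identically
    // in every browser, so IE-11 and Chrome send the same URL bytes.
    var avars = encodeURIComponent(query);
    Rh.all('apicall').get('dbname/_aggrs/' + ar_dep + '?avars=' + avars).then(function (d) {
        console.log("response data", d);
    });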
A 500 error is always related to the server. The symptoms may only occur with a specific browser, but it is the server that is failing; the request that is being sent to the server is causing the server-side code to fail in some way.
Error 500 on its own is too generic; without knowing more details about the error, it is always very hard to diagnose, and frankly I won't be able to give you a definitive answer here.
At your end, you should rule out the obvious, and check your browser settings in IE. Specifically, any settings that might cause it to fail to communicate properly with the server. For example, make sure that cookies are enabled and are working properly.
But the first thing you should do is discuss with the vendor or developers of the API because they will have access to the server error logs, and they will want to know about it if their code is throwing a 500 error.
However, if you do want to investigate at your end, the fact that it is specific to one browser is a clue. If the other browsers are working, then what this tells us is that this one browser (IE11) is sending the request with something about it that is different to the other browsers, and it is that something that is triggering the server-side code to fail. This gives us something to work with in the investigation.
So the first thing to do is to examine the request in all browsers. Use the F12 dev tools in Chrome, Firefox and IE, and get to the point where you've made the same call in all three of them, and it works in FF and Chrome but not in IE11.
In the dev tools, you should now be able to examine the request details for all three. Compare them.
Start by looking at the request data -- ie the actual query string that was sent. If there are differences, consider whether any of these differences may be responsible for the error. Something may stand out obviously; eg if IE has truncated a variable or something like that. If this solves the problem, then great.
If it doesn't help, then you need to look in more detail. Maybe there are some differences but they don't look like they should break anything? Modern browser dev tools allow you to edit and re-send a request, so try editing the request in Chrome or Firefox's dev tools, and make the parameters the same as the ones from IE that failed. Now try re-sending that request. If you're lucky, this will cause the request to fail in the other browser, which will allow you to show that a specific set of data is the problem (rather than a specific browser). You mentioned that it's a third party API, so you'll then need to discuss with the API vendor to find out why that query breaks their API.
If you still haven't found the problem at this point, and you're sending identical queries in both browsers, and you're logged in as the same user, then the next step is to look at the request headers.
There is one request header that will definitely be different: the User-Agent string. But there may be others too. Again, try re-sending the request that works in Chrome, but with the headers from the failing request in IE (including the UA string). Does the request now fail in Chrome? If so, narrow down which of the differing headers make it fail.
Again, if this allows you to find a specific set of request data and headers that causes the problem, then you will need to discuss with the API vendor.
If all of this doesn't help, then try looking at the cookies. You already checked that cookies are working, so this seems like a long shot now, but again compare the cookies between browsers, and see if there's anything obviously different about them.
I hope the above is enough to help you diagnose the issue.
I'm new to Angular and I just can't get my head wrapped around this idea, any help would be greatly appreciated.
A lot of conversations state that the model should come from the server via RESTful web services. I've been using $http in a factory. This makes sense to me if there is data present, but if you load a screen and the user (or whatever entity) is new, then you get a blank JSON value. For complex data (relationships) you get some items with a value, but other properties are left off.
So what am I missing here, how can the model come from the server consistently?
It's useful to think of your model as both a server model and a client model. The server model should be your true model or "source of truth", and the client model is a working model or "mimic" that should behave as a local copy of the server model.
For the model to "come from the server consistently", you have to ensure that any changes to the client model get validated by the server side. That means any change request to the model -- create, update, or delete (the C, U, and D of CRUD) -- gets sent as a request to the server, and the resulting changed data model is returned to the client model so it can be updated.
You could take advantage of standard HTTP status codes as a mechanism to provide results to the client:
for example, your service could return an HTTP code of 204 to indicate that the server successfully processed the request, but is not returning any content.
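A minimal sketch of this pattern (hypothetical module, service, and endpoint names, using the $http service the question mentions): the factory sends each change to the server and only updates the local working model from the server's reply:

    // Hypothetical AngularJS factory: the server model stays the source of
    // truth; the client working copy is refreshed from each server response.
    app.factory('ItemService', ['$http', function ($http) {
        var model = { items: [] }; // client-side working copy

        return {
            model: model,
            load: function () {
                return $http.get('/api/items').then(function (res) {
                    model.items = res.data;
                });
            },
            update: function (item) {
                return $http.put('/api/items/' + item.id, item).then(function (res) {
                    // 204 means "accepted, no content returned": keep the
                    // local copy; otherwise adopt the server's version.
                    if (res.status !== 204) {
                        angular.extend(item, res.data);
                    }
                });
            }
        };
    }]);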