I am new to front-end/client-side app development for websites. I am setting up a new React project using create-react-app.
How should I handle console errors (other than network call errors)? Is there a way to log them to a file?
What are the best logging practices?
Even though there are means and ways to store data in the browser (not in actual client files), that's not really the general strategy for logging browser errors (which I believe is what you're referring to).
In my experience we'd have a dedicated logging server exposing a simple API, with adequate security that filters traffic and applies rate limiting. That service would then enrich the log entries and write them into a document database that can later be analysed.
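As a rough illustration, a minimal sketch of such an endpoint using Express and the express-rate-limit package might look like this; the route, the limit values, and the saveLog persistence helper are illustrative assumptions, not a prescribed setup:
const express = require('express');
const rateLimit = require('express-rate-limit');
const app = express();

app.use(express.json());
// Basic rate limiting on the logging route (values are illustrative)
app.use('/api/log', rateLimit({ windowMs: 60 * 1000, max: 30 }));

app.post('/api/log', (req, res) => {
  const { message, url, lineNumber } = req.body;
  // Enrich the entry, then hand it to a document database
  // (saveLog is a hypothetical persistence helper).
  saveLog({ message, url, lineNumber, userAgent: req.get('User-Agent'), at: Date.now() });
  res.sendStatus(204);
});

app.listen(3000);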
A naive JavaScript solution would be to use the following to capture errors and send them to a logging server.
window.onerror = function (message, url, lineNumber) {
  // POST the error details to the logging API (the endpoint is illustrative)
  fetch('/api/log', { method: 'POST', headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, url, lineNumber }) });
  return true; // returning true suppresses the default browser error reporting
};
I should also mention Sentry.io - they provide a service that does this, and even though there are some limitations, it's usually enough for a small to mid-sized app.
https://github.com/getsentry
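Getting started with their browser SDK is typically just an init call; a minimal sketch (the DSN is a per-project placeholder you get from Sentry):
import * as Sentry from '@sentry/browser';

// The DSN below is a placeholder; Sentry assigns one per project.
Sentry.init({ dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0' });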
Related
In reading about the File API and wanting to write data directly from an indexedDB database to the client disk instead of first building and holding a large blob in RAM to download to disk, there are a few basic items I'm not understanding.
In the MDN documents these two statements are found:
In Gecko, privileged code can create File objects representing any local file without user interaction.
If you want to use the DOM File API in chrome code, you can do so without restriction. In fact, you get one bonus feature: you can create File objects specifying the path of the file on the user's computer. This only works from privileged code, so web content can't do it.
Where exactly does one write chrome code and/or Gecko privileged code? Is this beyond a web extension? I've read and experimented with extensions, so I'm not asking specifically about how to access them.
I'm not concerned about a 'normal' web page and server accessing the client disk. I know that it's not permitted, in order to protect the individual.
I'm interested in what can be done offline through the browser (with the aid of web extensions and/or a separate profile granting special permissions, but without node.js, electron, etc.) by an individual who knowingly wants to use the browser to do what perhaps should have been built in the OS rather than the browser.
Put another way, if I want to use the browser just to run my JavaScript code to perform tasks entirely offline on my own machine, where is the privileged code written that grants access to these types of APIs that aren't subject to the security restrictions of a normal web page?
Is it still JavaScript, or C++ in these areas?
Thank you.
This old question provides a link to their extension which includes the File API that writes to disk in a way that appears to provide a means to bypass the creation of a large blob of data. It's six years old but appears to contain what is needed, at least to get started.
I'm not referring to their trying to get around using indexedDB, but just that using this type of extension could allow for writing each object from the database directly to the client disk without first having to generate a large blob to download.
Attempt at employing Andrew Swan's suggestion
I'm trying to put the pieces together but have reached a point where I am not sure how to continue. I wrote the code below in the background script of an extension. In attempting to employ Andrew Swan's suggestion, the plan is to initiate a GET request for a text/csv file, which is intercepted and replaced by data extracted from the database and written to the GET request by the stream filter.
First, make a GET request to a bogus URL and listen for the response, as follows:
let request = new XMLHttpRequest();
request.open("GET", url);
request.setRequestHeader("Content-Type", "text/csv");
// Attach the handler before sending so no readyState changes are missed
request.onreadystatechange = () =>
{
  portFromCS.postMessage({ 'func': 'disp_result', 'args': { 'msg': "request.status :", 'value': request.status + ' : ' + request.statusText } });
};
request.send(null);
Second, intercept the request and write to the GET, as follows:
browser.webRequest.onBeforeRequest.addListener(
  listener,
  { urls: ["<all_urls>"] },
  ["blocking"]
);

function listener(details)
{
  let filter = browser.webRequest.filterResponseData(details.requestId);
  let encoder = new TextEncoder();
  // onstart is used because ondata never fires for a bogus URL; note that
  // the onstart event carries no data, so there is nothing to decode here.
  filter.onstart = event =>
  {
    let str = '' +
      'HTTP/1.1 200 OK \r\n' +
      'Content-Length: 17 \r\n' +
      'Content-Type: text/csv \r\n\r\n\r\n' +
      'This is a string.';
    filter.write(encoder.encode(str));
    filter.disconnect();
  };
}
The message sent from the background script in the request.onreadystatechange function is received in the content script, and request.status is '0'.
The filter.onstart is used because the ondata event will never fire, since the URL is bogus. That also means no data from the URL will be converted; only new data is written through the filter.
The str data is written and received by the request, but only as responseText, not as a response header. The request.status remains '0' instead of '200'.
It seems the response header can't be changed except in onHeadersReceived, which, it appears, will never fire for a bogus URL. However, I tried this on a real URL and, even though the event fired, an error of 'webRequest.HttpHeaders is not a function' was thrown. I had "responseHeaders" in the webRequest extraInfoSpec at the time.
My questions are:
Can a response header be written to set request.status to '200', and then the database data be written through an async function in small blocks as it is retrieved?
Can the Content-Disposition section of the response header be set such that it automatically starts the download of the response.text, allows the user to select the file name and save location, and stays "open" while data continues to be written to the file as it is extracted from the database and passed to the GET request through filter.write()?
Thank you.
Conclusion
It was a good idea but I don't think it is possible for at least two reasons.
One is that webRequest doesn't appear to intercept a downloads.download() call at all, nor any download event; so you can't intercept a download, and an event with a Content-Disposition of 'attachment' is needed to even try to write to it with a stream. I could intercept a forced click on an anchor tag's href, but no other events fired beyond onBeforeRequest.
The other is that a response header can't be modified until the onHeadersReceived event, which means the fake URL has to return something; you can't just cancel it in onBeforeRequest, so this wouldn't work offline. But even if you let it process online against an existing URL that returns a response header, it won't accept a modification. I tried repeatedly to modify the response header and it just won't work. I tried an XMLHttpRequest GET and could intercept the events that fire, but couldn't modify the response header; so I couldn't set Content-Disposition to 'attachment', with or without a file name, to start a download. I can write to the response, but that's no good unless what is written is going to be downloaded. It would be fine if the written content were going to a web page.
Also, if you redirect the URL along the way to anything other than a URL webRequest accepts, the remaining events won't be interceptable. So, if you redirect to an object URL in onBeforeRequest, you won't intercept the response-headers stage in webRequest, but you can view them in the onreadystatechange event of the XMLHttpRequest.
So, the upshot is that the response headers apparently cannot be modified, even though the MDN Web Docs say it is possible. And this idea of using a webRequest stream filter to stream data generated on the client, or extracted from an indexedDB database, as opposed to building one large blob for download, won't work, because you can't intercept a download or change the response headers to trigger a download into which to write via the stream filter.
It was an interesting idea, though. I still wonder whether the download would remain 'open', so to speak, while the data was being written on the client and passed in blocks or chunks. Perhaps it would work if the part of the response headers that states how data is to be transferred were modified as well.
For now, I am no longer pursuing this approach. One of the Web Docs or bug reports stated that there are plans to allow data URLs to be intercepted. Perhaps, for an offline download to the client, that would be preferable to a fake URL.
If anyone gets this to work, please let us know. Thank you.
A couple of terms:
"Gecko" is the rendering engine on which Firefox (and a few other applications like Thunderbird) is built
"Chrome" in this context means the browser user interface and features, as opposed to the contents of a web page being displayed by the browser.
In Firefox, much of the browser chrome is implemented in Javascript. The code that implements the user interface needs to be able to do things that normal web pages cannot do (such as reading and writing the local filesystem). Therefore, this code runs with different privileges than Javascript that runs as part of a web page. The terms "privileged code", "chrome privileged code", "Gecko privileged code" are all different ways to describe the same thing: Javascript code that is built in to the browser and has access to capabilities that web pages do not have.
Prior to the Firefox Quantum (version 57) release, Firefox extensions were allowed to run privileged Javascript code. As you might imagine, this was fraught with problems for security, performance, and stability, among other things. With WebExtensions, extensions now run with the same level of privilege as regular web content (i.e., they do not execute with elevated privileges). Some browser features are exposed to extensions through extension APIs.
So, if you're interested in what you can do from an extension, any documents on MDN that reference privileged code are effectively irrelevant. There are not currently any APIs available to WebExtensions that would allow you to directly access the filesystem, but there is an open bug to add some of this capability. (That bug has existed for quite some time, but I suspect there will be progress relatively soon...)
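For what it's worth, the closest supported route today is the downloads API; a minimal sketch follows (note it still builds a Blob first, which is exactly the cost the question hoped to avoid, and csvText is an assumed stand-in for data extracted from indexedDB):
// Requires the "downloads" permission in the extension manifest.
const blob = new Blob([csvText], { type: 'text/csv' }); // csvText: hypothetical extracted data
const url = URL.createObjectURL(blob);
// saveAs: true prompts the user for a file name and save location
browser.downloads.download({ url: url, filename: 'export.csv', saveAs: true });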
I'm working on a simple chat application that uses these frameworks and libraries: react, socket.io, express.
When a user opens the web app for the first time, he sees a login form, and after login, the server retrieves the list of all users and sends it to the client. When someone writes a new message, the server sends the message to all the clients.
As you can see, every part of the app depends on the server.
Does it make sense to use a service worker? Can it be at all?
As far as I know, a service worker is good at storing images, CSS, and JS files, and it helps users use the app while they don't have an internet connection.
But when everything depends on the server, I do not know what can be done.
You have a great question.
You can most certainly use a Service Worker, but most likely not to the extent some other apps could. You have outlined the problem yourself: your website depends on the server, so it's not possible to make it fully offline. Some other websites could be made offline, or mostly offline, showing some content without a network connection and giving the full experience when connectivity comes back, but that doesn't sound like the case for your website.
Based on the description you've given, there's still something you could easily use a Service Worker for, however. You've understood correctly that SW is very good at storing (caching) static assets and serving them from the device's cache without any network connectivity. You could use this feature to make your site faster: use a SW to proactively cache all the static assets of your site and have the SW return them from the local cache without requesting anything from the network. How much faster depends on the user's connectivity: on a slow 3G connection the SW would make the site feel dramatically faster, while on steady fiber the difference wouldn't be that huge.
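A minimal cache-first sketch of that idea (the cache name and asset paths below are illustrative, not taken from your actual build):
// sw.js
const CACHE = 'static-v1';
const ASSETS = ['/', '/index.html', '/main.js', '/main.css'];

self.addEventListener('install', event => {
  // Proactively cache the static assets at install time
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Serve from the cache first, fall back to the network otherwise
  event.respondWith(
    caches.match(event.request).then(hit => hit || fetch(event.request))
  );
});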
You could also make the site itself load offline without any internet connectivity. In that situation you would of course show the user a message saying "Hey, it seems like you're offline! Shoot! You need connectivity to use the app. We'll continue as soon as we get the bits flowing!" since this would probably make the user experience nicer.
So, in conclusion: you can leverage a SW to make the initial loading of the site faster, but you most likely won't get as much out of a SW configuration as some other sites would.
If you have any other questions or would like to have some clarifications, just comment :)
Sure, you can benefit from having a Service Worker; it is universal enough to have applications for all kinds of apps, and I don't agree it is only good for static assets.
It all depends on the actual requirements for your application, obviously. But technically there is no limitation that would prevent you from caching the user-list response in the Service Worker.
Remember that "offline" is a condition that arises in multiple circumstances: not only being far from network coverage, but also outages, interference, lie-fi, or going through a tunnel. So it can happen intermittently during your app's operation, and it might make sense to prepare for it.
You could, for example, store messages in IndexedDB for offline reading and, for messages composed during that time, register a Background Sync event to send them to the server when connectivity is back. This way users might still be able to use the app in a limited fashion (read previously exchanged messages and post their own messages to be sent out later); a sketch follows.
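A minimal sketch of that flow, assuming Background Sync support; saveToOutbox and flushOutbox are hypothetical helpers around IndexedDB, and the 'send-messages' tag is an arbitrary name:
// In the page: queue the message locally, then ask for a sync.
async function queueMessage(msg) {
  await saveToOutbox(msg); // hypothetical helper that writes to IndexedDB
  const reg = await navigator.serviceWorker.ready;
  await reg.sync.register('send-messages');
}

// In the service worker: flush the outbox when connectivity returns.
self.addEventListener('sync', event => {
  if (event.tag === 'send-messages') {
    event.waitUntil(flushOutbox()); // hypothetical helper that POSTs the queued messages
  }
});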
Scenario: You are going to do scheduled database maintenance. You will hence be unable to serve dynamic content (just assume the caching system in front of the database also needs to be maintained).
During that time, what's the correct way of handling web requests trying to access a dynamic resource?
What's the correct HTTP status code, if any, that goes along with the notice that your service is currently unavailable? Should you use codes in the 5xx range?
What are the implications in terms of SEO? Will it hurt if search engine crawlers try to access your site and see lots of error codes or pages with the same notice instead of dynamic content? Can you easily recover from that?
503 Service Unavailable is the correct response to use in this situation.
Depending on how your site works, you could just put up a static HTML page replacing everything, saying that the site is undergoing maintenance. Sending a Retry-After header alongside the 503 tells clients, including search engine crawlers, that the outage is temporary, so a short maintenance window generally shouldn't hurt your rankings.
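If the site runs behind something like Express, a minimal sketch of the maintenance short-circuit might look like the following; the MAINTENANCE flag, the Retry-After value, and the maintenance.html path are all illustrative assumptions:
const express = require('express');
const app = express();
const MAINTENANCE = true; // flip this while the database is being maintained

app.use((req, res, next) => {
  if (!MAINTENANCE) return next();
  // Retry-After hints to clients and crawlers when to come back (seconds)
  res.set('Retry-After', '3600');
  res.status(503).sendFile(__dirname + '/maintenance.html');
});

app.listen(3000);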
What is the correct way of handling errors on the client side of a Silverlight application? I tried building a service endpoint that would receive details about the error and write that string to the database. The problem is, the error text exceeds the maximum byte length, so I can't send the exception message and stack trace. What would be a better way of handling errors that end up on the client side?
Try handling faults... I used this pattern from MSDN:
http://msdn.microsoft.com/en-us/library/dd470096%28VS.96%29.aspx
If you find your message is too long to send to your logging web service, then try setting your binding properties, such as maxBufferSize and maxStringContentLength, to appropriately large values. They default to 16KB; personally I have set mine to 2147483647 (which is int.MaxValue).
Obviously you cannot send the raw exception straight to the logging web service (exceptions are not serializable). What I did was write a function that takes an exception and walks it, translating it into a WCF-friendly structure that can then be passed to my logging endpoint. Of course, you need to ensure that you have a backup plan in case this fails, like logging to isolated storage if you are running in the browser, or logging to the user's file system if you are running elevated out-of-browser.
You should not be considering logging error messages via a service alone. What if the error you want to log is related to the service itself? Maybe the server that hosts all dependent services (including the error-logging service) is unreachable or down. Client errors should be logged on the client side and periodically flushed to the server when connectivity to the service is available.
That's what I would do...
Take a look at the new Silverlight Integration Pack for Enterprise Library from Microsoft patterns & practices. It provides plumbing for both logging (client-side and via a remote service) and exception handling with flexible configuration of policies via config or programmatically.
In ASP.NET I usually log exceptions server-side; in Windows Forms I can either log exceptions server-side or write to a log file on the client. Silverlight seems to fit somewhere in between.
I wanted to know what everyone else is doing to handle their Silverlight exceptions and I was curious if any best practices have emerged for this yet.
For real logging that you can store and track, you will need to do it on the server, since you can't guarantee that anything on the client will be persisted.
I would suggest exposing a "LogEvent(..)" method on a server-side web service (maybe you already have one), which would then do the same kind of logging you do in ASP.NET.
Here's a video about basic web service calls in Silverlight, if you haven't done that yet:
http://silverlight.net/learn/learnvideo.aspx?video=66723
I'm not sure about any logging best practices, though; my first guess would be to follow the best practices for logging in a web service on the server and expose that to the client.
Hope this helps!
I would say that Silverlight fits much better into the ASP.NET side of the model. You have a server which serves the web page. An object (the Silverlight app) on the page pings a data service to fetch data and display it.
All data access happens on the server side, and it does not matter whether the data is used to create ASP.NET pages on the server or sent raw to the RIA for display. I log any failures in the data service on the server side (the event log works fine) and do not allow any exception to pass to WCF. When the client does not receive the expected data (it gets a null collection or something similar), it displays a generic data-access error to the user. We may need to extend that soon to pass a bit more information (distinguishing between access denied, missing database, infrastructure failure, internal error, etc.), but we do not plan to pass exception messages to the client.
As for the client side, sometimes we get into a situation where an async call times out; it is just another message. For general exceptions from client code (typically, bugs in our code), I just pass the exception to the browser to display in the same manner as any script exception.
Also take a look at the new Silverlight Integration Pack for Enterprise Library from Microsoft patterns & practices. It provides support for logging exceptions to isolated storage or remote services and is configurable via policies in external config or programmatically. Batch logging and automatic retry (in case of occasionally connected scenarios) are also supported.
Use the Isolated Storage available to Silverlight applications. You should store your log there.
Then you can develop a mechanism to send the user's log to a web service, like the Windows bug-report service.
It very much depends on the type of application that you're developing.
If it's an MVC/MVP based architecture, then your model, or most of it at least, will be on the server, and this is where most of your exceptions will be thrown, I would imagine; so you can log them there and choose whether or not to display a message to the user.
For exceptions from the client, you may want to know the details, so just send them back.