Can NReco PDF Generator somehow fire the Session_End event in an ASP.NET MVC application?

I'm using the NReco HTML-to-PDF converter to deliver a file stream representing a PDF document through an action method. However, it seems that when I invoke the method HtmlToPdfConverter.GeneratePdf, although the response is delivered correctly, the session is abandoned right afterwards. I know that because the Session_End event handler gets executed in Global.asax.
This premature, non-explicit expiration of the session makes the application misbehave in subsequent requests that query the session object (now set to null).
That does not happen when I generate the PDF files with another third-party library: i.e., the PDF file is generated and served, and the session keeps its state until it expires normally after some idle time.
Can NReco PDF Generator somehow fire the Session_End event in an ASP.NET MVC application?
I know it would be easier to use the PDF library that does not show this behavior; however, it does not support some features that NReco does, such as CSS and JavaScript.
Regards and thanks for your help

AngularJS: stale UI trying to load cachebusted HTML files that no longer exist when new version deployed

I have an angularjs/ui-router application. All the HTML templates, which are lazy-loaded by the framework, are cachebusted so that they are named myTemplate.someHash.html, and the references to those templates are also updated at compile time.
One problem that sometimes occurs is when I deploy a new version while a user is using the application. The sequence of events is as follows:
The user has opened a page which has a link on it to a state called Summary. The Summary state uses an HTML template called summary.html. But since we're cachebusting the HTML templates, this file is actually named summary.12345.html in the version of the application currently loaded in the user's browser.
A new release is deployed which contains some changes to summary.html, causing it to get a new hash, so it's now called summary.98765.html.
The user clicks on the link, and Angular tries to fetch summary.12345.html, which no longer exists, causing a 404.
Is there a good pattern for solving this?
My initial thought is to append some HTTP header in all requests, something like Expected-Version: 999 (where 999 is the build number generated in CI), and if that is not the version running on the server, then the server will respond with something like "410 Gone", in which case the application will ask the user to refresh the browser.
But it would require some work on the server side, and I'm not sure how to inject this logic into the template loading on the client side either.
Also, since new versions are typically deployed a few times per week (sometimes many per day), and most of those releases don't even contain any changes that would break the SPA in the above way, I don't want to force the users to reload the page all the time.
Could one approach be to only show the "please refresh" message when a request results in a 404 AND the response contains a header that indicates that you're running a stale version of the SPA?
Or maybe there is a simpler way?
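For what it's worth, the 404-plus-header idea above could be sketched as an AngularJS responseError interceptor. The header name X-UI-Version and the promptRefresh callback are made-up names for illustration, not part of any real app:

```javascript
// Sketch of the "404 AND stale-version header" check from the question.
// Assumptions: the server adds an "X-UI-Version" header to responses,
// and promptRefresh() shows the "please refresh" message to the user.
function staleVersionInterceptor($q, loadedVersion, promptRefresh) {
  return {
    responseError: function (rejection) {
      var serverVersion = rejection.headers && rejection.headers("X-UI-Version");
      if (rejection.status === 404 && serverVersion && serverVersion !== loadedVersion) {
        promptRefresh(); // stale SPA: only now do we bother the user
      }
      return $q.reject(rejection); // keep normal error propagation
    }
  };
}

// Registered in the app's config block, e.g.:
// $httpProvider.interceptors.push(['$q', function ($q) {
//   return staleVersionInterceptor($q, window.__uiVersion, showRefreshBanner);
// }]);
```

This only prompts the user when a 404 coincides with a version mismatch, so routine backend-only deploys never force a reload.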
I solved it in the following way.
I decided I don't want to ask users to refresh their browser every time a new release has been deployed, because many of those releases only contain backend changes.
I only want to ask them to refresh when a release has been deployed (since the time they opened the app in their browser) which contains changes to either:
the javascript application (which is served as two bundles), or
any of the html templates that angular lazyloads.
So I had to introduce the notion of a UI version. I decided that the definition of this UI version should be the SHA256 hash of all the filenames (just the names, since those already contain hashes for cachebusting) of the html templates mentioned above, concatenated with the names of the two js bundles (which are also cachebusted).
Here's the solution.
Part 1
I added a step in my CI pipeline, after the js app has been compiled and cachebusted, but before the whole application is built into a Docker image. This new build step looks like this:
const glob = require("glob");
const crypto = require('crypto');
const fs = require('fs');

// Promisified glob; rejects on error so the build step fails loudly
// (the original resolve-only version would leave the promise pending
// forever if glob reported an error).
const globAsync = (pattern) =>
  new Promise((resolve, reject) => {
    glob(pattern, {}, (err, files) => (err ? reject(err) : resolve(files)));
  });

Promise.all([globAsync("./js/*.js"), globAsync("./app/**/*.html")])
  .then(([jsFiles, htmlFiles]) => {
    const allFiles = jsFiles.concat(htmlFiles);
    fs.writeFileSync(
      './ui-version.env',
      `UI_VERSION=${crypto.createHash('sha256').update(allFiles.sort().join()).digest('hex')}`
    );
  })
  .catch((err) => {
    console.error(err);
    process.exit(1); // fail the CI build step
  });
As you can see, it writes the result to a file called ui-version.env. I then refer to this file in docker-compose.yml using the env_file option. This way the backend application is now aware of the current UI version, and it serves this value at the HTTP endpoint GET /ui-version.
Part 2
When the frontend app is loaded, it calls the above mentioned endpoint and stores the current UI version in memory.
When a new release has been deployed, a websocket message is sent to all connected frontend apps with a notification to check whether or not they are still running an up-to-date UI version. They do this by requesting the new endpoint mentioned above, and comparing the result to the version it saved on load.
What about users whose computers were in sleep mode or without an internet connection when the websocket message was sent out?
I already have event listeners set up in the SPA that fire when the computer comes back from sleep mode or when it regains its internet connection, so any of those events will now also trigger the UI version check.
To be honest, I'm not super excited about this solution. There are a lot of moving parts. It would be great if someone could offer a simpler solution. But unfortunately I'm skeptical that I will get any good answers at all.

ExtJS6: Partial App load for special request that always opens in a new window

We have an ExtJS 7 app, and special requests such as a password reset, which always open in a new tab via an email reset link, load the app in full. We would like to load only the few pages needed for these kinds of requests.
Is there a way in ExtJS to load only a particular page and its dependencies?
I have not seen tutorials on this subject in the official documentation. I did the following myself: I created a separate app (or bundle) for login. The backend is responsible for deciding what to serve (the login app or the main app): in the absence of a session, the user receives the login app.
Absolutely. You can make another app - each app is a page, and will have its own packaged dependencies.
That's the easiest approach. A more complicated approach is to break your application into several ExtJS packages. You can then configure the app.json to exclude all of the packages from the micro loader. You then need to load these packages dynamically, presumably after logging in.
Doing this, though, is extremely complicated, and almost certainly not worth doing.
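For completeness, a rough sketch of the dynamic-load step in modern ExtJS (6.5+). The package name 'resetpassword', the view xtype, and the exact app.json keys are assumptions and vary by ExtJS version, so treat this as a starting point rather than a drop-in:

```javascript
// Loads a package that was excluded from the main build (e.g. listed
// under "uses" instead of "requires" in app.json -- an assumption) and
// shows its main view once the package's classes are available.
function openResetPassword() {
  Ext.Package.load('resetpassword').then(function () {
    Ext.Viewport.add({ xtype: 'resetpasswordview' });
  });
}
```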

Accessible file upload in React (no drag/drop, no jQuery)

I am trying to find an accessible file upload method for React that doesn't rely on jQuery (I am not using it) but does rely on Fetch (async).
Everything I've found thus far seems to be drag and drop type components or uses jQuery's $.ajax methods. I feel like there's got to be a way to send a file with other form fields.
The front end is React with vanilla JavaScript. I've got a custom API that sends data to my back end asynchronously with Fetch. My back end uses Multer to gather form data. My form sends with multipart/form-data.
I've tried a few things, including adjusting the headers when sending from React to my back end, but either only the body comes through (no files) or nothing at all comes through.
I could get this to work by directly sending to my Express server, but I don't want to expose my API routes in the HTML, and I'm hoping to avoid the page refresh which would make this form stand out from everything else in my app. I have decided against using jQuery because I am able to do pretty much everything without it and I don't want to add the weight of the library just for the one method.
Accessibility is key (drag and drop components generally aren't accessible for obvious reasons) so I'm hoping to use a standard input file element here.
Thanks to anyone and all in advance.
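For reference, a minimal sketch of the standard-input approach with fetch and FormData. The field name "file", the endpoint "/api/upload", and the extra field are placeholders; the one detail that commonly causes the "body arrives but files don't" symptom is setting the Content-Type header yourself, which strips the multipart boundary Multer needs:

```javascript
// Builds a multipart body from a file plus other form fields.
function buildUpload(file, extraFields) {
  const formData = new FormData();
  formData.append("file", file); // must match Multer's field name, e.g. upload.single("file")
  for (const [name, value] of Object.entries(extraFields)) {
    formData.append(name, value);
  }
  return formData;
}

async function submitForm(fileInput) {
  const body = buildUpload(fileInput.files[0], { title: "My document" });
  // No Content-Type header here: fetch sets
  // "multipart/form-data; boundary=..." automatically.
  const res = await fetch("/api/upload", { method: "POST", body });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json();
}
```

In JSX the control stays a plain `<input type="file" onChange={...} />`, which keeps it keyboard- and screen-reader-accessible with no drag/drop involved.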

IE8 freeze caused by long synchronous xmlhttprequest from silverlight

I'm having an issue where a long synchronous request will freeze Internet Explorer.
Let me explain the context: this is a web application which only supports IE8 and which can only use synchronous requests*.
We have a Silverlight component with a save button. When the user presses the button, some information is sent to the server using a synchronous XMLHttpRequest, then some other actions are done client-side in the Silverlight component.
The server-side part includes calculations that will sometime take a while (several minutes).
In short, here is the code (C# Silverlight part):
ScriptObject _XMLHttpRequest;
_XMLHttpRequest.Invoke("open", "POST", url, false);
_XMLHttpRequest.Invoke("send", data);
checkResponse(_XMLHttpRequest);
doOtherThings();
I know that the server does its work properly because I can see, in the verbose logs, the end of the page rendering for the "url" called from Silverlight.
However, in debug mode I can see that I never reach the "checkResponse" line. After the "send" line, IE freezes forever and does not unfreeze even once the server log shows that "url" has been processed.
I also tried adding _XMLHttpRequest.SetParameter("timeout", 5000) between the "open" and "send" lines. IE freezes for 5 seconds, then "checkResponse" and "doOtherThings" are executed. Then IE freezes again while the server-side calculations are processed and does not unfreeze once the server is done with its work.
The IE timeout is supposed to be 3 hours (registry key ReceiveTimeout set to 10800000), and I also got rid of IE's 2-connection limit (MaxConnectionsPer1_0Server and MaxConnectionsPerServer set to 20).
Last important piece of information: there is no issue when the server-side part only takes a few seconds instead of several minutes.
Do you know where the freeze could come from (an IE bug, an XMLHttpRequest bug, something I have done wrong) and how I can avoid it?
Thank you !
Kévin B
*(While trying to solve my issue with the help of Google, I found an incredible number of "use async" and "sync is bad" posts, but I can't make that change in my app. Switching the application, its ajax loads, and all the server-side calculations to asynchronous is a huge job which has been quoted for our client and is a long-term objective. I need a short-term fix for now.)
Silverlight virtually requires that everything be done asynchronously; any long-running synchronous process will hang the browser if run on the UI thread. If you never reach the 'checkResponse' line, it is possible that an unhandled exception was thrown on the previous line and is being swallowed. You can check your browser's dev tools to see if there are any JavaScript errors. I am surprised that calling XMLHttpRequest synchronously works at all, since I would expect it to lock up the UI thread. But the solution depends on your definition of async.
You can try:
calling the sync XHR request on a background thread and then marshalling to the UI thread (eg with Dispatcher.BeginInvoke) when you are ready
setting up an XMLHttpRequest wrapper that makes the call in async mode and raises a callback in Silverlight on completion
using HttpClient or WebClient
While these options are async, they don't require your server code to be written any differently (it can stay synchronous). I have seen a web server process a call for over an hour before finally returning a response, at which time the Silverlight app raised its callback. You could even use tools like TPL async/await, or the co-routines available in many MVVM frameworks, to make the code appear procedural/synchronous while still performing its actions asynchronously.
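The second option (an async XHR wrapper that raises a callback into Silverlight) might look roughly like this on the JavaScript side. The plugin element id "slPlugin" and the scriptable member Bridge.OnSaveCompleted are assumed names; the real ones come from whatever the app registers with HtmlPage.RegisterScriptableObject:

```javascript
// Runs the request asynchronously so the UI thread is never blocked,
// then hands the result back through the supplied callback.
function postAsync(url, data, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", url, true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) { // request finished
      onDone(xhr.status, xhr.responseText);
    }
  };
  xhr.send(data);
}

// Invoked from Silverlight with HtmlPage.Window.Invoke("saveAsync", url, data);
// on completion it calls back into the plugin's scriptable object.
function saveAsync(url, data) {
  postAsync(url, data, function (status, body) {
    var plugin = document.getElementById("slPlugin");
    plugin.Content.Bridge.OnSaveCompleted(status, body);
  });
}
```

The server can keep computing for minutes; the browser stays responsive and Silverlight's checkResponse/doOtherThings logic moves into the OnSaveCompleted handler.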
I hope you already solved your issue, but maybe this will be helpful to whomever may come across this post.

webservice upload and progress

Please help me with this one, I'm not sure what the correct, or best, approach is. Basically I have a webservice which takes a byte stream, enabling a C# WinForms application to upload files to the webservice.
What I need is for the WinForms app to upload files in the background one at a time (with basic multithreading it tries to upload them all at once). I also need to drop a progress bar in there.
How should I do it? Any ideas? I have the feeling it should be fairly straightforward. I think the application should start a new thread for the first file, wait until it's finished, dispose of the thread, create a new one for the next file, and so on.
It completely depends on the technology you are using on the client side to access the web service.
If that technology allows for customization of the client proxy to the point where you can intercept transmission of messages (WCF allows this, I can't recall how much the old web services reference does), then you should be able to add your hook to see when the bytes are processed/sent.
Based on bookstorecowboy's comment about using the old "web reference" functionality in .NET, I believe that it generated proxies that derive from the SoapHttpClientProtocol class.
This being the case, I would recommend creating a custom class that derives from the SoapHttpClientProtocol class, overriding the GetWriterForMessage method. In this, you are supposed to return an XmlWriter given the Stream that is passed as a property on the SoapClientMessage parameter.
You would also create a custom class that derives from Stream which takes a Stream instance and forwards all the calls to that instance.
The only difference is that in the Write methods, you would fire an event indicating how many bytes were written.
You would then get the Stream that is exposed on the SoapClientMessage passed to the GetWriterForMessage and wrap it in your custom Stream implementation. You would also connect your event handlers here as well.
With that stream, you would create the XmlWriter and return it.
Then, for your proxies, you would use this new class that derives from SoapHttpClientProtocol and have the proxies derive from that.
As for ASP.NET 2.0 web services ("old web services"), you could add a SOAP extension to alter and extend their behavior.
You could also add a custom HTTP module, which gives you access down to the stream level.
See SOAP Extensions and HTTP Modules.