I am using Django REST Framework/AngularJS to develop a web application.
I have a use case in which the server needs to stream the contents of a file in real time; the file itself is growing, since some other application is logging to it. I know many inefficient ways to do this.
Can you suggest some better ways to achieve this? The Django view function should not return while the file is still growing, but should still be able to send the incremental data to the client.
Any help will be appreciated.
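For completeness on the client half: once the server streams a chunked response (Django's StreamingHttpResponse can do this), the browser's fetch() can consume it incrementally. A minimal TypeScript sketch of the consuming side only, with a made-up URL:

    // Consume a chunked streaming response incrementally.
    // Assumes the server flushes chunks while the file grows and
    // closes the stream when it is done; the URL is hypothetical.
    async function tailLogFile(url: string, onChunk: (text: string) => void) {
      const response = await fetch(url);
      if (!response.body) throw new Error("streaming not supported");
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break; // server closed the stream
        onChunk(decoder.decode(value, { stream: true }));
      }
    }

    // Usage: append each chunk to the UI as it arrives.
    tailLogFile("/api/logs/stream/", (text) => console.log(text));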
1. The setup
I'm currently initiating a GET request to an S3 bucket (not important) to download a very large file using the browser's fetch(). In its stored form, this file is raw, unstructured binary data that is unusable as-is.
2. The task and problem
There are a few things I want to do on the client-side with this data:
1. I need to process this data as it streams into the client to perform transformations on it (decryption, for example; a sketch follows after this question).
2. Once the data is processed and downloaded, it might still not be of any immediate use to the user outside the context of the web UI. Maybe the data should stay stored within the web app's sandbox disk space unless a user explicitly exports it?
3. The question
Where can I store this blob of unstructured data in both or either of the use cases listed above? There appear to be many options but none that fit this use case precisely. Any thoughts?
EDIT:
I feel like an idiot. I totally forgot about the FileSystem API. I'll take a look and answer my own question with a pseudo-implementation of how this works.
EDIT 2:
I feel the need to reiterate what I stated in 2.2 above:
within the web app's sandbox disk space
I don't care about accessing the user's whole file system. I just want a space I can work with large files in on disk, similar to the app space directories provided to mobile applications by Android and iOS.
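For the incremental-processing part (point 1 of the task above), one route is to pipe the response body through a TransformStream. A minimal sketch, where decryptChunk is a hypothetical placeholder; real ciphers usually need explicit chunk-boundary handling rather than independent per-chunk decryption:

    // decryptChunk stands in for whatever per-chunk transformation is needed.
    declare function decryptChunk(chunk: Uint8Array): Uint8Array;

    async function downloadAndTransform(url: string): Promise<Blob> {
      const response = await fetch(url);
      const transformed = response.body!.pipeThrough(
        new TransformStream<Uint8Array, Uint8Array>({
          transform(chunk, controller) {
            controller.enqueue(decryptChunk(chunk)); // process as it streams in
          },
        })
      );
      // Collect the processed stream; for truly huge files, pipe it to
      // storage instead of buffering it all into one Blob.
      return new Response(transformed).blob();
    }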
If you want to save and process a file at the client level, and Blob is not an option, you may consider the File System Access API (https://developer.mozilla.org/en-US/docs/Web/API/File_System_Access_API#writing_to_files), even though it introduces an interaction with the user.
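A minimal sketch of that route, piping the fetched stream straight to disk; showSaveFilePicker() is the one user interaction, the typings may need @types/wicg-file-system-access, and support is Chromium-only at the time of writing:

    async function saveStreamToDisk(url: string) {
      // Prompts the user once for a destination file.
      const handle = await (window as any).showSaveFilePicker({
        suggestedName: "download.bin",
      });
      const writable = await handle.createWritable();
      const response = await fetch(url);
      // pipeTo() closes the writable stream when the download finishes.
      await response.body!.pipeTo(writable);
    }

For the sandboxed space the asker wants, the same handle API is available without any picker through the origin private file system (navigator.storage.getDirectory()).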
Another option would be to take advantage of PWA client-side storage (https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Client-side_web_APIs/Client-side_storage); this is also a question of your application architecture.
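As a sketch of that option: IndexedDB can hold Blobs directly, which keeps the processed file inside the origin's sandboxed storage until the user explicitly exports it. The database and store names here are made up:

    function saveBlob(name: string, blob: Blob): Promise<void> {
      return new Promise((resolve, reject) => {
        const open = indexedDB.open("file-cache", 1);
        open.onupgradeneeded = () => open.result.createObjectStore("files");
        open.onsuccess = () => {
          const tx = open.result.transaction("files", "readwrite");
          tx.objectStore("files").put(blob, name); // Blob stored as-is
          tx.oncomplete = () => resolve();
          tx.onerror = () => reject(tx.error);
        };
        open.onerror = () => reject(open.error);
      });
    }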
Before checking whether processing your file at the client level can be done as you need with existing technologies, check whether you really need to do it there because it is the only option, or whether you can instead move that logic to the server level, depending on your use cases.
I am working on an Electron project to keep inventory of a warehouse, but I want to store the data on the client side (on the client's desktop/laptop) and not in a cloud database. How do I do this? Is using an xlsx file a good idea for storing the data? It would come with an added bonus: the user could read the data outside the app in an Excel sheet if they wanted to.
P.S.: Even if xlsx is a way, I would like to know other possible options so I can choose which is more comfortable for me. Thank you.
Edit: Sorry, I forgot to mention that I might also have to store images in the data.
You have plenty of options. You can store a JSON file and read it when the application boots up. Since this is a Node.js-related thing, I would suggest you use electron-store.
And xlsx is a good choice, but it may be overkill if what you are storing is too simple. On Windows you can store some settings in the registry too, but I prefer the config-file version.
I have also used an SQLite3 database for some apps. On Android, I believe many apps use the SQLite approach for a local database.
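A minimal sketch of the electron-store route; the key name and record shape are made up for the inventory example:

    import Store from "electron-store";

    // Persisted as JSON in the app's userData directory.
    const store = new Store<{ inventory: { sku: string; qty: number }[] }>();

    store.set("inventory", [{ sku: "WIDGET-1", qty: 40 }]);
    const items = store.get("inventory", []); // default when nothing is stored yet
    console.log(items);

Since electron-store keeps everything in one JSON file, images are better written as ordinary files under app.getPath("userData"), with only their paths stored alongside the inventory records.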
By medium to large I mean anything from 10 MB to 200 MB (sound files, if that is important).
Basically, I want to make an API that does some spectral analysis on the file itself, which requires a file upload. But for UI/UX reasons it would be nice to have a progress bar for the upload process. What are the common architectures for achieving this interaction?
The client application uploading the file will be a JavaScript client (ReactJS/Redux) and the API is written in ASP.NET Core. I have seen some examples that use WebSockets to update the client on progress, and other examples where the client polls for status updates given a resource URL to query. Are there any best practices (or a "modern way of doing this") that I should know of? TIA
In general, you just need to save the progress status to some variable while reading the input stream in your controller (a session-specific variable, because there might be several file-upload sessions at the same time) and then fetch this status from the client side with AJAX requests (or SignalR).
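The polling half of that could look like the following sketch; the endpoint and response shape are made up, so match them to whatever the controller exposes:

    async function pollProgress(sessionId: string, onProgress: (pct: number) => void) {
      for (;;) {
        const res = await fetch(`/api/upload/progress/${sessionId}`);
        const { percent } = await res.json(); // assumed response shape
        onProgress(percent);
        if (percent >= 100) return;
        await new Promise((r) => setTimeout(r, 500)); // poll every 500 ms
      }
    }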
You could take a look at this example: https://github.com/DmitrySikorsky/AspNetCoreUploadingProgress
I have tried 11 MB files with no problems. There is a line
await Task.Delay(10); // It is only to make the process slower
there; don't forget to remove it in the real solution.
In this sample, files are uploaded via AJAX, so I didn't try really large files, but you can use the iframe solution from this sample:
https://github.com/DmitrySikorsky/AspNetCoreFileUploading
The other part will be almost the same.
Hope this helps you. Feel free to ask if you have any additional questions.
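One more note: if all you need is the upload bar itself (bytes sent, not server-side processing progress), the client can measure that locally with XMLHttpRequest, which, unlike fetch() at the time of writing, exposes upload progress events; the endpoint here is made up:

    function uploadWithProgress(file: File, onProgress: (pct: number) => void) {
      return new Promise<void>((resolve, reject) => {
        const xhr = new XMLHttpRequest();
        xhr.upload.onprogress = (e) => {
          if (e.lengthComputable) onProgress((e.loaded / e.total) * 100);
        };
        xhr.onload = () =>
          xhr.status < 300 ? resolve() : reject(new Error(xhr.statusText));
        xhr.onerror = () => reject(new Error("upload failed"));
        const form = new FormData();
        form.append("file", file);
        xhr.open("POST", "/api/audio/analyze"); // hypothetical endpoint
        xhr.send(form);
      });
    }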
We were looking for a job/message queue technology.
After comparing the main ones (RabbitMQ, ActiveMQ, Apollo, ZeroMQ...), we chose Beanstalkd because, apparently, compared to RabbitMQ, "it gives 80% of the functionality with 20% of the weight and complexity".
But while looking at how it works, we didn't find any way to send a file through this queue system.
Is there a way to achieve that?
Maybe I should explain our situation. We've got a Web server and a Local one. What we want to achieve with the queues, first, is that the Web server "asks" the Local server to generate a complex PDF and send it back to the Web server when it's done, so it can be displayed to the visitor.
So maybe this isn't the right technology for that?
Finally, it works perfectly just by reading the file content as binary (producer) and writing it to the filesystem (consumer); it doesn't need any other technology (FTP, shared FS, etc.).
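A sketch of that producer/consumer pair in Node; the queue object stands in for whatever beanstalkd client you use (fivebeans, node-beanstalk, ...), so treat its method signatures as hypothetical. Also note that beanstalkd caps job size at 65535 bytes by default (raise it with the -z flag), so large PDFs need a bigger limit or chunking:

    import { readFile, writeFile } from "node:fs/promises";

    // Hypothetical minimal client interface; adapt to your library.
    declare const queue: {
      put(payload: string): Promise<void>;
      reserve(): Promise<{ payload: string; delete(): Promise<void> }>;
    };

    // Producer (Local server): send the generated PDF as base64 text.
    async function sendPdf(path: string) {
      const bytes = await readFile(path);
      await queue.put(bytes.toString("base64"));
    }

    // Consumer (Web server): decode the job body and write it to disk.
    async function receivePdf(path: string) {
      const job = await queue.reserve();
      await writeFile(path, Buffer.from(job.payload, "base64"));
      await job.delete();
    }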
I'm asking for advice. I'm working on a Silverlight 3 application, and now I need to choose how to save and retrieve its information. I could save the necessary info in files (from 1 KB to 300 KB in size) or in a database. If I used WebClient to access separate files, the load on the server would be very low. If I got the data from a database, I think the load on the server, and on the server-side code, would be much higher.
Please correct me if I'm wrong.
I'm looking forward to hearing from you!
Thanks
There are additional considerations if you use a file that is localized to the user's machine. If you wish to save data without any user intervention, then you are limited to using Isolated Storage, which has constraints on the size of your data. Otherwise, you have to ask the user for information on where to save/load the file. This is due to the security model used by Silverlight.
I am thinking that a database and the RIA framework might be the way to go.
Just my 2¢.
If you are saving and loading the entire file at a time, then it might be okay to use WebClient. It might take a little coding to handle errors that could result in incomplete saves.
If you're serializing some objects or XML data and storing that in a file, then you should probably be using a database instead.
Edit: It can be a pain to get WebClient or HttpWebRequest working correctly with GET/POST, but WCF can also be a pain to configure if you haven't done it before. WCF is probably better style, and you'll want to use a binary binding and send the file across as a byte[].