Loading Image from remote server - codenameone

I'm loading images in Codename One apps from a remote server. How do I display new images when they are updated on the server?
// Delete the locally cached copy so URLImage won't keep serving the stale file
Storage.getInstance().deleteStorageFile("xxxx");
// Re-create the image; the placeholder is shown until the download completes
Image icon = URLImage.createToStorage(placeholder, "xxxx", "remote_link");
The images take a long time to load, and unfortunately, when the images are updated on the server, the application continues to display the old ones.

The old images will display as long as the local file exists and as long as you have the image object in RAM. Frankly, if you don't want caching, URLImage might not be the best approach. You can use something like downloadImageToStorage, which gives you a bit more flexibility, or even just a regular Rest request for byte[] data.
About speed, I will need more details to give an authoritative answer. If your images on the server are large then you're effectively downloading a lot of redundant data and "paying" for it. URLImage hides this to some degree by scaling the image down and removing the overhead after the fact, but you'd still waste bandwidth. You can increase the number of network threads (usually defined in the init(Object) method), which might improve performance in some cases, but at some point you're limited by the bandwidth you have on the device.

Related

What is the best way to handle big media files in MEAN-Stack applications?

I have a MEAN-Stack application and I store media files in an AWS S3 Bucket.
Currently I handle the media file upload by encoding the files in base64 and transferring them with a simple POST request per file, through the Node.js backend to the S3 bucket, and returning the reference link to the file afterwards.
That worked well for a while, but now some users are uploading bigger files that partly exceed the size cap of a POST call (I think that's 100 MB per call, so I capped it at 95 MB plus a 5 MB buffer for meta information).
This obviously exceeds the technical capabilities of the application, but even for media files below that size it takes a long time to upload and there is no feedback about the upload progress for the user.
What would be the best way to handle big files in the MEAN + S3 Stack?
What Angular-side libraries would you suggest? Maybe for video file compression / type conversion (.mov is part of the problem), but also for user feedback.
Does it make sense to put a data stream through the Node.js server?
How would you handle the RAM cap? (currently 512 MB per VM (EC2) on which the Node server is hosted).
Or what other solutions would you suggest?
A small foreword: read about AWS request-signing if you do not already know what it is. This allows your back-end to sign a hash of the parameters of AWS requests so that they can be called by the front end securely. You should actually use this with your existing GetObject requests so that you can control, track, and expire accesses.
What would be the best way to handle big files in the MEAN + S3 Stack?
Either by uploading directly from the client, or by streaming to a server as a multipart upload to AWS S3. Note that doing so through the client requires some work: you must call CreateMultipartUpload, orchestrate the signing of multiple UploadPart requests on the server, and then call CompleteMultipartUpload.
Multipart upload limits are huge and can handle any scale with your choice of chunk size.
In Node.js this can actually be done much more easily than by handling each command yourself. See the @aws-sdk/lib-storage package, which wraps the multipart upload in a single managed operation that handles errors and retries.
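For illustration, here is a minimal sketch of that approach in TypeScript. It assumes the @aws-sdk/client-s3 and @aws-sdk/lib-storage packages; the region, bucket, key and file path are placeholders, not values from the question:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

async function uploadLargeFile(localPath: string): Promise<void> {
  const upload = new Upload({
    client: new S3Client({ region: "eu-central-1" }),  // placeholder region
    params: {
      Bucket: "my-media-bucket",                       // placeholder bucket
      Key: "uploads/video.mov",                        // placeholder key
      Body: createReadStream(localPath),               // streamed, not base64-encoded
    },
    queueSize: 4,                    // number of parts uploaded concurrently
    partSize: 10 * 1024 * 1024,      // 10 MB per part (S3's minimum is 5 MB)
  });

  // Progress events can be forwarded to the browser for user feedback.
  upload.on("httpUploadProgress", (p) => console.log("uploaded bytes:", p.loaded));

  await upload.done();
}

uploadLargeFile("./video.mov").catch(console.error);

The progress events also address the "no feedback about the upload progress" part of the question.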
What Angular-side libraries would you suggest? Maybe for video file compression / type conversion (.mov is part of the problem), but also for user feedback.
I don't know much about Angular, but I would recommend not processing objects on the front end. A great (and likely cheap) way to accomplish this without a dedicated server is AWS Lambda functions that trigger on object upload. See more about Lambda invocation here.
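As a rough sketch of that trigger (the bucket configuration and the actual processing step are assumptions, not part of the answer above), a Lambda handler for S3 upload events looks roughly like this in TypeScript:

import type { S3Event } from "aws-lambda";

// Invoked by S3 whenever a new object lands in the configured bucket/prefix.
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded, with spaces as '+'.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    // Hand the object off to whatever processing is needed, e.g. starting a
    // transcode job or writing metadata to the database (omitted here).
    console.log(`New upload: s3://${bucket}/${key} (${record.s3.object.size} bytes)`);
  }
};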
Does it make sense to put a data stream through the Node.js server?
It does to me, as I mentioned in the answer to question 1, but it's not the only way. Lambda functions are again a suitable alternative, as is request presigning. See an AWS blog about the issue here.
There also seems to be a way to POST from the front end directly and control access through S3 policies.
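As an illustrative sketch of request presigning (bucket name, region and expiry are placeholders), the back end can hand the browser a short-lived URL that the file is PUT to directly, bypassing the Node server:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-central-1" });  // placeholder region

// Returns a URL the browser can PUT the file to directly.
export async function presignUpload(key: string, contentType: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "my-media-bucket",   // placeholder bucket name
    Key: key,
    ContentType: contentType,
  });
  return getSignedUrl(s3, command, { expiresIn: 900 });  // valid for 15 minutes
}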
How would you handle the RAM cap? (currently 512 MB per VM (EC2) on which the Node server is hosted).
Like all performance questions, the answer is to measure. Monitor the usage of your server in production and in tests. In addition, it's always good to run stress tests on important architecture: hammer your architecture (replicated in a development deployment) with an emulation of worst-case, high-volume usage.
What might be most beneficial in your case is not to run a single server, but a cluster of servers with autoscaling and load balancing. In addition, containerization can help decouple physical server deployments from your application. Containers can also run on AWS Fargate, a serverless platform for containers, which means memory scaling can happen without much configuration change.
To focus this answer: for your purposes, Fargate or Lambda seem appropriate for providing a serverless architecture.
Or what other solutions would you suggest?
See above answers.

AngularJS Performance vs Page Size

My site is ~500 KB gzipped, including JS, CSS and images. It is built on AngularJS. A lot of people in my company are complaining about the site being slow on lower bandwidths. There are a few questions I would like to get answered:
Is 500 KB gzipped too high for lower bandwidths? People claim it takes 20 seconds to load on their machines, which I believe is an exaggeration. Is it really due to AngularJS and its evaluation time?
How does the size of the app matter at lower bandwidths? If my site is 500 KB and I reduce it to 150 KB by making a custom framework, would that really help at lower bandwidths? If so, by how much?
It's all subjective, and the definition of "low bandwidth" is rather wide. However, using https://www.download-time.com/ you can get a rough idea of how long it would take to download 500 KB on different bandwidths. As a rough calculation, 500 KB is about 4 megabits, so the transfer alone takes roughly 8 seconds at 512 Kbps and well under a second at 5 Mbps.
So, on any connection above 512 Kbps (minimum ADSL speeds; most are now better than 5 Mbps, and 3G mobile is around the same mark), it's unlikely that the file size itself is the problem.
If "low bandwidth" also implies "limited hardware" (RAM, CPU), it's possible the performance problem lies in unzipping and processing your application. Angular is pretty responsive, but low-end hardware may struggle.
The above root causes would justify rewriting the application using your own custom framework.
The most likely problem, however, is any assets/resources/templates your angular app requires on initialization - images, JSON files etc. This is hard to figure out without specific details - each app is different. The good news is that you shouldn't need to rewrite your application - you should be able to use your existing application and tweak it. I'm assuming the 500Kb application can't be significantly reduced in size without a rewrite, and that the speed problem is down to loading additional assets as part of start-up.
I'd use Google Chrome's developer tools to see what's going on. The "Performance" tab has an option to simulate various types of network conditions. The "Network" tab allows you to see which assets are loaded and how long they take. I'd look at which assets take time and see which of those can be lazy loaded. For instance, if the application loads a very large image file on start-up, perhaps that could be lazy loaded, allowing the application to appear responsive to the end user more quickly.
A common way to improve perceived performance is to use lazy loading.
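As an illustration of that idea (framework-agnostic rather than AngularJS-specific; the data-src convention is an assumption for this sketch), heavy images can be deferred until they scroll into view:

// Defer loading of heavy images until they are about to become visible.
// Assumes each <img> keeps its real URL in a data-src attribute.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!;   // the download starts only now
    obs.unobserve(img);
  }
});

lazyImages.forEach((img) => observer.observe(img));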
To decrease your load time, first make sure caching is configured properly, and use a download-time calculator to estimate how long your file takes to transfer; you can use https://downloadtime.org for reference. If you have any issues, let me know.
As angular.js itself has a gzipped size of 57 KB, it seems much more is loaded with this initial page call, roughly 10 times the size of angular.js.
To decrease the page load time, try to create chunks of your JavaScript functionality that contain only what is needed for a given page, e.g. the index page.
For example, when you're using webpack, the recommended default maximum file size is around 244 KiB; see here.
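As a rough sketch of that chunking setup (this assumes webpack with a TypeScript config; the entry name and paths are placeholders):

// webpack.config.ts (illustrative only; entry points and paths are placeholders)
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: { index: "./src/index.ts" },
  optimization: {
    splitChunks: {
      chunks: "all",          // pull shared/vendor code into separate bundles
    },
  },
  performance: {
    maxAssetSize: 250000,        // ~244 KiB, webpack's default warning threshold
    maxEntrypointSize: 250000,
  },
};

export default config;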

Upload millions of images from tablet to server

I want to architect a system that will allow thousands of users to upload images from a tablet to a content management system. In one upload, each user can upload up to 12 images at a time, and there could be up to 20,000 uploads per day. As the numbers are <240,000 images per day, I've been wondering what the best approach is to avoid bottlenecking during peak times.
I'm thinking along the lines of using a web server farm (IIS) to upload the images through HTTP POST. Each image is less than 200 kB, and I could store the images on a file system. This would be 48 GB per day and only 16 TB per year.
Then I could store the image metadata in a SQL Server DB along with other textual data. At a later time, the users will want to recall the images and other (text) data from the DB to the tablet for further processing.
On a small scale this is no problem, but I'm interested in what everyone thinks is the best approach for uploading/retrieving such a large number of images/records per day?
I've been wondering what the best approach is to avoid bottlenecking during peak times.
Enough hardware. Period.
I'm thinking along the lines of using a web server farm (IIS) to upload the images through HTTP POST.
There is no alternative to that worth mentioning.
This would be 48GB per day and only 16TB per year.
Yes. Modern storage is just fantastic ;)
Then I could store the image metadata in SQL Server DB along with other textual data.
Which makes this a pretty small database, which is good. In the end that means the problem comes down to the image storage; the database is not really that big.
On a small scale this is no problem, but I'm interested in what everyone thinks is the best approach for uploading/retrieving such a large number of images/records per day?
I am not sure you are on a large scale yet. Problems will be around:
Number of files. You need to split them into multiple folders, and ideally have the concept of buckets in the database so you can spread the files across multiple buckets, each potentially on its own server(s), which is good for long-term maintenance (see the sketch after this list).
Backup / restore is a problem, but a lot less of one when you use (a) tapes and (b) buckets as described above; the chance of a problem hitting everything at once is tiny. Also, "3-4 copies on separate machines" may work well enough.
Apart from the bucket problem (i.e. you cannot put all those files into a single folder, which would be seriously unwieldy), you are totally fine. This is not exactly super big. Keep the web tier stateless so you can scale it, do the same on the storage backend, then use the database to tie it all together, and make sure you take FREQUENT database backups (e.g. every 15 minutes).
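To make the bucket idea concrete, here is a small TypeScript sketch (purely illustrative; the path layout and file extension are assumptions) of deriving a sharded storage path from an image ID so no single folder ever holds millions of files:

import { createHash } from "node:crypto";

// Spread files across 256 * 256 sub-folders ("buckets") based on a hash of the ID.
export function storagePathFor(imageId: string): string {
  const digest = createHash("sha1").update(imageId).digest("hex");
  return `${digest.slice(0, 2)}/${digest.slice(2, 4)}/${imageId}.jpg`;
}

// e.g. storagePathFor("img-000123") yields something like "3f/a2/img-000123.jpg"
// (the two-level prefix depends on the hash).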
One possible way is to upload from the client directly to Amazon S3. It will scale and accept any number of files thrown at it. After the upload to S3 is complete, save a link to the S3 object along with the useful metadata to your DB. In this setup you avoid the file upload bottleneck and only have to save ~240,000 records per day to your DB, which should not be a problem.
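A minimal client-side sketch of that flow (the /api endpoints, response shape and field names are assumptions): the tablet app asks the back end for a presigned URL, PUTs the image straight to S3, then records the metadata:

// Upload one image directly to S3 and then store its metadata record.
async function uploadImage(file: File): Promise<void> {
  const presign = await fetch(`/api/uploads/presign?name=${encodeURIComponent(file.name)}`);
  const { url, key } = await presign.json();

  // The file goes straight to S3, so the web servers never touch the bytes.
  await fetch(url, { method: "PUT", body: file, headers: { "Content-Type": file.type } });

  // One small DB write per image (~240,000/day is easy for SQL Server).
  await fetch("/api/images", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key, size: file.size, uploadedAt: new Date().toISOString() }),
  });
}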
If you want to build a service that adds value and save a (frankly huge) amount of time on file uploads, consider using existing third-party solutions built to solve this particular issue, for example Uploadcare and some of its competitors.

Silverlight Large File Downloader

I've got an interesting one: the ability to marshal the download of files - many in the gigabyte region of data.
I have a silverlight website that allows the upload of large volumes of data (Gigs) using the following plugin: http://silverlightuploader.codeplex.com/
However, I also want to be able to allow users to download the same data too. But I want to be able to restrict the number of concurrent downloads. Thus the idea of directly controlling a stream of data to the client via Silverlight is compelling, as I don't want to install anything directly on the machine.
My question is: for the volume of data I am looking at retrieving, is it appropriate to use the WebClient class (I can specify how many bytes into the HTTP stream I want to read, so I can download it incrementally, and put some business rules around it checking how many people are currently downloading, and make it wait until the user count has gone down), or can I use sockets to keep the overhead of HTTP down?
Unless there is a project I've not found which does exactly this thing!
Cheers in advance,
Matt
As long as you download the data in chunks of some smaller size, the actual volume of the total file won't matter, and it won't really matter what you use to do the downloading. For example, for a file of that size I would just use the WebClient class and download chunks of maybe 1 or 2 MB at a time to a temporary storage file on disk. You'll have to keep track of how much you've downloaded and where the next chunk needs to start from, but that isn't an overly difficult problem. You can use sockets, but then you have to communicate with the web server yourself to get access to the file in the first place.
When a client connects to download the next chunk, that is where you would enforce your business logic concerning the number of concurrent users. There might be a library you can use to do all this but to be honest it's not a complex problem.
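The same chunking idea, sketched in TypeScript rather than Silverlight's WebClient (a language-neutral illustration only, and it assumes the server honours HTTP Range requests):

// Download a large file in fixed-size chunks using HTTP Range requests,
// so progress can be tracked and concurrency rules enforced per chunk.
async function downloadInChunks(
  url: string,
  totalBytes: number,
  chunkSize = 2 * 1024 * 1024, // 2 MB per request
): Promise<Blob> {
  const parts: ArrayBuffer[] = [];
  for (let offset = 0; offset < totalBytes; offset += chunkSize) {
    const end = Math.min(offset + chunkSize, totalBytes) - 1;
    const res = await fetch(url, { headers: { Range: `bytes=${offset}-${end}` } });
    parts.push(await res.arrayBuffer()); // persist to temporary storage / report progress here
  }
  return new Blob(parts);
}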

What's the best way to send pictures to a browser when they have to be stored as blobs in a database?

I have an existing database containing some pictures in blob fields. For a web application I have to display them.
What's the best way to do that, considering load on the server, maintenance, and coding effort?
I can think of the following:
"Cache" the blobs to external files and send the files to the browser.
Read them directly from the database every time they're requested.
Some additional facts:
I cannot change the database to get rid of the blobs altogether and only save file references in the database (like in the good ol' Access days), because the database is used by another application which actually requires the blobs.
The images change rarely, i.e. if an image is in the database it mostly stays that way forever.
There will be many read accesses to the pictures; 10-100 pictures will be displayed per view (depending on the user's settings).
The pictures are relatively small, < 500 KB.
I would suggest a combination of your two ideas: the first time an item is requested, read it from the database, but afterwards make sure it is cached by something like Squid so you don't have to retrieve it from the database every time it's requested.
One important thing is to use proper HTTP cache control, that is, setting expiration dates properly and responding to HEAD requests correctly (not all platforms/webservers allow that).
Caching those blobs to the file system makes sense to me, even more so if the DB is running on another server. But even if not, I think a simple file access is a lot cheaper than piping the data through a local socket. If you do cache the blobs to the file system, you can most probably configure any webserver to do good cache control for you. If it's not the case already, you should maybe request a field that indicates the last update of the image, to make your cache more efficient.
greetz
back2dos
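As a sketch of what that file-system caching plus cache-control setup could look like (using Express and a placeholder loadBlobFromDb() helper; neither comes from the question, and the cache directory is an assumption):

import express from "express";
import { promises as fs } from "node:fs";
import path from "node:path";

const app = express();
const cacheDir = "/var/cache/blob-images"; // placeholder cache location

// Placeholder for the real blob read against the existing database.
async function loadBlobFromDb(id: string): Promise<Buffer> {
  throw new Error("replace with the actual DB query");
}

app.get("/images/:id", async (req, res) => {
  // In real code, validate req.params.id before using it in a path.
  const cached = path.join(cacheDir, `${req.params.id}.jpg`);
  try {
    await fs.access(cached);                              // cache hit: skip the DB entirely
  } catch {
    try {
      const bytes = await loadBlobFromDb(req.params.id);  // cache miss: read the blob once
      await fs.writeFile(cached, bytes);
    } catch {
      return res.status(404).end();                       // image missing or DB read failed
    }
  }
  res.set("Cache-Control", "public, max-age=86400");      // a day of browser/proxy caching
  res.sendFile(cached);
});

app.listen(3000);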
