Reducing the size of a frozen model when converting to a tfjs model - tensorflow.js

I have a frozen model of 96 MB, and using "quantization 1" (1-byte quantization) I have reduced it to 24 MB.
Is there a way to reduce this size further? I want the size to be between 10 and 12 MB (preferably 10 MB).
Thanks

Related

Huge array of libSDL textures

I am developing an app that presents the user with a potentially very large user-generated image gallery, 10 or so images at a time.
The app is to be implemented in C using libSDL and 2D textures for accelerated rendering.
The overall gist of it in pseudocode is:
while cycle < MAX_CYCLES
    while i < MAX_STEPS
        show a gallery of 10 image thumbnails
        while (poll events)
            if event == user has pushed next
                break
        i++
    scramble image galleries using a genetic algorithm
    cycle++
I could load every image from disk at initialization time, creating all the required textures, so image presentation is fast. But of course this would be slow and potentially allocate a huge array of textures.
I will scale down the images for presentation, which could mitigate the problem, but the total size of the collection depends on user preference. I can certainly cap the maximum, but the cap cannot be small.
I was thinking about unloading every unused image at every step of every cycle, using SDL_FreeSurface and SDL_DestroyTexture. This would mean reloading the data from disk, recreating the surface and recreating the texture each time. Is this a viable approach?
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
In summary, is there a recommended method to deal with this type of situation?
I would always keep 3 slides in memory:
Prev - Current - Next
While presenting the current slide, preload the next slide and unload slide (Current - 2).
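A minimal sketch of that sliding window in C, assuming SDL2 with SDL_image for decoding (the SlideWindow type, the helper names, and the paths array are mine, not from the question):

#include <SDL.h>
#include <SDL_image.h>

typedef struct {
    SDL_Texture *prev, *cur, *next;
} SlideWindow;

/* Load slide i as a texture, or NULL when i falls outside the gallery. */
static SDL_Texture *load_slide(SDL_Renderer *r, char **paths, int n, int i)
{
    return (i >= 0 && i < n) ? IMG_LoadTexture(r, paths[i]) : NULL;
}

/* Advance by one slide: destroy the old prev (which would become
 * Current - 2), shift the window, and preload the new next slide. */
static void advance(SlideWindow *w, SDL_Renderer *r,
                    char **paths, int n, int *cur)
{
    if (w->prev) SDL_DestroyTexture(w->prev);
    w->prev = w->cur;
    w->cur  = w->next;
    *cur += 1;
    w->next = load_slide(r, paths, n, *cur + 1);
}

Because only three textures are alive at once, GPU memory stays bounded regardless of gallery size, at the cost of one disk read per step.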
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
Not quite: if the GPU driver deems it necessary, it will swap unused texture data out to RAM.
For example, if you're presenting 10 images and thus have 30 images in memory, then at 2K with alpha (1920 x 1080 x 4 bytes, about 8.3 MB each) you will need approx. 250 MB.
As long as you don't run on an embedded system (or a very old, outdated one), this shouldn't be a big concern.

How to train a custom Object detector from scratch in tensorflow.js?

I followed multiple examples to train a custom object detector in TensorFlow.js. The main problem I am facing: everywhere, a pretrained model is used.
Pretrained models are fine for general use cases, but they fail in custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image size restriction, which defeats the whole purpose, because my images are big and also not of the same aspect ratio, so resizing is not an option.
I have tried multiple examples; all follow the same path one way or another.
What I want?
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the answer sounds simple, trust me, I have been searching for this for multiple days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow Object Detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run the detector on every partition, but what if the object sits between two partitions?
The image does not need to be partitioned for that. When labelling the image, you will know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start. There are two issues with these coordinates:
the detected boxes will always be in the center, so the training will be biased. To prevent it, a random factor can be used: x - (224-w)/r, y - (224-h)/r where r can be randomly taken from [1-10], for instance (see the sketch after this list)
if the detected boxes are bigger than 224 x 224, you might first resize the image, keeping its aspect ratio, before cropping. In that case the box size (w, h) will need to be readjusted according to the scale used for the resizing
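A rough sketch of the crop-origin computation (in C just for concreteness; crop_origin is a hypothetical helper, 224 is MobileNet's input size, and the box is assumed to fit inside the crop):

#include <stdlib.h>

/* Top-left corner of a 224x224 crop containing a labelled box at
 * (x, y) with size (w, h). The random divisor r in [1, 10] keeps the
 * box from always landing dead-center, which would bias training. */
static void crop_origin(int x, int y, int w, int h,
                        int *crop_x, int *crop_y)
{
    int r = 1 + rand() % 10;
    *crop_x = x - (224 - w) / r;
    *crop_y = y - (224 - h) / r;
    if (*crop_x < 0) *crop_x = 0;   /* clamp to the top-left edge; the   */
    if (*crop_y < 0) *crop_y = 0;   /* right/bottom need the image size  */
}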

What's the fastest way to hash a very large dataset for UICollectionView Layout...NSIndexPath is too slow

I have a UICollectionViewController with a large dataset (>2000 items) with a custom layout. When using sections, scrolling performance became extremely choppy. Using Instruments and a few tests, I determined this was due to lookup in the layout (layoutAttributesForElementsInRect:). I cache layout attributes in prepareLayout, and look them up here like so, in the fastest way I know of:
[elementsInfo enumerateKeysAndObjectsUsingBlock:^(NSIndexPath *indexPath, UICollectionViewLayoutAttributes *attributes, BOOL *innerStop) {
    if (CGRectIntersectsRect(rect, attributes.frame)) [allAttributes addObject:attributes];
}];
I found that ~25% of CPU time was spent in this enumeration, mostly in [NSIndexPath isEqual:]. So I need a faster way to hash these values.
It must be possible, because I did a cross test using the same data with a sectioned UICollectionViewFlowLayout and it was smooth.
Well, it turns out that using arrays instead of dictionaries, and filtering with an NSPredicate, was much faster, since in this case the indices were already known.
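The underlying idea, sketched in plain C (the types below are stand-ins for NSArray and UICollectionViewLayoutAttributes, not the actual API):

#include <stddef.h>
#include <stdbool.h>

typedef struct { float x, y, w, h; } Rect;
typedef struct { Rect frame; /* cached layout attributes */ } Attr;

static bool intersects(Rect a, Rect b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* One linear pass over a flat, index-ordered array: no NSIndexPath
 * hashing or isEqual: calls, just a rect test per cached element.
 * out must have room for n pointers. */
static size_t attrs_in_rect(const Attr *all, size_t n, Rect visible,
                            const Attr **out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (intersects(all[i].frame, visible))
            out[k++] = &all[i];
    return k;
}

Since prepareLayout can store the attributes sorted by position, such a scan could even stop early once frames fall past the visible rect.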

cvAddWeighted for different size of images

I am trying to use cvAddWeighted for images of different sizes. I know I can crop the larger image to match the smaller one, but I don't want to. Is there any other technique that lets me use cvAddWeighted with images of different sizes?
It all depends on the relationship between your different images. Assuming the smaller ones are a subset of the biggest one, you can copy a small one into an empty image of the maximum size and then use cvAddWhatever on all the resulting big images. Of course this implies that you know the maximum size of your images beforehand (or that you can store them somehow).
You should set an ROI on the images.
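A minimal sketch of the ROI route, using OpenCV's legacy C API (the function name, placement, and blend weights are arbitrary here):

#include <opencv2/core/core_c.h>

/* Blend a small image onto a region of a bigger one with the same
 * depth and channel count. Setting the ROI makes cvAddWeighted see
 * the big image as if it were small-sized. */
void blend_small_into_big(IplImage *big, IplImage *small,
                          int x, int y, double alpha)
{
    cvSetImageROI(big, cvRect(x, y, small->width, small->height));
    cvAddWeighted(big, alpha, small, 1.0 - alpha, 0.0, big);
    cvResetImageROI(big);
}

The same mechanism covers the first answer too: copy the small image into a zeroed maximum-size canvas via an ROI, then blend the two full-size images.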

How many rows should be in the (main) buffer of a virtual Listview control?

I am writing an application in pure C against the Win32 API. There is an ODBC connection to a database which will retrieve the items (actually rows).
The MSDN sample code implies a fixed-size buffer of 30 for the end cache (which would almost certainly not be optimal). I think the end cache and the main cache should be the same size.
My thinking is that the buffer should hold more than the maximum number of items that could be displayed by the list view at one time. I guess this could be re-calculated each time the ListView was resized?
Or is it just better to go with a large fixed value? If so, what is that value?
Use the ListView_ApproximateViewRect (or the LVM_APPROXIMATEVIEWRECT message) to get the view rect height.
Use the ListView_GetItemRect (or the LVM_GETITEMRECT message) to get the height of an item.
Divide the view rect height by the height of an item to get the number of items that can fit in your view.
Do this calculation on each size event.
Then create your buffer accordingly.
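A minimal sketch of that calculation in C (here GetClientRect supplies the visible height instead of ListView_ApproximateViewRect, since only the client area matters, and item 0 is assumed to exist):

#include <windows.h>
#include <commctrl.h>

/* Rough count of items that fit in the list view's client area;
 * call it from the parent's WM_SIZE handler and size the buffer
 * from the result. */
static int ItemsPerView(HWND hList)
{
    RECT client, item;
    int itemH;

    GetClientRect(hList, &client);
    ListView_GetItemRect(hList, 0, &item, LVIR_BOUNDS);
    itemH = item.bottom - item.top;
    return itemH > 0 ? (client.bottom - client.top) / itemH : 0;
}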
The LVN_ODCACHEHINT notification message will let you know how many items it is going to ask for. This could help you in deciding how big your cache should be.
@Brian R. Bondy Thanks for the explicit help on how to get the number of items. In fact I was already working my way to understanding that it could be done (for list or report view) with ListView_GetCountPerPage, and I would use your way to get it for the others, though I don't need ListView_ApproximateViewRect since I will already know the new size of the ListView.
@Lars Truijens I am already using LVN_ODCACHEHINT and had thought about using that to set the buffer size; however, I need to read to the end of the SQL data to find the last item to get the number of rows returned from ODBC. Since that would be the best time to fill the 'end cache', I think I have to set up the number of items (and therefore fill the buffer) before we get a call to LVN_ODCACHEHINT.
I guess my real question is one of optimization, for which I think Brian hinted at the answer: the overhead of trashing your buffer and reallocating memory is smaller than the overhead of going out to the network and doing an ODBC read, so make the buffer fairly small and change it often.
Is this right?
I have done a little more playing around, and it seems that LVN_ODCACHEHINT generally fills the main buffer correctly, and only misses if a row (in report mode) is partially visible.
So I think the answer for the size of the cache is: the total number of displayed items, plus one row of displayed items (since in the icon views you have multiple items per row).
You would then re-read the cache on each WM_SIZE and on each LVN_ODCACHEHINT that has a different begin or end item number.
The answer would seem to be (or a random collection of notes as I fiddle around with ideas):
As a general answer for buffers:
Start with some amount, in this case a screenful (I add an extra row in case the next is partially uncovered), and then every time the screen is scrolled, double the buffer size (up to the point before you run out of memory).
Which would seem to be wrong. As it turns out, most ways of loading data are already buffered, whether ODBC calls or file I/O. Pretty much anything that isn't, that I can think of, is either in memory or is recalculated on the fly. This means the answer really is: take the values provided in LVN_ODCACHEHINT (and add 1 on either side - this just seems to work faster if you don't have an integral row height).
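A fragment of the WM_NOTIFY handler for that final approach (EnsureCacheRange is a hypothetical ODBC-backed refill routine, not a Win32 API):

case LVN_ODCACHEHINT: {
    /* lParam is the NMHDR pointer passed with WM_NOTIFY. */
    NMLVCACHEHINT *hint = (NMLVCACHEHINT *)lParam;
    int from = hint->iFrom > 0 ? hint->iFrom - 1 : 0;  /* pad one above */
    int to   = hint->iTo + 1;   /* pad one below; clamp to item count   */
    EnsureCacheRange(from, to); /* hypothetical: refill buffer via ODBC */
    return 0;
}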
