On a Windows Server 2019 machine, I have many directories under a main images directory, each with about 100 images.
A 32-bit executable currently relies on disk caching to serve the images. The current load on the exe is about 100 images per second, and it's getting to the point where disk caching only seems to slow down the threads, each of which requests a different image.
My initial thought is to have a 64-bit exe load everything into memory (about 200 GB; the RAM on the server is sufficient). Once the images are loaded in the 64-bit exe, either find a way to share the memory between the 32-bit and 64-bit exes, or use TIdTCPServer on the 64-bit side and TIdTCPClient on the 32-bit side, calling over for each image.
My concern with the shared-memory approach is the address-space limit of the 32-bit EXE (unless there is a way around it) when accessing memory shared out by the 64-bit process.
The only other way I see is preloading MS SQL Server 2019 with the images in a memory-optimized table, which would guarantee performance and reliability and lower the development/testing time compared to the TIdTCP server/client.
The main idea is to have a good, reliable solution with the lowest headache for dev/test/lifecycle support.
Any thoughts, with applicable code, are welcome.
TIA
I created this node package
At version 1.7.3, its unpacked size was only 618 KB.
But after updating to version 2.0.0 with only minor file changes, its size became 4.35 MB.
The really weird thing is that I actually reduced the file size after 1.7.3 by removing a third-party module I had imported, plus a few JS and CSS files, yet it is still 4.13 MB.
1. I don't think the unpacked size is related to the actual size of the node module. Is that right?
2. If I'm correct, what exactly is unpacked size, and is there a way to reduce it?
3. If I'm wrong, what factors might have increased the size, and how could I reduce the unpacked size?
Note
I started this project with the npx create-react-library command
(created by https://www.npmjs.com/package/create-react-library).
Whenever I wanted to publish, all I ran was one command:
npm publish
This command did all the publishing work for me.
This was my first time creating a node package, so please bear with me if this turns out to be a very silly mistake.
If it is packed into a tgz or tar.gz, it is basically the same thing as a zip file; they just use different compression algorithms. The data is compressed so that the download experience is more convenient: smaller files mean quicker download times.
That said, the packed and unpacked sizes are directly correlated. Imagine pushing the air out of a bag of potato chips: although this makes any bag smaller, a full bag will still occupy the most space.
As we discussed earlier, the unpacked size is the size that your application will eventually be once it is installed on a machine. The same method used to pack it into a tgz file is used to reinflate it on the other end of the download so that it can be used by node. The size of your package just before it was packed should be the same size it ends up being after it is unpacked; this is what 'unpacked size' refers to. The correlation isn't perfect, though. In other words, a project twice the size doesn't mean a tarball twice the size; other factors are at play. The average size of a single file in your package has a lot to do with it as well. In the earlier analogy, imagine crushing all of the potato chips to crumbs before pushing the air out: you would still be packing the same amount of chips, but would need a lot less space.
This is where the answer gets a bit murky. It is hard to know for sure what is causing your package to bloat without actually seeing the unpacked files for both versions. That said, I'm sure you could do a very small bit of investigation and figure it out on your own; it is just simple math. The sizes of your individual files, added together, should be just a little less than the unpacked size of your package, and the conversion from unpacked size to tarball size works as I described above.
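If you want to see that math for your own package without publishing anything, npm can print the exact file list that would go into the tarball, together with both the packed and unpacked sizes (assuming a reasonably recent npm, version 6 or later):

# lists the tarball contents plus package size and unpacked size, without creating the file
npm pack --dry-run

Running that against 1.7.3 and again against 2.0.0 and comparing the two listings should point straight at whatever is bloating the newer version (bundled build output, sourcemaps, an accidentally included folder, and so on). A "files" whitelist in package.json, or an .npmignore file, is the usual way to trim what gets published.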
One thing that I will point out and highlight is that you need to check your dependencies for malicious software. If you don't trust it, as a rule, don't use it. If version 3 of a dependency is 3 times the size of version 2 for no apparent reason, it is suspect.
Just yesterday, I read that more than 3,000 Docker images on Docker Hub currently contain malware, and Docker Hub is used by industry leaders every day!
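On the dependency point above: it will not catch everything, but a quick scan of your dependency tree against the public advisory database is cheap and built in (again npm 6 or later):

# reports known vulnerable or malicious releases in the installed dependency tree
npm audit

Treat it as a first pass only; a clean audit does not prove a dependency is trustworthy.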
I have a new SSD (256 GB) on which I installed ESXi 6.7 (the usual bunch of standard partitions) plus a couple of VMFS volumes (20 GB each) that contain two Linux flavors. Everything works. Now I would like a backup copy of the system, so I cloned the SSD onto another identical SSD (in the future I'd like to have a file image, e.g. an .iso), but when I test the new cloned SSD something is wrong. ESXi boots up and I do see the two VMFS partitions, but they do not appear as datastores, they are not usable, and the two virtual machines appear to be broken (most likely because there is no datastore).
I do the copy by booting a live Linux and using dd:
dd if=/dev/sdb of=/dev/sda bs=512 conv=noerror,sync
Of course sda and sdb are not in use when I clone, since the live Linux boots from the USB stick.
Any idea why the exact copy delivered by dd does not behave exactly the same as the original SSD? Are there any special settings to be used with dd?
I still don't know why a bit-by-bit cloned copy does not show up the same, but eventually mounting the VMFS partitions manually (via SSH) did the magic, and the virtual machines work.
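For anyone else who hits this: as far as I can tell, the VMFS metadata on the clone still references the original disk's device ID, so ESXi treats the copies as unresolved snapshots and will not auto-mount them. From an SSH session the manual mount looks roughly like this (ESXi 6.x esxcli syntax; "datastore1" is a placeholder for whatever label the snapshot list reports):

# list VMFS volumes that ESXi sees as unresolved snapshots/copies
esxcli storage vmfs snapshot list
# mount a copy while keeping its original UUID and label
esxcli storage vmfs snapshot mount -l "datastore1"
# or, if the original volume is also attached, give the copy a new UUID instead
esxcli storage vmfs snapshot resignature -l "datastore1"

After mounting (or resignaturing), the datastore shows up again and the VMs can be re-registered if needed.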
I am trying to render a video project that I created with kdenlive. It is about 50 minutes long and contains a dozen short 1080p video clips and several hundred still images (mostly 18MP). melt runs and proceeds to consume all 4GB of my RAM, at which point it is killed by the kernel.
I have tried both MLT 0.9.0, which came with Ubuntu 14.04, and the latest version, 0.9.8, which I compiled myself. No difference.
Is this indicative of a problem with melt, or is it just not realistic to render this kind of project with only 4GB of RAM?
Do you have 4 GB of free RAM before launching melt? I do expect a project of that complexity and resolution to consume nearly 4 GB. You can readily remove half the project contents and run a test to see how it compares. There is a workaround that requires editing the project XML to set autoclose=1 on the playlists, but that is not set by default since it only works with sequential processing and will break handling in a tool that seeks, such as Kdenlive.
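The XML tweak is mechanical enough to script. A minimal sketch, assuming the project file is named project.kdenlive (adjust to your own file name) and that none of the <playlist> elements already carries an autoclose attribute; work on a copy, since the change breaks seeking inside Kdenlive:

# keep the original project intact for editing in Kdenlive
cp project.kdenlive project-render.kdenlive
# add autoclose="1" to every playlist element in the copy
sed -i 's/<playlist /<playlist autoclose="1" /g' project-render.kdenlive

Then point melt at project-render.kdenlive instead of the original project for the render.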
I am dealing with an issue where loading a large file into memory in a Silverlight 4 app leads to an out-of-memory exception and a crash. The file is ~100 MB. I am trying to determine whether Silverlight has some sort of default limit on RAM.
I can only tell you about Silverlight 5, as I'm having this issue with it now.
As another author has written here, on any machine (x86 or x64) the default memory limit for a 32-bit process is 2 GB. If a special flag in the .exe header is set (IMAGE_FILE_LARGE_ADDRESS_AWARE), the limit is increased to 4 GB. However, in OOB mode a Silverlight app is launched by C:\Program Files (x86)\Microsoft Silverlight\sllauncher.exe, which is a 32-bit process that doesn't have that flag set, so it has a 2 GB memory limit MINUS roughly 800 MB for .NET CLR usage.
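If you want to verify the flag yourself, dumpbin (installed with the Visual Studio tools, run from a Developer Command Prompt) will show whether the launcher was built large-address aware:

rem prints "Application can handle large (>2GB) addresses" only when the flag is set
dumpbin /headers "C:\Program Files (x86)\Microsoft Silverlight\sllauncher.exe" | findstr /i "large"

If nothing is printed, the process is stuck with the plain 2 GB limit described above.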
In short: the RAM limit (at least in my case, OOB mode) is about 1.3 GB.
(Sorry for answering a 1.5-year-old question, but people may still want to know...)
I have some automated tests (using CUnit) which require a disk-image file (a raw copy of a disk) to be mounted in Windows and explored. I have previously used a tool/library called "FileDisk-17", but this doesn't seem to work on my Windows 7 (64-bit).
Update
I should point out that changing the image format (to, say, VHD) is not an option.
Any suggestions as to other (perhaps better supported) tools or libraries for mounting the file? The project is coded in ANSI C and compiled using MinGW.
Best regards!
Søren
Edit: Searching Bing for +filedisk 64 brings up a 64-bit build of FileDisk, the utility you refer to:
http://www.winimage.com/misc/filedisk64.htm
And FileDisk-15 signed for 64-bit here:
http://www.acc.umu.se/~bosse/
I can't vouch for it as I have never used it and am not familiar with the author.
Alternatively:
If you have a VHD, you can mount it in Windows easily:
http://technet.microsoft.com/en-us/library/cc708295(WS.10).aspx
See also:
http://www.petri.co.il/mounting-vhd-files-with-vhdmount.htm
Since you have a raw dd image, not a VHD, you will need to convert it first:
http://www.bebits.com/app/4554
Or qemu-img.exe can also do this:
qemu-img.exe convert -f raw rawdisk.img -O vpc rawdisk.vhd
Alternatively, you can create an empty VHD, and use DD to copy the raw image to the VHD, by opening the VHD as a raw device.
I faced this problem recently and found ImDisk to be an extremely nice solution:
Free, with source available and a very flexible open source license
Trivial setup (I have seen filedisk64 (in the accepted answer) described as having a "technical" setup)
Straightforward GUI and command-line access
Worked on Windows 7 64-bit
Seems to happily mount any kind of filesystem recognised by Windows (in my case, FAT16)
Works with files containing
Raw partitions
Entire raw disks (i.e. including the MBR and one or more partitions; which partition to mount can be selected)
VHD files (which it turns out are just raw partitions or disks with a 512-byte footer appended!)
Also can create RAM drives -- either initially empty or based on an existing disk image! (Very neat I must say!)
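For reference, the basic attach/detach is a one-liner each (the image path here is just an example; add -o ro for read-only access):

rem attach an image file as drive X:
imdisk -a -f C:\temp\disk.img -m X:
rem detach it again when finished
imdisk -d -m X:

For an entire raw disk image, you can pick which partition to mount, as noted above.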
I did encounter minor issues trying to unmount drives. I was unable to unmount a drive from the GUI right-click context menu as the drive appeared to be "in use" by the explorer.exe process. Closing the Explorer window and using imdisk -d -m X: also didn't work; however imdisk -D -m X: (-D "forces" an unmount, whatever that means) did. This worked even if the drive was visible in an open Explorer window, without appearing to create any problems. However even after the drive appeared to have fully unmounted, an imdisk -l to list all available devices would still report that \Device\ImDisk0 exists, and if you remount the drive later, both that and \Device\ImDisk1 will appear in the output of imdisk -l (and so on with more unmount/remount cycles). This didn't create any problems with actually using the mounted drive when I tried a few unmount/remount cycles, though it theoretically might if you perform this many times between reboots.
ImDisk was invaluable for transferring the contents of a 1.5 GB disk drive with one FAT16 DOS partition from an ancient 486 machine.