I am trying to render a video project that I created with kdenlive. It is about 50 minutes long and contains a dozen short 1080p video clips and several hundred still images (mostly 18MP). melt runs and proceeds to consume all 4GB of my RAM, at which point it is killed by the kernel.
I have tried both MLT 0.9.0, which came with Ubuntu 14.04, and the latest version, 0.9.8, which I compiled myself. No difference.
Is this indicative of a problem with melt, or is it just not realistic to render this kind of project with only 4GB of RAM?
Do you have 4 GB of free RAM before launching melt? I do expect a project of that complexity and resolution to consume close to 4 GB. You can readily remove half the project contents and run a test to see how it compares. There is a workaround that requires editing the project XML to set autoclose=1 on the playlists, but that is not the default because it only works with sequential processing and breaks seeking in a tool such as Kdenlive.
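For reference, the edit looks roughly like this (the playlist id and entry here are placeholders, not from your project):

<playlist id="playlist0" autoclose="1">
  <entry producer="clip1" in="0" out="249"/>
  <!-- remaining entries unchanged -->
</playlist>

With autoclose set, melt can close each producer once the playlist has played past it, instead of holding every clip and image open for the entire render.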
On Windows Server 2019, I have many directories under a main images directory, each with about 100 images.
A 32-bit executable currently relies on disk caching to serve the images. The current load on the exe is about 100 images per second, and it is getting to the point where disk caching only seems to slow down the threads, each of which requests a different image.
My initial thought is to have a 64-bit exe load everything into memory (about 200 GB; the server has sufficient RAM). Once it is loaded in the 64-bit exe, either find a way to share the memory between the 32-bit and 64-bit exes, or use TIdTCPServer on the 64-bit side and TIdTCPClient on the 32-bit side, requesting each image over the connection.
My concern with the shared-memory idea is the 32-bit EXE's limited address space: I don't know of a way (unless there is one) for it to access shared memory owned by the 64-bit exe.
The only other way I can see is preloading SQL Server 2019 with the images in a memory-optimized table, which should guarantee performance and reliability and lower the development/testing time compared with building the TIdTCP server/client.
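Something like this is what I have in mind for the memory-table (just a sketch; the table and column names are made up):

-- requires a MEMORY_OPTIMIZED_DATA filegroup on the database
CREATE TABLE dbo.ImageCache
(
    ImagePath NVARCHAR(400) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    ImageData VARBINARY(MAX) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
-- SCHEMA_ONLY is fastest but means the cache must be reloaded after a restart

Both the 32-bit and the 64-bit exe could then fetch images through an ordinary connection.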
The main idea is to have a good, reliable solution with the least headache for dev/test/lifecycle support.
Any thoughts, with applicable code, are welcome.
TIA
I created this node package
At version 1.7.3, its unpacked size was only 618 KB.
But after updating to version 2.0.0 with only minor file changes, the size became 4.35 MB.
The super weird thing is that after version 1.7.3 I actually reduced the file size, by removing a third-party module I had imported and a few JS and CSS files, but it is still 4.13 MB.
1. I don't think the unpacked size is related to the actual size of the node module. Is that right?
2. If I'm correct, what exactly is the unpacked size, and is there a way to reduce it?
3. If I'm wrong, what factors might have increased the size, and how could I reduce the unpacked size?
Note
I started this project with the npx create-react-library command,
created by https://www.npmjs.com/package/create-react-library
Whenever I published, I ran just one command:
npm publish
and this command did all the publishing work for me.
This was my first time creating a node package, so please bear with me if this turns out to be a very silly mistake.
If it is packed into .tgz or .tar.gz, it is basically the same thing as a zip file; they just use different compression algorithms. The data is compressed so that the download experience is more convenient: smaller files mean quicker download times.
That said, the packed and unpacked sizes are directly correlated. Imagine pushing the air out of a bag of potato chips: although this makes any bag smaller, a full bag will still occupy the most space.
As discussed above, the unpacked size is the size your package will eventually be once it is installed on a machine. The same method used to compress it into a tgz file is used to re-inflate it on the other end of the download so that it can be used by Node. The size your package was just before it was packed should be the same size it ends up being after it is unpacked; that is what 'unpacked size' refers to. The correlation isn't perfect, though. In other words, a project twice the size doesn't mean a tarball twice the size; other factors are at play, and the average size of a single file in your package has a lot to do with it. In the earlier analogy, imagine crushing all of the potato chips to crumbs before pushing the air out: you would still be packing the same amount of chips, but would need a lot less space.
This is where the answer gets a bit murky. It is hard to know for sure what is causing your package to bloat without actually seeing the unpacked files for both versions. That said, I'm sure you could figure it out with a very small bit of investigation; it is just simple math. The sizes of your individual files, added together, should be just a little less than the unpacked size of your package, and the conversion from unpacked size to tarball size is as described above.
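npm can do that investigation for you: a dry-run pack prints every file that will go into the tarball, plus the packed and unpacked sizes, without publishing anything:

npm pack --dry-run

If that list contains things that shouldn't ship (examples, source maps, test fixtures), the usual fix is a "files" whitelist in package.json, for example (the dist name here is just a guess at your build output directory):

"files": ["dist"]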
One thing I will point out and highlight is that you need to check your dependencies for malicious software. If you don't trust a dependency, as a rule, don't use it. If version 3 of a dependency is three times the size of version 2 for no apparent reason, it is suspect.
Just yesterday, I read that more than 3,000 Docker images on Docker Hub currently contain malware, and Docker Hub is used by industry leaders every day!
So I have a WPF application which contains lots of real-time effects - drawing directly to bitmaps, shader effects, etc. - and it all works great.
However, GPU usage is a bit higher than I would like; it hovers around 20%. While optimising, I tried attaching the Visual Studio profiler. When it is attached with GPU Usage selected as one of the profiling tools, my WPF application - the same app, with the same content - hovers around 5% GPU usage!
After a heck of a lot of messing around, it does indeed seem that when the profiler is attached (specifically with GPU Usage selected; it doesn't happen with, e.g., just CPU Usage), GPU usage plummets.
Please note the following (5% refers to the profiler being attached, 20% to it not being attached):
I monitored GPU usage in Task Manager and cross-checked in Windows Performance Monitor (perfmon); both show the same thing, so I do not believe it is being misreported.
You can look at system power monitoring and physically see more power being used under the higher load.
If I push my application's content, in certain cases you can visibly see that things run a bit smoother at 5% than at 20% (fewer frame drops), though in general the frame rate at both 20% and 5% usage is the same 60 fps.
It happens whether run directly from Visual Studio as debug or release, with optimisation on or off, whatever.
It happens when published and run standalone.
You can attach the Visual Studio profiler (GPU tool) at run time - starting and stopping the profiler literally toggles between 5% and 20% usage, with no restart of my app or anything.
Profiling everything in detail using Visual Studio, JetBrains dotTrace, etc. does not identify any noticeable differences in the running app between 5% and 20% usage. E.g. the JetBrains output showing call trees, time spent processing, call rates, etc. is the same for both.
Good old WPF Performance Suite (wpfperf) shows no difference between 20% and 5% usage in the number of calls being made, etc. (though its visual profiler doesn't seem to work with the latest .NET Core, unfortunately).
GPU profiling in VS is not showing any difference either.
The Nvidia CUDA toolkit didn't want to profile this, and neither did RenderDoc - so I did at least look at those.
I can scale my app's graphics usage up and down to vary the 20%/5% figures, but there is always a difference between profiler attached and not.
I played with the Windows timer, just in case: the low-level system timer resolution runs at a consistent 1 ms for both 20% and 5%, and I confirmed that no other known power-saving settings are being changed. This was confirmed both at run time and in variants where I set things manually in code.
My app is a .NET Core 3.1 app. The graphics card is an Nvidia RTX 2060 Super.
Of note, I have seen something similar before in a separate WPF app (lots of 3D inside it, running .NET Framework 4.x), where running the GPU profiler as above would make the 3D rendering run more smoothly. That was tested on different PCs with different graphics hardware, and it is the same across different versions of the graphics drivers.
I am absolutely stumped as to what might be causing this... I wouldn't mind if it were the other way around and the app were four times faster when no profiler was attached!
I am aware that when profiling, various things might get set in the background; I have no clue at all what these might be.
Does anyone have any ideas at all?
Many thanks
Martin
Extra:
I found something similar, where performance is better when the debugger is not attached in Visual Studio - but my case doesn't require any debugger and appears to be specific to the GPU profiler, so I don't believe it is the same thing: Why does my program run way faster when I enable profiling?
Example screenshot of performance on a system demonstrating this...
High usage = no profiler attached. Where everything drops, the profiler is attached and running (from around 15 to 38 seconds). Big red arrows = my task. Note there is other activity going on, including Visual Studio starting up the profiler, detaching it, etc.
Example project (source + built) where you can see this happening...
https://1drv.ms/u/s!As6cQRoZ5gU5x8FzXdwcYS1qEFqjdg?e=98o64j
...note this is a new WPF project, created 15 minutes ago independently of my original project, with a test 3D object loaded into it - and it also shows a performance difference: almost 50% lower on my PC when the Visual Studio GPU profiler is attached.
I have made a very simple game in Titanium Mobile. I only use 90 KB of sound files but quite a lot of graphics, so my .apk file is about 2.5 MB. I am guessing most of this comes from the graphics files. I have a couple of specific questions:
1. Does the size of graphics files that are not used get added to the final package? (I am guessing yes, because the compiler cannot execute dynamic JavaScript to figure out whether a file could ever be needed.)
2. Does the size of graphics files in the Resources/iphone folder affect the size of the Android package (and vice versa)?
3. Are the packages bigger on average than using native code alone? If so, by how much?
4. What else can I do to reduce the package size?
5. What method of compressing images is most successful on an Android phone?
6. What file size do people consider normal? (When should I stop trying to optimise?)
So basically: how do I measure and reduce the size of the components and of the final deliverable package?
To answer questions 1, 2, 4 & 6:
1) Yes - unused graphics are added to the final package.
2) No - the Resources/iphone graphics are not included.
You can see the intermediate (pre-apk) output by looking at build/android/bin/assets/Resources to see what is being compiled into your binary.
4) You could try minifying the JS files.
6) IMO 2.5 MB is pretty small.
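To put numbers on 4 and the "measure" part of your final question, you can list what actually ships and losslessly recompress the images. Assuming the standard Titanium build layout and that optipng is installed:

du -a build/android/bin/assets/Resources | sort -n | tail -20
find Resources -name '*.png' -exec optipng -o5 {} \;

The first command shows your 20 largest compiled resources; the second shrinks PNGs without quality loss, which is usually the safest compression win on Android.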
I tried to answer a few of the questions:
I think so, since the matching splash screen is loaded depending on the screen resolution of the device, so you need to keep the image in several resolutions in stock.
Packages should be zipaligned. To check your apk, use:
zipalign -c -v 4 existing.apk
2.5 MB is not as big as you might think; many apps are >10 MB, so no one will be put off by your app's size.
Take a look in the Android docs.
You can remove unused native libraries:
Unzip the apk and go to the /lib directory. You will find three subdirectories:
armeabi
armeabi-v7a
x86
Now you can create three deployments, one for each of the three platforms named above.
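For example (the file names are placeholders; after editing an apk you must re-sign and re-align it):

unzip -l myapp.apk | grep 'lib/'
zip -d myapp.apk 'lib/x86/*'
jarsigner -keystore my.keystore myapp.apk mykeyalias
zipalign -f 4 myapp.apk myapp-aligned.apk

First list the bundled native libraries, then delete the ABI a given deployment doesn't need, re-sign with your own keystore, and zipalign the result.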
I am writing code using Visual Studio 2010 and SlimDX to render 3D content on a D3DImage. It works perfectly under 32-bit Windows XP. However, after migrating to 64-bit Windows 7 (quad core, 4 GB RAM), the same code no longer works. The symptom is that after rendering about 10 or 20 times, the D3DImage's IsFrontBufferAvailableChanged event is raised and IsFrontBufferAvailable has a value of false.
I have checked everything I can think of, e.g. RenderCapability.Tier, calling SetBackBuffer, checking the device (in fact it is a DeviceEx) after the front buffer is lost, updating the video card driver (Nvidia 9500 GT, 1 GB RAM), etc. None of them worked.
One thing that may contribute to the problem is that the image control which uses the D3DImage as its source is not created on the GUI thread. I am doing this to reduce the workload of the GUI thread and make the application more responsive. Of course, I have been using Dispatcher.Invoke to avoid threading problems. Again, it works perfectly on XP.
Any help is much appreciated. Thank you in advance.
I think there is an x64 version of slimdx.dll; if you are using the x86 version in a 64-bit process, that could be the problem.
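One way to rule out a bitness mismatch is to pin the build's platform target so the process always matches the SlimDX assembly you reference; in the project file that is just the standard MSBuild property:

<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>

Set it to x86 if you ship the x86 slimdx.dll, or x64 if you reference the x64 build.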