Unpacked size of my node module suddenly increased a lot with barely any change - reactjs

I created this node package
At version 1.7.3, its unpacked size was only 618 KB.
But after updating to version 2.0.0 with only small file changes, its size became 4.35 MB.
The really weird thing is that I actually reduced the file size after version 1.7.3 by removing a third-party module I had imported, as well as a few JS and CSS files, but the package is still 4.13 MB.
1. I don't think the unpacked size is related to the actual size of the node module. Is that right?
If I'm correct, what exactly is the unpacked size, and is there a way to reduce it?
If I'm wrong, what factors might have increased the size, and how could I reduce the unpacked size?
Note
I started this project with the npx create-react-library command.
created by https://www.npmjs.com/package/create-react-library
Whenever I published, all I did was run one command:
npm publish
This command did all the publishing work for me.
This is my first time creating a node package, so please bear with me if it turns out to be a very silly mistake.

If it is packed into a tgz or tar.gz, it is basically the same thing as a zip file. They just use different compression algorithms. The data is compressed so that the download experience is more convenient: smaller files mean quicker download times.
That said, the packed and unpacked sizes are directly correlated. Imagine pushing the air out of a bag of potato chips. Although this will make any bag smaller, a full bag will still occupy the most space.
As we discussed earlier, the unpacked size is the size your package will eventually be once it is installed on a machine. The same method used to pack it into a tgz file is used to reinflate it on the other end of the download, so that it can be used by node. The size your package was just before you packed it should be the same size it ends up being after it is unpacked; this is what 'unpacked size' refers to. The correlation isn't perfect, though: a project twice the size doesn't mean a tarball twice the size. Other factors are at play, and the average size of a single file in your package has a lot to do with it as well. In the earlier analogy, imagine crushing all of the potato chips to crumbs before pushing the air out. You would still be packing the same amount of chips, but would need a lot less space.
This is where the answer gets a bit murky. It is hard to know for sure what is causing your package to bloat without actually seeing the unpacked files for both versions. That said, I'm sure you could figure it out yourself with a very small bit of investigation; it is just simple math. The sizes of your individual files, added together, should come to just a little less than the unpacked size of your package, and the conversion from unpacked size to tarball size is as described above.
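If you want to see exactly what is going into the tarball, npm can show you before you publish. A small sketch, assuming a reasonably recent npm (the tarball name below is just an example; npm names it <name>-<version>.tgz):
# list the files that would be published, with their sizes, without uploading anything
npm pack --dry-run
# or build the tarball locally and look inside it
npm pack
tar -tzvf my-package-2.0.0.tgz
# compare the on-disk size of the build output between versions, e.g. a dist/ folder
du -sh dist/
Comparing that listing between 1.7.3 and 2.0.0, and checking the "files" field in package.json (or a .npmignore), will usually point straight at the file or folder that ballooned.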
One thing that I will point out and highlight is that you need to check your dependencies for malicious software. If you don't trust it, as a rule, don't use it. If version 3 of a dependency is three times the size of version 2 for no apparent reason, it is suspect.
Just yesterday, I read that more than 3,000 Docker images on Docker Hub currently contain malware. Docker Hub is used by industry leaders every day!

Related

uboot using FIT to upgrade filesystem

I want to upgrade my systems in the field using the uboot FIT images.
My system is a custom firmware, booted by U-Boot. So far the FIT filesystem works very well. It provides a shasum-verified upload. I am using U-Boot scripts to update stuff on the target.
One intriguing type defined in the U-Boot docs is type "filesystem". The actual content could be several things, like maybe a tarred bunch of files, or an actual collection of separate individual files in one chunk in the FIT.
In another FIT question, Tom Rini implied that a filesystem is really just a binary blob, that what goes into it is my problem, and that U-Boot could then just mmc write ... or usb write ... to create the new filesystem on some partition. Is this really the case?
How can I build a filesystem (say FAT), on a host build computer for packaging with FIT?
Thanks, Steve
The creation of a filesystem image will depend on the filesystem itself. In many cases, build systems such as OpenEmbedded or buildroot can help you here as they will create the images for you.
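If you only need a small FAT image and don't want to pull in a full build system, you can also create one directly on the host. A rough sketch using mkfs.vfat and mtools (the size, file names and the .its name are just examples):
# create an empty 64 MB image file
dd if=/dev/zero of=rootfs.fat bs=1M count=64
# put a FAT filesystem on it (works on a plain file, no root or loop mount needed)
mkfs.vfat -F 32 rootfs.fat
# copy your files into the image with mtools
mcopy -i rootfs.fat -s ./staging/* ::
# the resulting image can then be listed as a "filesystem" node in your .its
# and assembled into the FIT with mkimage
mkimage -f update.its update.itb
On the target, U-Boot can then extract that image from the FIT and mmc write or usb write it to the partition, as described in the question.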

Realistic memory usage for MLT Framework project?

I am trying to render a video project that I created with kdenlive. It is about 50 minutes long and contains a dozen short 1080p video clips and several hundred still images (mostly 18MP). melt runs and proceeds to consume all 4GB of my RAM, at which point it is killed by the kernel.
I have tried both mlt 0.9.0 that came with Ubuntu 14.04, and I have tried the latest version, 0.9.8, that I compiled myself. No difference.
Is this indicative of a problem with melt, or is it just not realistic to render this kind of project with only 4GB of RAM?
Do you have 4 GB of free RAM before launching melt? I do expect a project of that complexity and resolution to consume close to 4 GB. You can readily remove half the project contents and run a test to see how it compares. There is a workaround that requires editing the project XML to set autoclose=1 on the playlists, but that is not set by default since it only works with sequential processing and will break handling in a tool that seeks, such as Kdenlive.
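If you want to try the autoclose workaround, the edit is small. A hedged sketch, assuming the playlist elements in your project XML are written with attributes and don't already carry autoclose (work on a copy; file names are just examples):
# work on a copy of the Kdenlive/MLT project file
cp project.kdenlive project-autoclose.kdenlive
# add autoclose="1" to every playlist element
sed -i 's/<playlist /<playlist autoclose="1" /g' project-autoclose.kdenlive
# render the modified copy sequentially with melt
melt project-autoclose.kdenlive -consumer avformat:output.mp4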

Titanium mobile installed package size is too large

I have made a very simple game in titanium mobile. I only use 90k of sound files, but use quite a lot of graphics, so my .apk file is about 2.5MB. I am guessing most of this comes from the graphics files. I have a couple of specific questions.
Do graphics files that are not used get added to the final package?
(I am guessing yes, because the compiler cannot execute dynamic JavaScript to figure out whether a file could ever be needed)
Does the size of graphics files in the Resources/iphone folder affect the size of the Android package (and vice versa)?
Are the packages bigger on average than using native code alone? If so, by how much?
What else can I do to reduce the package size?
What method of compressing images is most successful on android phone?
What size for a file do people consider normal? (when should I stop trying to optimise?)
So basically, how do I measure and reduce the size of the components and final deliverable package?
To answer questions 1, 2, 4 & 6:
1) Yes - unused graphics are added to the final package.
2) No - the Resources/iphone graphics are not included.
You can see the intermediate (pre-apk) output by looking at build/android/bin/assets/Resources to see what is being compiled into your binary (a one-liner for spotting the largest files there is sketched after this answer).
4) You could try minifying the JS files.
6) IMO 2.5MB is pretty small
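A quick way to spot the heavy files in that intermediate directory, assuming a Unix-like shell with GNU sort (the path is the one mentioned above):
# list everything that will be packaged, biggest first
du -ah build/android/bin/assets/Resources | sort -rh | head -20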
I tried to answer a few questions:
I think so, since the matching splash screen is loaded depending on the screen resolution of the device, so you need to have an image in different resolutions in stock.
Packages should be zipaligned. To check your apk, use
zipalign -c -v 4 existing.apk
2.5 MB is not as big as you might think. Many apps are >10 MB, so no one will be put off by your app size.
Take a look in the Android docs.
You can remove unused libraries.
Unzip the apk
Go to the /lib directory.
You will find 3 subdirectories:
armeabi
armeabi-v7a
x86
Now you can create 3 deployments, one for each of the three platforms named above.
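A rough sketch of the repacking steps (keystore, alias and apk names are placeholders, and which ABIs you actually need depends on your target devices):
# an apk is just a zip archive, so unpack it
unzip myapp.apk -d myapp
# keep only one ABI per deployment, e.g. drop everything except armeabi-v7a
rm -rf myapp/lib/armeabi myapp/lib/x86
# repack it
(cd myapp && zip -r ../myapp-armv7.apk .)
# a modified apk must be re-signed and zipaligned before it will install
jarsigner -keystore my-release.keystore myapp-armv7.apk mykeyalias
zipalign -f 4 myapp-armv7.apk myapp-armv7-aligned.apk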

Configuration Management for FPGA Designs

Which configuration management tool is best for FPGA designs, specifically Xilinx FPGAs programmed with VHDL and C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and its great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to capture the other project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion, but I switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
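For reference, the regeneration step in that Makefile boils down to running coregen in batch mode over each script. A shell sketch of the same idea (the flags are from memory and may vary between ISE versions, so check coregen's help):
# regenerate every core from its .xco script, using the shared coregen.cgp project file
for xco in cores/*.xco; do
    coregen -b "$xco" -p coregen.cgp
done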
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy that and configuration manage it explicitly.

Read data from damaged media

Is it possible to read damaged media (cd, hdd, dvd,...) even if windows explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if it is only partial - especially if you can still see from Explorer that the file is there, but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
One step lower would be opening the raw partition (i.e. disk image) by passing a string such as "\\.\F:" to CreateFile. It would allow you to read raw sectors from a drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.
Note: in C/C++ source you need to escape the backslashes in the string literal passed to CreateFile, of course.
If you want to do it, do it from the Linux side - see this open-source project: http://sourceforge.net/projects/monkeycity/
Or a ready-made app, freeware too: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
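A typical invocation for a scratched CD looks something like this (the device name /dev/sr0 and the output names are examples for a Linux machine):
# pass 1: copy everything that reads cleanly, skipping bad areas quickly
ddrescue -n /dev/sr0 cd.img cd.map
# pass 2: go back to the bad areas with direct access and retry each a few times
ddrescue -d -r3 /dev/sr0 cd.img cd.map
Because the mapfile records what has already been recovered, you can eject and reinsert the disc (or even switch to a different drive) between runs and ddrescue will only go after the still-missing sectors.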
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean it is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors I think they had internal flaking of their emulsion. WinXP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commerical app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It is originally meant for creating error correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.
