I have made a very simple game in Titanium Mobile. I only use about 90 KB of sound files, but quite a lot of graphics, so my .apk file is about 2.5 MB. I am guessing most of this comes from the graphics files. I have a couple of specific questions.
Do graphics files that are not used get added to the final package?
(I am guessing yes, because the compiler cannot execute dynamic JavaScript to figure out whether a file could ever be needed.)
Do graphics files in the Resources/iphone folder affect the size of the Android package (and vice versa)?
Are the packages bigger on average than using native code alone? If so, by how much?
What else can I do to reduce the package size?
What method of compressing images is most successful on Android phones?
What package size do people consider normal? (When should I stop trying to optimise?)
So basically, how do I measure and reduce the size of the components and final deliverable package?
To answer questions 1, 2, 4 & 6:
1) Yes - unused graphics are added to the final package.
2) No - the Resources/iphone graphics are not included.
You can inspect the intermediate (pre-APK) output in build/android/bin/assets/Resources to see exactly what is being compiled into your binary.
4) You could try minifying the JS files.
6) IMO, 2.5 MB is pretty small.
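To put a number on that, it is worth measuring what actually lands in the intermediate build directory mentioned above; a quick sketch using ordinary shell tools (the path is the one from point 2, the rest is just du and sort):

    du -a build/android/bin/assets/Resources | sort -n | tail -20    # the 20 largest files being packaged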
I tried to answer a few of the questions:
I think so, since the matching splash screen is chosen depending on the screen resolution of the device, so you need to ship the image in several resolutions.
Packages should be zipaligned. To check your APK, use:
zipalign -c -v 4 existing.apk
2.5 MB is not as big as you might think. Many apps are >10 MB, so no one will be put off by your app's size.
Have a look at the Android documentation.
You can remove unused native libraries.
Unzip the APK and go to the lib/ directory.
You will find three subdirectories:
armeabi
armeabi-v7a
x86
Now you can create three deployments, one for each of the platforms named above, as sketched below.
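For example, to see how much each ABI directory actually contributes before splitting the deployments (the APK file name is just a placeholder):

    unzip -q existing.apk -d apk-contents    # an APK is a plain zip archive
    du -sh apk-contents/lib/*                # per-ABI size of the native libraries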
I created this node package
At version 1.7.3, its unpacked size was only 618 KB.
But after updating to version 2.0.0 with only minor file changes, its size became 4.35 MB.
The really weird thing is that I actually reduced the file size after 1.7.3 by removing a third-party module I had imported, plus a few JS and CSS files, yet it is still 4.13 MB.
1. I don't think the unpacked size is related to the actual size of the node module. Is that right?
If I'm correct, what exactly is the unpacked size, and is there a way to reduce it?
If I'm wrong, what factors might have increased the size, and how could I reduce the unpacked size?
Note
I started this project with the npx create-react-library command (see https://www.npmjs.com/package/create-react-library).
Whenever I wanted to publish, all I ran was a single command:
npm publish
That one command did all the publishing work for me.
This was my first time creating a node package, so please bear with me if this turns out to be a very silly mistake.
If it is packed into a tgz or tar.gz, it is basically the same idea as a zip file; they just use different algorithms for compressing the data. The data is compressed so that the download experience is more convenient: smaller files mean quicker download times.
That said, the packed and unpacked sizes will be directly correlated. Imagine pushing the air out of a bag of potato chips: although this makes any bag smaller, a fuller bag will still take up more space.
As we discussed earlier, the unpacked size is the size your package will eventually be once it is installed on a machine. The same method used to compress it into a tgz file is used to re-inflate it on the other end of the download, so that it can be used by Node. The size your package was just before you packed it should be the same size it ends up being after it is unpacked; that is what 'unpacked size' refers to. The correlation isn't perfect, though: a project twice the size doesn't mean a tarball twice the size, because other factors are at play. The average size of a single file in your package has a lot to do with it as well. In the earlier analogy, imagine crushing all of the potato chips to crumbs before pushing the air out: you would still be packing the same amount of chips, but would need a lot less space.
This is where the answer gets a bit murky. It is hard to know for sure what is causing your package to bloat without actually seeing the unpacked files for both versions. That said, I'm sure you could do a very small bit of investigation and figure it out on your own. It is just simple math: the sizes of your individual files, added together, should be just a little less than the unpacked size of your package, and the conversion from unpacked size to tarball size works as I described above.
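For instance, assuming a reasonably recent npm, a dry run plus the registry metadata will show exactly what each version ships; the package name here is only a placeholder:

    npm publish --dry-run                      # lists every file plus "package size" and "unpacked size" without publishing
    npm view my-lib@1.7.3 dist.unpackedSize    # unpacked size the registry recorded for a published version
    npm view my-lib@2.0.0 dist.unpackedSize
    npm pack my-lib@2.0.0                      # fetch the published tarball locally...
    tar -tvzf my-lib-2.0.0.tgz | sort -k3 -n | tail    # ...and list its largest entries (GNU tar column layout assumed)

Comparing the two listings usually makes the culprit obvious (un-ignored build output, source maps, an accidentally bundled dependency), and a "files" field in package.json or an .npmignore is the usual way to keep such things out of the tarball.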
One thing I will point out and highlight: you need to check your dependencies for malicious software. As a rule, if you don't trust it, don't use it. If version 3 of a dependency is three times the size of version 2 for no apparent reason, it is suspect.
Just yesterday, I read that more than 3,000 Docker images on Docker Hub currently contain malware, and Docker Hub is used by industry leaders every day!
I want to upgrade my systems in the field using U-Boot FIT images.
My system is custom firmware booted by U-Boot. So far the FIT format works very well: it provides a shasum-verified upload, and I am using U-Boot scripts to update things on the target.
One intriguing type defined in the U-Boot docs is type "filesystem". The actual content could be several things, maybe a tarred bunch of files, or an actual collection of separate individual files carried as one chunk in the FIT.
In another FIT question, Tom Rini implied that a filesystem is really just a binary blob, that what goes into it is my problem, and that U-Boot can then simply mmc write ... or usb write ... it to create the new filesystem on some partition. Is this really the case?
How can I build a filesystem image (say FAT) on a host build machine for packaging into a FIT?
Thanks, Steve
The creation of a filesystem image will depend on the filesystem itself. In many cases, build systems such as OpenEmbedded or buildroot can help you here as they will create the images for you.
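If you want to do it by hand, a minimal sketch would look something like the following; the sizes, file names and the .its layout are purely illustrative, so check them against your board's U-Boot documentation:

    # Build a small FAT image on the host without needing root, using mtools
    dd if=/dev/zero of=fat.img bs=1M count=16
    mkfs.vfat fat.img
    mcopy -i fat.img zImage my.dtb ::/

    # Describe it in a FIT source file, then build the FIT with mkimage
    cat > update.its <<'EOF'
    /dts-v1/;
    / {
        description = "Filesystem update";
        #address-cells = <1>;
        images {
            fat-1 {
                description = "FAT filesystem image";
                data = /incbin/("fat.img");
                type = "filesystem";
                arch = "arm";
                compression = "none";
                hash-1 {
                    algo = "sha256";
                };
            };
        };
    };
    EOF
    mkimage -f update.its update.itb

On the target, a U-Boot script can then read that sub-image out of the FIT and write it to a partition with mmc write or usb write, exactly as described in the question.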
Is there any lightweight version of the card.io Android SDK? Its size is larger than my whole app: the SDK is 11 MB vs. 9 MB for my app, so the result ends up being larger than 20 MB.
Yes, card.io is big because card scanning is not trivial. You're welcome to exclude all the *.so files if you just want a manual entry form. Example documentation of this is found in the PayPal Android SDK.
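If you want to see exactly where those megabytes go before deciding, list the native libraries in the package; a rough check, assuming you pull card.io in as an AAR (the file names are placeholders):

    unzip -l card.io-*.aar | grep '\.so$'      # an AAR is a plain zip; the scanner libraries are the bulk of the size
    unzip -l app-release.apk | grep '\.so$'    # confirm what actually ends up in your final APK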
You may be happy to hear that the iOS version is way bigger. After including that framework, the source tree is about 300 MB larger, but the whole application's build (the IPA file) ends up at, let's say, only about 30 MB in total.
I used Sphinx4 for some time and it really fit my needs: I loaded a recognizer, passed audio data to it, and used the recognized string in my application.
Right now I'm working on a C application (C++ is unfortunately not an option) where I need something similar, and I thought I could use Sphinx3, which is written in C.
The problem is that I don't really know how it is used inside an application, and there is no "Hello World" example like the one Sphinx4 provides.
I already compiled and installed sphinxbase and sphinx3 and now I can include the sphinx header files in my application.
Now to my questions:
Is there a "simple" and well documented example application that uses sphinx3 from a C environment?
How can I load up the sphinx3 engine and call a recognizer with my binary audio data?
OR: Do I need to start an application like "sphinx3_decode" and call it from my own application? If so, is there an example application for that?
Thank you in advance!
Best regards,
Robert
It's not recommended to use Sphinx3. From the website:
Sphinx-3 is CMU's large vocabulary speech recognition system. It's older C based decoder that we continue to maintain. It's planned to make it obsolete in the future, it's still most accurate decoder for large vocabulary tasks. We are using it as a baseline to check the recognizer accuracy. This decoder is only intended for researchers who want to evaluate bleeding edge methods in ASR like tree search method.
If you need to use a decoder, you should use PocketSphinx. You can find the tutorial and the API documentation on the website:
http://cmusphinx.sourceforge.net/wiki/tutorialpocketsphinx
http://cmusphinx.sourceforge.net/api/pocketsphinx/pocketsphinx_8h.html
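As a rough starting point before writing any C of your own, the command-line front end that ships with PocketSphinx exercises the same library; the flags below exist in recent releases, but the model paths are only placeholders:

    pkg-config --cflags --libs pocketsphinx sphinxbase    # flags for compiling/linking your own C program against it
    pocketsphinx_continuous -infile your_audio.wav \
        -hmm /path/to/acoustic-model \
        -lm /path/to/language-model.lm.bin \
        -dict /path/to/dictionary.dict

The tutorial linked above then shows the equivalent C calls for embedding the decoder directly: create it, feed it raw 16 kHz, 16-bit mono audio, and read back the hypothesis string.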
I recently worked on an integrated project on the Punjabi language.
Here are the steps we used:
First we recorded the Punjabi audio data in a sound-isolated room at a 16,000 Hz sample rate.
Then we segmented the recorded data with the Praat software into small WAV and raw files of 2 to 30 seconds and saved them in a folder named train.
Then, on a Linux (Ubuntu) system, we installed the required tools such as autoconf, automake, etc., and untarred Sphinx 3 along with four packages: cmuclmtk, pocketsphinx, sphinxbase and sphinxtrain.
Then, to match the small WAV files, we created files such as the transcription, dic, phone, filler, fileids and ccs files.
Then we opened a terminal and ran "sphinx_fe" to check whether Sphinx was functional or not.
Then we created a folder named "man" and changed into it in the terminal.
Then we ran the command "sphinxtrain -t man setup". This creates a folder named "etc" inside the "man" folder, containing the files "feat.params" and the training config.
We then edited the config file according to our data.
Then we moved all the files we created earlier (transcription, dic, etc.) into the etc folder located in the man folder.
Then we placed the "lang1.sh" script in the etc folder and the remaining four scripts in the man folder.
Then we changed into the etc folder in the terminal and ran "lang1.sh".
Then we ran a series of commands in the terminal: "mfcgen2.sh", then "verify3.sh", then "hmm4.sh" and finally "end-test.sh" to get the final result.
If you have worked with Sphinx4 you will probably recognise the files mentioned in the steps above. I hope this helps you.
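As a condensed sketch of that workflow (the lang1.sh, mfcgen2.sh, verify3.sh, hmm4.sh and end-test.sh scripts are specific to this project, and the comments on them are inferred from their names):

    sphinx_fe                          # quick check that the Sphinx tools are installed and on PATH
    mkdir man && cd man
    sphinxtrain -t man setup           # creates man/etc with feat.params and the training config
    # copy transcription, dic, phone, filler, fileids, ccs into etc/, edit the config, then:
    (cd etc && sh lang1.sh)            # language preparation
    sh mfcgen2.sh                      # feature (MFC) generation
    sh verify3.sh                      # data verification
    sh hmm4.sh                         # acoustic model (HMM) training
    sh end-test.sh                     # final evaluation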
Which configuration management tool is best for FPGA designs, specifically Xilinx FPGAs programmed in VHDL, with C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned. The only binary files I deal with are compilation products (the equivalent of software object files and executables), so I don't keep them in the version control repository; instead, for each release/tag I create a zipfile containing all the important (and irritatingly slow to reproduce) ones.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: the Xilinx tools don't generally respect a clean division between "input" and "output", or between (human-edited) "source" and (opaque) "binary". Many of the tools like to store state information, such as a last-run time or a hash value, in their "input" files, which means you'll get lots of false changes. Coregen does this to its .xco files, and Project Navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most software CM applications are fine with ASCII text files; they may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and its great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to also capture the other project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion, but I switched to git two years ago. Git handles FPGA design files just as well as it handles any other text or binary file; it is all you need for version-controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
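For the coregen part, the Makefile rule essentially just runs coregen in batch mode on each script; something along these lines (the file names are illustrative and the exact flags can vary between ISE versions):

    # Regenerate one core from its version-controlled .xco script, using the shared coregen.cgp project settings
    coregen -b cores/my_fifo.xco -p coregen.cgp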
I experimented with using Makefiles and the Xilinx command-line tools, but found that ISE did a much better job of tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad: Xilinx has something like 300 different file types, and they change every release. If you want to save one file, you can try the ISE project file itself, with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took six hours to build, is worth copying and configuration-managing explicitly.
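Concretely, the keep-versus-ignore split described above looks roughly like this in git terms (all paths and patterns are illustrative):

    git add *.v *.ucf *.coe *.xcf coregen.cgp cores/*.xco Makefile    # sources and regeneration inputs
    printf 'ise/\n*.bit\n' >> .gitignore                              # regenerable ISE scratch directory and routine build products
    cp build/top.bit releases/top-golden.bit                          # slow-to-rebuild, known-good artifacts get archived explicitly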