What is the best way to bundle static resources in Nim?

Currently, I am writing a web application using Jester and would like to facilitate the deployment by bundling all static resources (CSS, HTML, JS).
What is the best way to do this in Nim?

The basic way is to use staticRead (also known as slurp) to read a file at compile time and store its contents as a constant in your program. This can get tedious pretty fast, since you would either need to do it manually for each file, or generate a .nim file full of staticRead() calls from the current contents of your static directory before shipping, and then use those constants.
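That generation step is easy to script. Here is one possible shape for such a generator (the `staticFiles` name, the file layout, and the assumption that the generated module sits next to the static directory are all illustrative):

```python
import os

def gen_assets(static_dir: str) -> str:
    """Emit a .nim module mapping URL paths to staticRead() calls.

    Note: staticRead resolves paths relative to the .nim file that
    contains the call, so the generated module is assumed to sit in
    the directory that contains `static_dir`.
    """
    base = os.path.basename(os.path.normpath(static_dir))
    lines = ["import tables", "", "const staticFiles* = {"]
    for root, _dirs, files in os.walk(static_dir):
        for name in sorted(files):
            rel = os.path.relpath(os.path.join(root, name), static_dir)
            rel = rel.replace(os.sep, "/")
            lines.append(f'  "/{rel}": staticRead("{base}/{rel}"),')
    lines.append("}.toTable")
    return "\n".join(lines) + "\n"
```

Run it as part of your build (before `nim c`) and import the generated module from your Jester routes to serve the embedded files.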
Another way might be to zip all the files and have your program read/unpack the archive at runtime. The zip can be created without compression if you only want it to reduce file clutter in your deployment, though you could experiment with fast compression settings, which typically improve overall speed: IO is slow, so your program spends less time waiting for reads to complete, and modern CPUs are very good at decompression.
Combining the above, you might embed the zip file into your binary and use it as a kind of embedded virtual filesystem.
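One reason the embedded-zip approach works with unmodified archive readers: the ZIP central directory is located by scanning backwards from the end of the file, so arbitrary bytes (such as your executable) can be prepended without breaking the archive. A small Python demonstration of that format property (Python only because it is convenient to run; the same holds for any conforming ZIP reader):

```python
import io, zipfile

# Build an archive in memory; ZIP_STORED skips compression, matching
# the "reduce file clutter" use case.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("static/index.html", "<h1>hello</h1>")

# Simulate an executable with the archive appended to the end of it.
bundled = b"\x7fELF fake program bytes" + buf.getvalue()

# The end-of-central-directory record is found by scanning from the
# end of the file, so the executable prefix is simply ignored on read.
with zipfile.ZipFile(io.BytesIO(bundled)) as zf:
    html = zf.read("static/index.html")
```

This is the same trick `zipapp`-style single-file Python programs rely on; a Nim reader would locate and parse the appended archive the same way.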

Related

How to automatically extract all the source code and header files that a small project depends on, within a big project?

A small project lives inside a big project. The small project, which can be compiled with make, uses only a small part of the big project's files. Are there any tools that can automatically extract all the source code and header files that the small project depends on? Picking them manually is certainly feasible, but it is inefficient and error-prone, since the Makefiles are complex and deeply nested. Modern IDEs usually build indexes, but I don't know whether any IDE offers this feature of extracting all dependencies.
I ended up using the Process Monitor tool to track and filter all the file-open calls made by the build system on Windows, then exported the report and wrote a script to copy the files. But such an approach is not so elegant >_<

Is there a way to prevent a file from being completely loaded by a software?

Is there a way to prevent a hard drive from reading a certain file? For example: Program A is told to open a .txt file. Program B overloads the .txt file by opening it hundreds of times a second. Program A is then unable to open the .txt file.
So I'm trying to stress-test a game engine that relies on extracting all of the textures it uses from a single file at once. I think this extraction method is causing some core problems for the overall development experience with the engine. My theory is that the problem is caused by the slow read times of some hard drives, but I'm not sure I'm right about this, and I needed a way to test it.
Most operating systems support file locking and file sharing so that you can establish rules for processes that share access to a file.
.NET, for example (which runs on Windows, Linux, and macOS), provides the facility to open a file in a variety of sharing modes.
For very rapid access like you describe, you may want to consider a memory-mapped file. They are supported on many operating systems and via various programming languages. .NET also provides support.
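For a runnable illustration of the memory-mapped idea outside .NET, Python's stdlib `mmap` module exposes the same OS facility (the file name here is made up):

```python
import mmap, os, tempfile

# Create a sample resource file, then map it read-only. Pages are
# faulted in lazily and cached by the OS, and the mapping can be
# shared between processes reading the same file.
path = os.path.join(tempfile.mkdtemp(), "textures.bin")
with open(path, "wb") as f:
    f.write(b"texture data" * 1000)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = bytes(mm[:7])  # slices read straight from the mapping
```

Because repeated accesses hit the page cache rather than issuing fresh reads, this sidesteps much of the per-open overhead the stress test above is probing.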

Create single-file executable and embed static files, for legacy C/Linux program

I have a legacy Linux application written for C that relies upon static external files on the filesystem. I'd like to bundle all of them together into a single executable, so that the single-file executable doesn't rely upon anything in the filesystem. Is there a way to do this, without having to make lots of changes to the existing code?
I can link the program statically to avoid any dependencies on dynamic libraries, but the application also relies upon other static resources (other read-only files on the filesystem), which I'd like to embed into the application. I know how to embed an external file into the final executable using objcopy (e.g., as described here or here), but then I need to arrange for the program to use the embedded blob instead of trying to open a file on the filesystem. I've seen some ways to access these ELF sections at runtime (e.g., using linker symbol names or elfdataembed, as described here and here), but they require me to change every place in the program that accesses one of these external files to instead refer to the embedded resource. That sounds tedious and error-prone.
To reduce my workload and the risk of bugs, I'd prefer to minimize the changes needed to the application's code. How can I do this with minimal changes? I'm thinking of something that wraps open() to detect attempts to open one of the external files and redirects them to read from the blob embedded in the executable, though I'm not sure about some of the details (e.g., how the open() wrapper can produce a fake fd, or whether I'll need to wrap all of the other filesystem functions as well) or what the pitfalls might be.
How can I achieve this? Or, is there an existing tool I should know about, to avoid re-inventing the wheel?

File Repository (with file history) - What implementation to use?

I'm writing a file backup utility that:
(1) Backs up your current files; and
(2) Allows you to retrieve past versions of such files (similar to code repository revisions).
I'm considering using a source code repository (SVN, Git, and Mercurial are the main candidates), since they provide similar functionality, just aimed at source code.
What are the advantages/disadvantages of that compared to writing my own proprietary code (e.g. for each file, keep the current file and maintain a binary diff chain down to the oldest revision)?
What method would you recommend, in light of performance considerations?
If it matters, the server program will be written in Python, with performance-critical areas done by C extensions.
Your requirement can be met well with a source code repository; you can simply reuse one. Many of them are open source, so you can modify them if you need to.
EDIT:
For small, frequent commits, I think it depends on the commit frequency and on how large the repository is. If the repository is very large and committed to frequently, it may be hard to reach your goal; but if the number of files to back up is modest, or the frequency is low, it should be fine.
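As a middle ground between reusing a VCS and writing a full proprietary diff engine, the question's "binary diff chain" can be prototyped in Python (the poster's stated server language). This sketch uses zlib's preset-dictionary feature as a crude stand-in for a real delta codec such as xdelta or bsdiff:

```python
import zlib

def store_delta(prev: bytes, new: bytes) -> bytes:
    """Compress `new` using the previous revision as a preset
    dictionary, so runs already present in `prev` cost almost
    nothing in the stored delta."""
    c = zlib.compressobj(zdict=prev)
    return c.compress(new) + c.flush()

def load_delta(prev: bytes, delta: bytes) -> bytes:
    """Reconstruct a revision from its predecessor and its delta."""
    d = zlib.decompressobj(zdict=prev)
    return d.decompress(delta) + d.flush()
```

Keeping the *current* file whole and storing reverse deltas back to older revisions, as the question proposes, makes the common case (restore latest) free, at the cost of chaining `load_delta` calls for old revisions. Note zlib only consults the last 32 KB of the dictionary, which is one reason real backup tools use dedicated delta formats.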

Many files for a single program?

Typically, when I create a program (I have only made a few very simple ones so far), I compile it into a standalone EXE. Most programs distributed nowadays install many EXEs, DLLs, and other files when you first download them. Is it wrong to compile my programs into standalone EXEs? What are the advantages/disadvantages of a standalone versus a multi-file program?
The only benefit I can think of is for updates and fixes: instead of downloading a 100 MB file and overwriting all user settings, data, etc., you can download maybe a 400 KB file that replaces only the files that need fixing.
DLL files are library files. If you are not using any functions from a library, its DLL will not be shipped with your program. Having multiple EXE files is generally a way to break a larger program down into smaller, more maintainable units.
If you're just getting started, this is not something you'll need to worry about just yet. One day, when you're working on a larger project that involves using other pre-built components, you'll dig around in your build folder and notice that you also have some DLL files and other resources.
