I have 2 computers. One (computer 2) receives .json files from 2 different processes, and then it passes these files (over an Ethernet cable) to the other computer (computer 1). This happens constantly, not just once.
To do these file transfers, my idea is the following:
Create server/client sockets in C to communicate between the two computers.
Tar the files (a batch of, say, 4 files) on computer 2.
Receive the files on computer 1.
At first I wanted to do this with netcat and tar, in bash. But then I read that this was not a good idea, because bash doesn't handle file transfers well. So I decided to do it in C (it has to be C or C++, not Python, but I am better at C, so C is the option).
So now I am doing it with this sample code:
Send and Receive a file in socket programming in Linux with C/C++ (GCC/G++)
but I can't figure out the part about tarring and sending the tar, or whether this code would help with that.
Another way I was thinking of doing it was with ZeroMQ, but I haven't used it before, so I don't know if it is worth the extra studying.
Thanks a lot in advance for your answers.
The .tar file format is pretty straightforward (and documentation on how to read/write a .tar file yourself can be found via a Google search on ".tar file format" or similar), or you could just use the existing libtar reader/writer library if you'd prefer not to reinvent the wheel.
Another alternative (assuming you just need a quick program to handle a personal use-case and don't need a production-grade solution) would be to call system() (or similar) to have it execute the appropriate shell commands on behalf of your C program -- but if that's good enough, one has to wonder why a simple bash script isn't also acceptable. Bash may not be good at file transfers natively, but the various command-line utilities you can call from bash (scp, rsync, tar, etc.) certainly work fine.
Related
Is there a way to limit a hard drive from reading a certain file? E.g., Program A is given the order to open a .txt file. Program B overloads the .txt file by opening it hundreds of times a second. Program A is unable to open the .txt file.
I'm trying to stress test a game engine that relies on extracting all used textures from a single file at once. I think that this extraction method is causing some core problems for the game development experience of the engine overall. My theory is that the problem is caused by the slow reading time of some hard drives. But I'm not sure if I'm right about this, and I needed a way to test it.
Most operating systems support file locking and file sharing so that you can establish rules for processes that share access to a file.
.NET, for example (which runs on Windows, Linux, and MacOS), provides the facility to open a file in a variety of sharing modes.
For very rapid access like you describe, you may want to consider a memory-mapped file. They are supported on many operating systems and via various programming languages. .NET also provides support.
So if you want to parse command line options when starting the program, you use getopt(). But how do you do this if the program is already running in the background? I couldn't find info on this. Let's say, for example, that you have a server running, but you want to change something about the way it works. How do you do that? I want to do this on Linux.
There is no platform-independent way of doing this; the C programming language doesn't specify (or require the existence of) a mechanism to talk to a running program.
You're going to have to look for either platform-specific code, or some existing library which abstracts the platforms into something portable of its own.
In Linux, a Unix domain socket is one way of implementing this. Another is shared memory.
If you're on Un*x you have many options.
A FIFO pipe looks reasonable and easy to implement :)
There are a couple of ways you can do this, but they all have a common theme - Interprocess communication.
My preferred way to do this is via some sort of sockets (typically these days I use ZMQ for these purposes, but if you're starting out, read up on sockets in general before you get caught up using ZMQ). Depending whether you're on Windows or some sort of Unix will dictate what sort of sockets you have available to you.
There are other ways to do this also - such as shared memory, but sockets would be your best bet especially since you mentioned "server". I suggest you study the "client server model".
The simplest solution I have used with a server is to create a file and have the server read it once every 10 seconds, and put commands in there. That is cross-platform.
A second, more or less cross-platform solution is to use a standard library for concurrency (pthreads, for example), or the thread and mutex libraries of the new C++ standard. Make one thread that waits for commands while the others execute the work.
You could use a configuration file and have the program listen to changes to that file.
If you are programming on Linux you can use inotify (#include <sys/inotify.h>).
In MacOS/iOS use FSEvents.
In Windows use FindFirstChangeNotification.
I am going to implement a file system in C, and I'm wondering how I can test it without installing it in the kernel or using the FUSE API. Ideally what I'd like to do is use the dd command to create a virtual hard drive and interact with it using Linux system calls like write and read (the idea is to not write drivers). Is that possible?
(I'm sorry if I misspelled words, but English isn't my first language. Also I'm sorry if this is off-topic; it's my first question)
Thanks.
If you are really implementing a file system, you can test it in a virtual machine.
Otherwise, you can implement the file system inside a file that exists in the real file system, and implement functions like read/write/etc. on top of it.
A virtual hard drive and a virtual filesystem are somewhat different things; you write different functions and handle different requests when implementing them. Given that you are implementing a filesystem, your best bet on Linux is to expose your filesystem via FUSE for testing. Then write different tests that access your FUSE-based filesystem to perform various tasks.
Unfortunately testing a filesystem is hard and requires writing many tests. Manual testing with different software (file managers) is also required.
I am trying to build a bash-like script that provides functionality such as ls, pwd, cat etc. working on NTFS on a Linux system. Suppose that I have an NTFS image and I open it as a file with fopen. Then I read some sectors, such as the BPB residing at 0x0B, and fetch some general info about the NTFS image. I need to reach the root directory pointer and then traverse the filesystem in order to implement those functions, especially ls and pwd. I googled a lot about the internal details and offsets of NTFS, but I could not find out how to achieve the goal. I cannot progress further without understandable documentation or samples.
Any help, documentation, hint, offset table etc. would be highly appreciated.
Thank you.
I'm guessing this is a learning exercise. So, first:
Writing a bashlike interpreter for a specific filesystem is the wrong thing to do. You should be concentrating on understanding the details of the NTFS filesystem instead.
Writing ls, cat to be able to work with files in a specific filesystem is the wrong thing to do. You should be concentrating on understanding the details of the NTFS filesystem instead.
If you write a filesystem driver (say, using FUSE), then the original bash, ls, and cat will automatically work with that filesystem, because the driver will be able to translate system calls like open and read into filesystem-specific procedures.
Finally:
Learn about FUSE. It is awesome. See this Hello World FUSE module. Run it, play with it.
Download the sources for NTFS-3G, which is the NTFS driver used by most GNU/Linux distros these days. It uses FUSE. Learn how it works.
I am wondering how the OS is reading/writing to the hard drive.
I would like as an exercise to implement a simple filesystem with no directories that can read and write files.
Where do I start?
Will C/C++ do the trick or do I have to go with a more low level approach?
Is it too much for one person to handle?
Take a look at FUSE: http://fuse.sourceforge.net/
This will allow you to write a filesystem without having to actually write a device driver. From there, I'd start with a single file. Basically create a file that's (for example) 100MB in length, then write your routines to read and write from that file.
Once you're happy with the results, then you can look into writing a device driver, and making your driver run against a physical disk.
The nice thing is you can use almost any language with FUSE, not just C/C++.
I found it quite easy to understand a simple filesystem by using the FAT filesystem on an AVR microcontroller.
http://elm-chan.org/fsw/ff/00index_e.html
Take a look at the code and you will figure out how FAT works.
For learning the ideas of a file system, it's not really necessary to use a disk, I think. Just create an array of 512-byte byte-arrays, imagine this is your hard disk, and start to experiment a bit.
You may also want to have a look at some of the standard OS textbooks, like http://codex.cs.yale.edu/avi/os-book/OS8/os8c/index.html
The answer to your first question is that besides FUSE, as someone else told you, you can also use Dokan, which does the same for Windows; from there it is just a matter of doing reads and writes to a physical partition (http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx -- read particularly the section on Physical Disks and Volumes).
Of course, on Linux or Unix, besides using something like FUSE, you only have to issue a read or write call to the desired device in /dev/xxx (if you are root); in these terms the Unices are more friendly or more insecure, depending on your point of view.
From there, try to implement a simple filesystem like FAT, or something more esoteric like a tar filesystem, or even a simple filesystem based on Unix concepts like UFS or Minix, or just something that only logs the calls that are made and their arguments to a log file (this will help you understand the calls that are made to the filesystem driver during regular use of your computer).
Now your second question (which is much simpler to answer): yes, C/C++ will do the trick, since they are the lingua franca of systems development; also, a lot of the example code you find will be in C/C++, so you will at least be reading C/C++ during your development.
Now for your third question: yes, this is doable by one person. For example, the ext filesystem (widely known in the Linux world through its successors ext2 and ext3) was made by a single developer, Rémy Card, so don't think that these things aren't doable by a single person.
Some final notes: remember that a real filesystem interacts with a lot of other subsystems in a regular kernel. For example, if you have a laptop and hibernate it, the filesystem has to flush all changes made to the open files. If you have a pagefile on the partition (or the pagefile has its own filesystem), that will affect your filesystem, particularly the block sizes, since they will tend to be equal to, or powers of, the page size, because it's easy to place a filesystem block in memory when it happens to match the page size (that's just one transfer).
And also security: since you will want to control the users and which files they read/write, before opening a file you will usually have to know which user is logged on and what permissions they have for that file. And obviously, without a filesystem, users can't run any programs or interact with the machine. Modern filesystem layers also interact with the network subsystem, because there are network and distributed filesystems.
So if you want to learn about writing kernel filesystems, those are some of the things you will have to worry about (besides knowing the VFS interface).
P.S.: If you want to make Unix permissions work on Windows, you can use something like what MS uses for NFS on the server versions of Windows (http://support.microsoft.com/kb/262965).