Is it possible to compare big directories on two different PCs using SHA or MD5?

I was wondering if it is possible to compare big directories on two different PCs using SHA checksums. If so, please let me know the command.
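One minimal sketch, assuming GNU coreutils are available on both machines (the directory path and choice of hash are illustrative): hash every file, sort the listing so the order is deterministic, and hash the result.

cd /path/to/dir && find . -type f -exec sha256sum {} + | sort -k 2 | sha256sum

Run the same command on both PCs; if the final digests match, the directories contain the same relative paths with the same contents. Substitute md5sum for sha256sum if MD5 is preferred.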

Related

Can two different programs have complete access to single INI file at the same time?

I have two different programs that need complete access to a single INI file, reading from and writing to it at the same time. Is this possible?
I am using fpc 2.6.2 and Lazarus 1.0.12.

Where to store a variable when needed in the next run?

I'm changing a program written in C.
For these changes I need a counter (an int variable). When the program stops, I need the value of this counter in the following run of the program (even if the PC is restarted in between).
What is the best way to store this value? I was thinking about the following: storing it as a registry value, writing it to a file (not preferred, since somebody might delete the file), or using persistent variables (but I can't find much information on these).
Or, are there other ways to keep this variable?
The program has to run in a Windows environment and in a Linux environment (as it does now).
Store it in a file. If you want to protect the file from accidental deletion, have its name start with a period on Linux (.myfile) or mark it as "hidden" on Windows. If you want to protect it against more than just accidental deletion, the registry is no better than a file.
The best solution I think would be to store it in a database. Have you got any database experience? Could you store it in MySQL or SQL Server?
C doesn't have a concept of "persistent variables"; no actual programming language that I know of has that.
A file would be the best choice; detecting its absence and protesting/failing will be trivial.
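To illustrate the file-based approach the answers recommend, here is a minimal C sketch (the file name counter.dat is an assumption for illustration). It reads the previous value at startup, falls back to zero if the file is missing or unreadable, and writes the new value back before exiting:

#include <stdio.h>

#define COUNTER_FILE "counter.dat"   /* illustrative file name */

/* Load the counter saved by the previous run; default to 0 if absent. */
static int load_counter(void)
{
    int value = 0;
    FILE *f = fopen(COUNTER_FILE, "r");
    if (f != NULL) {
        if (fscanf(f, "%d", &value) != 1)
            value = 0;               /* empty or corrupt file: start over */
        fclose(f);
    }
    return value;
}

/* Save the counter so the next run (even after a reboot) can read it. */
static void save_counter(int value)
{
    FILE *f = fopen(COUNTER_FILE, "w");
    if (f != NULL) {
        fprintf(f, "%d\n", value);
        fclose(f);
    }
}

int main(void)
{
    int counter = load_counter() + 1;
    printf("this is run number %d\n", counter);
    save_counter(counter);
    return 0;
}

Storing the value as text keeps the file portable between the Windows and Linux builds of the program.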

QEMU adding new arguments to qemu binary

I am new to qemu development. I am trying to modify qemu to emulate some features of the SGX processor on x86 machines using the QEMU emulator. Here is what I want to do.
I want to add the following to qemu: I want to start a qemu process with a new argument, EECREATE. When given to the qemu-system-i386 binary, this should create an encrypted space in memory with a few new data structures inside. For example,
qemu-system-i386 -hda ubuntu.img -eecreate -m 2G
This command should boot ubuntu.img and create an encrypted space of memory (it need not be big) for the image. In this case, it would create an encrypted space within the 2G assigned to ubuntu.img; basically, the encrypted space should be within the address space of the image.
Can anyone please let me know what process needs to be followed to get this working? What files do I need to modify? Could someone give a brief explanation of how the flow of the code will be?
I am not able to find any documentation on the web and am stuck on where and how to begin. Any help is greatly appreciated.
Thanks
The short answer is "modify vl.c and qemu-options.hx". The latter exists because all the option processing is integrated into the help output and so forth - i.e. the code is built dynamically. My normal approach is to pick a similar option and see how it's done.
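As a rough sketch of that pattern, based on how existing flags are wired up (the option name and the eecreate_enabled variable below are assumptions for illustration, not QEMU code): the .hx entry generates a QEMU_OPTION_* constant that the option-parsing loop in vl.c then handles.

/* qemu-options.hx: declare the flag; QEMU_OPTION_eecreate is
   generated from this entry by the build system */
DEF("eecreate", 0, QEMU_OPTION_eecreate,
    "-eecreate       set up an encrypted memory region for the guest\n",
    QEMU_ARCH_I386)

/* vl.c: handle the flag in the option-parsing switch inside main() */
case QEMU_OPTION_eecreate:
    eecreate_enabled = true;    /* hypothetical flag consumed by machine setup */
    break;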
The longer answer is that if you want the code upstreamed, you should probably discuss your proposal on the qemu-devel mailing list.
The #qemu IRC channel on irc.oftc.net is also helpful. You will no doubt get some feedback. However, I'd suggest you consider implementing this as a machine parameter rather than a command line option, unless you are going to make it work for all virtual machine types.

What is a good pattern to synchronize files between computers in parallel (in CentOS)?

Trying to find a good way to copy code from one "deployment" computer to several "target" computers, hopefully in parallel. The idea is that the deployment computer holds a copy of the files as they are supposed to exist on the target servers. We would like the copying to happen in parallel, as it might involve several tens of target servers.
Our current scheme uses rsync to synchronize the directory containing the files, in order to keep the target servers up to date with the deployment server.
So, the questions are:
What is a good / better way to do this?
What sort of tools are used to do this?
Should this problem be faced from a different angle or perspective that I'm totally missing?
Thanks very much!
Another option is pdsh, a parallel, distributed shell. It's available from EPEL, and allows running remote commands (via ssh) on multiple nodes in parallel. For example:
pdsh -w node10,node11,node12 command
Runs "command" on all three nodes in parallel. It also has a handy hostname expression feature to do the same thing with a bit less typing:
pdsh -w node[10-12] command
It also includes the pdcp command, which copies files to multiple nodes in parallel. (The pdsh package needs to be installed on all nodes for pdcp to work.)
pdcp -w node[10-12] /local/file /remote/dir/
The local file is copied to the /remote/dir on all three nodes.
We use the lftp command to sync our remote web server to our local backup machine. We wrote a Bash script to automatically sync all backups on the server to the local box, and we set that script up on a cron job to run nightly.
rsync is a fine way of handling this, and I might recommend moving your current process into a cron setup if it isn't already.
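As a sketch, a nightly crontab entry on the deployment machine might look like the following (the schedule, paths, and host name are illustrative):

0 2 * * * rsync -az --delete /srv/deploy/ target01:/srv/app/

Here -a preserves permissions and timestamps, -z compresses data in transit, and --delete removes files on the target that no longer exist in the deployment copy, keeping the two trees identical.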
Unison is also a tool available for setting up two-way sync, if you require that functionality.
Hope this helps!
There is a program called clusterssh, available on Debian-based operating systems (though I was able to install it onto RHEL 6.3 using an RPM and resolving the other dependencies), that lets you open an ssh terminal to multiple machines with a single input location; whatever you type once goes to as many machines as you have terminals open. Then you just have to use a simple scp. I have used this program to move a file from a development workstation to as many as 25 other workstations at the same time, but this option is only really useful for what you stated in the question, that is, copying files from one computer to several others.
This is not an effective syncing mechanism, though. If you really want syncing, the answers above would be best.

How to backup project folders to local desktop if path more than 255 characters?

For example, I am storing some files on a network server under many hierarchical folders.
I then want to back them up, but I always run into problems because the file path is more than 255 characters.
How can I resolve this issue or work around it?
Preface: I'm assuming the OS of the machine you want to copy the files to is some flavor of Windows.
The first part of Mark Bessey's answer is somewhat correct; however, even on modern versions of Windows with modern filesystems (NTFS, for example) you can still run into problems.
I suspect the limitation you're running into is due to MAX_PATH, which is a predefined limit on the length of a path that many APIs on Windows will accept.
You may try using Robocopy to do the backup as it is able to create paths longer than the MAX_PATH limitation. However, most applications will not be able to access these files.
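As an illustration (the source and destination paths below are placeholders), a Robocopy invocation for such a backup might look like:

robocopy \\server\share\projects C:\backup /E

where /E copies all subdirectories, including empty ones.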
Tarring or zipping the files may be a good plan, but it seems unlikely that you'd be able to unzip or untar them onto a Windows machine.
Maybe upgrade to an Operating System that's been updated in the last decade or so? Seriously, though - what OS and file system are you using? Even FAT32 supports long path and file names, though any single component of the path is limited to 255 characters.
If you've got directories with more than 255 characters in their names, then that's problematic (and a little weird). To work around that issue, you could consider running an archiving utility (tar or zip) on the server, then shipping the archive over to your desktop machine.
What OS, what protocol?
All I can see is CoreFSIF, with a limit of 256 chars.
What you are seeing is Win32's MAX_PATH limit (MAX_PATH is 260).
There are "escapes" on the API to be able to use NT's larger limit of 32767 characters (prepending \\?\ to the path), which however only work on some of the APIs. Files created this way tend to confuse other programs which expect all paths to fit in MAX_PATH-sized buffers.
The way to work around this (for backup/restore) is to find a program which can backup (and restore!) files with longer paths (by either always using the API "escapes" or the underlying NT API). Another option is to do the backup/restore with a different operating system.
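A minimal C sketch of that API "escape", using the Win32 CreateFileW call (the path itself is illustrative):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The \\?\ prefix tells NT to skip MAX_PATH validation, allowing
       paths up to roughly 32767 characters. The path must be absolute,
       and forward slashes are not translated. */
    const wchar_t *path =
        L"\\\\?\\C:\\very\\deep\\directory\\tree\\file.txt";  /* illustrative */

    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }
    /* ... read and back up the file contents here ... */
    CloseHandle(h);
    return 0;
}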
