As part of my research project, I am supposed to test different DFSs (distributed file systems) such as HDFS, Ceph, etc. For this purpose I plan to use IBM Bluemix.
I tried searching on Google, but there is not much available on this topic. Is there any help available, or does anyone have suggestions? I am not sure how to start.
My understanding is that you should start with virtual machines: provision them, log in, and install and test each file system you want to explore: https://console.ng.bluemix.net/docs/virtualmachines/vm_index.html#vm_index
How can I get the hard drive serial number (not the volume number, which changes with each reinstall of Windows) in C or assembly, without WMI (because WMI requires admin rights)? Any clue would be helpful, because after days of searching I have found nothing on the web that does this in C without WMI. Thank you.
EDIT: This is for a Windows system.
Please try my open source tool, DiskId32, which also has the source code available at http://www.winsim.com/diskid32/diskid32.html . I only have a Win32 version at this time. Maybe some day I will add a Win64 version.
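For reference, one common way to read a drive's serial number on Windows without administrator rights is the IOCTL_STORAGE_QUERY_PROPERTY control code on a physical-drive handle. Below is a minimal sketch of that approach (not necessarily what DiskId32 does internally; physical drive 0 is assumed and error handling is trimmed):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Zero desired access is enough for this IOCTL, so no admin rights are
       needed; asking for read/write access on the drive would require elevation. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    STORAGE_PROPERTY_QUERY query = { StorageDeviceProperty, PropertyStandardQuery };
    BYTE buffer[1024];
    DWORD bytes = 0;

    if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                        &query, sizeof(query),
                        buffer, sizeof(buffer), &bytes, NULL))
    {
        STORAGE_DEVICE_DESCRIPTOR *desc = (STORAGE_DEVICE_DESCRIPTOR *)buffer;
        /* SerialNumberOffset is 0 when the device reports no serial number. */
        if (desc->SerialNumberOffset != 0)
            printf("Serial: %s\n", (char *)buffer + desc->SerialNumberOffset);
    }

    CloseHandle(h);
    return 0;
}

Note that some devices (USB enclosures in particular) may report an empty or space-padded serial, so the result may need trimming.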
The hard drive serial number and other information about the drive, such as the firmware version, can only be obtained using SMART as far as I know, and that requires special ioctls on the block device node (/dev/sda, /dev/sdb, etc.), which is usually not available to a regular user.
I know there is a tool called smartctl which does exactly this:
sudo smartctl -i /dev/sda
Similar tools exist (hdparm, lshw, etc.) as well.
As for getting this information without being a privileged user, it would only be possible if it were exposed via /proc or /sys, which I highly doubt the current SATA block device drivers do.
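To illustrate the kind of ioctl involved, here is a rough sketch using the legacy HDIO_GET_IDENTITY call on Linux (it only works for ATA-style devices whose drivers still implement it, and /dev/sda is just an assumption):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(void)
{
    /* Opening the raw block device usually requires root (or membership in
       the disk group), which is exactly the limitation described above. */
    int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct hd_driveid id;
    if (ioctl(fd, HDIO_GET_IDENTITY, &id) == 0) {
        /* serial_no is a fixed-size, space-padded field, not a C string. */
        printf("Serial: %.*s\n", (int)sizeof(id.serial_no),
               (const char *)id.serial_no);
    } else {
        perror("HDIO_GET_IDENTITY");
    }

    close(fd);
    return 0;
}

Tools like smartctl and hdparm use SCSI/ATA pass-through commands rather than this legacy ioctl, but the privilege requirement is the same.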
I once wrote a sort of driver for Windows that had to intercept the interaction of the native display driver with the OS. The native display driver consists of a miniport driver and a DLL that win32k.sys loads into the session space. My goal was to interpose between win32k.sys and that DLL. Moreover, since the system may have several display drivers, I had to hook them all.
I created a standard WDM driver, which was configured to load at system boot (i.e. before win32k). During its initialization it hooked ZwSetSystemInformation by patching the SSDT. This function is called by the OS whenever it loads/unloads a DLL into the session space, which is exactly what I needed.
When ZwSetSystemInformation is invoked with the SystemLoadImage information class, one of its parameters is a pointer to a SYSTEM_LOAD_IMAGE structure, whose ModuleBase member is the module's base mapping address. I then analyze the mapped image, patch its entry point with my function, and the rest is straightforward.
Now I need to port this driver to 64-bit Windows. Needless to say, it is not a trivial task. So far I have found the following obstacles:
All drivers must be signed
PatchGuard
SSDT is not directly exported.
If I understand correctly, PatchGuard and driver signature enforcement can be turned off; the driver will be installed on a dedicated machine, so we may torture it any way we want.
According to online sources, there are also tricks to locate the SSDT.
However, I recently discovered that there is a function called PsSetLoadImageNotifyRoutine. It may simplify the task considerably and help avoid dirty tricks.
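For reference, registering such a callback is only a few lines; here is a minimal sketch (the routine names are mine and error handling is omitted):

#include <ntddk.h>

/* Called for every image mapped into system space or user space. */
static VOID LoadImageNotify(
    PUNICODE_STRING FullImageName,  /* may be NULL */
    HANDLE ProcessId,               /* zero for newly loaded drivers, per the docs */
    PIMAGE_INFO ImageInfo)
{
    if (FullImageName != NULL) {
        DbgPrint("Image %wZ mapped at %p\n",
                 FullImageName, ImageInfo->ImageBase);
    }
}

static VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    PsRemoveLoadImageNotifyRoutine(LoadImageNotify);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    return PsSetLoadImageNotifyRoutine(LoadImageNotify);
}

The callback receives the image base through IMAGE_INFO, so in principle the same entry-point patching could be applied from there, subject to the questions below.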
My questions are:
If I use PsSetLoadImageNotifyRoutine, will I receive notifications about DLLs loaded into the session space? The official documentation talks about "system space or user space", but does "system space" also include the session space?
Do I need to disable the PatchGuard if I'm going to patch the mapped DLL image after it was mapped?
Are there any more potential problems I didn't think about?
Are there any other ways to achieve what I want?
Thanks in advance.
Do I need to disable the PatchGuard if I'm going to patch the mapped DLL image after it was mapped?
To load any driver on x64, it must be signed. With admin rights you can disable driver signature enforcement; I personally recommend using DSEO, a GUI application made for this. Or you can bypass PatchGuard by overwriting the MBR (or BIOS), although that is typically considered a bootkit, i.e. malware.
I have all my COBOL source code located on a z/OS mainframe. What is a way to migrate all this code to ClearCase?
Rational Developer for System z (RDz) is the tool you should be using for this. It's basically Eclipse with a large number of IBM proprietary plug-ins which give you access to your mainframe data sets, including those under the control of SCLM (the default z/OS source code management system).
You can use RDz to connect to the mainframe and check out your code directly into an Eclipse project. That code can then be added to any other source code management system that has an Eclipse plug-in.
There's more to it, of course, such as the ability to kick off mainframe builds from the Eclipse environment. That will be important because, no matter how hard you look, you probably won't find many distributed-platform compilers that can compile mainframe source.
If you just need a one-time move, use a file packing tool, such as PKZip/MVS or UnXMIT, to bundle the source up. You can then transmit it using IND$FILE, ISPF File Transfer, or FTP to your ClearCase server and check it in.
If you need ongoing updates of your mainframe resources in a server-based source control system, you might be better off setting up some shared DASD between your mainframe and your server using Samba, NFS, or the like.
Unless you plan on doing your development on PCs, I don't think Rational Developer for System z is going to be a good fit. It will do what you need, but the mainframe setup is a bit of a headache, and the cost of the product is excessive if all you need is to move resources to/from your ClearCase server.
IIRC, RDz costs about $6K per seat. You might instead spend a few days writing some procs to FTP to/from your ClearCase server and do the check-in/check-out, and save a hefty expense. Actually, IBM ought to have those tools built already. ClearCase supports remote machines doing check-in/check-out, so maybe all you need is USS and a TCP/IP connection.
I have a folder a/ and a remote folder A/.
I now run something like this in a Makefile:
get-music:
	rsync -avzru server:/media/10001/music/ /media/Incoming/music/

put-music:
	rsync -avzru /media/Incoming/music/ server:/media/10001/music/

sync-music: get-music put-music
When I run make sync-music, it first gets all the diffs from the server to local, and then does the opposite, sending all the diffs from local to the server.
This works very well, but only when there are just updates or new files. If there are deletions, it doesn't do anything about them.
rsync has the --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work for a two-way sync.
Deleting files on the server during a sync when they have been deleted locally works. But if, for some reason (explained below), some files exist locally but not on the server because they were deleted there, I want them removed locally, not copied back to the server (which is what currently happens).
The thing is, I have three machines involved:
desktop
notebook
home-server
So sometimes the server will have had files deleted by a sync from the notebook, for example, and then, when I run a sync from my desktop (where those deleted files still exist), I want those files to be deleted locally and not copied back to the server.
I guess this is only possible with a database and a log of the operations performed :P
Any simpler solutions?
Thank you.
Try Unison: http://www.cis.upenn.edu/~bcpierce/unison/
Syntax:
unison dirA/ dirB/
Unison asks what to do when files differ, but you can automate the process by using the following, which accepts the default (non-conflicting) actions:
unison -auto dirA/ dirB/
unison -batch dirA/ dirB/ asks no questions at all, and writes to output how many files were ignored (because they conflicted).
Note: I am no longer using Unison (I use NextCloud, which doesn't address the original use case). However, note that rsync is not designed for bidirectional sync, while Unison is. Unison may have its bugs (like any other piece of software) and its wrinkles. I am surprised that it seems to be actively maintained now (last time I looked, it seemed dead), but I'm not sure what its current state is. I haven't needed a two-way file synchronizer lately, so there may be better options by now.
Since the original question also involves a desktop and a laptop, and the example involves music files (so the asker is probably using a GUI), I'd also mention one of the best bidirectional, multi-platform, free and open-source programs to date: FreeFileSync.
It's GUI-based, very fast and intuitive, and comes with filtering and many other options, including the ability to connect remotely, to view and interactively manage "collisions" (for example, files with similar timestamps), and to switch between bidirectional transfer, mirroring, and so on.
FreeFileSync can easily sync two computers on the same network and also sync two computers on different and remote networks.
On the same network: have FreeFileSync use the local file system on one side and a shared network drive/path on the other. On Windows systems, you enable file/disk sharing on one computer and access that share from the other. I use FreeFileSync this way to keep the source code on my main development PC synced with my two laptops.
I have also synced one of these laptops with a Linux server that has Samba installed and shares one of its directories.
Across networks: create a VPN and do the same as above. FreeFileSync will see the remote disk as if it were on the local network. Alternatively, buy a router that lets you connect a USB disk to it and share it over the internet. I have installed a VPN on a remote Linux server and used it through the OpenVPN Windows client.
You could also try bitpocket: https://github.com/sickill/bitpocket
Try this,
get-music:
	rsync -avzru --delete-excluded server:/media/10001/music/ /media/Incoming/music/

put-music:
	rsync -avzru --delete-excluded /media/Incoming/music/ server:/media/10001/music/

sync-music: get-music put-music
I just tested this and it worked for me. I'm doing a two-way sync between Windows 7 (using Cygwin with the rsync package installed) and a FreeNAS file server (FreeNAS runs on FreeBSD with the rsync package pre-installed).
You might use Osync: http://www.netpower.fr/osync . It is rsync-based with intelligent deletion propagation, and it also has multiple options such as resuming a halted execution, soft deletion, and time control.
You could try csync; it is the sync engine under the hood of ownCloud.
I'm surprised no one has mentioned Syncthing yet. I have been using it for years to synchronize my phone, my tablet, and my two laptops. I also once used it to send 10 GB of photos to my family ~600 km away, straight from my machine to their machine, and it was incredibly fast (despite the data getting relayed through Syncthing's relay servers to work around NAT issues). I also tried OwnCloud/NextCloud at some point, but Syncthing has been much more reliable and also much faster.
I'm now using SparkleShare: https://www.sparkleshare.org/
It works on Mac, Linux, and Windows.
I'm not sure whether it works for two-way syncing, but for --delete to work you also need to add the --recursive parameter.
Rclone is what you are looking for. Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers, as well as the local filesystem. Rclone was previously known as Swiftsync and has been available since 2013.