I'm trying to develop an open source application to act as a sort of centralized configuration management system for all Unix platforms, covering for example changing the root password, SSH configuration, DNS settings, /etc/hosts management, and so on.
I need your feedback on what you would recommend as the interface for all the configuration (a set of scripts will run on the Unix servers as clients to read the configuration and apply it on each system, i.e. "client ===> server" mode).
Should I use LDAP to host the configuration, so that any Unix OS can talk to the LDAP server to fetch its settings?
Or should I just store the configuration in a database (e.g. MySQL) and build a web interface that reads the database and serves the configuration to the client?
Or do you have any other ideas?
You might look into something like Chef or Puppet instead. Why re-invent the wheel?
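For example, the /etc/hosts management you mention is already a built-in Puppet resource type. A rough, untested sketch (the hostname and IP below are made up), run on a client that has Puppet installed:
# Manage an /etc/hosts entry declaratively instead of via a custom script.
puppet apply -e "host { 'db01.example.com': ip => '192.0.2.10' }"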
Curl can download a file from a URL and write that file to standard output. For example, executing curl -sS http://someHost/file.cfg will download "file.cfg" from the specified web server. The "-sS" options instruct curl to print error messages but not any progress diagnostics. By the way, curl supports many protocols including HTTP, FTP and LDAP, so you have flexibility in the technology you use to host your centralised configuration repository (CCR).
You could use curl to retrieve a configuration file from the CCR, store the result in a local file and then parse that local file.
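For example, here is a minimal sketch of such a client-side pull script (the host name "someHost", the file layout and the key name are made up for illustration):
#!/bin/sh
# Fetch this host's configuration file from the CCR, then apply one setting from it.
CFG="/tmp/$(hostname).cfg"
curl -sS "http://someHost/$(hostname).cfg" -o "$CFG" || exit 1
# Example: the file contains simple key=value lines such as "dns_server=192.0.2.53".
dns_server=$(grep '^dns_server=' "$CFG" | cut -d= -f2)
echo "nameserver $dns_server" > /etc/resolv.conf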
Check out Blueprint from DevStructure. It sounds like something along the lines of what you're trying to do. Basically it reverse engineers servers and detects everything that has changed from the install state. Open-source too.
https://github.com/devstructure/blueprint (Blueprint # Github)
We are also about to launch ConfigChief, which is a central configuration repository that would do what you want: a central point to store configuration (with features like versioning, audit, ACLs, inheritance, etc.).
Once you have that, combined with change notification, you can just run curl, as Ciaran McHale suggests, against the CCR and get your parsed configuration file back. This would eliminate the need for writing scripts to generate config files from the outside.
If you are interested, you can signup for a beta at http://woot.configchief.com
DISCLAIMER: I guess it is obvious from the first word!
My Goal
I've been using dev containers in combination with WSL2 for a little while now, but I keep running into issues, and besides that I'd like to offload work from my laptop to a server. Moving the containers to a native Linux server would solve my issues.
My ideal situation would be a solution that feels just like working locally on my Windows laptop (probably moving to a MacBook later), but uses the facilities of a Linux server (which has systemd and netns) and moves the workload there as well, so my laptop doesn't sound like a vacuum cleaner.
My Journey
I'm trying to setup remote containers as described here: https://code.visualstudio.com/remote/advancedcontainers/develop-remote-host
Actually the containers are running fine. I'm using the second storage solution, which means I add the following to my .devcontainer.json:
"workspaceMount": "source=/home/marvink/code,target=/workspaces,type=bind,consistency=cached"
And my workflow currently looks something like this:
Clone project locally (with .devcontainer already in the project)
Add workspaceMount above to devcontainer.json
Clone project on remote (e.g. to /home/marvink/code/new-project)
Open project locally
Build and reopen in container
Work on the files on the remote
My issue
This works, but now I have files on my local drive that never get touched, which isn't ideal but not a disaster. The bigger issue is when I want to update the devcontainer: I need to do that locally (in a separate window), then manually copy-paste it to the remote if I want to commit it to git, and of course I sometimes forget this and try to edit it remotely, which causes a lot of frustration (sometimes it seems like it does use the remote config, but that might have been a mistake?).
This is why I want to set up rsync both ways to sync changes to files; as a bonus I could edit files locally when I'm offline. The link describes how to do this manually, but I want it automated so that I can't forget or make mistakes.
From PowerShell I'm able to run an rsync command that syncs one way, and I can extend that to sync both ways:
wsl rsync -rlptzv --progress --exclude=.git '$PWD' 'marvink@s-dev01:~/code/new-project'
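For reference, the two-way variant I have in mind looks roughly like this (an untested sketch: trailing slashes so both commands act on the directory contents, and -u added so whichever side has the newer copy of a file wins):
# Push local changes, then pull remote changes; -u skips files that are newer on the receiving side.
wsl rsync -rlptzvu --progress --exclude=.git '$PWD/' 'marvink@s-dev01:~/code/new-project/'
wsl rsync -rlptzvu --progress --exclude=.git 'marvink@s-dev01:~/code/new-project/' '$PWD/'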
This needs to be run locally, but I can't find a way to do that. I'd need to run a task locally, for example, but that isn't possible when working on a remote (https://github.com/microsoft/vscode-remote-release/issues/168).
The other way around doesn't seem like an option to me as I don't want to expose any ports on my laptop and firewalls would get in the way depending on where I am.
My question
My workflow still seems a bit convoluted, so I'm open to suggestions on that end, but any ideas on how to sync my workspace files?
You don't need a local copy of your code (containing the .devcontainer folder) if you're storing that code on the remote server. You should be able to open an SSH target in VS Code using the Remote - SSH extension, which is the recommended approach in the link you added. The Remote - Containers extension 'stacks' on top of the SSH extension, so once connected over SSH you then connect to the container using the .devcontainer.json configuration located on your remote server.
If you don't want to use that extension, and instead use a bind mount and specify docker.host in your settings.json file, you can sync code using the approaches in that same article: SSHFS, docker-machine, or rsync.
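If you go the SSHFS route from a WSL or Linux shell, a rough sketch looks like this (the mount point name is arbitrary; the host and path are taken from your question):
# Mount the remote code directory locally over SSH so both sides see the same files.
mkdir -p ~/remote-code
sshfs marvink@s-dev01:/home/marvink/code ~/remote-code
# ... edit files under ~/remote-code as if they were local ...
fusermount -u ~/remote-code   # unmount when finished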
I'm trying to test out U2F on Google Appengine.
Unfortunately dev_appserver.py, the development app server for local testing, only runs in HTTP, and the U2F standard requires that the web server be connected over HTTPS.
There are some options for proxy servers, including stunnel, stud, Pound and ngrok.
What I am doing will probably end up being an open source package, so I would like to keep the setup fairly straightforward and the dependency list restricted to widely available packages.
An ideal solution would be a command-line program along the lines of prog_name -listen localhost:8041 -proxy localhost:8040; in other words, a very simple command line setup.
The stud and pound programs seem like overkill.
The stunnel option seems to be the best and most common solution, but it would be better if it could be configured from the command line instead of a config file.
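One workaround, if you want to avoid a separate config file, is to feed stunnel its configuration on standard input with -fd 0. An untested sketch, assuming a self-signed certificate at ./dev.pem:
# Terminate TLS on localhost:8041 and forward plain HTTP to dev_appserver.py on 8040.
stunnel -fd 0 <<'EOF'
foreground = yes
[dev-https]
accept  = 127.0.0.1:8041
connect = 127.0.0.1:8040
cert    = dev.pem
EOF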
Ngrok is super-cool and seems to be along the right lines. It gives you a random server name though, which can be a problem since the U2F appId must match the server (if persistence matters), but other than that it's basically the right idea.
I have a vague memory of this being possible with openssl from the command line, but the only command that seems suitable is s_server and that seems to only provide ssl reflection/debugging information and not the option to proxy requests per se. My memory must be faulty.
It would not be terribly difficult to write up a trivial Python server/client proxy, leading me to believe there's probably a simple option out there ... however, the search results have a dreadful signal-to-noise ratio.
Are there other sensible options for developing against an HTTPS server when the content is served over HTTP (as is the case with AppEngine's dev_appserver.py)?
I have a folder a/ and a remote folder A/.
I now run something like this in a Makefile:
get-music:
rsync -avzru server:/media/10001/music/ /media/Incoming/music/
put-music:
rsync -avzru /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
When I run make sync-music, it first gets all the diffs from server to local and then does the opposite, sending all the diffs from local to server.
This works very well, but only as long as there are just updates or new files. If there are deletions, it doesn't do anything.
rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work for a two-way sync.
Deleting files on the server when they have been deleted locally works. But if, for some reason (explained below), some files exist locally that are no longer on the server because they were deleted there, I want them removed locally rather than copied back to the server (which is what happens now).
The thing is, I have 3 machines involved:
desktop
notebook
home-server
So sometimes the server will have had files deleted by a sync from the notebook, for example, and then, when I run a sync from my desktop (where those deleted files still exist), I want these files to be deleted locally and not copied back to the server.
I guess this is only possible with a database and a log of operations :P
Any simpler solutions?
Thank you.
Try Unison: http://www.cis.upenn.edu/~bcpierce/unison/
Syntax:
unison dirA/ dirB/
Unison asks what to do when files differ, but you can automate the process by using the following, which accepts the default (non-conflicting) actions:
unison -auto dirA/ dirB/
unison -batch dirA/ dirB/ asks no questions at all, and writes to output how many files were ignored (because they conflicted).
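Applied to the music folders from the question, something along these lines should work (untested; Unison must be installed on both machines, and -prefer newer is just one possible conflict policy):
# Two-way sync over SSH, resolving conflicts in favour of the newer copy.
unison -auto -batch -prefer newer /media/Incoming/music ssh://server//media/10001/music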
Note: I am no longer using Unison (I use Nextcloud, which doesn't address the original use case). However, note that rsync is not designed for bidirectional sync, while Unison is. Unison may have its bugs (as any other piece of software) and its wrinkles. I am surprised it seems to be actively maintained now (last time I looked, it seemed dead to me), but I'm not sure what the state is nowadays. I haven't had the need for a two-way file synchronizer recently, so there may be better options, though.
Since the original question also involves a desktop and a laptop, and the example involves music files (so the asker is probably using a GUI), I'd also mention one of the best bidirectional, multi-platform, free and open source programs to date: FreeFileSync.
It's GUI based, very fast and intuitive, and comes with filtering and many other options, including the ability to connect to remote machines, to view and interactively manage "collisions" (for example, files with similar timestamps), and to switch between bidirectional transfer, mirroring and so on.
FreeFileSync can easily sync two computers on the same network and also sync two computers on different and remote networks.
On the same network: have FreeFileSync use the local file system on one side and a shared network drive/path on the other. On Windows systems you enable file/disk sharing on one computer and access that share from the other. I use FreeFileSync this way to keep my main development PC's source code synced with my 2 laptops.
I have also synced one of these laptops with a Linux server with Samba installed and sharing one of its directories.
Across networks: create a VPN and do the same as above; FreeFileSync will see the remote disk as if it were on the local network. Or buy a router that lets you connect a USB disk to it and share it over the internet. I have installed a VPN on a remote Linux server and used it through the OpenVPN Windows client.
You could also try bitpocket: https://github.com/sickill/bitpocket
Try this:
get-music:
rsync -avzru --delete-excluded server:/media/10001/music/ /media/Incoming/music/
put-music:
rsync -avzru --delete-excluded /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
I just tested this and it worked for me. I'm doing a 2-way sync between Windows 7 (using Cygwin with the rsync package installed) and a FreeNAS file server (FreeNAS runs on FreeBSD with the rsync package pre-installed).
You might use Osync: http://www.netpower.fr/osync , which is rsync based with intelligent deletion propagation. It also has multiple options like resuming a halted execution, soft deletion, and time control.
You could try csync; it is the sync engine under the hood of ownCloud.
I'm surprised no one has mentioned Syncthing yet. I have been using it for years to synchronize my phone, my tablet and my two laptops. One time I also used it to send 10 GB of photos to my family ~600 km away, straight from my machine to their machine, and it was incredibly fast (despite the data getting routed through Syncthing's discovery server to work around NAT issues). I also tried OwnCloud/NextCloud at some point but Syncthing has been much more reliable and, also, much faster.
I'm now using SparkleShare: https://www.sparkleshare.org/
It works on Mac, Linux and Windows.
I'm not sure whether it works with two-way syncing, but for --delete to work you also need to add the --recursive parameter.
Rclone is what you are looking for. Rclone ("rsync for cloud storage") is a command line program to sync files and directories to and from different cloud storage providers including local filesystems. Rclone was previously known as Swiftsync and has been available since 2013.
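For the two-way case from the question specifically, recent rclone versions (1.58 and later) include a bisync command. A rough sketch, assuming you have configured an SFTP remote named "homeserver" with rclone config:
# The first run with --resync establishes the baseline; subsequent runs propagate
# changes and deletions in both directions.
rclone bisync /media/Incoming/music homeserver:/media/10001/music --resync
rclone bisync /media/Incoming/music homeserver:/media/10001/music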
How do I configure Apache2 via webmin or command-line (I'm using RHEL5 Linux) so that I can have multiple domains on the same server on the same port but in different subdirectories?
For instance, I'm trying to get homerentals.ws and homerepair.ws to be detected on port 80 (the default port) on the same server. I know that my DNS holds the two addresses, and web hits currently go to the same test page. Now all I need is for web hits to go to a subdirectory, but without showing that subdirectory. For instance, I do not want people going to http://homerentals.ws and being redirected to http://homerentals.ws/homerentals/. Instead, http://homerentals.ws would go to /var/www/html/homerentals, while http://homerepair.ws would go to /var/www/html/homerepair, but neither would look any different in the URL.
On IIS, I did this once with host-header detection. But I don't know how to do it on RHEL5 Linux via webmin or file editing. I'm stuck.
The feature you're describing is known as name-based virtual hosts. Have a look at Apache's documentation. In general you need to edit Apache's httpd.conf (on RHEL that's /etc/httpd/conf/httpd.conf) to make things happen (it may also be editable through webmin, but I'm not familiar with it).
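A minimal sketch of what the relevant part of the configuration could look like (Apache 2.2 syntax, since RHEL5 ships 2.2; appended via a heredoc here, but you can just as well edit the file by hand or through webmin):
# Append two name-based virtual hosts and reload Apache (RHEL paths assumed).
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName homerentals.ws
    DocumentRoot /var/www/html/homerentals
</VirtualHost>

<VirtualHost *:80>
    ServerName homerepair.ws
    DocumentRoot /var/www/html/homerepair
</VirtualHost>
EOF
service httpd reload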
Is there any way to have something that looks just like a file on a Windows file share, but is really a resource served up over HTTP?
For context, I'm working with an old app that can only deal with files on a Windows file share. I want to create a simple HTTP-based service that serves the content of the files dynamically, so that real-time changes to the underlying data are picked up on each request.
WebDAV (basically) takes an existing directory, and shares it over HTTP - which sounds like the opposite of what you want.
You need something that speaks SMB/CIFS on one end, and your own code on the other. The easiest way to do that is with a userspace file system.
To that end, here are a couple of links:
WinFUSE, which is kind of a bare-bones CIFS/SMB server that can host your own file system. I've done a couple of small samples with it; the docs are terrible, but it more or less worked.
Dokan, a userspace file driver with .NET bindings. I haven't used this one, but it looks promising. It has both .NET and Ruby bindings, so you should be able to get a POC up pretty quickly.
Callback File System - yet another userspace file system. Again, I have no experience with this one.
A Linux box with Samba and FUSE that shares the drive out to the Windows box.
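A rough sketch of that setup, assuming the HTTP service speaks WebDAV so it can be mounted with davfs2 (the share name and paths below are made up):
# On the Linux box: mount the HTTP/WebDAV service, then export the mount point via Samba.
mount -t davfs http://app-server/exports /srv/app-files
cat >> /etc/samba/smb.conf <<'EOF'
[appfiles]
   path = /srv/app-files
   read only = yes
   guest ok = yes
EOF
service smb restart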
This won't answer your question in any meaningful way, but maybe it will get you pointed in the right direction. Look into serving the "file(s)" via WebDAV; SharePoint uses this, and its files can be accessed exactly as you want: as a file share where the transport mechanism is HTTP. Unfortunately I can't give any more detailed info, as I've only worked on the client end of WebDAV and not the server side of things.
I think serving up files from WebDAV might be what you're looking for.