I would like to ask for advice.
A binary needs a mechanism for self-update. Let's say the binary runs on host A and the update server is host B.
The lazy method would be to fork a shell script with a wget/ftp/ncftp/etc. getter that downloads and replaces the binary. But there are no such tools on A, and they will not be installed.
In short, I can't use any external software tools (external to the running binary). I can only hardcode the mechanism into the running binary itself.
While the binary runs, it can download the new binary (and an MD5 file) over plain TCP sockets into a temp file, then compare MD5s and, if everything checks out, replace the binary and restart itself. That's easy to do, but I have a strange feeling about it.
Maybe someone can share some advice? :) Thank you in advance.
Conditions: the binary is written in pure C. The binary runs on FreeBSD and the update server is CentOS, so Java/Python/C++/anything is available on the server side, but not on the FreeBSD side. Yes, to be honest, it is possible to install some tools on the client side and open the firewall for FTP, but I want to avoid that and hardcode the mechanism instead. :)
ADDED: it must be noted that the environment between A and B is secured (or so we think); in any case, security, access problems, and spoofing/sniffing are out of scope here. :) This is just a local implementation of an update mechanism for a binary that we currently update from the center with expect scripts over SSH.
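For concreteness, a minimal sketch of the verify-and-swap step described above (file names are placeholders, and the MD5 check assumes OpenSSL's libcrypto is linked in; the download into the temp file happens beforehand):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <openssl/md5.h>

/* Verify the downloaded image against the expected MD5, then swap it in
 * and restart. Write the temp file next to the real binary so rename()
 * stays on one filesystem and is atomic. */
static int md5_matches(const char *path, const unsigned char expected[MD5_DIGEST_LENGTH])
{
    unsigned char digest[MD5_DIGEST_LENGTH], buf[4096];
    size_t n;
    MD5_CTX ctx;
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    MD5_Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        MD5_Update(&ctx, buf, n);
    fclose(f);
    MD5_Final(digest, &ctx);
    return memcmp(digest, expected, MD5_DIGEST_LENGTH) == 0;
}

int swap_and_restart(const char *new_image, const char *self_path,
                     const unsigned char expected[MD5_DIGEST_LENGTH],
                     char *const argv[])
{
    if (!md5_matches(new_image, expected))
        return -1;                          /* bad download: keep running the old binary */
    if (chmod(new_image, 0755) != 0)
        return -1;
    if (rename(new_image, self_path) != 0)  /* atomic replace of the directory entry */
        return -1;
    execv(self_path, argv);                 /* restart into the new image */
    return -1;                              /* execv() returns only on error */
}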
You will have to reimplement a whole host of functionality if you want to do so. My easiest suggestion would be to link against libcurl, hardcode the download path into your executable, and write the image of your executable back to argv[0]. However, you should definitely rethink your distribution concept: most distributions do some form of package management, and using it is the easiest alternative for all parties involved.
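A minimal sketch of that libcurl route (URL and file names are placeholders; error handling trimmed):

#include <stdio.h>
#include <curl/curl.h>

/* Fetch the new image into a temp file; the caller can then verify it
 * and rename() it over the running binary's path. */
int fetch_update(const char *url, const char *tmp_path)
{
    CURLcode res;
    FILE *out = fopen(tmp_path, "wb");
    CURL *curl = curl_easy_init();

    if (!out)
        return -1;
    if (!curl) {
        fclose(out);
        return -1;
    }
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  /* default callback writes to the FILE* */
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); /* HTTP errors >= 400 fail the transfer */
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    fclose(out);
    return res == CURLE_OK ? 0 : -1;
}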
First of all, check whether you can modify a binary while a process is executing it; some systems do not allow it.
You say you cannot use external tools, so you probably cannot ship a separate "updater program" to make the change on behalf of your binary.
But you can probably download such a program (from wherever you download your update) and exec() it (exec replaces the current process with the new one);
that executed process then downloads and updates your main binary, and finally execs back into it.
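A sketch of that hand-off (the /tmp/updater path is invented; note that some systems mount /tmp noexec, so a path next to the main binary may be safer):

#include <unistd.h>
#include <sys/stat.h>

/* Replace the current process with the freshly downloaded updater.
 * The updater swaps the main binary on disk, then execs back into it. */
void hand_off_to_updater(void)
{
    char *const args[] = { "/tmp/updater", (char *)0 };
    chmod("/tmp/updater", 0755);  /* downloaded files are not executable by default */
    execv("/tmp/updater", args);  /* does not return on success */
}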
My question should be very simple to answer for anyone who isn't a self-taught newbie like me...
This page is a cheat sheet for a tool used in a GIS/DB environment: http://www.bostongis.com/pgsql2shp_shp2pgsql_quickguide.bqg
I would like to create a script that users can simply click to launch the process, given the proper data. But I don't understand how to use this. It obviously doesn't work in a Python console, nor directly in the Windows console. How is it supposed to work? What language is this?
Thanks
shp2pgsql is indeed a command line tool. It comes with your PostgreSQL/PostGIS installation (usually) and, if not accessible via the PATH variable, can (usually) be run from the /bin folder of your PostgreSQL installation. You can also always 'make' the program from source in any location yourself, if needed.
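For reference, a typical invocation pipes straight into psql (the database, table, and SRID here are invented):
shp2pgsql -s 4326 -I roads.shp public.roads | psql -d mygisdb -U postgres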
EDIT:
One way to set up a script (independent of whether you use it within QGIS's own Python environment or not) would be to use Python's subprocess module (or os.system) (check the related question here) to write to the shell and execute shp2pgsql.
A slightly more sophisticated solution for (batch) inserting (multiple) shapefiles via script could be to use ogr2ogr via the gdal/ogr module within Python (check this blog). That, however, requires a working installation of the GDAL core library and the respective Python bindings (at least for use outside the QGIS Python environment, where it is pre-installed AFAIK), which can be tiresome at times. Once installed correctly, though, it offers a powerful (I dare say almighty) toolset for geodata management and manipulation via Python.
Apart from that, the blog I linked also describes a batch-insert script/tool (which drives ogr2ogr) in the QGIS 2.8 toolbox; maybe that can help you, either with your work directly or (via its source code) to point you in the direction of creating your own tool.
Hi
I was trying to use FileSystemWatcher to detect whether some files or directories have been moved to another location. The problem was that I had to use the OnCreated and OnDeleted events to handle this, but there are many issues with that solution:
How could I detect the change if I select more than one file and press Ctrl+C, Ctrl+V, or right-click and choose Copy and then Paste in the same directory?
How could I detect it if I select more than one directory?
And the last one: what if a file move is simulated? I could delete a file and create one with the same name in a different place.
I know I could use timers, process-locking detection, or checking which process has the file open (if it's explorer.exe, it could be a file move), but such a solution is imperfect and very ineffective. I was thinking about how to solve this and decided to implement it in a low-level language. Is it possible to do this in C or assembler? I know that everything is possible in assembler, so can it be done in asm? I would like to create my own FileSystemWatcher in assembler or C, but where should I look for information on how to do this?
File movement within the same filesystem can be detected easily using a filesystem filter driver, as the filesystem receives the corresponding request from the OS. Other scenarios, such as moving to another disk or moving via a copy/delete sequence, are hardly traceable even with a filter driver, because you would need to match the file that has been created/written to with the file that is being deleted (possibly on another disk).
If you plan to write some security mechanism (like DRM), then I need to remind you that the data can be altered during copying (e.g., encrypted or compressed), which makes your task even harder.
Still, you can look at filesystem filter drivers: should you decide to go on with detecting filesystem events, such a driver is a much more reliable and powerful mechanism than FileSystemWatcher.
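If you do want to start in plain C before committing to a kernel driver, the user-mode Win32 API that FileSystemWatcher wraps is ReadDirectoryChangesW; a minimal synchronous sketch (directory path supplied by the caller) shows how a same-volume move arrives as a rename pair:

#include <windows.h>
#include <stdio.h>

/* Watch a directory tree and print rename events. A move within the same
 * volume is reported as an OLD_NAME/NEW_NAME pair of rename actions. */
int watch_moves(const wchar_t *dir)
{
    BYTE buf[64 * 1024];
    DWORD bytes;
    HANDLE h = CreateFileW(dir, FILE_LIST_DIRECTORY,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    while (ReadDirectoryChangesW(h, buf, sizeof buf, TRUE,
                                 FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME,
                                 &bytes, NULL, NULL)) {
        FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
        for (;;) {
            int len = (int)(fni->FileNameLength / sizeof(WCHAR));  /* length is in bytes */
            if (fni->Action == FILE_ACTION_RENAMED_OLD_NAME)
                wprintf(L"moved from: %.*ls\n", len, fni->FileName);
            else if (fni->Action == FILE_ACTION_RENAMED_NEW_NAME)
                wprintf(L"moved to:   %.*ls\n", len, fni->FileName);
            if (fni->NextEntryOffset == 0)
                break;
            fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(h);
    return 0;
}

A cross-volume move or a copy/paste still shows up only as separate create/delete events here, which is exactly the matching problem described above.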
I'm currently working on a rather large web project written using C servlets (utilizing the GWAN web server). In the past I've used a couple of IDEs for my LAMP/PHP jobs, like Eclipse.
My problem with Eclipse is that you can either mirror the project locally, which isn't possible in this case as I'm working on a Mac (the server does not run on OS X), or use the "remote" view, which re-uploads files when you save them.
In the latter case, the file is only partly written while uploading, which makes this a no-go for a running web server; the file could also become corrupted if the connection is lost during upload. Besides, uploading the whole file to change a single character seems rather inefficient to me.
So I was thinking:
Wouldn't it be possible to have the IDE open Vim over SSH and mirror my changes there, and then just :w (save)? Or use some kind of diff files for the changes?
The first option would be preferred, as it has the added advantage of Vim's .swp files, which let others see when someone is already editing a file.
My current solution is using ssh+vim, but then I lose all the cool features I have with Eclipse and other more advanced IDEs.
Also, regarding X forwarding: the reason I don't like it is speed. It feels far slower than editing locally and takes up unneeded bandwidth, when all I want to do is basically text editing.
P.S.: I couldn't find any more appropriate tags for the question, especially no "remote" tag, but if you know any, feel free to add them. Also, if there is another similar question, feel free to point it out - I couldn't find any.
Thank you very much.
If you're concerned about having to transmit the entire file for minor changes, the only solution that comes to mind is running (either continuously or on demand) an rsync job that mirrors the remote site to your local system (and back). The rsync protocol transmits only the deltas. According to "Are rsync operations atomic at file level?", the change is atomic.
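For example (host and paths invented; rsync writes each file to a temporary name and renames it into place, which is where the atomicity comes from):
rsync -az --delete ./project/ user@server:/var/www/project/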
Another possibility: run everything in a virtual machine on your Mac. The server and the IDE/text editor are both on the same virtual machine so you don't have to fear network issues.
Because the source code on the virtual machine is under some kind of VCS, the classic code → test → commit cycle is trivial (at least in theory).
I am writing a program in C in a Linux environment (Debian Lenny) and would like it to be updated whenever an update is available (the program gets notified when a new update is available). I am looking for a way for the program to update itself.
What I am thinking is that the main program invokes a new program to handle the update. The updater program will have (access to) the source code and receive update information describing the changes to the source code, something like this:
edit1: line 20, remove columns 5 to 20;
edit2: line 25, remove columns 4-7, then add "if(x>3){" from column 4;
edit3: line 26, enter a new line and insert "x++;"
It would then kill the main process, recompile the source code, and replace the old binary with the new one.
Or is there a better (easier), standard way to give a program the ability to update itself?
I use the program to control a system with an embedded Linux board. Therefore, I don't want the source code to be accessible to anyone else (in case the system is hacked or something).
If updating via the source code is the best way, how do you suggest I secure that source code? If you suggest encrypting it, what functions (Linux, C) can the program use to encrypt and decrypt the source files?
If your target system is Debian, you should just take advantage of the Debian packaging system to provide updates: package your compiled application as a .deb, distribute it via an APT archive included in your system's sources.list, and use cron to schedule a regular update check with apt. The .deb package can include a post-installation script that restarts your application.
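As a sketch, the scheduled check can be a single /etc/crontab entry (the package name myapp is made up):
0 4 * * * root apt-get update -qq && apt-get install -qq -y --only-upgrade myapp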
You could run an apt-proxy caching proxy on your "gateway" nodes that have internet access, and have the other nodes use that as their apt source.
Distributing source code in this case is probably not appropriate, because then you would need to include a full compiler toolchain on your target system.
What you're describing is very similar to the 1980s style of delivering Unix source code, popularized by the development of Perl: you use diff to record the changes between two versions of the source code, distribute that "patch" file, and use patch to apply the modifications at the client end. This doesn't address the network-communication or version-control issues.
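The round trip looks roughly like this (version numbers and file names invented):
diff -u prog-1.0/main.c prog-1.1/main.c > update-1.1.patch    (on the server)
patch -p1 < update-1.1.patch                                  (on the client, inside the source tree)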
A possible downside is that a first-time download may need many patches applied to bring it up to date. This is often the case when investigating old source from nntp:comp.sources.unix.
I am currently using libproxy to get the proxy information (if any) on Red Hat and Debian Linux. It doesn't work all that well, but it's the only way I know of to get the proxy information from my code.
I need to stop using the library, since in most cases it doesn't recognize the proxy.
Is there any way to acquire the proxy information? What I mean is: is there a file (or group of files) I can read, an environment variable, an API, or a system call I can use to get it?
GNOME-based code is OK, and KDE might help as well, but I am looking for something more generic.
The code is C.
Now, before anyone asks: I don't want to use libproxy anymore. Period. I don't want to start investigating why it doesn't work. I don't really want to know whether there is a new version of that library. I know it might work; I just don't want to use it. I can't use it (just because). So please don't point me that way.
Code is appreciated.
Thanks.
In Linux, the "global proxy setting" is typically just a set of environment variables, usually set in /etc/profile. You can examine those variables to see what proxy is configured.
The variables are:
http_proxy - the proxy for HTTP connections
ftp_proxy - the proxy for FTP connections
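Reading them from C is just getenv(3); a small sketch (the uppercase spellings are also seen in the wild, so check both):

#include <stdio.h>
#include <stdlib.h>

/* Return the HTTP proxy URL from the environment, or NULL if none is set. */
static const char *http_proxy_from_env(void)
{
    const char *p = getenv("http_proxy");
    if (!p)
        p = getenv("HTTP_PROXY");
    return p;  /* e.g. "http://proxy.example.com:3128" */
}

int main(void)
{
    const char *p = http_proxy_from_env();
    printf("proxy: %s\n", p ? p : "(none)");
    return 0;
}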
Using the Network Proxy Preferences tool under GNOME saves the information in the GConf database. The paths to the keys are /system/http_proxy and /system/proxy. You can read about the details of those trees on this page.
You can access the GConf database using the library API. Note that GConf is based on GObject. To examine the contents of this tree from the command line, try the following:
gconftool-2 -R /system/http_proxy
This will produce a "name = value" listing of the tree, which may be usable in your application. Note that it requires a system() call, so it's not recommended for a deployed application, but it might help you get started.
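For a deployed application, a minimal sketch against the GConf client API instead (keys as above; error handling omitted; build with pkg-config gconf-2.0):

#include <gconf/gconf-client.h>

/* Read the GNOME proxy settings straight from the GConf database. */
int main(void)
{
    GConfClient *client;
    gboolean use;
    gchar *host;
    gint port;

    g_type_init();  /* required on older GLib; a deprecated no-op on newer */
    client = gconf_client_get_default();
    use  = gconf_client_get_bool(client, "/system/http_proxy/use_http_proxy", NULL);
    host = gconf_client_get_string(client, "/system/http_proxy/host", NULL);
    port = gconf_client_get_int(client, "/system/http_proxy/port", NULL);
    if (use && host)
        g_print("proxy: %s:%d\n", host, port);
    g_free(host);
    g_object_unref(client);
    return 0;
}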
GNOME has its own place to store proxy settings, and I am sure KDE or any other DE has its own place too. Maybe you can look for any mention of where proxy settings should be stored in the Linux Standard Base; that could point you to a standard way of doing it irrespective of distro or DE.
DE -> Desktop Environment
char* proxy = getenv("all_proxy");
This statement puts the value of the all_proxy environment variable, which the system uses as a global proxy, into your C variable. Note that getenv() returns NULL if the variable is not set.
To print it in bash, try env | grep 'all_proxy' | cut -d= -f 2.