Linux C - implementing the ability for a program to update itself

I am writing a program in C in a Linux environment (Debian Lenny) and would like the program to be updated when an update is available (the program gets notified when a new update is released). I am looking for a way for the program to update itself.
What I am thinking is that the main program invokes a new program to handle the update. The updater program will have access to the source code and will receive information about the changes to make, something like this:
edit1: line 20, remove columns 5 to 20;
edit2: line 25, remove columns 4-7, then add "if(x>3){" starting at column 4
edit3: line 26, enter a new line and insert "x++;"
Then kill the main process, recompile the source code, and replace the old binary with the new one.
Or is there a better (easier), more standard way to give a program the ability to update itself?
I use the program to control a system on an embedded Linux board. Therefore, I don't want the source code to be accessible to other people (in case the system is hacked or something).
If the best way to update a program is by shipping source code, how would you suggest securing that source code? If you suggest encrypting it, what functions (in C on Linux) can the program use to encrypt and decrypt the source files?

If your target system is Debian, then you should just take advantage of the Debian packaging system to provide updates. Package your compiled application in a .deb package, distribute it on an APT archive which is included in your system's sources.list, and just use cron to schedule a regular update check with apt. The .deb package can include a post-installation script that restarts your application.
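For example (the repository URL and package name here are hypothetical), the client's /etc/apt/sources.list could contain a line like:

deb http://packages.example.com/debian lenny main

and a root crontab entry could check for and install an updated package every night:

0 3 * * * apt-get update -qq && apt-get install -qq -y myprog

Running apt-get install on a package that is already installed upgrades it whenever a newer version is in the archive, and the package's post-installation script can then restart the daemon.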
You could run an apt-proxy caching proxy on your "gateway" nodes that have internet access, and have the other nodes use that as their apt source.
Distributing source code in this case is probably not appropriate, because then you would need to include a full compiler toolchain on your target system.

What you're describing is very similar to the 1980s style of delivering Unix source code, popularized by the development of Perl. You use diff to get a record of changes between different versions of the source code, distribute this "patch" file, and use patch to perform the necessary modifications at the client end. This doesn't address the network-communication or version-control issues, though.
A possible downside is that a first-time download may need many patches applied to bring the version up to date. This is often the case when investigating old source from nntp:comp.sources.unix.
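If you do go the patch route on-device, the updater process sketched in the question could boil down to something like this (all paths and the patch location are hypothetical, and it assumes patch, make and a compiler exist on the target, which is a heavyweight requirement for an embedded board):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Run a shell command and report failure; system() forks /bin/sh. */
static int run(const char *cmd)
{
    int status = system(cmd);
    if (status != 0)
        fprintf(stderr, "updater: step failed: %s\n", cmd);
    return status;
}

int main(void)
{
    /* 1. Apply the downloaded source patch (hypothetical paths). */
    if (run("patch -p1 -d /opt/myprog/src < /tmp/update.patch") != 0)
        return 1;                 /* refuse to build a half-patched tree */
    /* 2. Rebuild and install the new binary. */
    if (run("make -C /opt/myprog/src install") != 0)
        return 1;
    /* 3. Restart: replace this updater process with the new program. */
    execl("/opt/myprog/bin/myprog", "myprog", (char *)NULL);
    perror("execl");
    return 1;
}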

Related

How can I package a configuration file with a C program?

I am currently trying to build a new version of a piece of software I developed. The software takes a simple command line argument and appends the argument to the end of a file. My problem is that I want to alter the program so that:
Someone can set up a standard location to save the file to.
The program will remember that location.
It will still work for anyone installing the C program on Mac, Linux, or Windows.
So basically I am trying to figure out how to create a C executable that comes with persistent storage that it can read and modify. Alternatively, I would take any way to create an installer that makes this easy for anyone who wants to use my program.
If this were a Java program I would just add it to the JAR file, but I have never seen this documented for the C language.
I would add platform-specific code to store your settings in whatever area users of that particular platform expect (see the sketch after this list). So:
For Linux: store configuration files in the location specified by $XDG_CONFIG_HOME.
For Mac: use CFPreferences.
For Windows: use the registry.
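On the Linux side, resolving that location in C is only a few lines. A minimal sketch (the file name myprog.conf is made up; per the XDG Base Directory spec, $XDG_CONFIG_HOME falls back to ~/.config when unset):

#include <stdio.h>
#include <stdlib.h>

/* Build the path to the config file: $XDG_CONFIG_HOME/myprog.conf,
 * falling back to ~/.config/myprog.conf as the XDG spec prescribes. */
static void config_path(char *buf, size_t len)
{
    const char *xdg = getenv("XDG_CONFIG_HOME");
    const char *home = getenv("HOME");
    if (xdg && *xdg)
        snprintf(buf, len, "%s/myprog.conf", xdg);
    else
        snprintf(buf, len, "%s/.config/myprog.conf", home ? home : ".");
}

int main(void)
{
    char path[4096];
    config_path(path, sizeof path);

    FILE *f = fopen(path, "r");      /* read the remembered save location */
    if (f) {
        char saved[1024];
        if (fgets(saved, sizeof saved, f))
            printf("saving to: %s", saved);
        fclose(f);
    } else {
        printf("no config yet; would create %s\n", path);
    }
    return 0;
}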

FileSystemWatcher handling moving file - another solution

Hi
I was trying to use FileSystemWatcher to detect whether some files or directories have been moved to another location. The problem was that I had to use the onCreated and onDeleted events to handle this, and there are many issues with that solution:
How could I detect a change if I select more than one file and press Ctrl+C, Ctrl+V, or right-click and select Copy and then Paste in the same directory?
How could I detect it if I select more than one directory?
And lastly, what if I simulate moving a file? I could delete a file and create one with the same name in a different place.
I know I could use timers, process-lock detection, or checking which process is using the file (if it's explorer.exe, then it could be a file move), but such a solution is not perfect and is very inefficient. I was thinking about how to solve this issue, and I have decided to implement it in a low-level language. Is it possible to do this using C or assembler? I know that everything is possible in assembler, so can it be implemented in asm? I would like to create my own FileSystemWatcher using assembler or C, but where should I look for information on how to do this?
File movement within the same filesystem can be detected easily using a filesystem filter driver, as the filesystem receives the corresponding request from the OS. Other scenarios, such as moving to another disk or moving via a copy/delete sequence, are hardly traceable even with a filter driver, because you would need to match the file which has been created/written to with the file which is being deleted (possibly on another disk).
If you plan to write some security mechanism (like DRM), then I need to remind you that the data can be altered during copying (e.g. encrypted or compressed), which makes your task even harder.
Still, you can look at filesystem filter drivers: should you decide to go on with detection of filesystem events, such a driver is a much more reliable and powerful mechanism than FileSystemWatcher.

How to make a binary which downloads its newer copy? (limited conditions)

I would like to ask for advice.
There is a need for a binary to have a self-update mechanism. Let's imagine the binary runs on host A and the update server is server B.
The lobster method is to fork a bash script with a wget/ftp/ncftp/etc. getter which will download and replace the binary. But, ehm... there are no such tools on A, and they will not be installed.
In short, I can't use any external software tools (external to the running binary). I can only hardcode the mechanism into the running binary.
As the binary runs, it can fetch the new binary (and an md5 file) via plain TCP sockets into a tmp file, then do an md5 comparison, and if everything is OK, replace the binary and restart itself. It's easy to do, but I have some strange feeling... dunno.
Maybe someone can share some advice? :) Thank you in advance.
Conditions: the binary is written in pure C. FreeBSD is the side the binary runs on, and the update server is CentOS, so Java/Python/C++/anything is available on the server side but not on the FreeBSD side. Yes, to be honest, it is possible to install some tools on the client side and open the firewall for FTP, but I want to avoid that and hardcode the mechanism :)
ADDED: It must be noted that the environment between A and B is secured... ehm... or so we think; in any case, security and access problems and spoofing/sniffing are out of scope here :) This is just a local update mechanism for a binary which we nowadays update from the center with expect scripts over ssh.
You will have to reimplement a whole host of functionality if you want to do this. My easiest suggestion would be to link against libcurl, hardcode the download URL into your executable, and write the image of your executable back to argv[0]. However, you should definitely rethink your distribution concept: most distributions do some form of package management, and using it is the easiest alternative for all parties involved.
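A minimal sketch of that suggestion, assuming libcurl is present (the URL and staging path are hypothetical; it stages the download and rename()s it into place rather than writing the running image directly, since writing to a busy executable fails with ETXTBSY on many systems):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
#include <curl/curl.h>

int main(int argc, char *argv[])
{
    /* Stage the download next to the real binary so rename() stays
     * on one filesystem (hypothetical path); assumes argv[0] holds
     * the full install path. */
    const char *staged = "/usr/local/bin/myprog.new";
    FILE *out = fopen(staged, "wb");
    if (!out) { perror("fopen"); return 1; }

    CURL *curl = curl_easy_init();
    if (!curl) { fclose(out); return 1; }
    curl_easy_setopt(curl, CURLOPT_URL, "http://hostb.example/myprog");
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  /* default callback fwrite()s */
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    fclose(out);
    if (rc != CURLE_OK) {
        fprintf(stderr, "download failed: %s\n", curl_easy_strerror(rc));
        return 1;
    }

    /* ...verify the md5 of 'staged' here before installing, as the
     * poster already plans to do... */

    chmod(staged, 0755);
    if (rename(staged, argv[0]) != 0) { perror("rename"); return 1; }
    execv(argv[0], argv);            /* restart as the new version */
    perror("execv");
    return 1;
}

Compile with -lcurl; the checksum step is deliberately left as a comment.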
First of all, check whether you can modify a binary while a process is executing it; some systems do not allow it (the write typically fails with ETXTBSY).
You say you cannot use external tools, so presumably you cannot ship another "updater program" that would make the change on behalf of your binary.
But you could probably download such a program (from wherever you download your update) and execute it (exec replaces the current process with the new one); that executed process will download and update your main binary, and then exec back into it.
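A sketch of that second half, i.e. the downloaded updater itself (hypothetical paths; rename() works even while the old binary is running because the old inode stays alive until its process exits):

/* updater.c - installs the staged binary over the target program,
 * then execs back into it. argv[1] is the program being updated. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: updater /path/to/program\n");
        return 1;
    }
    const char *staged = "/tmp/myprog.new";   /* staged download; must live
                                                 on the same filesystem as
                                                 argv[1] for rename() */
    if (rename(staged, argv[1]) != 0) {       /* sidesteps ETXTBSY */
        perror("rename");
        return 1;
    }
    execl(argv[1], argv[1], (char *)NULL);    /* become the new version */
    perror("execl");
    return 1;
}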

Configuration Management for FPGA Designs

Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGAs programmed in VHDL, with C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and its great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to capture the other project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion, but I switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
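As a rough sketch of that Makefile idea (hedged: the exact batch-mode flags vary between ISE releases, so check coregen -h for your version):

# Regenerate a core's HDL from its .xco script with CORE Generator's
# batch mode; coregen.cgp carries the project settings.
%.v: %.xco coregen.cgp
	coregen -b $< -p .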
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy that and configuration manage it explicitly.

Apply file structure diff/patch on remote system?

Is there a tool that creates a diff of a file structure, perhaps based on an MD5 manifest? My goal is to send a package across the wire that contains new/updated files and a list of files to remove. It needs to copy over new/updated files and remove files that have been deleted from the source file structure.
You might try rsync. Depending on your needs, the command might be as simple as this (-a preserves permissions, times and symlinks, -z compresses data in transit, and --del removes files from the destination that no longer exist at the source):
rsync -az --del /path/to/master dup-site:/path/to/duplicate
Quoting from rsync's web site:
rsync is an open source utility that provides fast incremental file transfer. rsync is freely available under the GNU General Public License and is currently being maintained by Wayne Davison.
Or, if you prefer Wikipedia:
rsync is a software application for Unix systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction. rsync can copy or display directory contents and copy files, optionally using compression and recursion.
#vfilby I'm in the process of implementing something similar.
I've been using rsync for a while, but it gets funky when deploying to a remote server with permission changes that are out of my control. With rsync you can choose not to include permissions, but they still end up being considered for some reason.
I'm now using git diff. This works very well for text files. Diff generates patches, rather than a MANIFEST that you have to include with your files. The nice thing about patches is that there is already an established framework for using and testing them before they're applied.
For example, with the patch utility that comes standard on any *nix box, you can run the patch in dry-run mode. This will tell you whether the patch you're going to apply will actually apply cleanly before you run it for real. It helps you make sure that the files you're updating have not changed while you were preparing the patch.
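For instance, assuming a patch produced by git diff from the project root (the file name is made up):

patch -p1 --dry-run < changes.patch
patch -p1 < changes.patch

The first command reports what would happen without touching any file; -p1 strips the a/ and b/ prefixes that git diff puts on paths.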
If this is similar to what you're looking for, I can elaborate on my process.
