Execute a program with a custom .ini file path

I make a fair number of portable Apps for personal use, and they work perfectly for the most part. I do, however, find it quite frustrating that if I run them on another computer none of my preferences are retained, as a program always looks in AppData for its configuration files (which obviously don't exist on another system), so I'm wondering whether there is some kind of command-line switch to launch an .exe with a custom .ini location.
I'm asking this firstly because Google has proved fruitless (once again), and secondly because I know it's possible - I've actually done this before, but with only one of my Apps. I accomplished it by launching the App via the command programFile.exe -f configFile.ini /s (I have also seen programFile.exe -d -f configFile.ini /s elsewhere). Naturally, I thought I would try to apply this to some other Apps, but it seems to work only for that particular App.
So, is there a command/switch that I am unaware of that will do this for an .exe file?
Thanks

It really depends on each executable file you are using. Some have support for what you are looking for, and some don't. Some programs don't even use .ini files. What you should check is whether each and every program you use has support for a custom user-data location.
Edit
The only case where generic arguments would be available across a group of EXE files is if they were generated with the same tool, which automatically provides these arguments for you. InstallShield and MSI installers have that kind of feature (silent and automated installation, for instance).
I suggest you look into the tool you are using to generate your portable Apps and see whether it provides those generic arguments for you, and how they work. If it does not have that feature, then look into the App for which you were able to specify a custom INI location. Somewhere in its code there must be a piece that parses the arguments you pass to the EXE file. You should share that piece of code with your other Apps, so that they all support the same argument list.
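
For illustration, here is a minimal C sketch of what that argument-handling piece might look like; the -f switch is taken from the example above, and the fallback file name is made up:

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    const char *ini_path = "default.ini";   /* hypothetical fallback when no -f is given */

    /* Scan the arguments for "-f <file>"; other switches are ignored here. */
    for (int i = 1; i < argc - 1; i++) {
        if (strcmp(argv[i], "-f") == 0) {
            ini_path = argv[i + 1];
            break;
        }
    }

    printf("Loading configuration from %s\n", ini_path);
    /* ... read settings from ini_path instead of %APPDATA% ... */
    return 0;
}

Any App compiled with this kind of loop would accept programFile.exe -f configFile.ini, regardless of which tool built it.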

How to use shp2pgsql

My question should be very simple to answer for anyone who isn't a self-taught newbie like me...
On this page is a cheatsheet for a tool used in a GIS/DB environment: http://www.bostongis.com/pgsql2shp_shp2pgsql_quickguide.bqg
I would like to create a script that users can simply click to launch the process, given the proper data. But I don't understand how to use this. It obviously doesn't work in a Python console, nor directly in the Windows console. How is it supposed to work? What language is this?
Thanks
shp2pgsql is indeed a command-line tool. It comes with your PostgreSQL/PostGIS installation (usually) and, if not accessible via the PATH variable, can (usually) be run from within the /bin folder of your PostgreSQL installation. You can also always 'make' the program from source in any location yourself, if needed.
EDIT:
One way to set up a script (independent of whether you use it within QGIS' own Python environment or not) would be to use Python's subprocess module (or os.system) (check the related question here) to write to the shell and execute shp2pgsql.
A slightly more sophisticated solution for (batch) inserting (multiple) shapefiles via script could be to use ogr2ogr via the gdal/ogr module within Python (check this blog). That, however, would require a working installation of the gdal core library and the respective Python bindings (at least to use outside of the QGIS Python environment, where it is pre-installed, AFAIK), which can be tiresome at times. Once installed correctly, though, it offers a powerful (I dare say almighty) toolset for geodata management and manipulation via Python.
Apart from that, the blog link I provided also mentions a batch-insert script/tool (which drives ogr2ogr) in the QGIS 2.8 toolbox... maybe that can help you, either with your work directly or (via its source code) by pointing you in the direction of creating your own tool.

Run GUI from batch file is unprofessional?

I recently released a piece of software to our customer; it will be installed on one machine at one location, and maybe later at two other locations. It is a prototype and has to be tested.
It is a compiled Matlab GUI which runs scanning, does some image analysis, and produces a report. All is fine here. But I've got a complaint that the batch file I use is a relic of past DOS times, should not be used, and looks unprofessional... Currently, the user has to set one path in the batch file before first use and then always run the batch file. This .bat file kills some processes to avoid conflicts (including any running instances of the GUI), sets the path for results, and runs the GUI. I proposed creating a shortcut to this .bat file with a nice logo (so that they don't see the .bat extension ;)), but they are still unhappy.
What to do is probably not the main question here - the client is always right, and I should remove the .bat somehow to make them happy. But is it really so unprofessional to release technical, non-mass-market software using batch files? Or is it just one person's opinion?
Personally, I think that if you already have a GUI, you should use it to do the pre-processing. There is nothing wrong with using batch files, but using one when you have a GUI doesn't seem like the best way to do it.
Alternatively, create a GUI with the same look as the main program to ask the user for the details it needs. For me it's not about professionalism, but about how easy it is for the user to do what they need to do.
I see nothing wrong with using a batch file for small projects (especially during the testing phase, if that is easier for you and delivers something to the client quicker). However, depending on the size of the project, it is nice to have an EXE to deliver to the client, with a proper icon and whatnot.
I would agree with the customer that an EXE looks more professional. Whether it is or not... I'm not sure. The .bat files just seem a little slapped together whether that is true or not.
I would say this is similar to when people call PHP programs scripts, because a lot of the time they are simple scripts. But then there are frameworks out there like Cake and Kohana that are more than what someone would typically classify as a script; since it's PHP, though, there is still that connotation.
A batch file is not unprofessional. The batch "language" has disadvantages and outright problems (error handling!!) but it gets the job done and that's the point.
The problem is that it shows the internals of the program, and some people are scared by this. They don't want to see anything like this and so try to find a disapproving label for it. The quickest solution is to hide the batch file behind a "vanity cover" - in this case, an exe that hides the workings from terrified eyes.
One simple possibility is to use a self-extracting zip file, e.g.: http://www.7zsfx.info/en/

Configuration Management for FPGA Designs

Which configuration management tool is best for FPGA designs, specifically Xilinx FPGAs programmed with VHDL and C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software objects and executables), so I don't keep them in the version control repository. Instead, I keep a zipfile for each release/tag I create, with all the important (and irritatingly slow to reproduce) ones in it.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: the Xilinx tools don't generally respect a clean division between "input" and "output", or between (human-edited) "source" and (opaque) "binary". Many of the tools like to store state information, such as a last-run time or a hash value, in their "input" files, meaning that you'll get lots of false changes. Coregen does this to its .xco files, and Project Navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the workflow with Coregen: in many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most software CM applications are fine with ASCII text files; they may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and it's great. You can have your code that lives in Linux-land checked in side by side with your specs and docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from ClearCase to RCS used, and it is really all okay for this kind of thing. The important thing is to establish a good set of check-in policies for your group and make sure they stick to them.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to also capture the project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
I used Subversion previously but switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version-controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate; the only things I save are the Verilog files and some ISIM waveform config files. In other projects that use Coregen, I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores, and then use a Makefile for actually running Coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job of tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files, or you will go mad: Xilinx has something like 300 different file types, which change every release. If you want to save a file, you can try the ISE project file itself, with a .xise extension. Anything that is hard to recreate - like the golden bitfile that you know works and took 6 hours to build - is worth copying and configuration-managing explicitly.

Rely on PATH or provide an explicit path when using system()

I'm writing a C program that makes several calls to system() to execute other programs. When constructing the command string, is it better to give the full path to the program being called explicitly, or should I just give the executable name and let the shell resolve its location using the PATH environment variable?
The programs I'm calling are all part of a single package, and I have the path to the installation directory from a preprocessor definition. Giving the explicit path would seem to avoid errors that might occur if multiple installed programs share the same name. However, it makes building the command strings a little more complicated, and everything will break if the user moves the programs around after installation.
Is there a widely accepted best practice covering this?
[Clarification]
I'm using autoconf/automake to generate the distribution. The preprocessor definition providing the installation directory is created by the makefile; it reflects the user's choice of installation directory as specified either on the configure command line or the make command line. I do take the point about using environment variables to specify the location of the binaries, though. It seems like an unneeded pain in the butt to make users rebuild just to change the location of the binaries.
Best practice is never to assume that you know your install directory at build time. Let your users decide where to install and work anyway.
This means that you will need to find out where your programs are located using some other mechanism. Consider using environment variables or command line parameters to allow the user to specify the actual path, if your platform does not provide you with the means to find out where the executables are located. You can use your knowledge of where you are normally installed as a fallback option.
As for your actual question: if you can build the absolute path to your program (using a mechanism other than preprocessor directives), use that. Otherwise, fall back to having the system find it for you.
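
As a rough C sketch of the environment-variable-with-fallback idea suggested above (the MYPKG_BIN_DIR variable, the INSTALL_DIR default and the helper program are all made-up names):

#include <stdio.h>
#include <stdlib.h>

#ifndef INSTALL_DIR
#define INSTALL_DIR "/usr/local/mypkg/bin"   /* assumed build-time default */
#endif

/* Directory to look for helper programs in: an environment
 * override wins, otherwise fall back to the build-time default. */
static const char *tool_dir(void)
{
    const char *dir = getenv("MYPKG_BIN_DIR");
    return (dir != NULL && dir[0] != '\0') ? dir : INSTALL_DIR;
}

int main(void)
{
    char cmd[1024];
    snprintf(cmd, sizeof cmd, "%s/helper --version", tool_dir());
    return system(cmd);
}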
The best practice is to not presume anything about the system you're installing onto. You can have the best of both worlds if you just let the user choose. Make the command you call an application preference or require paths to be defined in the environment:
PATH_TO_TOOL1=foo
PATH_TO_TOOL2=/usr/bin/bar
You can, of course, just fall back to a default of some kind if the variables aren't defined or the preference isn't set. Writing your application to be more flexible is always the best choice!
You should definitely let the user specify the path to the installed binaries with an environment variable. Not all systems are the same, and many people will want to put their execs in different places.
The best example I can think of is people doing a local install vs. a system install. If your program is installed in a home directory, that user will have to set an env variable to say where the binaries were copied to.
If you're absolutely sure of the path names, and they are not "well-known" commands (for example, POSIX shell utilities on Unix are "well-known"), specify the full pathname; otherwise, don't specify the full path, or let the user control it via an environment variable.
In fact, you may be able to write something like a function int my_system(const char *); that does the prefixing of the path for you. If you later decide it was a bad idea, it's just a matter of making my_system() identical to system().
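
A minimal sketch of what that wrapper might look like, assuming a PREFIX macro for the install directory and a hypothetical helper program, with error handling kept short:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef PREFIX
#define PREFIX "/opt/mypackage/bin"   /* assumed install location */
#endif

/* Like system(), but runs the command relative to the install
 * directory. If prefixing turns out to be a bad idea, this can
 * simply forward to system() unchanged. */
int my_system(const char *command)
{
    size_t len = strlen(PREFIX) + 1 + strlen(command) + 1;
    char *buf = malloc(len);
    if (buf == NULL)
        return -1;
    snprintf(buf, len, "%s/%s", PREFIX, command);
    int status = system(buf);
    free(buf);
    return status;
}

int main(void)
{
    return my_system("helper --version");   /* runs PREFIX/helper --version */
}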
I'm not sure if it's a best practice, but what I do in these cases is write my C code to extend the PATH environment variable to include the installation directory at the end, and then just use PATH. That way, if the user's PATH wants to override where I believe the stuff was installed, it can; but if the software was installed in an out-of-the-way place, I can call it without forcing my users to put the directory on $PATH themselves.
Please note that the extended PATH lasts only as long as the C program runs; I'm not proposing changing the persistent PATH.
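
A sketch of that approach, assuming POSIX getenv/setenv and made-up install directory and tool names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *install_dir = "/opt/mypackage/bin";   /* assumed location */
    const char *old_path = getenv("PATH");
    if (old_path == NULL)
        old_path = "";

    /* Append the install dir *after* the existing PATH, so the
     * user's own entries still take precedence. */
    size_t len = strlen(old_path) + 1 + strlen(install_dir) + 1;
    char *new_path = malloc(len);
    if (new_path == NULL)
        return 1;
    snprintf(new_path, len, "%s:%s", old_path, install_dir);
    setenv("PATH", new_path, 1);   /* affects only this process and its children */
    free(new_path);

    return system("sometool --do-work");   /* resolved via the extended PATH */
}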

How to allow a user to edit data in a separate app from the terminal?

I am writing a terminal-based application, but I want the user to be able to edit certain text data in a separate editor. For example, if the user chooses to edit the list of current usernames, the list should open as a text file in the user's favorite editor (vim, gedit, etc.). This will probably be an environment variable such as $MYAPPEDITOR. This is similar to the way commit messages work in svn.
Is the best way to do this to create a temporary file in /tmp, and read it in when the editor process is terminated? Or is there a better way to approach this problem?
There's already a $EDITOR variable, which is extremely standard; I have seen it working on a wide variety of Unixes. Also, vi is always an option on any flavor of Unix.
Debian has a sensible-editor command that invokes $EDITOR if it can, or falls back to some standard ones otherwise. Freedesktop.org has an xdg-open command that will detect which desktop environment is running and open the file with the associated application. As far as I know, sensible-editor doesn't exist on other distributions, and of course xdg-open will fail in a text-only environment, but it couldn't hurt to try as many options as possible, if you think it's important that a desktop user can see their happy shiny gedit or kate instead of scary old vi or nano. ;)
The way crontab and sudoedit work is also by making a file in /tmp. git puts it under .git, and svn actually puts it in the current directory (not /tmp).
The way svn and mercurial do it is by making a file in /tmp.
BTW, you don't need a MYAPPEDITOR; on *nix there's EDITOR already present.
Since you mention svn in your post, why not just follow the same methodology? svn opens a file with a particular name in whatever $EDITOR (or $SVN_EDITOR) contains - this might actually require some work on your part, determining the parameters for each supported editor. In either case, you have the name of the file that was saved (or the error code of the application if something failed), and you can just use that.
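
To make the tempfile-plus-$EDITOR flow described above concrete, here is a minimal C sketch (POSIX mkstemp; the sample data is made up, and re-reading the edited file is left as a comment):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/myapp-XXXXXX";   /* mkstemp replaces the XXXXXX */
    int fd = mkstemp(path);
    if (fd == -1)
        return 1;

    /* Write the current data out for the user to edit. */
    const char data[] = "alice\nbob\ncarol\n";
    if (write(fd, data, sizeof data - 1) < 0)
        return 1;
    close(fd);

    /* $EDITOR if set, otherwise fall back to vi. */
    const char *editor = getenv("EDITOR");
    if (editor == NULL || editor[0] == '\0')
        editor = "vi";

    char cmd[1024];
    snprintf(cmd, sizeof cmd, "%s %s", editor, path);
    int status = system(cmd);

    /* ... on success, re-read path here to pick up the user's changes ... */
    unlink(path);
    return status == 0 ? 0 : 1;
}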
