Distributing loadable builtin bash modules

I've written a builtin for bash which modifies the 'cd' command, a requirement for my software. Is there a way to distribute a loadable builtin independently of bash itself? I'd ideally like to distribute just a drop-in "additional feature", because I know people can be put off by patching and compiling their shell from source.
I want to time how long a user stays in a directory so I can determine where they want to be. It's the functionality of http://github.com/joelthelion/autojump/tree/master rewritten as a bash builtin, for performance reasons. That implementation uses $PROMPT_COMMAND to work, but I wanted something integrated.

It is unclear what you have modified, but in any case bash (like zsh and at least ksh93, which IIRC introduced the concept) supports loading built-in commands as external dynamically loaded modules, using the enable -f file name syntax.
These modules, being plain files, can certainly be distributed independently, as long as you make sure they are compatible with the target bash version and architecture. This was already true 5 years ago when you asked this question.
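For example, assuming the module has been compiled into a shared object mycd.so in the current directory (name and location illustrative), loading it into a running shell looks like:
# load the mycd builtin from its shared object
enable -f ./mycd.so mycd
# confirm that bash now knows it as a builtin
type mycd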
One issue in your case is that there seems to be no documented way to overload an internal built-in like cd with a dynamically loaded one while keeping the ability to call the original.
A simple workaround is to implement your customized cd under a different name, say mycd, like this:
/* Compiled against the bash source tree, like the loadable examples in
   examples/loadables/; the headers below come from that tree. */
#include <config.h>
#include <unistd.h>   /* getcwd() */
#include <limits.h>   /* PATH_MAX */
#include "builtins.h"
#include "shell.h"

extern int cd_builtin(WORD_LIST *);   /* bash's own cd */

int
mycd_builtin(WORD_LIST *list)
{
    int rv;
    rv = cd_builtin(list);   /* run the stock cd first */
    if (rv == EXECUTION_SUCCESS) {
        char wd[PATH_MAX + 1];
        if (getcwd(wd, sizeof(wd)) != NULL) {
            /* do your custom stuff knowing the new working directory */
            ...
        }
    }
    return rv;
}
then use an alias or, better, a shell function so that your customized version is called instead of the regular one:
cd() {
    mycd "$@"
}
As long as your customization doesn't affect the behavior of the standard command, and thus doesn't risk breaking scripts that use it, there is nothing wrong with your approach.

Changing the built-in cd is a support nightmare for any admin and unwelcome to users coming from other systems. What is wrong with naming it 'smart-cd' and letting the USER decide whether they want the functionality, by including it in their .bashrc or .profile? Then they can set things up however they want.
Also, how long you've been in a directory is a pretty poor indication of preference. How would you distinguish between idling (a forgotten shell sitting in /tmp overnight), long-running scripts (nightly cron jobs), and actual activity?
There are a multitude of other methods for creating shortcuts to favorite directories: aliases, softlinks, $VARIABLES, scripts. It is arrogant of you to assume that your usage patterns will be welcomed by other users of your system.

Related

Common Lisp Package Definition with Dependencies for Exploration at the REPL?

This is another take on the question of packages and systems, especially for local use in personal projects (maybe similar to HOWTO definition and usage of Common Lisp packages (libraries)? from 2014). What are the recommendations for how to approach this, and can links be provided for sources that describe the recommended approach in 2022? CL's long half-life means lots of old blogs and books are still useful, but unfortunately not all of them, and it is hard for a novice to know which guides are still viable.
Here is the start of a file that I might write for a simple personal exploration project. My basic working mode would be to start SLIME, load and compile this in Emacs, then switch to the SLIME REPL, use in-package to move into my package, and start playing with my functions.
(ql:quickload "alexandria")
(ql:quickload "trivia")

(defpackage computing-with-cognitive-states
  (:use cl)
  (:nicknames :cwcs)
  (:local-nicknames (:alex :alexandria)))

(in-package :cwcs)

(defun sum-denom (dimension dist) ...) ; and more defuns
The leading ql:quickload forms strike me as ugly, but without them I can't load and compile. Is it recommended that I do this manually in the SLIME REPL first? Or is this considered acceptable practice for small use cases like this? Or is there another recommended way?
If I am thinking of using my package elsewhere someday, is it recommended that I make it a system, perhaps with quickproject? Or is the more traditional approach to just load a fasl file from one directory on my system when another session (or another package definition) needs it?
In summary, the questions are about writing code for personal use: it might be a small file, or a few files that depend on each other and on a few external systems. What are some reasonable approaches, and which resources describe them? And which approaches would be easiest to scale up if the project grew into something one wanted to share?
For personal one-off »scripts«, I have a little macro in my .sbclrc:
(in-package #:cl-user)

(defmacro ql-require (&rest system-names)
  `(eval-when (:compile-toplevel :load-toplevel :execute)
     (ql:quickload ',system-names)))
Then my script file looks like this:
(in-package cl-user) ; naked symbols, I don't care much about package pollution here
(ql-require "alexandria" "arrows" …)

(defpackage my-script
  (:use cl alexandria arrows))

(in-package my-script)
;; script away!
I open these in Emacs/SLIME and compile with C-c C-k. Such a file often just has the »main« at toplevel, so it will just print at the REPL. In other cases, I hit C-c ~ afterwards to work from the REPL.
As soon as it's not just a script but something I might want to quickload entirely, I create a simple .asd file for it, which just moves the systems I depend on from the ql-require in the file to the :depends-on list in the system definition. The package definition can remain in the one .lisp file. All my Common Lisp projects are under ~/common-lisp/, which is in the ASDF load path by default. A very simple .asd file can look like this:
(in-package #:asdf-user) ; uninterned symbols as string designators avoid package pollution

(defsystem "my-system" ; system names are lower-case strings
  :depends-on ("alexandria" "arrows" …)
  :serial t
  :components ((:file "my-file")))
The file name of the system definition should be the same as the system name (my-system.asd here). If you put multiple systems into a single file, their names should share the file name as a prefix, optionally followed by a slash and some qualifying suffix (e.g. "my-system" and "my-system/test"). This ensures that ASDF can quickly find the right file to load without having to load it first.
As soon as I split the functionality into several files, I will usually also put the defpackage into its own file.
As soon as I create multiple packages, I will usually create subdirectories for each of them (:modules in :components in the system definition). (ASDF has an option to make a so-called package inferred system, but I prefer the central system definition.)
If you want to share a system, you'd probably want to add an :author and a :license to the system definition (even if it's just CC0/Public Domain). If you think that it's of general interest, you can submit it to Quicklisp.
The simple solution would indeed be to define a system using ASDF. This is really all that quickproject does, and it is (IMO) a perfectly reasonable solution even for smaller projects.
What you would typically do, assuming you place your code in a somewhat standard location on your system (i.e. you do not have Lisp projects everywhere on your file system!), is create a symlink from ~/quicklisp/local-projects/ to your personal projects directory. This way, after creating a project with quickproject:make-project (or writing the .asd file by hand ...) in this directory, Quicklisp will be able to find it and you can then quickload it directly, along with its dependencies if you have specified them in the .asd file.
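As a sketch of that workflow at the REPL (the project name my-project and its dependency list are illustrative):
;; one-time scaffolding of a new project under a directory Quicklisp scans
(ql:quickload "quickproject")
(quickproject:make-project #p"~/quicklisp/local-projects/my-project/"
                           :name "my-project"
                           :depends-on '("alexandria"))
;; afterwards the system and its dependencies load in one go
(ql:quickload "my-project")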
To make things clearer: Quicklisp is "simply" a tool built on top of ASDF which is able to download (and then load) dependencies, their dependencies, and so on, provided they are specified in a .asd file located in a place it knows about. An "equivalent" could be Python's pip. ASDF, on the other hand, specifies a way to define systems, that is, sets of source files that should be compiled together, in a specific order, using some dependencies, and so on. At a lower level, this would be more like C/C++'s make or CMake.

vi/vim - custom formatting depending on presence of special file or tag inside code

Question
Is there a simple/reliable way to have Vim, on a per-project/directory basis, either detect a special file (i.e. a custom .vimrc with a couple of settings), or change run-time settings based on the presence of a special tag/string/hash in a comment at the beginning of a C source (.c) or header (.h) file? The string/hash must map to a function/setting in the .vimrc file, and must not contain the actual settings themselves.
Background
I have a multi-developer project where we all share a common set of code style settings for our various editors (Emacs and Vim, primarily), and we all adhere strictly to these settings: newline style (CR versus CR+LF), indentation (width, hard tabs versus spaces), and so on.
Problem
I'm creating a few new projects that, for reasons beyond our control (i.e. a static code analysis tool we have to use), will require different style settings from ours. There are ways to bypass this in the static code analysis tool, but there is a non-technical/legal requirement that we avoid disabling "features" of this tool.
For each of these new projects, I would like to somehow make vi/vim aware of some special flag, either by the presence of a special file in the root of the project's directory structure, or by a special keyword/tag/hash/etc. I could put inside a /* C-style block comment */. When vi/vim detects this "trigger", I would want it to invoke a function that overrides the style settings for newlines, indentation, etc. If this is possible, is it also possible to have several mutually exclusive such "triggers", so that everyone has a common .vimrc and the project determines which style to use?
Question - redux
Is there a straightforward way to accomplish this?
One solution: modelines (:help modeline) for Vim and file variables for Emacs.
Those are special comments you put in your files that are interpreted by your editor. You can use them to set indent style, file encoding, etc.
In my opinion, modelines are ugly noise.
One solution for Vim: .exrc (:help 'exrc').
You can put your project-specific settings in a .exrc file at the root of your project. The manual claims this solution is insecure, but I fail to see how a normally functioning adult could be bitten by it. YMMV.
One solution for Vim: directory-specific autocommands.
That's the safer alternative mentioned at the end of :help 'exrc', but it requires each contributor to add stuff to their own vimrc, so… not that useful, I guess.
The definitive solution: editorconfig.
You put your settings in a .editorconfig at the root of your project and let each contributor's IDE/editor deal with it.
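A minimal sketch of such a file (the values are illustrative, not your project's actual style):
# .editorconfig at the project root
root = true

[*.{c,h}]
indent_style = space
indent_size = 4
end_of_line = lf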
... to change run-time settings based on the presence of a special tag/string/hash in a comment at the beginning of a c source (.c) or header (.h) file?
Yes, they're called modelines. http://vim.wikia.com/wiki/Modeline_magic
They can appear at the start or end of files.
An example from some C sources of mine:
/* vim:ft=c:expandtab:sw=4:ts=4:sts=4:
*/
See :help modeline in vim for more info.
Central configuration
If it's okay to configure the specific commands / local exceptions centrally, you can put such autocmds into your ~/.vimrc:
:autocmd BufRead,BufNewFile /path/to/dir/* setlocal ts=4 sw=4
It is important to use :setlocal instead of :set, and likewise :map <buffer> ... and :command! -buffer ....
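For instance, wrapped in an augroup so that re-sourcing the vimrc does not register duplicate autocommands (the path, settings, and mapping are all illustrative):
augroup project_style
  autocmd!
  autocmd BufRead,BufNewFile /path/to/dir/* setlocal tabstop=4 shiftwidth=4 noexpandtab
  autocmd BufRead,BufNewFile /path/to/dir/* nnoremap <buffer> <F5> :make<CR>
augroup END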
On the other hand, if you want the specific configuration stored with the project (and don't want to embed this in all files via modelines), you have the following two options:
Local config with built-in functionality
If you always start Vim from the project root directory, the built-in
:set exrc
enables the reading of a .vimrc file from the current directory. You can place the :set ts=4 sw=4 commands in there.
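A sketch of such a project-local file, with illustrative settings:
" ./.vimrc (or ./.exrc) at the project root, read because 'exrc' is set
set tabstop=4 shiftwidth=4 expandtab
set fileformats=unix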
Local config through plugin
Otherwise, you need the help of a plugin; there are several on vim.org; I can recommend the localrc plugin (especially with my own enhancements), which even allows local filetype-specific configuration.
Note that reading configuration from the file system has security implications; you may want to :set secure.
I've covered the main alternatives in this answer: https://stackoverflow.com/a/456889/15934 (yes, your question is almost a duplicate: different formulation, but same solutions).
Modelines are really limited: you have to use a plugin to set things that are not vim options.
.exrc is only read from the current directory; it doesn't look in parent directories.
editorconfig is restricted to a very specific set of options: don't expect it to forward plugin settings such as where the compilation directories are (this is how I support multiple compilation modes with CMake -- others prefer playing with ccache and tuning the CMakeCache, but that doesn't work well when using g++ and clang++ one after the other), how the linter shall be called, your naming conventions...
autocommands don't scale and cannot easily be ported from one directory to another.
In the end, the best solutions are plugin-based, IMO. There are plenty of plugin solutions; see the non-exhaustive list at the end of my local_vimrc plugin's README.
Note also that since my previous answer, I've initiated another experiment to simplify project management. For instance, I introduce p:variables which are variables shared among all buffers belonging to a project.

Vim autocomplete from ctags for system headers only works when popup is triggered manually

On OS X, I generated a set of ctags for the system includes using the following command:
ctags -f c -h ".h" -R --c-kinds=+p --fields=+iaS --extra=+q /usr/include
This was run inside a ~/.vim/ctags/ directory, where I put all of the ctags I generate for system-wide header files (I also have stuff for ROS and CPP that I load conditionally, but that's neither here nor there).
Anyway: the ctags file is set correctly in my .vimrc, and Vim can definitely see the tags, but for some reason the autocomplete popup will only display results from #included header files if I write out the entire symbol and then start backspacing. As an example, if I #include <string.h> in a project and want to call strlen(), and I start to type str into the active Vim buffer, I only get results for symbols that are currently in the buffer. But if I type out strlen, backspace one or two characters, and hit <C-n>, the popup menu is populated with matches from the included header files.
EDIT: It turns out that if I just hit "s" and then <C-n>, it works as well. So the problem seems to be that it only works when the popup menu is triggered manually, which makes me think it's a plugin problem (see below).
Additional information:
completeopt is set to completeopt=menuone,menu,preview,longest
I have OmniCppComplete, which I suppose could be interfering with the behavior. It is currently not being conditionally loaded for C++ files only. If you want me to edit and post my OmniCppComplete settings from my .vimrc, just ask.
I also have AutoComplPop installed, but I haven't done anything to configure it, so it's running with its default settings. I haven't really researched the plugin, so I have no idea if some of its behavior could be interfering with the results.
I have AutoTag and TagBar installed, but those should only be fiddling with the current directory's local tagfile.
I'm honestly pretty new to Vim, and I just have no idea where to start debugging this issue, whether it be with a random plugin or with my .vimrc settings.
Vim has many specific completion mechanisms.
<C-n> and <C-p> use many sources defined by the complete option. By default, they will provide completion using the current and all loaded and unloaded buffers, tags and included files. While you can usually get quite useful suggestions with these, it is a bit of a "catch-all" solution: it is not reliable at all if you work on reasonably large projects.
<C-x><C-]> uses only tags so it may be a little more useful to you.
And there are many more, see :h ins-completion.
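If the catch-all behaviour is the problem, one option is to trim the sources that <C-n>/<C-p> scan via the 'complete' option; a sketch (by default the option is .,w,b,u,t,i):
" keep the current buffer, other windows, and tags as completion sources;
" dropping 'i' stops the slow scanning of included files
set complete=.,w,t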
Omni completion is smarter: it typically runs a custom filetype-specific script that tries hard to provide meaningful completion. It is triggered by <C-x><C-o> and you can read about it in :h ft-c-omni. Omni completion is often a better choice when working with code.
Because you have two overlapping "auto"-completion plugins, it's hard to say which completion mechanism is at work. You should disable those plugins and play around with the different completion mechanisms available to you.
I have not mastered this yet, but I do think the following observation may be of help.
Vim's default autocompletion, which can be quite noisy, often gets in the way of what you call with <C-x><C-o>. Specifically, I found myself calling up my tag-based completions with <C-x><C-o> only to have them replaced, as I continued typing, by Vim's default suggestions drawn from my open buffers.
The suggestion to shut off one of the plugins makes sense. In my case the key was how to shut down Vim's default behavior. I have seen several people (among whom I now include myself) set the minimum length before the default completion triggers to a high number. For me that is:
let g:deoplete#auto_complete_start_length = 99
... this way you eliminate the default layer of completions that comes and goes regardless of what you actually intended.
This still feels like a hack but it helps keep my work focused on the tag-based completions.
FYI: I use NVIM on a Mac.

Portable access to sysconfdir via config.h

I'd like my application to have portable access to the configuration files installed during make install (dist_sysconf_DATA). Is it possible to access $(sysconfdir) via config.h?
It is possible, but according to official voices (as in: I am not going to search the manual for it now) you should not, so that it remains possible to override the value for specific objects at build time:
make CPPFLAGS="-USYSCONFDIR -DSYSCONFDIR=/blah" thisoneobject.o
Hence, what one is supposed to do:
AM_CPPFLAGS = -DSYSCONFDIR=\"${sysconfdir}\"
If you're using autoheader, adding the following to your configure.ac will output a SYSCONFDIR macro in your config.h, defined to the expanded value of $sysconfdir (resolving the ${prefix}/etc default):
if test "x$sysconfdir" = 'x${prefix}/etc'; then
if test "x$prefix" = 'xNONE'; then
sysconfdir=$ac_default_prefix/etc
else
sysconfdir=$prefix/etc
fi
fi
AC_DEFINE_UNQUOTED([SYSCONFDIR], ["$sysconfdir"], [location of system configuration directory])
But I would strongly recommend against doing that; instead, stick with the -DSYSCONFDIR flag. It's less code and therefore less prone to something going wrong. A condition in configure.ac such as the one above may not be portable, and may not cover every case that might be encountered. Using -DSYSCONFDIR is the best option. Sometimes appearance just doesn't matter.
What I believe is most commonly done (and this is what I do):
Add the following to your Makefile.am:
AM_CPPFLAGS = -DSYSCONFDIR='"$(sysconfdir)"'
And now you can access SYSCONFDIR in your source.
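For instance, a minimal sketch that consumes the macro (the myapp.conf file name is made up):
#include <stdio.h>

int main(void)
{
    /* SYSCONFDIR comes from AM_CPPFLAGS above, e.g. "/usr/local/etc" */
    printf("reading configuration from %s/myapp.conf\n", SYSCONFDIR);
    return 0;
}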

Locating data files in C program built with Autotools

I have a C program built using Autotools. In src/Makefile.am, I define a macro with the path to installed data files:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"'
The problem is that I need to run make install before I can test the binary (since it needs to be able to find the data files).
I can define another macro with the path of the source tree so the data files can be located without installing:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"' -DAM_TOPDIR='"$(abs_top_srcdir)"'
Now, I would like the following behavior:
If the binary was installed via make install, use AM_INSTALLDIR to fetch data files.
If the binary was not installed, use AM_TOPDIR to fetch data files.
Is this possible? Is there a better approach to this problem?
What I do (in http://rhdunn.github.com/cainteoir/) is:
const char *basedir = getenv("CAINTEOIR_DATADIR");
if (!basedir)
    basedir = DATADIR "/" PACKAGE; // e.g. /usr/share/cainteoir-engine
and then run it (in tests/harness.py) as:
CAINTEOIR_DATADIR=`pwd`/data src/apps/metadata/metadata test_file.epub
This then allows the user to change the location the data is loaded from if they wish.
Making the program able to use a run-time configuration, as proposed by reece, is a good solution. If for some reason you do not want it to be configurable at run time, a common solution is to build a test binary differently from the installed binary (there are other problems associated with this, in particular ensuring that the program you are testing behaves consistently with the program that is installed). An easy way to do that is something like:
bin_PROGRAMS = foo
check_PROGRAMS = test-foo
test_foo_SOURCES = $(foo_SOURCES)
AM_CPPFLAGS = -DINSTALLDIR='"$(pkgdatadir)"'
test_foo_CPPFLAGS = -DINSTALLDIR='"$(abs_top_srcdir)"'
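(Since test_foo_CPPFLAGS is defined, Automake compiles test-foo's objects with it instead of AM_CPPFLAGS, so the two binaries pick up different INSTALLDIR values from the same sources.)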
Rather than using a binary with a different name, you might want to have a dedicated tests directory and build the program using the same name as the original.
Note that I've changed the name from AM_INSTALLDIR to INSTALLDIR. Automake reserves names beginning with "AM_" for its own use, and by using that name you are stomping on Automake's namespace.
A bit of additional information first: the data files are under active development, and I have various scripts that need to call the binaries with local data files, whereas installed binaries should use the stable, installed data files.
My original solution used an environment variable, as proposed by reece. But I didn't want to manage setting environment variables in various places, and I didn't want any risk of the wrong data files being picked up by mistake.
So the solution I ended up with was to define macros for both locations at build time, and to add a flag (-local) to the binaries that forces the local data files to be used.
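A minimal sketch of that flag handling, using the INSTALLDIR/TOPDIR macro names suggested in the previous answer (all details are illustrative, not the actual code):
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* installed location by default; -local forces the source tree */
    const char *datadir = INSTALLDIR;
    if (argc > 1 && strcmp(argv[1], "-local") == 0)
        datadir = TOPDIR;
    printf("loading data files from %s\n", datadir);
    return 0;
}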
