How can I set 'no-tty' for 'stack sdist' without disabling it for 'gpg' in general?

If I run
stack sdist --sign ...
I get
Error signing with GPG
gpg: cannot open `/dev/tty': Device not configured
unless I place
no-tty
in my ~/.gnupg/gpg.conf.
But if I do that then
gpg -e ...
gives
gpg: Sorry, no terminal at all requested - can't get input
Is there a way to configure gpg or stack sdist so that I can use no-tty for the latter but not the former? For example can I use a local gpg.conf in the projects where I run stack sdist? Or are there command line args I can provide to either gpg or stack sdist that will have the same effect? Perhaps there is a way to pass --no-tty through stack sdist to the underlying GPG?
As an added constraint, I need this to work in an IDE where I am limited to a single shell command, so I can't batch this or write a small script to do it.

Looking at the source code of stack's GnuPG interface, there is no way to pass extra GnuPG parameters through stack:
(_hIn,hOut,hErr,process) <-
    gpg
        [ "--output"
        , "-"
        , "--use-agent"
        , "--detach-sig"
        , "--armor"
        , toFilePath path]
Given that having Microsoft add support for a proper TTY variable is not realistic, you're probably best off getting stack changed. I'd consider it a bug that stack does not pass --no-tty to GnuPG when stack's own --no-terminal flag is set. Additionally, some way to pass custom options through to GnuPG seems a reasonable thing to support, and worth a feature request.
As a workaround, you should be able to wrap stack sdist --sign in some kind of script. A possible solution would be to set a GNUPGHOME environment variable pointing to a stack-specific copy of your GnuPG home directory that includes no-tty as a configuration option (so you keep a "default" GnuPG home directory for day-to-day work and a separate one for code signing). You might also be able to set such an environment variable in the IDE.
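A minimal sketch of that workaround as a wrapper script, assuming the usual ~/.gnupg location; the ~/.gnupg-stack directory name and the final stack invocation are illustrative, not anything stack itself defines:

```shell
#!/bin/sh
# Build a stack-specific GnuPG home that enables no-tty, then point
# GNUPGHOME at it only for the signing run (paths are hypothetical).
STACK_GNUPGHOME="${HOME}/.gnupg-stack"
mkdir -p "$STACK_GNUPGHOME"
chmod 700 "$STACK_GNUPGHOME"
# Start from the day-to-day config if one exists, then enable no-tty.
[ -f "${HOME}/.gnupg/gpg.conf" ] && cp "${HOME}/.gnupg/gpg.conf" "$STACK_GNUPGHOME/gpg.conf"
grep -qx 'no-tty' "$STACK_GNUPGHOME/gpg.conf" 2>/dev/null \
  || echo 'no-tty' >> "$STACK_GNUPGHOME/gpg.conf"
# Sign with the special home; plain gpg keeps using ~/.gnupg as before.
if command -v stack >/dev/null; then
  GNUPGHOME="$STACK_GNUPGHOME" stack sdist --sign
fi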

Related

Create directory structure in /var/lib using autotools and automake

I'm using autotools on a C project that, after installation, needs a particular directory structure in /var/lib as follows:
/var/lib/my-project/
data/
configurations/
local/
extra/
inputs/
I'm currently using the directive AS_MKDIR_P in configure.ac like so:
AS_MKDIR_P(/var/lib/my-project/data)
AS_MKDIR_P(/var/lib/my-project/configurations/local)
AS_MKDIR_P(/var/lib/my-project/configurations/extra)
AS_MKDIR_P(/var/lib/my-project/inputs)
But it needs the configure script to be run with root permissions, which I don't think is the way to go. I think the instructions to create this directory structure need to be in Makefile.am, so that make install creates them rather than configure, but I have no idea how to do that.
You really, really, really do not want to hard-code /var/lib/my-project. As the project maintainer, you have the right to specify relative paths, but the user may change DESTDIR or prefix. If you ignore DESTDIR and prefix and just install your files in /var/lib without regard for the user's requests, then your package is broken. It is not just slightly damaged; it is completely unusable. The autotools packaging must not specify absolute paths; that is for downstream packagers (i.e., those that build *.rpm or *.deb or *.dmg or ...). All you need to do is add something like this to Makefile.am:
configdir = $(pkgdatadir)/configurations
localdir = $(configdir)/local
extradir = $(configdir)/extra
inputdir = $(pkgdatadir)/inputs
mydatadir = $(pkgdatadir)/data
config_DATA = cfg.txt
local_DATA = local.txt
extra_DATA = extra.txt
input_DATA = input.txt
mydata_DATA = data.txt
This will put input.txt in $(DESTDIR)$(pkgdatadir)/inputs, etc. If you want that final path to be /var/lib/my-project, then you can specify datadir appropriately at configure time. For example:
$ CONFIG_SITE= ./configure --datadir=/var/lib > /dev/null
This will assign /var/lib to datadir, so that pkgdatadir will be /var/lib/my-project and a subsequent make install DESTDIR=/path/to/foo will put the files in /path/to/foo/var/lib/my-project/. It is essential that your auto-tooled package honor things like prefix (which for these files was essentially overridden here by the explicit assignment of datadir) and DESTDIR. The appropriate time to specify paths like /var/lib is when you run the configure script. For example, you can add the options to the configure script in your rpm spec file or in debian/rules, or in whatever file your package system uses. The auto-tools provide a very flexible packaging system which can be easily used by many different packaging systems (unfortunately, the word "package" is highly overloaded!). Embrace that flexibility.
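To see how these variables compose, here is a plain-shell illustration of the path arithmetic; the values mirror the example above and are not read from a real build tree:

```shell
#!/bin/sh
# Emulate automake's path composition for the example configure call.
datadir=/var/lib                 # from: ./configure --datadir=/var/lib
PACKAGE=my-project
pkgdatadir="$datadir/$PACKAGE"   # automake default: $(datadir)/$(PACKAGE)
DESTDIR=/path/to/foo             # from: make install DESTDIR=/path/to/foo
echo "$DESTDIR$pkgdatadir/inputs"   # → /path/to/foo/var/lib/my-project/inputs
```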
According to autotools documentation (here and here), there are hooks that you can specify in Makefile.am that will run at specific times during the installation. For my needs I will use install-exec-hook (or install-data-hook) which will be run after all executables (or data) have been installed:
install-exec-hook:
$(MKDIR_P) /var/lib/my-project/data
$(MKDIR_P) /var/lib/my-project/configurations/local
$(MKDIR_P) /var/lib/my-project/configurations/extra
$(MKDIR_P) /var/lib/my-project/inputs
MKDIR_P is a variable containing the command mkdir -p, or an equivalent if the system's mkdir doesn't support -p. To make it available in Makefile.am you have to use the macro AC_PROG_MKDIR_P in configure.ac.
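Note that the hook above hard-codes absolute paths, which the first answer warns against; a DESTDIR-friendly variant would prefix each path with $(DESTDIR). As a sketch, here is what such a hook effectively does, staged into a scratch directory (the layout mirrors the question; the $(DESTDIR) handling is an addition, not part of the answer above):

```shell
#!/bin/sh
# MKDIR_P usually expands to 'mkdir -p'; stage into DESTDIR instead of /.
MKDIR_P='mkdir -p'
DESTDIR=$(mktemp -d)
$MKDIR_P "$DESTDIR/var/lib/my-project/data"
$MKDIR_P "$DESTDIR/var/lib/my-project/configurations/local"
$MKDIR_P "$DESTDIR/var/lib/my-project/configurations/extra"
$MKDIR_P "$DESTDIR/var/lib/my-project/inputs"
ls "$DESTDIR/var/lib/my-project"   # configurations  data  inputs
```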

How to add a new filter to ffmpeg library

I am trying to add functionality to the FFmpeg library. The issue is that the developer guide gives only general instructions on how to do it. I know that when I want to add something to ffmpeg I need to register the new functionality and rebuild the library, so that I can then call it like so:
ffmpeg -i input.avi -vf "myfilter" out.avi
I do not want to officially contribute. I would like to create the extra functionality and test it. The question is: is there a skeleton file where the basic structure is already in place, so that you just get a pointer to a new frame and process it? Some directions would help, because the source files are rather hard to read without understanding the functions they call internally.
The document in the repo is worth a read: ffmpeg\doc\writing_filters.txt
The steps are:
Add an appropriate line to ffmpeg\libavfilter\Makefile:
OBJS-$(CONFIG_MCSCALE_CUDA_FILTER) += vf_mcscale_cuda.o \
                                      vf_mcscale_cuda.ptx.o scale_eval.o
Add an appropriate line to ffmpeg\libavfilter\allfilters.c:
extern AVFilter ff_vf_mcscale_cuda;
The change in step 2 is not recognized until ./configure scans the files again, so re-run ./configure; when you next run make, the filter should be built. Happy days.
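The two edits amount to appending one line to each file; the snippet below simulates them in a scratch directory so you can see exactly what ./configure later scans for (the directory is a stand-in, not a real ffmpeg tree):

```shell
#!/bin/sh
# Simulate the two one-line edits from the steps above in a scratch tree.
demo=$(mktemp -d)
mkdir -p "$demo/libavfilter"
# Step 1: register the object file in libavfilter/Makefile.
printf '%s\n' 'OBJS-$(CONFIG_MCSCALE_CUDA_FILTER) += vf_mcscale_cuda.o' \
  >> "$demo/libavfilter/Makefile"
# Step 2: declare the filter in libavfilter/allfilters.c.
printf '%s\n' 'extern AVFilter ff_vf_mcscale_cuda;' \
  >> "$demo/libavfilter/allfilters.c"
grep -l mcscale_cuda "$demo/libavfilter"/*
# In the real tree: re-run ./configure, then make.
```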
I was faced with the problem of adding a transform_v1 filter (see details on the Transform360 filters at https://www.diycode.cc/projects/facebook/transform360) to ffmpeg version N-91732-g1124df0. I did exactly what writing_filters.txt says, but vf_transform_v1.o was not linked.
I added the object file (vf_transform_v1.o) to the Makefile of libavfilter:
OBJS-$(CONFIG_TRANSFORM_V1_FILTER) += vf_transform_v1.o
I checked that the define CONFIG_TRANSFORM_V1_FILTER=1 is present in config.h.
However, after compilation transform_v1 was still not recognized.
I resolved this issue in an awkward way: I added vf_transform_v1.o to the OBJS list explicitly, without conditioning on the global define CONFIG_TRANSFORM_V1_FILTER:
OBJS += vf_transform_v1.o

How CYGWIN relates to the C: drive

I want to understand how the downloaded application of Cygwin relates to the "heart"/"mind" of my system. Yes, I am as green as the Emerald Isles. So, please spare me. Let me give an explanation of the chain of events and my perception of them ("backstory"), so that the motivation for my question can be understood.
Currently, on Cygwin, I can see that it "sees" QUARKy (my [super]username on this comp). And currently I can see that it (Cygwin) cannot "see" the files contained therein. (So there's, metaphorically speaking, a "cognitive dissonance": somehow it automatically "knows" of QUARKy, yet it knows nothing of it!) It looks blank when I do the "ls" command in that directory.
Now, if my memory serves me, I could swear that the FIRST time I downloaded Cygwin (I had to do a... whatever the appropriate terminology for a "master reset" of a comp. is) I could go into that directory and see all of my files, just like how it would look if I went about accessing the files more conventionally (do you say "through a GUI"?). I don't recall doing anything in particular to enable myself to have these privileges.
So, in part, I wonder if I failed to download a package the second time which I had the first. I am certainly having much trouble with the packages this second time around.
I am also wondering if, though I could swear I am vividly recalling actual experiences of mine, I am wrong that I ever had such an ability. Therefore, I wonder what "special thing" needs to be done so that Cygwin can see the files contained in this user directory. And I would please like it explained to me how the added special feature enables this privilege. I do see that it should make sense that, having been downloaded independently, this environment should not "naturally" know anything about my comp. But then it is further striking, I think, that it should know of "QUARKy" and make it a directory. Though perhaps I place too much weight on this last feature. After all, it is just a name and might naturally default to being a directory. Why stop there, though?
See how maddening this is for me?!!!
:-( <---- That's what I look like, from now on.
Please help!
Cygwin is not an isolated environment like, for example, a virtual machine inside VirtualBox. It works on the same filesystem as other Windows applications. Its files can be accessed through any file manager (e.g. Total Commander), and it can access any other file, including your home folder in Windows.
Only one thing causes confusion: Cygwin uses UNIX-like paths, whereas Windows uses DOS paths. There is a translation method to convert between them. The cygpath utility can do this translation automatically, but it can also be done by hand. Here are some examples:
#############################################
# converting to Windows path format:
#############################################
$ cygpath --windows /
C:\cygwin
$ cygpath --windows /home
C:\cygwin\home
$ cygpath --windows /home/username
C:\cygwin\home\username
$ cygpath --windows /cygdrive/c
C:\
#############################################
# converting to Cygwin (UNIX) path format:
#############################################
$ cygpath --unix 'C:\Users\username'
/cygdrive/c/Users/username
$ cygpath --unix 'C:\Windows'
/cygdrive/c/Windows
$ cygpath --unix 'D:\Games'
/cygdrive/d/Games
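Since the translation "can be done by hand", here is a rough sketch of the drive-letter case using GNU sed; the function name is made up for illustration, and it does not handle paths under the Cygwin root itself (like /home above):

```shell
#!/bin/sh
# Rough Windows-to-UNIX path translation (GNU sed; drive paths only).
win_to_unix() {
  printf '%s\n' "$1" \
    | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/cygdrive/\L\1|'
}
win_to_unix 'C:\Users\username'   # → /cygdrive/c/Users/username
win_to_unix 'D:\Games'            # → /cygdrive/d/Games
```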

Autotools suite misplaces "man" file leading to installation failure

In a software I have to tweak, the man file is located under doc/ along with a simple Makefile.am file:
man_MANS = software.1
EXTRA_DIST = $(man_MANS)
Upon installation, I expect make install to copy the manual under /usr/local/share/man/, but the script instead tries to install the man page under /usr/local/share/man/man1, which does not exist, throwing an error and stopping the process.
I would expect a similar behavior if I assigned software.1 to man1_MANS, though.
What is going on? How is it possible that automake does not create missing folders?
man_MANS tries to figure out which section to put the manual in based on the extension you gave it, so it is correct in this case that it installs into ${mandir}/man1.
Since you say that MKDIR_P is empty in your output, make sure that AC_PROG_MKDIR_P is called in your configure.ac (it should be invoked automatically by AM_INIT_AUTOMAKE, but since you said your setup is old, it might have issues).

How to find the callers and callee of a function in C code in vi/vim?

I want to know how I can easily click (or use some simple shortcut) on a function name and find all of its callers and callees, or open where it has been defined. Most of the manuals on the web are really hard to follow or simply don't work. Say I want to click on allocuvm and see where it is defined:
uint newstk=allocuvm(pgdir, USERTOP-PGSIZE, USERTOP);
cscope minimal example
Ingo mentioned it; here is an example.
First you should set on your .vimrc:
set cscopequickfix=s-,c-,d-,i-,t-,e-
Then go to the base directory of your project and run:
cscope -Rb
This generates a cscope.out file which contains the parsed information. Generation is reasonably fast, even for huge projects like the Linux kernel.
Open vim and run:
:cs add cscope.out
:cs find c my_func
c is a mnemonic for callers. The other queries cscope provides are also possible; the mnemonics are listed under:
:help cscope
This adds a list of the callers to the quickfix list, which you can open with:
:copen
Go to the line that interests you and hit enter to jump there.
To find callers of the function name currently under the cursor, add to your .vimrc:
function! Csc()
  cscope find c <cword>
  copen
endfunction
command! Csc call Csc()
and enter :Csc<enter> when the cursor is on top of the function.
TODO:
do it for the current function under cursor with a single command. Related: Show function name in status line
automatically add the nearest database (parent directories) when you enter a file: how to auto load cscope.out in vim
interactively open the call graph like Eclipse. Related: Generate Call-Tree from cscope database
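One common refinement of the generation step above is to index only the C sources via an explicit cscope.files list; the tiny two-file project below is fabricated purely for illustration:

```shell
#!/bin/sh
# Build a cscope database from an explicit list of C sources only.
mkdir -p cscope-demo
printf 'int callee(void) { return 0; }\n' > cscope-demo/a.c
printf 'int callee(void);\nint caller(void) { return callee(); }\n' > cscope-demo/b.c
(
  cd cscope-demo
  find . -name '*.c' -o -name '*.h' > cscope.files
  # -b: build only; -q: extra inverted index; -k: skip /usr/include
  if command -v cscope >/dev/null; then cscope -b -q -k; fi
)
```

After that, `:cs add cscope-demo/cscope.out` in vim works as described above.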
A word of advice: I love vim, but it is too complicated for me to setup this kind of thing. And it does not take into account classes e.g. in C++. If a project matters enough to you, try to get the project working on some "IDE". It may involve some overhead if the project does not track the IDE configuration files (which are auto-changing blobs that pollute the repo...), but it is worth it to me. For C / C++, my favorite so far was KDevelop 4.
For that, Vim integrates with the cscope tool; see :help cscope for more information.
In vi, / is the search command and . repeats the last change. You could also use sed (the stream editor) if it is a large file, and grep can get you the line numbers; read the man pages.
