Pass ARRAY arguments to set-alias in modulefile - arrays

Hi all.
I need to use a function that must be declared within an environment module, so I'm trying to define it with set-alias.
Here's the tricky thing: the parameter it takes is an array.
So far, as a test I've tried this:
set-alias test {
declare -a argArray=(\"${#}\");
echo \${\#argArray}
}
which returns zero : (
0
The (potentially) awful number of backslashes is needed, as module doesn't get along well with single quotes (or so the manpage says).
Can somebody explain to me what's going on?
thanks

Don't use set-alias for writing functions.
Environment Modules is Tcl-based, so you can use proc to write functions:
proc test {arg1} {
    return [llength $arg1]
}
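For example (a minimal sketch; the variable name is made up), calling the proc with a Tcl list:
set myArgs [list a b c]
puts [test $myArgs]    ;# prints 3, the number of list elements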

Related

How can I check if a certain function could be indirectly called by another certain function?

Assume that in a project written in C there is a function named A and a function named B.
How can I verify whether function A could be in the call tree of function B, i.e. B->C->D->...->A?
This question came up when I was thinking about which libvirt APIs may invoke the QEMU QMP command "query-block". Since "query-block" is only called by the function qemuMonitorJSONQueryBlock, the specific question becomes: how can I find which libvirt APIs may invoke qemuMonitorJSONQueryBlock?
I think dynamic analysis would have a hard time answering this question because lots of tests would be required; it should be a question for static analysis. But I couldn't find proper tools or methods to solve it, so I summarized the question as in the first paragraph.
You can try CppDepend and its code query language to create advanced queries about the dependencies. In your case you can use a query like this one:
from m in Methods
let depth0 = m.DepthOfIsUsedBy("__Globals.B()")
where depth0 >= 0 && m.SimpleName=="A" orderby depth0
select new { m, depth0 }
You can use the GNU cflow utility, which analyzes a collection of source files written in the C programming language and outputs a graph charting the dependencies between the various functions.
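For example, a minimal invocation might look something like this (the source paths are placeholders; see cflow(1) for the options your version supports):
# Print the call tree rooted at B, then look for A anywhere beneath it
cflow --main=B src/*.c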
I think dynamic analysis would have a hard time answering this question because lots of tests would be required; it should be a question for static analysis.
That's true, basically because you can call functions that were never linked into your program. With the dlopen(3) function and friends, you can dynamically link a completely unknown function into your program and call it. There's no way to check whether a function pointer actually stores a valid pointer and, as a result, whether it will be called (or whether it is in the call graph of some initial function).
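A minimal C sketch of why this defeats static analysis (the library and symbol names are made up):
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The callee is only known at run time, so no static call graph
       can connect main() to whatever "mystery_func" turns out to be. */
    void *handle = dlopen("libmystery.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    void (*fn)(void) = (void (*)(void))dlsym(handle, "mystery_func");
    if (fn)
        fn();
    dlclose(handle);
    return 0;
}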
I find that cscope can help solve this question. It is a developer's tool for browsing source code. You can get the callers of a function as follows:
1. Change to the source code directory, then generate the cscope database file named cscope.out:
cd libvirt
cscope -bR
2. Find the callers of func1 with cscope: run cscope -d -f cscope.out -L3 func1; the second column of the output is the caller of the function. For example:
cscope -d -f ./cscope.out -L3 qemuMigrationDstPrepareDirect
The result:
src/qemu/qemu_driver.c ATTRIBUTE_NONNULL 12487 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare2 12487 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare3 12722 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare3Params 12809 ret = qemuMigrationDstPrepareDirect(driver, dconn,
Note: cscope will mistakenly report function attribute declarations such as ATTRIBUTE_* as callers; these should be skipped.
3. Then recursively find the callers of each caller, and finally pick out the target B->...->A call trace.
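A rough sketch of automating that recursion (a hypothetical helper script; it also filters out the ATTRIBUTE_* false positives mentioned above):
#!/bin/sh
# callers.sh FUNC [DEPTH] [INDENT] -- recursively print the callers of FUNC, DEPTH levels deep
func=$1
depth=${2:-3}
indent=${3:-}
[ "$depth" -le 0 ] && exit 0
cscope -d -f cscope.out -L3 "$func" |
    awk '{ print $2 }' | grep -v '^ATTRIBUTE_' | sort -u |
    while read -r caller; do
        echo "${indent}${caller} -> ${func}"
        "$0" "$caller" $((depth - 1)) "  ${indent}"
    done
Run it from the directory containing cscope.out, e.g. ./callers.sh qemuMonitorJSONQueryBlock 5.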
doxygen can generate call graphs and caller graphs. If you configure it for an unlimited number of calls in a graph, you will be able to get the information you need.
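The relevant Doxyfile settings might look roughly like this (a sketch; option defaults and limits vary between Doxygen versions):
# Raise the node cap and remove the depth limit so large graphs are not truncated
EXTRACT_ALL         = YES
HAVE_DOT            = YES
CALL_GRAPH          = YES
CALLER_GRAPH        = YES
DOT_GRAPH_MAX_NODES = 10000
MAX_DOT_GRAPH_DEPTH = 0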

"Use" the Perl file that h2ph generated from a C header?

The h2ph utility generates a .ph "Perl header" file from a C header file, but what is the best way to use this file? Like, should it be require or use?:
require 'myconstants.ph';
# OR
use myconstants; # after mv myconstants.ph myconstants.pm
# OR, something else?
Right now, I am doing the use version shown above, because with that one I never need to type parentheses after the constant. I want to type MY_CONSTANT and not MY_CONSTANT(), and I have use strict and use warnings in effect in the Perl files where I need the constants.
It's a bit strange though to do a use with this file since it doesn't have a module name declared, and it doesn't seem to be particularly intended to be a module.
I have just one file I am running through h2ph, not a hundred or anything.
I've looked at perldoc h2ph, but it doesn't mention the intended import mechanism at all.
Example input and output: For further background, here's an example input file and what h2ph generates from it:
// File myconstants.h
#define MY_CONSTANT 42
...
# File myconstants.ph - generated via h2ph -d . myconstants.h
require '_h2ph_pre.ph';
no warnings qw(redefine misc);
eval 'sub MY_CONSTANT () {42;}' unless defined(&MY_CONSTANT);
1;
Problem example: Here's an example of "the problem," where I need to use parentheses to get the code to compile with use strict:
use strict;
use warnings;
require 'myconstants.ph';
sub main {
print "Hello world " . MY_CONSTANT; # error until parentheses are added
}
main;
which produces the following error:
Bareword "MY_CONSTANT" not allowed while "strict subs" in use at main.pl line 7.
Execution of main.pl aborted due to compilation errors.
Conclusion: So is there a better or more typical way that this is used, as far as following best practices for importing a file like myconstants.ph? How would Larry Wall do it?
You should require your file. As you have discovered, use accepts only a bareword module name, and it is wrong to rename myconstants.ph to have a .pm suffix just so that use works.
The choice of use or require makes no difference to whether parentheses are needed when you use a constant in your code. The resulting .ph file defines constants in the same way as the constant module, and all you need in the huge majority of cases is the bare identifier. One exception to this is when you are using the constant as a hash key, when
my %hash = (CONSTANT => 99);
my $val = $hash{CONSTANT};
doesn't work, as you are using the string CONSTANT as a key. Instead, you must write
my %hash = (CONSTANT() => 99);
my $val = $hash{CONSTANT()};
You may also want to wrap your require inside a BEGIN block, like this
BEGIN {
require 'myconstants.ph';
}
to make sure that the values are available to all other parts of your code, including anything in subsequent BEGIN blocks.
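Putting that together, the problem example from the question compiles cleanly under strict (a sketch; it assumes the generated myconstants.ph is somewhere on @INC):
use strict;
use warnings;

BEGIN {
    require 'myconstants.ph';   # runs at compile time, so MY_CONSTANT is known below
}

sub main {
    print "Hello world " . MY_CONSTANT;   # no parentheses needed
}

main();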
The problem does somewhat lie with require.
Since require is a statement that is evaluated at run time, it cannot have any effect on the parsing of the later part of the script. So when perl reads MY_CONSTANT in the print statement, it does not yet know that the subroutine exists, and parses it as a bareword.
It is the same for eval.
One solution, as mentioned by others, is to put the require into a BEGIN block. Alternatively, you may forward-declare the subroutine yourself:
require 'some-file';
sub MY_CONSTANT;
print 'some text' . MY_CONSTANT;
Finally, for what it's worth, I have never used any .ph files in my own Perl programming.

How to trace a function using dtrace?

I was making a few changes to the dhcpagent command and, on testing, it sort of fails. I know which function is called last before dhcpagent exits, and I want to trace the control flow from dhcpagent to that particular function, let's say foo(). I am looking for who called foo(), who called that function, and so on, like a family tree from dhcpagent down to foo(). How do I do this? I have very basic knowledge of DTrace (how to construct a basic script, but not much more). Could you suggest a script or a resource from which I can learn and write the script myself?
What I did try:
pid$target::functionname:entry (where the target was dhcpagent, given on the command line)
Thanks
I think the following script can help you:
#!/usr/sbin/dtrace -Fs
pid$target:::entry,
pid$target:::return
{
}
The above script prints how each function is called, but the output may be awfully large!
If you only care about the dhcpagent module, I think the following script is a better choice:
#!/usr/sbin/dtrace -Fs
pid$target:dhcpagent::entry,
pid$target:dhcpagent::return
{
}
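With either script saved as, say, trace.d, you would point it at the process with dtrace's -p or -c options (the PID and paths here are only illustrative):
# attach to an already-running dhcpagent
pfexec dtrace -F -s trace.d -p 1234
# or start the command under DTrace
pfexec dtrace -F -s trace.d -c /sbin/dhcpagent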

perl syntax check without loading c library

I would like to check the syntax of my Perl module (as well as its imports), but I don't want it to check dynamically loaded C libraries.
If I do:
perl -c path_to_module
I get:
Can't locate loadable object for module B::Hooks::OP::Check in @INC
because B::Hooks::OP::Check loads a dynamic C library, and I don't want to check that...
You can't.
Modules can affect the scripts that use them in many ways, including how they are parsed.
For example, if a module exports
sub f() { }
Then
my $f = f+4;
means
my $f = f() + 4;
But if it were to export
sub f { }
the same code means
my $f = f(+4);
As such, a module must be loaded in order to parse the script that loads it. And to load a module is simply to execute it, be it written in Perl or C.
That said, some folks put together PPI to address the needs of people like you. It's not perfect (it can't be perfect, for the reasons stated above), but it will give useful results nonetheless.
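A minimal PPI sketch (it assumes PPI is installed; the file name is only an example) that inspects the module without executing any of it:
use strict;
use warnings;
use PPI;

# Parse the file as a document; nothing in it is loaded or run
my $doc = PPI::Document->new('path_to_module.pm')
    or die PPI::Document->errstr;

# For instance, list every use/require statement found in the file
for my $inc (@{ $doc->find('PPI::Statement::Include') || [] }) {
    print $inc->content, "\n";
}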
By the way, the proper way to syntax check a module is
perl -e'use Module;'
Using -c can report errors where none exist, and vice versa.
The syntax checker loads the included libraries because they might be applying changes to the syntax. If you're certain that this is not happening, you could prevent the inclusion by manipulating the load path and providing a fake B::Hooks::OP::Check.
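For instance, a stub might look like this (a rough sketch; whether an empty stand-in is enough depends on how the real module is used at load time):
# fakelib/B/Hooks/OP/Check.pm -- shadows the real XS module during the syntax check
package B::Hooks::OP::Check;
sub import { }   # do nothing instead of pulling in the C library
1;
You would then check with something like perl -Ifakelib -c path_to_module so that the fake directory is searched first.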

In MATLAB, can I have a script and a function definition in the same file?

Suppose I have a function f() and I want to use it in my_file.m, which is a script.
Is it possible to have the function defined in my_file.m?
If not, suppose I have it defined in f.m. How do I call it in my_file.m?
I read the online documentation, but it wasn't clear what the best way to do this is.
As of release R2016b, you can have local functions in scripts, like so:
data = 1:10; % A vector of data
squaredData = f(data); % Invoke the local function
function y = f(x)
y = x.^2;
end
Prior to release R2016b, the only type of function that could be defined inside a MATLAB script was an anonymous function. For example:
data = 1:10; % A vector of data
f = @(x) x.^2; % An anonymous function
squaredData = f(data); % Invoke the anonymous function
Note that anonymous functions are better suited to simple operations, since they have to be defined in a single expression. For more complicated functions, you will have to define them in their own files, place them somewhere on the MATLAB path to make them accessible to your script, and then call them from your script as you would any other function.
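For instance, the pre-R2016b two-file arrangement could look like this (the file and variable names follow the question and are otherwise arbitrary):
% File f.m -- must be on the MATLAB path or in the current folder
function y = f(x)
    y = x.^2;
end

% File my_file.m -- the script
data = 1:10;
squaredData = f(data); % calls the function defined in f.m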
The way I get around this limitation is to turn my scripts into functions that take no arguments (if I need variables from the global namespace, I either explicitly pass them into the function or use evalin to grab them).
Then you can define all the additional functions you need in the "script". It's a hack, but I have found it to be quite powerful in those cases where I need several non-trivial functions.
EDIT: Here's a simplistic example. All this can reside in a single file.
function [] = myScriptAsAFunction()
    img = randn(200);
    img = smooth(img);
    figure(1);
    imagesc(img);
    axis image;
    colorbar;
end

function simg = smooth(img)
    simg = img / 5;
end
You can do something like this (assuming your file is named my_file.m):
function my_file
%script here
end
function out = f(in)
%function here
end
If you click the Run button, the function my_file will be executed by default.
1) You cannot nest a function inside a script (prior to R2016b).
2) Make sure f.m is on your path or in the current directory, and you can call it like any other function.
As of R2016b, you can define local functions within a script.
x = 1;
y = add1(x);
function z = add1(x)
z = x + 1;
end
I have implemented the solution by John and found it useful, but there are a couple of caveats (in Octave; MATLAB possibly behaves similarly):
If the code inside your main function contains clear all before the auxiliary function is used, it will not work. In the file test3.m below, commenting/uncommenting clear all makes the code work/not work.
function [] = test3()
    %clear all
    a = myfunc( 1 );
    a
endfunction;
%---------------------------------
% Auxiliary functions
function retval = myfunc( a )
    retval = 2 * a;
endfunction;
It seems that, upon running a script, there is a first pass in which code outside functions is executed (in this case there is no such code) and the functions defined in the file (in this case test3 and myfunc) are added to the workspace. A second pass then executes the main function, which will not find myfunc if clear all is active.
As pointed out by chessofnerd, out of the box the variables in your main function do not end up in the workspace.
You can have many functions in a single file, but only the first one acts as the main function when you run the file; the others can only be used within that file. If you need to define a big function, you can split it into smaller functions and define them below it.
However, the simplest way to find the answer is just to try it.
