I would like to check the syntax of my Perl module (including its imports), but I don't want it to check dynamically loaded C libraries.
If I do:
perl -c path_to_module
I get:
Can't locate loadable object for module B::Hooks::OP::Check in @INC
because B::Hooks::OP::Check loads a dynamic C library, and I don't want to check that...
You can't.
Modules can affect the scripts that use them in many ways, including how they are parsed.
For example, if a module exports
sub f() { }
Then
my $f = f+4;
means
my $f = f() + 4;
But if it were to export
sub f { }
the same code means
my $f = f(+4);
As such, modules must be loaded in order to parse the scripts that load them. And to load a module is simply to execute it, be it written in Perl or C.
That said, some folks put together PPI to address the needs of people like you. It's not perfect (it can't be, for the reasons stated above), but it will give useful results nonetheless.
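For example, a minimal sketch using PPI (assuming PPI is installed; 'path_to_module' is the placeholder from the question):

use strict;
use warnings;
use PPI;

# Parse the file statically; nothing in it gets executed.
my $doc = PPI::Document->new('path_to_module')
    or die PPI::Document->errstr;

# List the modules the file would load.
for my $inc ( @{ $doc->find('PPI::Statement::Include') || [] } ) {
    print $inc->module, "\n" if $inc->module;
}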
By the way, the proper way to syntax check a module is
perl -e'use Module;'
Using -c can report errors where none exist, and vice versa.
The syntax checker loads the included libraries because they might be applying changes to the syntax. If you're certain that this is not happening, you could prevent the loading by manipulating the module search path and providing a fake B::Hooks::OP::Check.
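For example, a sketch of such a stub (this only helps if nothing actually needs the real XS code while the check runs):

# fakelib/B/Hooks/OP/Check.pm - empty stand-in for the real XS module
package B::Hooks::OP::Check;
sub import { }    # accept and ignore any import arguments
1;

Then run perl -I fakelib -c path_to_module so that the stub is found ahead of the real module.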
Related
Assume that in a project written in C there is a function named A and a function named B.
How can I verify whether function A could be in the call tree of function B, as in B->C->D->...->A?
This question came up when I was thinking about which libvirt APIs may invoke the qemu qmp "query-block". Since qmp "query-block" is only called by the function qemuMonitorJSONQueryBlock, this specific question becomes: how can I find which libvirt APIs may invoke qemuMonitorJSONQueryBlock?
I think dynamic analysis can hardly answer that question because lots of tests would be required; it should be a question of static analysis. But I could not find proper tools or methods to solve it, so I summarized the question as the first paragraph.
You can try CppDepend and its code query language to create advanced queries about the dependencies. In your case you can use a query like this one:
from m in Methods
let depth0 = m.DepthOfIsUsedBy("__Globals.B()")
where depth0 >= 0 && m.SimpleName == "A"
orderby depth0
select new { m, depth0 }
You can use the GNU cflow utility, which analyzes a collection of source files written in the C programming language and outputs a graph charting the dependencies between various functions.
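For example, something like this should chart the callers of A (a sketch; the file and function names are placeholders):

# Reverse call tree rooted at A: every chain printed under A is a chain
# of callers, so if B shows up, A can be in B's call tree.
cflow --reverse --main A src/*.c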
Quoting the question: "I think dynamic analysis is hard to answer that question because lots of tests are required. It should be a question of static analysis. But I could not find proper tools or methods to solve it."
That's true, basically because you can call functions that were never linked into your program. With dlopen(3) and friends, you can dynamically link a completely unknown function into your program and call it. There's no way to statically check whether a function pointer holds a valid pointer, and thus whether the function it points to will actually be called (or whether it is in the call graph of some initial function).
I find that cscope can help solve this question. It is a developer's tool for browsing source code. You can get the callers of a function as follows:
1. Change to the source code directory, then generate the cscope database file named cscope.out
cd libvirt
cscope -bR
2. Find the callers of func1 with cscope:
cscope -d -f cscope.out -L3 func1
The second column of the output is the calling function. For example:
cscope -d -f ./cscope.out -L3 qemuMigrationDstPrepareDirect
The result:
src/qemu/qemu_driver.c ATTRIBUTE_NONNULL 12487 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare2 12487 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare3 12722 ret = qemuMigrationDstPrepareDirect(driver, dconn,
src/qemu/qemu_driver.c qemuDomainMigratePrepare3Params 12809 ret = qemuMigrationDstPrepareDirect(driver, dconn,
Note that cscope mistakenly regards the function attribute declarations ATTRIBUTE_* as callers (as in the first result row above); skip them.
Then recursively find the callers of each caller, and finally pick out the target B->...->A call trace. A rough sketch of that recursion is shown below.
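Here is that recursion as a shell script (hypothetical and untested; it assumes cscope.out is in the current directory and does no cycle detection, so it can loop forever on recursive code):

#!/bin/sh
walk() {
    # $1 = function name, $2 = indent prefix
    cscope -d -f cscope.out -L3 "$1" |
        awk '$2 !~ /^ATTRIBUTE_/ { print $2 }' | sort -u |
        while read -r caller; do
            printf '%s%s -> %s\n' "$2" "$caller" "$1"
            walk "$caller" "  $2"
        done
}
walk qemuMonitorJSONQueryBlock ''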
doxygen can generate call graphs and caller graphs. If you configure it for an unlimited number of calls in a graph, you will be able to get the information you need.
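For example, the relevant Doxyfile settings might look like this (a sketch; generating the graphs requires Graphviz):

HAVE_DOT            = YES
CALL_GRAPH          = YES
CALLER_GRAPH        = YES
# 0 means "no depth limit"
MAX_DOT_GRAPH_DEPTH = 0
DOT_GRAPH_MAX_NODES = 10000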
I have a Perl module A that is an XS-based module. I have an A.xs file and an aux_A.c file, where I have some standard C functions. I use DynaLoader, and it works fine.
Now I have a new module B, which is also an XS module, with a B.xs file and an aux_B.c file. I want a standard C function defined in aux_B.c to be able to call a function defined in aux_A.c.
One option is to have module A build a standard C library and link module B against it. But I was trying to get away from that option.
Is there any other way to go?
What I am currently getting is DynaLoader complaining about an undefined symbol when trying to load the B.so library.
Thanks
Alberto
To make module A export its C symbols with DynaLoader, you have to add the following to A.pm:
sub dl_load_flags { 1 }
This is badly documented, unfortunately. See this thread on PerlMonks and the DynaLoader source code for more details. The effect of the flag is to set RTLD_GLOBAL when loading A.so with dlopen, which makes its symbols available to other shared objects.
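Concretely, a sketch of what A.pm might look like (assuming the module uses DynaLoader directly, as in the question; the bootstrap details vary between setups):

package A;
use strict;
use warnings;
require DynaLoader;
our @ISA = ('DynaLoader');

# 0x01 sets RTLD_GLOBAL when A.so is dlopen'ed, so its symbols become
# visible to shared objects loaded later, such as B.so.
sub dl_load_flags { 0x01 }

__PACKAGE__->bootstrap;

1;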
The h2ph utility generates a .ph "Perl header" file from a C header file, but what is the best way to use this file? Like, should it be require or use?:
require 'myconstants.ph';
# OR
use myconstants; # after mv myconstants.ph myconstants.pm
# OR, something else?
Right now, I am doing the use version shown above, because with that one I never need to type parentheses after the constant. I want to type MY_CONSTANT and not MY_CONSTANT(), and I have use strict and use warnings in effect in the Perl files where I need the constants.
It's a bit strange though to do a use with this file since it doesn't have a module name declared, and it doesn't seem to be particularly intended to be a module.
I have just one file I am running through h2ph, not a hundred or anything.
I've looked at perldoc h2ph, but it didn't mention the subject of the intended mechanism of import at all.
Example input and output: For further background, here's an example input file and what h2ph generates from it:
// File myconstants.h
#define MY_CONSTANT 42
...
# File myconstants.ph - generated via h2ph -d . myconstants.h
require '_h2ph_pre.ph';
no warnings qw(redefine misc);
eval 'sub MY_CONSTANT () {42;}' unless defined(&MY_CONSTANT);
1;
Problem example: Here's an example of "the problem," where I need to use parentheses to get the code to compile with use strict:
use strict;
use warnings;
require 'myconstants.ph';
sub main {
    print "Hello world " . MY_CONSTANT;    # error until parentheses are added
}
main;
which produces the following error:
Bareword "MY_CONSTANT" not allowed while "strict subs" in use at main.pl line 7.
Execution of main.pl aborted due to compilation errors.
Conclusion: So is there a better or more typical way that this is used, as far as following best practices for importing a file like myconstants.ph? How would Larry Wall do it?
You should require your file. As you have discovered, use accepts only a bareword module name, and it is wrong to rename myconstants.ph to have a .pm suffix just so that use works.
The choice of use or require makes no difference to whether parentheses are needed when you use a constant in your code. The resulting .ph file defines constants in the same way as the constant module, and all you need in the huge majority of cases is the bare identifier. One exception to this is when you are using the constant as a hash key, when
my %hash = (CONSTANT => 99);
my $val = $hash{CONSTANT};
doesn't work, as you are using the string CONSTANT as a key. Instead, you must write
my %hash = (CONSTANT() => 99);
my $val = $hash{CONSTANT()};
You may also want to wrap your require inside a BEGIN block, like this
BEGIN {
    require 'myconstants.ph';
}
to make sure that the values are available to all other parts of your code, including anything in subsequent BEGIN blocks.
The problem does somewhat lie in the require.
Since require is a statement that is evaluated at run time, it cannot have any effect on the parsing of the later part of the script. So when perl reaches MY_CONSTANT in the print statement, it does not yet know that the subroutine exists, and parses it as a bareword.
It is the same for eval.
One solution, as mentioned by others, is to put it into a BEGIN block. Alternatively, you may forward-declare it yourself:
require 'some-file';
sub MY_CONSTANT;
print 'some text' . MY_CONSTANT;
Finally, from my perspective: I have never used any .ph files in my Perl programming.
What's the simplest way to find the path to the file in which I am "executing" some code? By this, I mean that if I have a file foo.py that contains:
print(here())
I would like to see /some/path/foo.py (I realise that in practice what file is "being executed" is complicated, but I think the above is well defined - a source file that contains some function that, when executed, gives the path to said file).
I have needed this in the past to make tests (that require some external file) self-contained, and I am currently wondering if it would be a useful way to locate some support files needed by a program. But I have never found a good way of doing this. The inspect module sounds like it should work, but you seem to need a class or function that is defined in that module.
In particular, module objects have __file__ attributes, but I can't see how to get hold of the "current" module. Objects have a __module__ attribute, but that's the module's name, not the module object itself.
I guess one way is to throw and catch an exception and inspect the contents, but that seems like hard work. Surely there is a simple, easy way that I have missed?
To get the absolute path of the current file:
import os
os.path.abspath(__file__)
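For example, to locate a support file that lives next to the current source file (a sketch; the file name is illustrative):

import os

HERE = os.path.dirname(os.path.abspath(__file__))

# Read a data file shipped alongside this source file.
with open(os.path.join(HERE, "support", "data.txt")) as f:
    data = f.read()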
To get the content of an external file distributed with your package, you could use pkgutil.get_data() (stdlib) or pkg_resources.resource_string() (setuptools); both keep working when your code is run from a zip archive or a standalone executable created by py2exe, PyInstaller, or similar tools.
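A sketch of the pkgutil approach (the package and resource names are illustrative):

import pkgutil

# Returns bytes; works even when "mypackage" lives inside a zip archive.
data = pkgutil.get_data("mypackage", "support/data.txt")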
My C project uses preprocessor directives to activate/deactivate some features. It's not unusual to find that one of the less common configurations no longer compiles because of a change made a few days ago inside an #ifdef.
We use a script to compile the most common configurations, but I'm looking for a tool to ensure that everything is compiled (testing is not a problem in our case; we just want to detect as soon as possible that nothing has stopped compiling). Usually the ifdefs/ifndefs are independent, so normally each module has to be compiled just twice (all symbols defined, all undefined). But sometimes the ifdefs are nested, and those modules have to be compiled more times.
Do you know of any tool that searches all the ifdefs/ifndefs (including nested ones) and reports how many times a module has to be compiled (and with which set of preprocessor symbols defined each time) to ensure that every single source line is analyzed by the compiler?
I am not aware of any tool that does what you want. But looking at your problem, I think all you need is a script that compiles the source with all possible combinations of the preprocessor symbols.
Say you have:
#ifdef A
CallFnA();
#ifdef B
CallFnB();
#endif
CallFnC();
#endif
You will have to trigger the build with the following combinations:
A and B both defined
A not defined and B defined (makes no difference for this snippet, but may be required to cover the entire module)
A defined and B not defined
A and B both not defined
Would love to see some script that will grep the source code and produce the combinations.
Something like
find ./ -name '*.cpp' -exec egrep -h '^#ifdef' {} \; | awk '{ print $2}' | sort | uniq
With *.cpp replaced with whatever files you want to search.
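A slightly broader sketch that also catches #ifndef and indented directives (untested):

find ./ -name '*.cpp' -exec egrep -h '^[[:space:]]*#ifn?def' {} \; | awk '{ print $2 }' | sort -u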
Here's a Perl script that does a hacky job of parsing #ifdef entries and assembles a list of the symbols used in a particular file. It then prints out the Cartesian product of all possible combinations of having each symbol on or off. This works for a C++ project of mine, and might require minor tweaking for your setup.
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $path = $ENV{PWD};
my $symbol_map = {};

# Collect the #ifdef symbols used in each file under $path.
find( make_ifdef_processor($symbol_map), $path );

foreach my $fn ( keys %$symbol_map ) {
    my @symbols = @{ $symbol_map->{$fn} };

    # Each symbol can be off (0) or on (1).
    my @options;
    foreach my $symbol (@symbols) {
        push @options, [ "-D$symbol=0", "-D$symbol=1" ];
    }

    my @combinations = @{ cartesian(@options) };
    foreach my $combination (@combinations) {
        print "compile $fn with these symbols defined:\n";
        print "\t", join( ' ', @$combination ), "\n";
    }
}

# Returns a callback for File::Find that records, per file, the symbols
# appearing in #ifdef directives.
sub make_ifdef_processor {
    my $map_symbols = shift;
    return sub {
        my $fn = $_;
        return if $fn =~ /svn-base/;    # skip Subversion internals
        return unless -f $fn;

        open my $fh, '<', $fn or die "Error opening file $fn ($!)";
        while ( my $line = <$fh> ) {
            next if $line =~ m{^\s*//};    # skip C++-style comments
            if ( $line =~ /^\s*#ifdef\s+(\S+)/ ) {
                print "matched line $line";
                push @{ $map_symbols->{$fn} }, $1;
            }
        }
        close $fh;
    };
}

# Cartesian product of a list of array references.
sub cartesian {
    my $first_set = shift @_;
    my @product = map { [$_] } @$first_set;
    foreach my $set (@_) {
        my @new_product;
        foreach my $s (@$set) {
            foreach my $list (@product) {
                push @new_product, [ @$list, $s ];
            }
        }
        @product = @new_product;
    }
    return \@product;
}
This will definitely fail with C-style /* */ comments, as I didn't bother to parse those effectively. The other thing to think about is that it might not make sense for all of the symbol combinations to be tested, and you might build that into the script or your testing server. For example, you might have mutually exclusive symbols for specifying a platform:
-DMAC
-DLINUX
-DWINDOWS
Testing the combinations of having these on and off doesn't really make sense. One quick solution is just to compile all combinations, and be comfortable that some will fail. Your test for correctness can then be that the compilation always fails and succeeds with the same combinations.
The other thing to remember is that not all combinations are distinct, because of nesting: an inner symbol only matters while its enclosing #ifdef is active. I think compilation is relatively cheap, but the number of combinations can grow very quickly if you're not careful. You could make the script parse out which symbols appear in the same control structure (nested #ifdefs, for example), but that's much harder to implement and I've not done that here.
You can use unifdef -s to get a list of all preprocessor symbols used in preprocessor conditionals. Based on the discussion around the other answers this is clearly not quite the information you need, but if you use the -d option as well, the debugging output includes the nesting level. It should be fairly simple to filter the output to produce the symbol combinations you want.
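For example (a sketch; the exact output format depends on your unifdef version):

# List the symbols used in preprocessor conditionals in one file:
unifdef -s module.c | sort -u

# Adding -d mixes in debugging output that includes the nesting level:
unifdef -s -d module.c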
Another solution is to compile in all the features and have run-time configuration testing for the features. This is a cool "trick" since it allows Marketing to sell different configurations and saves Engineering time by simply setting values in a configuration file.
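A sketch of the idea in C (all names are illustrative, not from any real project):

#include <stdbool.h>

/* Feature switches read from a configuration file at startup,
   instead of being baked in with #ifdef. */
struct features {
    bool logging;
    bool fancy_ui;
};

static struct features cfg = { .logging = true, .fancy_ui = false };

void render(void)
{
    if (cfg.fancy_ui) {
        /* draw_fancy_ui(); */
    } else {
        /* draw_basic_ui(); */
    }
}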
Otherwise, I suggest a scripting language for building all the configurations.
Sorry, I don't know of any tools to help you, but if I had to do this I would go with a simple script that does the following:
- Copy all source files to another place,
- Add a running number (in a comment, obviously) at the start of each line (first code line of first file = 1; do not reset between files),
- Preprocess using each of the predefined configurations and record which numbered lines survive,
- Report which lines were never included by any configuration.
Shouldn't take more than a couple of days to get that running using e.g. Perl or Python. What is required is a file that lists the configurations to test. Check which lines are not yet included by the existing configurations, and edit that file until every line is covered. Then just run this occasionally to make sure that there are no new unreached paths.
Analyzing the source like you want would be a much more complex script and I'm not sure if it would be worth the trouble.
Hm, I initially thought that unifdef might be helpful, but looking further at what you're asking for, no, it wouldn't be of any immediate help.
You can use Hudson with a matrix project. (Note that Hudson is not just a Java testing tool, but a very flexible build server which can build just about anything, including C/C++ projects.) If you set up a matrix project, you get the option to create one or more axes, and for each axis you can specify one or more values. Hudson will then run your build using all possible combinations of the axis values. For example, if you specify
os=windows,linux,osx
wordsize=32,64
Hudson will build six combinations: 32- and 64-bit versions for each of windows, linux, and osx. When building a "free-style project" (i.e. launching an external build script), the chosen configuration is passed using environment variables; in the example above, "os" and "wordsize" will be set as environment variables.
Hudson also has support for filtering combinations in order to avoid building certain invalid combinations (for example, windows 64-bit can be removed, but all others kept).
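The filtering is done with a "combination filter", a Groovy expression over the axis variables; for example (a sketch):

// Build every combination except 64-bit Windows:
!(os == "windows" && wordsize == "64")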
(Edited post to provide more details about matrix projects.)
Maybe you need grep?