My project is a fairly small C project. Running sourceanalyzer from the command line finishes in about 3 minutes for both translation and scan.
The documentation says that if the .fpr was generated from the command line and I need to re-scan from AWB, the Update Project Translation button is greyed out (which it is). But if I modify the source code, the documentation says I must first update the translation before I can re-scan, which means I have to run sourceanalyzer from the command line again (since the option is greyed out in AWB). However, re-running sourceanalyzer overwrites the .fpr, which means I lose all the audits and custom filters I created in AWB.
Question 1: Can I run sourceanalyzer from the command line for both translation and scan without losing the audit work and custom filters I created in AWB?
The next logical step seemed to be to create the .fpr from AWB. But if I try to use AWB to start a new project using Advanced Scan..., it takes over an hour to complete the "Generating intermediate files - JtsWrapper.java" step. When it's done, the results show 0 issues.
Question 2: How do I use AWB to start a new project on a C project that doesn't use Java? When I select Start New Project -> Advanced Scan, it asks for the Java version. Does that mean it thinks my project is a Java project?
This is how I use sourceanalyzer:
sourceanalyzer -clean
sourceanalyzer -64 -b myproj \
-build-label myproj \
-build-project myproj \
-build-version 1.0.0 \
touchless make -j6 -k
sourceanalyzer -64 -b myproj \
-build-label myproj \
-build-project myproj \
-build-version 1.0.0 \
-scan \
-f myproj.fpr
Question 1)
There are two options for keeping your existing comments, audits, and filters when creating a new scan.
a) If you scan a second time and have the -f pointing to your existing .fpr file that has the modifications, sourceanalyzer will automatically merge the new results into that .fpr.
b) There is a command-line utility to merge two files together:
fprutility -merge -project <old.fpr> -source <new.fpr> -f <merged.fpr>
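A sketch of option (b); the file names here are placeholders, and the merged file is what you would open in AWB afterwards:

```shell
# Scan into a fresh results file, then merge the audited project into it.
# audited.fpr is the .fpr carrying your AWB audit work; names are examples.
sourceanalyzer -b myproj -scan -f new.fpr
fprutility -merge -project audited.fpr -source new.fpr -f merged.fpr
```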
When you said, "The next logical step seemed to be to create the .fpr from AWB", I disagree. Being able to produce a scan at the command line makes the process repeatable and automatable. AWB and the IDE plug-ins are all front ends for sourceanalyzer.exe.
Question 2)
I am not sure which version of Fortify SCA you are using, but when I point the advanced scan at the C++ sample project (<HPE Fortify Install Dir>/Samples/Basic/cpp) I do not get asked about Java versions (I am using version 16.10).
A couple of things about your command-line arguments:
-64 has been applied automatically for several versions now (not sure when the switch was made)
-build-label myproj is optional
-build-project myproj is optional
-build-version 1.0.0 is optional
Related
I am new to Meson so please forgive me if this is a stupid question.
Simple Version of the Question:
I want to be able to assign a dynamic version number to the meson project version at build time. Essentially meson.project_version() = my_dynamic_var, or project('my_cool_project', 'c', version : my_dynamic_var) (which of course won't work).
I would rather not pre-process the file if I don't have to.
Some background if anybody cares:
My build system dynamically comes up with a version number for the project. In my case, it is using a bash script. I have no problem getting that version into my top level meson.build file using run_command and scraping stdout from there. I have read that doing it this way is bad form, so if there is another way to do this, I am all ears.
I am also able to create and pass the correct -DPRODUCT_VERSION="<my_dynamic_var>" via add_global_arguments, so I COULD just settle for that, but I would like the meson project itself to carry the same version for the logs, and so I can use meson.project_version() to get the version in subprojects for languages other than C/C++.
The short answer, as noted in comments to the question, appears to be no. There is no direct way to set the version dynamically in the project call.
However, there are some workarounds, and the first looks promising for the simple case:
(1) use meson's rewriting capability
$ meson rewrite kwargs set project / version 1.0.0
Then obviously use an environment variable instead of 1.0.0.
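A sketch of that, with the version coming from the environment (BUILD_VERSION and the script producing it are assumptions, not part of meson):

```shell
# Any script that prints a version string works here; get_version.sh is a
# stand-in for whatever your build system uses.
BUILD_VERSION=$(./get_version.sh)
meson rewrite kwargs set project / version "$BUILD_VERSION"
```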
(2) write a wrapper script which reads the version from the environment and substitutes it into your meson.build file in the project call.
(3) adopt conan.io and have your meson files generated.
(4) use build options. This option, while not as good as (1), might work for other workflows.
Here's how option (4) works.
create a meson_options.txt file in your meson root directory
add the following line:
option('version', type : 'string', value : '0.0.0', description : 'project version')
then create a meson.build file that reads this option.
project('my_proj', 'cpp')
version = get_option('version')
message(version)
conf_data = configuration_data()
conf_data.set('version', version)
When you go to generate your project, you have an extra step of setting options.
$ meson build && cd build
$ meson configure -Dversion=$BUILD_VERSION
Now the version is available as a build option. We then use a configuration_data object to make it available for substitution into header/source files (which you might want in order to get it into shared libraries and the like).
configure_file(
    input : 'config.hpp.in',
    output : 'config.hpp',
    configuration : conf_data
)
And config.hpp.in looks something like this:
#pragma once
#include <string>
const static std::string VERSION = "@version@";
When we do the configure_file call, @version@ will get substituted with the version string we set in the meson configure step (configuration_data substitution uses the @name@ token form).
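For reference, configuration_data substitution replaces tokens of the form @name@; a plain-shell mock of that substitution step (sed here only illustrates what meson does internally, and the file names follow the example above):

```shell
# Mock of configure_file: replace @version@ in the template with the
# configured value. meson does this itself; sed is only a demonstration.
version="1.2.3"
printf 'const static std::string VERSION = "@version@";\n' > config.hpp.in
sed "s/@version@/$version/" config.hpp.in > config.hpp
cat config.hpp
```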
So this way is pretty convoluted, but like I said, you may still end up doing some of it, e.g. to print copyright info and what not.
As of 0.60.3 you may directly assign version from run_command which means the following will work without any meson_options.txt.
project('randomName', 'cpp',
version : run_command('git', 'rev-parse', '--short', 'HEAD').stdout().strip(),
default_options : [])
In particular, it is also possible to assign the result of a bash script, simply invoke it instead of git.
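What the run_command line hands to project() can be checked from a shell first (a throwaway repository stands in for the real project):

```shell
# Create a disposable repo and print the short hash that
# project(..., version : ...) would receive from run_command.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
ver=$(git rev-parse --short HEAD)
echo "$ver"
```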
I am building a shake based build system for a large Ruby (+ other things) code base, but I am struggling to deal with Ruby commands that expect to be passed a list of files to "build".
Take Rubocop (a linting tool). I can see three options:
need each Ruby file individually; when a file changes, run rubocop against just that file (very slow on the first build or when many Ruby files change, because rubocop has a large start-up time)
need all Ruby files; if any change, run rubocop against all the ruby files (very slow if only one or two files have changed because rubocop is slow to work out if a file has changed or not)
need all Ruby files; if any change, pass rubocop the list of changed dependencies as detected by Shake
The first two rules are trivial to build in shake, but my problem is I cannot work out how to represent this last case as a shake rule. Can anyone help?
There are two approaches to take with Shake, using batch or needHasChanged. For your situation I'm assuming rubocop just errors out if there are lint violations, so a standard one-at-a-time rule would be:
"*.rb-lint" %> \out -> do
need [out -<.> "rb"]
cmd_ "rubocop" (out -<.> "rb")
writeFile' out ""
Use batch
The function batch describes itself as:
Useful when a command has a high startup cost - e.g. apt-get install foo bar baz is a lot cheaper than three separate calls to apt-get install.
And the code would be roughly:
batch 3 ("*.rb-lint" %>)
    (\out -> do need [out -<.> "rb"]; return out) $
    (\outs -> do cmd_ "rubocop" [out -<.> "rb" | out <- outs]
                 mapM_ (flip writeFile' "") outs)
Use needHasChanged
The function needHasChanged describes itself as:
Like need but returns a list of rebuilt dependencies since the calling rule last built successfully.
So you would write:
"stamp.lint" *> \out -> do
changed <- needHasChanged listOfAllRubyFiles
cmd_ "rubocop" changed
writeFile' out ""
Comparison
The advantage of batch is that it is able to run multiple batches in parallel, and you can set a cap on how much to batch. In contrast needHasChanged is simpler but is very operational. For many problems, both are reasonable solutions. Both these functions are relatively recent additions to Shake, so make sure you are using 0.17.2 or later, to ensure it has all the necessary bug fixes.
I have a BitBake recipe (example_0.1.bb) with a do_install task where I attempt to install a .so file:
do_install() {
install -d ${D}${libdir}
install -m 0644 ${S}/example.so ${D}${libdir}
}
FILES_${PN} += "${libdir}/example.so"
This fails during the build process and returns:
ERROR: example not found in the base feeds
However, if I add a test file to the package, both the .so file and the test file are added to the rootfs.
do_install() {
install -d ${D}${libdir}
install -m 0644 ${S}/example.so ${D}${libdir}
echo "bar" >> ${TOPDIR}/foo
install -m 0644 ${TOPDIR}/foo ${D}${libdir}
}
FILES_${PN} += "${libdir}/libceill.so"
FILES_${PN} += "${libdir}/foo"
How can I add only the .so file without the junk test file?
So you've got a library that is non-standard in that it's not installing a versioned library (libfoo.so.1.2.3, maybe symlinks such as libfoo.so.1 -> libfoo.so.1.2.3), and then an unversioned symlink for compilation time (libfoo.so -> libfoo.so.1). The default packaging rules assume standard libraries.
What's happening is that packages are populated in their order in PACKAGES, which has ${PN}-dev before ${PN}. FILES_${PN}-dev by default contains /usr/lib/lib*.so, and FILES_${PN} contains /usr/lib/lib*.so.*. When you add /usr/lib/lib*.so to FILES_${PN}, what you want to happen isn't happening because ${PN}-dev has already taken the files.
If your library doesn't come with any development files at all (e.g. no headers) then you can set FILES_${PN}-dev = "" to empty that package, and then your addition of lib*.so to FILES_${PN} will work as expected.
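A minimal sketch of that case against the question's recipe (assuming the library really ships no headers; the glob follows the answer's /usr/lib/lib*.so convention):

```
# example_0.1.bb -- packaging fix sketch, library with no development files
FILES_${PN}-dev = ""
FILES_${PN} += "${libdir}/lib*.so"
```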
Yes, this is something that we should make easier (I've been thinking about a small class for libraries like this) and warn in sanity checks when it happens.
Oh, and I'm surprised that the library ends up in the image in your second example, as example will contain /usr/lib/foo and example-dev will contain /usr/lib/libceill.so. Unless of course you've got dev-pkgs enabled, which automatically installs example-dev if you've got example in an image.
Add the line
FILES_SOLIBSDEV = ""
An explanation from the Yocto mailing list:
I had FILES_${PN} += "${libdir}/*.so" in there and that didn't work.
Maybe it was because I was missing the FILES_SOLIBSDEV = "" you mentioned.
I'll play with it some more and see what happens. I first started out with
FILES_${PN} += "${libdir}/*.so" and when that didn't work I tried other
things in the FILES_${PN} = line to try and get it picked up. When I
couldn't get any of it to work and then saw others (well, at least the link
I provided) were seeing the same thing I figured it was time to quit
spinning my wheels and consult the big guns :)
Heh :) The issue there is that the patterns are matched in the order of the
PACKAGES variable. The first package to include a file gets it, and
${PN}-dev is in PACKAGES before ${PN}. By emptying FILES_SOLIBSDEV, that’ll
remove the .so from FILES_${PN}-dev, letting the ${PN} package get it
instead.
Add the line:
FILES_${PN}-dev_remove = "${FILES_SOLIBSDEV}"
This removes the unversioned .so glob from the development package, so the ${PN} package can pick the library up instead.
I'm working with a proprietary code base where the owner would like users to get useful stack traces but not be able to view the source code. Generating Debian dbg packages with debug symbols but no source code is straightforward but the Redhat debuginfo RPMs are automatically created with source code.
Is there a way of configuring rpmbuild to build a debuginfo RPM without source code?
If not, what's the best way to remove the source code from a debuginfo package? Does anyone have a script to do it?
A -debuginfo package is just a sub-package, and can be created manually without source code. The automatic generation adds the necessary syntax to a spec file, but you can also do this manually, adding a debug info package in the spec file.
Disable automagic generation of *-debuginfo.rpm, run find-debuginfo.sh at the end of %install, and then remove the source files.
Another (and easier/cleaner) way to remove source files is to override this macro
%__debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"\
%{nil}
in the spec file, replacing %{_rpmconfigdir}/find-debuginfo.sh with a modified/customized find-debuginfo.sh script.
Include the modified script in the spec file like
SourceN: my-find-debuginfo.sh
and then use the macro
%{SOURCEn}
(where N == n, some small appropriate integer) instead of the default to generate debugging symbols without source code.
Just finished a round of testing and in the end we inserted the following into the .spec file somewhere above the %description tag:
# Override the macro that invokes find-debuginfo.sh to remove
# the source files before the debuginfo pkg is assembled.
# It would be nice to remove the entire /usr/src tree but
# rpmbuild is running a check-files utility that fails the
# build if /usr/src/debug/%{name} isn't there. Tried to
# just delete the contents but it's tricky getting an
# asterisk to expand properly so we remove the entire
# directory and then restore an empty one. Sigh!
%define __debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}";\
rm -rf "${RPM_BUILD_ROOT}/usr/src/debug/%{name}"; \
mkdir "${RPM_BUILD_ROOT}/usr/src/debug/%{name}"; \
%{nil}
This works for RHEL 6 and 7 but results in a bash error on RHEL 5, so we avoid building a debuginfo package for the latter by not installing the redhat-rpm-config package.
We decided to avoid creating a modified find-debuginfo.sh script as suggested because there are already differences between different platforms and we preferred a single patch that would work for all targets including future new ones. This isn't perfect but is as close as we came up with.
CentOS 7 needed a slight modification of Guy's solution. Here's what I'm using successfully:
# Remove source code from debuginfo package.
%define __debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"; \
rm -rf "${RPM_BUILD_ROOT}/usr/src/debug"; \
mkdir -p "${RPM_BUILD_ROOT}/usr/src/debug/%{name}-%{version}"; \
%{nil}
The following can be used to verify the source code is no longer contained within the RPM:
rpm -qpl xxx-debuginfo-1.0.0-1.el7.x86_64.rpm
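To check without reading the whole listing by eye, filter for source-file extensions (the package file name is the same placeholder as above); no output means no sources remain:

```shell
# Prints any packaged C/C++ source or header paths; silence is success.
rpm -qpl xxx-debuginfo-1.0.0-1.el7.x86_64.rpm | grep -E '\.(c|h|cc|cpp|hpp)$'
```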
I would like the executables for a project I am working on to have the latest mercurial changeset recorded so that when a user complains about buggy behavior, I can track which version they are using. Some of my executables are Python and others are compiled C. Is there a way to automate this, or can you point me to projects that exhibit solutions that I can look at?
I am using autoconf in my project... in case that makes the solution easier.
A common way to do this is with m4_esyscmd. For example, autoconf distributes a script in build-aux which generates a version number from the git repo and invokes AC_INIT as:
AC_INIT([GNU Autoconf], m4_esyscmd([build-aux/git-version-gen .tarball-version]),
[bug-autoconf@gnu.org])
You can often get away without distributing the script and do something simple like:
AC_INIT([Package name], m4_esyscmd([git describe --dirty | tr -d '\012']),
[bug-report-address])
Instead of git-describe, use whatever command you want to generate the version number. One important detail is that it should not have a trailing newline (hence the tr following git-describe).
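The effect of the tr filter is easy to see with byte counts (the version string here is a stand-in for real git-describe output):

```shell
# git describe ends its output with a newline; tr -d '\012' deletes it.
# m4_esyscmd pastes command output verbatim, so without the filter that
# newline would land inside the generated configure script.
with_nl=$(printf 'v2.69-5-gabcdef\n' | wc -c | tr -d ' ')
without_nl=$(printf 'v2.69-5-gabcdef\n' | tr -d '\012' | wc -c | tr -d ' ')
echo "$with_nl $without_nl"
```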
A major drawback with this technique is that the version number is only generated when you run autoconf.
Add this to configure.ac:
AM_CONDITIONAL([IS_HG_REPO], [test -d "$srcdir/.hg"])
Add the following lines to Makefile.am:
if IS_HG_REPO
AM_CPPFLAGS = -DHGVERSION="\"$(PACKAGE) `hg parents --template 'hgid: {node|short}'`\""
else
AM_CPPFLAGS = -DHGVERSION=PACKAGE_STRING
endif
This will define HGVERSION as a string of the form APPNAME hgid: 24d0921ee4bd or APPNAME VERSION, if building from a release tarball.
See wiki page on versioning with make