I'm working with a proprietary code base where the owner would like users to get useful stack traces but not be able to view the source code. Generating Debian dbg packages with debug symbols but no source code is straightforward, but the Red Hat debuginfo RPMs are automatically created with the source code included.
Is there a way of configuring rpmbuild to build a debuginfo RPM without source code?
If not, what's the best way to remove the source code from a debuginfo package? Does anyone have a script to do it?
A -debuginfo package is just a sub-package, and it can be created manually without source code. The automatic generation adds the necessary syntax to the spec file behind the scenes, but you can also declare the debuginfo sub-package yourself in the spec file.
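For illustration, a hand-written debuginfo sub-package might look roughly like this (a minimal sketch; the summary text and the /usr/lib/debug glob are my assumptions, not output copied from rpmbuild):
%package debuginfo
Summary: Debug symbols for %{name}
Group: Development/Debug
%description debuginfo
Debugging symbols (without sources) for %{name}.
%files debuginfo
%defattr(-,root,root)
/usr/lib/debug/*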
Disable automagic generation of *-debuginfo.rpm, run find-debuginfo.sh at the end of %install, and then remove the source files.
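Sketched out, that approach would look something like this in the spec (assuming the stock redhat-rpm-config macro locations, and paired with the hand-written debuginfo sub-package above):
# turn off the automatic *-debuginfo generation
%define debug_package %{nil}
...
%install
make install DESTDIR=%{buildroot}
# extract the debug symbols ourselves, then drop the copied sources
%{_rpmconfigdir}/find-debuginfo.sh %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"
rm -rf %{buildroot}/usr/src/debug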
Another (and easier/cleaner) way to remove the source files is to override this macro
%__debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"\
%{nil}
in the spec file, replacing %{_rpmconfigdir}/find-debuginfo.sh with a modified/customized find-debuginfo.sh script.
Include the modified script in the spec file like
SourceN: my-find-debuginfo.sh
and then use the macro
%{SOURCEn}
(where N == n, some small appropriate integer) instead of the default to generate debugging symbols without source code.
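Wired together, that part of the spec might look something like this (a sketch; 7 is an arbitrary source number and my-find-debuginfo.sh is your customized copy of the script):
Source7: my-find-debuginfo.sh
%define __debug_install_post \
sh %{SOURCE7} %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"\
%{nil}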
Just finished a round of testing and in the end we inserted the following into the .spec file somewhere above the %description tag:
# Override the macro that invokes find-debuginfo.sh to remove
# the source files before the debuginfo pkg is assembled.
# It would be nice to remove the entire /usr/src tree but
# rpmbuild is running a check-files utility that fails the
# build if /usr/src/debug/%{name} isn't there. Tried to
# just delete the contents but it's tricky getting an
# asterisk to expand properly so we remove the entire
# directory and then restore an empty one. Sigh!
%define __debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}";\
rm -rf "${RPM_BUILD_ROOT}/usr/src/debug/%{name}"; \
mkdir "${RPM_BUILD_ROOT}/usr/src/debug/%{name}"; \
%{nil}
This works for RHEL 6 and 7 but results in a bash error on RHEL 5, so we avoid building a debuginfo package for the latter by not installing the redhat-rpm-config package.
We decided against creating a modified find-debuginfo.sh script as suggested, because the script already differs between platforms and we preferred a single patch that would work for all targets, including future ones. This isn't perfect, but it's the closest we came.
CentOS 7 needed a slight modification of Guy's solution. Here's what I'm using successfully:
# Remove source code from debuginfo package.
%define __debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"; \
rm -rf "${RPM_BUILD_ROOT}/usr/src/debug"; \
mkdir -p "${RPM_BUILD_ROOT}/usr/src/debug/%{name}-%{version}"; \
%{nil}
The following can be used to verify the source code is no longer contained within the RPM:
rpm -qpl xxx-debuginfo-1.0.0-1.el7.x86_64.rpm
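To make the check explicit, grep the file list for the copied source tree; apart from the (now empty) directory entries, nothing under /usr/src/debug should show up:
rpm -qpl xxx-debuginfo-1.0.0-1.el7.x86_64.rpm | grep '^/usr/src/debug'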
My project is fairly small C project. Running sourceanalyzer from a command line finishes in about 3 minutes for both translation and scan.
The documentation says that if the .fpr was generated from the command line and I need to re-scan from AWB, the Update Project Translation button is greyed out (which it is). But if I modify the source code, the documentation says I must first update the translation before I can re-scan the code, which means I have to run sourceanalyzer from the command line again (since the option is greyed out in AWB). However, using sourceanalyzer rewrites the .fpr, which means I lose all the audits and custom filters I created in AWB.
Question 1: Can I run sourceanalyzer from the command line for both translation and scan without losing the audit work and custom filters I created in AWB?
The next logical step seemed to be to create the .fpr from AWB. But if I try to use AWB to start a new project using Advanced Scan..., it takes over an hour to complete the Generating intermediate files - JtsWrapper.java step. When it's done, the results show 0 issues.
Question 2: How do I use AWB to start a new project on a C project that doesn't use Java? When I select Start New Project -> Advanced Scan, it asks for the Java version. Does that mean it thinks my project is a Java project?
This is how I use sourceanalyzer:
sourceanalyzer -clean
sourceanalyzer -64 -b myproj \
-build-label myproj \
-build-project myproj \
-build-version 1.0.0 \
touchless make -j6 -k
sourceanalyzer -64 -b myproj \
-build-label myproj \
-build-project myproj \
-build-version 1.0.0 \
-scan \
-f myproj.fpr
Question 1)
There are two options for keeping your previous/existing comments, audits, and filters when creating a new scan.
a) If you scan a second time and have the -f pointing to your existing .fpr file that has the modifications, sourceanalyzer will automatically merge the new results into that .fpr.
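In your setup that just means keeping -f pointed at the same project file on every scan, e.g.:
sourceanalyzer -64 -b myproj -scan -f myproj.fpr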
b) There is a commandline utility to merge two files together:
fprutility -merge -project <old.fpr> -source <new.fpr> -f <merged.fpr>
When you said, "The next logical step seemed to be to create the .fpr from AWB," I disagree. Being able to produce a scan at the command line makes the process repeatable and automatable. AWB and the IDE plug-ins are all front ends for sourceanalyzer.exe.
Question 2)
I am not sure what version of Fortify SCA you are using, but when I point the advanced scan at the c++ sample project (<HPE Fortify Install Dir>/Samples/Basic/cpp) I do not get asked about Java Versions (I am using version 16.10).
A couple of things about your command-line arguments (a stripped-down equivalent is sketched after this list):
-64 has been automatic for several versions now (not sure when the switch was made)
-build-label myproj is optional
-build-project myproj is optional
-build-version 1.0.0 is optional
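So, stripped of the optional flags, a minimal equivalent of your sequence would be something like:
sourceanalyzer -clean
sourceanalyzer -b myproj touchless make -j6 -k
sourceanalyzer -b myproj -scan -f myproj.fpr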
I have a BitBake recipe (example_0.1.bb) with a do_install task where I attempt to install a .so file:
do_install() {
    install -d ${D}${libdir}
    install -m 0644 ${S}/example.so ${D}${libdir}
}
FILES_${PN} += "${libdir}/example.so"
This fails during the build process and returns:
ERROR: example not found in the base feeds
However, if I add a test file to the package, both the .so file and the test file are added to the rootfs.
do_install() {
    install -d ${D}${libdir}
    install -m 0644 ${S}/example.so ${D}${libdir}
    echo "bar" >> ${TOPDIR}/foo
    install -m 0644 ${TOPDIR}/foo ${D}${libdir}
}
FILES_${PN} += "${libdir}/libceill.so"
FILES_${PN} += "${libdir}/foo"
How can I add only the .so file without the junk test file?
So you've got a library that is non-standard in that it's not installing a versioned library (libfoo.so.1.2.3, maybe symlinks such as libfoo.so.1 -> libfoo.so.1.2.3), and then an unversioned symlink for compilation time (libfoo.so -> libfoo.so.1). The default packaging rules assume standard libraries.
What's happening is that packages are populated in the order they appear in PACKAGES, which has PN-dev before PN. FILES_PN-dev by default contains /usr/lib/lib*.so, and FILES_PN contains /usr/lib/lib*.so.*. When you add /usr/lib/lib*.so to FILES_PN, what you want to happen isn't happening because PN-dev has already taken the files.
If your library doesn't come with any development files at all (e.g. no headers) then you can set FILES_${PN}-dev = "" to empty that package, and then your addition of lib*.so to FILES_${PN} will work as expected.
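In the recipe from the question, that would look something like this (a sketch, assuming there really are no headers or other development files to ship):
FILES_${PN}-dev = ""
FILES_${PN} += "${libdir}/example.so"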
Yes, this is something that we should make easier (I've been thinking about a small class for libraries like this) and warn in sanity checks when it happens.
Oh, and I'm surprised that the library ends up in the image in your second example, as example will contain /usr/lib/foo and example-dev will contain /usr/lib/libceill.so. Unless of course you've got dev-pkgs enabled, which will automatically install example-dev if you've got example in an image.
Add the line
FILES_SOLIBSDEV = ""
An explanation from the Yocto mailing list:
I had FILES_${PN} += "${libdir}/*.so" in there and that didn't work.
Maybe it was because I was missing the FILES_SOLIBSDEV = "" you mentioned.
I'll play with it some more and see what happens. I first started out with
FILES_${PN} += "${libdir}/*.so" and when that didn't work I tried other
things in the FILES_${PN} = line to try and get it picked up. When I
couldn't get any of it to work and then saw others (well, at least the link
I provided) were seeing the same thing I figured it was time to quit
spinning my wheels and consult the big guns :)
Heh :) The issue there is that the patterns are matched in the order of the
PACKAGES variable. The first package to include a file gets it, and
${PN}-dev is in PACKAGES before ${PN}. By emptying FILES_SOLIBSDEV, that’ll
remove the .so from FILES_${PN}-dev, letting the ${PN} package get it
instead.
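So, in example_0.1.bb, the fix described above would look roughly like:
FILES_SOLIBSDEV = ""
FILES_${PN} += "${libdir}/example.so"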
Add the line:
FILES_${PN}-dev_remove = "${FILES_SOLIBSDEV}"
This removes the unversioned .so pattern from the development package, so the library can be packaged in ${PN} instead.
I have been using deploytool for the past few months in my 64-bit 2010b version of Matlab. I just recently found out that I need to create a 32-bit version of my C shared library.
To do this I follow the same method I had been using previously (pretty much calling the command mcc -W lib:MYLIB -T link:lib -d 'MYOUTPUTFOLDER' -v 'MFILE1' 'MFILE2') in my 32-bit 2009b version of Matlab. I keep getting the error LNK1811: cannot open input file LIBRARY.obj. I have tried to find this LIBRARY object file but I cannot seem to find it anywhere.
So far I have checked to ensure all of the correct libraries are available (found at $MATLABROOT$\extern\include\win32), I have made sure all of my paths are correct in the compopts.bat file, and I have used the option -T compile:lib which works fine and creates a dll. This would be great but I need a lib file to use later in mbuild.
My current path forward is to take the compopts from my 64 bit version of Matlab (on a different machine) and compare it with my compopts for the 32 bit. I will post if it makes a difference.
To summarize our comments on the question and turn them into an answer, here is how I manage to create both x32 and x64 libraries/standalones with mcc.
NB: Maybe there are more elegant ways to configure deploytool; anyway, with brute force I'm sure it works, and I can automate the whole deployment process for my applications...
Machine setup
Install Matlab x32 and x64 on your machine
Run Matlab x32 and set up the compiler options by typing mbuild -setup
This will generate a compopts.bat file in ~user\AppData\Roaming\MathWorks\MATLAB\R2013b (the path may differ depending on your version)
Rename this file to compopts.x32.bat (see later)
Run Matlab x64 and set up the compiler options by typing mbuild -setup
This will generate a compopts.bat file in ~user\AppData\Roaming\MathWorks\MATLAB\R2013b (!!Overwrites x32!!)
Rename this file to compopts.x64.bat (To workaround file overwrite)
EDIT Just tested... In R2014b, Matlab no longer overwrites the same compopts.bat file... it now generates two separate MBUILD_C++_win64.xml and MBUILD_C++_win32.xml files (which is a good thing!).
Compilation in x32
Force your compilation script to point to ~matlabx32\bin\win32\mcc.exe and force mcc.exe to use the previously saved compopts.x32.bat file via the -f option. Your command line should be something like:
~matlabx32\bin\win32\mcc.exe -f "compopts.x32.bat" ... other mcc options ...
Compilation in x64
Force your compilation script to point to ~matlabx64\bin\win64\mcc.exe and force mcc.exe to use the previously saved compopts.x64.bat file via the -f option. Your command line should be something like:
~matlabx64\bin\win64\mcc.exe -f "compopts.x64.bat" ... other mcc options ...
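Putting it together with the mcc options from the question, a small deployment script might be as simple as the following (a sketch only; the install paths and output folders are illustrative):
~matlabx32\bin\win32\mcc.exe -f "compopts.x32.bat" -W lib:MYLIB -T link:lib -d OUT32 -v MFILE1 MFILE2
~matlabx64\bin\win64\mcc.exe -f "compopts.x64.bat" -W lib:MYLIB -T link:lib -d OUT64 -v MFILE1 MFILE2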
I am working on a project which requires me to download and use this. When extracted, the downloaded folder contains three things:
A folder called "include"
A folder called "src"
A file called "Makefile"
After some research, I found out that I have to navigate to the directory which contains these files, and just type in the command make.
It seemed to install the library in my system. So I tried a sample bit of code which should use the library:
csp_conn_t * conn;
csp_packet_t * packet;
csp_socket_t * socket = csp_socket(0);
csp_bind(socket, PORT_4);
csp_listen(socket, MAX_CONNS_IN_Q);
while (1) {
    conn = csp_accept(socket, TIMEOUT_MAX);
    packet = csp_read(conn, TIMEOUT_NONE);
    printf("%s\r\n", packet->data);
    csp_buffer_free(packet);
    csp_close(conn);
}
That's all that was given for the sample server end of the code. So I decided to add these to the top:
#include <csp.h>
#include <csp_buffer.h>
#include <csp_config.h>
#include <csp_endian.h>
#include <csp_interface.h>
#include <csp_platform.h>
Thinking I was on the right track, I tried to compile the code with gcc, but I was given this error:
csptest_server.c:1: fatal error: csp.h: No such file or directory
compilation terminated.
I thought I may not have installed the library correctly after all, but to make sure, I found out I could check by running this command, and getting this result:
find /usr -iname csp.h
/usr/src/linux-headers-2.6.35-28-generic/include/config/snd/sb16/csp.h
/usr/src/linux-headers-2.6.35-22-generic/include/config/snd/sb16/csp.h
So it seems like the csp.h is installed, maybe I am referencing it incorrectly in the header include line? Any insight? Thanks a lot.
The make command is probably only building the library, not installing it. You could try sudo make install. This is the "common" method, but I recommend checking the library's documentation, if any.
The sudo command is only necessary if you have no permissions to write the system's include and library directories, which may be your case.
Another possibility (instead of installing the library) is telling GCC the location of the library's headers and generated binaries (by means of the -I and -L options of the gcc command).
That Makefile will not install anything, just translate the source into a binary format.
The csp.h in the Linux kernel has nothing to do with your project, it's just a naming collision, likely to happen with three letter names.
In your case, I would presume you need to add the include directory to the compilation flags for your server, like gcc -I/path/to/csp/include/csp csptest_server.c.
(Next, you'll run into linker errors because you'll also want to specify -L/path/to/csp -lcsp so that the linker can find the binary code to link to.)
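For example, something along these lines (the exact paths depend on where you unpacked and built the library, and -lcsp assumes the Makefile produced libcsp.a or libcsp.so in that directory):
gcc -I/path/to/csp/include/csp -L/path/to/csp -o csptest_server csptest_server.c -lcsp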
I would like the executables for a project I am working on to have the latest mercurial changeset recorded so that when a user complains about buggy behavior, I can track which version they are using. Some of my executables are Python and others are compiled C. Is there a way to automate this, or can you point me to projects that exhibit solutions that I can look at?
I am using autoconf in my project... in case that makes the solution easier.
Thanks!
Setjmp
A common way to do this is with m4_esyscmd. For example, autoconf distributes a script in build-aux which generates a version number from the git repo and invokes AC_INIT as:
AC_INIT([GNU Autoconf], m4_esyscmd([build-aux/git-version-gen .tarball-version]),
        [bug-autoconf@gnu.org])
You can often get away without distributing the script and do something simple like:
AC_INIT([Package name], m4_esyscmd([git describe --dirty | tr -d '\012']),
[bug-report-address])
Instead of git-describe, use whatever command you want to generate the version number. One important detail is that it should not have a trailing newline (hence the tr following git-describe).
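Since your repository is Mercurial rather than git, an equivalent sketch would be the following (hg id -i prints the short hash of the working directory's parent, again with the trailing newline stripped):
AC_INIT([Package name], m4_esyscmd([hg id -i | tr -d '\012']),
        [bug-report-address])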
A major drawback with this technique is that the version number is only generated when you run autoconf.
Add this to configure.ac:
AM_CONDITIONAL([IS_HG_REPO], [test -d "$srcdir/.hg"])
Add the following lines to Makefile.am:
if IS_HG_REPO
AM_CPPFLAGS = -DHGVERSION="\"$(PACKAGE) `hg parents --template 'hgid: {node|short}'`\""
else
AM_CPPFLAGS = -DHGVERSION=PACKAGE_STRING
endif
This will define HGVERSION as a string of the form APPNAME hgid: 24d0921ee4bd or APPNAME VERSION, if building from a release tarball.
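On the C side you can then use the macro wherever you report the version, for example (a minimal sketch; it assumes config.h defines PACKAGE_STRING for the release-tarball fallback):
#include "config.h"   /* provides PACKAGE_STRING for the tarball fallback */
#include <stdio.h>

int main(void)
{
    /* HGVERSION is injected via AM_CPPFLAGS in Makefile.am */
    printf("built from %s\n", HGVERSION);
    return 0;
}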
See wiki page on versioning with make