Backtrack 5 r3 "apt-get upgrade" error [NEED HELP] - package

root@bt:~# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
smartphone-pentest-framework
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? y
Setting up w3af (1.2-bt2) ...
tar: pybloomfiltermmap-0.2.0.tar.gz: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Exiting with failure status due to previous errors
/var/lib/dpkg/info/w3af.postinst: line 4: cd: pybloomfiltermmap-0.2.0: No such file or directory
python: can't open file 'setup.py': [Errno 2] No such file or directory
svn: Working copy 'w3af' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
dpkg: error processing w3af (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
w3af
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@bt:~#
I have copied and pasted above the errors I get while updating the package "smartphone-pentest-framework".
What is going wrong here, and how do I fix it?

As you can see from "/var/lib/dpkg/info/w3af.postinst: line 4: cd: pybloomfiltermmap-0.2.0: No such file or directory", the simple solution is to change the download URL. Their script has a bug: the repository moved to https://svn.code.sf.net/p/w3af/code/trunk
So simply edit the w3af.postinst file (vi /var/lib/dpkg/info/w3af.postinst)
and replace the old URL with
https://svn.code.sf.net/p/w3af/code/trunk
then save the file and run the w3af setup again...
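The substitution can also be done non-interactively with sed. A minimal sketch on a scratch copy of the postinst — the old Sourceforge URL shown here is an assumption, so check what your postinst actually contains before substituting:

```shell
# Demonstrate the URL swap on a scratch copy of w3af.postinst
# (the old URL below is assumed; verify it against your own file).
workdir=$(mktemp -d)
cat > "$workdir/w3af.postinst" <<'EOF'
#!/bin/sh
svn co https://w3af.svn.sourceforge.net/svnroot/w3af/trunk w3af
EOF
# point the checkout at the new repository location
sed -i 's|https://w3af.svn.sourceforge.net/svnroot/w3af/trunk|https://svn.code.sf.net/p/w3af/code/trunk|' "$workdir/w3af.postinst"
grep -q 'svn.code.sf.net/p/w3af/code/trunk' "$workdir/w3af.postinst" && echo patched
```

On the real box you would run the same sed against /var/lib/dpkg/info/w3af.postinst and then re-run the failed configure step with apt-get -f install.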

Related

Executing bteq using SSH

I am trying to run a script over SSH. The files are these:
test.sh
#!/bin/sh
export logfile=/home/OprBCIDe/logout.log
export bteqfile=/home/OprBCIDe/test_bteq.btq
echo "Test BTEQ"
bteq < $bteqfile > $logfile 2>&1
test_bteq.btq
.SET ERROROUT STDOUT;
.RUN FILE = /home/OprBCIDe/logon.ini
.EXPORT data file=/home/OprBCIDe/sample.csv
.SET SEPARATOR '|'
SELECT * FROM ds_edw_temp.TEST SAMPLE 100;
.EXPORT reset
.LOGOFF;
.EXIT;
The logon.ini file has the credentials for Teradata.
When I run the test.sh file on the local machine, it works without problems. But when I use SSH, the command returns this in logout.log:
CLI:Message catalog open failed!: No such file or directory
The file "errmsg.cat" cannot be opened.
There may be problems with your installation.
*** CLI error: -1 Message Not Found!
*** Return code from CLI is: -1
*** Error: Fatal error from CLI.
*** Program exiting!
*** Exiting BTEQ...
*** RC (return code) = 8
I searched for the problem with errmsg.cat, but I found nothing specific to Teradata.
EDIT:
I found the solution. Just export COPERR in test.sh, set to the errmsg.cat location on the remote file system.
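Concretely, that means adding something like this near the top of test.sh. The Teradata client path below is an assumption — locate errmsg.cat on the remote host first (e.g. find / -name errmsg.cat 2>/dev/null). A non-interactive SSH session does not source the login profile that normally sets these variables, which is why the script works locally but not over ssh:

```shell
# COPERR must point at the directory holding errmsg.cat; COPLIB is
# usually set alongside it to the CLI library directory.
# The path below is an assumed install location -- adjust to yours.
export COPERR=/opt/teradata/client/lib
export COPLIB=/opt/teradata/client/lib
echo "COPERR=$COPERR"
```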

sh: 1: /my_path/ompi-1.1/compiler/ompi: permission denied when I run my C program

I have installed a piece of software named "OMPi" (after make, it generated two executable files, ompicc and ompi; you can use ompicc -x file to do something, and ompi is called by ompicc).
When I run the command ompicc ~/Documents/example.c in the directory "/my_path/ompi-1.1/compiler" (ompicc is there, and ompi is in the sub-path "./ompi/"), an error occurs: sh: 1: /my_path/ompi-1.1/compiler/ompi: permission denied. But when I run the same command in any other directory, the error doesn't occur.
sudo chmod -R 777 ompi-1.1 is no use.
I think it may be because the sub-path "./ompi/" has the same name as the file ompi. So I created a directory named "ompi/" in my home path and then ran the above command. To my surprise, the error didn't occur. It seems that the error only occurs when I run the command in the directory /my_path/ompi-1.1/compiler/.
(screenshot: information in terminal)
From the looks of it (I have briefly checked ompi's source code), ompicc expects the ompi program to be in the same directory. It worked fine after you created ompi/ in your home directory because you still had an executable of the same name in the same directory as ompicc. It doesn't work in the directory you specified because the only ompi there is a directory.
Line that does the execution in ompicc.c (the constructed command is then run by a system() call):
sprintf(cmd, "%s%s%s \"%s.pc\" __ompi__%s%s%s%s%s%s%s %s > \"%s\"%s",
usegdb ? "gdb " : "", /* Run gdb instead of running _ompi directly */
RealOmpiName,
usegdb ? " -ex 'set args" : "", /* Pass the arguments */
/* ...further arguments here... */
To confirm that RealOmpiName is 'ompi', I followed the program: RealOmpiName is traced back (through the external symbol OmpiName) to Makefile.am:
-DOmpiName='"_#PACKAGE_TARNAME#"' \
which is then used like this (to install the software):
cp -f ompi $(DESTDIR)$(bindir)/_#PACKAGE_TARNAME#
cp -f ompicc $(DESTDIR)$(bindir)/#PACKAGE_TARNAME#cc
I think the installer wouldn't put the two programs together if it didn't require them to be in the same directory in the first place.
Solution: ompi and ompicc have to be in the same folder/directory.
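The diagnosis is easy to reproduce in isolation; a minimal sketch using a throwaway temp directory, nothing OMPi-specific:

```shell
# Executing a path that is actually a directory fails: dash (Ubuntu's
# /bin/sh, as in the question) reports "Permission denied", while bash
# reports "Is a directory" -- either way, exec of the directory fails,
# which is the same symptom ompicc's system() call hit.
tmpdir=$(mktemp -d)
cd "$tmpdir"
mkdir ompi                 # a directory carrying the executable's name
sh -c './ompi' 2> err.txt  # sh tries (and fails) to run the directory
cat err.txt
```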

Coverity and "Failed to initialize ICU, try using the --prevent-root option"

I have the bin directory in the build directory of my project.
When I run the command ./bin/cov-build --dir cov-int make, I get the following error:
[ERROR] Failed to initialize ICU, try using the --prevent-root option.
Coverity uses ICU to handle multibyte encodings. This requires the ICU data files that ship with the Coverity installation. The error suggests those files are either missing or not in the expected location, and that you should use --prevent-root to tell it where to find them.
Did you only copy the bin directory to your project? That would likely explain the issue, and using --prevent-root to point at the actual Coverity installation should resolve it.
Dockerized version - Minimum packages
I have dockerized Coverity. Since the whole enterprise install comes to 4.9 GB, I managed to get away with a Coverity-for-Python image by selecting only the needed packages through trial and error...
Per Docker's layer/caching rules, Docker writes a layer after executing each instruction. You can therefore design the image as a multi-stage build and copy only the files needed.
After I copied the correct set of directories, the error went away and I could execute cov-configure --python...
Dockerfile
Incomplete source code...
COPY --from=prepare-install /opt/coverity/analysis/bin /opt/coverity/analysis/bin
COPY --from=prepare-install /opt/coverity/analysis/bin/cov-* /opt/coverity/analysis/bin/
COPY --from=prepare-install /opt/coverity/analysis/config/parse_warnings.conf.sample /opt/coverity/analysis/config/parse_warnings.conf.sample
COPY --from=prepare-install /opt/coverity/analysis/config/user_nodefs.h /opt/coverity/analysis/config/user_nodefs.h
COPY --from=prepare-install /opt/coverity/analysis/config/wrapper_escape.conf /opt/coverity/analysis/config/wrapper_escape.conf
# ls /opt/coverity/analysis/config/templates/ | xargs -I {} echo "COPY --from=prepare-install /opt/coverity/analysis/config/templates/{} /opt/coverity/analysis/config/templates/{}"
COPY --from=prepare-install /opt/coverity/analysis/config/templates/python /opt/coverity/analysis/config/templates/python
# File doesn't exist: '/opt/coverity/analysis/config/templates/generic/generic_switches.dat'
COPY --from=prepare-install /opt/coverity/analysis/config/templates/generic /opt/coverity/analysis/config/templates/generic
COPY --from=prepare-install /opt/coverity/analysis/config/templates/generic_linker /opt/coverity/analysis/config/templates/generic_linker
COPY --from=prepare-install /opt/coverity/analysis/config/templates/xlc /opt/coverity/analysis/config/templates/xlc
# Addressing the error
# > [coverity-python 6/6] RUN cov-configure --python: No valid XML DTD catalog found, try using the --prevent-root option.
COPY --from=prepare-install /opt/coverity/analysis/certs /opt/coverity/analysis/certs
COPY --from=prepare-install /opt/coverity/analysis/dtd /opt/coverity/analysis/dtd
COPY --from=prepare-install /opt/coverity/analysis/xsl /opt/coverity/analysis/xsl
# Was failing with https://stackoverflow.com/questions/65184937/fatal-python-error-init-fs-encoding-failed-to-get-the-python-codec-of-the-file
# As it is configured with python3.9, not python3.7 as it is packaged
COPY --from=prepare-install /opt/coverity/analysis/lib/python3.9 /opt/coverity/analysis/lib/python3.9
...
...
Docker Image
```console
$ docker images | more
REPOSITORY                              TAG        IMAGE ID   SIZE
dockerhub.company.com/coverity/python   2022.6.0              605MB
```

How do I solve this path error when trying to run ANSYS with parallel processing using MPI?

I am currently attempting to write a batch file that will open ANSYS Autodyn using MPI on a virtual machine. Whenever I attempt to start the program, however, I get the following message:
WARNING: No cached password or password provided.
use '-pass' or '-cache' to provide password
AVS/Express Developer Edition
Version: 8.0 fcs pc10_64
Project: C:\Program Files\ANSYS Inc\v162\aisol\AUTODYN
--- Error detected in: module: OMopen_file ---
can't find file with name: appl and suffix: v or vo in path: C:\Program
Files\ANSYS Inc\v162\aisol\AUTODYN\v;C:\Program Files\ANSYS
Inc\v162\aisol\AUTODYN;.
MPI Application rank 0 exited before MPI_Init() with status -1
The problem is caused by the fact that the path specified in the last paragraph there should be:
C:\Program Files\ANSYS Inc\v162\aisol\AUTODYN\winx64
The problem is that I cannot find the variable that specifies that path, and so I cannot change it. Does anyone know how to solve this problem? Or am I stuck using just one core for the time being?
The batch file code is:
set MPI_ROOT=C:\Program Files\ANSYS Inc\v162\commonfiles\MPI\Platform\9.1.2.1\winx64
"%MPI_ROOT%\bin\mpirun.exe" -mpi64 -prot -e MPI_WORKDIR="C:\Users\umjonesa\AppData\Roaming\Ansys\v162\AUTODYN" -f applfile.txt
PAUSE
That opens the .txt file called applfile:
-e MPI_FLAGS=y0 -h localhost -np 1 "C:\Program Files\ANSYS Inc\v162\aisol\AUTODYN\winx64\autodyn.exe"
-h localhost -np 3 "C:\Program Files\ANSYS Inc\v162\aisol\AUTODYN\winx64\adslave.exe"
which should open an autodyn window with one master and three slaves.

nagios Check result path is not a valid directory

I'm getting an error when I run the command below:
nagios3 -v /etc/nagios3/nagios.cfg
Error in configuration file '/etc/nagios3/nagios.cfg' - Line 469 (Check result path is not a valid directory) Error processing main config file
So I looked at ls -l /var/lib/nagios3/:
drwxr-x--- 3 nagios nagios 1024 Mar 14 21:13 spool
In this case, why am I getting the error? I think my /var/lib/nagios3/spool/checkresult/check2JcDx5 file probably contains a wrong line. When I run the command below, I get this output:
#cat check2JcDx5
file_time=1363378360
host_name=localhost
service_description=HTTP
check_type=0
check_options=0
scheduled_check=1
reschedule_check=1
latency=0.122000
start_time=1363378360.122234
Disable SELinux:
# getenforce
# setenforce 0
Edit /etc/selinux/config and set SELINUX=disabled.
You may be able to install the nagios-selinux package to add the policy for running Nagios in an SELinux environment. That is better than disabling your existing security.
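For the persistent part of the fix, the edit to /etc/selinux/config is a one-line substitution. A sketch, shown here on a scratch copy so nothing on the live system is touched (on the real host you would edit /etc/selinux/config itself and reboot, or use setenforce 0 for the current session as above):

```shell
# Flip SELINUX=enforcing to disabled (scratch copy of /etc/selinux/config)
workdir=$(mktemp -d)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$workdir/config"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$workdir/config"
grep '^SELINUX=' "$workdir/config"   # now prints SELINUX=disabled
```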
