I would like to run a bitcode file built with AddressSanitizer, but when I run it with lli I get a segmentation fault.
$ cat sample.c
#include <stdlib.h>
void *p;
int main() {
    p = malloc(7);
    return 0;
}
$ clang -emit-llvm -fsanitize=address -c -g sample.c
$ lli sample.bc
Stack dump:
0. Program arguments: lli sample.bc
0 lli 0x000000010c112d9c llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 37
1 lli 0x000000010c11319e SignalHandler(int) + 192
2 libsystem_platform.dylib 0x00007fff603e2b3d _sigtramp + 29
3 libsystem_platform.dylib 0x0000000000000000 _sigtramp + 2680280288
4 lli 0x000000010be3ff74 llvm::ExecutionEngine::runStaticConstructorsDestructors(llvm::Module&, bool) + 310
5 lli 0x000000010beac842 llvm::MCJIT::runStaticConstructorsDestructors(bool) + 388
6 lli 0x000000010bb715c6 main + 8866
7 libdyld.dylib 0x00007fff601f7ed9 start + 1
Segmentation fault: 11
Sanitized code requires special runtime support, which is implemented in the ASan runtime library. lli does not load this library by default (because users normally don't need it), so you need to request it explicitly via LD_PRELOAD=libasan.so.VER. Note that libasan.so is the GCC convention; for Clang you may need something like libclang_rt.asan.XXX. You can determine the full library paths via:
GCC_ASAN_PRELOAD=$(gcc -print-file-name=libasan.so)
CLANG_ASAN_PRELOAD=$(clang -print-file-name=libclang_rt.asan-x86_64.so)
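For example, on Linux you could then run the sanitized bitcode like this (a sketch only: the exact runtime library name depends on your Clang version and target, and since your stack trace is from macOS, note that the analogous variable there is DYLD_INSERT_LIBRARIES):
$ LD_PRELOAD=$(clang -print-file-name=libclang_rt.asan-x86_64.so) lli sample.bc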
In this code, I call a function whose return value is -1, but when it is assigned to an int64_t, the value obtained is 4294967295 instead of -1, while when it is assigned to an int32_t, it is -1. The return value of zip_name_locate is of type int (4 bytes on my system). Why is that?
#include <inttypes.h>
#include <stdio.h>
#include <zip.h>
int main() {
    const char *path = "/home/www/api/default/current/public/static/doc/test.xlsx";
    int error = ZIP_ER_NOENT;
    zip_t *zip = zip_open(path, ZIP_RDONLY, &error);
    int32_t n = zip_name_locate(zip, "xl/worksheets/_rels/sheet2.xml.rels", ZIP_FL_NODIR);
    printf("%d\n", n);
    int64_t j = zip_name_locate(zip, "xl/worksheets/_rels/sheet2.xml.rels", ZIP_FL_NODIR);
    printf("%" PRId64 "\n", j);
    return 0;
}
output:
-1
4294967295
This is my system information:
➜ ~ uname -r
3.10.0-1062.12.1.el7.x86_64
➜ ~ cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
➜ ~ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Thanks for answering, here are some answers to your questions:
About libzip: this is related to a bug I encountered in the process, so I need to use libzip.
My system is 64-bit CentOS 7.7.
zip_name_locate does return the zip_int64_t type.
I printed n and j in gdb: n is -1 and j is 4196160. It seems that gdb cannot print int64_t values correctly, but this output indicates that j is not -1.
We used an old version of libzip in a certain environment and it caused a bug, so we wanted to find the root cause; we used the bundled version 0.11.
PRId64 is "ld" on my system.
Both int64_t j = -1; and int64_t j = (int64_t)((zip_int64_t)-1); convert successfully.
sizeof(long) is 8.
I made a mistake. On my system I have both an old version and a new version of libzip, but when I tried to link against the old version with the -L flag, the new version was actually linked. I compiled with
gcc -L /usr/lib64 -lzip test1.c -o test
where /usr/lib64 is where the old version's shared library is located.
On my system there are libzip library files under both /usr/lib64 and /usr/local/lib64: the old version under /usr/lib64 and the new version under /usr/local/lib64:
ls -lh /usr/local/lib64/libzip.so*
lrwxrwxrwx 1 root root 11 Jun 1 22:21 /usr/local/lib64/libzip.so -> libzip.so.5
lrwxrwxrwx 1 root root 13 Jun 1 22:21 /usr/local/lib64/libzip.so.5 -> libzip.so.5.3
-rwxr-xr-x 1 root root 162K Jun 1 23:18 /usr/local/lib64/libzip.so.5.3
ls -lh /usr/lib64/libzip.so*
-rwxr-xr-x 1 root root 57K Jun 2 00:02 /usr/lib64/libzip.so
lrwxrwxrwx 1 root root 11 Jun 2 00:07 /usr/lib64/libzip.so.2 -> libzip.so.5
-rwxr-xr-x 1 root root 57K Jun 2 00:02 /usr/lib64/libzip.so.2.1.0
-rwxr-xr-x 1 root root 57K Jun 2 00:02 /usr/lib64/libzip.so.5
I learned that objdump can show which shared libraries a binary depends on, so I checked; the following is the output:
objdump -p test | grep so
NEEDED libzip.so.2
NEEDED libc.so.6
required from libc.so.6:
Then I checked with ldconfig and found that libzip.so.2 points to the new version:
ldconfig -v | grep libzip
libzip.so.5 -> libzip.so.5.3
libzip.so.2 -> libzip.so.5
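Note: ldd test | grep libzip would also show directly which file libzip.so.2 resolves to at runtime, which is a quicker way to catch this kind of mixup.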
So my question was based on a wrong assumption from the beginning, which led to conclusions that made no sense. With a new version of libzip, the return value of zip_name_locate is zip_int64_t, which is int64_t on my system. When the return value 4294967295 of that type is assigned to an int32_t, it is truncated, so n ends up as -1 while j is 4294967295.
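A minimal standalone illustration of that conversion (no libzip needed; note that converting 4294967295 to int32_t is implementation-defined, and yields -1 on typical two's-complement systems):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int64_t j = 4294967295;  /* the value the mismatched call produced */
    int32_t n = (int32_t)j;  /* low 32 bits are 0xFFFFFFFF, i.e. -1 */
    printf("%" PRId64 " %d\n", j, n);  /* prints: 4294967295 -1 */
    return 0;
}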
I installed mpich using brew install mpich, but when I use MPI_Barrier, I get a segmentation fault. See the simple code below:
// A.c
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Hello, world. I am %d of %d\n", rank, nprocs);
    fflush(stdout);
    MPI_Finalize();
    return 0;
}
mpicc A.c -g -O0 -o A
After running mpirun -n 2 ./A, I got the error below:
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 60914 RUNNING AT pivotal.lan
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
The detailed stack from lldb -c /cores/core.60914:
(lldb) target create --core "core.60914"
warning: (x86_64) /cores/core.60914 load command 82 LC_SEGMENT_64 has a fileoff + filesize (0x27d3b000) that extends beyond the end of the file (0x27d3a000), the segment will be truncated to match
warning: (x86_64) /cores/core.60914 load command 83 LC_SEGMENT_64 has a fileoff (0x27d3b000) that extends beyond the end of the file (0x27d3a000), ignoring this section
Core file '/cores/core.60914' (x86_64) was loaded.
(lldb) bt
* thread #1: tid = 0x0000, 0x000000010176f432 libpmpi.12.dylib`MPID_Request_create + 244, stop reason = signal SIGSTOP
* frame #0: 0x000000010176f432 libpmpi.12.dylib`MPID_Request_create + 244
frame #1: 0x000000010178d2fa libpmpi.12.dylib`MPID_Isend + 152
frame #2: 0x0000000101744d6f libpmpi.12.dylib`MPIC_Sendrecv + 351
frame #3: 0x00000001016861df libpmpi.12.dylib`MPIR_Barrier_intra + 401
frame #4: 0x00000001016866f2 libpmpi.12.dylib`MPIR_Barrier + 67
frame #5: 0x0000000101686789 libpmpi.12.dylib`MPIR_Barrier_impl + 90
frame #6: 0x00000001016860fb libpmpi.12.dylib`MPIR_Barrier_intra + 173
frame #7: 0x00000001016866f2 libpmpi.12.dylib`MPIR_Barrier + 67
frame #8: 0x0000000101686789 libpmpi.12.dylib`MPIR_Barrier_impl + 90
frame #9: 0x00000001015a8ed9 libmpi.12.dylib`MPI_Barrier + 820
frame #10: 0x0000000101590ed8 a.out`main(argc=1, argv=0x00007fff5e66fa40) + 88 at b.c:11
frame #11: 0x00007fff8f7805ad libdyld.dylib`start + 1
The usage is copied from the official guide. What's the problem with the MPI_Barrier implementation in libmpi.12.dylib? Thanks.
I have the following program on MinGW, GCC 4.9.2:
#include <stdio.h>
#include <stdint.h>
#define VECSIZE 32
typedef char byteVec __attribute__ ((vector_size (VECSIZE)));
#define PERMLEFT_BVEC (byteVec){63,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30}
byteVec permute(byteVec x, byteVec y) {
    return __builtin_shuffle(x, y, PERMLEFT_BVEC);
}

void print_vec32b(byteVec a) {
    printf("[ ");
    int i;
    for (i = 0; i < 32; ++i) printf("%d ", a[i]);
    puts("]");
}

int main() {
    byteVec x = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32};
    byteVec y = {11,12,13,14,15,16,17,18,19,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,88,89,90,91,92};
    byteVec z = permute(x, y);
    print_vec32b(z);  /* print the permuted vector */
    return 0;
}
When I compile this program with -m64, it crashes. With -m32 it works fine; the optimization level doesn't matter. I don't understand what's going on. I've also tried TDM-GCC with GCC 5.1.0: same thing. Does anybody have any advice? Is it something screwy with GCC on Windows?
Here is the assembly produced by the compiler (note how the shuffle is turned into a permutation automatically, with vperm2i128 and vpalignr, which is the desired behavior):
GCC Explorer
Minimal program: above.
Desired behavior: print permuted byte vector (which it does in 32-bit mode).
Expected output (works in 32-bit mode):
$ gcc nvec.c -m32 -mavx2 -o a.exe && a.exe
[ 92 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 ]
Actual result: crash.
Description of crash: immediate crash; the Windows error-reporting dialog shows up. No compiler errors or warnings.
I wrote a simple C program that makes use of the assert() call. I'd like to analyze the resulting core file using lldb.
OS in use: OS X Mavericks
Compiler used to compile:
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix
The -g compiler option generated a .dSYM directory. I want to know how to analyze this core using lldb.
PS: I have compiled using the -g option (clang -g test.c)
Start lldb and then execute the command
target create --core /cores/core.NNNN
where "/cores/core.NNNN" is your core file. A simple example:
$ lldb
(lldb) target create --core /cores/core.5884
Core file '/cores/core.5884' (x86_64) was loaded.
Process 0 stopped
* thread #1: tid = 0x0000, 0x00007fff8873c866 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGSTOP
frame #0: 0x00007fff8873c866 libsystem_kernel.dylib`__pthread_kill + 10
libsystem_kernel.dylib`__pthread_kill + 10:
-> 0x7fff8873c866: jae 0x7fff8873c870 ; __pthread_kill + 20
0x7fff8873c868: movq %rax, %rdi
0x7fff8873c86b: jmpq 0x7fff88739175 ; cerror_nocancel
0x7fff8873c870: ret
(lldb) bt
* thread #1: tid = 0x0000, 0x00007fff8873c866 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGSTOP
frame #0: 0x00007fff8873c866 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x00007fff85de835c libsystem_pthread.dylib`pthread_kill + 92
frame #2: 0x00007fff87554bba libsystem_c.dylib`abort + 125
frame #3: 0x00007fff8751ea5f libsystem_c.dylib`__assert_rtn + 321
frame #4: 0x000000010c867f59 a.out`main(argc=1, argv=0x00007fff53398c50) + 89 at prog.c:7
frame #5: 0x00007fff872b65fd libdyld.dylib`start + 1
(lldb) frame select 4
frame #4: 0x000000010c867f59 a.out`main(argc=1, argv=0x00007fff53398c50) + 89 at prog.c:7
4 int main(int argc, char **argv)
5 {
6 int i = 0;
-> 7 assert(i != 0);
8 return 0;
9 }
10
(lldb) p i
(int) $0 = 0
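For completeness, a minimal test.c matching the backtrace above might look like the sketch below (core dumps must be enabled, e.g. with ulimit -c unlimited, for the core file to appear under /cores):

#include <assert.h>

int main(int argc, char **argv)
{
    int i = 0;
    assert(i != 0);  /* fails, calls abort(), and the process dumps core */
    return 0;
}

Compile with clang -g test.c, run ./a.out, and then load the resulting /cores/core.NNNN as shown above.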
At the command prompt, in the same directory where you have the symbols directory, type
lldb program-name
then use the commands you want, as in this official GDB-to-LLDB command map:
lldb-gdb
A modern system:
% pacman -Q glibc gcc
glibc 2.16.0-4
gcc 4.7.1-6
% uname -sr
Linux 3.5.4-1-ARCH
A trivial program:
% < wtf.c
void main(){}
Let's do static and dynamic builds:
% gcc -o wtfs wtf.c -static
% gcc -o wtfd wtf.c
Everything looks fine:
% file wtf?
wtfd: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x4b421af13d6b3ccb6213b8580e4a7b072b6c7c3e, not stripped
wtfs: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID[sha1]=0x1f2a9beebc0025026b89a06525eec5623315c267, not stripped
Could anybody explain this to me?
% for n in $(seq 1 10); do ./wtfd; echo $?; done | xargs
0 0 0 0 0 0 0 0 0 0
% for n in $(seq 1 10); do ./wtfs; echo $?; done | xargs
128 240 48 128 128 32 64 224 160 48
Sure, one can use int main(). And -Wmain will issue a warning (return type of ‘main’ is not ‘int’).
I'd just like to understand what is going on there.
That's EXACTLY the point.
There is no "void main()". There is ALWAYS a result value, and if you don't return one and don't do anything in your program, the return value is whatever happens to be in the appropriate register (specifically, whatever happens to be there when main is called from the startup code and is then left untouched). That can certainly depend on what the program is doing before main, such as dealing with shared libs.
EDIT: To get an idea of how this can happen, try this:
int foo(void)
{
    return 55;
}

void main(void)
{
    foo();
}
There is no guarantee, of course, but there's a good chance that this program will have an exit code of 55, simply because that's the last value returned by some function. Just imagine that call happened before main.
To further illustrate what Christian is saying: even though you declared void main(), your process will return whatever value was previously in eax (since you are on the Linux x86 architecture).
void main() {
    asm("movl $55, %eax");
}
So now it always returns 55, because the above code explicitly initializes eax.
$ cc rval.c
$ ./a.out
$ echo $?
55
Again, this example will only work on the current major OSes, since I am assuming the calling convention. There is no reason an OS could not have a different calling convention, with the return value somewhere else (RAM, a different register, whatever).