Understanding double-free mitre.org example - C

I'm trying to understand the following code example found on mitre:
#include <stdio.h>
#include <stdlib.h>   /* malloc, free (missing in the original snippet) */
#include <string.h>   /* strncpy (missing in the original snippet) */
#include <unistd.h>

#define BUFSIZE1 512
#define BUFSIZE2 ((BUFSIZE1/2) - 8)

int main(int argc, char **argv) {
    char *buf1R1;
    char *buf2R1;
    char *buf1R2;

    buf1R1 = (char *) malloc(BUFSIZE2);
    buf2R1 = (char *) malloc(BUFSIZE2);

    free(buf1R1);
    free(buf2R1);

    buf1R2 = (char *) malloc(BUFSIZE1);
    strncpy(buf1R2, argv[1], BUFSIZE1-1);

    free(buf2R1);   /* second free of buf2R1: the double free */
    free(buf1R2);
}
They state that it "should be exploitable on Linux distributions which do not ship with heap-chunk check summing turned on", but they don't explain how. How is it possible?
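In outline (a sketch of the classic dlmalloc "unlink" technique, not something the MITRE page spells out): the first two frees put two adjacent chunks on the free list, the malloc(BUFSIZE1) coalesces them and returns the combined region, and the strncpy then fills memory that still overlaps the old buf2R1 chunk header with bytes from argv[1]. The second free(buf2R1) trusts those now attacker-controlled size/fd/bk fields, and the allocator's unlink step performs roughly these writes:

#include <stddef.h>

/* Sketch of the unlink step an unchecked allocator runs on the second
 * free(buf2R1); fd and bk come from memory that strncpy just filled
 * with argv[1], so both pointers are attacker-controlled. */
typedef struct chunk {
    size_t prev_size, size;
    struct chunk *fd, *bk;      /* forward/backward free-list links */
} chunk;

void unlink_chunk(chunk *p)
{
    chunk *fd = p->fd;          /* attacker-chosen destination */
    chunk *bk = p->bk;          /* attacker-chosen value       */
    fd->bk = bk;                /* arbitrary pointer-sized write, e.g. */
    bk->fd = fd;                /* over a GOT entry, redirecting       */
}                               /* control flow to attacker code      */

With heap-chunk checksumming (or the integrity checks in modern glibc) the corrupted header is detected and the process aborts instead.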

Related

Why can't I access my pointer of char through my function?

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <regex.h>
#include <unistd.h>
#include <ctype.h>
#include <assert.h>

void *process(char **nbE)
{
    char buffer[8] = "test";
    *nbE = &buffer[0];
    printf("%s\n", *nbE);
}

int main(int argc, char **argv)
{
    char *str;
    process(&str);
    printf("%s\n", str);
}
I'm trying to get the value of *nbE in main() by making it point to the address of the first char in my array. But it prints garbage, why? What would be a correct way to do this?
Note: I know I can do it more simply; I have more complex code and this is a minimal example. Basically I have something interesting in my array and want to pass it to my main function through a char* variable.
char buffer[8] = "test";
creates a string that is local to the function; it is destroyed once you return from that function. Do this:
static char buffer[8] = "test";
or
char * buffer = strdup("test");
In the second case you have to release the string when you have finished with it.
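Applying the second suggestion to the original example, a minimal corrected sketch:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void process(char **nbE)
{
    /* strdup allocates a copy on the heap, so it outlives the call */
    *nbE = strdup("test");
    printf("%s\n", *nbE);
}

int main(void)
{
    char *str;
    process(&str);
    printf("%s\n", str);
    free(str);   /* release the duplicated string when done */
    return 0;
}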

Why does using crypt in glibc cause compiler warnings?

I tried to compile the following code (minimal example; see the edit for the whole program):
// a.c
#include <stdio.h>
#define _XOPEN_SOURCE
#include <unistd.h>

int main(int argc, char* argv[])
{
    puts((const char*) crypt("AAAA", "$6$2222"));
    return 0;
}
Compiling with clang-7 -lcrypt a.c emitted the following warnings:
minimum.c:8:24: warning: implicit declaration of function 'crypt' is invalid in C99 [-Wimplicit-function-declaration]
puts((const char*) crypt("AAAA", "$6$2222"));
^
minimum.c:8:10: warning: cast to 'const char *' from smaller integer type 'int' [-Wint-to-pointer-cast]
puts((const char*) crypt("AAAA", "$6$2222"));
^
2 warnings generated.
But ./a.out did seem to work:
$6$2222$6GKY4KPtBqD9jAhwxIZGDqEShaBaw.pkyJxjvSlKmtygDXKQ2Q62CPY98MPIZbz2h6iMCgLTVEYplzp.naYLz1
I found out that if I remove #include <stdio.h> and the puts call, like this:
// new_a.c
#define _XOPEN_SOURCE
#include <unistd.h>

int main(int argc, char* argv[])
{
    crypt("AAAA", "$6$2222");
    return 0;
}
then there are no warnings.
How to fix these warnings without removing #include <stdio.h>?
Edit:
Whole program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define _X_OPEN_SOURCE
#include <unistd.h>
#include <assert.h>

void* Calloc(size_t cnt, size_t size)
{
    void *ret = calloc(cnt, size);
    assert(ret);
    return ret;
}

size_t GetSaltLen(const char *salt)
{
    size_t salt_len = strlen(salt);
    assert(salt_len > 0);
    assert(salt_len <= 16);
    return salt_len;
}

char* GetSaltAndVersion(const char version, const char *salt)
{
    size_t saltlen = GetSaltLen(salt);
    /*
     * The format of the salt:
     * $one_digit_number$up_to_16_characters\0
     * For more info, check man crypt.
     */
    char *ret = (char*) Calloc(1 + 1 + 1 + saltlen + 1, sizeof(char));
    char *beg = ret;
    *beg++ = '$';
    *beg++ = version;
    *beg++ = '$';
    memcpy((void*) beg, (const void*) salt, saltlen + 1);
    return ret;
}

void crypt_and_print(const char *passwd, const char *salt_and_version)
{
    char *result = crypt(passwd, salt_and_version);
    assert(puts(result) != EOF);
}

int main(int argc, char* argv[])
{
    if (argc != 4) {
        fprintf(stderr, "argc = %d\n", argc);
        return 1;
    }
    char *salt_and_version = GetSaltAndVersion(argv[2][0], argv[3]);
    crypt_and_print(argv[1], salt_and_version);
    free(salt_and_version);
    return 0;
}
I have tried what @Andrey Akhmetov suggested and put the #define on the first line, but the warnings did not disappear.
The macro _XOPEN_SOURCE is documented in feature_test_macros(7). In particular, the manpage states:
NOTE: In order to be effective, a feature test macro must be defined before including any header files. This can be done either in the compilation command (cc -DMACRO=value) or by defining the macro within the source code before including any headers.
When you include stdio.h, you indirectly include features.h, which uses the feature test macros as defined at that point. In particular, since _XOPEN_SOURCE and friends aren't defined at that point, unistd.h does not declare crypt.
By the time you define _XOPEN_SOURCE it is too late, since features.h has an include guard preventing it from being included twice.
By swapping the order of the first two lines, the code works without raising this warning on my system:
#define _XOPEN_SOURCE
#include <stdio.h>
#include <unistd.h>
int main(int argc, char* argv[])
{
    puts((const char*) crypt("AAAA", "$6$2222"));
    return 0;
}
Your larger example does not work for a second reason: You wrote _X_OPEN_SOURCE as the name of the macro, while the correct name is _XOPEN_SOURCE.
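Alternatively, as the manpage excerpt above notes, the macro can be defined on the compilation command line instead of in the source, which sidesteps the ordering problem entirely:

clang-7 -D_XOPEN_SOURCE -lcrypt a.c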

How to write an exact 1MB array in C?

I want to initialize an array of size 1MB, since my goal is to finally write that 1MB to a file.
I am curious: every time I use this formula it gives less than 1MB.
int len = (1048576)/sizeof(int);
data = (int *) malloc(len);
What is the correct way? Thank you.
Edit - As per the comments, I have changed the code.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int *data;
    int bytes = (1024*1024);
    data = (int *) malloc(bytes);
    for (int i = 0; i < bytes; i++) {
        data[i] = (int) rand();
        printf("%d", data[i]);
    }
    return 0;
}
After compiling it, I tried dumping the data like below:
mpicc -o a mpiFileSize.c
./a > dump.dat
Now I see the file size of dump.dat. Why is it 2.5MB?
Try this:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <string.h>

int main(int argc, char *argv[]) {
    char *data;
    int bytes = (1024*1024);
    data = (char *) malloc(bytes);
    for (int i = 0; i < bytes; i++) {
        data[i] = (char) rand();
        printf("%c", data[i]);
    }
    return 0;
}
You should use char instead of int: an int is typically 4 bytes, so indexing bytes ints overruns the 1MB allocation (undefined behavior), and printf("%d") emits several decimal characters per value rather than one byte, so the output file size bears no relation to 1MB.
Although it was already properly answered, just a plus to the answer: if one wants to choose the number of MBs to allocate, one could do something like:
#include <stdlib.h>

#define Megabyte (1024 * 1024)

int main(int argc, char** argv)
{
    void* data = malloc(2 * Megabyte);
    // Do your work here...
    free(data);
    return 0;
}
If you want to allocate more than 2 MB, just change the 2.
As already stated, do not use int, as it is larger than 1 byte; use char or unsigned char instead. And as stated in another post, there is no need to cast the result of malloc, since void* converts implicitly to a pointer to any other object type.
see: Why does this code segfault on 64-bit architecture but work fine on 32-bit?
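Since the stated goal is to write exactly 1MB to a file, here is a minimal sketch that writes the buffer with fwrite instead of going through printf and a shell redirect (the file name dump.dat is just an example):

#include <stdio.h>
#include <stdlib.h>

#define MEGABYTE (1024 * 1024)

int main(void)
{
    unsigned char *data = malloc(MEGABYTE);
    if (data == NULL)
        return 1;

    for (int i = 0; i < MEGABYTE; i++)
        data[i] = (unsigned char) rand();   /* one byte per element */

    FILE *f = fopen("dump.dat", "wb");
    if (f == NULL)
        return 1;

    /* fwrite emits exactly MEGABYTE bytes, so the file is exactly 1MB */
    fwrite(data, 1, MEGABYTE, f);
    fclose(f);
    free(data);
    return 0;
}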

How to change the color of xeyes using a path in C?

(All this is on Linux, not Windows.)
Hello, I want to know how I can change the color of xeyes from a C program, like we can do in a terminal:
xeyes -fg blue
Now I want to do this in a C program using the path.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <string.h>
#include <malloc.h>
//#include <windows.h>

#define LB_SIZE 1024

int main(int argc, char *argv[])
{
    char fullPathName[] = "/usr/bin/X11/xeyes";
    char *myArgv[LB_SIZE]; // an array of pointers
    myArgv[0] = (char *) malloc(strlen(fullPathName) + 1);
    strcpy(myArgv[0], fullPathName);
    myArgv[1] = NULL; // last element should be a NULL pointer
    execvp(fullPathName, myArgv);
    exit(0); // should not be reached
}
If I simply call /usr/bin/X11/xeyes it just shows the eyes. Now I am trying to add the option, like /usr/bin/X11/xeyes-fg, but it's not working. Any suggestions?
You can add onto the argument vector, like this:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <malloc.h>

#define LB_SIZE 1024

int main(int argc, char *argv[])
{
    char fullPathName[] = "/usr/bin/X11/xeyes";
    char *myArgv[LB_SIZE]; // an array of pointers
    int n = 0;
    myArgv[0] = (char *) malloc(strlen(fullPathName) + 1);
    strcpy(myArgv[n++], fullPathName);
    myArgv[n++] = "-fg";
    myArgv[n++] = "blue";
    myArgv[n] = NULL; // last element should be a NULL pointer
    execvp(fullPathName, myArgv);
    exit(0); // should not be reached
}
Offhand, I would have expected strace to show the file rgb.txt being opened, but I do not see this using the -f option (I assume it happens in the server). The "blue" does show up in a trace, but only in the exec call, e.g.,
execve("/usr/bin/X11/xeyes", ["/usr/bin/X11/xeyes", "-fg", "blue"], [/* 62 vars */]) = 0

EMSA_PSS_ENCODE with libssl

Hi, I'm trying to use libssl to get EMSA-PSS encoding through the function RSA_padding_add_PKCS1_type_1, but I can't find docs or solutions, so this is the example code I've written:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/rsa.h>
#include <openssl/err.h>

FILE *error_file;

int main()
{
    int lSize;
    const unsigned char *string1 = (unsigned char *)"The pen is on the table";
    unsigned char *stringa = NULL;
    int num = 64;

    if ((stringa = (unsigned char *)OPENSSL_malloc(num)) == NULL)
        fprintf(stderr, "OPENSSL_malloc error\n");
    lSize = strlen((char *)string1);
    fprintf(stdout, "string1 len is %u\n", lSize);

    if (RSA_padding_add_PKCS1_type_1(stringa, num, string1, lSize) != 1)
        fprintf(stderr, "Error: RSA_PADDING error\n");

    error_file = fopen("libssl.log", "w");
    ERR_print_errors_fp(error_file);
    fclose(error_file);

    fprintf(stdout, (char *)stringa);
    fprintf(stdout, "\n");
}
The problem is that I get no output in stringa. I think the function RSA_padding_add_PKCS1_type_1 needs to be initialized somehow, but I can't find how to do it in the sparse docs on the OpenSSL site.
Thanks
See http://www.openssl.org/docs/crypto/RSA_padding_add_PKCS1_type_1.html.
Try defining lSize to (int)strlen(string1) after string1 is set.
EDIT:
Allocate stringa:
unsigned char *stringa = malloc(num);
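As a side note (an observation beyond the original answer): RSA_padding_add_PKCS1_type_1 produces PKCS#1 v1.5 type 1 padding, which begins with a 0x00 byte, so printing the buffer as a C string shows nothing even when the call succeeded. (It is also a different scheme from EMSA-PSS, which OpenSSL handles via RSA_padding_add_PKCS1_PSS.) Dumping the buffer in hex makes the result visible; a minimal sketch:

#include <stdio.h>
#include <string.h>
#include <openssl/rsa.h>

int main(void)
{
    const unsigned char *msg = (const unsigned char *) "The pen is on the table";
    unsigned char padded[64];

    /* Type 1 padding produces: 0x00 0x01 0xFF..0xFF 0x00 <message>,
     * so padded[0] == 0 and "%s" would print an empty string. */
    if (RSA_padding_add_PKCS1_type_1(padded, sizeof padded,
                                     msg, (int) strlen((const char *) msg)) != 1)
        return 1;

    for (size_t i = 0; i < sizeof padded; i++)
        printf("%02x", padded[i]);
    putchar('\n');
    return 0;
}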
