Consistently Getting Null Value in C String using getcwd - c

I am trying to make a simple program that just writes your working directory to a file, and I cannot, for the life of me, figure out what I am doing wrong. No matter what I do, my buffer ends up holding null after my call to getcwd(). I suspect it may have to do with permissions, but allegedly Linux has done some wizardry to ensure that getcwd almost never has access problems (keyword, "almost"). Can anyone test it on their machine? Or is there an obvious bug I am missing?
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
    printf("Error is with fopen if stops here\n");
    FILE* out_file = fopen("dir_loc.sh","w+");
    char* loc = malloc(sizeof(char)*10000);
    size_t size = sizeof(loc);
    printf("Error is with cwd if stops here\n");
    loc = getcwd(loc,size);
    printf("%s",loc);
    fprintf(out_file,"cd %s",loc);
    printf("Error is with fclose if stops here\n");
    free(loc);
    fclose(out_file);
    return 0;
}
compiled with gcc main.c (the file is named "main.c")
EDIT: As was mentioned by different posters, sizeof(loc) was taking the size of a char pointer, not the size of the space allocated to that pointer. Changed size to sizeof(char)*10000 (the amount actually allocated) and it all works gravy.

Your problem is here:
size_t size = sizeof(loc);
You're getting the size of a char pointer, not the size of the memory you allocated for your buffer.
Change it to:
size_t size = sizeof(char) * 10000;
or even to
size_t size = 10000;
since sizeof(char) is guaranteed to be 1.
And since you're using size in your subsequent call to getcwd, you're telling it the buffer is only 4 or 8 bytes long (the size of a pointer), which is too little space to store most paths, so the NULL result is unsurprising.
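To see the difference concretely, here is a minimal standalone sketch (my own illustration): sizeof on an array reports the array's size, but sizeof on a pointer reports only the pointer's own size, so the amount you passed to malloc has to be tracked separately.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char stack_buf[10000];
    char *heap_buf = malloc(10000);
    printf("sizeof(stack_buf) = %zu\n", sizeof(stack_buf)); // 10000: arrays keep their size
    printf("sizeof(heap_buf)  = %zu\n", sizeof(heap_buf));  // 4 or 8: just the pointer
    // There is no way to ask a pointer how much was malloc'd; you must track that number yourself.
    free(heap_buf);
    return 0;
}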
If you don't want to go changing multiple numbers in the code every time you make a change, you can use a #define text replacement to solve that.
Like this:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define LOC_ARRAY_SIZE 10000 // Here you define the array size
int main(int argc, char *argv[])
{
    printf("Error is with fopen if stops here\n");
    FILE* out_file = fopen("dir_loc.sh","w+");
    char* loc = malloc(sizeof(char)*LOC_ARRAY_SIZE); // sizeof(char) could be omitted
    size_t size = sizeof(char)*LOC_ARRAY_SIZE;
    printf("Error is with cwd if stops here\n");
    loc = getcwd(loc,size);
    printf("%s",loc);
    fprintf(out_file,"cd %s",loc);
    printf("Error is with fclose if stops here\n");
    free(loc);
    fclose(out_file);
    return 0;
}
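As an aside (this relies on a glibc extension, not anything in the question's code), you can also let getcwd allocate the buffer for you and avoid picking a size at all. A minimal sketch:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void)
{
    // glibc extension: with a NULL buffer and size 0, getcwd mallocs a buffer
    // large enough for the path; the caller must free it.
    char *loc = getcwd(NULL, 0);
    if (loc == NULL) {
        perror("getcwd");
        return 1;
    }
    printf("cd %s\n", loc);
    free(loc);
    return 0;
}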

Related

How am I supposed to successfully achieve buffer overflow?

I am currently tackling an assignment where I need to upload exploit.c and target.c onto an Ubuntu server and successfully achieve a buffer overflow attack from exploit against target. I was provided a shellcode. Now, target.c is not to be altered, just exploit.c. I had to use GDB on the exploit to set a breakpoint on foo() from target.c and figure out the return addresses using info frame.
I was provided with the working shellcode, and minimal instructions.
I am pretty sure I was able to successfully pull the return addresses, but my issue is that I cannot figure out what code to put into exploit.c to have it successfully perform a buffer overflow attack. I was also instructed that one of the return addresses must be input into the exploit code for it to function properly.
I understand that the exploit is trying to call back to the return address, to then push itself into the buffer, so I can obtain access to the shell.
Here is exploit.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "shellcode.h"
// replace this define environment to have the correct path of your own target code
#define TARGET "/*******************"
int main(void)
{
    char *args[3];
    char *env[2];
    char *tmp = NULL;
    // Creating an input buffer that can cause buffer overflow in strcpy function in the target.c executable code
    int buffSize = 1000;
    char buff[buffSize];
    // Initialize buffer elements to 0x01
    int i;
    for (i=0; i < buffSize; i++) buff[i] = 0x01;
    // write your code below to fill the 22 bytes shellcode into the buff variable, and
    // at the correct location overwrite the return address correctly in order to achieve stack overflow
    // Your own code starts here:
    strcpy (buff[buffSize-22], shellcode);
    // Your code ends here.
    // prepare command line input to execute target code
    args[0] = TARGET; // you must have already compiled and generated the target executable code first
    args[1] = buff;   // the first input parameter to the target code (artfully crafted buffer overflow string)
    args[2] = NULL;
    env[0] = "FOO=bar";
    env[1] = NULL;
    if (0 > execve(TARGET, args, env))
        fprintf(stderr, "execve failed.\n");
    return 0;
}
Here is the target.c code
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int foo(char* arg)
{
    char localBuf[240];
    short len = 240;
    float var1=2.4;
    int *ptr = NULL;
    strcpy(localBuf, arg);
    printf("foo() finishes normally.\n");
    return 0;
}

int kbhit(void)
{
    struct timeval tv;
    fd_set read_fd;
    tv.tv_sec=0; tv.tv_usec=0;
    FD_ZERO(&read_fd); FD_SET(0,&read_fd);
    if(select(1, &read_fd, NULL, NULL, &tv) == -1)
        return 0;
    if(FD_ISSET(0,&read_fd))
        return 1;
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        fprintf(stderr, "target: argc != 2\n");
        exit(EXIT_FAILURE);
    }
    printf("Press any key to call foo function...\n");
    while(!kbhit())
        ;
    foo(argv[1]);
    return 0;
}
I compiled both target and exploit. Then I ran GDB on exploit and set a breakpoint using "break target.c:10". Using info frame I was able to obtain the return addresses.
I used strcpy, because it is essentially the only line of code we were taught for this section involving overflow attacks, even though it clearly states in the document: "Fill the shell executable code (in the string array shellcode[]) byte-by-byte into the buff for your modified return address to execute, do not use strcpy() because shellcode[] is not an ASCII string (and not copying NULL byte, too)."
Exploit compiles fine, and it runs fine, but it does not give me access to a shell. I was instructed that I would know it worked if I was presented with two dollar signs ($$) instead of one ($).
I am a network engineer, and I am not entirely savvy with C or with attacking vulnerabilities in programs, so any help would be appreciated. The entire lesson revolves around "stack overflow", but this assignment is called "buffer overflow attack".
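A note on that strcpy point: shellcode[] is raw bytes rather than an ASCII string, and strcpy(buff[buffSize-22], shellcode) also passes a char where strcpy expects a pointer, so it cannot copy the payload correctly. A raw byte copy with memcpy (or a hand-written byte loop) is what the quoted instructions describe. Below is a minimal, self-contained sketch of just that buffer-building step; the shellcode bytes, the offset, and the return-address value are all placeholders you would replace with values from your own GDB session.
#include <stdio.h>
#include <string.h>
int main(void)
{
    // Placeholder values -- in the real assignment the shellcode comes from shellcode.h
    // and the offset/return address come from your own "info frame" output in GDB.
    unsigned char shellcode[22] = { 0x90 };   // placeholder bytes, not real shellcode
    unsigned long ret_addr = 0xdeadbeef;      // placeholder return address
    int ret_addr_offset = 500;                // placeholder offset of the saved return address
    int buffSize = 1000;
    char buff[1000];

    memset(buff, 0x01, buffSize);
    // memcpy copies raw bytes and does not stop at NUL bytes the way strcpy does.
    memcpy(&buff[buffSize - sizeof(shellcode)], shellcode, sizeof(shellcode));
    memcpy(&buff[ret_addr_offset], &ret_addr, sizeof(ret_addr));

    printf("buffer prepared (%d bytes)\n", buffSize);
    return 0;
}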

Reading files to shared memory

I am reading a binary file that I want to offload directly to the Xeon Phi through Cilk and shared memory.
Since we are reading a fairly large amount of binary data at once each time, the preferred option is to use fread.
So if I make a very simple example, it would go like this:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
_Cilk_shared uint8_t* _Cilk_shared buf;
int main(int argc, char **argv) {
    printf("Argv is %s\n", argv[1]);
    FILE* infile = fopen(argv[1], "rb");
    buf = (_Cilk_shared uint8_t*) _Offload_shared_malloc(2073600);
    int len = fread(buf, 1, 2073600, infile);
    if(ferror(infile)) {
        perror("ferror");
    }
    printf("Len is %d and first value of buf is %d\n", len, *buf);
    return 0;
}
The example is greatly simplified from the real code, but it is enough to exemplify the behavior.
This code would then return
ferror: Bad address
Len is 0 and first value of buf is 0
However, if we switch out the fread for an fgets (not very suitable for reading binary data, especially given its return value), things work great.
That is, if we switch to fgets((char *) buf, 2073600, infile); and drop len from the printout, we get
first value of buf is 46
That fits with what we need, and I can run _Offload_cilk on a function with buf as an argument and do work on it.
Is there something I am missing, or is fread just not supported? I've tried to find as much info on this as I can from both Intel and other sites on the internet, but I have sadly been unable to find anything.
----EDIT----
After more research into this, it seems that calling fread on the shared memory with a size higher than 524287 (524287 is exactly 19 bits) produces the error above. At 524287 or lower things work, and you can run as many fread calls as you want and read all the data.
I am utterly unable to find any reason written anywhere for this.
I don't have a Phi, so I am unable to see if this would make a difference -- but fread has its own buffering, and while that may be turned off for this type of reading, I don't see why you would go through the overhead of fread rather than just using the lower-level open & read calls, like:
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>
#include <stdint.h>
_Cilk_shared uint8_t* _Cilk_shared buf;
int main(int argc, char **argv) {
    printf("Argv is %s\n", argv[1]);
    int infile = open(argv[1], O_RDONLY); // should test if open ok, but skip to make code similar to OP's
    int len, pos = 0, size = 2073600;
    buf = (_Cilk_shared uint8_t*) _Offload_shared_malloc(size);
    do {
        buf[pos] = 0; // force the address to be mapped to process memory before read
        len = read(infile, &buf[pos], size);
        if(len < 0) {
            perror("error");
            break;
        }
        if(len == 0)  // end of file -- avoid looping forever on a short file
            break;
        pos += len;   // move position forward in case we have not read all the data in the first read
        size -= len;
    } while (size > 0);
    printf("Len is %d (%d) and first value of buf is %d\n", len, pos, *buf);
    return 0;
}
read & write should work with shared memory allocated this way, without the problem you are seeing.
Can you try to insert something like this before the fread calls?
memset(buf, 0, 2073600); // after including string.h
This trick worked for me, but I don't know why (lazy allocation?).
FYI, you can also post a MIC question on this forum.
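As a follow-up note based purely on the 524287-byte threshold reported in the EDIT (I can't say why that limit exists), another workaround sketch is to keep using fread but issue it in chunks at or below that size; the chunk value here is simply the limit you observed.
#include <stdio.h>
#include <stdint.h>

// Read 'total' bytes into 'dst' using fread calls no larger than 'chunk' bytes.
// Returns the number of bytes actually read.
static size_t chunked_fread(uint8_t *dst, size_t total, size_t chunk, FILE *f)
{
    size_t done = 0;
    while (done < total) {
        size_t want = total - done;
        if (want > chunk)
            want = chunk;
        size_t got = fread(dst + done, 1, want, f);
        done += got;
        if (got < want)          // EOF or error -- caller can check ferror(f)
            break;
    }
    return done;
}

// Usage with the buffer from the question (chunk size is the observed safe limit):
//   int len = (int) chunked_fread((uint8_t *) buf, 2073600, 524287, infile);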

open system calls in C on linux

There are probably several problems with the code below. I found it online after searching for a way to get keyboard input in Linux. I've verified the correct event device for keyboard input. The reason it seems fishy to me is that regardless of what I put in the file path, it always seems to pass the error check (the open call returns something greater than 0). Something is obviously wrong, so suggestions are welcome.
This won't run correctly unless you run the executable as su.
When I want to read in my keystrokes, do I just use something like fgets on the file descriptor in an infinite while loop (would that even work)? I want it to be constantly polling for keyboard input. Any tips on decoding the input from the keyboard event?
Thanks again! This project of mine may be overly ambitious, as it's been a really long time since I've done any coding.
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
#include <fcntl.h>
#include <linux/input.h>
#include <unistd.h>
// Edit this line to reflect your filepath
#define FILE_PATH "/dev/input/event4"
int main()
{
    printf("Starting KeyEvent Module\n");
    size_t file; //will change this to int file; to make it possible to be negative
    const char *str = FILE_PATH;
    printf("File Path: %s\n", str);
    // error check here
    if((file = open(str, O_RDONLY)) < 0)
    {
        printf("ERROR:File can not open\n");
        exit(0);
    }
    struct input_event event[64];
    size_t reader;
    reader = read(file, event, sizeof(struct input_event) * 64);
    printf("DO NOT COME HERE...\n");
    close(file);
    return 0;
}
the problem is here:
size_t file;
size_t is unsigned, so it will always be >=0
it should have been:
int file;
the open call returns something greater than 0
open returns int, but you put it in an unsigned variable (size_t is unsigned), so you fail to detect when it is < 0
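To answer the polling/decoding part of the question: you read fixed-size struct input_event records from the descriptor with read (not fgets), and key presses show up as events with type EV_KEY, where code is the key and value is 1 for press, 0 for release, 2 for autorepeat. A minimal sketch, assuming the same /dev/input/event4 path from the question:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>
int main(void)
{
    int fd = open("/dev/input/event4", O_RDONLY);   // same device path as in the question
    if (fd < 0) {
        perror("open");
        return 1;
    }
    struct input_event ev;
    // read blocks until an event arrives, so this loop is a simple way to poll forever
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY && ev.value == 1)     // 1 = press, 0 = release, 2 = autorepeat
            printf("key code %d pressed\n", ev.code);
    }
    close(fd);
    return 0;
}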

C Linux - Not Writing Integers

I don't know why I keep having trouble writing an integer to a file.
Here's the code:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h> // for write() and close()
int main (int argc, char* argv[]) {
    int fd, w;
    int num=80;
    fd=open ("file3.txt", O_CREAT|O_WRONLY, 0777);
    if (fd>0) {
        w=write (fd, &num, sizeof (int));
        if (w==-1) {
            printf ("Writing Error \n");
            return EXIT_FAILURE;
        }
    }
    close (fd);
    return EXIT_SUCCESS;
}
Does anyone know what it could be?
Thanks a lot...
You're writing binary values to the file, not ASCII. If you want ASCII in the file, you need to sprintf the number into a char buffer first and then write that buffer. Or open your file with fopen instead of open and use fprintf.
P.S. You want close(fd) inside your if (fd > 0) { block. Also, technically the only error return of open is -1; any other value (including 0) is success.
From your comments, it is working 100% correctly: 'P' happens to be decimal 80.
write() is outputting the bytes of the integer, not a decimal representation.
You might want to look at fopen and fprintf as an easy way to get what it looks like you are expecting.
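To make the difference concrete, here is a small sketch (my own illustration, not part of the original answers) that writes the same number both ways: the first file contains the raw bytes of the int (so 80 shows up as 'P' plus padding), while the second contains the characters "80".
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main(void)
{
    int num = 80;

    // Binary: the 4 raw bytes of the int (0x50 0x00 0x00 0x00 on little-endian), i.e. 'P' plus NULs.
    int fd = open("file_binary.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd >= 0) {
        write(fd, &num, sizeof(int));
        close(fd);
    }

    // Text: the characters '8' and '0', which is what you see when you cat the file.
    FILE *fp = fopen("file_text.txt", "w");
    if (fp != NULL) {
        fprintf(fp, "%d", num);
        fclose(fp);
    }
    return 0;
}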

Why can't my program save a large amount (>2GB) to a file?

I am having trouble trying to figure out why my program cannot save more than 2GB of data to a file. I cannot tell if this is a programming or environment (OS) problem. Here is my source code:
#define _LARGEFILE_SOURCE
#define _LARGEFILE64_SOURCE
#define _FILE_OFFSET_BITS 64
#include <math.h>
#include <time.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/*-------------------------------------*/
//for file mapping in Linux
#include<fcntl.h>
#include<unistd.h>
#include<sys/stat.h>
#include<sys/time.h>
#include<sys/mman.h>
#include<sys/types.h>
/*-------------------------------------*/
#define PERMS 0600
#define NEW(type) (type *) malloc(sizeof(type))
#define FILE_MODE (S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)
void write_result(char *filename, char *data, long long length){
    int fd, fq;
    fd = open(filename, O_RDWR|O_CREAT|O_LARGEFILE, 0644);
    if (fd < 0) {
        perror(filename);
        return -1;
    }
    if (ftruncate(fd, length) < 0)
    {
        printf("[%d]-ftruncate64 error: %s/n", errno, strerror(errno));
        close(fd);
        return 0;
    }
    fq = write (fd, data,length);
    close(fd);
    return;
}

main()
{
    long long offset = 3000000000; // 3GB
    char * ttt;
    ttt = (char *)malloc(sizeof(char) *offset);
    printf("length->%lld\n",strlen(ttt)); // length=0
    memset (ttt,1,offset);
    printf("length->%lld\n",strlen(ttt)); // length=3GB
    write_result("test.big",ttt,offset);
    return 1;
}
According to my test, the program can generate a file larger than 2GB and can allocate that much memory as well.
The weird thing happened when I tried to write data into the file. I checked the file and it is empty, even though it is supposed to be filled with 1s.
Can anyone be kind and help me with this?
You need to read a little more about C strings and what malloc and calloc do.
In your original main, ttt pointed to whatever garbage was in memory when malloc was called. This means a nul terminator (the end marker of a C string, which is binary 0) could be anywhere in the garbage returned by malloc.
Also, since malloc does not touch every byte of the allocated memory (and you're asking for a lot), you could get sparse memory, which means the memory is not actually physically available until it is read or written.
calloc allocates and fills the allocated memory with 0. It is a little more prone to fail because of this (it touches every byte allocated, so if the OS left the allocation sparse, it will not be sparse after calloc fills it).
Here's your code with fixes for the above issues.
You should also always check the return value from write and react accordingly. I'll leave that to you...
main()
{
    long long offset = 3000000000; // 3GB
    char * ttt;
    //ttt = (char *)malloc(sizeof(char) *offset);
    ttt = (char *)calloc( sizeof( char ), offset ); // instead of malloc( ... )
    if( !ttt )
    {
        puts( "calloc failed, bye bye now!" );
        exit( 87 );
    }
    printf("length->%lld\n",strlen(ttt)); // length=0 (This now works as expected if calloc does not fail)
    memset( ttt, 1, offset );
    ttt[offset - 1] = 0; // Now it's nul terminated and the printf below will work
    printf("length->%lld\n",strlen(ttt)); // length=3GB
    write_result("test.big",ttt,offset);
    return 1;
}
Note to Linux gurus... I know sparse may not be the correct term. Please correct me if I'm wrong as it's been a while since I've been buried in Linux minutiae. :)
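One more note on checking write's return value: it matters here for more than error handling, because write may transfer fewer bytes than requested, and on Linux a single write call transfers at most about 2 GiB (0x7ffff000 bytes), so a 3 GB request cannot complete in one call. A sketch of a loop that write_result could use instead of its single write:
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

// Write 'length' bytes from 'data' to 'fd', retrying until done or a real error occurs.
// Returns 0 on success, -1 on error.
static int write_all(int fd, const char *data, long long length)
{
    long long done = 0;
    while (done < length) {
        ssize_t n = write(fd, data + done, (size_t)(length - done));
        if (n < 0) {
            if (errno == EINTR)
                continue;          // interrupted, just retry
            perror("write");
            return -1;
        }
        done += n;                 // write may return less than requested
    }
    return 0;
}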
Looks like you're hitting the internal file system's limitation for the iDevice: ios - Enterprise app with more than resource files of size 2GB
2GB+ files are simply not possible there. If you need to store that amount of data, you should consider using some other tools or writing a file chunk manager.
I'm going to go out on a limb here and say that your problem may lie in memset().
The best thing to do here is, I think, to validate the data after memset()ing it:
for (unsigned long i = 0; i < 3000000000; i++) {
    if (ttt[i] != 1) { printf("error in data at location %lu", i); break; }
}
Once you've validated that the data you're trying to write is correct, then you should look into writing a smaller file such as 1GB and see if you have the same problems. Eliminate each and every possible variable and you will find the answer.
