Dynamic memory and I are not friends; I always run into problems with it. The task itself is simple to understand.
Task: Write a function readText that reads an arbitrary text (terminated by return) from the user and returns it as a string. In this first version of the function, assume that the text can't be longer than a certain length (e.g. 1000 characters). After the text has been read, the memory should be shrunk to the minimal needed length.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXC 5

char *readText(int *lengh);

int main()
{
    char *str = malloc(MAXC * sizeof(char));
    if (str == NULL) {
        printf("Kein virtueller RAM mehr verfügbar ...\n");
        return EXIT_FAILURE;
    }
    int length = 0;
    str = readText(&length);
    printf("Text: %s %d %c\n", str, length, *str);
    str = realloc(str, length + 1);
    if (str == NULL) {
        printf("Kein virtueller RAM mehr verfügbar ...\n");
        return EXIT_FAILURE;
    }
    printf("Text: %s\n", str);
    free(str);
    printf("free\n");
    return 0;
}

char *readText(int *lengh)
{
    char *result1;
    char result[MAXC];
    printf("Read Text: ");
    scanf("%s", &result);
    result1 = result;
    *lengh = strlen(result);
    return result1;
}
Results (the garbled string output only started appearing a moment ago; before that I only had a problem with the realloc):
Read Text: hoi
Text: h╠ ` 3 h
Kein virtueller RAM mehr verf³gbar ... (No virtual RAM available)
Process returned 1 (0x1)
My worry is that my program is OK but my RAM is not. If that is the case, or in general, please tell me how to fix RAM problems too. That would be amazing.
Thanks for looking at this and helping me improve.
The function readText returns the address of a local variable: the array result lives on readText's stack frame and ceases to exist when the function returns, so str in main ends up as a dangling pointer (and the block main originally malloc'd is leaked when you overwrite str). Furthermore, you cannot realloc memory that was not obtained from malloc (or calloc, strdup, etc.), and the local array from readText was certainly not obtained from malloc. So the realloc call is undefined behavior and fails.
I am learning about file descriptors by using the open, write and close functions. What I expect is a printf statement outputting the file descriptor after the open function, and another printf statement outputting a confirmation of the text being written. However, I get the following result:
.\File.exe "this is a test"
[DEBUG] buffer # 0x00b815c8: 'this is a test'
[DEBUG] datafile # 0x00b81638: 'C:\Users\____\Documents\Notes'
With a blank space where the further debugging output should be. The code block for the section is:
strcpy(buffer, argv[1]); //copy first vector into the buffer
printf("[DEBUG] buffer \t # 0x%08x: \'%s\'\n", buffer, buffer); //debug buffer
printf("[DEBUG] datafile # 0x%08x: \'%s\'\n", datafile, datafile); //debug datafile
strncat(buffer, "\n", 1); //adds a newline
fd = open(datafile, O_WRONLY|O_CREAT|O_APPEND, S_IRUSR|S_IWUSR); //opens file
if(fd == -1)
{
    fatal("in main() while opening file");
}
printf("[DEBUG] file descriptor is %d\n", fd);
if(write(fd, buffer, strlen(buffer)) == -1) //writing data
{
    fatal("in main() while writing buffer to file");
}
if(close(fd) == -1) //closing file
{
    fatal("in main() while closing file");
}
printf("Note has been saved.");
I basically copied the code word for word from the book I'm studying, so how could it not work?
The problem is that the printf function does not display anything, and the file descriptor is not returned.
Here is the full code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/stat.h>

void usage(char *pnt, char *opnt) //usage function
{
    printf("Usage: %s <data to add to \"%s\">", pnt, opnt);
    exit(0);
}

void fatal(char*); //fatal function for errors
void *ec_malloc(unsigned int); //wrapper for malloc error checking

int main(int argc, char *argv[]) //initiates argument vector/count variables
{
    int fd; //file descriptor
    char *buffer, *datafile;
    buffer = (char*) ec_malloc(100); //buffer given 100 bytes of ec memory
    datafile = (char*) ec_malloc(20); //datafile given 20 bytes of ec memory
    strcpy(datafile, "C:\\Users\\____\\Documents\\Notes");
    if(argc < 2) //if argument count is less than 2 i.e. no arguments provided
    {
        usage(argv[0], datafile); //print usage message from usage function
    }
    strcpy(buffer, argv[1]); //copy first vector into the buffer
    printf("[DEBUG] buffer \t # %p: \'%s\'\n", buffer, buffer); //debug buffer
    printf("[DEBUG] datafile # %p: \'%s\'\n", datafile, datafile); //debug datafile
    strncat(buffer, "\n", 1); //adds a newline
    fd = open(datafile, O_WRONLY|O_CREAT|O_APPEND, S_IRUSR|S_IWUSR); //opens file
    if(fd == -1)
    {
        fatal("in main() while opening file");
    }
    printf("[DEBUG] file descriptor is %d\n", fd);
    if(write(fd, buffer, strlen(buffer)) == -1) //writing data
    {
        fatal("in main() while writing buffer to file");
    }
    if(close(fd) == -1) //closing file
    {
        fatal("in main() while closing file");
    }
    printf("Note has been saved.");
    free(buffer);
    free(datafile);
}

void fatal(char *message)
{
    char error_message[100];
    strcpy(error_message, "[!!] Fatal Error ");
    strncat(error_message, message, 83);
    perror(error_message);
    exit(-1);
}

void *ec_malloc(unsigned int size)
{
    void *ptr;
    ptr = malloc(size);
    if(ptr == NULL)
    {
        fatal("in ec_malloc() on memory allocation");
        return ptr;
    }
}
EDIT: the issue has been fixed. The cause of the bug was that the memory allocated through ec_malloc for datafile was not sufficient, so the path could not be stored. I changed the allocation to 100 bytes and the code now works.
I am not sure which compiler you are using, but the one I tried the code with (GCC) says:
main.c:34:5: warning: ‘strncat’ specified bound 1 equals source length [-Wstringop-overflow=]
34 | strncat(buffer, "\n", 1); //adds a newline
| ^~~~~~~~~~~~~~~~~~~~~~~~
In other words, the call to strncat in your code is highly suspicious. You are trying to append a single line-break character, which has a length of 1, which you pass as the third argument. But strncat expects the third parameter to be the remaining space in buffer, not the length of the string to append.
A correct call would look a bit like this:
size_t bufferLength = 100;
char* buffer = malloc(bufferLength);
buffer[0] = '\0'; /* make it an empty string first; strlen() on uninitialized memory is undefined */
strncat(buffer, "\n", (bufferLength - strlen(buffer) - strlen("\n") - 1));
In this case, however, you are saved, because strncat guarantees that the resulting buffer is NUL-terminated, meaning that it always writes one additional byte beyond the specified size.
All of this is complicated, and a common source of bugs. It's easier to simply use snprintf to build up the entire string at one go:
size_t bufferLength = 100;
char* buffer = malloc(bufferLength);
snprintf(buffer, bufferLength, "%s\n", argv[1]);
Another bug in your code is the ec_malloc function:
void *ec_malloc(unsigned int size)
{
void *ptr;
ptr = malloc(size);
if(ptr == NULL)
{
fatal("in ec_malloc() on memory allocation");
return ptr;
}
}
See if you can spot it: what happens if ptr is not NULL? Well, nothing! The function doesn't return a value in this case; execution just falls off the end.
If you're using GCC (and possibly other compilers) on x86, this code will appear to work fine, because the result of the malloc function will remain in the proper CPU register to serve as the result of the ec_malloc function. But the fact that it just happens to work by the magic of circumstance does not make it correct code. It is subject to stop working at any time, and it should be fixed. The function deserves a return value!
Unfortunately, the GCC compiler is unable to detect this mistake, but Clang does:
<source>:64:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]
}
^
The major bug in your code is a buffer overrun. At the top, you allocate only 20 bytes for the datafile buffer:
datafile = (char*) ec_malloc(20); //datafile given 20 bytes of ec memory
which means it can only store 20 characters. However, you proceed to write in more than 20 characters:
strcpy(datafile, "C:\\Users\\____\\Documents\\Notes");
That string literal is 29 characters with the placeholder username shown (longer with a real one), not including the terminating NUL, so it cannot possibly fit in 20 bytes. A buffer of at least 50 characters would comfortably hold it. With a buffer that is too small, the strcpy function call creates a classic "buffer overrun" error, which is undefined behavior that manifests itself as corrupting your program's memory area and thus premature termination.
Again, when I tried compiling and running the code, GCC reported:
malloc(): corrupted top size
because it detected that you had overrun the dynamically-allocated memory (returned by malloc). It was able to do this because, under the hood, malloc stores sentinel information after the allocated memory block, and your overwriting of the allocated space had written over its sentinel information.
The whole code is a bit suspect; it was not written by someone who knows C very well, nor was it debugged or reviewed by anyone else.
There is no real need to use dynamic memory allocation here in order to allocate fixed-size buffers. If you're going to use dynamic memory allocation, then allocate the actual amount of space that you need. Otherwise, if you're allocating fixed-size buffers, then just allocate on the stack.
Don't bother with complex string-manipulation functions when you can get away with simply using snprintf.
And as a bonus tip: when debugging problems, try to reduce the code down as small as you can get it. None of the file I/O stuff was related to this problem, so when I was analyzing this code, I replaced that whole section with:
printf("[DEBUG] file descriptor is %d\n", 42);
Once the rest of the code is working, I can go back and add the real code back to that section, and then test it. (Which I didn't do, because I don't have a file system handy to test this.)
I am trying to make a program that creates a directory in which multiple directories are created, then in each directory I am creating a file. I cannot seem to open those "multiple directories" so that I can put my file there. I tried using name3 as a parameter, and I also tried creating a const char* with name3's value and nothing worked.
error: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed. Aborted (core dumped)
Here is my code:
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

int make_directory(char *name) {
    int checker = mkdir(name, S_IRWXU | S_IRWXG | S_IRWXO);
    return checker;
}

char** getNames() {
    char** names = malloc(10 * sizeof(char*));
    for(int i = 0; i < 10; i++) {
        if(i == 9) {
            names[i] = malloc(3 * sizeof(char));
            names[i][0] = '1';
            names[i][1] = '0';
            names[i][2] = '\0';
        } else {
            names[i] = malloc(2 * sizeof(char));
            names[i][0] = 49 + i;
            names[i][1] = '\0';
        }
    }
    return names;
}

int makeTenDirs() {
    char **names = getNames();
    char *name2;
    for(int i = 0; i < 10; i++) {
        name2 = NULL;
        name2 = getcwd(NULL, 0);
        strncat(name2, "/input/dir", 11);
        strncat(name2, names[i], 1);
        int s = make_directory(name2);
    }
    name2 = NULL;
    name2 = getcwd(NULL, 0);
    strncat(name2, "/input/dir", 11);
    strncat(name2, names[0], 1);
    strncat(name2, "0", 2);
    int s = make_directory(name2);
}

int main() {
    char **names = getNames();
    FILE *file;
    DIR *dir;
    DIR *dir2;
    struct dirent *dent;
    char *name1 = "./input";
    char *name3;
    int proceed = make_directory("./input");
    if(proceed == -1) {
        printf("Error making the directory\n");
    }
    makeTenDirs();
    dir = opendir("./input");
    if(dir != NULL) {
        name3 = getcwd(NULL, 0);
        while((dent = readdir(dir)) != NULL) {
            if(strcmp(dent->d_name, "..") != 0 && strcmp(dent->d_name, ".") != 0) {
                name3 = getcwd(NULL, 0);
                strncat(name3, "/input/", 8);
                strncat(name3, dent->d_name, 10);
                printf("%s\n", name3);
                dir2 = opendir(name3);
                if(dir2 != NULL) {
                    printf("alo");
                }
            }
        }
    }
    closedir(dir);
    free(names);
    return 0;
}
Any tips on how to open the directories (and maybe put the files in them)?
Below is a working implementation. I cleaned up all the warnings (you had some unused variables, and makeTenDirs was not returning a value). Always look at and fix the warnings; use the -Wall -Wextra flags to enable them.

As I suspected, you were invoking undefined behavior by overwriting the buffers of name2 and name3. The way you were using getcwd, it was allocating exactly enough space for name2 and name3. As soon as you strcat to that, you overrun the buffer, invoking UB. At that point, the program can behave in completely unpredictable ways, including appearing to work. You should hope your program crashes when there's UB, so you're alerted to the problem.

Below, I've used a second operating mode of getcwd that doesn't malloc memory internally and instead keeps everything in automatic storage (on the stack). This relieves you of having to manage the memory manually. I've included comments that hopefully explain everything; let me know if you have questions.
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <linux/limits.h> // for PATH_MAX

int make_directory(char *name) {
    int checker = mkdir(name, S_IRWXU | S_IRWXG | S_IRWXO);
    return checker;
}

char** getNames() {
    char** names = malloc(10 * sizeof(char*));
    for(int i = 0; i < 10; i++) {
        if(i == 9) {
            names[i] = malloc(3 * sizeof(char));
            names[i][0] = '1';
            names[i][1] = '0';
            names[i][2] = '\0';
        } else {
            names[i] = malloc(2 * sizeof(char));
            names[i][0] = '1' + i;
            names[i][1] = '\0';
        }
    }
    return names;
}

int makeTenDirs(char** names) {
    // This is where your problems began. As I suggested before and confirmed
    // when I actually ran the code, `getcwd(NULL, 0)` returns a pointer to
    // dynamically allocated memory that's just big enough to hold the path.
    // As soon as you strcat to that, you overflow the buffer, causing
    // undefined behavior. It crashed for me the 2nd time through the loop, not
    // immediately when the UB occurred. Your results could be entirely different;
    // that is the essence of UB. To fix it, I declare an automatic array
    // PATH_MAX large (4096, I believe). That should make this fixed-size array
    // able to handle any path on your _linux_ box (this code will not be portable
    // to Windows). Alternatively, you could do what you did before, just be sure
    // to realloc the name2 memory _before_ strcat'ing so there's enough room for
    // "/input/dir".
    char name2[PATH_MAX];
    // The only acceptable place for one-character variable names is loop index
    // variables, and you'll even get some argument on that. Give your variables
    // descriptive names (although I'd probably just make this function void; not
    // much utility in the return value here, which you didn't even return!)
    int directoryMade = 0;
    for(int i = 0; i < 10; i++) {
        // This returns NULL if `sizeof name2` (== PATH_MAX == 4096) is too
        // small to hold the path. That shouldn't be the case, since no paths
        // on the system should exceed PATH_MAX, but it's always a good idea
        // to check for errors; that's what a lot of your C code should be
        // doing, so get in the habit. Also, realize your old way created
        // a memory leak each time, since each call to getcwd would malloc more
        // memory, and you overwrote the pointer to the previous block with the
        // pointer to the new block. Then nothing is pointing at the previous
        // block, so you can't free it --> memory leak.
        if (getcwd(name2, sizeof name2) == NULL)
        {
            perror("Path exceeded buffer length");
            exit(-1);
        }
        // should be plenty of space to strcat now
        strncat(name2, "/input/dir", 11);
        strncat(name2, names[i], 1);
        // not much utility b/c it keeps getting overwritten. You could check it
        // each time and return if there's an error
        directoryMade = make_directory(name2);
    }
    if (getcwd(name2, sizeof name2) == NULL)
    {
        perror("Path exceeded buffer length");
        exit(-1);
    }
    strncat(name2, "/input/dir", 11);
    strncat(name2, names[0], 1);
    strncat(name2, "0", 2);
    directoryMade = make_directory(name2);
    return directoryMade;
}

int main() {
    char **names = getNames();
    DIR *dir;
    DIR *dir2;
    struct dirent *dent;
    int proceed = make_directory("./input");
    if(proceed == -1) {
        // you print an error here, but continue on anyway
        // as if there was no error
        printf("Error making the directory\n");
    }
    makeTenDirs(names); // you already fetched names, use them!
    dir = opendir("./input");
    if(dir != NULL) {
        // same problem here. name3 holds _exactly_ how much space it
        // needs when you allow getcwd to malloc memory for it. As before,
        // you can make name3 an array in automatic storage, or realloc it
        // before strcat'ing
        char name3[PATH_MAX];
        // check for NULL return here too. Not showing it because I'm getting lazy
        getcwd(name3, sizeof name3);
        while((dent = readdir(dir)) != NULL) {
            if(strcmp(dent->d_name, "..") != 0 && strcmp(dent->d_name, ".") != 0) {
                // check for NULL return
                getcwd(name3, sizeof name3);
                strncat(name3, "/input/", 8);
                strncat(name3, dent->d_name, 10);
                printf("%s\n", name3);
                dir2 = opendir(name3);
                if(dir2 != NULL) {
                    // printf is line-buffered, so this won't print right away unless
                    // you put a newline on it (or fflush(stdout);)
                    printf("alo\n");
                    // close this dir too?
                    closedir(dir2);
                }
            }
        }
    }
    closedir(dir);
    // this is NOT a complete free. You have a double pointer. You need to loop and
    // free each of names[0], names[1], .. names[9], _then_ free(names). I'll leave
    // that as an exercise. In general, you should have a matching number of
    // malloc's and free's.
    free(names);
    return 0;
}
For a class, I've been given the task of writing radix sort in parallel using pthreads, OpenMP, and MPI. My language of choice here is C -- I don't know C++ too well.
Anyway, the way I'm reading the text file causes a segmentation fault at around a 500MB file size. The files are line-separated 32-bit numbers:
12351
1235234
12
53421
1234
I know C, but I don't know it well; I use things I know, and in this case the things I know are terribly inefficient. My code for reading the text file is as follows:
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>
int main(int argc, char **argv){
    if(argc != 4) {
        printf("rs_pthreads requires three arguments to run\n");
        return -1;
    }
    char *fileName = argv[1];
    uint32_t radixBits = atoi(argv[2]);
    uint32_t numThreads = atoi(argv[3]);
    if(radixBits > 32){
        printf("radixBits cannot be greater than 32\n");
        return -1;
    }
    FILE *fileForReading = fopen(fileName, "r");
    if(fileForReading == NULL){
        perror("Failed to open the file\n");
        return -1;
    }
    char *charBuff = malloc(1024);
    if(charBuff == NULL){
        perror("Error with malloc for charBuff");
        return -1;
    }
    uint32_t numNumbers = 0;
    while(fgetc(fileForReading) != EOF){
        numNumbers++;
        fgets(charBuff, 1024, fileForReading);
    }
    uint32_t numbersToSort[numNumbers];
    rewind(fileForReading);
    int location;
    for(location = 0; location < numNumbers; location++){
        fgets(charBuff, 1024, fileForReading);
        numbersToSort[location] = atoi(charBuff);
    }
At a file of 50 million numbers (~500MB), I'm getting a segmentation fault at rewind of all places. My knowledge of how file streams work is almost non-existent. My guess is it's trying to malloc without enough memory or something, but I don't know.
So, I've got a two parter here: How is rewind segmentation faulting? Am I just doing a poor job before rewind and not checking some system call I should be?
And, what is a more efficient way to read in an arbitrary amount of numbers from a text file?
Any help is appreciated.
I think the most likely cause here is (ironically enough) a stack overflow. Your numbersToSort array is allocated on the stack, and the stack has a fixed size (varies by compiler and operating system, but 1 MB is a typical number). You should dynamically allocate numbersToSort on the heap (which has much more available space) using malloc():
uint32_t *numbersToSort = malloc(sizeof(uint32_t) * numNumbers);
Don't forget to deallocate it later:
free(numbersToSort);
I would also point out that your first-pass loop, which is intended to count the number of lines, will fail if there are any blank lines. This is because on a blank line, the first character is '\n', and fgetc() will consume it; the next call to fgets() will then be reading the following line, and you'll have skipped the blank one in your count.
The problem is in this line
uint32_t numbersToSort[numNumbers];
You are attempting to allocate a huge array on the stack, and the stack is typically only a few megabytes (moreover, older C standards don't allow variable-length arrays at all). So you can try this:
uint32_t *numbersToSort; /* Declare it with other declarations */
/* Remove uint32_t numbersToSort[numNumbers]; */
/* Add the code below */
numbersToSort = malloc(sizeof(uint32_t) * numNumbers);
if (!numbersToSort) {
/* No memory; do cleanup and bail out */
return 1;
}
I am having trouble trying to figure out why my program cannot save more than 2GB of data to a file. I cannot tell if this is a programming or environment (OS) problem. Here is my source code:
#define _LARGEFILE_SOURCE
#define _LARGEFILE64_SOURCE
#define _FILE_OFFSET_BITS 64

#include <math.h>
#include <time.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/*-------------------------------------*/
//for file mapping in Linux
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/types.h>
/*-------------------------------------*/

#define PERMS 0600
#define NEW(type) (type *) malloc(sizeof(type))
#define FILE_MODE (S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)

void write_result(char *filename, char *data, long long length){
    int fd, fq;
    fd = open(filename, O_RDWR|O_CREAT|O_LARGEFILE, 0644);
    if (fd < 0) {
        perror(filename);
        return;
    }
    if (ftruncate(fd, length) < 0)
    {
        printf("[%d]-ftruncate64 error: %s\n", errno, strerror(errno));
        close(fd);
        return;
    }
    fq = write(fd, data, length);
    close(fd);
    return;
}

main()
{
    long long offset = 3000000000; // 3GB
    char *ttt;
    ttt = (char *)malloc(sizeof(char) * offset);
    printf("length->%lld\n", strlen(ttt)); // length=0
    memset(ttt, 1, offset);
    printf("length->%lld\n", strlen(ttt)); // length=3GB
    write_result("test.big", ttt, offset);
    return 1;
}
According to my tests, the program can generate a file larger than 2GB and can allocate that much memory as well.
The weird thing happens when I try to write the data into the file: the file ends up empty, when it is supposed to be filled with 1s.
Can anyone be kind enough to help me with this?
You need to read a little more about C strings and what malloc and calloc do.
In your original main ttt pointed to whatever garbage was in memory when malloc was called. This means a nul terminator (the end marker of a C String, which is binary 0) could be anywhere in the garbage returned by malloc.
Also, since malloc does not touch every byte of the allocated memory (and you're asking for a lot) you could get sparse memory which means the memory is not actually physically available until it is read or written.
calloc allocates and fills the allocated memory with 0. It is a little more prone to fail because of this (it touches every byte allocated, so if the OS left the allocation sparse it will not be sparse after calloc fills it.)
Here's your code with fixes for the above issues.
You should also always check the return value from write and react accordingly. I'll leave that to you...
main()
{
    long long offset = 3000000000; // 3GB
    char *ttt;
    //ttt = (char *)malloc(sizeof(char) *offset);
    ttt = (char *)calloc( sizeof( char ), offset ); // instead of malloc( ... )
    if( !ttt )
    {
        puts( "calloc failed, bye bye now!" );
        exit( 87 );
    }
    printf("length->%lld\n", strlen(ttt)); // length=0 (This now works as expected if calloc does not fail)
    memset( ttt, 1, offset );
    ttt[offset - 1] = 0; // Now it's nul terminated and the printf below will work
    printf("length->%lld\n", strlen(ttt)); // length=3GB
    write_result("test.big", ttt, offset);
    return 1;
}
Note to Linux gurus... I know sparse may not be the correct term. Please correct me if I'm wrong as it's been a while since I've been buried in Linux minutiae. :)
Looks like you're hitting the file system's internal limitation on the iDevice: ios - Enterprise app with more than resource files of size 2GB
2GB+ files are simply not possible there. If you need to store that amount of data, you should consider using some other tools or write a file chunk manager.
I'm going to go out on a limb here and say that your problem may lie in memset().
The best thing to do, I think, is to validate the data after memset()ing it:
for (unsigned long i = 0; i < 3000000000; i++) {
    if (ttt[i] != 1) {
        printf("error in data at location %lu", i);
        break;
    }
}
Once you've validated that the data you're trying to write is correct, then you should look into writing a smaller file such as 1GB and see if you have the same problems. Eliminate each and every possible variable and you will find the answer.
I can get the address of the end of the heap with sbrk(0), but is there any way to programmatically get the address of the start of the heap, other than by parsing the contents of /proc/self/maps?
I think parsing /proc/self/maps is the only reliable way on Linux to find the heap segment. And do not forget that some allocators (including the one in my SLES) use mmap() for large blocks, so that memory isn't part of the heap anymore and can be at any random location.
Otherwise, ld normally adds a symbol that marks the end of all segments in the ELF file, and that symbol is called _end. E.g.:
extern void *_end;
printf( "%p\n", &_end );
It matches the end of .bss, traditionally the last segment of an ELF binary. After that address, with some alignment, the heap normally follows. Stack(s) and mmap()s (including the shared libraries) live at the higher addresses of the address space.
I'm not sure how portable it is, but apparently it works the same way on Solaris 10. On HP-UX 11 the map looks different and the heap appears to be merged with the data segment, but allocations do happen after _end. On AIX, procmap doesn't show a heap/data segment at all, but allocations also get addresses past the _end symbol. So it seems to be quite portable at the moment.
Though, all considered, I'm not sure how useful that is.
P.S. The test program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> /* for sleep() */

char *ppp1 = "hello world";
char ppp0[] = "hello world";
extern void *_end; /* any type would do, only its address is important */

int main()
{
    void *p = calloc(10000, 1);
    printf( "end:%p heap:%p rodata:%p data:%p\n", &_end, p, ppp1, ppp0 );
    sleep(10000); /* sleep to give a chance to look at the process memory map */
    return 0;
}
You may call sbrk(0) to get the start of the heap, but you have to make sure no memory has been allocated yet.
The best way to do this is to save the return value at the very beginning of main(). Note that many functions allocate memory under the hood, so a call to sbrk(0) after a printf, a memory utility like mtrace, or even a call to putenv will already return an advanced value.
Although much of what you can find says that the heap starts right after .bss, I am not sure what lies in the gap between _end and the first break. Reading there seems to result in a segmentation fault.
The difference between the first break and the first address returned by malloc is, among (probably) other things:
the head of the memory double-linked list, including the next free block
a structure prefixed to the malloc'd block including:
the length of this block
the address of the previous free block
the address of the next free block
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
void print_heap_line();

int main(int argc, char const *argv[])
{
    char *startbreak = sbrk(0);
    printf("pid: %d\n", getpid()); // printf is allocating memory
    char *lastbreak = sbrk(0);
    printf("heap: [%p - %p]\n", startbreak, lastbreak);
    long pagesize = sysconf(_SC_PAGESIZE);
    long diff = lastbreak - startbreak;
    printf("diff: %ld (%ld pages of %ld bytes)\n", diff, diff/pagesize, pagesize);
    print_heap_line();
    printf("\n\npress a key to finish...");
    getchar(); // gives you a chance to inspect /proc/pid/maps yourself
    return 0;
}

void print_heap_line() {
    int mapsfd = open("/proc/self/maps", O_RDONLY);
    if(mapsfd == -1) {
        fprintf(stderr, "open() failed: %s.\n", strerror(errno));
        exit(1);
    }
    char maps[BUFSIZ] = "";
    if(read(mapsfd, maps, BUFSIZ) == -1){
        fprintf(stderr, "read() failed: %s.\n", strerror(errno));
        exit(1);
    }
    if(close(mapsfd) == -1){
        fprintf(stderr, "close() failed: %s.\n", strerror(errno));
        exit(1);
    }
    char *line = strtok(maps, "\n");
    while((line = strtok(NULL, "\n")) != NULL) {
        if(strstr(line, "heap") != NULL) {
            printf("\n\nfrom /proc/self/maps:\n%s\n", line);
            return;
        }
    }
}
pid: 29825
heap: [0x55fe05739000 - 0x55fe0575a000]
diff: 135168 (33 pages of 4096 bytes)
from /proc/self/maps:
55fe05739000-55fe0575a000 rw-p 00000000 00:00 0 [heap]
press a key to finish...