I am working on an embedded system (ARM Cortex-M3) where I do not have access to any sort of "standard library". In particular, I do not have access to malloc.
I have a function void doStuff(uint8_t *buffer) that accepts a pointer to a 512-bit (64-byte) buffer. I have tried doing the following:
uint8_t buffer[64] = {0};
doStuff((uint8_t *) &buffer));
but I'm not getting the expected results. Am I doing something wrong? Is there any alternative approach?
doStuff(buffer) should be fine, since buffer already decays to a uint8_t * when passed to a function.
Aside from this, you have one closing parenthesis too many after &buffer in your example.
If buffer is of variable size, you should pass the size into doStuff too; if it's of constant size, I'd still pass the size in case you change it one day.
This being said, you should do it the following way:
uint8_t buffer[64] = {0};
int len = 64;
doStuff(buffer, len);
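For completeness, a minimal sketch of what the adjusted prototype might look like; the body is only a placeholder, since the question doesn't show what doStuff actually does:
#include <stdint.h>

/* hypothetical two-argument version of doStuff */
void doStuff(uint8_t *buffer, int len)
{
    for (int i = 0; i < len; i++) {
        buffer[i] = 0xFF;   /* placeholder body */
    }
}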
A simplemalloc(): have a char mem[MAXMEM]; and a struct freetable. Then write your own simplemalloc() that finds a big enough chunk of memory in the freetable and returns the offset into mem. simplefree() would then adjust the freetable.
EDIT:
If you need a lot of malloc()s, you may even divide your static mem into different chunks for different tasks (one chunk for exactly 100-byte allocations, one for the size of your favorite struct, and so on); this will speed up finding free memory.
If you are short of memory, you should implement a bestmatch() in simplemalloc(), which as a bad side effect slows execution down.
If you have enough memory, you can implement a debug version which "allocates" a bit more memory and puts XXX before the start and after the end of the simplemalloc()ed block. On free() you can check whether this XXX has been overwritten, so you know you have a buffer overflow or underflow to deal with.
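A minimal sketch of that idea, assuming a first-fit strategy and illustrative names (MAXMEM, MAXBLOCKS) that aren't in the original answer; a real version would also need to handle alignment and fragmentation:
#include <stddef.h>

#define MAXMEM    4096
#define MAXBLOCKS 32

static char mem[MAXMEM];

static struct {
    size_t offset;   /* start of the block inside mem */
    size_t size;     /* block size in bytes */
    int    used;     /* 1 = allocated, 0 = previously freed */
} freetable[MAXBLOCKS];

static size_t next_free = 0;   /* first byte of mem never handed out yet */

void *simplemalloc(size_t size)
{
    /* first, try to reuse a freed block that is big enough (first fit) */
    for (int i = 0; i < MAXBLOCKS; i++) {
        if (!freetable[i].used && freetable[i].size >= size) {
            freetable[i].used = 1;
            return &mem[freetable[i].offset];
        }
    }
    /* otherwise carve a new block off the end of the pool */
    if (next_free + size > MAXMEM)
        return NULL;                      /* out of memory */
    for (int i = 0; i < MAXBLOCKS; i++) {
        if (freetable[i].size == 0) {     /* unused table slot */
            freetable[i].offset = next_free;
            freetable[i].size = size;
            freetable[i].used = 1;
            next_free += size;
            return &mem[freetable[i].offset];
        }
    }
    return NULL;                          /* free table exhausted */
}

void simplefree(void *ptr)
{
    size_t offset = (size_t)((char *)ptr - mem);
    for (int i = 0; i < MAXBLOCKS; i++) {
        if (freetable[i].size != 0 && freetable[i].offset == offset) {
            freetable[i].used = 0;        /* block can be reused later */
            return;
        }
    }
}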
I'm trying to convert a wchar_t* to a char*, and my memory-wasting solution was:
char *wstrtostr(const wchar_t *text) {
    size_t size = wcslen(text) * sizeof(wchar_t) + 1;
    char *sa = malloc(size);
    wcstombs(sa, text, size);
    return sa;
}
A wide character might convert to a single-byte or a multi-byte sequence, and wcslen counts wide characters regardless of how many chars each one converts to.
The question is how can we determine the equivalent char size for a wchar so that we can build an alternative to wcslen for this specific problem and consequently determine the size required to build our char pointer?
To answer what you asked: you can repeatedly call wcstombs with a slowly increasing byte count until everything gets stored. I'm not sure how efficient that is for what you seem to want to do, though. Maybe you would want a different approach:
Allocate some memory. Call wcsrtombs. If src doesn't end up being NULL then you ran out of memory so realloc and call wcsrtombs again from where it left off last time.
Depending on your data you can build a heuristic for how much memory to allocate in the first place so reallocing is rare.
Update: It turns out that if you are running under Linux, and don't require portability or C99 compliance, there is another method. If you call wcstombs with NULL as the destination, it will return the number of bytes that would have been required. You can then allocate this number of bytes and call wcstombs again. Which approach is better will depend on your circumstances, specifically the length of the string and how good your heuristic is at guessing the correct length on the first go. Also, just to reiterate, if your code needs to be portable then this is a non-standard API. Thanks to melpomene for the pointer.
Second update: wcsrtombs does support, according to C99, having its dest pointer set to NULL to get the length required for the output buffer. Thanks to Story Teller for that. So you could call that once with NULL, and then a second time with an appropriately sized buffer.
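For illustration, a minimal two-pass sketch of that idea using wcsrtombs with a NULL destination first; the function name wstrtostr2 and the bare-bones error handling are my own additions:
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

char *wstrtostr2(const wchar_t *text) {
    const wchar_t *src = text;
    mbstate_t state;
    memset(&state, 0, sizeof state);

    /* first pass: dest == NULL, wcsrtombs only reports the length needed */
    size_t needed = wcsrtombs(NULL, &src, 0, &state);
    if (needed == (size_t)-1)
        return NULL;                 /* unconvertible character */

    char *sa = malloc(needed + 1);
    if (sa == NULL)
        return NULL;

    /* second pass: actually convert into the right-sized buffer */
    src = text;
    memset(&state, 0, sizeof state);
    wcsrtombs(sa, &src, needed + 1, &state);
    return sa;
}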
I am trying to debug a piece of code written by someone else that results in a segfault sometimes, but not all the time, during a memcpy operation.
Also, I would dearly appreciate it if anyone could give me a hand in translating what's going on in a piece of code that occurs before the memcpy.
First off, we have a function into which is being passed a void pointer and a pointer to a struct, like so:
void ExampleFunction(void *dest, StuffStruct *buf)
The struct looks something like this:
typedef struct {
    char *stuff;
    unsigned int totalStuff;
    unsigned int stuffSize;
    unsigned int validStuff;
} StuffStruct;
Back to ExampleFunction. Inside ExampleFunction, this is happening:
void *src;
int numStuff;
numStuff = buf->validStuff;
src = (void *)(buf->stuff);
I'm confused by the above line. What happens exactly when the char array in buf->stuff gets cast to a void pointer, then set as the value of src? I can't follow what is supposed to happen with that step.
Right after this, the memcpy happens:
memcpy(dest, src, buf->bufSize*numStuff)
And that's where the segfault often happens. I've checked for dest/src being null, neither are ever null.
Additionally, in the function that calls ExampleFunction, the array for dest is declared with a size of 5000, if that matters. However, when I printf the value of buf->bufSize*numStuff in the above code, it is often well above 5000 -- it can go as high as 80,000 -- WITHOUT segfaulting, though. That is, it runs fine with the length argument (buf->bufSize*numStuff) being much larger than the size the dest array was declared with. However, maybe that doesn't matter since it was cast to a void pointer?
For various reasons I'm unable to use gdb or install an IDE. I'm just using basic printf debugging. Does anyone have any ideas I could explore? Thank you in advance.
First of all, the cast and assignment just copies the address of buf->stuff into the pointer src. There is no magic there.
numStuff = buf->validStuff;
src = (void *)(buf->stuff);
If dest has only enough storage for 5000 bytes, and you are trying to write beyond that length, then you are corrupting your program stack, which can lead to a segfault either on the copy or sometimes a little later. Whether you cast to a void pointer or not makes no difference at all.
memcpy(dest, src, buf->bufSize*numStuff)
I think you need to figure out exactly what buf->bufSize*numStuff is supposed to be computing, and either fix it if it is incorrect (not intended), truncate the copy to the size of the destination, or increase the size of the destination array.
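If you go the truncation route, a small helper like the following keeps the copy inside the destination; copy_clamped is a made-up name, not something from the original code, and 5000 in the usage comment is the size mentioned in the question:
#include <string.h>

/* copy at most dest_size bytes from src to dest; returns bytes copied */
static size_t copy_clamped(void *dest, size_t dest_size, const void *src, size_t want)
{
    size_t n = (want < dest_size) ? want : dest_size;
    memcpy(dest, src, n);
    return n;
}

/* e.g. copy_clamped(dest, 5000, src, (size_t) buf->bufSize * numStuff); */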
A null-pointer dereference is not the only thing that can cause a segfault. When your program allocates memory, it is also possible to trigger a segfault when you attempt to access memory that is after the regions of memory that you have allocated.
Your code looks like it intends to copy the contents of a buffer pointed to by buf->stuff to a destination buffer. If either of those buffers are smaller than the size of the memcpy operation, the memcpy can be overrunning the bounds of allocated memory and triggering a segfault.
Because the memory allocator allocates memory in large chunks, and then divvies it up to various calls to malloc, your code won't consistently fail every time you run past the end of a malloc'ed buffer. You will get exactly the sporadic failure behavior you described.
The assumption baked into this code is that both the buffer pointed to by buf->stuff and the buffer pointed to by dest are at least buf->bufSize * numStuff bytes long. At least one of those two assumptions is false.
I would suggest a couple of approaches:
Check the code that allocates both the buffer pointed to by dest and the buffer pointed to by buf->stuff, and ensure that both are always at least as big as buf->bufSize * numStuff.
Failing that, there are a bunch of tools that can help you get better diagnostic information from your program. The simplest to use is efence ("Electric Fence") that will help identify places in your code where you overrun any of your buffers. (http://linux.die.net/man/3/efence). A more thorough analysis can be done using valgrind (http://valgrind.org/) -- but Valgrind is a bit more involved to use.
Good luck!
PS. There's nothing special about casting a char* pointer to a void* pointer -- it's still just an address to an allocated block of memory.
Is there a max size for a char buffer? I have a program that is collecting strings for a char buffer and writing it to a proc file. After a certain point it appears to stop writing things - is there too much in there? What is that max size so I can work around this?
Here is code. This is an LKM - is limits.h available from kernel space?
Foremost:
const char* input = "hooloo\n";
Next:
int read_info( char *page, char **start, off_t off, int count, int *eof, void *data )
{
    unsigned int mem;
    char answer_buf[strlen(input) + 1 + 14];

    name_added = vmalloc(strlen(input) + 1 + 14);
    strcpy(name_added, input);
    strcat(name_added, extension);

    mem = sprintf(answer_buf, "%s\n", name_added);
    memcpy(page, answer_buf, mem);

    return strlen(answer_buf) + 1;
}
Throughout my code there are things like this that reallocate the buffer and add to it. Also, that read_info is for the proc file. The issue is that I keep adding to that buffer with the code above over and over and over - eventually I cat my proc file and the text cuts off - it doesn't go on forever like I want )-=.
There's no concrete maximum size "in C" specifically. Theoretical (or "potential") maximum size of any object on a C platform is determined by the implementation and is usually derived from the properties of the underlying machine platform and OS.
On platforms with flat memory model it will typically be limited by the size of the address space in theory, and by the size of the available free memory (or that specific kind) in practice.
On platforms with a segmented memory model it might be limited by the segment size, which is smaller than the address space. Implementations are free to get around that limit by "emulating" a flat memory model in code, so on such platforms the maximum object size can also depend on compilation settings.
The only maximum size for a dynamically allocated char buffer will be available system memory.
A buffer on the stack will have its size constrained by maximum stack size. This will vary greatly depending on host OS.
When writing data to file, are you checking the size returned by fwrite and calling it repeatedly to write the remainder of the buffer if necessary?
You have a memory leak in your code!
The following memory is never freed:
name_added = vmalloc(strlen(input) + 1 + 14);
I don't understand why you allocate memory for the output at all.
And you do it twice, both on the stack and on the heap.
The caller has provided a buffer for the output. Use it!
Don't create copies!
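To illustrate, here is a rough sketch of what that suggestion might look like; it assumes the same input and extension globals as the question, and that the whole string fits in the page buffer the kernel supplies:
int read_info(char *page, char **start, off_t off, int count, int *eof, void *data)
{
    /* build the output directly in the caller's buffer: no vmalloc, no local copy */
    int len = sprintf(page, "%s%s\n", input, extension);
    *eof = 1;        /* everything fits in one read */
    return len;
}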
I'd say it's at least able to handle 1,000 unique characters
I'm new to c. Just have a question about the character arrays (or string) in c: When I want to create a character array in C, do I have to give the size at the same time?
Because we may not know the size that we actually need. For example, in a client-server program, if we want to declare a character array for the server program to receive a message from the client program, but we don't know the size of the message, we could do it like this:
char buffer[1000];
recv(fd,buffer, 1000, 0);
But what if the actual message is only of length 10. Will that cause a lot of wasted memory?
Yes, you have to decide the dimension in advance, even if you use malloc.
When you read from sockets, as in the example, you usually use a buffer of a reasonable size and dispatch the data into other structures as soon as you consume it. In any case, 1000 bytes is not much of a memory waste, and it is certainly faster than asking for one byte at a time from some memory manager :)
Yes, you have to give the size if you are not initializing the char array at the time of declaration. A better approach for your problem is to identify the optimum size of the buffer at run time and dynamically allocate the memory.
What you're asking about is how to dynamically size a buffer. This is done with a dynamic allocation such as using malloc() -- a memory allocator. Using it gives you an important responsibility though: when you're done using the buffer you must return it to the system yourself. If using malloc() [or calloc()], you return it with free().
For example:
char *buffer; // pointer to a buffer -- essentially an unsized array
buffer = (char *)malloc(size);
// use the buffer ...
free(buffer); // return the buffer -- do NOT use it any more!
The only problem left to solve is how to determine the size you'll need. If you're recv()'ing data that hints at the size, you'll need to break the communication into two recv() calls: first getting the minimum size all packets will have, then allocating the full buffer, then recv'ing the rest.
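As an example, here is a hedged sketch of that two-step idea, assuming a made-up wire format where the sender transmits a 4-byte length followed by the payload (byte order and partial-read handling are glossed over):
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* receive one length-prefixed message; caller must free() the result */
char *recv_message(int fd, uint32_t *out_len)
{
    uint32_t len;

    /* first recv: the fixed-size header every packet starts with */
    if (recv(fd, &len, sizeof len, MSG_WAITALL) != (ssize_t) sizeof len)
        return NULL;

    /* now we know exactly how big the buffer must be */
    char *buffer = malloc(len);
    if (buffer == NULL)
        return NULL;

    /* second recv: the payload itself */
    if (recv(fd, buffer, len, MSG_WAITALL) != (ssize_t) len) {
        free(buffer);
        return NULL;
    }

    *out_len = len;
    return buffer;
}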
When you don't know the exact amount of input data, do as follows:
1. Create a small buffer.
2. Allocate some memory for a "storage" (e.g. twice the buffer size).
3. Fill the buffer with data from the input stream (e.g. socket, file etc.).
4. Copy the data from the buffer to the storage.
4.1. If there is not enough room in the storage, reallocate the memory (e.g. with a size twice as big as before).
5. Repeat steps 3 and 4 until "END OF STREAM".
Your storage contains the data now.
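Roughly, those steps could look like this in C, using POSIX read() as the input stream; the names and the doubling policy are only illustrative:
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

char *read_all(int fd, size_t *out_len)
{
    char buffer[512];                    /* 1. small fixed buffer */
    size_t capacity = 1024;              /* 2. storage, twice the buffer */
    size_t used = 0;
    char *storage = malloc(capacity);
    ssize_t n;

    if (storage == NULL)
        return NULL;

    while ((n = read(fd, buffer, sizeof buffer)) > 0) {   /* 3. fill buffer */
        if (used + (size_t)n > capacity) {                /* 4.1 grow storage */
            capacity *= 2;
            char *tmp = realloc(storage, capacity);
            if (tmp == NULL) { free(storage); return NULL; }
            storage = tmp;
        }
        memcpy(storage + used, buffer, (size_t)n);        /* 4. copy to storage */
        used += (size_t)n;
    }                                                      /* 5. until end of stream */
    *out_len = used;
    return storage;                      /* caller frees */
}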
If you don't know the size a-priori, then you have no choice but to create it dynamically using malloc (or whatever equivalent mechanism in your language of choice.)
size_t buffer_size = ...; /* read from a DEFINE or from a config file */
char * buffer = malloc( sizeof( char ) * (buffer_size + 1) );
Creating a buffer of size m, but only receiving an input string of size n with n < m is not a waste of memory, but an engineering compromise.
If you create your buffer with a size close to the intended input, you risk having to refill the buffer many, many times for those cases where the input turns out to be much larger than the buffer. Typically, iterations over the buffer are tied up with I/O operations, so you might be saving a few bytes (which is really nothing on today's hardware) at the expense of potentially creating problems at the other end, especially for client-server apps. If we were talking about resource-constrained embedded systems, that would be another matter.
You should be worrying about getting your algorithms right and solid. Then you worry, if you can, about shaving off a few bytes here and there.
For me, I'd rather create a buffer that is 2 to 10 times greater than the average input (not the smallest input as in your case, but the average), assuming my input tends to have a small standard deviation in size. Otherwise, I'd go 20 times the size or more (especially if memory is cheap and doing this minimizes hitting the disk or the NIC).
At the most basic setup, one typically gets the size of the buffer as a configuration item read off a file (or passed as an argument), and defaulting to a default compile time value if none is provided. Then you can adjust the size of your buffers according to the observed input sizes.
More elaborate algorithms (say TCP) adjust the size of their buffers at run-time to better accommodate input whose size might/will change over time.
Even if you use malloc, you still must define the size first! So instead you give a large number that is capable of holding the message, like:
char buffer[2000];
For a smaller or larger message you can then use realloc to release the unused locations or to grab additional ones.
Example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char *str;

    /* Initial memory allocation */
    str = (char *) malloc(15);
    strcpy(str, "tutorialspoint");
    printf("String = %s, Address = %p\n", str, (void *) str);

    /* Reallocating memory */
    str = (char *) realloc(str, 25);
    strcat(str, ".com");
    printf("String = %s, Address = %p\n", str, (void *) str);

    free(str);
    return 0;
}
Note: make sure to include the stdlib.h header.
I thought that I couldn't retrieve the length of an allocated memory block like the simple .length function in Java. However, I now know that when malloc() allocates the block, it allocates extra bytes to hold an integer containing the size of the block. This integer is located at the beginning of the block; the address actually returned to the caller points to the location just past this length value. The problem is, I can't access that address to retrieve the block length.
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    char *str;
    str = (char *) malloc(sizeof(char) * 1000);

    int *length;
    length = (int *) (str - 4); /* because on a 32-bit system, an int is 4 bytes long */
    printf("Length of str: %d\n", *length);

    free(str);
}
Edit:
I finally did it. The reason it kept giving 0 as the length instead of the size is that my Ubuntu is 64-bit. I changed str-4 to str-8, and it works now.
If I change the size to 2000, it produces 2017 as the length. However, when I change it to 3000, it gives 3009. I am using GCC.
You don't have to track it yourself!
size_t malloc_usable_size (void *ptr);
But it returns the real size of the allocated memory block!
Not the size you passed to malloc!
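For instance, a small example along these lines (malloc_usable_size() is a glibc extension declared in <malloc.h>, so this is not portable C):
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *str = malloc(1000);
    /* prints at least 1000, but possibly more: the real size of the block */
    printf("usable size: %zu\n", malloc_usable_size(str));
    free(str);
    return 0;
}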
What you're doing is definitely wrong. While it's almost certain that the word just before the allocated block is related to the size, even so it probably contains some additional flags or information in the unused bits. Depending on the implementation, this data might even be in the high bits, which would cause you to read the entirely wrong length. Also it's possible that small allocations (e.g. 1 to 32 bytes) are packed into special small-block pages with no headers, in which case the word before the allocated block is just part of another block and has no meaning whatsoever in relation to the size of the block you're examining.
Just stop this misguided and dangerous pursuit. If you need to know the size of a block obtained by malloc, you're doing something wrong.
I would suggest you create your own malloc wrapper by compiling and linking a file which defines my_malloc(), and then overriding the default as follows:
// my_malloc.c
#include <stdlib.h>

#define malloc(sz) my_malloc(sz)

typedef struct {
    size_t size;
} Metadata;

void *my_malloc(size_t sz) {
    size_t size_with_header = sz + sizeof(Metadata);
    // the parentheses around the name stop the macro above from expanding,
    // so this calls the real malloc rather than recursing into my_malloc
    void *pointer = (malloc)(size_with_header);
    // cast the header into a Metadata struct
    Metadata *header = (Metadata *) pointer;
    header->size = sz;
    // return the address starting after the header
    // since this is what the user needs
    return (char *) pointer + sizeof(Metadata);
}
then you can always retrieve the size allocated by subtracting sizeof(Metadata), casting that pointer to Metadata and doing metadata->size:
Metadata *header = (Metadata *)((char *) ptr - sizeof(Metadata));
printf("Size allocated is:%lu", header->size); // don't quote me on the %lu ;-)
You're not supposed to do that. If you want to know how much memory you've allocated, you need to keep track of it yourself.
Looking outside the block of memory returned to you (before the pointer returned by malloc, or after that pointer + the number of bytes you asked for) will result in undefined behavior. It might work in practice for a given malloc implementation, but it's not a good idea to depend on that.
This is not Standard C. However, it is supposed to work on Windows operating systems and might be available on other operating systems such as Linux (msize?) or Mac (alloc_size?), as well.
size_t _msize( void *memblock );
_msize() returns the size of a memory block allocated in the heap.
See this link:
http://msdn.microsoft.com/en-us/library/z2s077bc.aspx
This is implementation dependent.
Every block you allocate is preceded by a block descriptor. The problem is, it depends on the system architecture.
Try to find the block descriptor size for your own system. Take a look at your system's malloc man page.