Assume that I have a small WAV file I've opened and dumped as an array of char for processing.
Right now, I am attempting to memcpy the fmt chunk ID into a 4-byte buffer.
char fmt[4];
memcpy(fmt, raw_file + 12, sizeof(char) * 4);
From my understanding of memcpy, this will copy the 4 bytes starting at offset 12 into fmt. However, when I go to debug the program I get some very strange output:
It seems to copy the fmt section correctly, but for some reason I have a bunch of garbage after it. Interestingly, this garbage is data that comes before fmt in the file, at offsets 0 (RIFF) and 8 (WAVE). It is a little-endian file (RIFF).
I can't for the life of me figure out why I'm getting data from the beginning of the buffer at the end of this, given that I only copied 4 bytes' worth of data (which should exactly fit the four characters 'f', 'm', 't', and space).
What is going on here? The output seems to indicate I'm somehow over-reading memory somewhere - but if that were the case I'd expect garbage rather than data from the earlier offsets.
EDIT:
If it matters, the data type of raw_file is const char* const.
The debugger is showing you an area of memory which has been dynamically allocated on the stack.
What is in all probability happening is that you read data from the file, and even if you asked to read, say, 50 bytes, the underlying system might have decided to read more (usually 1024, 2048, or 4096 bytes). So those bytes got passed around in memory, likely some of them on the stack, and that stack space is now being reused by your function. If you asked to read more than those four bytes, then this is even more likely to happen.
Then the debugger sees that you are pointing to a string, but in C strings run until they get terminated by a zero (ASCIIZ). So what you're shown is the first four bytes and everything else that followed, up to the first 0x00 byte.
If that's important to you, just
char fmt[5];
fmt[4] = 0;
// read four bytes into fmt.
Now the debugger will only show you the first four bytes.
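For example, a minimal sketch of the copy with a terminated buffer (assuming raw_file points at the in-memory file as in the question):
/* Sketch: copy the chunk ID at offset 12 into a NUL-terminated buffer,
 * so anything that treats it as a C string stops after four characters. */
char fmt[5];
memcpy(fmt, raw_file + 12, 4);   /* memcpy needs <string.h> */
fmt[4] = '\0';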
But now you see why you should always scrub and overwrite sensitive information from a memory area before free()ing it -- the data might remain there and even be reused or dumped by accident.
I am honing my assembly skills for buffer overflow exploitation.
Machine specs: Kali Linux 32-bit running in VirtualBox.
I am running this code:
#include <stdio.h>

void getInput() {
    char buffer[8]; // allocate 8 bytes
    gets(buffer);   // read input (no bounds checking)
    puts(buffer);   // print it
}

int main() {
    getInput();
    return 0;
}
My understanding is that when the function getInput() is invoked, the following happens:
1 - The address of the next instruction in main is pushed on the stack.
2 - The ebp register is pushed on the stack.
3 - 8 bytes are allocated on the stack for the buffer.
That's a total of 16 bytes, but...
As you can see in the image, just before reading the input in the getInput() function,
it shows a total of 24 bytes on the stack.
Specifically, I don't know why there is an extra 0x0000000 on top of the stack.
Moreover, when I try to overwrite the return address by inputting something like (ABCDABCDABCDABCD[desired address for target program]), it just overwrites everything.
And if I try to input something like \xab\xab\xab\xab it gives a segmentation fault, although this is only 4 bytes and should fit perfectly into the 8-byte buffer.
Thank you in advance.
In real life buffer overflow attacks, you never know the size of the stack frame. If you discover a buffer overflow bug, it's up to you to determine the offset from the buffer start to the return address. Treat your toy example exactly like that.
That said, the structure of the stack frame can be driven by numerous considerations. The calling convention might call for specific stack alignment. The compiler might create invisible variables for its own internal bookkeeping, which may vary depending on compiler flags, such as the level of optimization. There might be some space for saved caller registers, the number of which is driven by the register usage of the function itself. There might even be a guard variable specifically to detect buffer overflows. In general, you can't deduce the stack frame structure from the source alone. Unless you wrote the compiler, that is.
After disassembling the getInput routine, it turned out that the extra 4 bytes came from the compiler pushing ebx onto the stack for some reason.
After testing with various payloads, it appeared that I was not accounting for the null byte that gets appended at the end of the string, so I only needed to add one extra byte to compensate for it.
The proper payload was: printf "AAAAAAAAAAAAA\xc9\x61\x55\x56" | ./test
Say I have a 90 megabyte file. It's not encrypted, but it is binary.
I want to store this file into a table as an array of byte values so I can process the file byte by byte.
I can spare up to 2 GB of RAM, so keeping track of which bytes have been processed, which bytes have yet to be processed, and the processed bytes themselves would all be fine. I don't particularly care how long processing takes.
How should I approach this?
Note I've expanded and rewritten this answer due to Egor's comment.
You first need the file open in binary mode. The distinction is important on Windows, where the default text mode will change line endings from CR+LF into C newlines. You do this by specifying a mode argument to io.open of "rb".
Although you can read a file one byte at a time, in practice you will want to work through the file in buffers. Those buffers can be fairly large, but unless you know you are handling only small files in a one-off script, you should avoid reading the entire file into a buffer with file:read"*a" since that will cause various problems with very large files.
Once you have a file open in binary mode, you read a chunk of it using buffer = file:read(n), where n is an integer count of bytes in the chunk. Using a moderately sized power of two will likely be the most efficient. The return value will either be nil, or a string of up to n bytes. If it is less than n bytes long, it was the last buffer in the file. (If reading from a socket, pipe, or terminal, however, a read of less than n bytes may only indicate that no data has arrived yet, depending on lots of other factors too complex to explain in this sentence.)
The string in buffer can be processed any number of ways. As long as #buffer is not too big, then {buffer:byte(1,-1)} will return an array of integer byte values for each byte in the buffer. Too big partly depends on how your copy of Lua was configured when it was built, and may depend on other factors such as available memory as well. #buffer > 1E6 is certainly too big. In the example that follows, I used buffer:byte(i) to access each byte one at a time. That works for any size of buffer, at least as long as i remains an integer.
Finally, don't forget to close the file.
Here's a complete example, lightly tested. It reads a file a buffer at a time, and accumulates the total size and the sum of all bytes. It then prints the size, sum, and average byte value.
-- sum all bytes in a file
local name = ...
assert(name, "Usage: "..arg[0].." filename")
file = assert(io.open(name, "rb"))
local sum, len = 0, 0
repeat
    local buffer = file:read(1024)
    if buffer then
        len = len + #buffer
        for i = 1, #buffer do
            sum = sum + buffer:byte(i)
        end
    end
until not buffer
file:close()
print("length:", len)
print("sum:", sum)
print("mean:", sum / len)
Run with Lua 5.1.4 on my Windows box using the example as its input, it reports:
length: 402
sum: 30374
mean: 75.557213930348
To split the contents of a string s into an array of bytes use {s:byte(1,-1)}.
I need to (de)cipher some data, a chunk at a time. Extra padding bytes may have to be added to the target data bytes at the beginning and at the end. The built-in crypto API works on struct scatterlist objects, as you can see from the definition of the encrypt method of a block cipher:
int (*encrypt)(struct blkcipher_desc *desc, struct scatterlist *dst,
struct scatterlist *src, unsigned int nbytes);
Now, here is the procedure I am following for the ciphering operation:
Get a data buffer buf (length L)
Compute left padding and right padding bytes (rpad and lpad)
Cipher the whole thing (lpad and buf and rpad)
Get rid of the padding bytes in the result
The simplest, but most inefficient, solution would be to allocate L + rpad + lpad bytes and copy the buffer's content into this new area appropriately. But since the API uses those scatterlist objects, I was wondering if there was a way to avoid this pure waste of resources.
I read a couple of articles on LWN about scatterlist chaining, but a quick glance at the header file worries me: it looks like I have to set the whole thing up manually, which seems like pretty bad practice...
Any clue on how to use the scatterlist API properly? Ideally, I would like to do the following (a rough sketch follows this list):
Allocate buffers for the padding bytes for both input and output
Allocate a "payload" buffer that will only store the "useful" ciphered bytes
Create the scatterlist objects that include the padding buffers and the target buffer
Cipher the whole and store the result in output padding buffers + output "payload" buffer
Discard the input and output padding buffers
Return the ciphered "payload" buffer to the user
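For illustration, here is a minimal sketch of roughly what I have in mind, assuming the (old) blkcipher API with an already configured desc; cipher_with_padding is a hypothetical helper, not an existing kernel function:
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/* Hypothetical helper: describe lpad + payload + rpad with one scatterlist
 * table instead of copying everything into a contiguous buffer. */
static int cipher_with_padding(struct blkcipher_desc *desc,
                               u8 *lpad, unsigned int lpad_len,
                               u8 *buf,  unsigned int buf_len,
                               u8 *rpad, unsigned int rpad_len)
{
    struct scatterlist sg[3];

    sg_init_table(sg, 3);                /* three entries, properly terminated */
    sg_set_buf(&sg[0], lpad, lpad_len);  /* left padding */
    sg_set_buf(&sg[1], buf,  buf_len);   /* payload */
    sg_set_buf(&sg[2], rpad, rpad_len);  /* right padding */

    /* encrypt in place: src == dst */
    return crypto_blkcipher_encrypt(desc, sg, sg,
                                    lpad_len + buf_len + rpad_len);
}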
First, sorry for my poor English, I am not a native English speaker. I think you are looking for the kernel API blkcipher_walk_virt. You can find an example of its usage in crypto_ecb_crypt in ecb.c, and you can also look at padlock_aes.c.
After investigating the code, I found a suitable solution. It follows the procedure I listed in my question quite well, though there are some subtle differences.
As suggested by JohnsonDiao, I dived into the scatterwalk.c file to see how the Crypto API was making use of the scatterlist objects.
The problem that arises is the "boundary" between two subsequent scatterlists. Let's say I have two chained scatterlists: the first one refers to a 12-byte buffer, the second to a 20-byte buffer. I want to encrypt the two buffers as a whole using AES128-CTR.
In this particular case, the API will :
Encrypt the 12 bytes of the buffer referenced by the first scatterlist.
Increment the counter
Encrypt the first 16 bytes of the second scatterlist
Increment the counter
Encrypt the last remaining 4 bytes
The behaviour I would have expected was :
Encrypt the 12 bytes of the first buffer + the first 4 bytes of the second buffer
Increment the counter
Encrypt the last 16 bytes of the second buffer
Thus, to enforce this, one must allocate a 16-byte aligned padding buffer laid out as the padding bytes followed by the first bytes of the input data.
Let npad be the number of padding bytes needed for the requested encryption, and let lbuf, the total length of the padding buffer, be npad rounded up to the next multiple of 16.
Now, the last lbuf - npad bytes of that buffer must be filled with the first input data bytes. If the input is too short to ensure a full copy, that is not a problem.
Therefore we copy the first lcpy = min(lbuf - npad, ldata) bytes (where ldata is the length of the input data) to offset npad in the padding buffer.
In short, here is the procedure :
Allocate the appropriate padding buffer with length lbuf
Copy the first lcpy bytes of the payload buffer at offset npad in the padding buffer
Reference the padding buffer in a scatterlist
Reference the payload buffer in another scatterlist (shifted by lcpy bytes)
Ask for the ciphering
Extract the payload bytes present in the padding buffer
Discard the padding buffer
I tested this and it seemed to work perfectly.
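As an illustration only, here is a rough fragment of what that procedure could look like (hypothetical variable names, old blkcipher API, error handling omitted):
/* Illustrative fragment: data/ldata, npad and desc are assumed to exist;
 * ALIGN and min are the usual kernel helpers from <linux/kernel.h>. */
unsigned int lbuf = ALIGN(npad, 16);          /* padding buffer length         */
unsigned int lcpy = min(lbuf - npad, ldata);  /* payload bytes it borrows      */
u8 *pad = kmalloc(lbuf, GFP_KERNEL);
struct scatterlist sg[2];

/* pad[0..npad-1] holds the padding bytes; borrow the first payload bytes */
memcpy(pad + npad, data, lcpy);

sg_init_table(sg, 2);
sg_set_buf(&sg[0], pad, lbuf);                 /* padding + borrowed bytes */
sg_set_buf(&sg[1], data + lcpy, ldata - lcpy); /* rest of the payload      */

crypto_blkcipher_encrypt(desc, sg, sg, lbuf + ldata - lcpy);

/* the ciphered "borrowed" bytes now sit at pad + npad: put them back */
memcpy(data, pad + npad, lcpy);
kfree(pad);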
I am also learning this part, and this is my analysis: if your encryption device needs to cipher 16 bytes at once, you should set the alignment mask to (16 - 1), just like padlock_aes.c does (see ecb_aes_alg.cra_alignmask). The kernel then handles this in blkcipher_next_copy and blkcipher_next_slow.
But I am puzzled: in aes_generic.c the alignmask is 3, so how does the kernel handle this without blkcipher_next_copy?
Let me preface this by saying that I am a newbie, and I'm in an entry-level C class at school.
I'm writing a program that requires me to use malloc, and malloc is allocating 8x the space I expect it to in all cases. Even with just malloc(1), it is allocating 8 bytes instead of 1, and I am confused as to why.
Here is the code I tested with. This should only allow one character to be entered plus the terminating character. Instead I can enter 8, so it is allocating 8 bytes instead of 1; this is the case even if I just use an integer in malloc(). Please ignore the x variable; it is used in the actual program, but not in this test:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int x = 0;
    char *A = NULL;

    A = (char*)malloc(sizeof(char) + 1);
    scanf("%s", A);
    printf("%s", A);

    free(A);
    return 0;
}
A=(char*)malloc(sizeof(char)+1);
is going to allocate at least 2 bytes (sizeof(char) is always 1).
I don't understand how you are determining that it is allocating 8 bytes, however malloc is allowed to allocate more memory than you ask for, just never less.
The fact that you can use scanf to write a longer string to the memory pointed to by A does not mean that you have that memory allocated. It will overwrite whatever is there, which may result in your program crashing or producing unexpected results.
malloc is allocating as much memory as you asked for.
If you can read in more than the allocated bytes (using scanf), it's because scanf is writing past the memory you own: it's a buffer overflow.
You should limit the data scanf can read this way:
scanf( "%10s", ... ); // scanf will read a string no longer than 10
"I'm writing a program that requires me to use malloc, and malloc is allocating 8x the space I expect it to in all cases. Even with just malloc(1), it is allocating 8 bytes instead of 1, and I am confused as to why."
Theoretically speaking, the way you do things in the program does not allocate 8 bytes.
You can still type in 8 bytes (or any number of bytes) because in C there is no check that you are still writing to a valid place.
What you see is Undefined Behaviour, and the reason for that is that you write in memory that you shouldn't. There is nothing in your code that stops the program after n byte(s) you allocated have been used.
You might get Seg Fault now, or later, or never. This is Undefined Behaviour. Just because it appears to work, does not mean it is right.
Now, your program could indeed allocate 8 bytes instead of 1.
The reason for that is alignment.
The same program might allocate a different size on a different machine and/or a different operating system.
Also, since you are using C, you don't really need to cast the result of malloc.
In your code, there is no limit on how much data you can load in with scanf, leading to a buffer overflow (security flaw/crash). You should use a format string that limits the amount of data read in to the one or two bytes that you allocate. The malloc function will probably allocate some extra space to round the size up, but you should not rely on that.
malloc is allowed to allocate more memory than you ask for. It's only required to provide at least as much as you ask for, or fail if it can't.
Using malloc or creating a buffer on the stack will allocate memory in words.
On a 32-bit system the word size is 4 bytes, so when you ask for
A=(char*)malloc(sizeof(char)+1);
(which is essentially A=(char*)malloc(2);)
the system will actually give you 4 bytes. On a 64-bit machine you should get 8 bytes.
The way you use scanf there is dangerous, as it will overflow the buffer if a string longer than the allocated size is entered, leaving a heap overflow vulnerability in your program. scanf in this case will attempt to stuff a string of any length into that memory, so using it to gauge the allocated size will not work.
What system are you running on? If it's 64 bit, it is possible that the system is allocating the smallest possible unit that it can. 64 bits being 8 bytes.
EDIT: Just a note of interest:
char *s = malloc (1);
Causes 16 bytes to be allocated on iOS 4.2 (Xcode 3.2.5).
If you enter 8, it will still just allocate 2 bytes, since sizeof(char) == 1 (unless you are on some obscure platform), and you will write your number into that char. Then on printf it will output the number you stored in there. So if you store the number 8, it'll display 8 on the command line. It has nothing to do with the count of chars allocated.
Unless of course you looked up in a debugger or somewhere else that it is really allocating 8 bytes.
scanf has no idea how big the target buffer actually is. All it knows is the starting address of the buffer. C does no bounds checking, so if you pass it the address of a buffer sized to hold 2 characters, and you enter a string that's 10 characters long, scanf will write those extra 8 characters to the memory following the end of the buffer. This is called a buffer overrun, which is a common malware exploit. For whatever reason, the six bytes immediately following your buffer aren't "important", so you can enter up to 8 characters with no apparent ill effects.
You can limit the number of characters read in a scanf call by including an explicit field width in the conversion specifier:
scanf("%2s", A);
but it's still up to you to make sure the target buffer is large enough to accommodate that width. Unfortunately, there's no way to specify the field width dynamically as there is with printf:
printf("%*s", fieldWidth, string);
because %*s means something completely different in scanf (basically, skip over the next string).
You could use sprintf to build your format string:
sprintf(format, "%%%ds", max_bytes_in_A);
scanf(format, A);
but you have to make sure the format buffer is wide enough to hold the result, etc., etc., etc.
This is why I usually recommend fgets() for interactive input.
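A minimal sketch of that approach (illustrative only), reading a line into a fixed buffer and stripping the newline:
#include <stdio.h>
#include <string.h>

int main(void) {
    char A[11];                        /* room for 10 characters + '\0' */

    if (fgets(A, sizeof A, stdin) != NULL) {
        A[strcspn(A, "\n")] = '\0';    /* drop the trailing newline, if any */
        printf("%s\n", A);
    }
    return 0;
}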
What I need to do for an assignment is:
open a file (using fopen())
read the name of a student (using fgetc())
store that name in some part of a struct
The problem I have is that I need to read an arbitrarily long string into name, and I don't know how to store that string without wasting memory (or writing into non-allocated memory).
EDIT
My first idea was to allocate a 1-byte (char) memory block, then call realloc() if more bytes are needed, but this doesn't seem very efficient. Or maybe I could double the array whenever it is full and then, at the end, copy the chars into a new block of memory of the exact size.
Don't worry about wasting 100 or 1000 bytes; that is likely to be long enough for all names.
I'd probably just put the buffer that you're reading into on the stack.
Do worry about writing over the end of the buffer. i.e. buffer overrun. Program to prevent that!
When you come to store the name into your structure you can malloc a buffer to store the name the exact length you need (don't forget to add an extra byte for the null terminator).
But if you really must store names of any length at all then you could do it with realloc.
i.e. allocate a buffer with malloc of some size, say 50 bytes.
Then when you need more space, use realloc to increase its length. Increase the length in blocks of, say, 50 bytes, and keep track of how big it is with an int so that you know when you need to grow it again. At some point, you will have to decide how long that buffer is going to be, because it can't grow indefinitely.
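For example, a rough sketch of that strategy (read_name is a hypothetical helper, not part of the assignment):
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: read one line of arbitrary length from fp,
 * growing the buffer in 50-byte steps. Caller must free() the result. */
char *read_name(FILE *fp)
{
    size_t cap = 50, len = 0;
    char *buf = malloc(cap);
    int c;

    if (buf == NULL)
        return NULL;

    while ((c = fgetc(fp)) != EOF && c != '\n') {
        if (len + 1 >= cap) {                  /* keep room for the '\0' */
            char *tmp = realloc(buf, cap + 50);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
            cap += 50;
        }
        buf[len++] = (char)c;
    }
    buf[len] = '\0';
    return buf;
}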
You could read the string character by character until you find the end, then rewind to the beginning, allocate a buffer of the right size, and re-read it into that, but unless you are on a tiny embedded system this is probably silly. For one thing, the fgetc, fread, etc functions create buffers in the O/S anyway.
You could allocate a temporary buffer that's large enough, use a length limited read (for safety) into that, and then allocate a buffer of the precise size to copy it into. You probably want to allocate the temporary buffer on the stack rather than via malloc, unless you think it might exceed your available stack space.
If you are writing single-threaded code for a tiny system, you can allocate a scratch buffer on startup or statically, and re-use it for many purposes - but be really careful that your usages can't overlap!
Given the implementation complexity of most systems, unless you really research how things work it's entirely possible to write memory optimized code that actually takes more memory than doing things the easy way. Variable initializations can be another surprisingly wasteful one.
My suggestion would be to allocate a buffer of sufficient size:
char name_buffer [ 80 ];
Generally, most names (at least common English names) will be less than 80 characters in size. If you feel that you may need more space than that, by all means allocate more.
Keep a counter variable to know how many characters you have already read into your buffer:
int chars_read = 0; /* initialize explicitly; automatic variables are not zeroed for you */
At this point, read character by character with fgetc() until you either hit the end of file marker or read 80 characters (79 really, since you need room for the null terminator). Store each character you've read into your buffer, incrementing your counter variable.
int c;
while ( chars_read < 79 && ( c = fgetc( stdin ) ) != EOF ) {
    name_buffer[ chars_read ] = (char) c;
    chars_read++;
}
name_buffer[ chars_read ] = '\0'; /* terminating null character */
I am assuming here that you are reading from stdin. A more complete example would also check for errors, verify that the character read from the stream is valid for a person's name (no numbers, for example), etc. If you try to read more data than you allocated space for, print an error message to the console.
I understand wanting to maintain as small a buffer as possible and only allocate what you need, but part of learning how to program is understanding the trade-offs in code/data size, efficiency, and code readability. You can malloc and realloc, but it makes the code much more complex than necessary, and it introduces places where errors may come in - NULL pointers, array index out-of-bounds errors, etc. For most practical cases, allocate what should suffice for your data requirements plus a small amount of breathing room. If you find that you are encountering a lot of cases where the data exceeds the size of your buffer, adjust your buffer to accommodate it - that is what debugging and test cases are for.