First, I'm still a student, so I'm not very experienced.
I'm working with a piece of Bluetooth hardware, and I am using its protocol to send it commands. The protocol requires packets to be sent LSB first for each packet field.
I was getting error packets back indicating my CRC values were wrong, so I did some investigating. I found the problem, but became confused in the process.
Here is some GDB output and other information illustrating my confusion.
I'm sending a packet that should look like this:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
Here is some GDB output:
print /x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x5942, end_flag = 0xfd}
reqId_ep is the variable name of the packet I'm sending. It all looks good there, but I'm still receiving CRC error codes, so something must be wrong.
Here I examine 9 bytes in hex starting from the address of my packet to send:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x42 0x59 0xfd
And here the problem becomes apparent: the CRC is not LSB first (0x42 0x59).
To fix the problem, I removed the htons() call I was using when assigning my CRC value.
And here is the same output above without htons():
p/x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x4259, end_flag = 0xfd}
Here the CRC value is not LSB first.
But then:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x59 0x42 0xfd
Here the CRC value is LSB first.
So apparently C stores values LSB first? Can someone please shed some light on this situation? Thank you kindly.
This has to do with Endianness in computing:
http://en.wikipedia.org/wiki/Endianness#Endianness_and_operating_systems_on_architectures
For example, the value 4660 (base ten) is 0x1234 in hex. On a big-endian system it would be stored in memory as the bytes 0x12 0x34, while on a little-endian system it would be stored as 0x34 0x12.
If you want to avoid this sort of issue in the future, it might just be easiest to create a large array or struct of unsigned char, and store individual values in it.
eg:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
typedef struct packet {
unsigned char startFlag;
unsigned char packetNum;
unsigned char commandMSB;
unsigned char commandLSB;
unsigned char payloadMSB;
unsigned char payloadLSB;
unsigned char crcMSB;
unsigned char crcLSB;
unsigned char endFlag;
} packet_t;
You could then create a function that you compile differently based on the type of system you are building for using preprocessor macros.
eg:
/* Uncomment the line below if you are building for a little-endian system;
 * otherwise, leave it commented out.
 */
//#define LITTLE_ENDIAN_SYSTEM
// Function prototype
void writeCommand(int cmd, packet_t* pkt);
//Function definition
void writeCommand(int cmd, packet_t* pkt)
{
    if(!pkt)
    {
        printf("Error, invalid pointer!");
        return;
    }

#ifdef LITTLE_ENDIAN_SYSTEM
    pkt->commandMSB = (cmd & 0xFF00) >> 8;
    pkt->commandLSB = (cmd & 0x00FF);
#else  // Big-endian system
    pkt->commandMSB = (cmd & 0x00FF);
    pkt->commandLSB = (cmd & 0xFF00) >> 8;
#endif
    // Done
}
int main(void)
{
    packet_t myPacket = {0};  // Initialize so it is zeroed out
    writeCommand(0x1234, &myPacket);
    return 0;
}
One final note: avoid sending a struct as a raw stream of bytes; send its individual elements one at a time instead! That is, don't assume the struct is stored internally like a giant array of unsigned characters. The compiler and system apply packing and alignment rules, so the struct could actually be larger than 9 * sizeof(unsigned char).
Good luck!
This is architecture dependent based on which processor you're targeting. There are what is known as "Big Endian" systems, which store the most significant byte of a word first, and "Little Endian" systems that store the least significant byte first. It looks like you're looking at a Little Endian system there.
I'm trying to perform a buffer overflow so that a variable (type) ends up with a specific value in it. I'm struggling with the strlen check on my input.
I tried using something like 'AAAAA\x00AAA...A\x00\xbc\xd4\xb9' to trick the strlen check into thinking my input is just 5 A's long. But something always strips my \x00, with the message:
-bash: Warnung: command substitution: ignored null byte in input
So I went with a much bigger input to cause an integer overflow, so that the short where the length is saved wraps around to a value below 32700... but that didn't seem to work either.
My inputs are constructed as follows:
./challenge `python -c "print 'A'*30000 + '\x00' + 'A'*10000 + '\xXX\xXX\xXX\xXX'"`
(with \xXX standing in for my desired bytes)
or
./challenge `python programm.py`
I am also using gdb to analyze the memory, but I can't find the correct way to use it...
The code for my task is:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int type = 0xdeadbeef;
    char buf[32700];
    short len;

    if (argc < 2)
        return -1;
    len = strlen(argv[1]);
    if (len > 32700) {
        printf("Too long\n");
        return -1;
    }
    strcpy(buf, argv[1]);
    if (type == 0x00b1c2d3) {
        printf("Success\n");
    } else {
        printf("Try again\n");
    }
    exit(0);
}
Any other ideas (or a solution) for this problem? I've been trying hard for days now. Also, I haven't done anything with C (especially memory analysis, gdb, and all of that) for years, so it's extra hard for me to understand some things.
The issue in the code is that the length len, which is calculated using strlen(), is a short. This means it's a 16-bit value. Since strlen() returns a size_t, which is a 64-bit value on my x86_64 system, only the bottom 16 bits will be stored in the len variable.
So, at first glance you can submit anything longer than 65,535 bytes, overflow that short int, and store just the lower 16 bits. But there's another issue to note. The len variable is a signed short integer. This means it stores values in [-32768, 32767], whereas size_t is an unsigned type. So when strlen() returns its value, lengths that have the highest bit set (anything over 32,767 bytes) will become negative numbers.
Here's an example gdb session illustrating this:
(gdb) break 22
Breakpoint 1 at 0x829: file challenge.c, line 22.
(gdb) run `ruby -e 'print("A"*32768)'`
Starting program: /home/josh/foo/challenge `ruby -e 'print("A"*32768)'`
Breakpoint 1, main (argc=2, argv=0x7fffffff6338) at challenge.c:22
22 if (type == 0x00b1c2d3) {
(gdb) p len
$1 = -32768
(gdb) x/64bx $rsp
0x7ffffffee260: 0x38 0x63 0xff 0xff 0xff 0x7f 0x00 0x00
0x7ffffffee268: 0x00 0x00 0x00 0x00 0x02 0x00 0x00 0x00
0x7ffffffee270: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x7ffffffee278: 0x00 0x00 0x00 0x80 0xef 0xbe 0xad 0xde
0x7ffffffee280: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee288: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee290: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee298: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
(gdb)
See how the bytes at 0x7ffffffee27a are 0x00 0x80? That's -32768 to a short int, so the length is perceived to be negative (well under 32,700!).
I also included some extra info in the memory dump there, so you can see that the type field immediately succeeds the len field in memory, followed finally by the buf array (which contains my 32768 As!). Note that only certain lengths of input will work -- those that appear negative, or overflow to a value less than 32,700 when truncated to 16 bits.
This doesn't solve the entire challenge, but at least should give you a good start to understanding what you can control. Note also that there is data above the buf array, such as where rbp points, your saved return pointer, etc. Depending on how you are compiling this (e.g., with or without stack canaries, etc.), there might be some more things for you to solve along the way.
I want to read a specific 128-bit value in memory that equals the following:
128-bit val = 0171EC07F2E6C383FFFFFFFFFFFFFFFF
This ID is visible when flashing a binary:
Read Intel HEX application image with 608 bytes.
Received autobaud response:
Product ID: xxxxxx
Revision: xxx
Serial number: 0171EC07F2E6C383FFFFFFFFFFFFFFFF
User code present.
User code checksum failed.
Write protection disabled.
Read protection disabled.
Download completed.
Run command sent.
Programming flash image.
Read Intel HEX flash image with 189248 bytes.
Autobaud succeeded.
Flash completed.
This is the code I am using to read that address:
uint8_t *address = (uint8_t *)0x34576;
uint8_t reg[16] = {0};

for (int i = 0; i < 16; i++) {
    reg[i] = *address;
    printf("reg[%d]= 0x%x\r\n", i, reg[i]);
    address++;  /* advance to the next byte */
}
This is the output I'm getting:
reg[0]= 0x10
reg[1]= 0x17
reg[2]= 0xce
reg[3]= 0x70
reg[4]= 0x2f
reg[5]= 0x6e
reg[6]= 0x3c
reg[7]= 0x38
reg[8]= 0xff
reg[9]= 0xff
reg[10]= 0xff
reg[11]= 0xff
reg[12]= 0xff
reg[13]= 0xff
reg[14]= 0xff
reg[15]= 0xff
As can be seen from the output, the byte order is mixed up. Is this because of little/big endian? How do I convert the value so it equals the 128-bit value?
I am working on a custom board containing a 32-bit MCU (Cortex-A5) and a 16-bit-wide DRAM chip (LPDDR2). The MCU has an on-board DRAM controller which supports both DDR3 and LPDDR2, and I do have a working setup using LPDDR2.
Now I am trying to halve the clock rate at boot time on both the MCU and the DRAM (they share the same PLL) due to power restrictions, and this is where my troubles begin.
As mentioned, I have a working setup at full frequency (DRAM: 400 MHz, MCU: 396 MHz), so one would expect that halving the frequency and updating the timings according to the DRAM datasheet should yield another working setup, but no.
The DRAM init runs at boot time from MCU intram, as do all tests. The whole procedure is handled by a board-specific version of U-Boot 2015.04.
I have a collection of tests that run at MCU boot to verify DRAM integrity. One of these is a so-called "walking bit" test, where I take a 32-bit uint and toggle each bit in sequence, reading back to verify.
What I found was that, when reading back, the lower 16 bits had not been touched, while the upper 16 bits seemed altered. After some investigation, I found the following pattern (assuming a watermark "0xaa"):
write -> readback
0x8000_0000 -> 0x0000_aaaa
0x4000_0000 -> 0x0000_aaaa
0x2000_0000 -> 0x0000_aaaa
0x1000_0000 -> 0x0000_aaaa
[...]
0x0008_0000 -> 0x0000_aaaa
0x0004_0000 -> 0x0000_aaaa
0x0002_0000 -> 0x0000_aaaa
0x0001_0000 -> 0x0000_aaaa
0x0000_8000 -> 0x8000_aaaa
0x0000_4000 -> 0x4000_aaaa
0x0000_2000 -> 0x2000_aaaa
0x0000_1000 -> 0x1000_aaaa
[...]
0x0000_0008 -> 0x0008_aaaa
0x0000_0004 -> 0x0004_aaaa
0x0000_0002 -> 0x0002_aaaa
0x0000_0001 -> 0x0001_aaaa
The watermark is present, although I suspect it got there from a previous debugging session. I will address this later; my primary focus at the moment is getting the "walking bit" test to pass.
Here is a memory dump:
(gdb) x/16b addr
0x80000000: 0x00 0x00 0x55 0x55 0x55 0x55 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
(gdb) p/x *addr
$59 = 0x55550000
(gdb) set *addr = 0xaabbccdd
(gdb) p/x *addr
$60 = 0xccdd0000
(gdb) x/16b addr
0x80000000: 0x00 0x00 0xdd 0xcc 0xbb 0xaa 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
Can anyone tell me what might cause this type of behaviour?
Cheers
Note: I have intentionally left out MCU and DRAM specifications, as I believe that the question can be addressed only with JEDEC/DFI in mind.
Edit: Added memory dump.
Edit: Here is the source of the "walking bit" test. It runs from MCU intram on a memory area located in DRAM. Assumed bug-free:
static u32 __memtest_databus(volatile u32 * const addr)
{
    /* Walking bit */
    u32 pattern = (1u << 31);
    u32 failmask = 0;

    for (; pattern; pattern >>= 1)
    {
        *addr = pattern;
        if (*addr != pattern)
            failmask |= pattern;
    }
    return failmask;
}
Edit: The PLL and VCO have been checked, and the settings are correct. The PLL is stable and the DRAM PHY does obtain a lock.
Link to DRAM Data Sheet
The bytes look like they have been shifted, not altered.
quote
(gdb) x/16b addr
0x80000000: 0x00 0x00 *0xdd 0xcc 0xbb 0xaa* 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
unquote
You have one severe bug here: u32 pattern = (1 << 31);.
The integer constant 1 is of type int, which is 32 bits on your ARM system.
You left shift this signed number out of bounds and invoke undefined behavior; anything could happen. The variable pattern can get any value.
Correct code would be u32 pattern = (u32)1 << 31; or u32 pattern = 1u << 31;
I coded a small program to show you the casting problem:
#include <stdlib.h>
#include <sys/types.h>  /* for u_char, u_short */
struct flags {
u_char flag1;
u_char flag2;
u_short flag3;
u_char flag4;
u_short flag5;
u_char flag7[5];
};
int main() {
    char buffer[] = "\x01\x02\x04\x03\x05\x07\x06\xff\xff\xff\xff\xff";
    struct flags *flag;

    flag = (struct flags *) buffer;
    return 0;
}
My problem is that, after the cast, flag5 wrongly takes the "\x06\xff" bytes, ignoring the "\x07", and flag7 wrongly takes the next four "\xff" bytes plus a nul, which is the next byte. I also ran gdb:
(gdb) p/x flag->flag5
$1 = 0xff06
(gdb) p/x flag->flag7
$2 = {0xff, 0xff, 0xff, 0xff, 0x0}
(gdb) x/15xb flag
0xbffff53f: 0x01 0x02 0x04 0x03 0x05 0x07 0x06 0xff
0xbffff547: 0xff 0xff 0xff 0xff 0x00 0x00 0x8a
Why is this happening, and how can I handle it correctly?
Thanks
This looks like a structure member alignment issue. Unless you know how your compiler packs structure members, you should not make assumptions about their positions in memory.
The reason the 0x07 is apparently lost is that the compiler is probably aligning the flag5 member on a 16-bit boundary, skipping the odd memory location that holds the 0x07 value; that byte is lost in the padding. Also, what you are doing overflows the buffer, a big no-no. In other words:
struct flags {
u_char flag1; // 0x01
u_char flag2; // 0x02
u_short flag3; // 0x04 0x03
u_char flag4; // 0x05
// 0x07 is in the padding
u_short flag5; // 0x06 0xff
u_char flag7[5]; // 0xff 0xff 0xff 0xff ... oops, buffer overrun, because your
// buffer was less than the sizeof(flags)
};
You can often control the packing of structure members, but the mechanism is compiler-specific.
The compiler is free to put unused padding between members of the structure to (for instance) arrange their alignment to its convenience. Your compiler may provide a #pragma pack directive or a command-line option to ensure tight structure packing.
How structures are stored is implementation defined, and thus, you can't rely on a specific memory layout for serialization like that.
To serialize your structure to a byte array, write a function which serializes each field in a set order.
You might need to pack the struct:
struct __attribute__ ((__packed__)) flags {
u_char flag1;
u_char flag2;
u_short flag3;
u_char flag4;
u_short flag5;
u_char flag7[5];
};
Note: This is GCC -- I don't know how portable it is.
This has to do with padding. The compiler inserts unused bytes into your struct so that its members are aligned in memory for efficient access.
See the following examples:
http://msdn.microsoft.com/en-us/library/71kf49f1(v=vs.80).aspx
http://en.wikipedia.org/wiki/Data_structure_alignment#Typical_alignment_of_C_structs_on_x86
void Sound(int f)
{
    USHORT B = 1193180 / f;
    UCHAR temp = In_8(0x61);

    temp = temp | 3;
    Out_8(0x61, temp);
    Out_8(0x43, 0xB6);
    Out_8(0x42, B & 0xF);
    Out_8(0x42, (B >> 8) & 0xF);
}
In_8/Out_8 read/write 8 bits to/from a specified port (implementation details omitted).
How does it make the PC beep?
UPDATE
Why is &0xF used here? Shouldn't it be 0xFF?
The PC has an 8253/8254 programmable interval timer, whose channel 2 is wired to the speaker; the timer is programmed through ports 0x43 and 0x42, while the speaker gate sits behind port 0x61.
When port 0x61 bit 0 is set to 1, this means "turn on the timer channel that is connected to the speaker".
When port 0x61 bit 1 is set to 1, this means "turn on the speaker".
This is done in the first part of your code.
The second part puts the "magic value" 0xB6 on port 0x43, which means that the following two bytes arriving at port 0x42 will be interpreted as the divisor for the timer frequency. The resulting frequency (1193180 / divisor) will then be sent to the speaker.
http://gd.tuwien.ac.at/languages/c/programming-bbrown/advcw3.htm#sound