I want to read a specific 128-bit value in memory that equals the following:
128 bit val = 0171EC07F2E6C383FFFFFFFFFFFFFFFF
This ID is visible when flashing a binary:
Read Intel HEX application image with 608 bytes.
Received autobaud response:
Product ID: xxxxxx
Revision: xxx
Serial number: 0171EC07F2E6C383FFFFFFFFFFFFFFFF
User code present.
User code checksum failed.
Write protection disabled.
Read protection disabled.
Download completed.
Run command sent.
Programming flash image.
Read Intel HEX flash image with 189248 bytes.
Autobaud succeeded.
Flash completed.
This is the code I am using to read from that address:
uint8_t *address = (uint8_t *)0x34576;
uint8_t reg[16] = {0};

for (int i = 0; i < 16; i++) {
    reg[i] = *address;
    printf("reg[%d]= 0x%x\r\n", i, reg[i]);
    address++;
}
This is the output I'm getting:
reg[0]= 0x10
reg[1]= 0x17
reg[2]= 0xce
reg[3]= 0x70
reg[4]= 0x2f
reg[5]= 0x6e
reg[6]= 0x3c
reg[7]= 0x38
reg[8]= 0xff
reg[9]= 0xff
reg[10]= 0xff
reg[11]= 0xff
reg[12]= 0xff
reg[13]= 0xff
reg[14]= 0xff
reg[15]= 0xff
As can be seen from the output, the byte order is mixed up. Is this because of little/big endianness? How do I convert the value so that it equals the 128-bit value above?
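For reference, this is what a pure byte-order (endianness) swap looks like; a minimal sketch with generic example values (not the serial number above), so you can compare its effect against your own dump:

#include <stdint.h>
#include <stdio.h>

/* Reverse a byte buffer in place (a plain byte-order swap). */
static void reverse_bytes(uint8_t *buf, size_t len)
{
    for (size_t i = 0, j = len - 1; i < j; i++, j--) {
        uint8_t tmp = buf[i];
        buf[i] = buf[j];
        buf[j] = tmp;
    }
}

int main(void)
{
    uint8_t reg[4] = {0x78, 0x56, 0x34, 0x12};  /* little-endian bytes of 0x12345678 */

    reverse_bytes(reg, sizeof reg);
    for (size_t i = 0; i < sizeof reg; i++)
        printf("%02x", reg[i]);                 /* prints 12345678 */
    printf("\n");
    return 0;
}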
I'm trying to perform a buffer overflow so that a variable (type) ends up holding a specific value. I'm struggling with the strlen call and the length check on my input.
I tried using something like 'AAAAA\x00AAA...A\x00\xbc\xd4\xb9' to trick the strlen check into thinking my input is just 5 A's long, but something always strips my \x00 with the message:
-bash: Warnung: command substitution: ignored null byte in input
So I went with a much bigger input to cause an integer overflow, so that the short the length is saved to wraps around and ends up below 32700... but that didn't seem to work either.
My inputs are constructed as follows:
./challenge `python -c "print 'A'*30000 + '\x00' + 'A'*10000 + '\xXX\xXX\xXX\xXX'"`
(where \xXX stands for my desired bytes)
or
./challenge `python programm.py`
I am also using gdb to analyze the memory, but I can't find the correct way to use it...
The code for my task is:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int type = 0xdeadbeef;
    char buf[32700];
    short len;

    if (argc < 2)
        return -1;

    len = strlen(argv[1]);
    if (len > 32700) {
        printf("Too long\n");
        return -1;
    }
    strcpy(buf, argv[1]);

    if (type == 0x00b1c2d3) {
        printf("Success\n");
    } else {
        printf("Try again\n");
    }
    exit(0);
}
Any other ideas (or a 'solution') for this problem? I have been trying hard for days now. Also, I haven't done anything with C (especially memory analysis, gdb, and the like) for years, so it's extra hard for me to understand some of these things.
The issue in the code is that the length len, which is calculated using strlen(), is a short. This means it's a 16-bit value. Since strlen() returns a size_t, which is a 64-bit value on my x86_64 system, only the bottom 16 bits will be stored in the len variable.
So, at first glance, you can submit anything longer than 65,535 bytes, overflow that short int, and store just the lower 16 bits. But there's another issue to note. The len variable is a signed short integer. This means it stores values from [-32768, 32767], whereas size_t is an unsigned type. So when strlen() returns its value, lengths that have the highest bit set (anything over 32,767 bytes) will become negative numbers.
Here's an example gdb session illustrating this:
(gdb) break 22
Breakpoint 1 at 0x829: file challenge.c, line 22.
(gdb) run `ruby -e 'print("A"*32768)'`
Starting program: /home/josh/foo/challenge `ruby -e 'print("A"*32768)'`
Breakpoint 1, main (argc=2, argv=0x7fffffff6338) at challenge.c:22
22 if (type == 0x00b1c2d3) {
(gdb) p len
$1 = -32768
(gdb) x/64bx $rsp
0x7ffffffee260: 0x38 0x63 0xff 0xff 0xff 0x7f 0x00 0x00
0x7ffffffee268: 0x00 0x00 0x00 0x00 0x02 0x00 0x00 0x00
0x7ffffffee270: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x7ffffffee278: 0x00 0x00 0x00 0x80 0xef 0xbe 0xad 0xde
0x7ffffffee280: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee288: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee290: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
0x7ffffffee298: 0x41 0x41 0x41 0x41 0x41 0x41 0x41 0x41
(gdb)
See how the bytes at 0x7ffffffee27a are 0x00 0x80? That's -32768 to a short int, so the length is perceived to be negative (well under 32,700!).
I also included some extra info in the memory dump there, so you can see that the type field immediately succeeds the len field in memory, followed finally by the buf array (which contains my 32768 As!). Note that only certain lengths of input will work -- those that appear negative, or overflow to a value less than 32,700 when truncated to 16 bits.
This doesn't solve the entire challenge, but at least should give you a good start to understanding what you can control. Note also that there is data above the buf array, such as where rbp points, your saved return pointer, etc. Depending on how you are compiling this (e.g., with or without stack canaries, etc.), there might be some more things for you to solve along the way.
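As a standalone illustration of the truncation itself (separate from the challenge binary), here is a minimal sketch; the exact value stored is implementation-defined, but on a typical two's-complement system 32768 becomes -32768:

#include <stdio.h>
#include <string.h>

int main(void)
{
    static char input[32769];           /* static storage, so the last byte stays 0 */
    memset(input, 'A', 32768);          /* 32768 'A's followed by that NUL */

    /* strlen() returns a size_t; assigning it to a short keeps only the low 16 bits. */
    short len = (short)strlen(input);

    printf("strlen = %zu, len = %d\n", strlen(input), len);  /* typically: 32768, -32768 */
    return 0;
}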
I am working on a custom board containing a 32-bit MCU (Cortex-A5) and a 16-bit-wide DRAM chip (LPDDR2). The MCU has an on-board DRAM controller which supports both DDR3 and LPDDR2, and I do have a working setup using LPDDR2.
Now I am trying to halve the clock rate at boot time on both the MCU and the DRAM (they both use the same PLL) due to power restrictions, and this is where my troubles begin.
As mentioned, I do have a working setup at the full frequency (DRAM: 400MHz, MCU: 396MHz), so one would expect that halving the frequency and updating the timings according to the DRAM datasheet should yield another working setup, but no.
The DRAM init runs at boot time from MCU intram, as do any tests. The whole procedure is handled by a board-specific version of U-Boot 2015.04.
I have a collection of tests that run at MCU boot to verify DRAM integrity. One of these tests is a so-called "walking bit" test, where I use a 32-bit uint and toggle each bit in sequence, reading back to verify.
What I found was that, when reading back, the lower 16 bits have not been touched, while the upper 16 bits seem altered. After some investigation, I found the following pattern (assuming a watermark "0xaa"):
write -> readback
0x8000_0000 -> 0x0000_aaaa
0x4000_0000 -> 0x0000_aaaa
0x2000_0000 -> 0x0000_aaaa
0x1000_0000 -> 0x0000_aaaa
[...]
0x0008_0000 -> 0x0000_aaaa
0x0004_0000 -> 0x0000_aaaa
0x0002_0000 -> 0x0000_aaaa
0x0001_0000 -> 0x0000_aaaa
0x0000_8000 -> 0x8000_aaaa
0x0000_4000 -> 0x4000_aaaa
0x0000_2000 -> 0x2000_aaaa
0x0000_1000 -> 0x1000_aaaa
[...]
0x0000_0008 -> 0x0008_aaaa
0x0000_0004 -> 0x0004_aaaa
0x0000_0002 -> 0x0002_aaaa
0x0000_0001 -> 0x0001_aaaa
The watermark is present, although I suspect it got there from a previous debugging session. I will address this later; my primary focus at the moment is getting the "walking bit" test to pass.
Here is a memory dump:
(gdb) x/16b addr
0x80000000: 0x00 0x00 0x55 0x55 0x55 0x55 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
(gdb) p/x *addr
$59 = 0x55550000
(gdb) set *addr = 0xaabbccdd
(gdb) p/x *addr
$60 = 0xccdd0000
(gdb) x/16b addr
0x80000000: 0x00 0x00 0xdd 0xcc 0xbb 0xaa 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
Can anyone tell me what might cause this type of behaviour?
Cheers
Note: I have intentionally left out MCU and DRAM specifications, as I believe the question can be addressed with only JEDEC/DFI in mind.
Edit: Added memory dump.
Edit: Here is the source of the "walking bit" test. It runs from MCU intram on a memory area located in DRAM, and is assumed bug-free:
static u32 __memtest_databus(volatile u32 * const addr)
{
    /* Walking bit */
    u32 pattern = (1u << 31);
    u32 failmask = 0;

    for (; pattern; pattern >>= 1)
    {
        *addr = pattern;
        if (*addr != pattern)
            failmask |= pattern;
    }

    return failmask;
}
Edit: The PLL and VCO have been checked, and the settings are correct. The PLL is stable and the DRAM PHY does obtain a lock.
Link to DRAM Data Sheet
The bytes look like they have been shifted, not altered. Quoting your dump:
(gdb) x/16b addr
0x80000000: 0x00 0x00 *0xdd 0xcc 0xbb 0xaa* 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
You have one severe bug here: u32 pattern = (1 << 31);.
The integer constant 1 is of type int, which is 32 bits on your ARM system.
You left-shift this signed number out of its range and invoke undefined behavior; anything could happen, and pattern can end up with any value.
Correct code would be u32 pattern = (u32)1 << 31; or u32 pattern = 1u << 31;
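For reference, a minimal compilable sketch contrasting the two forms (assuming a 32-bit int, as on this ARM target):

#include <stdint.h>

typedef uint32_t u32;

int main(void)
{
    /* u32 bad = 1 << 31;     1 is a signed int; shifting a 1 into its sign bit
                              is undefined behavior, so pattern could be anything. */
    u32 good = 1u << 31;   /* 1u is unsigned, so this shift is well-defined. */

    return (good == 0x80000000u) ? 0 : 1;   /* exits with 0 */
}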
How do I convert a monochrome BMP image file (in my case 16*16 pixels) into binary format? This code reads the bitmap information. I have to store the pixel information in an array, and it's not stored properly. I have shared the code below.
#pragma pack(push, 1)
typedef struct BitMap
{
    short Signature;
    long Reserved1;
    long Reserved2;
    long DataOffSet;
    long Size;
    long Width;
    long Height;
    short Planes;
    short BitsPerPixel;
    long Compression;
    long SizeImage;
    long XPixelsPreMeter;
    long YPixelsPreMeter;
    long ColorsUsed;
    long ColorsImportant;
    long data[16];
} BitMap;
#pragma pack(pop)
Reading the image file:
struct BitMap source_info;
struct Pix source_pix;
FILE *fp;
FILE *Dfp;
int i;

Dfp = fopen("filename.bin", "wb");
if (!(fp = fopen("filename.bmp", "rb")))
{
    printf(" can not open file");
    exit(-1);
}

fread(&source_info, sizeof(source_info), 1, fp);
printf("%d\n", source_info.DataOffSet);
printf("%d\n", source_info.Width * source_info.Height);

for (i = 0; i < 16; i++)
    fprintf(Dfp, "%d\t", source_info.data[i]);
The observed output in a hex editor (screenshot not included here) shows the highlighted data I want stored in the data array so that I can use it further in the code.
However, the output in filename.bin is:
0 16777215 63 63 63 95 95 95
31 31 31 31 31 31 31 31
I'm new to this field. Can someone help me figure out where I'm going wrong?
There's actually no problem with the data.
The problem is that you're printing it the wrong way.
Try replacing your code:
printf("%d\n",source_info.DataOffSet);
printf("%d\n",source_info.Width*source_info.Height);
for(i=0;i<16;i++)
fprintf(Dfp,"%d\t",source_info.data[i]);
with this:
printf("%x\n",source_info.DataOffSet);
printf("%x\n",source_info.Width*source_info.Height);
for(i=0;i<16;i++)
fprintf(Dfp,"%x\t",source_info.data[i]);
%d is for signed decimal values, while %x is for hexadecimal. See the conversion specifier section in the manual page of printf.
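For instance, the second value in your filename.bin output, 16777215, is simply 0xffffff printed in decimal:

#include <stdio.h>

int main(void)
{
    printf("%d\n", 16777215);   /* prints 16777215 */
    printf("%x\n", 16777215);   /* prints ffffff   */
    return 0;
}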
EDITED:
As you've posted a new question in the comments:
output in hex is 0x00 0xffffff 0x3f 0x3f 0x3f 0x5f 0x5f 0x5f 0x1f 0x1f 0x1f 0x1f 0x1f 0x1f 0x1f 0x1f Can u explain how the output is getting stored? I'm unable to get the same output – user2967899 7 mins ago
Here's my edited answer.
Assumption: you're working on a typical platform, where the size of short is 2 bytes and that of long is 4.
From the definition of struct BitMap we know the field data is at offset 0x36. Comparing with the image, we know the data should be (in hexadecimal):
data[0]: 0000 0000
data[1]: ffff ff00
......
The result you got then seems strange, since data[1] is 0x00ffffff instead of 0xffffff00. However, that is correct. This is caused by endianness; please read this wiki page first: http://en.wikipedia.org/wiki/Endianness
The hex editor shows data in the actual byte order, and assuming you're working on a little-endian machine (as most PCs on this planet are), that order is just the reverse of the arithmetic order of the bytes in a long:
/* data in C */
unsigned long x = 305419896;   /* 305419896 == 0x12345678 */

/* arithmetically, the four bytes in x are: */
/*   0x12 0x34 0x56 0x78 */

/* the real order observed in a hex editor, due to endianness: */
/*   0x78 0x56 0x34 0x12 */

/* so this holds true in C: */
unsigned char *a = (unsigned char *)&x;
assert(a[0] == 0x78);
assert(a[1] == 0x56);
assert(a[2] == 0x34);
assert(a[3] == 0x12);
First, I'm still a student, so I am not very experienced.
I'm working with a piece of Bluetooth hardware and I am using its protocol to send it commands. The protocol requires packets to be sent with the LSB first for each packet field.
I was getting error packets back indicating my CRC values were wrong, so I did some investigating. I found the problem, but I became confused in the process.
Here is some GDB output and other information illustrating my confusion.
I'm sending a packet that should look like this:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
Here is some GDB output:
print /x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x5942, end_flag = 0xfd}
reqId_ep is the variable name of the packet I'm sending. It looks all good there, but I am receiving CRC error codes for it, so something must be wrong.
Here I examine 9 bytes in hex starting from the address of my packet to send:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x42 0x59 0xfd
And here the problem becomes apparent. The CRC is not LSB first. (0x42 0x59)
To fix my problem I removed the htons() call I was using when assigning my CRC value.
And here is the same output above without htons():
p/x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x4259, end_flag = 0xfd}
Here the CRC value is not LSB first.
But then:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x59 0x42 0xfd
Here the CRC value is LSB first.
So apparently C stores values LSB first? Can someone please shed some light on this for me? Thank you kindly.
This has to do with Endianness in computing:
http://en.wikipedia.org/wiki/Endianness#Endianness_and_operating_systems_on_architectures
For example, the value 4660 (base ten) is 0x1234 in hex. On a Big Endian system it would be stored in memory as the bytes 12 34, while on a Little Endian system it would be stored as 34 12.
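A quick way to see this on your own machine is a minimal sketch like the following:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t value = 0x1234;
    unsigned char *bytes = (unsigned char *)&value;

    /* Little-endian systems print "34 12"; big-endian systems print "12 34". */
    printf("%02x %02x\n", bytes[0], bytes[1]);
    return 0;
}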
If you want to avoid this sort of issue in the future, it might just be easiest to create a large array or struct of unsigned char, and store individual values in it.
eg:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
typedef struct packet {
    unsigned char startFlag;
    unsigned char packetNum;
    unsigned char commandMSB;
    unsigned char commandLSB;
    unsigned char payloadMSB;
    unsigned char payloadLSB;
    unsigned char crcMSB;
    unsigned char crcLSB;
    unsigned char endFlag;
} packet_t;
You could then create a function that you compile differently, based on the type of system you are building for, using preprocessor macros.
e.g.:
#include <stdio.h>

/* Uncomment the line below if you are using a little-endian system;
 * otherwise, leave it commented out.
 */
//#define LITTLE_ENDIAN_SYSTEM

// Function prototype
void writeCommand(int cmd, packet_t *pkt);

// Function definition
void writeCommand(int cmd, packet_t *pkt)
{
    if (!pkt)
    {
        printf("Error, invalid pointer!");
        return;
    }

#ifdef LITTLE_ENDIAN_SYSTEM
    pkt->commandMSB = (cmd & 0xFF00) >> 8;
    pkt->commandLSB = (cmd & 0x00FF);
#else  // Big Endian system
    pkt->commandMSB = (cmd & 0x00FF);
    pkt->commandLSB = (cmd & 0xFF00) >> 8;
#endif
    // Done
}

int main(void)
{
    packet_t myPacket = {0};  // Initialize so it is zeroed out

    writeCommand(0x1234, &myPacket);
    return 0;
}
One final note: avoid sending structs as a raw stream of data; send their individual elements one at a time instead (see the sketch below). That is, don't assume that the struct is stored internally as one giant array of unsigned characters. The compiler and system put things in place like packing and alignment, and the struct could actually be larger than 9 x sizeof(unsigned char).
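To make that concrete, here is a minimal sketch of serializing the fields explicitly, one byte at a time, into a flat 9-byte buffer. It assumes the packet_t layout suggested above, and the per-field byte order (CRC sent LSB first) follows the example packet from the question, so adjust it to your actual protocol:

#include <stddef.h>

/* Copy the packet fields into a wire buffer in an explicit, fixed order,
   so the bytes sent never depend on host endianness or struct layout. */
static size_t packet_serialize(const packet_t *pkt, unsigned char out[9])
{
    size_t i = 0;
    out[i++] = pkt->startFlag;
    out[i++] = pkt->packetNum;
    out[i++] = pkt->commandMSB;
    out[i++] = pkt->commandLSB;
    out[i++] = pkt->payloadMSB;
    out[i++] = pkt->payloadLSB;
    out[i++] = pkt->crcLSB;      /* CRC sent LSB first, per the question's packet */
    out[i++] = pkt->crcMSB;
    out[i++] = pkt->endFlag;
    return i;                    /* number of bytes to transmit */
}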
Good luck!
This is architecture-dependent, based on which processor you're targeting. There are what are known as "Big Endian" systems, which store the most significant byte of a word first, and "Little Endian" systems, which store the least significant byte first. It looks like you're looking at a Little Endian system there.
void Sound(int f)
{
    USHORT B = 1193180 / f;
    UCHAR temp = In_8(0x61);

    temp = temp | 3;
    Out_8(0x61, temp);

    Out_8(0x43, 0xB6);
    Out_8(0x42, B & 0xF);
    Out_8(0x42, (B >> 8) & 0xF);
}
In_8/Out_8 read/write 8 bits to/from a specified port (implementation details omitted).
How does it make the PC beep?
UPDATE
Why is &0xF used here? Shouldn't it be 0xFF?
The PC has an 8253/8254 programmable interval timer, which is controlled using ports 0x43 and 0x42; port 0x61 (on the 8255 PPI) gates its output to the speaker.
When port 0x61 bit 0 is set to 1, this means "turn on the timer that is connected to the speaker".
When port 0x61 bit 1 is set to 1, this means "turn on the speaker".
This is what the first part of your code does.
The second part puts the "magic value" 0xB6 on port 0x43, which means that the following two bytes arriving at port 0x42 will be interpreted as the divisor for the timer frequency. The resulting frequency (1193180 / divisor) is then sent to the speaker.
http://gd.tuwien.ac.at/languages/c/programming-bbrown/advcw3.htm#sound
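Regarding the update: the divisor is a 16-bit value written to port 0x42 as two full bytes, so the masks should indeed be 0xFF rather than 0xF (with 0xF you keep only 4 bits of each half and program the wrong frequency). A corrected sketch, assuming the question's USHORT/UCHAR types and In_8/Out_8 helpers:

void Sound(int f)
{
    USHORT divisor = 1193180 / f;        /* PIT input clock / desired frequency */
    UCHAR temp = In_8(0x61);

    Out_8(0x61, temp | 3);               /* gate timer 2 and enable the speaker */
    Out_8(0x43, 0xB6);                   /* timer 2, lobyte/hibyte, square wave */
    Out_8(0x42, divisor & 0xFF);         /* low byte of the divisor             */
    Out_8(0x42, (divisor >> 8) & 0xFF);  /* high byte of the divisor            */
}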