I am sending some raw bytes over the wire in C (using HTTP). I'm currently doing it like this:
// response is a large buffer
int n = 0; // response length
int x = 42; // want client to read x
int y = 43; // and y
// write a simple HTTP response containing a 200 status code then x and y in binary format
strcpy(response, "HTTP/1.1 200\r\n\r\n");
n += 16; // status line we just wrote is 16 bytes long
memcpy(response + n, &x, sizeof(x));
n += sizeof(x);
memcpy(response + n, &y, sizeof(y));
n += sizeof(y);
write(client, response, n);
In JavaScript, I then read this data using code like this:
request = new XMLHttpRequest();
request.responseType = "arraybuffer";
request.open("GET", "/test");
request.onreadystatechange = function() { if (this.readyState === XMLHttpRequest.DONE) { console.log(new Int32Array(this.response)) } }
request.send();
which prints [42, 43] as it should.
I'm wondering if there is a more elegant way to do this on the server-side though, e.g.
n += sprintf(response, "HTTP/1.1 200\r\n\r\n%4b%4b", &x, &y);
Where %4b is a made-up format specifier which just says: copy the 4 bytes from that address into the string (which would be "*\0\0\0"). Is there a format specifier like the fictional %4b that does something like this?
This is an XY problem: you are asking how to use sprintf() to solve your problem, rather than simply asking how to solve your problem. Your actual problem is how to make that code more "elegant".
There is no particular reason to send the data in a single write operation - the network stack buffering will ensure that the data is packetised efficiently:
static const char header[] = "HTTP/1.1 200\r\n\r\n" ;
write( client, header, sizeof(header) - 1 ) ;
write( client, &x, sizeof(x) ) ;
write( client, &y, sizeof(y) ) ;
Note that x and y will be written in the native machine byte order, which may be incorrect at the receiver. More generically then:
static const char header[] = "HTTP/1.1 200\r\n\r\n" ;
write( client, header, sizeof(header) - 1 ) ;
uint32_t nl = htonl( x ) ;
write( client, &nl, sizeof(nl) ) ;
nl = htonl( y ) ;
write( client, &nl, sizeof(nl) ) ;
Is there a format specifier like the fictional %4b?
No, there is not, and your method is fine. I would suggest using snprintf with a bounds check to avoid buffer overflow, and adding compile-time assertions such as static_assert(sizeof(int) == 4, "") on your assumptions about the environment, along with error handling and checks against undefined behaviour.
That said, you can use the %c printf specifier multiple times, like "%c%c%c%c", ((char*)&x)[3], ((char*)&x)[2], ((char*)&x)[1], ((char*)&x)[0] to print 4 bytes (on a little-endian host this emits them in big-endian order). You can wrap it in macros and do:
#include <stdio.h>

#define PRI_BYTES_4 "%c%c%c%c"
#define ARG_BYTES_BE_4(var) \
        ((const char*)&(var))[3], \
        ((const char*)&(var))[2], \
        ((const char*)&(var))[1], \
        ((const char*)&(var))[0]

int main() {
    int var =
        'l' << 24 |
        'a' << 16 |
        'm' << 8 |
        'e';
    printf("Hello, I am " PRI_BYTES_4 ".\n",
           ARG_BYTES_BE_4(var));
    // will print `Hello, I am lame.`
}
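Applied to the question's response building, a short sketch (note that ARG_BYTES_BE_4 emits the bytes in big-endian order, unlike the original memcpy, and that the length must be taken from sprintf's return value rather than strlen(), since the payload bytes may be zero):

n = sprintf(response, "HTTP/1.1 200\r\n\r\n" PRI_BYTES_4 PRI_BYTES_4,
            ARG_BYTES_BE_4(x), ARG_BYTES_BE_4(y));
write(client, response, n);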
Related
I am trying to read continuous data from a remote device, and I have static variables declared to receive and send ACKs. payload[0] and payload[1] hold the sequence number of the data I am getting from the remote device.
The problem I have is with the variable fragment_num. After it reaches 0x0c it resets back to 0.
It's a FreeRTOS application. Are there any obvious reasons for a static variable to reset to 0, or is there a problem with my code? Thanks
#define INTIAL_FRAGMENT 0x00

static uint8_t length;
static uint8_t fragment_num ;
uint8_t image[128];

download ()
{
    if(((payload[1] << 8) | (payload[0])) == INTIAL_FRAGMENT)
    {
        memset(image , 0,128);
        memcpy(image , payload,(len));
        info_download();
        length = len;
        fragment_num +=1 ;
    }
    if(((payload[1] << 8) | (payload[0])) == fragment_num)
    {
        memcpy((image+length+1) , payload,(len));
        length += len;
        fragment_num ++;
        info_download();
    }
The problem is likely buffer overflow.
static uint8_t fragment_num ;
uint8_t image[128];
The compiler may have laid out fragment_num right after image in memory. If length or len is incorrect then memcpy() could write past the end of image and overwrite the value of fragment_num.
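Whether fragment_num really sits immediately after image is up to the compiler and linker, but it is easy to check; a small diagnostic sketch (the 128 mirrors the question's buffer size):

#include <stdio.h>
#include <stdint.h>

static uint8_t image[128];
static uint8_t fragment_num;

int main(void)
{
    /* if this prints 128, fragment_num lies immediately after image,
       and any overrun of image will clobber it */
    printf("%llu\n", (unsigned long long)
           ((uintptr_t)&fragment_num - (uintptr_t)image));
    return 0;
}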
memcpy((image+length+1) , payload,(len));
I believe you want (image+length) instead of (image+length+1) here. Adding one skips a byte.
You should probably also verify len before memcpy() to make sure it doesn't overflow, e.g.:
if (len > 128)
return -1;
memcpy(image, payload, len);
if (length + len > 128)
return -1;
memcpy(image + length, payload, len);
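Putting both checks together, a sketch of the corrected receive path (payload, len and info_download() are assumed to come from the surrounding FreeRTOS code as in the question, and download() is assumed to return void):

if (((payload[1] << 8) | payload[0]) == INTIAL_FRAGMENT)
{
    if (len > sizeof(image))
        return;                               /* reject an oversized fragment */
    memset(image, 0, sizeof(image));
    memcpy(image, payload, len);
    info_download();
    length = len;
    fragment_num += 1;
}
else if (((payload[1] << 8) | payload[0]) == fragment_num)
{
    if (length + len > sizeof(image))
        return;                               /* would overflow image[] */
    memcpy(image + length, payload, len);     /* no +1: don't skip a byte */
    length += len;
    fragment_num++;
    info_download();
}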
I'm working on a crackme, and having a bit of trouble making sense of the flag I'm supposed to retrieve.
I have disassembled the binary using radare2 and Ghidra; Ghidra gives me back the following pseudo-code:
undefined8 main(void)
{
  long in_FS_OFFSET;
  double dVar1;
  double dVar2;
  int local_38;
  int local_34;
  int local_30;
  int iStack44;
  int local_28;
  undefined2 uStack36;
  ushort uStack34;
  char local_20;
  undefined2 uStack31;
  uint uStack29;
  byte bStack25;
  long local_10;

  local_10 = *(long *)(in_FS_OFFSET + 0x28);
  __printf_chk(1,"Insert flag: ");
  __isoc99_scanf(&DAT_00102012,&local_38);
  uStack34 = uStack34 << 8 | uStack34 >> 8;
  uStack29 = uStack29 & 0xffffff00 | (uint)bStack25;
  bStack25 = (undefined)uStack29;
  if ((((local_38 == 0x41524146) && (local_34 == 0x7b594144)) && (local_30 == 0x62753064)) &&
     (((iStack44 == 0x405f336c && (local_20 == '_')) &&
      ((local_28 == 0x665f646e && (CONCAT22(uStack34,uStack36) == 0x40746f31)))))) {
    dVar1 = (double)CONCAT26(uStack34,CONCAT24(uStack36,0x665f646e));
    dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
    __printf_chk(0x405f336c62753064,1,&DAT_00102017);
    __printf_chk(dVar1,1,"y: %.30lf\n");
    __printf_chk(dVar2,1,"z: %.30lf\n");
    dVar1 = dVar1 * 124.8034902710365;
    dVar2 = (dVar1 * dVar1) / dVar2;
    round_double(dVar2,0x1e);
    __printf_chk(1,"%.30lf\n");
    dVar1 = (double)round_double(dVar2,0x1e);
    if (1.192092895507812e-07 <= (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff))
    {
      puts("Try Again");
    }
    else {
      puts("Well done!");
    }
  }
  if (local_10 != *(long *)(in_FS_OFFSET + 0x28)) {
    /* WARNING: Subroutine does not return */
    __stack_chk_fail();
  }
  return 0;
}
It is easy to see that there's a part of the flag in plain sight, but the other part is a bit more interesting:
if (1.192092895507812e-07 <= (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff))
From what I understand, I have to generate the missing part of the flag depending on this condition. The problem is that I absolutely have no idea how to do this.
I can assume this missing part is 8 bytes in size, according to this line:
dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
Considering flags are usually ASCII with some special characters, let's say each byte can take values from 0x21 to 0x7E; that's 94^8 (roughly 6e15) combinations, which will clearly take too much time to compute.
Do you guys have an idea on how I should proceed to solve this?
Edit: Here is the link to the binary: https://filebin.net/dpfr1nocyry3sijk
You can tweak the Ghidra output by editing the variable types. Based on the scanf format string %32s, local_38 should be char[32].
Before the first if there are some byte swaps.
The first if statement gives you a long constraint on the flag.
At this point you can confirm that part of the flag is FARADAY{d0ubl3_#nd_f1o#t; then comes the main part of this challenge.
The program prints x, y and z based on the flag, but you'll quickly find that x and y are constrained by the if, so you only need to solve for z to get the flag. At first glance it looks like you have to brute-force every double value limited to printable ASCII.
But there is a limitation in the if statement that says byte 0 of this double must be _, plus a math constraint: simple arithmetic tells you |dVar2 - 4088116.817143337| <= 1.192092895507812e-07, so dVar2 must be very close to 4088116.817143337.
Bytes 3 and 7 of this double also get swapped.
From the reversed code, dVar2 = y*y*x*x/z; solving this equation, z must be near 407.2786840401004, which packed as a little-endian double is _be}uty#. Based on the internal format of a double, the most significant byte affects the exponent, so you can be sure the last byte is #, and bytes 0 and 3 are now fixed by the constraint and the flag's usual format with a {} pair.
So finally, you only need to brute-force 5 bytes of printable ASCII to solve this challenge.
import string, struct
from itertools import product

possible = string.ascii_lowercase + string.punctuation + string.digits
for nxt in product(possible, repeat=5):
    n = ''.join(nxt).encode()
    s = b'_' + n[:2] + b'}' + n[2:] + b'#'
    rtn = struct.unpack("<d", s)[0]
    rtn = 1665002837.488342 / rtn
    if abs(rtn - 4088116.817143337) <= 0.0000001192092895507812:
        print(s)
And bingo, the flag is FARADAY{d0ubl3_#nd_f1o#t_be#uty}.
I am communicating with a board that requires I send it 2 signed bytes.
Would I need to do bitwise manipulation, or can I just send the 16-bit integers as follows?
int16_t rc_min_angle = -90;
int16_t rc_max_angle = 120;
write(fd, &rc_min_angle, 2);
write(fd, &rc_max_angle, 2);
int16_t has the correct size but may or may not be the correct endianness. To ensure little endian order use macros such as the ones from endian.h:
#define _BSD_SOURCE
#include <endian.h>
...
uint16_t rc_min_angle_le = htole16(rc_min_angle);
uint16_t rc_max_angle_le = htole16(rc_max_angle);
write(fd, &rc_min_angle_le, 2);
write(fd, &rc_max_angle_le, 2);
Here htole16 stands for "host to little endian 16-bit". It converts from the host machine's native endianness to little endian: if the machine is big endian it swaps the bytes; if it's little endian it's a no-op.
Also note that you have to pass the address of the values to write(), not the values themselves. Sadly, we cannot inline the calls and write write(fd, htole16(rc_min_angle), 2).
If endian functions are not available, simply write the bytes in little endian order.
Perhaps with a compound literal.
//           v---------------- compound literal ----------------v
write(fd, &(uint8_t[2]){ (uint16_t)rc_min_angle % 256, (uint16_t)rc_min_angle / 256 }, 2);
write(fd, &(uint8_t[2]){ (uint16_t)rc_max_angle % 256, (uint16_t)rc_max_angle / 256 }, 2);
//                       ^-------- LS byte --------^   ^-------- MS byte --------^
// (the cast to uint16_t keeps % and / well-defined for negative angles)
I added the & assuming the write() is like write(2) on Linux.
If you don't need to have it type-generic, you can simply do:
#include <stdint.h>
#include <unistd.h>

/* most optimizers will turn this into `return 1;` */
_Bool little_endian_eh() { uint16_t x = 1; return *(char *)&x; }
void swap2bytes(void *X) { char *x = X, t; t = x[0]; x[0] = x[1]; x[1] = t; }

int main()
{
    int16_t rc_min_angle = -90;
    int16_t rc_max_angle = 120;

    // this'll very likely be a noop since most machines
    // are little-endian
    if(!little_endian_eh()){
        swap2bytes(&rc_min_angle);
        swap2bytes(&rc_max_angle);
    }

    // TODO: error checking on the write calls
    int fd = 1;
    write(fd, &rc_min_angle, 2);
    write(fd, &rc_max_angle, 2);
}
To send little-endian data, you can just generate the bytes manually:
int write_le(int fd, int16_t val) {
    unsigned char val_le[2] = {
        val & 0xff, (uint16_t) val >> 8
    };
    int nwritten = 0, total = 2;
    while (nwritten < total) {
        int n = write(fd, val_le + nwritten, total - nwritten);
        if (n == -1)
            return nwritten > 0 ? nwritten : -1;
        nwritten += n;
    }
    return nwritten;
}
A good compiler will recognize that on a little-endian platform the byte manipulation does nothing and compile it to a no-op. (See e.g. gcc generating the same code for the variant with and without the bit-twiddling.)
Note also that you shouldn't ignore the return value of write() - not only can it encounter an error, it can also write less than you gave it to, in which case you must repeat the write.
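Usage then mirrors the original calls (a sketch; fd is the descriptor from the question):

int16_t rc_min_angle = -90;
int16_t rc_max_angle = 120;

if (write_le(fd, rc_min_angle) != 2 || write_le(fd, rc_max_angle) != 2) {
    /* handle the error or short write */
}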
I'm trying to write the binary representation of 16-bit signed integers to a file. I searched a lot, and of course I found many examples which convert integer variables to binary. But in my case these functions will not be efficient, because I need to convert 50e6 samples/s. Calling a function to convert each sample would need a lot of computing time.
So what I want to do is:
int array[] = {233, 431, 1024, ...};
for (i = 0; i < sizeof(array); i++){
    fprintf(outfile, "%any_binary_format \n", array[i]);
}
result in the file should be:
0000000011101001
0000000110101111
0000010000000000
fprintf is intended for formatted output - the formatting being "human readable" text - so it is not the appropriate function to use if you want binary output. For that you should use fwrite():
for (i = 0; i < sizeof(array) / sizeof(*array); i++ )
{
    fwrite (&array[i], sizeof(*array), 1, outfile ) ;
}
Note I have also fixed your loop termination to correctly iterate the number of elements in the array. But in fact the loop is unnecessary - the output is binary, the array is binary - you can just output the entire array thus:
fwrite( array, sizeof(array), 1, outfile ) ;
Your performance requirement of 50Msps will require sustained write performance of around 100MB/s (two bytes per sample) - that is a lot to ask, and unlikely to be achieved by writing one sample at a time. You may be better off using a memory mapped file, but unless you are using a real-time OS, there are no guarantees that you will sustain that output rate indefinitely - it only takes some other process to access the drive, and it may introduce an unacceptable delay.
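A rough POSIX sketch of the memory-mapped approach (the file name, sample count and minimal error handling are illustrative assumptions, not a tuned implementation):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t nsamples = 1 << 20;                 /* illustrative count */
    size_t nbytes = nsamples * sizeof(int16_t);
    int fd = open("samples.bin", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return 1;
    if (ftruncate(fd, nbytes) != 0) return 1;  /* size the file first */
    int16_t *map = mmap(NULL, nbytes, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) return 1;
    for (size_t i = 0; i < nsamples; i++)      /* write samples directly */
        map[i] = (int16_t)i;
    msync(map, nbytes, MS_SYNC);               /* flush to disk */
    munmap(map, nbytes);
    close(fd);
    return 0;
}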
Also note that the file must be opened for binary output - especially on Windows, to prevent translation of LF to CR+LF, which would be disastrous for your sample data.
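For example (the file name is illustrative):

FILE *outfile = fopen("samples.bin", "wb");   // "b" selects binary mode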
If you want to use printf you can use something like this:
#include <stdio.h>
#include <stdint.h>

#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c\n"
#define BYTE_TO_BINARY(byte) \
    (byte & 0x80 ? '1' : '0'), \
    (byte & 0x40 ? '1' : '0'), \
    (byte & 0x20 ? '1' : '0'), \
    (byte & 0x10 ? '1' : '0'), \
    (byte & 0x08 ? '1' : '0'), \
    (byte & 0x04 ? '1' : '0'), \
    (byte & 0x02 ? '1' : '0'), \
    (byte & 0x01 ? '1' : '0')

int main()
{
    uint8_t value = 5;
    printf(BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(value));
    return 0;
}
Should print 00000101. I use this sometimes in embedded code when debugging to check register values. Just replace printf with fprintf if you want to write the ASCII binary strings to a file.
If your compiler supports inline, you don't need to worry about the overhead of a small function; take a look at this.
Anyway, you can simply implement the function as a macro.
If you want a faster approach you can use a larger buffer (the size that gives the fastest runtime is machine-dependent), for example char str[1 << 16], writing the results into the buffer and flushing it to the output stream with fwrite/write - see the buffered sketch after the code below.
Another approach is to map the file into your process via mmap/msync.
Anyway, you don't need a faster conversion function so much as a deeper knowledge of the system you're working on.
#define SHORT_WIDTH 16
#define TEST 1
#define PADDING 1 /* set to 0 if you don't need the leading 0s */

char *ShortToBin(unsigned short x, char *buffer) {
#if PADDING
    int i;
    for(i = 0; i < SHORT_WIDTH; ++i)
        buffer[SHORT_WIDTH - i - 1] = '0' + ((x >> i) & 1);
    return buffer;
#else
    char *ptr = buffer + SHORT_WIDTH;
    do {
        *(--ptr) = '0' + (x & 1);
        x >>= 1;
    } while(x);
    return ptr;
#endif
}

#if TEST
#include <stdio.h>

int main() {
    short n;
    char str[SHORT_WIDTH+1]; str[SHORT_WIDTH]='\0';

    while(scanf("%hd", &n) == 1)
        puts(ShortToBin(n, str));

    return 0;
}
#endif
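For the buffered approach mentioned above, a rough sketch built on ShortToBin (this assumes PADDING == 1 so every conversion emits exactly SHORT_WIDTH characters; WriteAllSamples is an illustrative name and 1 << 16 an arbitrary tunable size):

#include <stdio.h>

void WriteAllSamples(const short *samples, size_t count, FILE *out) {
    static char buf[1 << 16];
    size_t used = 0;
    for (size_t i = 0; i < count; ++i) {
        if (used + SHORT_WIDTH + 1 > sizeof(buf)) {  /* flush when full */
            fwrite(buf, 1, used, out);
            used = 0;
        }
        ShortToBin((unsigned short)samples[i], buf + used);
        used += SHORT_WIDTH;
        buf[used++] = '\n';
    }
    fwrite(buf, 1, used, out);                       /* flush the tail */
}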
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file
    FILE *outptr = fopen("test_output", "w");
    if (outptr == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = 0xabcdef;
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
This is my code, and when I look at the test_output file with xxd it gives the following output.
$ xxd -c 12 -g 3 test_output
0000000: efcdab 00 ....
I'm expecting it to print abcdef instead of efcdab.
Which book are you reading? There are a number of issues in this code - casting the return value of malloc, for example... Most importantly, consider the cons of using an integer type which might vary in size and representation from system to system.
An int is guaranteed the ability to store values in the range -32767 to 32767. Your implementation might allow more values, but to be portable and friendly with people using ancient compilers such as Turbo C (there are a lot of them), you shouldn't use int to store values larger than 32767 (0x7fff) such as 0xabcdef. When such out-of-range conversions are performed, the result is implementation-defined; it could involve saturation, wrapping, trap representations or the raising of a signal corresponding to a computational error, the latter two of which could cause undefined behaviour later on.
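If you want such an assumption checked at build time, a sketch of a preprocessor guard (the error message is illustrative):

#include <limits.h>

#if INT_MAX < 0xABCDEF
#error "int on this platform cannot hold 0xABCDEF"
#endif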
You need to translate to an agreed-upon field format. When sending data over the wire, or writing data to a file to be transferred to other systems, it's important that the protocol for communication be agreed upon. This includes using the same size and representation for integer fields. Both output and input should go through a translation function (serialisation and deserialisation, respectively).
Your fields are binary, and so your file should be opened in binary mode. For example, use fopen(..., "wb") rather than "w". Otherwise, '\n' characters might be translated to \r\n pairs in some situations; Windows systems are notorious for this. Can you imagine what kind of havoc and confusion this could wreak? I can, because I've answered a question about this problem.
Perhaps uint32_t might be a better choice, but I'd choose unsigned long as uint32_t isn't guaranteed to exist. On that note, for systems which don't have htonl (which returns uint32_t according to POSIX), that function could be implemented like so:
uint32_t htonl(uint32_t x) {
    return (x & 0x000000ff) << 24
         | (x & 0x0000ff00) << 8
         | (x & 0x00ff0000) >> 8
         | (x & 0xff000000) >> 24;
}
As an example inspired by the above htonl function, consider these macros:
typedef unsigned long ulong;
#define serialised_long(x) serialised_ulong((ulong) (x))
#define serialised_ulong(x) ((x) & 0xFF000000) / 0x1000000 \
                          , ((x) & 0xFF0000) / 0x10000 \
                          , ((x) & 0xFF00) / 0x100 \
                          , ((x) & 0xFF)

typedef unsigned char uchar;
/* the sign lives in the first (most significant) serialised byte;
   negative values are recovered without byte-wise carry bugs */
#define deserialised_long(x) ((x)[0] <= 0x7f \
                             ? (long) deserialised_ulong(x) \
                             : -(long) (0xFFFFFFFFUL - deserialised_ulong(x)) - 1)
#define deserialised_ulong(x) ( (x)[0] * 0x1000000UL \
                              + (x)[1] * 0x10000UL \
                              + (x)[2] * 0x100UL \
                              + (x)[3] )
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("test_output", "wb+");
    if (f == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    ulong value = 0xABCDEF;
    unsigned char datagram[] = { serialised_ulong(value) };
    fwrite(datagram, sizeof datagram, 1, f);
    printf("%08lX serialised to %02X%02X%02X%02X\n",
           value, datagram[0], datagram[1], datagram[2], datagram[3]);

    rewind(f);
    fread(datagram, sizeof datagram, 1, f);
    value = deserialised_ulong(datagram);
    printf("%02X%02X%02X%02X deserialised to %08lX\n",
           datagram[0], datagram[1], datagram[2], datagram[3], value);

    fclose(f);
    return 0;
}
Use htonl()
It converts from whatever the host byte order is (the endianness of your machine) to network byte order. So whatever machine you're running on, you will get the same byte order. These calls are used so that, regardless of the host you're running on, the bytes are sent over the network in the right order, but it works for you too.
See the man pages of htonl and byteorder. There are various conversion functions available, also for different integer sizes, 16-bit, 32-bit, 64-bit ...
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>

int main(void) {
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file
    FILE *outptr = fopen("test_output", "w");
    if (outptr == NULL) {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = htonl(0xabcdef); // <====== This ensures correct byte order
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
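With the conversion in place (and assuming a 4-byte int), xxd should now show something like:
$ xxd -c 12 -g 3 test_output
0000000: 00abcd ef                            ....
i.e. the bytes of 0xabcdef in the expected order, preceded by the zero byte of the 32-bit value.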