How to fprintf an int to binary in C?

I'm trying to write the binary representation of 16-bit signed integers to a file. I searched a lot and of course I found many examples that convert integer variables to binary. But in my case these functions will not be efficient, because I need to convert 50e6 samples/s. Calling a function to convert each sample would need a lot of computing time.
So what I want to do is:
int array[] = {233, 431, 1024, ...};
for (i = 0; i < sizeof(array); i++) {
    fprintf(outfile, "%any_binary_format \n", array[i]);
}
The result in the file should be:
0000000011101001
0000000110101111
0000010000000000

fprintf is intended for formatted output, the formatting being "human readable" text; it is therefore not the appropriate function to use if you want binary output. For that you should use fwrite():
for (i = 0; i < sizeof(array) / sizeof(*array); i++)
{
    fwrite(&array[i], sizeof(*array), 1, outfile);
}
Note I have also fixed your loop termination so it iterates over the correct number of elements in the array. But in fact the loop is unnecessary - the output is binary, the array is binary - you can just output the entire array thus:
fwrite(array, sizeof(array), 1, outfile);
Your performance requirement of 50Msps at 16 bits per sample will require write performance of around 100MB/s (95MiB/s) sustained - that is a lot to ask, and unlikely to be achieved by writing one sample at a time. You may be better off using a memory-mapped file, but unless you are using a real-time OS, there is no guarantee that you will sustain that output rate indefinitely - it only takes some other process to access the drive, and it may introduce an unacceptable delay.
Also note that the file must be opened for binary output - especially on Windows, to prevent translation of LF to CR+LF, which would be disastrous for your sample data.
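Putting it together, a minimal sketch of the whole write path (my own; the file name samples.bin and the sample values are placeholders):
#include <stdio.h>

int main(void)
{
    short array[] = {233, 431, 1024};

    /* "b" matters: it suppresses newline translation on Windows */
    FILE *outfile = fopen("samples.bin", "wb");
    if (outfile == NULL)
    {
        perror("fopen");
        return 1;
    }
    fwrite(array, sizeof(array), 1, outfile);
    fclose(outfile);
    return 0;
}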

If you want to use printf you can use something like this:
#include <stdio.h>
#include <stdint.h>

#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c\n"
#define BYTE_TO_BINARY(byte) \
    (byte & 0x80 ? '1' : '0'), \
    (byte & 0x40 ? '1' : '0'), \
    (byte & 0x20 ? '1' : '0'), \
    (byte & 0x10 ? '1' : '0'), \
    (byte & 0x08 ? '1' : '0'), \
    (byte & 0x04 ? '1' : '0'), \
    (byte & 0x02 ? '1' : '0'), \
    (byte & 0x01 ? '1' : '0')

int main()
{
    uint8_t value = 5;
    printf(BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(value));
    return 0;
}
Should print 00000101. I use this sometimes in embedded code when debugging to check register values. Just replace printf with fprintf if you want to write the ASCII binary strings to a file.
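Since the question is about 16-bit samples, one possible extension (my own sketch, not part of the original answer, reusing BYTE_TO_BINARY and BYTE_TO_BINARY_PATTERN from above; BYTE_TO_BINARY_NO_NL is a name I made up) is to apply the macro once per byte:
#include <stdio.h>
#include <stdint.h>

#define BYTE_TO_BINARY_NO_NL "%c%c%c%c%c%c%c%c"

int main(void)
{
    uint16_t sample = 233;
    /* high byte first, then low byte: prints 0000000011101001 */
    printf(BYTE_TO_BINARY_NO_NL BYTE_TO_BINARY_PATTERN,
           BYTE_TO_BINARY(sample >> 8),
           BYTE_TO_BINARY(sample & 0xFF));
    return 0;
}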

If your compiler supports inline you don't need to worry about the overhead of a small function.
Alternatively, you can simply implement the function as a macro.
If you want a faster approach you can use a larger buffer (the size that gives the fastest runtime is machine-dependent), for example char str[1 << 16]: write the results into the buffer and push it to the output stream with fwrite/write. A sketch of this follows the code below.
Another approach is to map the output file into memory via mmap/msync.
In any case, what you need is not a faster function, but a deeper knowledge of the system you're working on.
#define SHORT_WIDTH 16
#define TEST 1
#define PADDING 1 /* set to 0 if you don't need the leading 0s */

char *ShortToBin(unsigned short x, char *buffer) {
#if PADDING
    int i;
    for (i = 0; i < SHORT_WIDTH; ++i)
        buffer[SHORT_WIDTH - i - 1] = '0' + ((x >> i) & 1);
    return buffer;
#else
    char *ptr = buffer + SHORT_WIDTH;
    do {
        *(--ptr) = '0' + (x & 1);
        x >>= 1;
    } while (x);
    return ptr;
#endif
}

#if TEST
#include <stdio.h>

int main() {
    short n;
    char str[SHORT_WIDTH + 1];
    str[SHORT_WIDTH] = '\0';
    while (scanf("%hd", &n) == 1)
        puts(ShortToBin(n, str));
    return 0;
}
#endif
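The buffered approach mentioned above could be combined with ShortToBin like this (a sketch under my own assumptions: PADDING left at 1, an arbitrary 64KiB buffer, one sample per line):
#include <stdio.h>
#include <string.h>

/* sketch: batch converted samples into one large buffer, flush with fwrite */
void dump_samples(const unsigned short *samples, size_t n, FILE *out) {
    static char buf[1 << 16];
    char line[SHORT_WIDTH + 1];
    size_t used = 0, i;

    for (i = 0; i < n; ++i) {
        if (used + SHORT_WIDTH + 1 > sizeof(buf)) { /* flush when full */
            fwrite(buf, 1, used, out);
            used = 0;
        }
        memcpy(buf + used, ShortToBin(samples[i], line), SHORT_WIDTH);
        used += SHORT_WIDTH;
        buf[used++] = '\n';
    }
    fwrite(buf, 1, used, out); /* flush the remainder */
}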

Related

How to reverse strings that have been obfuscated using floats and doubles?

I'm working on a crackme, and having a bit of trouble making sense of the flag I'm supposed to retrieve.
I have disassembled the binary using radare2 and Ghidra; Ghidra gives me back the following pseudo-code:
undefined8 main(void)
{
    long in_FS_OFFSET;
    double dVar1;
    double dVar2;
    int local_38;
    int local_34;
    int local_30;
    int iStack44;
    int local_28;
    undefined2 uStack36;
    ushort uStack34;
    char local_20;
    undefined2 uStack31;
    uint uStack29;
    byte bStack25;
    long local_10;

    local_10 = *(long *)(in_FS_OFFSET + 0x28);
    __printf_chk(1,"Insert flag: ");
    __isoc99_scanf(&DAT_00102012,&local_38);
    uStack34 = uStack34 << 8 | uStack34 >> 8;
    uStack29 = uStack29 & 0xffffff00 | (uint)bStack25;
    bStack25 = (undefined)uStack29;
    if ((((local_38 == 0x41524146) && (local_34 == 0x7b594144)) && (local_30 == 0x62753064)) &&
        (((iStack44 == 0x405f336c && (local_20 == '_')) &&
          ((local_28 == 0x665f646e && (CONCAT22(uStack34,uStack36) == 0x40746f31)))))) {
        dVar1 = (double)CONCAT26(uStack34,CONCAT24(uStack36,0x665f646e));
        dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
        __printf_chk(0x405f336c62753064,1,&DAT_00102017);
        __printf_chk(dVar1,1,"y: %.30lf\n");
        __printf_chk(dVar2,1,"z: %.30lf\n");
        dVar1 = dVar1 * 124.8034902710365;
        dVar2 = (dVar1 * dVar1) / dVar2;
        round_double(dVar2,0x1e);
        __printf_chk(1,"%.30lf\n");
        dVar1 = (double)round_double(dVar2,0x1e);
        if (1.192092895507812e-07 <=
            (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff)) {
            puts("Try Again");
        }
        else {
            puts("Well done!");
        }
    }
    if (local_10 != *(long *)(in_FS_OFFSET + 0x28)) {
        /* WARNING: Subroutine does not return */
        __stack_chk_fail();
    }
    return 0;
}
It is easy to see that there's a part of the flag in plain sight, but the other part is a bit more interesting:
if (1.192092895507812e-07 <= (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff))
From what I understand, I have to generate the missing part of the flag depending on this condition. The problem is that I absolutely have no idea how to do this.
I can assume this missing part is 8 bytes in size, according to this line:
dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
Considering flags are usually ASCII with some special characters, let's say each byte can take values from 0x21 to 0x7E; that's 94^8 (on the order of 10^15) combinations, which will clearly take too much time to compute.
Do you guys have an idea on how I should proceed to solve this?
Edit: Here is the link to the binary: https://filebin.net/dpfr1nocyry3sijk
You can tweak the Ghidra decompilation by editing variable types. Based on the scanf format string %32s, local_38 should be char [32].
Before the first if there are some byte swaps.
The first if statement gives you a long constraint on the flag.
At this point you can confirm that part of the flag is FARADAY{d0ubl3_#nd_f1o#t; then comes the main part of this challenge.
The program prints x, y, z based on the flag, but you'll quickly find that x and y are already constrained by the if, so you only need to solve for z to get the flag. At first you might think you need to brute-force every double value limited to printable ASCII.
But there is a limitation in the if statement that says byte 0 of this double must be _, and there is a math constraint: simple math tells you |dVar2 - 4088116.817143337| <= 1.192092895507812e-07, so dVar2 must be very close to 4088116.817143337.
Also, byte 3 and byte 7 in this double get swapped.
From the reversed result, dVar2 = y*y*x*x/z; solving this equation, z must be near 407.2786840401004, which packed as little-endian is _be}uty#. Based on the internal structure of a double, the MSB affects the exponent, so you can be sure the last byte is #; byte 0 and byte 3 are now fixed by the constraint and by the flag's common format with its {} pair.
So finally, you only need to brute-force 5 bytes of printable ASCII to solve this challenge.
import string, struct
from itertools import product

possible = string.ascii_lowercase + string.punctuation + string.digits
for nxt in product(possible, repeat=5):
    n = ''.join(nxt).encode()
    s = b'_' + n[:2] + b'}' + n[2:] + b'#'
    rtn = struct.unpack("<d", s)[0]
    rtn = 1665002837.488342 / rtn
    if abs(rtn - 4088116.817143337) <= 0.0000001192092895507812:
        print(s)
And bingo, the flag is FARADAY{d0ubl3_#nd_f1o#t_be#uty}

Discriminate bits after "bit stuffing"

I have written a piece of code to add a '0' after 6 consecutive '1's in a bit stream. But how do I decode it?
Here an example of one bits stream:
original = {01101111110111000101111110001100...etc...}
stuffed = {011011111O101110001011111O10001100...etc...}
(The 'O' stand for the stuffed '0'.)
As you can see a '0' was added after each '111111' and to retrieve the original stream one has to remove it. Easy.
But... What if the original stream had the same form as the stuffed one? How do I know if I have to remove these bits?!
I think you are confused with the basics. Pretend you want a B added after 2 As. This is not 'stuffed':
AAAAA
'Stuffing' it gives you:
AABAABA
The above is either 'stuffed' or 'not stuffed'. In other words you can stuff it again:
AABBAABBA
Or you could 'unstuff' it:
AAAAA
What if the original stream had the same form as the stuffed one?
So if a bitstream has 10 consecutive 1s in it then it has clearly not been stuffed. You can't say the same for a bitstream that could have been stuffed.
My question was so dumb... But it was late!
Here's the piece of code I wrote. It takes two streams of bits. The length of the stream to be stuffed is in its first byte. It works well, except that the new length after stuffing is not yet updated.
I used macros so it's more readable.
#include "bitstuff.h"
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#define sbi(byte, bit) (byte = byte | (1 << bit))
#define cbi(byte, bit) (byte = byte & ~ (1 << bit))
#define ibc(byte, bit) (~byte & (1 << bit))
#define ibs(byte, bit) (byte & (1 << bit))
#define clr(byte) (byte = 0)
void bitstuff(uint8_t* stream, uint8_t* stuff) {
int8_t k = 7, b = 7;
uint8_t row = 0;
uint8_t len = 8**stream++;
stuff++;
while(len--) {
if(ibs(*stream, k--)) {
row++;
if(row==5) {
cbi(*stuff, b--);
if(b<0) {b=7; stuff++;};
sbi(*stuff, b--);
if(b<0) {b=7; stuff++;};
}
else {
sbi(*stuff, b--);
if(b<0) {b=7; stuff++;};
}
}
else {
clr(row);
cbi(*stuff, b--);
if(b<0) {b=7; stuff++;};
}
if(k<0) {k=7; stream++;};
}
}
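For the decoding direction the question asks about, here is a possible unstuffing sketch (my own, not from the thread; it mirrors the example above by dropping the 0 that follows five consecutive 1s, with bits packed MSB-first):
#include <stdint.h>

/* sketch: remove the '0' stuffed after five consecutive '1' bits.
   nbits counts bits in `in`; returns the number of bits written to `out`. */
static int get_bit(const uint8_t *p, int i) { return (p[i / 8] >> (7 - i % 8)) & 1; }
static void put_bit(uint8_t *p, int i, int b) {
    if (b) p[i / 8] |= 1 << (7 - i % 8);
    else   p[i / 8] &= ~(1 << (7 - i % 8));
}

int unstuff(const uint8_t *in, int nbits, uint8_t *out) {
    int run = 0, o = 0;
    for (int i = 0; i < nbits; ++i) {
        int b = get_bit(in, i);
        if (run == 5 && b == 0) { run = 0; continue; } /* stuffed bit: skip it */
        put_bit(out, o++, b);
        run = b ? run + 1 : 0;
    }
    return o; /* compare with nbits to know how many bits were removed */
}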

fwrite() in C writes bytes in a different order

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file
    FILE *outptr = fopen("test_output", "w");
    if (outptr == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = 0xabcdef;
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
This is my code, and when I inspect the test_output file with xxd it gives the following output.
$ xxd -c 12 -g 3 test_output
0000000: efcdab 00 ....
I'm expecting it to print abcdef instead of efcdab.
Which book are you reading? There are a number of issues in this code, casting the return value of malloc for example... Most importantly, consider the cons of using an integer type which might vary in size and representation from system to system.
An int is guaranteed to be able to store values in the range -32767 to 32767. Your implementation might allow more values, but to be portable and friendly with people using ancient compilers such as Turbo C (there are a lot of them), you shouldn't use int to store values larger than 32767 (0x7fff), such as 0xabcdef. When such out-of-range conversions are performed, the result is implementation-defined; it could involve saturation, wrapping, trap representations or the raising of a signal corresponding to a computational error, the latter two of which could cause undefined behaviour later on.
You need to translate to an agreed-upon field format. When sending data over the wire, or writing data to a file to be transferred to other systems, it's important that the protocol for communication be agreed upon. This includes using the same size and representation for integer fields. Both output and input should go through a translation function (serialisation and deserialisation, respectively).
Your fields are binary, and so your file should be opened in binary mode. For example, use fopen(..., "wb") rather than "w". Otherwise, '\n' characters might be translated to \r\n pairs; Windows systems are notorious for this. Can you imagine what kind of havoc and confusion this could wreak? I can, because I've answered a question about this problem.
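To see the translation for yourself, here is a tiny experiment (my own illustration; on POSIX systems both files will come out identical):
#include <stdio.h>

int main(void)
{
    unsigned char nl = 0x0A;
    FILE *t = fopen("text.bin", "w");   /* text mode */
    FILE *b = fopen("bin.bin", "wb");   /* binary mode */
    if (t == NULL || b == NULL)
        return 1;
    /* on Windows, text.bin ends up as 0x0D 0x0A; bin.bin stays 0x0A */
    fwrite(&nl, 1, 1, t);
    fwrite(&nl, 1, 1, b);
    fclose(t);
    fclose(b);
    return 0;
}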
Perhaps uint32_t might be a better choice, but I'd choose unsigned long as uint32_t isn't guaranteed to exist. On that note, for systems which don't have htonl (which returns uint32_t according to POSIX), that function could be implemented like so:
uint32_t htonl(uint32_t x) {
    return (x & 0x000000ff) << 24
         | (x & 0x0000ff00) << 8
         | (x & 0x00ff0000) >> 8
         | (x & 0xff000000) >> 24;
}
As an example inspired by the above htonl function, consider these macros:
typedef unsigned long ulong;
#define serialised_long(x) serialised_ulong((ulong) x)
#define serialised_ulong(x) (x & 0xFF000000) / 0x1000000 \
                          , (x & 0xFF0000) / 0x10000 \
                          , (x & 0xFF00) / 0x100 \
                          , (x & 0xFF)

typedef unsigned char uchar;
/* byte 0 is the most significant byte, so it carries the sign bit;
   note the simple complement below loses a carry when x[3] == 0 */
#define deserialised_long(x) (x[0] <= 0x7f \
                             ? (long) deserialised_ulong(x) \
                             : -(long) deserialised_ulong((uchar[]) { 0xFF - x[0] \
                                                                    , 0xFF - x[1] \
                                                                    , 0xFF - x[2] \
                                                                    , 0x100 - x[3] }))
#define deserialised_ulong(x) ( x[0] * 0x1000000UL \
                              + x[1] * 0x10000UL \
                              + x[2] * 0x100UL \
                              + x[3] )
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("test_output", "wb+");
    if (f == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    ulong value = 0xABCDEF;
    unsigned char datagram[] = { serialised_ulong(value) };
    fwrite(datagram, sizeof datagram, 1, f);
    printf("%08lX serialised to %02X%02X%02X%02X\n",
           value, datagram[0], datagram[1], datagram[2], datagram[3]);

    rewind(f);
    fread(datagram, sizeof datagram, 1, f);
    value = deserialised_ulong(datagram);
    printf("%02X%02X%02X%02X deserialised to %08lX\n",
           datagram[0], datagram[1], datagram[2], datagram[3], value);

    fclose(f);
    return 0;
}
Use htonl()
It converts from whatever the host byte order is (the endianness of your machine) to network byte order. So whatever machine you're running on, you will get the same byte order. These calls are used so that, regardless of the host you're running on, the bytes are sent over the network in the right order, but it works for you too.
See the man pages of htonl and byteorder. There are various conversion functions available, also for different integer sizes, 16-bit, 32-bit, 64-bit ...
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>

int main(void) {
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file (binary mode, as noted in the other answer)
    FILE *outptr = fopen("test_output", "wb");
    if (outptr == NULL) {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = htonl(0xabcdef); // <====== This ensures correct byte order
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
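On the reading side, the mirror function is ntohl() from the same header; a sketch (my own, assuming test_output was written as above):
#include <stdio.h>
#include <arpa/inet.h>

int main(void) {
    int value;
    FILE *inptr = fopen("test_output", "rb");
    if (inptr == NULL)
        return 1;
    fread(&value, sizeof(int), 1, inptr);
    value = ntohl(value);    // network order back to host order
    printf("0x%x\n", value); // 0xabcdef on any host
    fclose(inptr);
    return 0;
}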

Binary file that can only be printed in hex format, but not binary format

Here is a binary file that contains:
0xff 0xff 0xff
which is exactly three bytes.
I try to use the dump_file function here
#include "table.h"
#include "debug.h"
typedef unsigned int Code
void dump_file( char* fileName[] )
{
char c;
for (int i = 0; i < 4; ++i)
{
log_info("File: %s",fileName[i]);
FILE* file = fopen(fileName[i],"rb");
fread(&c,sizeof(char),1,file);
while( !feof(file) ){
dump_code( c , 8 );
fread(&c,sizeof(char),1,file);
}
}
}
void dump_code( Code code,int BitsNum )
{
int mask = 1 << (BitsNum-1);
for (int i = 0; i < BitsNum ; ++i)
{
if(i%8==0)putchar('|');
putchar((mask & code) ? '1' : '0');
code <<= 1;
}
puts("");
}
to print the file in binary format, but it prints nothing. (Somehow it bumps into EOF in an undesirable manner??)
I also used the Unix utility xxd.
When I tell xxd to print my file in binary, it prints nothing. But if I choose to print hexadecimally, it prints as expected. What's wrong with this file?
This file is generated by a parser. The C program uses fseek to jump to various locations in a file and print the corresponding binary code. It might go like:
0th byte --> 1st byte --> 3rd byte --> 5th byte --> 2nd byte --> 4th byte --> 6th byte
It is guaranteed that there is no "leak" in the resulting file, i.e., every byte will be traversed.
What is the reason for this strange behavior?
Update 1
While samgak pointed out that this might be due to the interpretation of 0xff, some of my other experiments indicate that even a file containing:
0x01 0x01 0x01
results in the same phenomenon.
Update 2
Here's the relevant code that writes a Code into the files:
#define CODE_FILE_NUM 3

void writeCode( FILE* out[], Code code ) {
    for (int i = 0; i < CODE_FILE_NUM; ++i) {
        fwrite(&code, sizeof(char), 1, out[i]);
        code >>= 8;
    }
}
Code is an unsigned int, which has 4 bytes. Function writeCode will only consider the lower 3 bytes and write each byte into 3 separate files.
I have found the reason.
It's because I forgot to close the output files.
I tried to dump unclosed binary files (that is, to open and read data from files that hadn't been closed), which resulted in unpredictable behavior.
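In other words, the stdio-buffered bytes never reached the files. A minimal sketch of the fix (mine, reusing out[] and CODE_FILE_NUM from the question):
/* flush (or close) the output files before reading them back,
   so the buffered bytes actually reach the files on disk */
for (int i = 0; i < CODE_FILE_NUM; ++i)
    fclose(out[i]);   /* or fflush(out[i]) to keep them open */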

how to disable fast frames in ath5k wireless driver

By default, fast frames are enabled in ath5k. (http://wireless.kernel.org/en/users/Drivers/ath5k)
I have found the macro which disables it
#define AR5K_EEPROM_FF_DIS(_v) (((_v) >> 2) & 0x1)
The question is what do I do with it?
Do I replace the above line with
#define AR5K_EEPROM_FF_DIS(_v) 1
?
Do I compile it passing some parameter?
The bit-shift expression confuses me. Is _v a variable?
The question is more general as to how to deal with such macros in drivers. I've seen them in other codes too and always got confused.
OK, I'll try to explain with a simplified example.
#include <stdio.h>

/* Just for printing in binary mode */
char *chartobin(unsigned char c)
{
    static char a[9];
    int i;
    for (i = 0; i < 8; i++)
        a[7 - i] = (c & (1 << i)) == (1 << i) ? '1' : '0';
    a[8] = '\0';
    return a;
}

int main(void)
{
    unsigned char u = 0xf;

    printf("%s\n", chartobin(u));
    u >>= 2; // Shift bits 2 positions (to the right)
    printf("%s\n", chartobin(u));
    printf("%s\n", chartobin(u & 0x1)); // Check if the last bit is on
    return 0;
}
Output:
00001111
00000011
00000001
Do I replace the above line with #define AR5K_EEPROM_FF_DIS(_v) 1?
Nooooo!!
If you initialize u with 0xb instead of 0xf you get:
00001011
00000010
00000000
As you can see, (((_v) >> 2) & 0x1) is not always 1.
Fast frames are not enabled or used on ath5k. It's a feature allowing the card to send multiple frames at once (think of it as an early version of 11n frame aggregation) that's implemented on MadWiFi and their proprietary drivers and can only be used with an Access Point that also supports it. What you see there is a flag stored at the device's EEPROM that instructs the driver if fast frames can be used or not, that macro you refer to just checks if that flag is set. You can modify the header file to always return 1 but that wouldn't make any difference, the driver never uses that information.
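To make the macro's role concrete, here is a hypothetical usage sketch (my own; read_eeprom_header() is an invented placeholder, not an ath5k function). _v is just the macro parameter: the EEPROM word the flag lives in.
#include <stdint.h>

uint16_t read_eeprom_header(void);  /* invented placeholder */

void check_ff(void)
{
    uint16_t header = read_eeprom_header();
    if (AR5K_EEPROM_FF_DIS(header)) {
        /* bit 2 of the EEPROM word is set: fast frames disabled */
    }
}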
