how to convert a char to binary? - c

Is there a simple way to convert a character to its binary representation?
I'm trying to send a message to another process, a single bit at a time.
So if the message is "Hello", I need to first turn 'H' into binary, and then send the bits in order.
Storing them in an array would be preferred.
Thanks for any feedback; either pseudocode or actual code would be most helpful.
I think I should mention this is for a school assignment to learn about signals... it's just an interesting way to learn about them. SIGUSR1 is used as 0, SIGUSR2 is used as 1, and the point is to send a message from one process to another, pretending the environment is locking down other methods of communication.

You only need to loop over each bit, doing a shift and a logical AND to extract it:
for (int i = 0; i < 8; ++i) {
    send((mychar >> i) & 1);   // send() stands for whatever transmits one bit
}
For example:
unsigned char mychar = 0xA5; // 10100101

(mychar >> 0)   10100101
& 1           & 00000001
              ----------
                00000001   (bit = 1)

(mychar >> 1)   01010010
& 1           & 00000001
              ----------
                00000000   (bit = 0)

and so on...

What about:
#include <limits.h> /* for CHAR_BIT */

int output[CHAR_BIT];
char c = 'H'; /* example: the character to transmit */
int i;

for (i = 0; i < CHAR_BIT; ++i) {
    output[i] = (c >> i) & 1;
}
That writes the bits from c into output, least significant bit first.
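Putting it together with the signal scheme from the question, here is a minimal untested sketch (my own addition, not part of either answer above; send_message and the usleep pacing are assumptions) of sending each bit as SIGUSR1 = 0 / SIGUSR2 = 1:
#include <limits.h>
#include <signal.h>
#include <unistd.h>

/* Hypothetical sketch: send each byte of msg to process pid,
   least significant bit first, as SIGUSR1 (0) / SIGUSR2 (1). */
void send_message(pid_t pid, const char *msg)
{
    for (const char *p = msg; *p != '\0'; ++p) {
        for (int i = 0; i < CHAR_BIT; ++i) {
            int bit = (*p >> i) & 1;
            kill(pid, bit ? SIGUSR2 : SIGUSR1);
            usleep(100); /* crude pacing; waiting for a receiver ACK is more robust */
        }
    }
}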

Related

How to reverse strings that have been obfuscated using floats and double?

I'm working on a crackme, and having a bit of trouble making sense of the flag I'm supposed to retrieve.
I have disassembled the binary using radare2 and Ghidra; Ghidra gives me back the following pseudo-code:
undefined8 main(void)
{
  long in_FS_OFFSET;
  double dVar1;
  double dVar2;
  int local_38;
  int local_34;
  int local_30;
  int iStack44;
  int local_28;
  undefined2 uStack36;
  ushort uStack34;
  char local_20;
  undefined2 uStack31;
  uint uStack29;
  byte bStack25;
  long local_10;

  local_10 = *(long *)(in_FS_OFFSET + 0x28);
  __printf_chk(1,"Insert flag: ");
  __isoc99_scanf(&DAT_00102012,&local_38);
  uStack34 = uStack34 << 8 | uStack34 >> 8;
  uStack29 = uStack29 & 0xffffff00 | (uint)bStack25;
  bStack25 = (undefined)uStack29;
  if ((((local_38 == 0x41524146) && (local_34 == 0x7b594144)) && (local_30 == 0x62753064)) &&
      (((iStack44 == 0x405f336c && (local_20 == '_')) &&
       ((local_28 == 0x665f646e && (CONCAT22(uStack34,uStack36) == 0x40746f31)))))) {
    dVar1 = (double)CONCAT26(uStack34,CONCAT24(uStack36,0x665f646e));
    dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
    __printf_chk(0x405f336c62753064,1,&DAT_00102017);
    __printf_chk(dVar1,1,"y: %.30lf\n");
    __printf_chk(dVar2,1,"z: %.30lf\n");
    dVar1 = dVar1 * 124.8034902710365;
    dVar2 = (dVar1 * dVar1) / dVar2;
    round_double(dVar2,0x1e);
    __printf_chk(1,"%.30lf\n");
    dVar1 = (double)round_double(dVar2,0x1e);
    if (1.192092895507812e-07 <= (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff)) {
      puts("Try Again");
    }
    else {
      puts("Well done!");
    }
  }
  if (local_10 != *(long *)(in_FS_OFFSET + 0x28)) {
    /* WARNING: Subroutine does not return */
    __stack_chk_fail();
  }
  return 0;
}
It is easy to see that there's a part of the flag in plain sight, but the other part is a bit more interesting:
if (1.192092895507812e-07 <= (double)((ulong)(dVar1 - 4088116.817143337) & 0x7fffffffffffffff))
From what I understand, I have to generate the missing part of the flag depending on this condition. The problem is that I have absolutely no idea how to do this.
I can assume this missing part is 8 bytes in size, according to this line:
dVar2 = (double)CONCAT17((undefined)uStack29,CONCAT43(uStack29,CONCAT21(uStack31,0x5f)));
Considering flags are usually ASCII with some special characters, let's say each byte takes values from 0x21 to 0x7E; that's 94 possibilities per byte, i.e. 94^8 (about 6e15) combinations, which will clearly take too much time to compute.
Do you guys have an idea on how I should proceed to solve this ?
Edit : Here is the link to the binary : https://filebin.net/dpfr1nocyry3sijk
You can tweak the Ghidra output by editing the variable types. Based on the scanf format string %32s, your local_38 should be char [32].
Before the first if there are some byte swaps.
And the first if statement gives you a long constraint on the flag.
At this point you can confirm that part of the flag is FARADAY{d0ubl3_#nd_f1o#t; then comes the main part of this challenge.
The program prints x, y, z based on the flag, but you'll quickly find that x and y are fixed by the if constraint, so you only need to solve for z to get the flag. At first glance you'd think you need to brute-force every double value limited to printable ASCII.
But there is a limitation in the if statement saying that byte 0 of this double must be _, and there is a math constraint: simple math tells you |dVar2 - 4088116.817143337| <= 1.192092895507812e-07, so dVar2 must be very close to 4088116.817143337.
Also, bytes 3 and 7 of this double get swapped.
From the reversed result, dVar2 = y*y*x*x/z. Solving this equation, z must be near 407.2786840401004, which packed as a little-endian double is _be}uty#. Based on the internal structure of a double, the most significant byte affects the exponent, so you can be sure the last byte is #; byte 0 and byte 3 are already fixed by the constraint and the common flag format with a {} pair.
So finally, you only need to brute-force 5 bytes of printable ASCII to solve this challenge.
import string, struct
from itertools import product

possible = string.ascii_lowercase + string.punctuation + string.digits

for nxt in product(possible, repeat=5):
    n = ''.join(nxt).encode()
    # byte 0 is '_', byte 3 is '}', byte 7 is '#'; the other 5 are unknown
    s = b'_' + n[:2] + b'}' + n[2:] + b'#'
    rtn = struct.unpack("<d", s)[0]    # interpret the 8 bytes as a little-endian double (z)
    rtn = 1665002837.488342 / rtn      # (y * 124.8034902710365)**2 / z
    if abs(rtn - 4088116.817143337) <= 0.0000001192092895507812:
        print(s)
And bingo, the flag is FARADAY{d0ubl3_#nd_f1o#t_be#uty}.
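As a side note, here is a quick sanity check in C (my own addition, not part of the original answer) that the recovered z decodes to the swapped byte pattern, assuming the printed decimal constant round-trips to the same double:
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* z ~= 1665002837.488342 / 4088116.817143337 */
    double z = 407.2786840401004;
    unsigned char bytes[sizeof z];

    memcpy(bytes, &z, sizeof z);          /* little-endian byte order on x86 */
    for (size_t i = 0; i < sizeof z; ++i) /* should print: _be}uty# */
        putchar(bytes[i]);
    putchar('\n');
    return 0;
}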

How to check the number of set bits in an 8-bit unsigned char?

So I have to find the set bits (the ones set to 1) of an unsigned char variable in C.
A similar question is How to count the number of set bits in a 32-bit integer?, but it uses an algorithm that's not easily adaptable to 8-bit unsigned chars (or it's not apparent how).
The algorithm suggested in How to count the number of set bits in a 32-bit integer? is trivially adapted to 8 bits:
#include <stdint.h>

int NumberOfSetBits( uint8_t b )
{
    b = b - ((b >> 1) & 0x55);
    b = (b & 0x33) + ((b >> 2) & 0x33);
    return (((b + (b >> 4)) & 0x0F) * 0x01);
}
It is simply a case of shortening the constants to the least significant eight bits and removing the final 24-bit right shift. Equally, it could be adapted for 16 bits using an 8-bit shift. Note that in the 8-bit case, the mechanical adaptation of the 32-bit algorithm results in a redundant * 0x01, which can be omitted.
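For illustration, the 16-bit adaptation might look like this (my sketch, applying the same mechanical widening):
#include <stdint.h>

int NumberOfSetBits16( uint16_t b )
{
    b = b - ((b >> 1) & 0x5555);
    b = (b & 0x3333) + ((b >> 2) & 0x3333);
    b = (b + (b >> 4)) & 0x0F0F;
    return (b * 0x0101) >> 8; /* the 8-bit shift replaces the 24-bit one */
}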
The fastest approach for an 8-bit variable is using a lookup table.
Build an array of 256 values, one per 8-bit combination. Each value should contain the count of bits in its corresponding index:
int bit_count[] = {
    // 00 01 02 03 04 05 06 07 08 09 0a ... FE FF
       0,  1,  1,  2,  1,  2,  2,  3,  1,  2,  2, ..., 7, 8
};
Getting a count of a combination is the same as looking up a value from the bit_count array. The advantage of this approach is that it is very fast.
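Usage is then a single array lookup (the example values are mine):
unsigned char value = 0xA5;   /* 10100101 has four bits set */
int count = bit_count[value]; /* count == 4 */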
You can generate the array using a simple program that counts bits one by one in a slow way:
for (int i = 0 ; i != 256 ; i++) {
    int count = 0;
    for (int p = 0 ; p != 8 ; p++) {
        if (i & (1 << p)) {
            count++;
        }
    }
    printf("%d, ", count);
}
If you would like to trade some CPU cycles for memory, you can use a 16-byte lookup table for two 4-bit lookups:
static const char split_lookup[] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4
};

int bit_count(unsigned char n) {
    return split_lookup[n&0xF] + split_lookup[n>>4];
}
I think you are looking for the Hamming weight algorithm for 8 bits?
If so, here is the code:
unsigned char in = 22; // this is your input number
unsigned char out = 0;

in = in - ((in >> 1) & 0x55);
in = (in & 0x33) + ((in >> 2) & 0x33);
out = ((in + (in >> 4)) & 0x0F) * 0x01;

Counting the number of bits different from 0 is also known as the Hamming weight. In this case, you are counting the number of 1's.
Dasblinkenlight has provided you with a table-driven implementation, and Olaf with a software-based solution. I think you have two other potential solutions: the first is to use a compiler extension, the second is to use an architecture-specific instruction with inline assembly from C.
For the first alternative, see GCC's __builtin_popcount() (thanks to Artless Noise).
For the second alternative, you did not specify the embedded processor, but I'm going to offer this in case it's ARM-based.
Some ARM processors have the VCNT instruction, which performs the count for you, so you could do it from C with inline assembly. Here is a sketch (my reconstruction; the operand constraints in the original snippet were not valid GCC syntax), assuming a 32-bit ARM target with NEON:
static inline unsigned int hamming_weight(unsigned char value) {
    unsigned int result;
    __asm__ (
        "vdup.8  d0, %1\n\t"    /* replicate the byte into each NEON lane */
        "vcnt.8  d0, d0\n\t"    /* per-byte popcount */
        "vmov.u8 %0, d0[0]"     /* move lane 0 back to a core register */
        : "=r" (result)
        : "r" (value)
        : "d0"
    );
    return result;
}
Also see Fastest way to count number of 1s in a register, ARM assembly.
For completeness, here is Kernighan's bit counting algorithm:
int count_bits(int n) {
    int count = 0;
    while (n != 0) {
        n &= (n - 1);  /* clears the lowest set bit of n */
        count++;
    }
    return count;
}
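A quick trace (my addition) for n = 0xA5 shows why the body runs exactly once per set bit: n &= (n-1) clears the lowest set bit on every pass.
/* n          n - 1      n & (n-1)   count
   10100101   10100100   10100100    1
   10100100   10100011   10100000    2
   10100000   10011111   10000000    3
   10000000   01111111   00000000    4   (loop exits) */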
Also see Please explain the logic behind Kernighan's bit counting algorithm.
I made an optimized version. On a 32-bit processor, utilizing multiplication, bit shifting and masking can produce smaller code for the same task, especially when the input domain is small (an 8-bit unsigned integer).
The following two code snippets are equivalent:
#include <stdint.h>

unsigned int bit_count_uint8(uint8_t x)
{
    uint32_t n;
    n = (uint32_t)(x * 0x08040201UL);
    n = (uint32_t)(((n >> 3) & 0x11111111UL) * 0x11111111UL);
    /* The "& 0x0F" will be optimized out but I add it for clarity. */
    return (n >> 28) & 0x0F;
}

/*
unsigned int bit_count_uint8_traditional(uint8_t x)
{
    x = x - ((x >> 1) & 0x55);
    x = (x & 0x33) + ((x >> 2) & 0x33);
    x = ((x + (x >> 4)) & 0x0F);
    return x;
}
*/
This produces the smallest binary code for IA-32, x86-64 and AArch32 (without the NEON instruction set) as far as I can find.
For x86-64, this doesn't use the fewest instructions, but the bit shifts and the downcasting avoid 64-bit instructions and therefore save a few bytes in the compiled binary.
Interestingly, on IA-32 and x86-64, a variant of the above algorithm using a modulo ((((uint32_t)(x * 0x08040201U) >> 3) & 0x11111111U) % 0x0F) actually generates larger code, due to a requirement to move the remainder register for the return value (mov eax,edx) after the div instruction. (I tested all of these in Compiler Explorer.)
Explanation
I denote the eight bits of the byte x, from MSB to LSB, as a, b, c, d, e, f, g and h.
abcdefgh
* 00001000 00000100 00000010 00000001 (make 4 copies of x
--------------------------------------- with appropriate
abc defgh0ab cdefgh0a bcdefgh0 abcdefgh bit spacing)
>> 3
---------------------------------------
000defgh 0abcdefg h0abcdef gh0abcde
& 00010001 00010001 00010001 00010001
---------------------------------------
000d000h 000c000g 000b000f 000a000e
* 00010001 00010001 00010001 00010001
---------------------------------------
000d000h 000c000g 000b000f 000a000e
... 000h000c 000g000b 000f000a 000e
... 000c000g 000b000f 000a000e
... 000g000b 000f000a 000e
... 000b000f 000a000e
... 000f000a 000e
... 000a000e
... 000e
^^^^ (Bits 31-28 will contain the sum of the bits
a, b, c, d, e, f, g and h. Extract these
bits and we are done.)
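To build confidence in the trick, a small cross-check harness (my addition; it assumes bit_count_uint8() from above is in scope) can compare the result against a naive loop for every possible input:
#include <assert.h>
#include <stdint.h>

static unsigned int naive_count(uint8_t x)
{
    unsigned int c = 0;
    for (int i = 0; i < 8; ++i)
        c += (x >> i) & 1;
    return c;
}

void check_all_bytes(void)
{
    for (int v = 0; v < 256; ++v)
        assert(bit_count_uint8((uint8_t)v) == naive_count((uint8_t)v));
}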
Maybe not the fastest, but straightforward:
int count = 0;

for (int i = 0; i < 8; ++i) {
    unsigned char c = 1 << i;
    if (yourVar & c) {
        // bit n°i is set
        // first bit is bit n°0
        count++;
    }
}
For 8/16-bit MCUs, a loop will very likely be faster than the parallel-addition approach, as these MCUs cannot shift by more than one bit per instruction, so:
size_t popcount(uint8_t val)
{
    size_t cnt = 0;
    do {
        cnt += val & 1U;   // or: if ( val & 1 ) cnt++;
    } while ( val >>= 1 );
    return cnt;
}
For the incrementation of cnt, you might profile. If it is still too slow, an assembler implementation might be worth a try using the carry flag (if available). While I am generally against assembler optimizations, such algorithms are one of the few good exceptions (and still only after the C version proves too slow).
If you can spare the Flash, a lookup table as proposed by @dasblinkenlight is likely the fastest approach.
Just a hint: for some architectures (notably ARM and x86/x64), gcc has a builtin: __builtin_popcount(), which you also might want to try if available (although it takes int at least). This might use a single CPU instruction - you cannot get faster and more compact.
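A minimal usage sketch of that builtin (the wrapper name is mine):
/* The unsigned char argument is promoted to int, which matches what
   GCC's __builtin_popcount() expects. */
unsigned int popcount8(unsigned char v)
{
    return (unsigned int)__builtin_popcount(v);
}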
Allow me to post a second answer. This one is the smallest possible for ARM processors with Advanced SIMD extension (NEON). It's even smaller than __builtin_popcount() (since __builtin_popcount() is optimized for unsigned int input, not uint8_t).
#ifdef __ARM_NEON
/* ARM C Language Extensions (ACLE) recommends us to check __ARM_NEON before
   including <arm_neon.h> */
#include <arm_neon.h>

unsigned int bit_count_uint8(uint8_t x)
{
    /* Set all lanes at once so that the compiler won't emit an instruction to
       zero-initialize the other lanes. */
    uint8x8_t v = vdup_n_u8(x);
    /* Count the number of set bits for each lane (8-bit) in the vector. */
    v = vcnt_u8(v);
    /* Get lane 0 and discard the other lanes. */
    return vget_lane_u8(v, 0);
}
#endif

how to disable fast frames in ath5k wireless driver

By default, fast frames are enabled in ath5k. (http://wireless.kernel.org/en/users/Drivers/ath5k)
I have found the macro which disables it:
#define AR5K_EEPROM_FF_DIS(_v) (((_v) >> 2) & 0x1)
The question is what do I do with it?
Do I replace the above line with
#define AR5K_EEPROM_FF_DIS(_v) 1
?
Do I compile it passing some parameter?
The bit-shift expression confuses me. Is _v a variable?
The question is more general as to how to deal with such macros in drivers. I've seen them in other codes too and always got confused.
OK, I'll try to explain with a simplified example:
#include <stdio.h>
/* Just for printing in binary mode */
char *chartobin(unsigned char c)
{
    static char a[9];
    int i;

    for (i = 0; i < 8; i++)
        a[7 - i] = (c & (1 << i)) == (1 << i) ? '1' : '0';
    a[8] = '\0';
    return a;
}

int main(void)
{
    unsigned char u = 0xf;

    printf("%s\n", chartobin(u));
    u >>= 2; /* shift bits 2 positions (to the right) */
    printf("%s\n", chartobin(u));
    printf("%s\n", chartobin(u & 0x1)); /* check if the last bit is on */
    return 0;
}
Output:
00001111
00000011
00000001
Do I replace the above line with #define AR5K_EEPROM_FF_DIS(_v) 1?
Nooooo!!
If you initialize u with 0xb instead of 0xf you get:
00001011
00000010
00000000
As you can see, (((_v) >> 2) & 0x1) is not always equal to 1.
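To connect this back to the driver: _v is the macro parameter, so the caller passes in the EEPROM word and the macro extracts bit 2 as a 0/1 flag. A hypothetical illustration (the example values are made up):
#define AR5K_EEPROM_FF_DIS(_v) (((_v) >> 2) & 0x1)

/* AR5K_EEPROM_FF_DIS(0x0004) == 1   bit 2 set   -> fast-frames-disable flag set   */
/* AR5K_EEPROM_FF_DIS(0x0003) == 0   bit 2 clear -> fast-frames-disable flag clear */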
Fast frames are not enabled or used on ath5k. It's a feature allowing the card to send multiple frames at once (think of it as an early version of 11n frame aggregation) that's implemented on MadWiFi and their proprietary drivers and can only be used with an Access Point that also supports it. What you see there is a flag stored at the device's EEPROM that instructs the driver if fast frames can be used or not, that macro you refer to just checks if that flag is set. You can modify the header file to always return 1 but that wouldn't make any difference, the driver never uses that information.

Arduino: Printing ASCII characters to LEDs / pointer weirdness

I've made a 24 x 15 LED matrix using TLC5940s and shift registers, running on an Arduino Due.
First things first, I want to print an ASCII char, e.g. 'A', directly to the display.
So I stored a font array in PROGMEM, using a link I found:
PROGMEM prog_uchar font[][5] = {
    {0x00,0x00,0x00,0x00,0x00}, //   0x20 32
    {0x00,0x00,0x6f,0x00,0x00}, // ! 0x21 33
    ...
    {0x08,0x14,0x22,0x41,0x00}, // < 0x3c 60
    {0x14,0x14,0x14,0x14,0x14}, // = 0x3d 61
    {0x00,0x41,0x22,0x14,0x08}, // > 0x3e 62
    {0x02,0x01,0x51,0x09,0x06}, // ? 0x3f 63
etc.
Now, the weird part, is that everything almost works, but is shifted slightly.
For example, '>' gets morphed to this:
00000000 --> 00001000
01000001 --> 00000000
00100010 --> 01000001
00010100 --> 00100010
00001000 --> 00010100
So I think it is a pointer boundary issue. Maybe I've just been looking at it for too long, but I don't know!
Here is the relevant code, where I've hardcoded the values for '>' (the glyph stored in testgt below). I did a Serial.println, and it is showing the correct value for b. x and y are 0 and 0, to place the character in the corner. The commented-out assignment of b is what I usually do instead of hardcoding. charWidth = 5 and charHeight = 7. setRainbowSinkValue(int led, int brightness) has worked forever, so it's not that.
unsigned char testgt[] = {0x00,0x41,0x22,0x14,0x08};

for (int iterations = 0; iterations < time; iterations++)
{
    unsigned char indexOfWidth = 0;
    unsigned char indexOfHeight = 0;
    for (int col = x; col < x + charWidth; col++)
    {
        openCol(col);
        Tlc.clear();
        unsigned char b = testgt[indexOfWidth]; //pgm_read_byte_near(font[ch] + indexOfWidth);
        //Serial.println(b);
        indexOfHeight = 0;
        for (int ledNum = y; ledNum < y + charHeight; ledNum++)
        {
            if (b & (1 << indexOfHeight))
            {
                setRainbowSinkValue(ledNum, 100);
            }
            indexOfHeight++;
        }
        indexOfWidth++;
        Tlc.update();
    }
}
Can you see anything wrong? The most likely issue is the bit operations, but they make sense to me...
b & 1...
b & 2...
b & 4...
b & 8...
b & 16...
b & 32...
b & 64...
b & 128...
Basically the value you extract from testgt[] goes into the wrong column on the display. The presented code looks fine; it's something in what you haven't shown. Perhaps your circuitry is indexing the input differently from what you're expecting.
OK, I worked it out... There seems to be a slight timing difference between the speed of the 74HC595 and the TLC5940s, so what's happening is that residual flicker is carried over into the next column because the TLC isn't cleared by the time the 74HC595 opens the new column. So I switched Tlc.clear() and openCol(col) and added a delay(5) in between, and that helps. It doesn't fix it entirely, but delay(15) hurts the eyes, and it's an acceptable compromise.
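For reference, the start of the column loop after the fix would look roughly like this (a sketch of the change described above, not the author's exact code):
Tlc.clear();  // blank the TLC5940 data for the previous column first
delay(5);     // give the TLC time to settle before switching columns
openCol(col); // only then select the next column on the 74HC595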

ASCII compressor works for short test file, not on long

The current project in Systems Programming is to come up with an ASCII compressor that removes the top (zero) bit of each byte and writes the packed contents to a file.
In order to facilitate decompression, the original file size is written to the file first, then the compressed char bytes. There are two files to run tests on: one that is 63 bytes long, and the other 5344213 bytes. My code below works as expected for the first test file, as it writes 56 bytes of compressed text plus a 4-byte file header.
However, when I try it on the long test file, the compressed version is 3 bytes shorter than the original, when it should be roughly 749 KiB, or about 14%, smaller. I've worked out the binary bit-shift values for the first two write loops of the long test file, and they match what is being recorded in my test printout.
while ( (characters = read(openReadFile, unpacked, BUFFER)) > 0 ){
    unsigned char packed[7]; // compression storage
    int i, j, k, writeCount, endLength, endLoop;

    // loop through the buffer array
    for (i = 0; i < characters-1; i++){
        j = i % 7;
        // fill up the compressed array
        packed[j] = packer(unpacked[i], unpacked[i+1], j);
        if (j == 6){
            writeCalls++; // track how many calls made
            writeCount = write(openWriteFile, packed, sizeof (packed));
            int packedSize = writeCount;
            for (k = 0; k < 7 && writeCalls < 10; k++)
                printf("%X ", (int)packed[k]);
            totalWrittenBytes += packedSize;
            printf(" %d\n", packedSize);
            memset(&packed[0], 0, sizeof(packed)); // clear array
            if (writeCount < 0)
                printOpenErrors(writeCount);
        }
        // handling for the end of the buffer
        endLength = characters - i;
        if (endLength < 7){
            for (endLoop = 0; endLoop < endLength-1; endLoop++){
                packed[endLoop] = packer(unpacked[endLoop], unpacked[endLoop+1], endLoop);
            }
            packed[endLength] = calcEndBits(endLength, unpacked[endLength]);
        }
    } // end buffer array loop
} // end file read loop
The packer function:
//calculates the compressed byte value for the array
char packer(char i, char j, int k){
    char packStyle;

    switch(k){
        //shift bits based on mod value with 8
        case 0:
            packStyle = ((i & 0x7F) << 1) | ((j & 0x40) >> 6);
            break;
        case 1:
            packStyle = ((i & 0x3F) << 2) | ((j & 0x60) >> 5);
            break;
        case 2:
            packStyle = ((i & 0x1F) << 3) | ((j & 0x70) >> 4);
            break;
        case 3:
            packStyle = ((i & 0x0F) << 4) | ((j & 0x78) >> 3);
            break;
        case 4:
            packStyle = ((i & 0x07) << 5) | ((j & 0x7C) >> 2);
            break;
        case 5:
            packStyle = ((i & 0x03) << 6) | ((j & 0x7E) >> 1);
            break;
        case 6:
            packStyle = ( (i & 0x01 << 7) | (j & 0x7F));
            break;
    }
    return packStyle;
}
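As a worked example (mine, not from the question), case 0 packs the seven payload bits of the first byte together with the top payload bit of the second:
#include <stdio.h>

int main(void)
{
    unsigned char i = 'H'; /* 0x48 = 01001000 */
    unsigned char j = 'e'; /* 0x65 = 01100101 */
    /* case 0: seven low bits of i, then bit 6 of j */
    unsigned char packed = ((i & 0x7F) << 1) | ((j & 0x40) >> 6);

    printf("0x%02X\n", packed); /* prints 0x91 = 10010001 */
    return 0;
}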
I've verified that 7 bytes are written out every time the packed buffer is flushed, and that 763458 write calls are made for the long file, which matches the 5344206 bytes written.
I'm getting the same hex codes in the printout that I worked out in binary beforehand, and I can see the top bit of every byte removed. So why aren't the bit shifts being reflected in the results?
OK, since this is homework I'll just give you a few hints without giving out a solution.
First, are you sure that the 56 bytes you get for the first file are the right bytes? Sure, the count looks good, but you got lucky on the count (proof is the second test file). I can immediately see at least two key mistakes in the code.
To make sure you have the right output, the byte count is not enough; you need to dig deeper. How about checking the bytes themselves, one by one? 63 characters is not that much to go through, eh? There are many ways you can do this. You could use od (a pretty good Linux/Unix tool to look at the binary contents of files; if you're on Windows, use a hex editor). Or you could print out debug information from within your program.
Good luck.
Why do you expect the output to be 14% shorter than the input? How could it be, when you store a byte into packed as many times as there are input bytes, except for the last group? The size of the output will always be within 7 bytes of the size of the input.
