Decoding a base64 encoded binary string in C

I am trying to get my decoder code to work. I am using the example base64-encoded string from Wikipedia and trying to reproduce the text they encoded.
#include <stdio.h>

//Convert raw binary character to the cb64 index
unsigned char get_cb64(unsigned char c){
    if(c>='A' && c<='Z'){return c-'A';}
    if(c>='a' && c<='z'){return c-'G';}
    if(c>='0' && c<='9'){return c+4;}
    if(c=='+'){return '>';}
    if(c=='/'){return '?';}
    else{return 0;}
}
void main(int argc, char** argv)
{
unsigned char* str = "TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=";
    //convert each binary character to its cb64 index
    int size = 360;
    int num_bytes = 8;
    unsigned char str_cb64[size + 1];
    int cb64_idx;
    int i;
    for(i=0; i < size; i++){
        str_cb64[i] = get_cb64(str[i]);
    }
    str_cb64[size] = 0;
    //convert blocks of 4 6 bit chars to 3 8 bit chars
    int end_size = size*6/8;
    unsigned char ascii_out[end_size];
    int out_idx = 0;
    int in_idx = 0;
    while(in_idx < end_size/4){
        ascii_out[out_idx] = str_cb64[in_idx+0] << 2 | str_cb64[in_idx+1] >> 4;
        ascii_out[out_idx+1] = str_cb64[in_idx+1] << 4 | str_cb64[in_idx+2] >> 2;
        ascii_out[out_idx+2] = str_cb64[in_idx+2] << 6 | str_cb64[in_idx+3];
        out_idx += 3;
        in_idx += 4;
    }
    for(i=0; i < end_size; i++){ printf("%d\n", ascii_out[i]); }
}
To inspect the result, the code prints the ASCII value of each decoded character, which SHOULD fall between 48 and 122, but I get values across the whole 0–255 range. I tested the conversion from the raw base64 characters to the cb64 indices, and that seems to work fine. The problem is in my shifting code. Any idea why it isn't working? I double-checked the shifts and they look like they are coded correctly.
Thanks!

Your loop condition should be either while(in_idx < size) or while(out_idx < end_size). Right now you are comparing an input index against an output size, and you are also dividing that output size by 4 even though in_idx grows by one for each input byte consumed (4 per pass), not by one per iteration. That makes the loop exit well before all of the data has been processed. Since ascii_out was never initialized, this could be the only problem if the beginning of the output is right; the end will just contain whatever random data happened to be in that memory.
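For reference, a minimal sketch of the corrected loop, keeping your variable names and leaving the rest of the program unchanged (the trailing '=' padding is still decoded as 0, as in the original):
int out_idx = 0;
int in_idx = 0;
/* one pass consumes four 6-bit values and produces three bytes,
   so compare the input index against the input size */
while(in_idx < size){
    ascii_out[out_idx]   = str_cb64[in_idx]   << 2 | str_cb64[in_idx+1] >> 4;
    ascii_out[out_idx+1] = str_cb64[in_idx+1] << 4 | str_cb64[in_idx+2] >> 2;
    ascii_out[out_idx+2] = str_cb64[in_idx+2] << 6 | str_cb64[in_idx+3];
    out_idx += 3;
    in_idx += 4;
}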

Related

Convert int to ASCII characters in C

How can I convert an integer value to ASCII characters in C?
I want to assign the characters to an array of chars.
char buff[10];
Let's say we have:
int = 93 (HEX: 5D) -> result should be buff = {']'}
int = 13398 (HEX: 3456) -> result should be buff = {'4', 'V'}
Similar to what is done here.
I don't need to care about non-printable characters; there will always be printable characters.
Just use bit-shifting to get the individual bytes.
Assuming an architecture on which the size of int is 4:
int someInt = ...
uint8_t first = (someInt >> 24);
uint8_t second = (someInt >> 16);
uint8_t third = (someInt >> 8);
uint8_t fourth = someInt;
Now you can just put the resulting bytes into your array. Check whether first, second and third are 0 and skip them if they are, so you don't copy leading zero bytes. Make sure to end your array with a null terminator, as required by C strings.
This answer assumes big-endian ordering, since that's what you indicated in your example. If you want little-endian, just reverse the order of the bytes when you put them in the array.
Note that this will turn 5DC into 05 and DC. If you want 5D instead, you should check to see whether the first digit in the original int is 0. You can do this using the & operator, testing the int against 0xf0000000, 0x00f00000, etc. If you find the first digit to be 0, shift the int to the right by 4 bits before extracting the bytes from it.
So, something like this:
#include <stdint.h>   /* uint8_t */
#include <stddef.h>   /* size_t */

void ExtractBytes(int anInt, uint8_t *buf, size_t bufSize) {
    // passing an empty buffer to this function would be stupid,
    // but hey, doesn't hurt to be idiot-proof
    if (bufSize == 0) { return; }

    // Get our sizes
    const int intSize = sizeof(anInt);
    const int digitCount = intSize * 2;

    // find first non-zero digit
    int firstNonZero = -1;
    for (int i = 0; i < digitCount; i++) {
        if ((anInt & (0xf << ((digitCount - 1 - i) * 4))) != 0) {
            firstNonZero = i;
            break;
        }
    }
    if (firstNonZero < 0) {
        // empty string; just bail out.
        buf[0] = 0;
        return;
    }

    // check whether first non-zero digit is even or odd;
    // shift if it's odd
    int intToUse = (firstNonZero % 2 != 0) ? (anInt >> 4) : anInt;

    // now, just extract our bytes to the buffer
    int bufPtr = 0;
    for (int i = intSize - 1; i >= 0; i--) {
        // shift over the appropriate amount; assigning to uint8_t keeps only the low byte
        uint8_t byte = (intToUse >> (i * 8));
        // If the byte is 0, we can just skip it
        if (byte == 0) {
            continue;
        }
        // always check to make sure we don't overflow our buffer.
        // if we're on the last byte, make it a null terminator and bail.
        if (bufPtr == bufSize - 1) {
            buf[bufPtr] = 0;
            return;
        }
        // Copy our byte into the buffer
        buf[bufPtr++] = byte;
    }
    // Now, just terminate our string.
    // We can be sure that bufPtr will be less than bufSize,
    // since we checked for that in the loop. So:
    buf[bufPtr] = 0;
    // Aaaaaand we're done
}
Now let's take it for a spin:
uint8_t buf[10];
ExtractBytes(0x41424344, buf, 10);
printf("%s\n", buf);
ExtractBytes(0x4142434, buf, 10);
printf("%s\n", buf);
and the output:
ABCD
ABC
convert integer value to ASCII characters in C language?...
Referring to an ASCII table, the value of ']' in C will always be interpreted as 0x5D, or decimal value 93, while the value of "]" in C will always be interpreted as a NULL-terminated char array, i.e., a string representation comprising the values:
|93|\0|
(As illustrated in This Answer, similar interpretations are valid for all ASCII characters.)
To convert any of the integer (char) values to something that looks like "]", you can use a string function to convert the char value to a string representation. For example, all of these variations will perform that conversion:
char strChar[2] = {0};
sprintf(strChar, "%c", ']');
sprintf(strChar, "%c", 0x5D);
sprintf(strChar, "%c", 93);
and each produces the identical C string: "]".
I want to assign characters to array of chars...
Here is an example of how to create an array of char, terminated with a NULL char, such as "ABC...Z":
int i;
char strArray[27] = {0};
for(i = 0; i < 26; i++)
{
    strArray[i] = i + 'A';
}
strArray[i] = 0;
printf("Null terminated array of char: %s\n", strArray);
unsigned u = ...;
if (0x10 > u)
    exit(EXIT_FAILURE);
while (0x10000 < u) u /= 2;
while (0x1000 > u) u *= 2;
char c[2] = {u / 0x100, u % 0x100};

converting base 4 code to letters

I'm working on an assembler project and I need to translate binary machine code into a "weird" base-4 code. For example, if I get binary code like "0000-10-01-00", I should translate it to "aacba":
00=a
01=b
10=c
11=d
I have managed to translate the code to base 4, but I don't know how to continue from there, or whether this is the right way to do it at all.
Adding my code below:
void intToBase4 (unsigned int *num)
{
    int d[7];
    int j, i = 0;
    double x = 0;
    while((*num) > 0)
    {
        d[i] = (*num) % 4;
        i++;
        (*num) = (*num) / 4;
    }
    for(x = 0, j = i - 1; j >= 0; j--)
    {
        x += d[j] * pow(10, j);
    }
    (*num) = (unsigned int)x;
}
I've included a little 32-bit number-to-letters converter for you to grasp the basics. It works on a single 32-bit number at a time. You could use this as a basis for an array-based solution like the one you have half-way done in your example, or change the type to something bigger, or whatever. It should show you roughly what you need to do:
void intToBase4 (uint32_t num, char *outString)
{
    // There are 16 digits per num in this example
    for(int i = 0; i < 16; i++)
    {
        // Grab the lowest 2 bits and convert to a letter.
        *outString++ = (num & 0x03) + 'a';
        // Shift next 2 bits low
        num >>= 2;
    }
    // NUL terminate string.
    *outString = '\0';
}
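For instance, a quick hypothetical usage with the question's example value packed into a uint32_t; note that this routine emits the least significant pair first, so the output comes out reversed relative to the question's left-to-right reading and is padded out to 16 letters with 'a':
char out[17];
intToBase4(0x24u, out);   /* 0x24 == 0b0000100100, the "0000-10-01-00" example */
printf("%s\n", out);      /* prints "abcaaaaaaaaaaaaa" */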
A bit more universal version:
Usage: value is the value to decode, buff is the buffer where the result string will be stored, numofwrds is the number of fields to be decoded, and the remaining arguments are the field sizes in bits.
For example, "xxxxyyvvzz" (4 bits, then two bits, two bits, two bits):
decode(v, buff, 4, 4, 2, 2, 2);
#include <stdarg.h>   /* va_list, va_start, va_arg, va_end */
#include <stdlib.h>   /* malloc, free */

char dictionary[] = "abcdefghijklmnopqrstuvwxyz";

char *decode(unsigned int value, char *buff, int numofwrds, ...)
{
    va_list vl;
    int *fieldsizes = malloc(sizeof(int) * numofwrds);
    int bitsize = 0;
    char *result = NULL;
    if (fieldsizes != NULL)
    {
        va_start(vl, numofwrds);
        for (int i = 0; i < numofwrds; i++)
        {
            fieldsizes[i] = va_arg(vl, int);
            bitsize += fieldsizes[i];
        }
        va_end(vl);
        for (int i = 0; i < numofwrds; i++)
        {
            unsigned int mask, offset;
            mask = (1 << fieldsizes[i]) - 1;
            offset = bitsize - fieldsizes[i];
            mask <<= offset;
            buff[i] = dictionary[(value & mask) >> offset];
            bitsize -= fieldsizes[i];
        }
        free(fieldsizes);
        result = buff;
    }
    buff[numofwrds] = '\0';
    return result;
}

Storing bits from an array in an integer

So I have an array of bits, basically 0's and 1's, in a character array.
Now what I want to do is store these bits in an integer I have in another array (an int array), but I'm not sure how to do this.
Here is my code to get the bits:
char * convertStringToBits(char * string) {
    int i;
    int stringLength = strlen(string);
    int mask = 0x80; /* 10000000 */
    char *charArray;
    charArray = malloc(8 * stringLength + 1);
    if(charArray == NULL) {
        printf("An error occured!\n");
        return NULL; //error - cant use charArray
    }
    for(i = 0; i < stringLength; i++) {
        mask = 0x80;
        char c = string[i];
        int x = 0;
        while(mask > 0) {
            char n = (c & mask) > 0;
            printf("%d", n);
            charArray[x++] = n;
            mask >>= 1; /* move the bit down */
        }
        printf("\n");
    }
    return charArray;
}
This gets a series of bits in an array {1, 0, 1, 1, 0, 0, 1} for example. I want to store this in the integers that I have in another array. I've heard about integers having unused space or something.
For Reference: The integer values are red values from the rgb colour scheme.
EDIT:
To use this, I would store this string in the integer values, to be decoded later in the same way to retrieve the message (steganography).
So you want to do LSB substitution for the integers, the simplest form of steganography.
It isn't that integers have unused space, it's just that changing the LSB changes the value of an integer by 1, at most. So if you're looking at pixels, changing their value by 1 won't be noticeable by the human eye. In that respect, the LSB holds redundant information.
You've played with bitwise operations. You basically want to clear the last bit of an integer and substitute it with the value of one of your bits. Assuming your integers range between 0 and 255, you can do the following.
pixel = (pixel & 0xfe) | my_bit;
Edit: Based on the code snippet from the comments, you can achieve this like so.
int x;
for (x = 0; x < messageLength; x++) {
    rgbPixels[x][0] = (rgbPixels[x][0] & 0xfe) | bitArray[x];
}
Decoding is much simpler, in that all you need to do is read the value of the LSB of each pixel. The question here is how will you know how many pixels to read? You have 3 options:
The decoder knows the message length in advance.
The message length is similarly hidden in some known location so that the decoder can extract it. For example, 16 bits representing the message length in binary, hidden in the first 16 pixels before bitArray (see the sketch after the extraction snippet below).
You use an end-of-message marker, where you keep extracting bits until you hit a signature sequence that signals you to stop, for example eight 0s in a row. Whatever the sequence is and however long it is, you must make sure it can't be encountered prematurely in your bit array.
So, assuming you somehow know the message length, you can simply extract your bit array (after allocating it) like so.
int x;
for (x = 0; x < messageLength; x++) {
    bitArray[x] = rgbPixels[x][0] & 0x01;
}
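As a minimal sketch of option 2 above (not part of the original answer, reusing the rgbPixels/bitArray/messageLength names from the snippets): hide a 16-bit message length in the LSBs of the first 16 pixels, then read it back before extracting the message bits, which would start at pixel 16.
/* Embed: the first 16 pixels carry the message length, least significant bit first */
int x;
for (x = 0; x < 16; x++) {
    rgbPixels[x][0] = (rgbPixels[x][0] & 0xfe) | ((messageLength >> x) & 0x01);
}

/* Extract: rebuild the length, then read the message bits from pixel 16 onwards */
int recoveredLength = 0;
for (x = 0; x < 16; x++) {
    recoveredLength |= (rgbPixels[x][0] & 0x01) << x;
}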
This converts a string to the equivalent int.
char string[] = "101010101";
int result = 0;
for (int i = 0; i < strlen(string); i++)
    result = (result << 1) | (string[i] == '1');

how to store the value of a variable to string array?

Here I have created a string and I am storing the binary value of a number in it. I want to store the value of the variable num in the string.
i contains the length of the binary representation of the given decimal number. Suppose the given number is A=6: then i contains 3, and I need the string result to hold "110", which is the binary value of 6.
char* result = (char *)malloc((i) * sizeof(char));
i--;
while(A >= 1)
{
    num = A % 2;
    result[i] = num; // here I need to store the value of num in the string
    A = A / 2;
    i--;
}
It appears from the code you've posted that what you are trying to do is print a number in binary at a fixed precision. Assuming that's what you want to do, something like
unsigned int mask = 1 << (i - 1);
unsigned int pos = 0;
while (mask != 0) {
    result[pos] = (A & mask) == 0 ? '0' : '1';
    ++pos;
    mask >>= 1;
}
result[pos] = 0; // If you need a null terminated string
edge cases left as an exercise for the reader.
I'm not sure specifically what you are asking for. Do you mean the binary representation (i.e. 00001000) of a number written into a string or converting the variable to a string (i.e. 8)? I'll assume you mean the first.
The easiest way to do this is to repeatedly test the least significant bit and shift the value to the right (>>). We can do this in a for loop. However, you will need to know how many bits to read; we can get that from sizeof, multiplied by 8 (the number of bits in a byte).
int i = 15;
for (int b = 0; b < sizeof(i) * 8; ++b) {
    uint8_t bit_value = (i & 0x1);
    i >>= 1;
}
So how do we turn this iteration into a string? We need to construct the string in reverse. We know how many bits are needed, so we can create a string buffer accordingly with an extra byte for NULL termination.
char *buffer = calloc(sizeof(i) * 8 + 1, sizeof(char));
This allocates memory that is sizeof(i) * 8 + 1 elements long, where each element is sizeof(char), and zeroes each element. Now let's put the bits into the string.
for (int b = 0; b < sizeof(i) * 8; ++b) {
    uint8_t bit_value = (i & 0x1);
    size_t offset = sizeof(i) * 8 - 1 - b;
    buffer[offset] = '0' + bit_value;
    i >>= 1;
}
So what's happening here? In each pass we're calculating the offset in the buffer that we should be writing a value to, and then we're adding the ASCII value of 0 to bit_value as we write it into the buffer.
This code is untested and may have some issues, but that is left as an exercise to the reader. If you have any questions, let me know!
Here is the whole code. It is supposed to work fine.
int i = 0;
int A;              // supposed to be entered by the user
int tmp = A;        // count the digits on a copy so A is preserved for the second loop
//calculating the value of i
while(tmp != 0)
{
    tmp = tmp / 2;
    i++;
}
char* result = (char *)malloc(sizeof(char) * (i + 1));
result[i] = '\0';   // null terminate the string
i--;
while(A != 0)
{
    result[i] = '0' + (A % 2);
    A = A / 2;
    i--;
}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>

char *numToBinStr(int num){
    static char bin[sizeof(int) * CHAR_BIT + 1];
    char *p = &bin[sizeof(int) * CHAR_BIT];//p points to the end
    unsigned A = (unsigned)num;
    do {
        *--p = '0' + (A & 1);
        A >>= 1;
    } while(A > 0);//do-while for the case where the value of A is 0
    return p;
}

int main(void){
    printf("%s\n", numToBinStr(6));
    //To duplicate, if necessary
    //char *bin = strdup(numToBinStr(6));
    char *result = numToBinStr(6);
    char *bin = malloc(strlen(result) + 1);
    strcpy(bin, result);
    printf("%s\n", bin);
    free(bin);
    return 0;
}
You could also use itoa() (non-standard, commonly declared in <stdlib.h>) or sprintf() (declared in <stdio.h>).
The second link has some examples as well.
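A rough, hedged sketch (itoa is not part of standard C, but where it is provided it commonly takes (value, buffer, radix); radix 2 yields the binary digits):
char buf[sizeof(int) * 8 + 1];   /* room for every bit plus the terminator */
itoa(6, buf, 2);                 /* non-standard; typically fills buf with "110" */
printf("%s\n", buf);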

How to convert byte to int?

I have an Arduino which is reading in a set of three bytes from a program which correspond to degrees in which an actuator must turn. I need to convert these bytes into integers so I can pass those integers on to my actuators.
For example, I know the default rest state value I receive from the program is 127. I wrote a C# program to interpret these bytes and that can get them to a single integer value. However, I am unable to figure out how to do this in the Arduino environment with C. I have tried typecasting each byte to a char and storing that in a string. However that returns garbled values that make no sense.
void loop() {
    if(Serial.available() && sw)
    {
        for(int j = 0; j < 3; j++)
        {
            input[j] = Serial.read();
        }
        //command = ((String)input).toInt();
        sw = 0;
    }
    String myString = String((char *)input);
    Serial.println(myString);
}
The return value of Serial.read() is an int. Therefore, if you have the following code snippet:
int input[3];
for (int i = 0; i < 3; i++) {
    input[i] = Serial.read();
}
Then input should store three ints. However, the code:
char input[3];
for (int i = 0; i < 3; i++) {
    input[i] = Serial.read();
}
Will just store the byte conversion from int to char.
If you want to store this as a string, you need to do a proper conversion. In this case, use itoa (see Arduino API description).
The code snippet would be something like:
#include <stdlib.h>
char buffer[12];                                     // itoa needs a caller-supplied buffer
char* convertedString = itoa(input[i], buffer, 10);  // itoa takes (value, buffer, radix)
This should work:
int command = input[0]*256*256 + input[1]*256 + input[2];
By the way, the default language you use to program an Arduino is C++, not C, although they have some similarities.
The logic below will help you:
iDst = (cSrc[0] << 16) | (cSrc[1] << 8) | cSrc[2];
Or you can use a union for this case:
union byte2char
{
    char c[4];
    int i;
};
But the union implementation needs to take little- and big-endian systems into account.
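Purely as an illustration (not from the original answer), a minimal sketch of the union approach assuming a little-endian target such as AVR-based Arduinos; a big-endian machine would need the opposite byte order:
union byte2char u;
unsigned char cSrc[3] = {0x01, 0x02, 0x03};   /* hypothetical received bytes, most significant first */
u.i = 0;
u.c[0] = cSrc[2];   /* on little-endian, c[0] is the least significant byte of i */
u.c[1] = cSrc[1];
u.c[2] = cSrc[0];
/* u.i now equals 0x010203, the same as (cSrc[0] << 16) | (cSrc[1] << 8) | cSrc[2] */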
