Usually I develop in PHP, but for a project I have to write a small program in C (using Visual Studio on Windows).
I need to display values on the screen (in the final version I will write them to a file).
Among the 3 values there is one (valueB in my example) that I need to see displayed either as a character or as a number.
Example:
valueA = 65 | valueB = 10 | valueC = 80 => I want to display A - 10 - P
valueA = 65 | valueB = 78 | valueC = 80 => I want to display A - N - P
So I created this code and it works (I used hard-coded values for this test code):
// Initialization
unsigned char valueA = 65;
unsigned char valueB = 10;
unsigned char valueC = 80;
char formatDisplay[25] = { 0 };  // plain char: sprintf()/printf() expect char*, not unsigned char*
unsigned char formatA = 'c';
unsigned char formatB = 'c';
unsigned char formatC = 'c';
// If character NOT human readable
if (valueB < 33) formatB = 'd';
// Build the format display string
sprintf(formatDisplay, "\n%%%c - %%%c - %%%c", formatA, formatB, formatC);
// Display the values
printf(formatDisplay, valueA, valueB, valueC);
But isn't there a simpler way...?
I develop in C, not in C++.
In the question and example above I only have 3 values, including 1 with a variant display, but later I may have more values and more variant displays.
Using all the comments and answers I received, I'm answering my own question, in case it helps someone one day :).
Here is an example with 4 values, including 2 with variable display (B and D in the case below).
#include <ctype.h>
#include <stdio.h>

// Initialization
unsigned char valueA = 65;
unsigned char valueB = 10;
unsigned char valueC = 80;
unsigned char valueD = 85;
char formatDisplay[30] = { 0 };  // plain char: sprintf()/printf() expect char*
unsigned char formatB = 'c';
unsigned char formatD = 'c';
// If character NOT human readable
if (!isgraph(valueB)) formatB = 'd';
if (!isgraph(valueD)) formatD = 'd';
// Build the format display string
sprintf(formatDisplay, "\n%%c - %%%c - %%c - %%%c", formatB, formatD);
// Display the values
printf(formatDisplay, valueA, valueB, valueC, valueD);
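For the record, once the number of values keeps growing, the format-string trick can be skipped entirely. Here is a minimal sketch of that simpler variant (my own generalization, not from the original answers; the values array and loop are illustrative) that just chooses the conversion per value inside a loop:

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    unsigned char values[] = { 65, 10, 80, 85 };
    int count = sizeof values / sizeof values[0];

    printf("\n");
    for (int i = 0; i < count; i++) {
        if (isgraph(values[i]))
            printf("%c", values[i]);   // printable: show as character
        else
            printf("%d", values[i]);   // not printable: show as number
        if (i < count - 1)
            printf(" - ");
    }
    printf("\n");                      // prints: A - 10 - P - U
    return 0;
}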
I have a 2D char array called char newString[][].
I want to convert the 2nd and 3rd rows to integers.
Let's say newString[2] = {1,2} and newString[3] = {2,2}.
I am trying to get int n = 12 and int m = 22 by turning rows of my 2D array into integers, because later I want to compute m^n (m to the power of n).
char newString[32][32];
int n;
int m;

// let newString[2] = {1,2}
// let newString[3] = {2,2}
// convert it to an int
n = newString[2] - '0'; // I want n = 12
m = newString[3] - '0'; // m = 22
I know that char - '0' will give you an integer if assigned to an integer, but how do I turn a row of a 2D array into an integer?
I tried a few approaches, but they only work for 1D arrays. Any ideas?
The function strtol() will not do what you want here: there is no trailing NUL byte, so there is no string to convert.
Suggest:
// given
char newString[][2] =
{
    { '1', '2' },
    { '2', '2' }
};

// convert to integers
int n = newString[0][0] - '0';
n = n*10 + (newString[0][1] - '0');

int m = newString[1][0] - '0';
m = m*10 + (newString[1][1] - '0');

// which results in:
// n contains 12
// m contains 22
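For completeness, if strtol is still wanted, a small sketch (my own addition) that copies each row into a NUL-terminated buffer first, which removes the objection above; it assumes 2-digit rows as in the question:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char newString[2][2] = { { '1', '2' }, { '2', '2' } };
    char buf[3];                        /* room for 2 digits + NUL terminator */

    memcpy(buf, newString[0], 2);
    buf[2] = '\0';
    int n = (int)strtol(buf, NULL, 10); /* n == 12 */

    memcpy(buf, newString[1], 2);
    buf[2] = '\0';
    int m = (int)strtol(buf, NULL, 10); /* m == 22 */

    printf("n = %d, m = %d\n", n, m);
    return 0;
}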
I'm trying to read a PNG without using any external libraries in C, and so far I've managed to retrieve a PNG's bytes and split them into their respective chunks. However, when I try to access any of the data from these chunks, I get very bad values. My test picture has dimensions 202x71, yet when I print the width and height my program retrieves from the IHDR chunk, I get:
width: -1996594944 - height: -1577759370
which physically can't be the picture's dimensions, since they are negative. This is weird, because even if the data is incorrect, my code still seems to be reading the proper types and lengths:
value of type: IHDR
value of data: ê■a
value of length: 13
value of crc: FKخ
value of type: sRGB
value of data: ê■a
value of length: 1
value of crc: «╬Θ
I know that printing binary data isn't ideal; it's just there for debugging. The function which extracts the chunks is this:
typedef struct Chunk Chunk;
struct Chunk {
    int length;
    unsigned char type[4];
    unsigned char *data;
    unsigned char crc[4];
};

Chunk* getChunksFromBytes(unsigned char* bytes)
{
    int next_seg = 7;
    int chunks = 0;
    long int size = 1;
    Chunk new_chunk;
    Chunk* file_chunks = (Chunk*) malloc(sizeof(Chunk));
    Chunk* temp = file_chunks;
    while(1){
        new_chunk.length = (int) bytes[next_seg+1] << 24 |
                           bytes[next_seg+2] << 16 |
                           bytes[next_seg+3] << 8 |
                           bytes[next_seg+4];
        new_chunk.type[0] = bytes[next_seg+5];
        new_chunk.type[1] = bytes[next_seg+6];
        new_chunk.type[2] = bytes[next_seg+7];
        new_chunk.type[3] = bytes[next_seg+8];
        unsigned char data[new_chunk.length];
        for(int i = 0; i < new_chunk.length; i++){
            data[i] = bytes[next_seg+9+i];
        }
        new_chunk.data = data;
        new_chunk.crc[0] = bytes[next_seg+new_chunk.length+9];
        new_chunk.crc[1] = bytes[next_seg+new_chunk.length+10];
        new_chunk.crc[2] = bytes[next_seg+new_chunk.length+11];
        new_chunk.crc[3] = bytes[next_seg+new_chunk.length+12];
        size += sizeof(new_chunk)+1;
        temp = realloc(temp, size);
        temp[chunks] = new_chunk;
        chunks++;
        if(bytes[next_seg+5] == 'I' && bytes[next_seg+6] == 'E' && bytes[next_seg+7] == 'N' && bytes[next_seg+8] == 'D') break;
        next_seg += new_chunk.length+12;
    }
    return temp;
}
and the code where I use these values to calculate the width and height is here:
Chunk* file_chunks = getChunksFromBytes(file_bytes);

int width = (int) file_chunks[0].data[0] << 24 |
            file_chunks[0].data[1] << 16 |
            file_chunks[0].data[2] << 8 |
            file_chunks[0].data[3];
int height = (int) file_chunks[0].data[4] << 24 |
             file_chunks[0].data[5] << 16 |
             file_chunks[0].data[6] << 8 |
             file_chunks[0].data[7];
I double-checked that the byte order is right according to Wikipedia's entry on PNG files, yet the dimensions still come out wrong. I don't have any inkling why this could be. Please do help.
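One concrete problem stands out, offered as a diagnosis rather than a verified fix: data is a variable-length array that lives only inside the loop body, so new_chunk.data = data; stores a pointer that dangles once the function returns, and every later read of file_chunks[0].data[...] is undefined behavior, which would explain the garbage dimensions. A minimal sketch of the copy step using heap storage instead (assuming <stdlib.h> and <string.h> are included):

/* Sketch: replace the VLA copy inside the loop with a heap allocation,
   so the data pointer remains valid after getChunksFromBytes returns. */
new_chunk.data = malloc(new_chunk.length);
if (new_chunk.data == NULL) {
    /* handle allocation failure, e.g. free what was built so far */
    return NULL;
}
memcpy(new_chunk.data, &bytes[next_seg + 9], new_chunk.length);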
I have a function PostProcess that is fixed and cannot change. It takes an array of 6 bytes and outputs a 24-bit value.
I'm trying to work out, for a given 24-bit number, what function PreProcess would give me equal output and input values.
For example, if I set my input value to 2^24 - 1 = 16777215, then I would expect to get 16777215 on the output.
It's not clear to me how I would implement this. I've added the code below with a test and the implementation of PostProcess.
#include <stdio.h>

void PreProcess(unsigned int in, unsigned char out[]);
int PostProcess(unsigned char pu8Input[]);

int main()
{
    unsigned int InputVal = 16777215; // max value for 24 bits
    unsigned char PreProcessed[6] = {0};

    PreProcess(InputVal, PreProcessed);
    unsigned int OutputVal = PostProcess(PreProcessed);

    if(InputVal == OutputVal)
        printf("True!");
    else
        printf("False");
    return 0;
}

void PreProcess(unsigned int in, unsigned char out[])
{
    //TODO
}

int PostProcess(unsigned char pu8Input[])
{
    unsigned int u32Out = 0u;

    u32Out += (pu8Input[0] - '0') * 100000;
    u32Out += (pu8Input[1] - '0') * 10000;
    u32Out += (pu8Input[2] - '0') * 1000;
    u32Out += (pu8Input[3] - '0') * 100;
    u32Out += (pu8Input[4] - '0') * 10;
    u32Out += (pu8Input[5] - '0') * 1;
    u32Out &= 0xFFFFFF;
    return u32Out;
}
Reverse the operation
Note: with in > 999999, out[0] will be outside the '0'-'9' range.

void PreProcess(unsigned int in, unsigned char out[]) {
    in &= 0xFFFFFFu; // Enforce 24-bit limit.
    for (int index = 5; index > 0; index--) {
        out[index] = in % 10u + '0';
        in /= 10u;
    }
    // `in` will be 0 to 167
    out[0] = in + '0';
    // With ASCII, `out[0]` will be 48 to 215
}
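A quick worked check, assuming ASCII: for in = 16777215 the loop leaves in = 167, so out[0] = 167 + '0' = 215 and the remaining bytes are '7','7','2','1','5'; PostProcess then computes (215 - '0') * 100000 + 77215 = 16777215, as required.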
The input integer can have a maximum value of 2^24 - 1, and the array of characters is 6 bytes long... if we had the possibility to change PostProcess() it would have been easy: 6 characters are exactly those required to store a 24-bit integer in hex format, one character every 4 bits, with the maximum value being 0xFFFFFF.
But the PostProcess() implementation is fixed, and it is designed as a sort of "max-6-digits atoi". So, if the input buffer's value is {'3', '4', '5', '6', '7', '8'}, it yields the integer 345678.
It seems that 999999 is the biggest value we can obtain, but here comes the trick: who says we may store only digits in the char buffer? We don't have any such constraint (but we do have to rely on the ASCII encoding scheme).
The strategy
Let's make sure that the last 5 bytes of the char buffer contain the decimal representation of the input number; in this way PostProcess will convert those digits as expected. The value of those digits can be calculated as in % 100000.
Since the maximum input value is 2^24 - 1 = 16777215, we have to represent the range [0-167] with the first byte of the array.
Since PostProcess will subtract '0' from pu8Input[0], we make sure to compensate for it when generating pu8Input[0].
The code
#include <stdio.h>
#include <string.h>

void PreProcess(unsigned int in, unsigned char out[])
{
    if(in <= 16777215)
    {
        char aux[7];
        unsigned int auxInt = in % 100000;
        unsigned char firstchar;

        firstchar = (in / 100000) + '0';
        sprintf( aux, "%c%05u", firstchar, auxInt );
        memcpy( out, aux, 6 );
    }
}
Summarizing:
We calculate the remainder auxInt = in % 100000
We calculate the leading char as firstchar = (in / 100000) + '0'
We put them together with sprintf, using an auxiliary char buffer 7 bytes long (because we need room for the string terminator)
We memcpy the auxiliary char buffer to the output buffer
I can't figure out what's wrong with this code!
It returns 208 as the decimal value, where it should be 0.
#include <math.h>
#include <stdio.h>

typedef unsigned char uchar;

int CONVERTION_BinStrToDecimal(char* binstr) // transform a binary string to a decimal number
{
    int cpts = 0;
    unsigned char dec = 0;
    uchar x = 0;

    for (cpts = 0; cpts <= 7; cpts++) {
        x = 7 - cpts;
        dec += (binstr[cpts] * pow(2, x));
    }
    return dec;
}

int main()
{
    uchar decimal = 0;
    char bin[8] = "00000000"; // example

    decimal = CONVERTION_BinStrToDecimal(bin);
    printf("%d", decimal);
}
binstr[cpts] yields the ASCII code of '0' or '1' (which is 0x30 or 0x31).
You need to use binstr[cpts] == '1' to convert an ASCII '1' to the number 1 and everything else to 0 (assuming that no other characters may occur). Another option would be binstr[cpts] - '0'.
By the way, using the pow() function is discouraged for such cases; better to substitute pow(2,x) with (1 << x).

for (cpts = 0; cpts <= 7; cpts++) {
    x = 7 - cpts;
    dec += ((binstr[cpts] == '1') * (1 << x));
}

There are many possibilities to make it look nicer, of course, the most obvious being (binstr[cpts] == '1') << x.
Furthermore, mind that your code expects exactly 8 binary digits in order to calculate the correct result.
Alternatively, if you zero-terminate your string you can use the strtol function (from <stdlib.h>) with base 2, e.g.:

char bin[9] = "00000000";
decimal = strtol(bin, NULL, 2);
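Putting that together, a minimal complete sketch (the example bit pattern is my own):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char bin[9] = "00101101";              /* 9 bytes: 8 digits + NUL */
    int decimal = (int)strtol(bin, NULL, 2);

    printf("%d\n", decimal);               /* prints 45 */
    return 0;
}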
Because I'm masochistic I'm trying to write something in C to decode an 8-bit PNG file (it's a learning thing; I'm not trying to reinvent libpng...).
I've got to the point where the stuff in my deflated, unfiltered data buffer unmistakably resembles the source image (see below), but it's still quite, erm, wrong, and I'm pretty sure there's something askew with my implementation of the filtering algorithms. Most of them are quite simple, but there's one major thing I don't understand in the docs, not being good at maths or ever having taken a comp-sci course:
Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes.
What does that mean?
If someone can tell me that I'd be very grateful!
For reference (and I apologise for the crappy C), my noddy implementation of the filtering algorithms described in the docs looks like:
unsigned char paeth_predictor (unsigned char a, unsigned char b, unsigned char c) {
    // a = left, b = above, c = upper left
    char p = a + b - c;    // initial estimate
    char pa = abs(p - a);  // distances to a, b, c
    char pb = abs(p - b);
    char pc = abs(p - c);
    // return nearest of a,b,c,
    // breaking ties in order a,b,c.
    if (pa <= pb && pa <= pc) return a;
    else if (pb <= pc) return b;
    else return c;
}

void unfilter_sub(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (i < bpp ? 0 : out[i-bpp]);
}

void unfilter_up(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (row == 0 ? 0 : out[i-rowlen]);
}

void unfilter_paeth(char* out, char* in, int bpp, int row, int rowlen) {
    char a, b, c;
    for (int i = 0; i < rowlen; i++) {
        a = i < bpp ? 0 : out[i - bpp];
        b = row < 1 ? 0 : out[i - rowlen];
        c = i < bpp ? 0 : (row == 0 ? 0 : out[i - rowlen - bpp]);
        out[i] = in[i] + paeth_predictor(a, b, c);
    }
}
And the images I'm seeing:
Source: http://img220.imageshack.us/img220/8111/testdn.png
Output: http://img862.imageshack.us/img862/2963/helloworld.png
It means that, in the algorithm, whenever an arithmetic operation is performed, it is performed modulo 256, i.e. if the result is 256 or greater then it "wraps" around. The result is that all values will always fit into 8 bits and never overflow.
Unsigned types already behave this way by mandate, and if you use unsigned char (and a byte on your system is 8 bits, which it probably is), then your calculation results will naturally just never overflow beyond 8 bits.
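To see this concretely in C (a minimal sketch; it assumes the usual 8-bit unsigned char):

#include <stdio.h>

int main(void)
{
    unsigned char a = 2, b = 255;
    unsigned char sum = a + b;        /* 257 is stored modulo 256 -> 1 */

    printf("%u\n", sum);              /* prints 1 */
    printf("%d\n", (2 + 255) % 256);  /* also 1 */
    return 0;
}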
It means only the last 8 bits of the result are used. 2^8 = 256, so the last 8 bits of an unsigned value v are the same as v % 256.
For example, 2 + 255 = 257, which is 100000001 in binary; the last 8 bits of 257 are 1, and 257 % 256 is also 1.
In 'simple language' it means that you never go outside your byte size.
For example in C# if you try this it will fail:
byte test = 255 + 255;
(1,13): error CS0031: Constant value '510' cannot be converted to a
'byte'
byte test = (byte)(255 + 255);
(1,13): error CS0221: Constant value '510' cannot be converted to a
'byte' (use 'unchecked' syntax to override)
For every calculation you have to do modulo 256 (C#: % 256).
Instead of writing % 256 you can also do AND 255:
(175 + 205) mod 256 = (175 + 205) AND 255
Some C# samples:
byte test = ((255 + 255) % 256);
// test: 254
byte test = ((255 + 255) & 255);
// test: 254
byte test = ((1 + 379) % 256);
// test: 124
byte test = ((1 + 379) & 0xFF);
// test: 124
Note that you can sometimes simplify a byte series:
(byteVal1 + byteVal2 + byteVal3) % 256
= (((byteVal1 % 256) + (byteVal2 % 256)) % 256 + (byteVal3 % 256)) % 256
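A quick check of that identity in C (the values are my own illustration):

#include <stdio.h>

int main(void)
{
    unsigned int b1 = 200, b2 = 180, b3 = 250;
    unsigned int direct   = (b1 + b2 + b3) % 256;
    unsigned int stepwise = (((b1 % 256) + (b2 % 256)) % 256 + (b3 % 256)) % 256;

    printf("%u %u\n", direct, stepwise);  /* both print 118 */
    return 0;
}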