CRC lookup table generated in C always gives different results

I'm trying to create a function that generates a CRC lookup table. I'm working with an 8051 microcontroller, and I'd prefer the table-lookup method, but I'd also rather have my computer generate the table values, which I can then load directly into the microcontroller. Most of this source code was borrowed from: http://www.rajivchakravorty.com/source-code/uncertainty/multimedia-sim/html/crc8_8c-source.html
I only added the "main" function.
#include <stdio.h>

#define GP 0x107   /* generator polynomial: x^8 + x^2 + x + 1 */
#define DI 0x07    /* GP without the x^8 term */

static unsigned char crc8_table[256];
static int made_table = 0;

static void init_crc8()
{
    int i, j;
    unsigned char crc;

    if (!made_table) {
        for (i = 0; i < 256; i++) {
            crc = i;
            for (j = 0; j < 8; j++)
                crc = (crc << 1) ^ ((crc & 0x80) ? DI : 0);
            crc8_table[i] = crc & 0xFF;
        }
        made_table = 1;
    }
}

void crc8(unsigned char *crc, unsigned char m)
{
    if (!made_table)
        init_crc8();
    *crc = crc8_table[(*crc) ^ m];
    *crc &= 0xFF;
}

int main()
{
    unsigned char crc[1];

    crc8(crc, 'S');
    printf("S=%x\n", crc[0]); // different hex code almost every time
    crc8(crc, 'T');
    printf("T=%x\n", crc[0]); // different hex code almost every time
    return 0;
}
When I execute the program I expect the same values on the screen each time, but the hex codes after the printed equals signs change on nearly every program execution.
What can I do to correct that issue? I don't want to be collecting incorrect CRC values.

crc[0] is not initialized. You need crc[0] = 0; or *crc = 0; before calling crc8() with crc. Then you won't get random answers coming from the random initial contents of crc[0].
You don't need the *crc &= 0xff; in crc8(). If char is eight bits, then it does nothing. If you have an odd architecture where char is more than eight bits, then you need to do *crc = crc8_table[((*crc) ^ m) & 0xff]; to assure that you don't go outside the bounds of the table. (Only the low eight bits of m will be used in the CRC calculation.) The contents of the table have already been limited to eight bits, so in any case you don't need a final & 0xff.
You may need a different initial value than zero, and you may need to exclusive-or the final CRC value with something, depending on the definition of the CRC-8 that you want. In the RevEng catalog of CRC's, there are two 8-bit CRCs with that polynomial that are not reflected. Both happen to start with an initial value of zero, but one is exclusive-ored with 0x55 at the end. Also the CRC definition you need may be reflected, in which case the shift direction changes and the polynomial is flipped. If your CRC-8 needs to be interoperable with some other software, then you need to find out the full definition of the CRC being used.
Passing a pointer seems like an odd choice here. It would be more efficient to just pass and return the CRC value directly, e.g. unsigned crc8(unsigned crc, unsigned ch), which would apply the eight bits in ch to the CRC crc and return the new value. Note that you do not need to make the CRC value a char. unsigned is generally what C routines take as an argument and return most efficiently; in fact, usually the first argument is passed in a register and the result is returned in the same register.
Usually one computes a CRC on a message consisting of a series of bytes. It would be more efficient to have a routine that does the whole message with a loop, so that you don't need to check to see if the table has been built yet for every single byte of the message.
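Putting those last two points together, here is a minimal sketch (it reuses crc8_table, made_table and init_crc8 from the question; the zero initial value is an assumption that depends on the CRC-8 definition you settle on):

#include <stddef.h>

// compute the CRC-8 of a whole message, returning the value directly
unsigned crc8_message(const unsigned char *buf, size_t len)
{
    unsigned crc = 0;        /* initial value - check your CRC definition */

    if (!made_table)         /* build the table once, not once per byte */
        init_crc8();
    while (len--)
        crc = crc8_table[(crc ^ *buf++) & 0xff];
    return crc;
}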

In the main, crc[0] has not been initialized. As a result, in crc8, *crc in the expression (*crc) ^ m is uninitialized, hence your random values.
Fix: initialize crc[0]. Something like
unsigned char crc[1] = { 0 };

Related

Bitwise operation in C language (0x80, 0xFF, << )

I have a problem understanding this code. What I know is that we have passed code through an assembler that converted it into byte code. Now I have a virtual machine that is supposed to read this code. This function is supposed to read the first byte-code instruction. I don't understand what is happening in this code; I guess we are trying to read the byte code, but I don't understand how it is done.
static int32_t bytecode_to_int32(const uint8_t *bytecode, size_t size)
{
    int32_t result;
    t_bool  sign;
    int     i;

    result = 0;
    sign = (t_bool)(bytecode[0] & 0x80);
    i = 0;
    while (size)
    {
        if (sign)
            result += ((bytecode[size - 1] ^ 0xFF) << (i++ * 8));
        else
            result += bytecode[size - 1] << (i++ * 8);
        size--;
    }
    if (sign)
        result = ~(result);
    return (result);
}
This code is somewhat badly written: it crams many operations onto single lines and contains various potential bugs. It looks brittle.
bytecode[0] & 0x80 simply reads the MSB sign bit, assuming 2's complement or similar, then converts it to a boolean.
The loop iterates backwards through the array, from the last byte (which ends up least significant) to the first (most significant).
If the sign was negative, the code will perform an XOR of the data byte with 0xFF. Basically inverting all bits in the data. The result of the XOR is an int.
The data byte (or the result of the above XOR) is then bit shifted i * 8 bits to the left. The data is always implicitly promoted to int, so in case i * 8 happens to give a result larger than INT_MAX, there's a fat undefined behavior bug here. It would be much safer practice to cast to uint32_t before the shift, carry out the shift, then convert to a signed type afterwards.
The resulting int is converted to int32_t - these could be the same type or different types depending on system.
i is incremented by 1, size is decremented by 1.
If sign was negative, the int32_t is inverted to some 2's complement negative number that's sign extended and all the data bits are inverted once more. Except all zeros that got shifted in with the left shift are also replaced by ones. If this is intentional or not, I cannot tell. So for example if you started with something like 0x0081 you now have something like 0xFFFF01FF. How that format makes sense, I have no idea.
My take is that the bytecode[size - 1] ^ 0xFF (which is equivalent to ~) was made to toggle the data bits, so that they would later toggle back to their original values when ~ is called later. A programmer has to document such tricks with comments, if they are anything close to competent.
Anyway, don't use this code. If the intention was merely to swap the byte order (endianess) of a 4 byte integer, then this code must be rewritten from scratch.
That's properly done as:
static int32_t big32_to_little32 (const uint8_t* bytes)
{
    uint32_t result = (uint32_t)bytes[0] << 24 |
                      (uint32_t)bytes[1] << 16 |
                      (uint32_t)bytes[2] << 8  |
                      (uint32_t)bytes[3] << 0  ;
    return (int32_t)result;
}
Anything more complicated than the above is highly questionable code. We need not worry about signs being a special case, the above code preserves the original signedness format.
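For instance, a quick sanity check of the function above:

uint8_t bytes[4] = { 0x01, 0x02, 0x03, 0x04 };   /* big-endian input */
int32_t v = big32_to_little32(bytes);            /* v == 0x01020304 on any host */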
So A ^ 0xFF toggles all the bits in A: e.g. 10101100 XORed with 11111111 becomes 01010011. ^ is the XOR operator; I am not sure why they didn't use ~ here instead.
The << is a bit shift "up" or left. In other words, A << 1 is equivalent to multiplying A by 2.
The >> shifts down, or right, which is equivalent to dividing by 2.
The ~ inverts the bits in a byte.
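A small demonstration of those operators:

unsigned char a = 0xAC;      /* 1010 1100 */
unsigned char x = a ^ 0xFF;  /* 0101 0011 - same as (unsigned char)~a */
unsigned char l = a << 1;    /* 0101 1000 - a*2, truncated to 8 bits  */
unsigned char r = a >> 1;    /* 0101 0110 - a/2 */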
Note that it's better to initialise variables at declaration; it costs no additional processing whatsoever to do it that way.
sign = (t_bool)(bytecode[0] & 0x80); - the sign of the number is stored in the 8th bit (position 7 counting from 0), which is where the 0x80 comes from. So it's literally checking whether the sign bit is set in the first byte of bytecode, and if so, storing that in the sign variable.
Essentially, if the number is unsigned, the code copies the bytes from bytecode into result one byte at a time.
If the data is signed then it flips the bits then copies the bytes, then when it's done copying, it flips the bits back.
Personally, with this kind of thing I prefer to get the data, put it in network byte order with htons(), memcpy() it to an allocated array, and store it in an endian-agnostic way; when I retrieve the data I use ntohs() to convert it back to the format used by the computer. htons() and ntohs() are standard C functions used all the time in networking and in platform-agnostic data formatting / storage / communication.
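For the 32-bit values in this question, that idea would look something like the sketch below; on POSIX systems, htonl() and ntohl() are the 32-bit counterparts of htons()/ntohs():

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <string.h>

/* store a 32-bit value in network byte order, endian-agnostically */
void store32(uint8_t *dest, uint32_t val)
{
    uint32_t be = htonl(val);       /* host order -> network order */
    memcpy(dest, &be, sizeof be);
}

/* retrieve it again in the host's native byte order */
uint32_t load32(const uint8_t *src)
{
    uint32_t be;
    memcpy(&be, src, sizeof be);
    return ntohl(be);               /* network order -> host order */
}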
This function is a very naive version of a function that converts from big-endian to little-endian.
The size parameter is not needed, as the function only works with 4-byte data.
The same result can be achieved much more easily with union punning (which also allows compilers to optimize it, in this case down to a simple instruction):
#include <stdint.h>
#include <stdio.h>

#define SWAP(a,b,t) do { t c = (a); (a) = (b); (b) = c; } while(0)

int32_t my_bytecode_to_int32(const uint8_t *bytecode)
{
    union
    {
        int32_t i32;
        uint8_t b8[4];
    } i32;

    i32.b8[3] = *bytecode++;
    i32.b8[2] = *bytecode++;
    i32.b8[1] = *bytecode++;
    i32.b8[0] = *bytecode;
    return i32.i32;
}
int main()
{
    union {
        int32_t i32;
        uint8_t b8[4];
    } i32;

    i32.i32 = -4567;
    SWAP(i32.b8[0], i32.b8[3], uint8_t);
    SWAP(i32.b8[1], i32.b8[2], uint8_t);
    printf("%d\n", bytecode_to_int32(i32.b8, 4));

    i32.i32 = -34;
    SWAP(i32.b8[0], i32.b8[3], uint8_t);
    SWAP(i32.b8[1], i32.b8[2], uint8_t);
    printf("%d\n", my_bytecode_to_int32(i32.b8));
}
https://godbolt.org/z/rb6Na5
If the purpose of the code is to sign-extend a 1-, 2-, 3-, or 4-byte sequence in network/big-endian byte order to a signed 32-bit int value, it's doing things the hard way and reimplementing the wheel along the way.
This can be broken down into a two-step process: assemble the proper number of big-endian bytes into a 32-bit integer value, then sign-extend that value out to 32 bits. Note that shifting the bytes into place already performs the big-endian-to-host conversion, which is otherwise the job of the POSIX-standard ntohl() function that converts a 32-bit unsigned integer value in big-endian/network byte order to the local host's native byte order.
The first step I'd do is to convert 1, 2, 3, or 4 bytes into a uint32_t:
#include <stdint.h>
#include <stddef.h>
#include <limits.h>

// convert the `size` number of bytes starting at the `bytecode` address
// (big-endian, most significant byte first) to a uint32_t value
static uint32_t bytecode_to_uint32( const uint8_t *bytecode, size_t size )
{
    uint32_t result = 0;

    switch ( size )
    {
        case 4:
            result  = (uint32_t)bytecode[ size - 4 ] << 24;  // fall through
        case 3:
            result += (uint32_t)bytecode[ size - 3 ] << 16;  // fall through
        case 2:
            result += (uint32_t)bytecode[ size - 2 ] << 8;   // fall through
        case 1:
            result += bytecode[ size - 1 ];
            break;
        default:
            // error handling here
            break;
    }
    return( result );
}
Then, sign-extend it (borrowing from this answer):
static uint32_t sign_extend_uint32( uint32_t in, size_t size )
{
    if ( size == 4 )
    {
        return( in );
    }

    // being pedantic here - the existence of `[u]int32_t` pretty
    // much ensures 8 bits/byte
    size_t   bits = size * CHAR_BIT;

    uint32_t m = 1U << ( bits - 1 );
    uint32_t result = ( in ^ m ) - m;
    return ( result );
}
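As a quick check of the (in ^ m) - m trick: with size == 1, m is 0x80, so an input byte of 0xFF gives (0xFF ^ 0x80) - 0x80 = 0x7F - 0x80, which wraps around to 0xFFFFFFFF in unsigned arithmetic, i.e. -1 once the result is viewed as a signed 32-bit value.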
Put it all together:
static int32_t bytecode_to_int32( const uint8_t *bytecode, size_t size )
{
    uint32_t result = bytecode_to_uint32( bytecode, size );
    result = sign_extend_uint32( result, size );

    // the shifts in bytecode_to_uint32() already converted the
    // big-endian input to host byte order, so no ntohl() is needed

    // converting uint32_t here to signed int32_t
    // can be subject to implementation-defined
    // behavior
    return( result );
}
Note that the conversion from uint32_t to int32_t implicitly performed by the return statement in the above code can result in implementation-defined behavior, as there are uint32_t values that cannot be mapped to int32_t values. See this answer.
Any decent compiler should optimize that well into inline functions.
I personally think this also needs much better error handling/input validation.
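A quick usage sketch of the combined routine:

/* 0xFF 0x85 is -123 as a 16-bit big-endian two's complement value */
uint8_t bytes[2] = { 0xFF, 0x85 };
int32_t v = bytecode_to_int32( bytes, 2 );   /* v == -123 */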

What size data / position is this C code assigning to a value

I am trying to port some C code to another language. Most of the code is working, except working out what this section does. The C code I've been handed is not in a compilable state so I can't do a run time analysis, but will fix that as a last resort.
Buffer is a pointer to the file's raw contents.
Based on reading the code my expectation was this:
if pos = 4132, then this is trying to read a 16-bit unsigned value from file position (pos << 8) + (pos + 1).
However when I make this calculation for pos = 4132, I get a file position of 1061925, which is way beyond the end of the file.
unsigned short id;

struct file
{
    char *buffer;
};

id = (*(file_instance->buffer + pos) << 8) + *(file_instance->buffer + pos + 1);
To make the code easier to read, you could transform it using the identities *(A + B) == A[B] and X << 8 == X * 256:
id = (file_instance->buffer[pos] * 256) + file_instance->buffer[pos + 1];
If buffer were an unsigned char *, then this code would be a common idiom for reading a 16-bit integer out of memory, where the first octet is the most-significant one. For example if the memory were { 0x01, 0x02 } then you can verify that this equation produces the integer value 0x0102.
However you said buffer is a char *. In C, char may either be a signed or an unsigned type. If it is unsigned on the system you are working on, then everything is fine. Also, everything is fine if the data you are reading never has the MSB set of any byte.
But if char is signed then the code causes undefined behaviour due to left-shifting a negative number, when the value at file_instance->buffer[0] is a negative value (i.e. has the MSB set). Also there may be further unexpected behaviour when adding two negative numbers.
If you are unable to run the code then it may be difficult to work out what the existing behaviour was, since it would be at the mercy of various hardware and compiler optimization details.
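A portable rewrite of the expression forces each byte to unsigned char before widening (a sketch assuming the same file_instance and pos as in the question):

unsigned char hi = (unsigned char)file_instance->buffer[pos];
unsigned char lo = (unsigned char)file_instance->buffer[pos + 1];

id = (unsigned short)((hi << 8) | lo);   /* well-defined for all byte values */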
If you can run code on the target system then you could try and see what happens with a code snippet like:
#include <stdio.h>

int main()
{
    char buf[2] = { 0xAA, 0xBB }; // put sample data here (both have the MSB set)
    int r = (buf[0] << 8) + buf[1];

    printf("%x\n", (unsigned)r);
}

Copy 6 byte array to long long integer variable

I have read a 6-byte unsigned char array from memory.
The endianness is big-endian here.
Now I want to assign the value that is stored in the array to an integer variable. I assume this has to be a long long, since it must hold up to 6 bytes.
At the moment I am assigning it this way:
unsigned char aFoo[6];
long long nBar;
// read values to aFoo[]...
// aFoo[0]: 0x00
// aFoo[1]: 0x00
// aFoo[2]: 0x00
// aFoo[3]: 0x00
// aFoo[4]: 0x26
// aFoo[5]: 0x8e
nBar = (aFoo[0] << 64) + (aFoo[1] << 32) + (aFoo[2] << 24) + (aFoo[3] << 16) + (aFoo[4] << 8) + (aFoo[5]);
A memcpy approach would be neat, but when I do this
memcpy(&nBar, &aFoo, 6);
the 6 bytes are being copied to the long long from the start and thus have padding zeros at the end.
Is there a better way than my assignment with the shifting?
What you want to accomplish is called de-serialisation or de-marshalling.
For values that wide, using a loop is a good idea, unless you really need the max. speed and your compiler does not vectorise loops:
uint8_t array[6];
...
uint64_t value = 0;
uint8_t *p = array;

for ( int i = (sizeof(array) - 1) * 8 ; i >= 0 ; i -= 8 )
    value |= (uint64_t)*p++ << i;

// left-align the value in the 64-bit variable
value <<= 64 - (sizeof(array) * 8);
Note: use the stdint.h types here; sizeof(uint8_t) cannot differ from 1, and only these types are guaranteed to have the expected bit-widths. Also use unsigned integers when shifting values: right-shifting a negative value is implementation-defined, while left-shifting one invokes undefined behaviour.
Iff you need a signed value, just
int64_t final_value = (int64_t)value;
after the shifting. This is still implementation-defined, but all modern implementations (and likely the older ones) just copy the value without modification. A modern compiler will likely optimize this away, so there is no penalty.
The declarations can be moved, of course. I just put them before where they are used for completeness.
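If you only need the value right-aligned, as in the question (nBar ending up as 0x268e), a simpler variant of the same loop skips the left-align step (a sketch reusing aFoo from the question):

uint64_t nBar = 0;

for ( size_t k = 0; k < 6; k++ )   /* big-endian: the first byte is the MSB */
    nBar = (nBar << 8) | aFoo[k];
/* with aFoo = {0x00,0x00,0x00,0x00,0x26,0x8e}, nBar == 0x268e */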
You might try
nBar = 0;
memcpy((unsigned char*)&nBar + 2, aFoo, 6);
Note that this assumes an 8-byte, big-endian long long, matching the data. No & is needed before an array name, because it already decays to the address of its first element.
Another way to do what you need is to use a union:
#include <stdio.h>

typedef union {
    struct {
        char padding[2];
        char aFoo[6];
    } chars;
    long long nBar;
} Combined;

int main ()
{
    Combined x;

    // reset the content of "x"
    x.nBar = 0; // or memset(&x, 0, sizeof(x));

    // put values directly in x.chars.aFoo[]...
    // (note: this overlay yields 0x268e only on a big-endian host
    // with an 8-byte long long)
    x.chars.aFoo[0] = 0x00;
    x.chars.aFoo[1] = 0x00;
    x.chars.aFoo[2] = 0x00;
    x.chars.aFoo[3] = 0x00;
    x.chars.aFoo[4] = 0x26;
    x.chars.aFoo[5] = 0x8e;

    printf("nBar: %llx\n", x.nBar);
    return 0;
}
The advantage: the code is more clear, there is no need to juggle with bits, shifts, masks etc.
However, you have to be aware that, for speed optimization and hardware reasons, the compiler might squeeze padding bytes into the struct, leading to aFoo not overlaying the desired bytes of nBar. This minor disadvantage can be solved by telling the compiler to pack the members of the union at byte boundaries (as opposed to the default, which is alignment at word boundaries, the word being 32-bit or 64-bit depending on the hardware architecture).
This is traditionally achieved using a #pragma pack directive, whose exact syntax depends on the compiler you use.
Since C11/C++11, the alignas() specifier became the standard way to specify the alignment of struct/union members (given that your compiler supports it), although note that alignas() can only request stricter alignment, not packing.

CRC bit-order confusion

I'm calculating a CCITT CRC-16 bit by bit. I do it that way because it's a prototype that will later be ported to VHDL and end up in hardware to check a serial bit stream.
On the net I found code for a single-bit CRC-16 update step. I wrote a test program and it works. Except for one strange thing: I have to feed the bits of a byte from lowest to highest. If I do it this way, I get correct results.
In the CCITT definition of CRC-16, though, the bits should be fed highest bit to lowest. The data stream whose CRC I want to calculate comes in this format as well, so my current code is kind of useless for me.
I'm confused. I would have not expected that feeding the bits the wrong way around could work at all.
Question: Why is it possible that a CRC can be written to take the data in two different bit-orders, and how do I transform my single-bit update code so that it accepts the data MSB first?
For reference, here is the relevant code. Initialization and the final check have been removed to keep the example short:
typedef unsigned char bit;

void update_crc_single_bit (bit * crc, bit data)
{
    // update CRC for a single bit:
    bit temp[16];
    int i;

    temp[0]  = data ^ crc[15];
    temp[1]  = crc[0];
    temp[2]  = crc[1];
    temp[3]  = crc[2];
    temp[4]  = crc[3];
    temp[5]  = data ^ crc[4] ^ crc[15];
    temp[6]  = crc[5];
    temp[7]  = crc[6];
    temp[8]  = crc[7];
    temp[9]  = crc[8];
    temp[10] = crc[9];
    temp[11] = crc[10];
    temp[12] = data ^ crc[11] ^ crc[15];
    temp[13] = crc[12];
    temp[14] = crc[13];
    temp[15] = crc[14];

    for (i = 0; i < 16; i++)
        crc[i] = temp[i];
}

void update_crc_byte (bit * crc, unsigned char data)
{
    int j;

    // calculate CRC lowest bit first
    for (j = 0; j < 8; j++)
    {
        bit b = (data >> j) & 1;
        update_crc_single_bit(crc, b);
    }
}
Edit: Since there is some confusion here: I have to compute the CRC bit by bit, and for each byte MSB first. I can't simply store the bits because the code shown above is a prototype for something that will end up in hardware (without memory).
The code shown above generates the correct result if I feed in a bit-stream in the following order (shown is the index of the received bit. Each byte gets transmitted MSB first):
|- first byte -|- second byte -|- third byte
7,6,5,4,3,2,1,0,15,14,13,12,11,10,9,8,....
I need the single-bit update loop to be transformed so that it generates the same CRC using the natural order (i.e. as received):
|- first byte -|- second byte -|- third byte
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,....
If you look at the RevEng 16-bit CRC Catalogue, you see that there are two different CRCs called "CCITT", one of which is labeled there "CCITT-False". Somewhere along the way someone got confused about what the CCITT 16-bit CRC was, and that confusion was propagated widely. The two CRCs are described thusly, with the first one (KERMIT) being the true CCITT CRC:
KERMIT
width=16 poly=0x1021 init=0x0000 refin=true refout=true xorout=0x0000 check=0x2189 name="KERMIT"
and
CRC-16/CCITT-FALSE
width=16 poly=0x1021 init=0xffff refin=false refout=false xorout=0x0000 check=0x29b1 name="CRC-16/CCITT-FALSE"
You will note that the real one is reflected, and the false one is not, and there is another difference in the initialization. In reflected CRCs, the lowest bit of the data is processed first, so it appears that you are trying to compute the true CCITT CRC.
When the CRC is reflected, so is the order of the bits in the polynomial that is exclusive-ored into the register, so 0x1021 becomes 0x8408. Here is a simple C implementation that you can check against:
#include <stddef.h>

#define POLY 0x8408

unsigned crc16_ccitt(unsigned crc, unsigned char *buf, size_t len)
{
    int k;

    while (len--) {
        crc ^= *buf++;
        for (k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
    }
    return crc;
}
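You can verify that implementation against the catalogue's check value, which is the CRC of the ASCII string "123456789":

unsigned crc = crc16_ccitt(0, (unsigned char *)"123456789", 9);
/* crc == 0x2189, the KERMIT check value from the RevEng catalogue */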
I don't know what you mean by "In the CCITT definition of CRC-16 the bits should be feed highest bit to lowest bit though". What definition are you referring to?
In this Altera document, you can see the shift-register implementation of the CRC for a hardware implementation. [The diagram from that document is not reproduced here.]
For your code, you need to reverse your register indices in temp[]: temp[0] becomes temp[15], and so on.
Update - If you look at:
RevEng 16-bit CRC Catalogue
there's a link to:
Online CRC calculator
The first three labeled as CRC-CCITT operate on data sent or received MSB to LSB using the polynomial 0x11021. The only difference is the starting value:
CRC-CCITT (XModem) - crc initialized to 0x0000, same as prefixing by 0x0000.
CRC-CCITT (0xFFFF) - crc initialized to 0xFFFF, same as prefixing by 0x84CF.
CRC-CCITT (0x1D0F) - crc initialized to 0x1D0F, same as prefixing by 0xFFFF.
So my guess is that you want to use one of these three.
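If you keep the bit-at-a-time structure but feed the bits MSB first, the register shifts left and the feedback comes from the top bit. Here is a minimal sketch with the register held in an unsigned instead of a bit array (initial value 0x0000 assumed, i.e. the XModem variant; adjust to your definition):

/* one bit, MSB first: feedback is the XOR of the register's top bit
   and the incoming data bit; polynomial 0x1021 */
unsigned crc16_update_bit_msb(unsigned crc, unsigned data_bit)
{
    unsigned feedback = ((crc >> 15) ^ data_bit) & 1;

    crc = (crc << 1) & 0xFFFF;
    if (feedback)
        crc ^= 0x1021;
    return crc;
}

/* one byte, MSB first */
unsigned crc16_update_byte_msb(unsigned crc, unsigned char data)
{
    for (int j = 7; j >= 0; j--)
        crc = crc16_update_bit_msb(crc, (data >> j) & 1);
    return crc;
}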
Normally, bits are transferred on the line least significant bit first. So, in case you have an array of bytes, the first bit is the least significant bit of the first byte, then comes the next-to-least significant bit, and so on up to the most significant bit of the first byte, followed by the least significant bit of the next byte. This is the order of the bits (coefficients) in the polynomial division you are making. Try my routines at https://github.com/mojadita/crc.git (there is a table there for CRC16-CCITT).

How to convert from integer to unsigned char in C, given integers larger than 256?

As part of my CS course I've been given some functions to use. One of these functions takes a pointer to unsigned chars to write some data to a file (I have to use this function, so I can't just make my own purpose built function that works differently BTW). I need to write an array of integers whose values can be up to 4095 using this function (that only takes unsigned chars).
However am I right in thinking that an unsigned char can only have a max value of 256 because it is 1 byte long? I therefore need to use 4 unsigned chars for every integer? But casting doesn't seem to work with larger values for the integer. Does anyone have any idea how best to convert an array of integers to unsigned chars?
Usually an unsigned char holds 8 bits, with a max value of 255. If you want to know this for your particular compiler, print out CHAR_BIT and UCHAR_MAX from <limits.h>. You could extract the individual bytes of a 32-bit int:
#include <stdint.h>

void
pack32(uint32_t val, uint8_t *dest)
{
    dest[0] = (val & 0xff000000) >> 24;
    dest[1] = (val & 0x00ff0000) >> 16;
    dest[2] = (val & 0x0000ff00) >> 8;
    dest[3] = (val & 0x000000ff);
}

uint32_t
unpack32(uint8_t *src)
{
    uint32_t val;

    val  = (uint32_t)src[0] << 24;   /* cast before shifting into the sign bit */
    val |= src[1] << 16;
    val |= src[2] << 8;
    val |= src[3];
    return val;
}
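For example, round-tripping one of the 12-bit values from the question:

uint8_t buf[4];
pack32(4095u, buf);            /* buf = { 0x00, 0x00, 0x0F, 0xFF } */
uint32_t v = unpack32(buf);    /* v == 4095 */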
Unsigned char generally has a size of 1 byte, so you can decompose any other type into an array of unsigned chars (e.g. a 4-byte int into an array of 4 unsigned chars). Your exercise is probably about generics. You should write the file as a binary file using the fwrite() function, just writing byte after byte into the file.
The following example should write a buffer (of any data type) to the file. I am not sure if it works since you are forcing the cast to unsigned char * instead of void *.
#include <stdio.h>
#include <stddef.h>

int homework(unsigned char *foo, size_t size)
{
    // open file for binary writing
    FILE *f = fopen("work.txt", "wb");
    if (f == NULL)
        return 1;

    // write the data to the file, byte by byte
    fwrite(foo, sizeof(char), size, f);

    fclose(f);
    return 0;
}
I hope the given example at least gives you a starting point.
Yes, you're right; a char/byte only holds 8 bits, which gives 2^8 distinct values, i.e. zero to 2^8 - 1, or zero to 255. Do something like this to get at the bytes:
int x = 0;
char* p = (char*)&x;

for (int i = 0; i < sizeof(x); i++)
{
    // Do something with p[i]
}
(Declaring i inside the for loop requires C99, but whatever... it's more readable. :) )
Do note that this code may not be portable, since it depends on the processor's internal storage of an int.
If you have to write an array of integers then just convert the array into a pointer to char then run through the array.
int main()
{
int data[] = { 1, 2, 3, 4 ,5 };
size_t size = sizeof(data)/sizeof(data[0]); // Number of integers.
unsigned char* out = (unsigned char*)data;
for(size_t loop =0; loop < (size * sizeof(int)); ++loop)
{
MyProfSuperWrite(out + loop); // Write 1 unsigned char
}
}
Now, people have mentioned that 4095 will fit in fewer bits than a normal integer. Probably true. Thus you can save space and not write out the top bits of each integer. Personally I think this is not worth the effort: the extra code to write the values and process the incoming data is not worth the savings you would get (maybe if the data were the size of the Library of Congress). Rule one: do as little work as possible (it's easier to maintain). Rule two: optimize only if asked (but ask why first). You may save space, but it will cost in processing time and maintenance.
The part of the assignment saying integers whose values can be up to 4095 using this function (that only takes unsigned chars) should be giving you a huge hint: 4095 unsigned is 12 bits.
You can store the 12 bits in a 16-bit short, but that is somewhat wasteful of space -- you are only using 12 of the 16 bits. Since you are dealing with more than one byte per value in the conversion, you may need to deal with the endianness of the result. This is the easiest option (sketched below).
You could also use a bit field or some packed binary structure if you are concerned about space, but that is more work.
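A minimal sketch of the 16-bits-per-value option, two unsigned chars per integer with the high byte first (the function names here are made up for illustration):

/* split a value up to 4095 into two bytes, big-endian */
void put12(unsigned char out[2], unsigned v)
{
    out[0] = (v >> 8) & 0xFF;   /* top 4 bits; upper nibble is always zero */
    out[1] = v & 0xFF;          /* low 8 bits */
}

/* reassemble the value on the way back in */
unsigned get12(const unsigned char in[2])
{
    return ((unsigned)in[0] << 8) | in[1];
}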
It sounds like what you really want to do is call sprintf to get a string representation of your integers. This is a standard way to convert from a numeric type to its string representation. Something like the following might get you started:
char num[5]; // Room for "4095" plus the terminating '\0'
int i;

// array is the array of integers, and arrayLen is its length
for (i = 0; i < arrayLen; i++)
{
    sprintf (num, "%d", array[i]);
    // Call your function that expects a pointer to chars
    printfunc (num);
}
Without information on the function you are directed to use regarding its arguments, return value and semantics (i.e. the definition of its behaviour), it is hard to answer. One possibility is:
Given:
void theFunction(unsigned char* data, int size);
then
int array[SIZE_OF_ARRAY];
theFunction((unsigned char*)array, sizeof(array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(*array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(int));
All of which will pass all of the data to theFunction(), but whether that makes any sense will depend on what theFunction() does.
