Suppose I have four unsigned ints, {a, b, c, d}, which I want to pack with non-standard lengths, {9,5,7,11} respectively. I wish to make a network packet (unsigned char pkt[4]) that I can pack these values into and unpack them reliably on another machine using the same header file, regardless of endianness.
Everything I have read about using packed structs suggests that the bit-ordering will not be predictable so that is out of the question. So that leaves me with bit-set and bit-clear operations, but I'm not confident in how to ensure that endianness will not cause me problems. Is the following sufficient, or shall I run into problems with the endianness of a and d separately?
void pack_pkt(uint16_t a, uint8_t b, uint8_t c, uint16_t d, uint8_t *pkt){
    uint32_t pkt_h = ((uint32_t)a & 0x1FF)          // 9 bits
                   | (((uint32_t)b & 0x1F) << 9)    // 5 bits
                   | (((uint32_t)c & 0x7F) << 14)   // 7 bits
                   | (((uint32_t)d & 0x7FF) << 21); // 11 bits
    *pkt = htonl(pkt_h);
}
void unpack_pkt(uint16_t *a, uint8_t *b, uint8_t *c, uint16_t *d, uint8_t *pkt){
    uint32_t pkt_h = ntohl(*pkt);
    *a = pkt_h & 0x1FF;
    *b = (pkt_h >> 9) & 0x1F;
    *c = (pkt_h >> 14) & 0x7F;
    *d = (pkt_h >> 21) & 0x7FF;
}
If so, what other measures can I take to ensure portability?
Structs with bit-fields are indeed essentially useless for this purpose: their field order and even their padding rules are implementation-defined.
shall I run into problems with the endianness of a and d separately?
The endianness of a and d doesn't matter; their byte order is never used. a and d are never reinterpreted as raw bytes, only their integer values are read or assigned, and in those cases endianness does not enter the picture.
There is another problem, though: uint8_t *pkt in combination with *pkt = htonl(pkt_h); means that only the least significant byte is stored (regardless of whether it is executed by a little-endian or big-endian machine, because this is not a reinterpretation, it's an implicit conversion). uint8_t *pkt is OK by itself, but then the resulting group of 4 bytes must be copied into the buffer it points to; it cannot be assigned all in one go. uint32_t *pkt would enable such a single assignment to work without losing data, but that makes the function less convenient to use.
Similarly in unpack_pkt, only one byte of data is currently used.
When those issues are fixed, it should be good:
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h> // htonl/ntohl

void pack_pkt(uint16_t a, uint8_t b, uint8_t c, uint16_t d, uint8_t *buffer){
    uint32_t pkt_h = ((uint32_t)a & 0x1FF)          // 9 bits
                   | (((uint32_t)b & 0x1F) << 9)    // 5 bits
                   | (((uint32_t)c & 0x7F) << 14)   // 7 bits
                   | (((uint32_t)d & 0x7FF) << 21); // 11 bits
    uint32_t pkt = htonl(pkt_h);
    memcpy(buffer, &pkt, sizeof(uint32_t));
}
void unpack_pkt(uint16_t *a, uint8_t *b, uint8_t *c, uint16_t *d, const uint8_t *buffer){
    uint32_t pkt;
    memcpy(&pkt, buffer, sizeof(uint32_t));
    uint32_t pkt_h = ntohl(pkt);
    *a = pkt_h & 0x1FF;
    *b = (pkt_h >> 9) & 0x1F;
    *c = (pkt_h >> 14) & 0x7F;
    *d = (pkt_h >> 21) & 0x7FF;
}
An alternative that works without worrying about endianness at any point is manually deconstructing the uint32_t (rather than conditionally byte-swapping it with htonl and then reinterpreting it as raw bytes), for example:
void pack_pkt(uint16_t a, uint8_t b, uint8_t c, uint16_t d, uint8_t *pkt){
    uint32_t pkt_h = ((uint32_t)a & 0x1FF)          // 9 bits
                   | (((uint32_t)b & 0x1F) << 9)    // 5 bits
                   | (((uint32_t)c & 0x7F) << 14)   // 7 bits
                   | (((uint32_t)d & 0x7FF) << 21); // 11 bits
    // example serializing the bytes in big endian order, regardless of host endianness
    pkt[0] = pkt_h >> 24;
    pkt[1] = pkt_h >> 16;
    pkt[2] = pkt_h >> 8;
    pkt[3] = pkt_h;
}
The original approach isn't bad; this is just an alternative, something to consider. Since nothing is ever reinterpreted, endianness does not matter at all, which may increase confidence in the correctness of the code. As a downside, it requires more code to get the same thing done. By the way, even though manually deconstructing the uint32_t and storing 4 separate bytes looks like a lot of work, GCC can compile it efficiently into a bswap and a single 32-bit store. On the other hand, Clang misses this opportunity, and other compilers may as well, so this is not without its drawbacks.
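For symmetry, a matching manual unpack under the same big-endian byte order might look like this (a sketch, not part of the original answer):
void unpack_pkt(uint16_t *a, uint8_t *b, uint8_t *c, uint16_t *d, const uint8_t *pkt){
    // reassemble the uint32_t from big endian bytes, regardless of host endianness
    uint32_t pkt_h = ((uint32_t)pkt[0] << 24)
                   | ((uint32_t)pkt[1] << 16)
                   | ((uint32_t)pkt[2] << 8)
                   |  (uint32_t)pkt[3];
    *a = pkt_h & 0x1FF;
    *b = (pkt_h >> 9) & 0x1F;
    *c = (pkt_h >> 14) & 0x7F;
    *d = (pkt_h >> 21) & 0x7FF;
}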
For packing and unpacking I suggest using a struct like this.
Remember that the size of a struct can differ between machines (an 8-bit system and a 32-bit system may compile the same struct to different sizes because of padding), so use packing to be sure the struct size is the same in the transmitter and the receiver.
typedef struct {
    uint8_t A;
    uint8_t B;
    uint8_t C;
    uint8_t D;
} MyPacket;
Now you can stream this struct into a byte stream such as a serial port, UART, or something else, and in the receiver you can put the bytes back together.
See the following functions:
void transmitPacket(MyPacket* packet) {
    int len = sizeof(MyPacket);
    uint8_t* pData = (uint8_t*) packet;
    while (len-- > 0) {
        // send bytes 1 by 1
        transmitByte(*pData++);
    }
}

void receivePacket(MyPacket* packet) {
    int len = sizeof(MyPacket);
    uint8_t* pData = (uint8_t*) packet;
    while (len-- > 0) {
        // receive bytes 1 by 1
        *pData++ = receiveByte();
    }
}
Remember that bit ordering within a byte is the same everywhere, but you must check your byte ordering to be sure the packet will not be misunderstood by the receiver.
For example, if the size of your packet is 4 bytes and you send the low byte first, you have to receive the low byte first on the other side.
In your code you receive the packet through a uint8_t* pointer, but your actual packet is a uint32_t, which is 4 bytes.
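If the packet contained a multi-byte field, the byte order would have to be pinned down explicitly. A sketch (the uint16_t field Seq and the struct name are hypothetical, not from the answer above), sending the low byte first:
typedef struct {
    uint8_t A;
    uint8_t B;
    uint16_t Seq; /* hypothetical multi-byte field, for illustration only */
} MyPacket2;

void transmitPacket2(const MyPacket2* packet) {
    transmitByte(packet->A);
    transmitByte(packet->B);
    transmitByte(packet->Seq & 0xFF);        /* low byte first */
    transmitByte((packet->Seq >> 8) & 0xFF); /* then high byte */
}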
Related
I am dealing with a situation where I need to send/receive data via a TCP/IP socket between myself (client) and a server. The message structure is proprietary, but is basically arrays of uint32_t. I am tasked with handling the endian conversion on my end. As the client, I am operating in Windows (little endian). The server is operating in a VxWorks environment (big endian). Therefore, I need to convert data I send from little to big, and data I receive from big to little.
Now, I am aware that endianness refers to BYTE order within a word. So, I created a function that would do the byte swapping for each uint32_t word in a given array. See below.
void convertEndian(uint32_t inputData[], int size)
{
    uint32_t b1, b2, b3, b4;
    for (int i = 0; i < size; ++i)
    {
        b1 = (inputData[i] & 0xFF000000) >> 24;
        b2 = (inputData[i] & 0x00FF0000) >> 8;
        b3 = (inputData[i] & 0x0000FF00) << 8;
        b4 = (inputData[i] & 0x000000FF) << 24;
        inputData[i] = b1 | b2 | b3 | b4;
    }
}
This approach is fine for certain message types I'll be dealing with, where each word is defined by an entire uint32_t value. However, some messages have many words that have their own unique bit fields. Below is an example of one:
struct test
{
    unsigned int var1 : 16;
    unsigned int var2 : 12;
    unsigned int var3 : 1;
    unsigned int var4 : 1;
    unsigned int var5 : 1;
    unsigned int var6 : 1;
};
How do I implement endian conversion for such cases? There is one message type, for example, where I will be receiving an array of about 32 uint32_t words, and each of those words has its own set of bit fields representing various things.
I guess my only choice is to mask/shift each word as needed. But then I will pretty much have to write a unique function for each of the 32 words. It seems very labor intensive.
Any suggestions would be appreciated.
This is hopefully without too many assumptions.
Totally untested, humour not marked.
static_assert(sizeof(test) == sizeof (uint32_t), "Houston we have a problem");
template <typename NetOrHost>
void convertEndian(uint32_t inputData[], int size) {
    for (int i = 0; i < size; ++i)
        inputData[i] = NetOrHost(inputData[i]);
}
// or simply use
std::for_each(inputData, inputData+size, htonl /* ntohl for the other way */); // this part needs a bit of testing ...
// Now encoding the test struct
// Using https://stackoverflow.com/a/20194422/4013258 BitCount
// or just use magic values, *hands OP a foot-gun*
// Once only
std::array<int, 6> bits; // todo make this constexpr
bits[0] = BitCount(test.var1);
bits[1] = BitCount(test.var2);
etc ...
static_assert(std::accumulate(bits.begin(), bits.end(), 0) == 32, "Preconditions violated");
// this goes well because we know that it fits in a 32-bit unsigned integer.
uint32_t encoded = 0;
encoded = test.var1;
encoded <<= bits[1];
encoded += test.var2;
encoded <<= bits[2];
encoded += test.var3;
etc.
// To decode
uint32_t decode = inputData[i];
test.var6 = decode & ((1 << bits[5])-1); // or make a mask array
decode >>= bits[5];
etc.
This would be a lot easier with reflection ...
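For what it's worth, the same mask/shift idea can also be written in plain C without templates. A sketch follows; the bit layout (var1 in the most significant bits, matching the encode order above) is an assumption that the actual message spec must confirm, and the helper names are mine. It reuses struct test from the question:
#include <stdint.h>

/* pack var1..var6 into a uint32_t, var1 in the most significant bits */
uint32_t encode_test(const struct test *t)
{
    return ((uint32_t)t->var1 << 16)  /* 16 bits: 31..16 */
         | ((uint32_t)t->var2 << 4)   /* 12 bits: 15..4  */
         | ((uint32_t)t->var3 << 3)
         | ((uint32_t)t->var4 << 2)
         | ((uint32_t)t->var5 << 1)
         |  (uint32_t)t->var6;
}

/* inverse of encode_test */
void decode_test(struct test *t, uint32_t w)
{
    t->var1 = (w >> 16) & 0xFFFF;
    t->var2 = (w >> 4) & 0xFFF;
    t->var3 = (w >> 3) & 1;
    t->var4 = (w >> 2) & 1;
    t->var5 = (w >> 1) & 1;
    t->var6 = w & 1;
}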
My goal is to save a long in four bytes like this:
unsigned char bytes[4];
unsigned long n = 123;
bytes[0] = (n >> 24) & 0xFF;
bytes[1] = (n >> 16) & 0xFF;
bytes[2] = (n >> 8) & 0xFF;
bytes[3] = n & 0xFF;
But I want the code to be portable, so I use CHAR_BIT from <limits.h>:
unsigned char bytes[4];
unsigned long n = 123;
bytes[0] = (n >> (CHAR_BIT * 3)) & 0xFF;
bytes[1] = (n >> (CHAR_BIT * 2)) & 0xFF;
bytes[2] = (n >> CHAR_BIT) & 0xFF;
bytes[3] = n & 0xFF;
The problem is that the bitmask 0xFF only accounts for eight bits, which is not necessarily equivalent to one byte. Is there a way to make the above code completely portable for all platforms?
How about something like:
unsigned long mask = 1;
mask <<= CHAR_BIT;
mask -= 1;
and then using this as the mask instead of 0xFF?
Test program:
#include <stdio.h>

int main() {
#define MY_CHAR_BIT_8 8
#define MY_CHAR_BIT_9 9
#define MY_CHAR_BIT_10 10
#define MY_CHAR_BIT_11 11
#define MY_CHAR_BIT_12 12
    {
        unsigned long mask = 1;
        mask <<= MY_CHAR_BIT_8;
        mask -= 1;
        printf("%lx\n", mask);
    }
    {
        unsigned long mask = 1;
        mask <<= MY_CHAR_BIT_9;
        mask -= 1;
        printf("%lx\n", mask);
    }
    {
        unsigned long mask = 1;
        mask <<= MY_CHAR_BIT_10;
        mask -= 1;
        printf("%lx\n", mask);
    }
    {
        unsigned long mask = 1;
        mask <<= MY_CHAR_BIT_11;
        mask -= 1;
        printf("%lx\n", mask);
    }
    {
        unsigned long mask = 1;
        mask <<= MY_CHAR_BIT_12;
        mask -= 1;
        printf("%lx\n", mask);
    }
}
Output:
ff
1ff
3ff
7ff
fff
I work almost exclusively with embedded systems where I rather often have to provide portable code between all manner of more or less exotic systems. Like writing code which will work both on some tiny 8 bit MCU and a x86_64.
But even for me, bothering with portability to exotic obsolete DSP systems and the like is a huge waste of time. These systems barely exist in the real world - why exactly do you need portability to them? Is there any other reason than "showing off" mostly useless language lawyer knowledge of C? In my experience, 99% of all such useless portability concerns boil down to programmers "showing off", rather than an actual requirement specification.
And even if you for some strange reason do need such portability, this task doesn't make any sense to begin with, since neither char nor long is portable! If char is not 8 bits, then what makes you think long is 4 bytes? It could be 2 bytes, it could be 8 bytes, or it could be something else.
If portability is an actual concern, then you must use stdint.h. Then if you truly must support exotic systems, you have to decide which ones. The only real-world computers I know of that actually do use different byte sizes are various obsolete exotic TI DSPs from the 1990s, which use 16-bit bytes/char. Let's assume this is your intended target which you have decided is important to support.
Let's also assume that a standard C compiler (ISO 9899) exists for that exotic target, which is highly unlikely. (More likely you'll get a poorly conforming, mostly broken legacy C90 thing... or, even more likely, those who use the target write everything in assembler.) In the case of a standard C compiler, it will not implement uint8_t, since it's not a mandatory type if the target doesn't support it. Only uint_least8_t and uint_fast8_t are mandatory.
Then you'd go about it like this:
#include <stdint.h>
#include <limits.h>

#if CHAR_BIT == 8
static void uint32_to_uint8 (uint8_t dst[4], uint32_t u32)
{
    dst[0] = (u32 >> 24) & 0xFF;
    dst[1] = (u32 >> 16) & 0xFF;
    dst[2] = (u32 >> 8) & 0xFF;
    dst[3] = (u32 >> 0) & 0xFF;
}
#endif
// whatever other conversion functions you need:
static void uint32_to_uint16 (uint16_t dst[2], uint32_t u32){ ... }
static void uint64_to_uint16 (uint16_t dst[4], uint64_t u64){ ... }
The exotic DSP will then use the uint32_to_uint16 function. You could use the same compiler #if CHAR_BIT checks to do #define byte_to_word uint32_to_uint16, etc.
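For illustration, the CHAR_BIT == 16 counterpart might look like this (my sketch, not part of the answer above):
#if CHAR_BIT == 16
static void uint32_to_uint16 (uint16_t dst[2], uint32_t u32)
{
    dst[0] = (u32 >> 16) & 0xFFFF; /* high 16-bit word first, mirroring the big endian order above */
    dst[1] = (u32 >> 0) & 0xFFFF;
}
#endif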
And then you should also immediately notice that endianness will be the next major portability concern. I have no idea what endianness obsolete DSPs tend to use, but that's another question.
What about:
unsigned long mask = (unsigned char)-1;
This will work because the C standard says in 6.3.1.3:
1 When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
And unsigned long can represent all values of unsigned char.
#include <stdio.h>
#include <limits.h>

#define CHARMASK ((1UL << CHAR_BIT) - 1)

int main(void)
{
    printf("0x%lx\n", CHARMASK);
}
And the mask will always have the width of a char. It is calculated at compile time, with no additional variables needed.
Or
#define CHARMASK ((unsigned char)(~0))
You can do it without the masks as well
#include <stdio.h>
#include <limits.h>

/* assumes unsigned int is exactly 4 chars wide */
void foo(unsigned int n, unsigned char *bytes)
{
    bytes[0] = ((n << (CHAR_BIT * 0)) >> (CHAR_BIT * 3));
    bytes[1] = ((n << (CHAR_BIT * 1)) >> (CHAR_BIT * 3));
    bytes[2] = ((n << (CHAR_BIT * 2)) >> (CHAR_BIT * 3));
    bytes[3] = ((n << (CHAR_BIT * 3)) >> (CHAR_BIT * 3));
}

int main(void)
{
    unsigned int z = 0xaabbccdd;
    unsigned char bytes[4];
    foo(z, bytes);
    printf("0x%02x 0x%02x 0x%02x 0x%02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
}
I need to write a function that copies the specified number of low-order bytes of a given integer into an address in memory, whilst preserving their order.
void lo_bytes(uint8_t *dest, uint8_t no_bytes, uint32_t val)
I expect the usage to look like this:
uint8_t dest[3];
lo_bytes(dest, 3, 0x44332211);
// Big-endian: dest = 33 22 11
// Little-endian: dest = 11 22 33
I've tried to implement the function using bit-shifts, memcpy, and iterating over each byte of val with a for-loop, but all of my attempts failed to work on either one or the other endianness.
Is it possible to do this in a platform-independent way, or do I need to use #ifdefs and have a separate piece of code for each endianness?
I've tried to implement the function using bit-shifts, memcpy, and
iterating over each byte of val with a for-loop, but all of my
attempts failed to work on either one or the other endianness.
All arithmetic, including bitwise arithmetic, is defined in terms of the values of the operands, not their representations. This cannot be sufficient for you because you want to obtain a result that differs depending on details of the representation style for type uint32_t.
You can operate on object representations via various approaches, but you still need to know which bytes to operate upon. That calls for some form of detection. If big-endian and little-endian are the only byte orders you're concerned with supporting, then I favor an approach similar to that given in #P__J__'s answer:
void lo_bytes(uint8_t *dest, uint8_t no_bytes, uint32_t val) {
    static const union { uint32_t i; uint8_t a[4]; } ubytes = { 1 };
    memcpy(dest, (const uint8_t *)&val + (1 - ubytes.a[0]) * (4 - no_bytes), no_bytes);
}
The expression (1 - ubytes.a[0]) evaluates to 1 if the representation of uint32_t is big endian, in which case the high-order bytes occur at the beginning of the representation of val. In that case, we want to skip the first 4 - no_bytes of the representation and copy the rest. If uint32_t has a little-endian representation, on the other hand, (1 - ubytes.a[0]) will evaluate to 0, with the result that the memcpy starts at the beginning of the representation. In every case, whichever bytes are copied from the representation of val, their order is maintained. That's what memcpy() does.
Is it possible to do this in a platform-independent way, or do I need to use #ifdefs and have a separate piece of code for each endianness?
No, that doesn't even make sense. Anything that cares about a specific characteristic of a platform (e.g. endianness) can't be platform independent.
Example 1 (platform independent):
// Copy the 3 least significant bytes to dest[]
dest[0] = value & 0xFF;
dest[1] = (value >> 8) & 0xFF;
dest[2] = (value >> 16) & 0xFF;
Example 2 (platform independent):
// Copy the 3 most significant bytes to dest[]
dest[0] = (value >> 8) & 0xFF;
dest[1] = (value >> 16) & 0xFF;
dest[2] = (value >> 24) & 0xFF;
Example 3 (platform dependent):
// I want the least significant bytes on some platforms and the most significant bytes on other platforms
#ifdef PLATFORM_TYPE_A
    dest[0] = value & 0xFF;
    dest[1] = (value >> 8) & 0xFF;
    dest[2] = (value >> 16) & 0xFF;
#endif
#ifdef PLATFORM_TYPE_B
    dest[0] = (value >> 8) & 0xFF;
    dest[1] = (value >> 16) & 0xFF;
    dest[2] = (value >> 24) & 0xFF;
#endif
Note that it makes no real difference what the cause of the platform dependence is (if it's endianness or something else), as soon as you have a platform dependence you can't have platform independence.
int detect_endianess(void) /* 1 if little endian, 0 if big endian */
{
    union
    {
        uint16_t u16;
        uint8_t u8[2];
    } val = {.u16 = 0x1122};

    return val.u8[0] == 0x22;
}
void lo_bytes(void *dest, uint8_t no_bytes, uint32_t val)
{
    if (detect_endianess())
    {
        memcpy(dest, &val, no_bytes);
    }
    else
    {
        memcpy(dest, (uint8_t *)(&val) + sizeof(val) - no_bytes, no_bytes);
    }
}
Many implementations of htonl() or ntohl() test for the endianness of the platform first and then reduce to either a no-op or a byte swap.
I once read a page on the web about a few tricks to handle to/from big/little-endian conversions, without any preconceived knowledge of the hardware configuration. Just taking endianness for what it is: a representation of integers in memory. But I could not find it again, so I wrote this:
typedef union {
    uint8_t b[4];
    uint32_t i;
} swap32_T;

uint32_t to_big_endian(uint32_t x) {
    /* convert to big endian, whatever the endianness of the platform */
    swap32_T y;
    y.b[0] = (x & 0xFF000000) >> 24;
    y.b[1] = (x & 0x00FF0000) >> 16;
    y.b[2] = (x & 0x0000FF00) >> 8;
    y.b[3] = (x & 0x000000FF);
    return y.i;
}
My two questions are:
Do you know a cleaner way to write this to_big_endian() function?
Did you ever bookmark this mysterious page I cannot find, which contained very precious (because unusual) advice on endianness?
edit
not really a duplicate (even if very close), mainly because I do not want to detect endianness. The same code compiles on both architectures, with the same result:
little endian
for u = 0x12345678 (stored as 0x78 0x56 0x34 0x12)
to_big_endian(u) = 0x78563412 (stored as 0x12 0x34 0x56 0x78)
big endian
for u = 0x12345678 (stored as 0x12 0x34 0x56 0x78)
to_big_endian(u) = 0x12345678 (stored as 0x12 0x34 0x56 0x78)
same code, same result... in memory.
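A quick sanity check (my test sketch, not from the question; it assumes to_big_endian() from above is in scope): print the bytes of the result on whichever machine you have at hand.
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t be = to_big_endian(0x12345678);
    uint8_t bytes[4];
    memcpy(bytes, &be, sizeof bytes);
    /* expected output on both little and big endian hosts: 12 34 56 78 */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}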
Here is my own version of the same (although the memory convention in this example is little endian instead of big endian):
/* unoptimized version; solves endianness & alignment issues */
static U32 readLE32 (const BYTE* srcPtr)
{
    U32 value32 = srcPtr[0];
    value32 += (srcPtr[1]<<8);
    value32 += (srcPtr[2]<<16);
    value32 += (srcPtr[3]<<24);
    return value32;
}

static void writeLE32 (BYTE* dstPtr, U32 value32)
{
    dstPtr[0] = (BYTE)value32;
    dstPtr[1] = (BYTE)(value32 >> 8);
    dstPtr[2] = (BYTE)(value32 >> 16);
    dstPtr[3] = (BYTE)(value32 >> 24);
}
Basically, what's missing in your function prototype to make the code a bit easier to read is a pointer to the source or destination memory.
Depending on your intentions, this may or may not be an answer to your question. However, if all you want to do is convert various types to various endiannesses (including 64-bit types and little-endian conversions, which htonl obviously won't do), you may want to consider the htobe32 and related functions:
uint16_t htobe16(uint16_t host_16bits);
uint16_t htole16(uint16_t host_16bits);
uint16_t be16toh(uint16_t big_endian_16bits);
uint16_t le16toh(uint16_t little_endian_16bits);
uint32_t htobe32(uint32_t host_32bits);
uint32_t htole32(uint32_t host_32bits);
uint32_t be32toh(uint32_t big_endian_32bits);
uint32_t le32toh(uint32_t little_endian_32bits);
uint64_t htobe64(uint64_t host_64bits);
uint64_t htole64(uint64_t host_64bits);
uint64_t be64toh(uint64_t big_endian_64bits);
uint64_t le64toh(uint64_t little_endian_64bits);
These functions are technically non-standard, but they appear to be present on most Unices.
It should also be said, however, as Paul R rightly points out in the comments, that no runtime test of endianness is needed: the endianness is a fixed feature of a given ABI, so it is always a compile-time constant.
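For example, serializing a uint32_t into a byte buffer with these functions might look like this (a sketch; the helper name put_u32_be is mine, and on Linux/glibc the declarations come from <endian.h>, while some BSDs use <sys/endian.h>):
#include <endian.h> /* glibc; <sys/endian.h> on some BSDs */
#include <stdint.h>
#include <string.h>

void put_u32_be(uint8_t *buf, uint32_t x)
{
    uint32_t be = htobe32(x);    /* host to big endian */
    memcpy(buf, &be, sizeof be); /* store the 4 bytes as-is */
}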
Well ... That's certainly a workable solution, but I don't understand why you'd use a union. If you want an array of bytes, why not just have an array of bytes as an output pointer argument?
void uint32_to_big_endian(uint8_t *out, uint32_t x)
{
    out[0] = (x >> 24) & 0xff;
    out[1] = (x >> 16) & 0xff;
    out[2] = (x >> 8) & 0xff;
    out[3] = x & 0xff;
}
Also, it's often better code-wise to shift first, and mask later. It calls for smaller mask literals, which is often better for the code generator.
Well, here's my solution for a general signed/unsigned integer, independent of machine endianness, and of any size capable of storing the data (you need a version for each size, but the algorithm is the same):
AnyLargeEnoughInt fromBE(BYTE *p, size_t n)
{
    AnyLargeEnoughInt res = 0;
    while (n--) {
        res <<= 8;
        res |= *p++;
    }
    return res;
} /* net2host */

void toBE(BYTE *p, size_t n, AnyLargeEnoughInt val)
{
    p += n;
    while (n--) {
        *--p = val & 0xff;
        val >>= 8;
    }
} /* host2net */

AnyLargeEnoughInt fromLE(BYTE *p, size_t n)
{
    p += n;
    AnyLargeEnoughInt res = 0;
    while (n--) {
        res <<= 8;
        res |= *--p;
    }
    return res;
} /* net2host */

void toLE(BYTE *p, size_t n, AnyLargeEnoughInt val)
{
    while (n--) {
        *p++ = val & 0xff;
        val >>= 8;
    }
} /* host2net */
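Usage might look like this (my sketch, assuming BYTE is uint8_t and AnyLargeEnoughInt is uint64_t):
uint8_t buf[8];
uint64_t x = 0x1122334455667788ULL;
toBE(buf, 8, x);             /* buf = 11 22 33 44 55 66 77 88 */
uint64_t y = fromBE(buf, 8); /* y == x, regardless of host endianness */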
I need to store a port number in a char array so that it can be written to a socket. The protocol specifies that bytes 25 and 26 are for the port number. I have tried a variety of things but this is what I have now to store the number:
//msg is declared as char msg[50];
msg[25] = (uint8_t) (UDP_PORT / 256);
msg[26] = (uint8_t) (UDP_PORT % 256);
Then to recover the number on the other side of the socket I have:
uint8_t msb, lsb;
msb = msg[25];
lsb = msg[26];
int port = msb * 256 + lsb;
Any tips would be greatly appreciated.
Have you tried shift, AND, and OR operations?
msg[25] = (uint8_t) ((UDP_PORT >> 8) & 0xff);
msg[26] = (uint8_t) (UDP_PORT & 0xff);
and then
uint8_t msb, lsb;
msb = msg[25];
lsb = msg[26];
int port = (msb << 8) | lsb;
Are you sure UDP_PORT is a 16-bit value? ...because if it is 32-bit, the division will not work to isolate the msb. Of course the bit-wise operations would.
Most protocols that send an integer in a multibyte binary format will send the value in network byte order for consistency across platform boundaries. Use the socket API's htons() and ntohs() functions to convert your integer between host byte order (what the local machine natively uses) and network byte order, eg:
Sending:
uint16_t *ptr = (uint16_t*) &msg[25];
*ptr = htons(UDP_PORT);
Receiving:
uint16_t *ptr = (uint16_t*) &msg[25];
int port = ntohs(*ptr);
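If the unaligned write through a cast pointer is a concern, the same thing can be done with memcpy, in the spirit of earlier answers (my sketch, not part of this answer; htons comes from <arpa/inet.h> on POSIX, winsock2.h on Windows):
uint16_t port_be = htons(UDP_PORT);         /* host to network (big endian) */
memcpy(&msg[25], &port_be, sizeof port_be); /* two bytes, order preserved */
and to read it back:
uint16_t port_be2;
memcpy(&port_be2, &msg[25], sizeof port_be2);
int port = ntohs(port_be2);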
Maybe this is what you want:
/* if UDP_PORT is a variable, and local machine is big-endian */
unsigned char *ptr = (unsigned char *)&UDP_PORT;
msg[25] = *ptr;
msg[26] = *(ptr + 1);
then:
/* if msg is an address variable */
unsigned short *ptr = (unsigned short *)(msg + 25);
int port = 0x0000FFFF & (unsigned short)*ptr;
You can also do it like this. For example, here I have used an unsigned buffer; you can modify it according to your requirements.
unsigned char upBuff[3] = {0};
upBuff[1] = att_seq_no & 0xff;
upBuff[0] = att_seq_no >> 8;
printf("Updated Sequence Number(For Attendance Record Table)(IN HEXA)\
Now is(IN HEXA) : %02x %02x\n", upBuff[0], upBuff[1]);