Will the statement below calculate the length of the array?
UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]);
/////////////////////////////////////////////////////////////////////////////////////
unsigned char UART1_BUF[128];
void apple_Build_SetFIDTokenValues(void)
/* apple_Build_SetFIDTokenValues -
*
* This function builds the apple protocol StartIDPS() command.
*/
{
unsigned char * lcl_ptr;
UART1_BUF[0] = BT_START_OF_PACKET;
UART1_BUF[1] = 0x00;
//BundleSeedIDPrefToken
lcl_ptr = apple_Build_BundleSeedIDPrefToken(&UART1_BUF[1]);
UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]);
*lcl_ptr = apple_checksum((unsigned char *)UART1_BUF, UART1_BUF[1]);
UART1_BUF[UART1_BUF[1]] = *lcl_ptr;
}
unsigned char * apple_Build_BundleSeedIDPrefToken(unsigned char *buf_ptr)
{
*(buf_ptr++) = 0x0D; //length of BundleSeedIDPrefToken minus this byte
*(buf_ptr++) = BundleSeedIDPref_Token_FID_TYPE;
*(buf_ptr++) = BundleSeedIDPref_Token_FID_SUBTYPE;
//BundleSeedIDString
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
*(buf_ptr++) = '0';
return (buf_ptr);
}
Yes, provided that the result fits in a byte - which, from the code sample, it will - and provided that by 'length of the array' you mean the number of bytes following the packet header.
It will give you the number of bytes added by the apple_Build_BundleSeedIDPrefToken() function. (The total number of bytes that have been filled in the UART1_BUF[] array is one more than that, because that statement won't count the byte at UART1_BUF[0].)
In general, subtracting two pointers of type T * into an array of elements of type T gives you the difference as a number of elements (rather than a number of bytes). (The result is undefined if either of the two pointers does not point to an element within the same array, or to the element just past the last one.) The result itself has a signed integer type, ptrdiff_t, which is defined in <stddef.h>.
However, here, both pointers are pointing into the same array of unsigned char, so each element is a byte by definition.
So, the expression lcl_ptr - (unsigned char *)&UART1_BUF[1] will give the number of bytes added by the function. (Note that &UART1_BUF[1] is of type unsigned char * already, so the cast inside the expression is unnecessary.)
That expression is then cast to unsigned char, which could in theory truncate the result, although it clearly doesn't in the above example.
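To illustrate (a standalone sketch, not part of the original code), here is what that truncation looks like when the pointer difference does exceed what unsigned char can hold:
#include <stdio.h>
#include <stddef.h>
int main(void)
{
    unsigned char buf[300];
    unsigned char *start = &buf[0];
    unsigned char *end = &buf[260];
    ptrdiff_t diff = end - start;                   /* 260 elements */
    unsigned char truncated = (unsigned char)diff;  /* 260 % 256 == 4 where UCHAR_MAX is 255 */
    printf("diff = %td, truncated = %u\n", diff, (unsigned)truncated);
    return 0;
}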
I note that the code is a little odd, in that it assigns to UART1_BUF[1] three times!
UART1_BUF[1] = 0x00; sets it to 0;
lcl_ptr = apple_Build_BundleSeedIDPrefToken(&UART1_BUF[1]); sets it to 0x0D inside the called function;
UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]); sets it to 0x0E, as the function adds 14 bytes.
More generally, remember to be careful with pointer subtractions: expecting them to always give a number of bytes is a common mistake...
#include <stdio.h>
#include <stddef.h>
int main(void)
{
int array[4];
int *start, *end;
start = &array[1];
end = &array[3];
printf("Difference (as int *): %d\n", end - start);
printf("Difference (as char *): %d\n", (char *)end - (char *)start);
return 0;
}
gives (on a system where sizeof(int)==4):
Difference (as int *): 2
Difference (as char *): 8
I am currently working on a task where I need to print the address of a variable. It would be easy to use printf %p but I am only allowed to use write from unistd.
I tried casting the pointer to an unsigned integer and to uintptr_t and then converting it into a hexadecimal number. With uintptr_t it works, but with an unsigned integer it only prints half of the address. Maybe someone can explain to me why this is the case?
I also saw some solutions using ">>" and "<<" but I didn't get why that works. It would be nice if someone could explain a solution using "<<" and ">>" step by step, because I am not sure if I am allowed to use uintptr_t.
This is the code I use to cast it into an unsigned int / uintptr_t / unsigned long long (I know that ft_rec_hex is missing leading 0's):
#include <unistd.h>
void ft_rec_hex(unsigned long long nbr)
{
char tmp;
if (nbr != 0)
{
ft_rec_hex(nbr / 16);
if (nbr % 16 < 10)
tmp = nbr % 16 + '0';
else
tmp = (nbr % 16) - 10 + 'a';
write(1, &tmp, 1);
}
}
int main(void)
{
char c = 'd';
unsigned long long ui = (unsigned long long)&c;
ft_rec_hex(ui);
}
It looks like only half of the address is printed because the "unsigned integer" you used is only half the size of uintptr_t. (Note that uintptr_t is itself an unsigned integer type.)
You can copy the bytes of the pointer variable into an array of unsigned char and print those to get the full pointer without uintptr_t.
Reading an object of any type through a character type is allowed under the strict aliasing rule.
#include <stdio.h>
#include <unistd.h>
void printOne(unsigned char v) {
const char* chars = "0123456789ABCDEF";
char data[2];
data[0] = chars[(v >> 4) & 0xf];
data[1] = chars[v & 0xf];
write(1, data, 2);
}
int main(void) {
int a;
int* p = &a;
/* to make sure the value is correct */
printf("p = %p\n", (void*)p);
fflush(stdout);
unsigned char ptrData[sizeof(int*)];
for(size_t i = 0; i < sizeof(int*); i++) {
ptrData[i] = ((unsigned char*)&p)[i];
}
/* print in reversed order, assuming little endian */
for (size_t i = sizeof(int*); i > 0; i--) {
printOne(ptrData[i - 1]);
}
return 0;
}
Or read the bytes of the pointer variable as an unsigned char array directly, without copying:
#include <stdio.h>
#include <unistd.h>
void printOne(unsigned char v) {
const char* chars = "0123456789ABCDEF";
char data[2];
data[0] = chars[(v >> 4) & 0xf];
data[1] = chars[v & 0xf];
write(1, data, 2);
}
int main(void) {
int a;
int* p = &a;
/* to make sure the value is correct */
printf("p = %p\n", (void*)p);
fflush(stdout);
/* print in reversed order, assuming little endian */
for (size_t i = sizeof(int*); i > 0; i--) {
printOne(((unsigned char*)&p)[i - 1]);
}
return 0;
}
It would be easy to use printf %p but I am only allowed to use write from unistd.
Then form a string and print that.
int n = snprintf(NULL, 0, "%p", (void *) p);
char buf[n+1];
snprintf(buf, sizeof buf, "%p", (void *) p);
write(1, buf, n);
Using a pointer converted to an integer marginally reduces portability and does not necessarily form the best textual representation of the pointer, which is implementation dependent.
With uintptr_t it works but with an unsigned integer it only prints half of the address.
unsigned is not specified to be wide enough to contain all the information in a pointer.
uintptr_t, when available (which is very common), can preserve that information for a void pointer: the value is good enough to round-trip back to an equivalent pointer, even if its textual form differs from what %p would print.
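As a minimal sketch of that round trip (not from the question's code, and assuming the implementation provides uintptr_t):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    int x = 42;
    void *p = &x;
    uintptr_t n = (uintptr_t)p;   /* pointer -> integer */
    void *q = (void *)n;          /* integer -> pointer */
    printf("%d\n", p == q);       /* prints 1: the round trip compares equal */
    return 0;
}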
int main(){
char a = 5;
char *p = &a; // 8 bits
int num = 123456789; // 32 bits
*p = num;
return 0;
}
As a is 1 byte and num is 4 bytes, does *p = num truncate num to 1 byte before assigning it to a? or a 32 bit value gets written to memory and corrupts the stack?
Since you're assigning an int value to a char, the value is converted in an implementation-defined way to be in the range of a char (most likely, the low order byte of num will be the value which is assigned).
The fact that you're dereferencing a char * to assign to a char doesn't change this. It would be the same as if you did a = num;.
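To make that concrete, here is a small sketch; the exact value stored is implementation-defined, and the result shown assumes an 8-bit char and the common behaviour of keeping the low-order byte:
#include <stdio.h>
int main(void)
{
    char a = 5;
    char *p = &a;
    int num = 123456789;   /* 0x075BCD15 */
    *p = num;              /* converted to char; only a is written, nothing else on the stack is touched */
    printf("%d\n", a);     /* typically prints 21 (0x15, the low-order byte of num) */
    return 0;
}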
I have a 64-bit number stored as two 32-bit unsigned ints: unsigned int[2], where element [0] is the MSB half and element [1] is the LSB half. How would I convert it to a double?
double d_from_u2(unsigned int*);
memcpy it from your source array into a double object in the proper order, e.g. if you want to swap the two unsigned halves:
unsigned src[2] = { ... };
double dst;
assert(sizeof dst == sizeof src);
memcpy(&dst, &src[1], sizeof(unsigned));
memcpy((unsigned char *) &dst + sizeof(unsigned), &src[0], sizeof(unsigned));
Of course, you can always just reinterpret both source and destination objects as arrays of unsigned char and copy them byte-by-byte in any order you wish
unsigned src[2] = { ... };
double dst;
unsigned char *src_bytes = (unsigned char *) src;
unsigned char *dst_bytes = (unsigned char *) &dst;
assert(sizeof dst == 8 && sizeof src == 8);
dst_bytes[0] = src_bytes[7];
dst_bytes[1] = src_bytes[6];
...
dst_bytes[7] = src_bytes[0];
(The second example is not intended to be equivalent to the first one.)
There are several ways to copy the bits of your two integers into an object of type double.
At the lowest level, you can convert your input pointer to a [unsigned] char *, create a [unsigned] char * to the first byte of the return value, and copy between those by whatever means you choose. This provides you every opportunity to adjust byte order as may be needed -- for example, although your array is ordered most-significant word first, the order of the bytes within those words might not be what you need.
In the event that you need the bytes to be transferred into your double most-significant byte first, and you do not want to depend on the machine byte order, you might do this:
double d_from_u2(unsigned int *in) {
double result;
unsigned char *result_bytes = (unsigned char *) &result;
for (int i = 0; i < 4; i++) {
result_bytes[i] = in[0] >> (24 - 8 * i);
result_bytes[i + 4] = in[1] >> (24 - 8 * i);
}
return result;
}
Using arithmetic (shifts, in this case) allows you to operate on the numeric values of the input independently of details of numeric representation.
Here is a solution that works without memcpy, using a union instead:
#include "stdio.h"
#include "stdint.h"
double d_from_u2(unsigned int* v) {
    union {
        uint32_t x[2];
        uint64_t y;
    } u = { .x = { v[1], v[0] }};  /* little-endian: LSB word first */
    printf("%" PRIu64 "\n", u.y); // 1311768467463794450
    return (double)u.y;
}
int main(void) {
    unsigned int x[2];
    x[0] = 0x12345678;
    x[1] = 0x9abcef12;
    printf("%f\n", d_from_u2(x)); // 1311768467463794432.000000
    return 0;
}
It initializes the uint32_t[2] array inside the union and uses the uint64_t member to convert the combined value to a double. The order of the initializers depends on the endianness of the machine it runs on and on where the values come from (here the LSB word, v[1], is placed first).
Is it possible to convert an int to a "string" in C just using casting? Without any functions like atoi() or sprintf()?
What I want would be like this:
int main(int argc, char *argv[]) {
int i = 500;
char c[4];
c = (char)i;
i = 0;
i = (int)c;
}
The reason is that I need to generate two random ints (0 to 500) and send both as one string in a message queue to another process. The other process receives the message and computes the LCM.
I know how to do it with atoi() and itoa(), but my teacher wants it done just using casts.
Also, why doesn't the following compile?
typedef struct
{
int x;
int y;
} int_t;
typedef struct
{
char x[sizeof(int)];
char y[sizeof(int)];
} char_t;
int main(int argc, char *argv[])
{
int_t rand_int;
char_t rand_char;
rand_int.x = (rand() % 501);
rand_int.y = (rand() % 501);
rand_char = (char_t)rand_int;
}
Of course it's not possible, because an array is an object and needs storage. Casts result in values, not objects. Some would say the whole point/power of C is that you have control over the storage and lifetime of objects.
The proper way to generate a string containing a decimal representation of an integer is to create storage for it yourself and use snprintf:
char buf[sizeof(int)*3+2];
snprintf(buf, sizeof buf, "%d", n);
You have to convert 500 to "500".
"500" is the same as '5' then '0' then '0' then 0. The last element 0 is the null terminator of a string.
500 is equal to 5 * 100 + 0 * 10 + 0 * 1. You have to do some math here. Basically you have to use the / and % operators.
Then this could be also useful: '5' is the same as '0' + 5.
Without giving away an exact coded answer, what you'll want to do is loop through each digit of the integer (by computing its remainder modulo 10 via the % operator), and then add its value to the ASCII value of '0', casting the result back to a char, and placing that result in a null-terminated string.
An example which pretends that implicit conversions don't exist might look like this:
char c = (char) ( ((int) '0') + 5 ); // c should now be '5'.
You can determine the length of the resulting string by computing the log base 10 of the number, or by simply allocating it dynamically as you go using realloc().
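For what it's worth, here is a rough sketch of that digit loop (the helper name to_decimal is made up for this example, and a fixed-size buffer stands in for the realloc() approach):
#include <stdio.h>
/* Hypothetical helper: writes the decimal digits of a non-negative value into buf. */
static void to_decimal(unsigned value, char *buf)
{
    char tmp[12];
    int n = 0;
    do {
        tmp[n++] = (char)('0' + value % 10);  /* lowest digit first */
        value /= 10;
    } while (value != 0);
    for (int i = 0; i < n; i++)               /* reverse into the output buffer */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
}
int main(void)
{
    char s[12];
    to_decimal(500, s);
    printf("%s\n", s);   /* prints 500 */
    return 0;
}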
Casting is a horrible way to do this due to endianness, but here is an example anyhow - there are some occasions where it is useful (unions work better these days though, due to compiler handling of these types of casts).
#include <stdio.h> //for printf
#define INT(x) ((int*)(x)) //these are not endian-safe methods
#define CHAR(x) ((char*)(x))
int main(void)
{
int *x=INT(&"HI !");
printf("%X\n",*x); //look up the ascii and note the order
printf("%s\n",CHAR(x));
return 0;
}
For an int with a value < 500: if the most significant byte comes first, you get a "string" (pointer to a char array) of "" (or {0}); but if the endianness is LSB first (x86 is little endian), you get a usable 3-byte "string" char * (not necessarily of human-readable characters). There is no guarantee that there will be a zero byte in an integer, though, and since all you have is a pointer to the address where the int was stored, running normal string functions on it would walk past the end of the original int into no-man's-land (in small test programs that is often the environment variables). Anyhow, for more portability you can use network byte order (which on big-endian machines is a no-op):
#include <arpa/inet.h>
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
These functions just byte-swap as necessary to get network byte order: on a big-endian machine they do nothing, and on your x86 they compile down to a single byte-swap, so you might as well use them for portability.
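A small sketch of that idea (POSIX headers assumed, not part of the original answer): htonl() gives a fixed most-significant-byte-first layout that can then be reinterpreted as bytes on any host:
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(); POSIX */
int main(void)
{
    uint32_t v = 500;
    uint32_t be = htonl(v);   /* network byte order: MSB first on every host */
    const unsigned char *bytes = (const unsigned char *)&be;
    for (size_t i = 0; i < sizeof be; i++)
        printf("%02X ", (unsigned)bytes[i]);   /* prints 00 00 01 F4 */
    printf("\n");
    return 0;
}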
Just because it is not listed yet: here is a way to convert an int to a char array with variable size allocation, using snprintf:
int value = 5;
// this will just output the length which is to expect
int length = snprintf( NULL, 0, "%d", value );
char* valueAsString = malloc( length + 1 );// one more for 0-terminator
snprintf( valueAsString, length + 1, "%d", value );
Get the number of digits by repeated division, then add them one by one to your buffer:
char *int2str(int nb) {
int i = 0;
int div = 1;
int cmp = nb;
char *nbr = malloc(sizeof(char) * 12);
if (!nbr)
return (NULL);
if (nb < 0)
nbr[i++] = '-';
while ((cmp /= 10) != 0)
div = div * 10;
while (div > 0) {
nbr[i++] = abs(nb / div) + 48;
nb = nb % div;
div /= 10;
}
nbr[i] = '\0';
return (nbr);
}
Even more compact:
char *lotaa(long long nb) {
int size = (nb ? floor(log10(llabs(nb))) : 0) + (nb >= 0 ? 1 : 2);
char *str = malloc(size + 1);
str[0] = '-';
str[size] = 0;
for(nb = llabs(nb); nb > 0 || (size > 0 && str[1] == 0); nb /= 10)
str[--size] = '0' + nb % 10;
return (str);
}
I'm working on a C project and I have some instructions of type:
add $1 $2 $3
So I'm reading the line as a string, parsing through it, and have a corresponding integer for add, say 2. Could anyone please tell me how I could convert this to binary in order to write it to a file?
The registers are 5 bits and the operation is 6 bits. The total will be 32 (the remaining 11 bits are unused).
So the registers are stored in, say, char op[] = "2", char r1[] = "1", char r2[] = "2", etc. (note that a register number can be as high as 31). Could anyone give me an example of a function that would convert this to binary in the format 000010 00001 00010 00011 00000000000?
The easiest way will be using a bit field:
struct code {
    unsigned opcode   : 6;
    unsigned operand1 : 5;
    unsigned operand2 : 5;
    unsigned operand3 : 5;
} test_code;
Now you can simply assign to the different members:
test_code.opcode   = 0x02;
test_code.operand1 = 0x01;
test_code.operand2 = 0x02;
test_code.operand3 = 0x03;
atoi(op) will give you 2, so you can just string it together
As far as putting it into that structure you want, just create a structure that has bitfields in it and place it in a union with a 32 bit unsigned integer, and you can take the value directly.
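A minimal sketch of that union idea (the field names are made up here, and the placement of bit-fields within the word is implementation-defined, so the resulting integer value is not portable across compilers):
#include <stdio.h>
#include <stdint.h>
union instr {
    struct {
        unsigned opcode   : 6;
        unsigned operand1 : 5;
        unsigned operand2 : 5;
        unsigned operand3 : 5;
        unsigned unused   : 11;   /* pad the word out to 32 bits */
    } fields;
    uint32_t word;
};
int main(void)
{
    union instr ins = { .fields = { 2, 1, 2, 3, 0 } };
    printf("0x%08X\n", (unsigned)ins.word);   /* exact value depends on the compiler's bit-field layout */
    return 0;
}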
Quick pseudo code
const char* int32_to_bin(int32_t value) {
    static char output[33];   /* static so the buffer survives the return */
    int pos = 0;
    while (value > 0) {
        if (value & 1) output[pos++] = '1';
        else output[pos++] = '0';
        value >>= 1;
    }
    output[pos] = 0;          /* note: bits come out LSB first */
    return output;
}
What you're asking is a C question, but you tagged it objective-c, so I'll cover both.
C:
These variables, such as op[], are really defined as something like char op[N] (I'm not sure of your lengths), which are C strings of course.
So the operation is 6 bits and each register is 5 bits; that's a 15 + 6 = 21-bit word. I'll assume the top 11 bits are zeroes.
What you need are 4 more variables that are integers:
int opint; int r0int; int r1int; int r2int;
You want the integer value of those strings to go in to those integers. You can use atoi() to achieve this, such as opint = atoi(op);
Now that you've got your integers derived from strings, you need to create the 32 bit word. The easiest way to do this is to first create one integer that holds those bits in the right place. You can do it (assuming 32 bit integers) like this:
int word = 0;
word |= ((opint & 0x3f) << (21 - 6))  |
        ((r0int & 0x1f) << (21 - 11)) |
        ((r1int & 0x1f) << (21 - 16)) |
        (r2int & 0x1f);
Where << shifts each field into place. After this, you should have the word integer properly formed. Next, just turn it into a binary representation (if that's even necessary? Not sure about your application).
Objective-C
The only difference is that I assume those strings start as NSString *op; etc. In this case, get the integers by opint = [op intValue];, then form the word as I describe.
This code will convert a string to binary
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
char* stringToBinary(char* s) ;
void binchar(char output[], char character);
void itoa(int value, char* str, int base);
int main(int argc, char const *argv[])
{
printf("%s\n", stringToBinary("asdf") );
}
char* stringToBinary(char* s) {
if(s == NULL) return 0; /* no input string */
size_t len = strlen(s);
char *binary = malloc(len*8 + 1); // each char is at most 8 bits, + 1 at the end for the null terminator
binary[0] = '\0';                 // start with an empty string so strcat can append
int i = 0;
char output[9];
for(i=0; i< len; i++){
binchar(output, s[i]);
strcat(binary,output);
}
return binary;
}
void binchar(char output[], char character)
{
//char output[9];
itoa(character, output, 2);
}
// since GCC is not fully supporting itoa function here is its implementaion
// itoa implementation is copied from here http://www.strudel.org.uk/itoa/
void itoa(int value, char* str, int base) {
static char num[] = "0123456789abcdefghijklmnopqrstuvwxyz";
char* wstr=str;
int sign;
// Validate base
if (base<2 || base>35){ *wstr='\0'; return; }
// Take care of sign
if ((sign=value) < 0) value = -value;
// Conversion. Number is reversed.
do *wstr++ = num[value%base]; while(value/=base);
if(sign<0) *wstr++='-';
*wstr='\0';
// Reverse string
void strreverse(char* begin, char* end);
strreverse(str,wstr-1);
}
void strreverse(char* begin, char* end) {
char aux;
while(end>begin)
aux=*end, *end--=*begin, *begin++=aux;
}