I have two unsigned int pointers to 32-bit values and I want to do an XOR operation between them.
char* a = "01110011011100100110111101000011";
char* b = "10111001100011001010010110111101";
unsigned* au = (unsigned*) a;
unsigned* bu = (unsigned*) b;
unsigned* cu = au ^ bu;
Error is:
invalid operands to binary ^ (have ‘unsigned int *’ and ‘unsigned int *’)
You have strings, not unsigned integers. You'll need to convert them to unsigned integers before you can do bitwise operations on them:
char* a = "01110011011100100110111101000011";
char* b = "10111001100011001010010110111101";
unsigned au = strtoul(a, 0, 2);
unsigned bu = strtoul(b, 0, 2);
unsigned cu = au ^ bu;
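If you also want the result back as a string of bits, a minimal complete sketch (it assumes a 32-bit unsigned and needs <stdlib.h> for strtoul) could look like this:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char* a = "01110011011100100110111101000011";
    char* b = "10111001100011001010010110111101";

    unsigned au = strtoul(a, 0, 2);
    unsigned bu = strtoul(b, 0, 2);
    unsigned cu = au ^ bu;

    /* Write the 32 result bits back out, most significant bit first. */
    char out[33];
    for (int i = 0; i < 32; ++i)
        out[i] = ((cu >> (31 - i)) & 1u) ? '1' : '0';
    out[32] = '\0';

    printf("%s\n", out);   /* 11001010111111101100101011111110 */
    return 0;
}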
Your code needs to work a little harder to pull that off.
Iterate over the strings.
Pull the numbers from the digits.
Perform an XOR between the numbers.
Convert the number to a digit.
Add the resultant digit to the output.
char str1[] = "01110011011100100110111101000011";
char str2[] = "10111001100011001010010110111101";
char* p1 = str1;
char* p2 = str2;
for ( ; *p1 != '\0' && *p2 != '\0'; ++p1, ++p2 )
{
    unsigned int n1 = *p1 - '0';
    unsigned int n2 = *p2 - '0';
    unsigned int n = n1 ^ n2;
    char c = n + '0';
    // Now you can overwrite either str1 or str2 with the output.
    *p1 = c;
}
// At this point, str1 should contain a string that looks like you performed an XOR between str1 and str2.
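For a quick check with the example strings above, printing the buffer afterwards shows the digit-by-digit XOR:
printf("%s\n", str1);   // prints 11001010111111101100101011111110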
I've tried every possible combination of *'s and &'s. I can't figure out what is wrong with this program or what is causing this compile-time error.
I'm getting this error:
error: initialization makes pointer from integer without a cast
compilation terminated due to -Wfatal-errors.
With this code:
int main() {
    unsigned int ten = (int) 10;
    unsigned short *a = (short) 89;
    unsigned int *p =(int) 1;
    unsigned short *q = (short) 10;
    unsigned short *s = (short) 29;

    function(ten, a, p, q, s);
    return 0;
}
The function prototype is:
int function(unsigned int a, unsigned short *const b,
unsigned int *const c,
unsigned short *const d,
unsigned short *const e);
The function is empty; I'm just trying to get it to compile with an input file.
These variables are pointers, but you are trying to assign integer (short) values to them:
unsigned short *a = (short) 89;
unsigned int *p =(int) 1;
unsigned short *q = (short) 10;
unsigned short *s = (short) 29;
That is why you get the compiler errors.
The question is not 100% clear, but maybe this is what you want:
unsigned int ten = (int) 10;
unsigned short a = (short) 89;
unsigned int p =(int) 1;
unsigned short q = (short) 10;
unsigned short s = (short) 29;
function(ten, &a, &p, &q, &s);
The problem is that you are initializing pointers with integer constants for a, p, q and s (i.e., assigning a plain integer value like 89 to a pointer type). You probably don't want those variables to be pointers; you can pass the addresses of ordinary variables instead:
int main() {
    unsigned int ten = (int) 10;
    unsigned short a = (short) 89;
    unsigned int p = (int) 1;
    unsigned short q = (short) 10;
    unsigned short s = (short) 29;

    function(ten, &a, &p, &q, &s);
    return 0;
}
Without more information about what function is supposed to do, it is hard to tell whether this is the correct form; it really does depend on what that function does.
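For completeness, a minimal compilable sketch of that second form, with an empty stub standing in for the real body of function (which the question does not show), might be:
int function(unsigned int a, unsigned short *const b,
             unsigned int *const c,
             unsigned short *const d,
             unsigned short *const e)
{
    /* Empty stub: the real body is not part of the question. */
    (void)a; (void)b; (void)c; (void)d; (void)e;
    return 0;
}

int main() {
    unsigned int ten = 10;
    unsigned short a = 89;
    unsigned int p = 1;
    unsigned short q = 10;
    unsigned short s = 29;

    /* Pass the addresses of the variables where the prototype expects pointers. */
    function(ten, &a, &p, &q, &s);
    return 0;
}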
I think the title is pretty self-explanatory, but basically what I'm saying is that, if I have the following instruction:
a = (char) b;
knowing that a's type is char and b's is unsigned char, can that instruction result in making a and b have different binary representations?
The type char can be either signed or unsigned. Char types have no padding, so all bits are value bits.
If char is unsigned, then the value bits of a will be the same as those of b.
If char is signed, then...
if the value of b is representable by char, the common value bits of a and b will be the same;
otherwise, the conversion from unrepresentable unsigned char value to char results in an implementation-defined result.
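A tiny test of that second case (the exact printed value is implementation-defined when char is signed; common two's-complement implementations just reuse the bit pattern) could be:
#include <stdio.h>

int main(void)
{
    unsigned char b = 249;   /* 0xF9: not representable in a signed 8-bit char */
    char a = (char) b;       /* implementation-defined result if char is signed */

    printf("b = %u, a = %d\n", (unsigned) b, (int) a);
    /* on a typical two's-complement system with a signed char: b = 249, a = -7 */
    return 0;
}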
The answer, in general, is no, there is no difference. Here you can test it yourself; just supply the respective values for a and b:
#include <stdio.h>
#include <string.h>
const char *byte_to_binary(int x)
{
    static char b[9];
    b[0] = '\0';

    int z;
    for (z = 128; z > 0; z >>= 1)
    {
        strcat(b, ((x & z) == z) ? "1" : "0");
    }

    return b;
}
int main(void) {
    unsigned char b = -7;
    char a = -7;

    printf("1. %s\n", byte_to_binary(a));
    a = (char) b;
    printf("2. %s\n", byte_to_binary(a));
    return 0;
}
While reading the book Computer Systems: A Programmer's Perspective, I found that if I take the most negative value in the range of a data type, for example:
char a = -128;
a = - a;
the variable a will still have the value -128, and I understand that. But when I do
char a = 50;
char b = -128;
char r = a - b;
It gives me the correct result -78. Why is that? Is it because of the automatic promotion to int, or is there a hardware subtraction that does not need to calculate the two's complement of -128?
char a = -128; means char a = 0x80;, so after promotion to int it becomes 0xFFFFFF80.
-0xFFFFFF80 = 0x00000080, and that is converted back to char as 0x80, which is the same as the starting value.
char a = -128;
char b = 50;
char r = a - b;
Promoted to int, a - b is evaluated as the addition of a and the two's complement of b:

  0xFFFFFF80   (a = -128)
+ 0xFFFFFFCE   (-b = -50)
= 0xFFFFFF4E

so 0x4E is written to r, which is +78.
live demo: https://ideone.com/sjyBOa
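To see the question's own case (a = 50, b = -128) the same way, you can look at the intermediate int value before it is stored back into a char. A small sketch, assuming 8-bit char and 32-bit int:
#include <stdio.h>

int main(void)
{
    char a = 50;
    char b = -128;

    int wide = a - b;       /* both operands are promoted to int: 50 - (-128) = 178 */
    char r = (char) wide;   /* 178 does not fit in a signed 8-bit char; on the usual
                               two's-complement implementations 0xB2 reads back as -78 */

    printf("wide = %d, r = %d\n", wide, (int) r);   /* wide = 178, r = -78 */
    return 0;
}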
I have an unsigned char array containing the following value: "\x00\x91\x12\x34\x56\x78\x90";
That is a number being sent in hexadecimal format.
Additionally, it is in BCD format: 00 in one byte, 91 in another byte (8 bits).
On the other side I need to decode this value as 00911234567890.
I'm using the following code:
unsigned int conver_bcd(char *p, size_t length)
{
    unsigned int convert = 0;

    while (length--)
    {
        convert = convert * 100 + (*p >> 4) * 10 + (*p & 15);
        ++p;
    }
    return convert;
}
However, the result which I get is 1430637214.
My understanding is that I'm sending hexadecimal values (\x00\x91\x12\x34\x56\x78\x90) and my BCD conversion is acting on the decimal values.
Can you please help me get the output 00911234567890 as characters?
It looks like you are simply overflowing your unsigned int, which is presumably 32 bits on your system. Change:
unsigned int convert =0;
to:
uint64_t convert = 0;
in order to guarantee a 64-bit quantity for convert.
Make sure you add:
#include <stdint.h>
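As a quick sanity check on the size argument: the value you are trying to build, 911234567890, is far larger than the 4294967295 maximum of a 32-bit unsigned int, so the running total wraps around before the conversion finishes. A small check, assuming the 7-byte input from the question:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t wanted = 911234567890ULL;   /* the fully decoded BCD value */
    uint32_t max32  = 4294967295u;       /* UINT32_MAX */

    printf("fits in 32 bits? %s\n", wanted <= max32 ? "yes" : "no");   /* prints: no */
    return 0;
}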
Cast char to unsigned char, then print it with %02x.
#include <stdio.h>

int main(void)
{
    char array[] = "\x00\x91\x12\x34\x56\x78\x90";
    int size = sizeof(array) - 1;
    int i;

    for (i = 0; i < size; i++) {
        printf("%02x", (unsigned char) array[i]);
    }
    return 0;
}
Change the return type to unsigned long long to ensure you have a large enough integer.
Change the type of p to an unsigned type.
Print the value with leading zeros.
unsigned long long conver_bcd(const char *p, size_t length) {
    const unsigned char *up = (const unsigned char *) p;
    unsigned long long convert = 0;

    while (length--) {
        convert = convert * 100 + (*up >> 4) * 10 + (*up & 15);
        ++up;
    }
    return convert;
}
const char *p = "\x00\x91\x12\x34\x56\x78\x90";
size_t length = 7;
printf( "%0*llu\n", (int) (length*2), conver_bcd(p, length));
// 00911234567890
Will the statement below calculate the length of the array?
UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]);
/////////////////////////////////////////////////////////////////////////////////////
unsigned char UART1_BUF[128];
void apple_Build_SetFIDTokenValues(void)
/* apple_Build_SetFIDTokenValues -
*
* This function builds the apple protocol StartIDPS() command.
*/
{
    unsigned char *lcl_ptr;

    UART1_BUF[0] = BT_START_OF_PACKET;
    UART1_BUF[1] = 0x00;

    //BundleSeedIDPrefToken
    lcl_ptr = apple_Build_BundleSeedIDPrefToken(&UART1_BUF[1]);
    UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]);
    *lcl_ptr = apple_checksum((unsigned char *)UART1_BUF, UART1_BUF[1]);
    UART1_BUF[UART1_BUF[1]] = *lcl_ptr;
}
unsigned char * apple_Build_BundleSeedIDPrefToken(unsigned char *buf_ptr)
{
    *(buf_ptr++) = 0x0D; //length of BundleSeedIDPrefToken minus this byte
    *(buf_ptr++) = BundleSeedIDPref_Token_FID_TYPE;
    *(buf_ptr++) = BundleSeedIDPref_Token_FID_SUBTYPE;

    //BundleSeedIDString
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';
    *(buf_ptr++) = '0';

    return (buf_ptr);
}
Yes, provided that the result fits in a byte - which, from the code sample, it will - and, by 'length of the array', you mean the number of bytes minus the packet header.
It will give you the number of bytes added by the apple_Build_BundleSeedIDPrefToken() function. (The total number of bytes that have been filled in the UART1_BUF[] array is one more than that, because that statement won't count the byte at UART1_BUF[0].)
In general, subtracting two pointers of type T * into an array of elements of type T gives you the difference as a number of elements (rather than a number of bytes). (The result is undefined if either of the two pointers does not point to an element within the same array, or to the element just past the last one.) The result itself has a signed integer type, ptrdiff_t, which is defined in <stddef.h>.
However, here, both pointers are pointing into the same array of unsigned char, so each element is a byte by definition.
So, the expression lcl_ptr - (unsigned char *)&UART1_BUF[1] will give the number of bytes added by the function. (Note that &UART1_BUF[1] is of type unsigned char * already, so the cast inside the expression is unnecessary.)
That expression is then cast to unsigned char, which could in theory truncate the result, although it clearly doesn't in the above example.
I note that the code is a little odd, in that it assigns to UART1_BUF[1] three times:
UART1_BUF[1] = 0x00; sets it to 0;
lcl_ptr = apple_Build_BundleSeedIDPrefToken(&UART1_BUF[1]); sets it to 0x0D inside the called function;
UART1_BUF[1] = (unsigned char)(lcl_ptr - (unsigned char *)&UART1_BUF[1]); sets it to 0x0E, as the function adds 14 bytes.
More generally, remember to be careful with pointer subtractions: expecting them to always give a number of bytes is a common mistake...
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int array[4];
    int *start, *end;

    start = &array[1];
    end = &array[3];

    printf("Difference (as int *): %d\n", (int) (end - start));
    printf("Difference (as char *): %d\n", (int) ((char *) end - (char *) start));
    return 0;
}
gives (on a system where sizeof(int)==4):
Difference (as int *): 2
Difference (as char *): 8