How can I merge two ASCII characters? [closed] - c

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I want to merge two characters and print them via a single variable, using their ASCII codes (refer to the image below):
[1]: https://i.stack.imgur.com/TWodP.jpg

If your machine is little-endian, try:
unsigned int C = (b << 8) | a;
printf("%s", (char *)&C);
Otherwise, if your machine is big-endian, try:
unsigned int C = (a << 24) | (b << 16);
printf("%s", (char *)&C);
(The cast to char * silences the pointer-type mismatch; the remaining bytes of C are zero and act as the string terminator. Note that this trick relies on type punning and is not portable.)

Based on my comment, but improved by Jonathan's input, I propose to do this:
int C;
C= (((unsigned char)a)<<8)|((unsigned char)b);
You have already seen the commented version; this one is basically the same, just made robust against potentially negative values of a and b (which I considered out of scope, but Jonathan is right that it is best to be as safe as possible).
As for the explanation:
The << 8 part, a so-called left bitshift, moves a value by 8 bits towards the MSBs.
I.e. an 8-bit 01000001 becomes 0100000100000000.
To be safe from negative values (see below for why that is important), each value is first cast to unsigned char; that is the ((unsigned char)a) part. Note that I tend to be generous when it comes to using (); some people do not like that. This is done for both values.
With values 'A' and 'B' we end up with
0100000100000000 and
0000000001000010.
The next part uses a bitwise OR (|), in contrast to a logical OR (||).
The result is
0100000101000010, which is what I understand to be your goal.
The importance of protecting against negative input is this: any negative 8-bit value has the MSB set, and when cast to a wider data type it will end up with all 8 new high bits set, because of the two's-complement representation of negative integers.
The final conversion to the desired wider data type is as Jonathan explains:
If you have (unsigned char)A << 8 as a term, then the unsigned char value is extended to int before the shift occurs, and the result is an int.


How do I do a left shift greater than 64 bits? warning: shift count >= width of type [-Wshift-count-overflow] [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I am trying to do a left shift greater than or equal to 64, but I am not sure which data type exists to help me out here.
I'm working on an existing project where 63 macros are already taken; the next one is 64 (which is my change), for which I have to do the left-shift operation.
Note: I just want to understand how to set a particular bit beyond 64 bits. "I am not sure which DATA TYPE exists to help me out here."
The code below is just a sample. We know that no data type wider than 64 bits is guaranteed to exist, but can there be any solution for this?
#include <stdio.h>

#define PEAK 64

int main(void)
{
    unsigned long int a;
    a = (1ULL << PEAK);
    printf("%lu", a);
    return 0;
}
main.c:8:10: warning: left shift count >= width of type [-Wshift-count-overflow]
a= (1ULL << PEAK);
^~
I just want to understand how do i set a particular bit greater then 64bits.
You can't.
Old answer:
You can do a left shift greater than or equal to 64-bits by doing exactly what you're doing.
This, of course, won't result in anything usable (either the original value, zero, or something else), and is undefined behavior, so don't do it.
If you want a data type that can do this, you're mostly out of luck. There is no guarantee that a 128-bit data type exists in C, and any compiler extensions you may see are not portable. This may be possible with SIMD instructions, but they're not portable across processors.
That said, there is unsigned __int128 in GCC and Clang that allows such shifts (through emulation of wider integers). However, this isn't available in MSVC. Also note that you won't be able to print this number with printf directly, so it's of limited use anyway.
You can shift a 64-bit unsigned type left by zero to 63 bits. Anything else will lead to undefined behaviour. The largest unsigned integer type is uintmax_t but it is usually unsigned long long on most common implementations, which is 64 bits, and the shift is equally undefined then. In practice it will result in either zero, the original value, or completely random behaviour.
Why do you think you need to do this?

Operating on arrays using Unsigned Values [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I have generated a C function from the Matlab Coder environment in order to implement it in another piece of software called Max/MSP.
I'm trying to make sense of it, despite my poor level of C programming, and there are some syntax elements I can't understand: the use of unsigned values like 0U or 1U to pass arrays.
The next example doesn't do anything. Pasting the entire code wouldn't help much, unless you think otherwise.
void function1(const double A[49], double B[50])
{
    function2( (double *)&A[0U] );
}

static void function2(const double A[])
{
}
While doing some math, Matlab wrote something like:
b = 2;
f[1U] += b;
I don't understand the use of unsigned value either...
Thanks a lot!
For a[n], array indexes are always non-negative values, from 0 to n-1. Appending a u to a decimal constant poses no problem for indexing an array, yet does offer a benefit: it ensures that the value is of minimal type width and of some unsigned type.
Automated generation of a fixed index, as with Matlab, benefits from using the u suffix.
Consider small and large values on a system with 32-bit unsigned/int/long/size_t:
aweeu[0u]; // `0u` is 32-bit `unsigned`.
aweenou[0]; // `0` is 32-bit `int`.
abigu[3000000000u]; // `3000000000u` is a 32-bit `unsigned`.
abignou[3000000000]; // `3000000000` is a 64-bit `long long`.
Is this of value? Perhaps. Some compilers may look at the value first, see that all of the above are in range of size_t, and not complain. Others may complain about an index of type long long or possibly even int. By appending the u, such rare complaints do not occur.
The U suffix is obviously not necessary here. It can be useful to force unsigned arithmetic in certain situations, with surprising side effects:
if (-1 < 1U) {
    printf("surprise!\n"); /* never printed: -1 is converted to UINT_MAX */
}
On some rare occasions, it is necessary to avoid certain type changes. On many current architectures the following comparisons hold, and the difference between the type of 2147483648 and that of 2147483648U is more than just signedness:
For example, on 32-bit linux and 32- and 64-bit windows:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned) // 4 bytes
On many embedded systems with 16-bit ints:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned long) // 4 bytes
sizeof(32768) == sizeof(long) // 4 bytes
sizeof(32768U) == sizeof(unsigned int) // 2 bytes
Depending on implementation details, array index values can exceed the range of both type int and type unsigned, and pointer offset values can be even larger. Just specifying U is no guarantee of anything.

C Binary Operator & (-1) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
Suppose you have code including:
if(i & (-1)) {}
Depending on i, what would this operation return?
There's no definitive answer to this question: it depends on the type of i and, if the operation is performed in the domain of signed type, on the signed representation used by the given platform.
For example, if i is of type unsigned int (or some larger unsigned type), the entire operation will be performed in the domain of that unsigned type. In that case -1 will get implicitly converted (by usual arithmetic conversions) to all-ones bit pattern as wide as i. The whole if will effectively become equivalent to if (i).
But with i of a signed type, there's no way to say anything for certain.
The results of performing a bitwise operation on a negative value are implementation defined.
For example, if 2's complement representation is used for negatives, the value -1 will be represented by a sequence of all 1 bits, so performing a bitwise AND with -1 will result in the value of i.
On the other hand, if sign magnitude representation is used, only 2 bits are set in the value -1, the highest and the lowest. In that case, only the highest and lowest bits of i (after any conversions) will be set in the result.
So to summarize, you can't depend on the results without some implementation defined method of determining the representation of negative values.

Set proper bits on n-bit number with given decimal value [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Suppose I have the number 0b000 and I need to set the correct bits so that they equal, for example, 5 (0b101).
How do I do that using bitwise operations?
Okay, more details then. I'm developing a Morse code decoder, and to describe an input I'm using 8 bits: 000 00000, where the first three bits are the number of dots/dashes given, and the remaining bits are reserved for the input, where a dot is 0 and a dash is 1.
For example, the letter A (.-) would be: 010 01000.
The question is, how can I modify the first three bits so that they show how many dots/dashes were given during the input?
You switch bits on using |. Let's stick with your non-standard notation for binary literals (note that C++14 onwards supports it, as does C23):
0b000 | 0b100 is 0b100.
0b100 | 0b001 is 0b101.
Note that you can toggle bits using ^ (work through some examples as an exercise).
Finally, you can switch off bits using &~.
Solved: if you want to set the first 3 bits, shift value 5 bits to the left (where value is the number you want in the first 3 bits):
value = value << 5;
And then OR it with the rest of bits:
morseBits = morseBits | value;

Printing Bitwise NOT operator with hexadecimal format specifier [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
If I'm using m instead of ~m then the code gives me the expected hexadecimal value of 32, but here it's giving ffffffdf as output.
EDIT
I know how the bitwise NOT operator ~ works, but I'm not understanding this. Could somebody explain?
#include <stdio.h>

int main(void)
{
    unsigned int m = 32;
    printf("%x\n", ~m); /* ffffffdf is printed as output */
    return 0;
}
Every hexadecimal digit is four bits. Since you got 8 hexadecimal digits, your integers seem to be 8*4 = 32 bits.
The NOT of 32 = 00000000000000000000000000100000 is 11111111111111111111111111011111, which matches the hexadecimal digits above.
In C, ~ is the bitwise-not operator. You said you understand how this operator works, but your question indicates that you do not. So let's go through this example:
First, you declare m to be an unsigned int, which happens to be 32 bits wide on your platform. You assign it the decimal value 32. The variable m is 0x00000020.
Then, you print it out. When you print it out normally, the expected output appears. But when you print it out with the ~ operator, you get something completely different.
The ~ (bitwise-not) operator does exactly what it says on the tin: It negates (flips) every bit, so 1s become 0s and 0s become 1s. Let's see what that would do to your number:
m = 0b00000000000000000000000000100000 = 0x00000020
~m = 0b11111111111111111111111111011111 = 0xffffffdf
As you can see, the result exactly matches what is being output, which is good -- it means both your compiler and CPU are working as expected!
