Convert negative number from string to unsigned long - C

I am trying to convert a negative number from a string to unsigned long; however, I am getting an unexpected result due to the "-" sign.
Below is the code:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char * pValue = "-1234657890000";
    unsigned long ValueLong ;

    sscanf( pValue, "%llu", &ValueLong ) ;
    printf ("%llu", ValueLong);
    return 0;
}
Instead of printing "-1234657890000", the code displays the value "18446742839051661616".
Any advice, please?
Thank you

The value you provide will not fit in unsigned long (or at least it is not guaranteed to fit). Another point is that -1234657890000 is not unsigned at all. Use long long instead, and keep in mind that if you ever write the value as an integer constant in source code, the literal should be suffixed with LL, like so:
long long value = -1234657890000LL;
(The suffix belongs on integer literals; it has no meaning inside the string you pass to sscanf.)
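A minimal sketch of that suggestion applied to the original program, still parsing the value from the string (the %lld conversion matches long long):
#include <stdio.h>

int main(void)
{
    const char *pValue = "-1234657890000";
    long long valueLong;                  /* signed, wide enough for the value */

    sscanf(pValue, "%lld", &valueLong);   /* %lld matches long long */
    printf("%lld\n", valueLong);          /* prints -1234657890000 */
    return 0;
}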

You said it yourself: "convert a negative number to unsigned long". This is exactly what happens when you convert a negative value to an unsigned type: you tell the compiler what to do, and it does it.
You are operating on the same internal (binary) representation of the value, the same sequence of zeroes and ones. Whether the type is signed or unsigned tells the compiler how to interpret the memory that holds the value and is referred to by the variable.
Try
#include <stdio.h>
#include <string.h>

int main(void)
{
    char * pValue = "-1234657890000";
    long ValueLong ;

    sscanf( pValue, "%ld", &ValueLong ) ;
    printf ("%ld\n", ValueLong);

    unsigned long v = (unsigned long)ValueLong;
    printf ("%lu\n", v);
    return 0;
}
You'll get
-1234657890000
18446742839051661616
Here is probably a more interesting example:
#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void)
{
    char * pValue = "-1";
    long ValueLong ;

    sscanf( pValue, "%ld", &ValueLong ) ;
    printf ("%ld\n", ValueLong);

    unsigned long v = (unsigned long)ValueLong;
    printf ("%x\n", (unsigned)v);   /* cast to unsigned: %x expects unsigned int */
    printf ("%x\n", UINT_MAX);
    return 0;
}
And the output is
-1
ffffffff
ffffffff
Bear in mind that the output depends on the target platform.

Strictly speaking, the range of an integer constant depends on the compiler. For a 16-bit compiler like Turbo C or Turbo C++ the range is -32768 to 32767. For a 32-bit compiler the range is even greater. In short: you are exceeding the range.
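A small sketch that prints the actual ranges on whatever platform it is compiled for, using the limits.h macros:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The exact limits depend on the compiler and platform */
    printf("int:           %d .. %d\n", INT_MIN, INT_MAX);
    printf("long:          %ld .. %ld\n", LONG_MIN, LONG_MAX);
    printf("long long:     %lld .. %lld\n", LLONG_MIN, LLONG_MAX);
    printf("unsigned long: 0 .. %lu\n", ULONG_MAX);
    return 0;
}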

Thanks all, it is solved as per your suggestions, as below; the problem was with the unsigned data type, as you mentioned.
#include <stdio.h>
#include <string.h>

int main(void)
{
    char * pValue = "-1234657890000";
    long long ValueLong ;

    sscanf( pValue, "%lld", &ValueLong ) ;   /* %lld matches long long */
    printf ("%lld", ValueLong);
    return 0;
}

Related

Why does the itoa function return 32 bits if the size of the variable is 16 bits?

The size of short int is 2 bytes (16 bits) on my 64-bit processor with the MinGW compiler, but when I convert a short int variable to a binary string using the itoa function, it returns a string of 32 bits.
#include <stdio.h>
#include <stdlib.h>   /* itoa() is non-standard; on MinGW it is declared here */

int main(void)
{
    char buffer[50];
    short int a = -2;

    itoa(a, buffer, 2);   /* converting a to binary */
    printf("%s %d", buffer, (int)sizeof(a));
    return 0;
}
Output
11111111111111111111111111111110 2
The answer lies in understanding C's promotion of short data types (and chars, too!) to ints when those values are passed as parameters to a function, and in understanding the consequences of sign extension.
This may be more understandable with a very simple example:
#include <stdio.h>

int main() {
    printf( "%08X %08X\n", (unsigned)(-2), (unsigned short)(-2));
    // Both are cast to 'unsigned' to avoid UB
    return 0;
}

/* Prints:
   FFFFFFFE 0000FFFE
*/
Both parameters to printf() were, as usual, promoted to 32-bit ints. The left-hand value is -2 (decimal) in 32-bit notation. The cast on the other parameter specifies that it should not be subjected to sign extension, and the printed value shows that it was treated as a 32-bit representation of the original 16-bit short.
itoa() is not available in my compiler for testing, but this should give the expected result:
itoa( (unsigned short)a, buffer, 2 );
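If itoa() is not available at all, a portable sketch along the same lines is to hand-roll the base-2 conversion for a 16-bit value; to_bin16 below is a hypothetical helper, not a standard function:
#include <stdio.h>

/* Hypothetical helper: writes the 16-bit pattern of v into buf as '0'/'1' characters */
static void to_bin16(unsigned short v, char buf[17])
{
    for (int i = 0; i < 16; ++i)
        buf[i] = (v & (1u << (15 - i))) ? '1' : '0';
    buf[16] = '\0';
}

int main(void)
{
    char buffer[17];
    short int a = -2;

    to_bin16((unsigned short)a, buffer);   /* cast avoids sign extension to 32 bits */
    printf("%s %d\n", buffer, (int)sizeof(a));   /* prints: 1111111111111110 2 */
    return 0;
}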
Your problem is simple: refer to the itoa() manual and you will notice its prototype, which is
char *itoa(int n, char *buffer, int radix);
It takes an int to be converted, and you are passing a short int, so the value is converted from a 2-byte width to a 4-byte width; that is why 32 bits are printed.
To solve this problem, you can simply shift the array left by 16 positions with the following simple for loop:
for (int i = 0; i < 17; ++i) {
    buffer[i] = buffer[i + 16];
}
This gives the same result. Here is an edited version of your code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buffer[50];
    short int a = -2;

    itoa(a, buffer, 2);
    /* discard the upper 16 bits: move the last 16 characters (and the terminator) to the front */
    for (int i = 0; i < 17; ++i) {
        buffer[i] = buffer[i + 16];
    }
    printf("%s %d", buffer, (int)sizeof(a));
    return 0;
}
and this is the output:
1111111111111110 2

C warning (clang compiler): "integer literal is too large to be represented in a signed integer"

I have this piece of code
#include <stdio.h>

typedef signed long long v2signed_long
    __attribute__ ((__vector_size__ (sizeof(signed long long) * 2)));

int main()
{
    v2signed_long v = {4611686018427387904LL, -9223372036854775808LL};
    printf("%lli, %lli\n", v[0], v[1]);
    return 0;
}
which gives the following warning (the related questions didn't help):
:7:45: warning: integer literal is too large to be represented in
a signed integer type, interpreting as unsigned
[-Wimplicitly-unsigned-literal]
v2signed_long v = {4611686018427387904LL, -9223372036854775808LL};
Is there a way to solve this warning? Thanks!
The problem is that -9223372036854775808LL is not actually an integer literal. It's the literal 9223372036854775808LL with the unary - operator applied to it. The value 9223372036854775808 is too large to fit in a signed long long, which is why you're getting the warning.
You can fix this by using the expression -9223372036854775807LL - 1LL instead. The value 9223372036854775807 fits in a signed long long, as does -9223372036854775807LL; subtracting 1 then gives you the value you want.
Alternatively, you can use the macro LLONG_MIN from <limits.h>.
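A minimal sketch of both fixes applied to the original initializer; this relies on the GCC/Clang vector extension from the question, so it is only expected to build with those compilers:
#include <stdio.h>
#include <limits.h>

typedef signed long long v2signed_long
    __attribute__ ((__vector_size__ (sizeof(signed long long) * 2)));

int main(void)
{
    /* Spell the minimum as an expression that never overflows... */
    v2signed_long v = {4611686018427387904LL, -9223372036854775807LL - 1LL};
    /* ...or use the standard macro. */
    v2signed_long w = {4611686018427387904LL, LLONG_MIN};

    printf("%lli, %lli\n", v[0], v[1]);
    printf("%lli, %lli\n", w[0], w[1]);
    return 0;
}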
The compiler treats this construction
-9223372036854775808LL
as the integer literal 9223372036854775808LL to which the unary operator - is applied. But the value 9223372036854775808 is too big to be stored in an object of type signed long long, so the compiler issues the warning.
Instead use
#include <limits>

//...

v2signed_long v =
{
    std::numeric_limits<long long>::max(), std::numeric_limits<long long>::min()
};
Here is a demonstrative program
#include <iostream>
#include <limits>
#include <vector>

int main()
{
    std::vector<long long> v =
    {
        std::numeric_limits<long long>::max(), std::numeric_limits<long long>::min()
    };

    for ( const auto &item : v ) std::cout << item << '\n';

    return 0;
}
Its output is
9223372036854775807
-9223372036854775808
As you removed the C++ tag, in C the code can look like:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    long long int a[] = { LLONG_MAX, LLONG_MIN };

    printf( "%lld\n%lld\n", a[0], a[1] );

    return 0;
}
The program output is the same as shown above.
In any case it is much better to use this approach instead of magic numbers with a trick that only confuses readers of the code.

How do I sign extend and then find the value of the unsigned binary number in C?

I have a variable declared as an int
int a = -3;
I want the two's complement value sign-extended to 16 bits, so it becomes 1111 1111 1111 1101, and then to find the unsigned value of this number, which would be 65533 (I believe).
In other words, I want to go from -3 to 65533.
Following this answer: Sign extension from 16 to 32 bits in C, I'm stuck on the first step. Here's a shortened version of my code:
#include <stdio.h>
#include <string.h>
#include <stdint.h>   /* int16_t */

int main () {
    int s = -3;
    printf("%x\n", s);

    s = (int16_t)s;
    printf("%x\n", s);

    int16_t i = s;
    printf("%x\n", i);

    return(0);
}
I compile with gcc test.c and all three printf statements give "fffffffd"
Do you know why the cast isn't working and perhaps any better solutions to the original problem?
Where you appear to be struggling is in understanding just exactly what bits you are dealing with when interpreting them as signed, unsigned or exact-width types.
How do I sign extend and then find the value of the unsigned binary
number in C?
Answer: you don't -- the bits never change....
When you declare int a = -3;, the bits are set in memory. Thereafter, unless you change them, they are the exact same 32 bits (or whatever sizeof a is on your hardware).
The key to understanding sign extension is that it only applies when a negative value under a two's complement system is cast or assigned to a larger type. Otherwise, you are not extending anything; you are just interpreting the bits as a different type.
A short example for your problem will illustrate. Here int a = -3; is declared (defined), and the bits of a thereafter never change. However, you can interpret those bits (or, by loose analogy, look at them from a different viewpoint) and use them as short, unsigned short, int16_t or uint16_t, and in the unsigned cases also interpret them as a hexadecimal value, e.g.:
#include <stdio.h>
#include <stdint.h>
#ifdef __GNUC__
#include <inttypes.h>
#endif

int main (void) {

    int a = -3;

    printf ("a (as int) : %d\n"
            "a (as short) : %hd\n"
            "a (as unsigned short) : %hu\n"
            "a (as unsigned short hex) : 0x%0x\n"
            "a (as PRId16) : %" PRId16 "\n"
            "a (as PRIu16) : %" PRIu16 "\n"
            "a (as PRIx16) : 0x%" PRIx16 "\n",
            a, (short)a, (unsigned short)a, (unsigned short)a,
            (int16_t)a, (uint16_t)a, (uint16_t)a);

    return 0;
}
Example Use/Output
$ ./bin/exact3
a (as int) : -3
a (as short) : -3
a (as unsigned short) : 65533
a (as unsigned short hex) : 0xfffd
a (as PRId16) : -3
a (as PRIu16) : 65533
a (as PRIx16) : 0xfffd
Look things over, and look at all the answers, and let us know if you have any further questions.
Your code causes undefined behaviour by using the wrong format specifier for the argument type in printf.
Here is some correct code:
#include <stdio.h>
#include <stdint.h>   /* uint16_t */

int main(void)
{
    int s = -3;
    uint16_t u = s;    // u now has the value 65533

    printf("%u\n", (unsigned int)u);
    printf("%x\n", (unsigned int)u);
}
Printing a uint16_t directly is slightly complicated; it's simpler to cast to unsigned int and use %u or %x, which are the specifiers for unsigned int.
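For reference, a sketch of the more verbose direct route, using the PRIu16/PRIx16 macros from <inttypes.h>:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int s = -3;
    uint16_t u = s;                /* u now has the value 65533 */

    printf("%" PRIu16 "\n", u);    /* decimal: 65533 */
    printf("%" PRIx16 "\n", u);    /* hex: fffd */
    return 0;
}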
fffffffd is the correct two's complement value; it's just being printed in hex.
Use %d in your printf statements:
#include <stdio.h>
#include <string.h>
#include <stdint.h>   /* int16_t */

int main () {
    int s = -3;
    printf("%d\n", s);

    s = (int16_t)s;
    printf("%d\n", s);

    int16_t i = s;
    printf("%d\n", i);

    return(0);
}
and you will see the values printed in decimal instead of hex.
In C, casting a signed value to a wider type automatically extends its sign.
To get the complement value you need to cast your value to the appropriate unsigned type:
#include <stdio.h>
#include <stdint.h>

int main ()
{
    int s = -3;
    printf("%x\n", s);

    int16_t s16 = (int16_t)s;
    printf("%hd\n", s16);

    uint16_t c16 = (uint16_t)s16;
    printf("%hu\n", c16);
}
The output I get:
fffffffd
-3
65533

Unexpected unsigned integer behavior

I encountered this unexpected output with the following code in which I was verifying the maximum values (represented in decimal form) of the unsigned forms of short and int types when all their bits were set to 1.
#include <stdio.h>

int main()
{
    unsigned int n1 = 0xFFFFFFFF;
    unsigned short n2 = 0xFFFF;

    printf("\nMax int = %+d", n1);
    printf("\nMax short = %+d", n2);

    return 0;
}
The output I get is (compiled using the Visual Studio 2017 C/C++ Compiler):
Max int = -1
Max short = +65535
Along the lines of unsigned short, I was expecting the maximum value of the unsigned int to be +4294967295. Why isn't it so?
You need to use %u for the format specifier for an unsigned type.
Using printf(), your conversions in the format string must match the type of the arguments, otherwise the behavior is undefined. %d is for int.
Try this for the maximum values:
#include <stdio.h>

int main()
{
    printf("Max unsigned int = %u\n", (unsigned)-1);
    printf("Max unsigned short = %hu\n", (unsigned short)-1);
    return 0;
}
Side notes:
The maximum value of any unsigned type is -1 cast to that type (see the sketch below).
Put a newline at the end of your output lines. Among other reasons, this flushes stdout's buffer under the default line-buffered setting.
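A quick sketch of that first note, checking the (type)-1 idiom against the limits.h constants:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* -1 converted to an unsigned type always yields that type's maximum value */
    printf("unsigned char:  %u (UCHAR_MAX = %u)\n", (unsigned)(unsigned char)-1, (unsigned)UCHAR_MAX);
    printf("unsigned short: %u (USHRT_MAX = %u)\n", (unsigned)(unsigned short)-1, (unsigned)USHRT_MAX);
    printf("unsigned int:   %u (UINT_MAX = %u)\n", (unsigned)-1, UINT_MAX);
    printf("unsigned long:  %lu (ULONG_MAX = %lu)\n", (unsigned long)-1, ULONG_MAX);
    return 0;
}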

How to cast hex value to WORD, DWORD or QWORD and store the result in a double variable?

I'm trying to cast a signed hex number to WORD, DWORD and QWORD this way:
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main(void) {
    printf("WORD=%d\n", (int16_t) strtol("F123", NULL, 16));
    printf("DWORD=%d\n", (int32_t) strtol("FFFFF123", NULL, 16));
    printf("QWORD=%lld\n", (int64_t) strtol("FFFFFFFFFFFFF123", NULL, 16));
    return 0;
}
But it returns the following:
WORD=-3805
DWORD=2147483647
QWORD=2147483647
http://ideone.com/mqjldk
But why are the DWORD and QWORD casts not returning -3805 too?
I mean: 0xFFFFF123 stored in a DWORD would hold the value -3805 in decimal, not 2147483647.
Expected output:
WORD=-3805
DWORD=-3805
QWORD=-3805
Do you have a bitwise alternative to do it?
0xFFFFF123 is out of the range of a long int if a long int has 32 bits, so strtol() returns LONG_MAX (0x7FFFFFFF = 2147483647 in our case).
Use strtoull() to convert a string to an unsigned integer with at least 64 bits, and always check for errors before proceeding.
To print an integer with a specified bit size, use something like this:
printf("foo=%"PRIu32"\n", (uint32_t) foo);
A better way:
#include <stdio.h>
#include <stdlib.h>
#define __STDC_FORMAT_MACROS //we need that for PRI[u]8/16/32 format strings
#include <inttypes.h>
#include <errno.h>

void error_exit(void)
{
    perror("ups");
    exit(EXIT_FAILURE);
}

int main(void)
{
    unsigned long long temp;

    errno = 0;
    temp = strtoull("FFFFF123", NULL, 16);
    if(errno)
    {
        error_exit();
    }
    printf("DWORD=%"PRId32"\n", (int32_t) temp );

    errno = 0;
    temp = strtoull("FFFFFFFFFFFFF123", NULL, 16);
    if(errno)
    {
        error_exit();
    }
    printf("QWORD=%"PRId64"\n", (int64_t) temp );

    return EXIT_SUCCESS;
}
strtol does not assume two's complement input. In order for it to treat something as negative, you must use a minus sign, for example "-F123". This is the reason why the 2nd and 3rd lines don't give negative output.
In the case of the first line, you got the expected output mostly by accident: after the strtol call, you cast the value 0xF123 down to int16_t. It does not fit inside an int16_t, so it gets converted to a negative value.
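A tiny sketch of that accident in isolation; the conversion of an out-of-range value to a signed type is implementation-defined, but on common two's complement platforms it wraps like this:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    long parsed = 0xF123;          /* what strtol("F123", NULL, 16) yields: 61731 */
    int16_t w = (int16_t)parsed;   /* out of range for int16_t: implementation-defined,
                                      typically wraps to 61731 - 65536 = -3805 */
    printf("%ld -> %d\n", parsed, (int)w);
    return 0;
}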
Some bugs:
strtol("FFFFFFFFFFFFF123", NULL, 16) will not work if long cannot hold the result. You should be using strtoll.
To print the stdint.h types, use the PRId format specifiers from inttypes.h, for example: printf("WORD=%" PRId16 "\n", my_int16_t);
Overall, avoid integer overflow of signed numbers. If you expect the input to fit inside an unsigned 32-bit variable but not a signed one, you should use the strtoul family of functions and convert to the signed type afterwards (see the sketch below).
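A sketch of that last suggestion for the three widths from the question: parse with strtoull, then convert to the signed fixed-width types (the conversion of out-of-range values is implementation-defined, but on two's complement platforms it yields the expected -3805). Error checking is omitted for brevity:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Parse as unsigned first, then convert the bit patterns to the signed widths. */
    int16_t w = (int16_t) strtoull("F123",             NULL, 16);
    int32_t d = (int32_t) strtoull("FFFFF123",         NULL, 16);
    int64_t q = (int64_t) strtoull("FFFFFFFFFFFFF123", NULL, 16);

    printf("WORD=%"  PRId16 "\n", w);   /* -3805 */
    printf("DWORD=%" PRId32 "\n", d);   /* -3805 */
    printf("QWORD=%" PRId64 "\n", q);   /* -3805 */
    return 0;
}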
There are some errors in this code.
The main issue with the first line is that casting the long int result down to an int16_t loses information, because int16_t is smaller than a long int. In addition you should use the PRId16 specifier to print an int16_t.
The second line prints LONG_MAX because 0xFFFFF123 is out of range for a 32-bit long int.
The third line has the same issue as the second: FFFFFFFFFFFFF123 is even further out of range, and casting the result to a 64-bit value does not help. In addition you need to print it with %lld or PRId64, as Michael already stated.
The problem is that the maximum value strtol can return is 0x7fffffff (LONG_MAX, on a platform with a 32-bit long). If the number to convert is out of range, errno is set and the result is LONG_MAX or LONG_MIN; see the strtol documentation for more details.
Illustration:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main() {
    int x = strtol("7f000000", NULL, 16); // in range
    printf ("x = %d, errno = %d\n", x, errno);

    x = strtol("F0000000", NULL, 16); // out of range
    printf ("x = %d, errno = %d\n", x, errno);

    return 0;
}
Output:
x = 2130706432, errno = 0
x = 2147483647, errno = 34
