I have a string of length 28 that represents a number in radix 16.
long num=strtol(str,NULL,16);
It worked for small strings but not for the long string (it gave me a negative result). So how can I convert the long string?
//--------------------------
Well, I think there is only one way without reinventing the wheel: use GMP. So simple code showing how to use GMP would be a good answer.
Thanks to @DavidSchwartz, who helped me in the comments. The solution is to use the GMP library:
mpz_t res;                                          /* needs #include <gmp.h> and linking with -lgmp */
mpz_init_set_str(res, str, 16);                     /* parse str as a base-16 integer */
gmp_fprintf(f1, "%s", mpz_get_str(NULL, 10, res));  /* write res to the FILE* f1 in base 10 */
mpz_clear(res);
Maybe something is wrong, but it works. If you find a mistake, let me know and I'll fix it for future readers.
Since the code starts with a string representing an integer in base 16, a string can represent the base-10 number too.
It is simple enough to convert one digit at a time.
Space management is left to the OP.
#include <stdio.h>
#include <stdlib.h>
#include <string.h> // for strcpy(), strlen(), memmove()
char *baseNtobase10(char *s10, const char *s16, int n) {
strcpy(s10, "0");
size_t len = strlen(s10);
while (*s16) {
char sdigit[] = { *s16, 0 };
char *endptr;
int digit = (int) strtol(sdigit, &endptr, n);
if (endptr == sdigit) return NULL; // detect illegal digits
// multiply: s10[] = s10[] * n + digit
for (size_t i = len; i-- > 0; ) {
digit += (s10[i] - '0') * n;
s10[i] = digit % 10 + '0';
digit /= 10;
}
// handle carry out
while (digit) {
memmove(s10 + 1, s10, len + 1);
*s10 = digit % 10 + '0';
digit /= 10;
len++;
}
s16++;
}
return s10;
}
int main(void) {
char s10[100];
puts(baseNtobase10(s10, "123", 10));
puts(baseNtobase10(s10, "123", 16));
puts(baseNtobase10(s10, "1234567890123456789012345678", 16));
return 0;
}
Output
123
291
369229998778771388878179932591736
I was going to suggest strtoll, but that only goes to 16 hex characters, or 64 bits. I suggest you use a bignum library, which has far higher limits.
I'm passing almost all LeetCode tests with this, but I don't understand why the output is wrong ("/0") when the input is:
a = "10100000100100110110010000010101111011011001101110111111111101000000101111001110001111100001101"
b = "110101001011101110001111100110001010100001101011101010000011011011001011101111001100000011011110011"
Does anyone have an idea of what is not working?
Thanks
#include <stdio.h>
#include <stdlib.h>
char * sumBinary(long int binary1, long int binary2, char * result);
char * addBinary(char * a, char * b)
{
char * result;
long int a_int;
long int b_int;
a_int = atoi(a);
b_int = atoi(b);
result = malloc(sizeof(*result) * 1000);
if (!result)
return (NULL);
sumBinary(a_int, b_int, result);
return (result);
}
char * sumBinary(long int binary1, long int binary2, char * result)
{
int i;
int t;
int rem;
int sum[1000];
i = 0;
t = 0;
rem = 0;
if ((binary1 == 0) && (binary2 == 0))
{
result[0] = '0';
result[1] = '\0';
}
else
{
while (binary1 != 0 || binary2 != 0)
{
sum[i++] = (binary1 %10 + binary2 % 10 + rem) % 2;
rem = (binary1 %10 + binary2 % 10 + rem) / 2;
binary1 = binary1 / 10;
binary2 = binary2 / 10;
}
if (rem != 0)
sum[i++] = rem;
--i;
while (i >= 0)
{
result[t] = sum[i] + '0';
t++;
i--;
}
result[t] = '\0';
}
return (result);
}
For a start, you should be using atol(3), not atoi(3) if you're using long int. But that's not the main issue here.
atol(3) and atoi(3) expect strings containing decimal numbers, not binary, so that's not going to work well for you. You would need strtol(3), which you can tell to expect a string in ASCII binary. But again, this is not the main issue.
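For illustration, here is what that strtol(3) call looks like (a minimal sketch; it only works up to the range of long, so it is not a fix for the very long inputs above):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    /* base 2 tells strtol to parse ASCII binary */
    long v = strtol("1010", NULL, 2);
    printf("%ld\n", v); /* prints 10 */
    return 0;
}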
You don't give the question text, but I'm guessing they want you to add two arbitrarily-long ASCII-binary strings, resulting in an ASCII-binary string.
I imagine their expectation, given it's arbitrarily-long, is that you would be working entirely in the string domain. So you'd allocate for a string whose length is two greater than the longer of the two you get as parameters (+1 for the terminal NUL, the other +1 for a potential overflow digit).
Then you start from the end, working back to the start, adding the corresponding digits of the parameter strings, placing the results into the result string starting from its end (allowing for that terminal NUL), adding as if you were doing it by hand.
Don't forget to add a leading zero to the result string, if you don't overflow into that position.
Note that I'm not going to write the code for you. This is either a learning exercise or a test: either way, you need to do the coding so you can learn from it.
So a bit ago I was warming up by doing some very simple challenges. I came across one on Edabit where you need to write a function that adds the digits of a number and tells whether the resulting number is "Oddish" or "Evenish"
(i.e. oddishOrEvenish(12) -> "Oddish" because 1 + 2 = 3 and 3 is odd),
so I solved it with some simple code:
# include <stdio.h>
# include <stdlib.h>
char* odOrEv(int num);
int main(int argc, char* argv[]) {
printf("%s", odOrEv(12));
}
char* odOrEv(int num) {
char* strnum = (char*) malloc(11);
char* tempchar = (char*) malloc(2); // ik i can declare on one line but this is neater
int total = 0;
sprintf(strnum, "%d", num);
for (int i = 0; i < sizeof strnum; i++) {
tempchar[0] = strnum[i];
total += (int) strtol(tempchar, (char**) NULL, 10);
}
if (total % 2 == 0) return "Evenish";
return "Oddish";
}
and it worked on the first try! Pretty rudimentary, but I did it. I then thought, hey, this is fun, how about I make it better, so I got it down to
# include "includes.h"
char* odOrEv(int num);
int main(int argc, char* argv[]) {
printf("%s", odOrEv(13));
}
char* odOrEv(int num) {
char* strnum = (char*) malloc(11);
int total = 0;
sprintf(strnum, "%d", num);
while (*strnum) total += (int) *strnum++;
return total % 2 == 0 ? "Evenish" : "Oddish";
}
just 5 lines for the function. Since I'm so pedantic, though, I hate that I have to define strnum on a different line than I declare it, since I use sprintf. I've tried searching, but I couldn't find any function to convert an int to a string that I could use while declaring the string (e.g. char* strnum = int2str(num);). So is there any way to cut off that one line?
Sorry if this was too big, I just tried to explain everything.
P.S. Don't tell me to use atoi() or stoi or any of those, since they're bad (big reason, too long to explain). Also I'd prefer not to have to include any more headers, but it's fine if I do.
EDIT: forgot the quote, added it
To be honest, it is one of the weirdest functions I have ever seen in my life.
You do not need strings, dynamic allocation, or heavyweight functions like sprintf or strtol.
char* odOrEv(int num)
{
int sum = 0;
while(num)
{
sum += num % 10;
num /= 10;
}
return sum % 2 == 0 ? "Evenish" : "Oddish";
}
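Usage matches the original main, e.g. printf("%s\n", odOrEv(12)); prints "Oddish", since 1 + 2 = 3 is odd.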
You don't actually have to add the digits. The sum of even digits is always even, so you can ignore them. The sum of an odd number of odd digits is odd, the sum of an even number of odd digits is even. So just loop through the digits, alternating between oddish and evenish every time you see an odd digit.
You can loop through the digits by dividing the number by 10 and then checking whether the number is odd or even.
char *OddorEven(int num) {
int isOdd = 0;
while (num != 0) {
if (num % 2 != 0) {
isOdd = !isOdd;
}
num /= 10;
}
return isOdd ? "Oddish" : "Evenish";
}
I have the following program that converts decimal to binary:
#include <stdio.h>
#include <string.h>
int main() {
printf("Number (decimal): ");
int no;
scanf("%d", &no);
char bin[64];
while (no > 0) {
for (int i = strlen(bin); i > 0; i--) {
bin[i] = bin[i - 1];
}
int bit = no % 2;
char digit = bit + '0';
bin[0] = digit;
no /= 2;
}
printf("%s", bin);
return 0;
}
The program works correctly, but randomly the string "ttime__vdso_get" gets appended on the end.
The numbers that make it happen are different every time I compile.
1: 1
2: 01ttime_vsdo_get
3: 10ttime_vsdo_get
It becomes a little different when the numbers get bigger:
100039: 11000011011000111ttime__vdso_getm#
10000000000000000000000000000: ttime
What is happening?
If I had to diagnose it, I'd say that I've managed to write a program that compiles but pulls memory from the wrong places. I don't know how I managed to do it, though.
I'm using GCC, if it matters.
Just do char bin[64] = "";, and never forget that a valid string is NUL-terminated.
And strlen() returns a size_t!
I can also advise you to use char bin[sizeof no * CHAR_BIT + 1] = ""; (CHAR_BIT comes from <limits.h>), which uses the correct maximum size for your string.
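Putting both fixes together, a minimal corrected sketch of the program from the question (input 0 still prints an empty string, as in the original):
#include <stdio.h>
#include <string.h>
#include <limits.h>
int main(void) {
    printf("Number (decimal): ");
    int no;
    if (scanf("%d", &no) != 1)
        return 1;
    char bin[sizeof no * CHAR_BIT + 1] = ""; /* zero-initialized, so strlen(bin) starts at 0 */
    while (no > 0) {
        for (size_t i = strlen(bin); i > 0; i--)
            bin[i] = bin[i - 1];             /* shift existing digits right */
        bin[0] = (char)('0' + no % 2);
        no /= 2;
    }
    printf("%s\n", bin);
    return 0;
}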
It may be because of this line of code:
for (int i = strlen(bin); i > 0; i--) {
bin[i] = bin[i - 1];
}
Try replacing strlen(bin) with 63.
It may also be a good idea to initialize your array bin with 0s.
Try filling the variable
char bin[64]
with 0s.
char decimalToHexadecimal(long int decimalNumber)
{
long int quotient;
long int remainder;
static char hexDecNum[100];
int i=0;
quotient = decimalNumber;
while (quotient != 0)
{
remainder = quotient % 16;
// to convert integer into character
if (remainder < 10)
{
remainder = remainder + 48;
}
else
{
remainder = remainder + 55;
}
hexDecNum[i++] = remainder;
quotient = quotient / 16;
}
}
This user-defined function converts a decimal number to a hexadecimal number.
I wanted to write a function that does not use any library functions like printf, scanf, etc., and to get the hexadecimal value of a decimal number returned from this function.
But I am confused: how do I return the hexadecimal number from this function?
It's better to use an explicit string of the desired character set; that way you remove any assumptions about the encoding, which is very good for portability.
Although C requires that the digits 0 through 9 are encoded using adjacent code points (i.e. '1' - '0' must equal 1, and so on), no such guarantee is made for the letters.
Also, returning a string requires heap allocation, a static buffer (which makes the function harder to use), or simply accepting the string from the caller which is often the best choice.
It's important to realize that the typical technique of extracting the remainder from division by 16 (or just masking out the four right-most bits) generates bits "from the right", whereas typical string-building runs from the left. This has to be taken into account as well, or you'd generate "d00f" when given 0xf00d.
Here's how it could look:
char * num2hex(char *buf, size_t buf_max, unsigned long number)
{
if(buf_max < 2)
return NULL;
char * put = buf + buf_max; // Work backwards.
*--put = '\0'; // Terminate the resulting string.
do
{
if(put == buf)
return NULL;
const unsigned int digit = number & 15;
*--put = "0123456789abcdef"[digit];
number >>= 4;
} while(number != 0);
return put;
}
This function returns the start of the built string (which is "right-aligned" in the provided buffer, so it's not at the start of it), or NULL if it runs out of space.
Note: yeah, this is perhaps a bit too terse; of course the digit set could be extracted out and given a name, but its purpose is pretty obvious, and indexing a string literal is handy (some people don't seem to realize it's doable) and somewhat instructive to show off.
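For instance, calling it could look like this (a quick usage sketch; the buffer size and value are my own):
#include <stdio.h>
int main(void)
{
    char buf[2 * sizeof(unsigned long) + 1]; /* worst case: one char per nibble, plus NUL */
    const char *hex = num2hex(buf, sizeof buf, 0xf00dUL);
    if (hex != NULL)
        puts(hex); /* prints "f00d" */
    return 0;
}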
Read the chapter about strings in your C textbook.
One solution is to return a pointer to char:
#include <stdio.h>

char *decimalToHexadecimal(long int decimalNumber)
{
long int quotient;
long int remainder;
static char hexDecNum[100]; // must be static
quotient = decimalNumber;
int i = 0; // <<<< you forgot to declare i
while (quotient != 0)
{
remainder = quotient % 16;
// to convert integer into character
if (remainder < 10)
{
remainder = remainder + 48;
}
else
{
remainder = remainder + 55;
}
hexDecNum[i++] = remainder;
quotient = quotient / 16;
}
hexDecNum[i] = 0; // <<< you forgot the NUL string terminator
return hexDecNum; // <<< you forgot to return something
}
int main()
{
printf("%s\n", decimalToHexadecimal(0x1234));
}
The hexDecNum buffer must be static, because you are returning a pointer to this buffer, and this buffer will cease to exist once you have returned from the decimalToHexadecimal function, because it's a local variable. Modern compilers will usually emit a warning if you return the address of a local variable.
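One more caveat that comes with the static buffer: every call shares it, so two calls in the same expression, e.g. printf("%s %s\n", decimalToHexadecimal(1), decimalToHexadecimal(2)), would print the same digits twice.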
The function is still not quite what you want. I leave it as an exercise to correct it.
Edit
Another approach for the conversion is this: the decimal number is actually represented as a binary number (as all numbers are, BTW), so we don't even need division; we can just decompose the number into 4-bit nibbles (0000 to 1111) and transform these nibbles into hexadecimal digits (0..9, A..F):
char *decimalToHexadecimal(long int decimalNumber)
{
static char hexDecNum[100];
int i;
for (i = 0; i < sizeof(decimalNumber) * 2; i++)
{
int digit = decimalNumber & 0xf;
if (digit >= 10)
digit += 'A' - 10; // better than writing 55
else
digit += '0'; // better than writing 48
hexDecNum[i] = digit;
decimalNumber >>= 4;
}
hexDecNum[i] = 0;
return hexDecNum;
}
The function suffers from the same problem as your original function. Improving it is left as an exercise.
Here I have created a string, and I am storing the binary value of a number in it. I want to store the value of the variable num in the string.
i contains the length of the binary representation of the given decimal number. Suppose the given number is A = 6; then i is 3, and I need the string result to hold '110', which is the binary value of 6.
char* result = (char *)malloc((i)* sizeof(char));
i--;
while(A>=1)
{
num=A%2;
result[i]=num; // here I need to store the value of num in the string
A=A/2;
i--;
}
It appears from the code you've posted that what you are trying to do is print a number in binary with a fixed precision. Assuming that's what you want to do, something like
unsigned int mask = 1 << (i - 1);
unsigned int pos = 0;
while (mask != 0) {
result[pos] = (A & mask) == 0 ? '0' : '1';
++pos;
mask >>= 1;
}
result[pos] = 0; //If you need a null terminated string
Edge cases are left as an exercise for the reader.
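For instance, wrapped into a complete program with the question's values (A = 6, width i = 3), a quick sketch:
#include <stdio.h>
int main(void)
{
    unsigned int A = 6;
    unsigned int i = 3; /* fixed precision, as in the question */
    char result[33];
    unsigned int mask = 1u << (i - 1);
    unsigned int pos = 0;
    while (mask != 0) {
        result[pos] = (A & mask) == 0 ? '0' : '1';
        ++pos;
        mask >>= 1;
    }
    result[pos] = 0;
    puts(result); /* prints "110" */
    return 0;
}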
I'm not sure specifically what you are asking for. Do you mean the binary representation (i.e. 00001000) of a number written into a string or converting the variable to a string (i.e. 8)? I'll assume you mean the first.
The easiest way to do this is to repeatedly test the least significant bit and shift the value to the right (>>). We can do this in a for loop. However, you will need to know how many bits to read. We can compute this with sizeof and CHAR_BIT from <limits.h>, since sizeof gives the size in bytes, not bits.
int i = 15;
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1); /* uint8_t comes from <stdint.h> */
    i >>= 1;
}
So how do we turn this iteration into a string? We need to construct the string in reverse. We know how many bits are needed, so we can create a string buffer accordingly with an extra byte for NULL termination.
char *buffer = calloc(sizeof(i) * CHAR_BIT + 1, sizeof(char));
What this does is allocate memory that is sizeof(i) * CHAR_BIT + 1 elements long, where each element is sizeof(char), and then zero every element. Now let's put the bits into the string.
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1);
    size_t offset = sizeof(i) * CHAR_BIT - 1 - b;
    buffer[offset] = '0' + bit_value;
    i >>= 1;
}
So what's happening here? In each pass we calculate the offset in the buffer that we should write to, and then we add bit_value to the character '0' as we write it into the buffer.
This code is untested and may have some issues, but that is left as an exercise to the reader. If you have any questions, let me know!
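For reference, the two fragments combine into something like this (a compilable sketch):
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
int main(void)
{
    int i = 15;
    size_t bits = sizeof(i) * CHAR_BIT;
    char *buffer = calloc(bits + 1, sizeof(char));
    if (buffer == NULL)
        return 1;
    for (size_t b = 0; b < bits; ++b) {
        uint8_t bit_value = (i & 0x1);
        buffer[bits - 1 - b] = '0' + bit_value;
        i >>= 1;
    }
    puts(buffer); /* prints 00000000000000000000000000001111 for a 32-bit int */
    free(buffer);
    return 0;
}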
Here is the whole code. It should work fine.
int i=0;
int A; //supposed entered by user
int tmp=A; //work on a copy: the length loop below destroys its operand
//calculating the value of i
while(tmp!=0)
{
    tmp=tmp/2;
    i++;
}
char* result=(char *)malloc(sizeof(char)*(i+1)); //+1 for the NUL terminator
result[i]='\0';
i--;
while(A!=0)
{
    result[i]='0'+(A%2);
    A=A/2;
    i--;
}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
char *numToBinStr(int num){
static char bin[sizeof(int) * CHAR_BIT + 1];//static: zero-initialized, so the last byte is already '\0'
char *p = &bin[sizeof(int) * CHAR_BIT];//p point to end
unsigned A = (unsigned)num;
do {
*--p = '0' + (A & 1);
A >>= 1;
}while(A > 0);//do-while for case value of A is 0
return p;
}
int main(void){
printf("%s\n", numToBinStr(6));
//To duplicate, if necessary
//char *bin = strdup(numToBinStr(6));
char *result = numToBinStr(6);
char *bin = malloc(strlen(result) + 1);
strcpy(bin, result);
printf("%s\n", bin);
free(bin);
return 0;
}
You could use one of these functions:
itoa() (non-standard, but often provided in <stdlib.h>) or sprintf() (declared in <stdio.h>, not <stdlib.h>)
The documentation for the second has some examples as well.
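For example, with sprintf() (a minimal sketch):
#include <stdio.h>
int main(void)
{
    char strnum[12]; /* enough for any 32-bit int, sign and NUL included */
    int num = 13;
    sprintf(strnum, "%d", num); /* strnum now holds "13" */
    puts(strnum);
    return 0;
}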