I have the following program that converts decimal to binary:
#include <stdio.h>
#include <string.h>

int main() {
    printf("Number (decimal): ");
    int no;
    scanf("%d", &no);
    char bin[64];
    while (no > 0) {
        for (int i = strlen(bin); i > 0; i--) {
            bin[i] = bin[i - 1];
        }
        int bit = no % 2;
        char digit = bit + '0';
        bin[0] = digit;
        no /= 2;
    }
    printf("%s", bin);
    return 0;
}
The program works correctly, but randomly the string "ttime__vdso_get" gets appended on the end.
The numbers that make it happen are different every time I compile.
1: 1
2: 01ttime_vsdo_get
3: 10ttime_vsdo_get
It becomes a little different when the numbers get bigger:
100039: 11000011011000111ttime__vdso_getm#
10000000000000000000000000000: ttime
What is happening?
If I had to diagnose it I'd say that I've managed to make a compiling program that's pulling memory from the wrong places. I don't know how I managed to do it, though.
I'm using GCC, if it matters.
Just do char bin[64] = "";, and never forget that a valid string is nul-terminated.
Also note that strlen() returns a size_t, not an int.
I would further advise char bin[sizeof no * CHAR_BIT + 1] = ""; (CHAR_BIT comes from <limits.h>), which gives the buffer the correct maximum size for your string.
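Putting those suggestions together, a minimal corrected sketch of the program from the question could look like this (same shifting algorithm; the only real change is the buffer definition):

#include <stdio.h>
#include <string.h>
#include <limits.h>   /* CHAR_BIT */

int main(void) {
    printf("Number (decimal): ");
    int no;
    scanf("%d", &no);

    /* Big enough for every bit of an int plus the terminator, and
       zero-initialized so that strlen() is well defined from the start. */
    char bin[sizeof no * CHAR_BIT + 1] = "";

    while (no > 0) {
        for (size_t i = strlen(bin); i > 0; i--) {
            bin[i] = bin[i - 1];
        }
        int bit = no % 2;
        char digit = bit + '0';
        bin[0] = digit;
        no /= 2;
    }
    printf("%s\n", bin);
    return 0;
}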
It may be because of this line of code:

for (int i = strlen(bin); i > 0; i--) {
    bin[i] = bin[i - 1];
}

strlen(bin) is called on an uninitialized buffer here, so it reads whatever garbage happens to be in bin. Try replacing strlen(bin) with 63.
It may also be a good idea to initialize your array bin with 0s.
Try filling the variable char bin[64] with 0s.
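For example, either form below leaves bin holding a valid empty string before the conversion loop starts:

#include <stdio.h>
#include <string.h>

int main(void) {
    char bin[64] = {0};              /* every element zeroed at definition */
    /* or, for an array that already exists: memset(bin, 0, sizeof bin); */
    printf("length is now %zu\n", strlen(bin));   /* prints 0: a valid empty string */
    return 0;
}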
So a bit ago I was warming up and doing some very simple challenges. I came across one on edabit where you need to make a function that adds up the digits of a number and tells whether the result is "Oddish" or "Evenish"
(i.e. oddishOrEvenish(12) -> "Oddish" because 1 + 2 = 3 and 3 is odd)
so I solved it with some simple code
#include <stdio.h>
#include <stdlib.h>

char* odOrEv(int num);

int main(int argc, char* argv[]) {
    printf("%s", odOrEv(12));
}

char* odOrEv(int num) {
    char* strnum = (char*) malloc(11);
    char* tempchar = (char*) malloc(2); // ik i can declare on one line but this is neater
    int total = 0;
    sprintf(strnum, "%d", num);
    for (int i = 0; i < sizeof strnum; i++) {
        tempchar[0] = strnum[i];
        total += (int) strtol(tempchar, (char**) NULL, 10);
    }
    if (total % 2 == 0) return "Evenish";
    return "Oddish";
}
and it worked first try! Pretty rudimentary, but I did it. I then thought, hey, this is fun, how about I make it better? So I got it down to
#include "includes.h"

char* odOrEv(int num);

int main(int argc, char* argv[]) {
    printf("%s", odOrEv(13));
}

char* odOrEv(int num) {
    char* strnum = (char*) malloc(11);
    int total = 0;
    sprintf(strnum, "%d", num);
    while (*strnum) total += (int) *strnum++;
    return total % 2 == 0 ? "Evenish" : "Oddish";
}
just 5 lines for the function. Since I'm so pedantic, though, I hate that I have to initialize strnum on a separate line from its declaration because I use sprintf. I've tried searching, but I couldn't find any function to convert an int to a string that I could use while declaring the string (e.g. char* strnum = int2str(num);). So is there any way to cut off that one line?
Sorry if this was too big, I just tried to explain everything.
P.S. Don't tell me to use atoi() or stoi or any of those, since they're bad (big reason, too long to explain). I'd also prefer not to have to include anything more, but it's fine if I do.
EDIT: forgot the quote, added it.
To be honest, it is one of the weirdest functions I have ever seen in my life.
You do not need strings, dynamic allocation, or monster functions like sprintf or strtol.
char* odOrEv(int num)
{
    int sum = 0;
    while (num)
    {
        sum += num % 10;
        num /= 10;
    }
    return sum % 2 == 0 ? "Evenish" : "Oddish";
}
You don't actually have to add the digits. The sum of even digits is always even, so you can ignore them. The sum of an odd number of odd digits is odd, the sum of an even number of odd digits is even. So just loop through the digits, alternating between oddish and evenish every time you see an odd digit.
You can loop through the digits by dividing the number by 10 and then checking whether the number is odd or even.
char *OddorEven(int num) {
    int isOdd = 0;
    while (num != 0) {
        if (num % 2 != 0) {
            isOdd = !isOdd;
        }
        num /= 10;
    }
    return isOdd ? "Oddish" : "Evenish";
}
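A quick way to try it out (a throwaway main; it assumes the function above is in the same file):

#include <stdio.h>

char *OddorEven(int num);            /* the function shown above */

int main(void) {
    printf("%s\n", OddorEven(12));   /* 1 + 2 = 3 -> Oddish  */
    printf("%s\n", OddorEven(13));   /* 1 + 3 = 4 -> Evenish */
    printf("%s\n", OddorEven(0));    /* no digits processed -> Evenish */
    return 0;
}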
As a C fresher, I am trying to write a recursive routine to convert a decimal number to the equivalent binary. However, the resulting string in the output is not correct. I think it has to be related to the type casting from int to char. I am not able to find a satisfactory solution. Can anyone help? Thanks in advance.
Code:
#include <stdio.h>
#include <conio.h>

int decimal, counter = 0;
char* binary_string = (char*)calloc(65, sizeof(char));

void decimal_to_binary(int);

int main()
{
    puts("\nEnter the decimal number : ");
    scanf("%d", &decimal);
    decimal_to_binary(decimal);
    *(binary_string + counter) = '\0';
    printf("Counter = %d\n", counter);
    puts("The binary equivalent is : ");
    puts(binary_string);
    return 0;
}

void decimal_to_binary(int number)
{
    if (number == 0)
        return;
    else
    {
        int temp = number % 2;
        decimal_to_binary(number/2);
        *(binary_string + counter) = temp;
        counter++;
    }
}
Should the casting store only the LSB of int in the char array each time?
Do not use global variables unless absolutely necessary. Changing a global variable inside the function makes it far less reusable.
#include <stdio.h>

char *tobin(char *buff, unsigned num)
{
    if (num / 2) buff = tobin(buff, num / 2);
    buff[0] = '0' + num % 2;
    buff[1] = 0;
    return buff + 1;
}

int main(void)
{
    char buff[65];
    unsigned num = 0xf1;

    tobin(buff, num);
    printf("%s\n", buff);
}
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>

int decimal, counter = 0;

//char* binary_string = (char*)calloc(65, sizeof(char));
//C does not allow initialization of global variables with non-constant values.
//Instead declare a static char array with 65 elements.
//Alternatively declare binary_string in the main function and allocate memory with calloc.
char binary_string[65];

void decimal_to_binary(int);

int main()
{
    puts("\nEnter the decimal number : ");
    scanf("%d", &decimal);
    decimal_to_binary(decimal);

    //*(binary_string + counter) = '\0';
    //This is more readable:
    binary_string[counter] = '\0';

    printf("Counter = %d\n", counter);
    puts("The binary equivalent is : ");
    puts(binary_string);
    return 0;
}

void decimal_to_binary(int number)
{
    if (number == 0)
        return;
    else
    {
        int temp = number % 2;

        //*(binary_string + counter) = temp;
        //This is more readable:
        //binary_string[counter] = temp;
        //But either way you are storing the literal value temp, which is either 0 or 1.
        //If it is 0, you are effectively writing a \0 (null character), which in C marks the end of a string.
        //You want the *character* that represents the value of temp.
        //In ASCII, the value of the *character* '0' is 0x30 and of '1' it is 0x31.
        binary_string[counter] = 0x30 + temp;
        counter++;

        //Note that the recursive call now comes after the write and the increment,
        //so the digits are produced least significant bit first (more on that below).
        decimal_to_binary(number/2);
    }
}
If you compile this, run the resulting executable and enter 16 as a number, you may expect to get 10000 as output. But you get 00001. Why is that?
You are writing the binary digits to the string in the wrong order.
The first binary digit you calculate is the least significant bit, which you write to the first character in the string, and so on.
To fix that as well, you can do:
void decimal_to_binary(int number){
    if (number == 0){
        return;
    }
    else{
        int temp = number % 2;
        counter++;
        //Store the position of the current digit
        int pos = counter;
        //Don't write it to the string yet
        decimal_to_binary(number/2);
        //Now we know how many characters are needed and we can fill the string in reverse order.
        //The first digit (where pos = 1) goes to the last character in the string (counter - pos),
        //the second digit (where pos = 2) goes to the second last character, and so on.
        binary_string[counter - pos] = 0x30 + temp;
    }
}
This is not the most efficient way, but it is closest to your original solution.
Also note that this breaks for negative numbers (consider decimal = -1, -1 % 2 = -1).
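One way to extend the same recursive idea to negative input (my sketch, not part of the answer above) is to write the sign separately and recurse on an unsigned copy of the magnitude, so that n % 2 is never negative:

#include <stdio.h>

static char binary_string[65];   /* sign + up to 32 bits + terminator */
static int counter = 0;

/* Recurse first, write afterwards: the most significant bit ends up first. */
static void magnitude_to_binary(unsigned n)
{
    if (n == 0)
        return;
    magnitude_to_binary(n / 2);
    binary_string[counter++] = '0' + (n % 2);
}

static void decimal_to_binary_signed(int number)
{
    if (number < 0) {
        binary_string[counter++] = '-';
        /* 0u - (unsigned)number is well defined even for INT_MIN */
        magnitude_to_binary(0u - (unsigned)number);
    } else if (number == 0) {
        binary_string[counter++] = '0';
    } else {
        magnitude_to_binary((unsigned)number);
    }
    binary_string[counter] = '\0';
}

int main(void)
{
    decimal_to_binary_signed(-6);
    puts(binary_string);   /* prints -110 */
    return 0;
}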
Here I have created a string and I am storing the binary value of a number in it. I want to store the value of the variable num in the string.
i contains the length of the binary representation of the given decimal number. Suppose the given number is A = 6; then i contains 3, and I need a string result holding "110", which is the binary value of 6.
char* result = (char *)malloc((i) * sizeof(char));
i--;
while (A >= 1)
{
    num = A % 2;
    result[i] = num; // here I need to store the value of num in the string
    A = A / 2;
    i--;
}
It appears from the code you've posted that what you are trying to do is print a number in binary with a fixed precision. Assuming that's what you want to do, something like
unsigned int mask = 1 << (i - 1);
unsigned int pos = 0;
while (mask != 0) {
    result[pos] = (A & mask) == 0 ? '0' : '1';
    ++pos;
    mask >>= 1;
}
result[pos] = 0; //If you need a null terminated string
Edge cases are left as an exercise for the reader.
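For completeness, here is one way the pieces could be wired together as a self-contained sketch (the bit-counting loop and the variable names are my own choices, picked to match the question):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int A = 6;

    /* i = number of binary digits in A (at least 1, so 0 prints as "0") */
    unsigned int i = 0;
    for (unsigned int t = A; t != 0; t >>= 1)
        i++;
    if (i == 0)
        i = 1;

    char *result = malloc(i + 1);   /* +1 for the terminator */
    if (result == NULL)
        return 1;

    unsigned int mask = 1u << (i - 1);
    unsigned int pos = 0;
    while (mask != 0) {
        result[pos++] = (A & mask) == 0 ? '0' : '1';
        mask >>= 1;
    }
    result[pos] = '\0';

    printf("%s\n", result);         /* prints 110 for A == 6 */
    free(result);
    return 0;
}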
I'm not sure specifically what you are asking for. Do you mean the binary representation (i.e. 00001000) of a number written into a string, or converting the variable to a string (i.e. 8)? I'll assume you mean the first.
The easiest way to do this is to repeatedly test the least significant bit and shift the value to the right (>>). We can do this in a for loop. However, you will need to know how many bits to read: sizeof gives the size in bytes, so multiply it by CHAR_BIT (from <limits.h>, typically 8) to get the number of bits.
int i = 15;
// sizeof(i) gives bytes; multiply by CHAR_BIT (from <limits.h>) to get bits
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1);   // uint8_t comes from <stdint.h>
    i >>= 1;
}
So how do we turn this iteration into a string? We need to construct the string in reverse. We know how many bits are needed, so we can create a string buffer of that size, with an extra byte for the null terminator.
char *buffer = calloc(sizeof(i) * CHAR_BIT + 1, sizeof(char));
What this does is allocate a buffer that is sizeof(i) * CHAR_BIT + 1 elements long, where each element is sizeof(char), and zero every element. Now let's put the bits into the string.
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1);
    size_t offset = sizeof(i) * CHAR_BIT - 1 - b;
    buffer[offset] = '0' + bit_value;
    i >>= 1;
}
So what's happening here? In each pass we calculate the offset in the buffer that we should write to, and then we add the ASCII value of '0' to bit_value as we write it into the buffer.
This code is untested and may have some issues, but that is left as an exercise to the reader. If you have any questions, let me know!
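Assembled into a complete program, the idea might look like this (still a sketch; shifting an unsigned copy instead of i itself is my addition, so we never right-shift a negative value):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
    int i = 15;
    const size_t nbits = sizeof(i) * CHAR_BIT;       /* e.g. 32 */

    char *buffer = calloc(nbits + 1, sizeof(char));  /* zeroed, +1 for '\0' */
    if (buffer == NULL)
        return 1;

    unsigned int v = (unsigned int)i;
    for (size_t b = 0; b < nbits; ++b) {
        uint8_t bit_value = (uint8_t)(v & 0x1);
        size_t offset = nbits - 1 - b;               /* fill from the right */
        buffer[offset] = '0' + bit_value;
        v >>= 1;
    }

    printf("%s\n", buffer);   /* 15 padded to nbits binary digits */
    free(buffer);
    return 0;
}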
Here is the whole code. It should work fine.

int i = 0;
int A;   // supposed entered by user

// calculate i (the number of binary digits) on a copy, so A keeps its value
int temp = A;
while (temp != 0)
{
    temp = temp / 2;
    i++;
}

char* result = (char *)malloc(sizeof(char) * (i + 1));  // +1 for the terminator
result[i] = '\0';
i--;
while (A != 0)
{
    result[i] = '0' + (A % 2);
    A = A / 2;
    i--;
}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>

char *numToBinStr(int num){
    static char bin[sizeof(int) * CHAR_BIT + 1];
    char *p = &bin[sizeof(int) * CHAR_BIT]; //p points to the end
    unsigned A = (unsigned)num;

    do {
        *--p = '0' + (A & 1);
        A >>= 1;
    } while (A > 0); //do-while so that A == 0 still produces "0"

    return p;
}

int main(void){
    printf("%s\n", numToBinStr(6));

    //To duplicate, if necessary
    //char *bin = strdup(numToBinStr(6));
    char *result = numToBinStr(6);
    char *bin = malloc(strlen(result) + 1);
    strcpy(bin, result);
    printf("%s\n", bin);
    free(bin);
    return 0;
}
You could use itoa() (non-standard, but provided by some compilers via <stdlib.h>) or sprintf() from <stdio.h>.
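For example, a minimal sprintf sketch (the buffer size is my choice, large enough for any 32-bit int, sign and terminator):

#include <stdio.h>

int main(void)
{
    int num = 4095;
    char text[12];                  /* "-2147483648" plus '\0' fits */
    sprintf(text, "%d", num);       /* or snprintf(text, sizeof text, "%d", num) */
    printf("as a string: %s\n", text);
    return 0;
}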
I have this code:
char *sort(char *string){ //shell-sort
    int lnght = length(string) - 1; // length is my own function
    int gap = lnght / 2;
    while (gap > 0)
    {
        for (int i = 0; i < lnght; i++)
        {
            int j = i + gap;
            int tmp = (int)string[j];
            while (j >= gap && tmp > (int)string[j - gap])
            {
                string[j] = string[j - gap]; // code fails here
                j -= gap;
            }
            string[j] = (char)tmp; // and here as well
        }
        if (gap == 2){
            gap = 1;
        }
        else{
            gap /= 2.2;
        }
    }
    return string;
}
The code should sort (shell sort) the characters in the string by their ordinal (ASCII) value. Even though the code is pretty simple, it still fails with a segmentation fault at the lines I've commented. I've spent plenty of time with this code and still can't find the problem.
As you say in a comment, you call your function like this:

char *str = "test string";
sort(str);

A string literal lives in read-only memory and str is just a pointer to it, so it cannot be modified; your function modifies it, which can result in a segmentation fault.
Declare it like this instead:

char str[] = "test string";
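A tiny standalone example of the difference (nothing here is specific to sort(); it only shows which of the two declarations may be modified):

#include <stdio.h>

int main(void)
{
    char *literal = "test string";   /* pointer to a string literal: do not modify */
    char copy[]   = "test string";   /* writable array initialized from the literal */

    /* literal[0] = 'T';    <-- undefined behaviour, typically a segfault */
    copy[0] = 'T';                   /* fine: the array is ours to change */

    printf("%s\n", copy);            /* prints "Test string" */
    return 0;
}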
In situations like this look at your statements not so much as executable code, but as mathematical boundary conditions. I've replaced the monstrous name lnght with length for readability purposes.
Here are the relevant conditions that affect the value of j when entering the while loop, relative to the length.
i < length;
gap = length / 2;
j = i + gap;
Now we plug in a value. Consider the case where length == 10. Then presumably the maximum index in your array is 9 which is also the highest value that i can take on.
Then we also have that gap == 5 and so after entering the while loop j == i + gap == 9 + 5. Clearly 9 + 5 > 10. The rest is left as an exercise to the programmer.
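For reference, one bounds-safe way to write the same descending shell sort (a sketch of the usual gap-insertion formulation, using strlen in place of the custom length(); it is not the only possible fix) is to start i at gap and insert string[i] into the gap-sorted part behind it:

#include <stdio.h>
#include <string.h>

/* Descending shell sort, matching the ordering of the original code. */
char *sort_desc(char *string)
{
    size_t length = strlen(string);

    for (size_t gap = length / 2; gap > 0; gap = (gap == 2) ? 1 : (size_t)(gap / 2.2)) {
        /* i starts at gap, so j - gap is never negative and j stays below length */
        for (size_t i = gap; i < length; i++) {
            char tmp = string[i];
            size_t j = i;
            while (j >= gap && string[j - gap] < tmp) {
                string[j] = string[j - gap];
                j -= gap;
            }
            string[j] = tmp;
        }
    }
    return string;
}

int main(void)
{
    char str[] = "adgfbce";          /* writable array, not a string literal */
    printf("%s\n", sort_desc(str));  /* prints gfedcba */
    return 0;
}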
How do you test your function? With a string literal (i.e. char *buffer = "test string";)?
On the first loop, at least, j and j - gap should be inside the string boundaries, so if you get a segfault I'd guess it is because of a bad string (string literals can't be modified).
Replacing length() with strlen() and calling the function on a properly created, writable test string gives me a valid result:
"adgfbce" → "gfedcba"
I'm struggling with a programming assignment for one of my classes. I'm an electrical engineering student, so my programming is by no means amazing. I'm told to write a C program that takes a 12-bit number and extracts each of that number's digits into a char array. I did the quick math and realized that the largest number we can obtain is 0xFFF, or 4095 in decimal.

I found an algorithm that I thought would work quite well, but for some reason my code isn't doing what I thought. I'm continuing to troubleshoot, but since my only way to run it is a Linux terminal window, I don't have a great debugging utility to step through the program. Any help would be greatly appreciated. Feel free to ask questions as well and I'll do my best to field them. I am not a fluent programmer, so bear that in mind.

Also, if someone could explain integer division to me, that would be helpful. I was under the assumption that something like 8/10 would return a result of 0, but I'm not sure it's working that way when I run the program. Thank you.
I CANNOT USE FUNCTIONS TO DO THIS AND MUST DO IT MANUALLY.
Here is what I have thus far.
Attempted solution:
//12 bit value into string of decimal chars
//EX: 129 -> a '1' a '2' and a '9'
void main (void) {

    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    //Initialize an array with 5 spaces, each space
    //holds one character (accounts for largest number 4095)
    char OUT[5];
    uint8_t length = sizeof(char);

    //Isolating each int value happens here
    //initialize i to act as a counter to loop through array
    uint8_t i = 2;
    //Initialize an input value to test the code
    uint16_t IN = 549;

    while (IN/10 > 0)
    {
        OUT[length-(i+1)] = '0' + (IN%10);
        IN = IN/10;
        if (IN <= 10)
        {
            if (IN = 10)
            {
                OUT[length-(i+1)] = '1';
                //fixes infinite loop issue
                IN = 0;
            }
            else
            {
                OUT[length-(i+1)] = '0' + IN;
                //fixes infinite loop issue
                IN = 0;
            }
        }
        //Increment Counter to keep track of char array
        i++;
    }
    //add the new line at the end of the array of chars
    OUT[length-1] = '\n';
    printf("String is -> %s", OUT);
}
Couple of notes:
Using IN%10 is part of the algorithm that isolates the furthest right digit in decimal. I had to add some "fudge factors" to my counter to get the array to line up properly and account for the \n at the end of my char array. The conditional statements that I put inside my while loop were to catch some edge cases (mainly when IN became 10 or less).
It looks like you are bending over backwards to handle the fact that you have to print the leftmost character first. The logic is much simpler if you just generate the rightmost character first and then reverse it.
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

//12 bit value into string of decimal chars
//EX: 129 -> a '1' a '2' and a '9'
void main (void) {
    char OUT[5];
    memset(OUT, 0, 5);
    int i = 0;
    int j;
    uint16_t IN = 549;

    // Generate the string in reverse order.
    while (IN != 0)
    {
        OUT[i++] = '0' + (IN%10);
        IN /= 10;
    }

    // Reverse the string
    for (j = 0; j < i/2; j++) {
        char temp = OUT[j];
        OUT[j] = OUT[i-1-j];
        OUT[i-1-j] = temp;
    }

    printf("String is -> %s\n", OUT);
}
This line is your problem:
uint8_t length = sizeof(char);
sizeof is an operator that yields the size of its operand in bytes; sizeof(char) is defined to be 1. So length is 1, and when you index the array with length-(i+1) you get a negative value and write outside of OUT.
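A quick check makes the difference obvious (exact sizes can vary by platform, but sizeof(char) is always 1):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char OUT[5];
    printf("sizeof(char)     = %zu\n", sizeof(char));     /* always 1 */
    printf("sizeof(uint16_t) = %zu\n", sizeof(uint16_t)); /* 2 on typical platforms */
    printf("sizeof OUT       = %zu\n", sizeof OUT);       /* 5, which is what was wanted here */
    return 0;
}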
Here is my final solution using the more complex way in case anyone is curious/stumbles into the problem later on.
void main (void) {

    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    //Initialize an array with 5 spaces, each space holds one character (accounts for largest number 4095)
    char OUT[5];
    OUT[0] = '0';
    OUT[1] = '0';
    OUT[2] = '0';
    OUT[3] = '0';
    OUT[4] = '0';
    uint8_t length = 5; //sizeof(char) was giving me an error and thus not used

    //Isolating each int value happens here
    //initialize i to act as a counter to loop through array
    uint8_t i = 2;
    //Initialize an input value to test the code
    uint16_t IN = 4012;
    //Initialize variable for % operator value
    uint16_t mod = 0;

    while (IN/10 > 0)
    {
        mod = IN%10;
        OUT[length-i] = '0' + mod;
        IN = IN/10;
        if (IN <= 10)
        {
            if (IN == 10)
            {
                OUT[length-(i+2)] = '1';
                OUT[length-(i+1)] = '0';
                //fixes infinite loop issue
                IN = 0;
            }
            else
            {
                OUT[length-(i+1)] = '0' + IN;
                //fixes infinite loop issue
                IN = 0;
            }
        }
        //Increment Counter to keep track of char array
        i++;
    }
    //add the new line at the end of the array of chars
    OUT[length-1] = '\n';
    printf("String is -> %s", OUT);
}