Lately I had a task that included printing the base-4 representation of a number. Since I didn't find a function to do it for me, I implemented it myself (which is not so hard, of course), but I wonder: is there a way to do it using format placeholders?
I'm not asking how to implement such a function, but whether such a function / format placeholder already exists.
There is no standard C or C++ function, but you may be able to use itoa
The closest you could get to doing it with printf is using snprintf to convert it to hex, then a lookup table to convert hex digits to pairs of base-4 digits. :-)
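In case it helps, here is a minimal sketch of that two-step idea: snprintf to hex, then a lookup table that expands each hex digit into two base-4 digits. The helper name to_base4 and the buffer sizes are my own choices, not an existing API:

#include <stdio.h>
#include <string.h>

static void to_base4(unsigned n, char *out, size_t outsz)
{
    /* each hex digit corresponds to exactly two base-4 digits */
    static const char *pairs[16] = {
        "00", "01", "02", "03", "10", "11", "12", "13",
        "20", "21", "22", "23", "30", "31", "32", "33"
    };
    char hex[2 * sizeof n + 1];
    size_t pos = 0;

    snprintf(hex, sizeof hex, "%X", n);
    for (const char *p = hex; *p && pos + 2 < outsz; ++p) {
        int d = (*p >= 'A') ? *p - 'A' + 10 : *p - '0';
        out[pos++] = pairs[d][0];
        out[pos++] = pairs[d][1];
    }
    out[pos] = '\0';
    if (pos > 1 && out[0] == '0')      /* drop the single leading zero the pairing can produce */
        memmove(out, out + 1, pos);    /* pos bytes include the terminator */
}

int main(void)
{
    char buf[2 * sizeof(unsigned) * 2 + 1];
    to_base4(100u, buf, sizeof buf);
    printf("%s\n", buf);               /* 100 decimal prints as 1210 */
    return 0;
}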
No, not in the Standard C library.
I think that printf can handle only decimal, hexadecimal, and octal values, so I think the answer is no.
Beginner programmer here.
Is there a way to read octal values and output them in decimal or binary without using functions from the math.h library? I have an idea of using a loop to achieve this, but I haven't the faintest clue what it should look like or which loop to use. I appreciate your help. Thank you.
stdio.h functions are allowed
math.h functions are not allowed
Everyone, I have found a solution. I used the decimal-to-octal code and switched the values of the bases.
while loop for decimal to octal:

int rem1, ans1 = 0, place_value1 = 1;
while (input) {
    rem1 = input % 8;                  /* next octal digit, least significant first */
    input = input / 8;
    ans1 = ans1 + rem1 * place_value1;
    place_value1 = place_value1 * 10;  /* shift to the next decimal place */
}
while loop for octal to decimal:

int rem1, ans1 = 0, place_value1 = 1;
while (input) {
    rem1 = input % 10;                 /* next digit of the octal representation */
    input = input / 10;
    ans1 = ans1 + rem1 * place_value1;
    place_value1 = place_value1 * 8;   /* powers of 8 */
}
For octal to binary, simply take the answer from the octal-to-decimal loop and feed it through a decimal-to-binary loop (sketched below). Thank you everyone for helping.
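A minimal sketch of that last decimal-to-binary step, using the same digit-building idea as the loops above; the variable names and the example value are illustrative:

#include <stdio.h>

int main(void)
{
    int input = 83;                         /* example value */
    long long ans = 0, place_value = 1;     /* only works for small inputs; a 31-bit value
                                               written in 0s and 1s would overflow long long */
    while (input) {
        ans = ans + (input % 2) * place_value;  /* next binary digit */
        input = input / 2;
        place_value = place_value * 10;         /* shift one decimal place */
    }
    printf("%lld\n", ans);                  /* prints 1010011 for 83 */
    return 0;
}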
I'm writing a program in CAPL (which is based on C, minus some concepts) to convert a string containing a number displayed in scientific notation to a float (it doesn't strictly need to be a float, but I think it's an appropriate type for this). For example:
-7.68000000E-06 should be converted to -0.00000768
I've done some looking around for this, and atof() comes up a lot, but it is not supported in CAPL, so I cannot use it.
A list of the other C concepts not supported in CAPL:
Update: Thanks everyone for the help. M. Spiller's answer proved to be the easiest solution. I have accepted this answer.
In CAPL the function is called atodbl with the same signature as C's atof.
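If neither atof nor atodbl were available, a small hand-rolled parser would also work. Below is a minimal sketch in plain C (not CAPL syntax) that handles only an optional sign, a fractional part, and an E exponent; the function name parse_sci is illustrative:

#include <stdio.h>

static double parse_sci(const char *s)
{
    double sign = 1.0, value = 0.0, scale = 0.1;
    int exp_sign = 1, exponent = 0;

    if (*s == '+' || *s == '-')
        sign = (*s++ == '-') ? -1.0 : 1.0;

    for (; *s >= '0' && *s <= '9'; ++s)           /* integer part */
        value = value * 10.0 + (*s - '0');

    if (*s == '.') {
        for (++s; *s >= '0' && *s <= '9'; ++s) {  /* fractional part */
            value += (*s - '0') * scale;
            scale *= 0.1;
        }
    }

    if (*s == 'e' || *s == 'E') {                 /* exponent part */
        ++s;
        if (*s == '+' || *s == '-')
            exp_sign = (*s++ == '-') ? -1 : 1;
        for (; *s >= '0' && *s <= '9'; ++s)
            exponent = exponent * 10 + (*s - '0');
    }

    while (exponent--)                            /* apply the exponent */
        value = (exp_sign < 0) ? value / 10.0 : value * 10.0;

    return sign * value;
}

int main(void)
{
    printf("%.8f\n", parse_sci("-7.68000000E-06"));  /* prints -0.00000768 */
    return 0;
}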
I have a requirement, part of which needs conversion from decimal to hex.
I prefer to do it like this:
sprintf(mcc, "%x", n);
n here will be an integer.
But my peer says it is not a good way to do it.
Instead, he says to convert the decimal manually with my own function,
but I did not get a good explanation of why not to use sprintf.
Are there any problems with the above method I am using?
Should I go for a function which manually converts the decimal number into hex?
As long as you make sure that the buffer pointed to by mcc is big enough for the resulting string, I see no problem.
If you're on a machine which supports the GNU extensions then you can use asprintf as well. This will allocate heap memory, store the string in it and then give you a pointer to this. Just make sure you free the string when you're done with it.
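A short sketch of that approach on a GNU system (mcc and n echo the names from the question; the example value is arbitrary):

#define _GNU_SOURCE        /* asprintf is a GNU extension, not standard C */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 48879;
    char *mcc = NULL;
    if (asprintf(&mcc, "%x", (unsigned)n) != -1) {  /* allocates a big-enough buffer */
        printf("%s\n", mcc);                        /* prints beef */
        free(mcc);                                  /* caller must free it */
    }
    return 0;
}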
The problem with sprintf is that it's hard to guarantee your buffer is big enough. Use snprintf instead.
In this case the maximum possible output length is small and easily calculated, but it's not always this easy, and my guess is that this is why your friend told you not to use sprintf.
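For illustration, one way to size the buffer from the type itself and let snprintf guarantee no overrun (a sketch; the names follow the question):

#include <stdio.h>

int main(void)
{
    int n = 48879;
    /* worst case for %x on an 8-bit-byte machine: two hex digits per byte, plus the terminator */
    char mcc[sizeof n * 2 + 1];
    snprintf(mcc, sizeof mcc, "%x", (unsigned)n);   /* truncates rather than overruns */
    printf("%s\n", mcc);                            /* prints beef */
    return 0;
}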
In the C language where did they come up with the name atoi for converting a string to an integer? The only thing I can think of is Array To Integer for an acronym but that doesn't really make sense.
It means Ascii to Integer. Likewise, you can have atol for Ascii to Long, atof for Ascii to Float, etc.
A Google search for 'atoi "ascii to integer"' confirms this on several pages.
I'm having trouble finding any official source on it... but in this listing of man pages from Third Edition Unix (1973) collected by Dennis Ritchie himself, it does contain the line:
atoi(III): convert ASCII to integer
In fact, even the first edition Unix (ca 1971) man pages list atoi as meaning Ascii to Integer.
So even if there isn't any documentation more official than man pages indicating that atoi means Ascii to Integer (I suspect there is and I just haven't been able to locate it), it's been Ascii to Integer by convention at least since 1971.
Briefly: I believe the function atoi means ASCII to integer.
I have a lot of #define's in my code. Now a weird problem has crept up.
I have this:
#define _ImmSign 010100
(I'm trying to simulate a binary number)
Obviously, I expect the number to be 10100. But when I use the number, it has turned into 4160.
What is happening here? And how do I stop it?
ADDITIONAL
Okay, so this is due to the language interpreting the literal as octal. Is there some smart way, however, to force the language to interpret the number as decimal? Now that I think of it, a leading 0 means octal and 0x means hexadecimal...
Integer literals starting with a 0 are interpreted as octal, not decimal, in the same way that integer literals starting with 0x are interpreted as hexadecimal.
Remove the leading zero and you should be good to go.
Note also that identifiers beginning with an underscore followed by a capital letter or another underscore are reserved for the implementation, so you shouldn't define them in your code.
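A tiny demonstration of both points (IMM_SIGN is just an example of a non-reserved name):

#include <stdio.h>

#define IMM_SIGN 10100   /* no leading zero, and no leading underscore + capital letter */

int main(void)
{
    printf("%d %d\n", 010100, IMM_SIGN);  /* prints 4160 10100 */
    return 0;
}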
Prefixing an integer with 0 makes it an octal number instead of decimal, and 010100 in octal is 4160 in decimal.
There is no binary number syntax in C, at least without some compiler extension. What you see is 010100 interpreted as an octal (base 8) number: it is done when a numeric literal begins with 0.
010100 is treated as octal by C because of the leading 0. Octal 10100 is 4160.
Check this out; it has some macros for using binary numbers in C:
http://www.velocityreviews.com/forums/t318127-using-binary-numbers-in-c.html
There is another thread that covers this as well:
Can I use a binary literal in C or C++?
If you are willing to write non-portable code and use gcc, you can use the binary constants extension:
#define _ImmSign 0b010100
Octal :-)
You may find these macros helpful for representing binary numbers as decimal or octal numbers written with only 1s and 0s. They do handle leading zeros, but unfortunately you have to pick the correct macro name depending on whether the number has a leading zero or not. Not perfect, but hopefully helpful; a sketch of the idea follows.
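A hedged sketch of the usual macro trick: write the bits as a literal made only of 0s and 1s, paste it into a hex constant, and fold each hex digit back into a single bit. The macro names HEX__, B8__, and B8 are illustrative, not a standard API:

#include <stdio.h>

#define HEX__(n) 0x##n##LU
#define B8__(x) ( ((x & 0x0000000FLU) ? 1 : 0)   \
                | ((x & 0x000000F0LU) ? 2 : 0)   \
                | ((x & 0x00000F00LU) ? 4 : 0)   \
                | ((x & 0x0000F000LU) ? 8 : 0)   \
                | ((x & 0x000F0000LU) ? 16 : 0)  \
                | ((x & 0x00F00000LU) ? 32 : 0)  \
                | ((x & 0x0F000000LU) ? 64 : 0)  \
                | ((x & 0xF0000000LU) ? 128 : 0) )
#define B8(d) ((unsigned char)B8__(HEX__(d)))

int main(void)
{
    printf("%d\n", B8(00010100));  /* prints 20, i.e. binary 00010100 */
    return 0;
}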