Where did the name `atol` come from? [duplicate]

Does anyone know the source of the name of the function atol for converting a string to a long?
I thought it might be "array to long", but that doesn't sound right to me.

ASCII To Long is what atol(3) means (in the early days of Unix, only ASCII was used, and IIRC this was mentioned in the K&R book).
Today we usually use UTF-8 everywhere, but atol still works (since UTF-8 uses the same encoding as ASCII for digits).
On C implementations using another encoding (e.g. EBCDIC), atol should still do what is expected (so atol("345") would give 345), since the C standard requires the digit characters to have consecutive codes. Its implementation might be more complex (or encoding specific).
So today the atol name no longer refers to ASCII. The C11 standard (n1570) does not mention ASCII as mandatory, IIRC. You might rewrite history by reading atol as "anything to long", even though historically it was "ASCII to long".
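
To illustrate, here is a minimal sketch of an atol-style conversion (my_atol is a hypothetical name; the real atol also has to deal with details like overflow, which strtol handles better). It relies only on the guaranteed contiguity of '0'..'9', so it is encoding-agnostic:

#include <ctype.h>

/* Sketch of an atol-style conversion: skip whitespace, read an
   optional sign, then accumulate digits. Relies only on the C
   guarantee that '0'..'9' have consecutive codes, so it works
   under ASCII, UTF-8, or EBCDIC alike. No overflow handling. */
long my_atol(const char *s)
{
    long value = 0;
    int negative = 0;

    while (isspace((unsigned char)*s))
        s++;
    if (*s == '+' || *s == '-')
        negative = (*s++ == '-');
    while (*s >= '0' && *s <= '9')
        value = value * 10 + (*s++ - '0');
    return negative ? -value : value;
}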

It's ASCII to long; the same convention is used for atoi etc.


What do atoi, atol, and stoi stand for? [duplicate]

I understand what these functions do, but I can't guess how their names were formed, except that the last letter comes from the return type.
atoi -> ASCII to integer.
atol -> ASCII to long.
atof -> ASCII to floating.
stoi -> string to integer.
stol -> string to long.
stoll -> string to long long.
stof -> string to float.
stod -> string to double.
stold -> string to long double.
atoi, atol, and atof come from C, and their godfather is most probably Ken Thompson, co-creator of the UNIX operating system and creator of the B programming language, the predecessor of C. The names are mentioned in the first UNIX Programmer's Manual (November 3, 1971), where the manual page's owner field reads "ken", Ken Thompson's username.
stoi, stol, stoll, stof, stod, and stold arrived in C++ with C++11. Consequently, the naming must have been a decision of the C++ committee. The original proposal, N1803, dates back to 2005, and I couldn't find in it why these names were chosen. My guess is that they wanted to keep uniformity with their C "equivalents" mentioned above.
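
For completeness, here are the three C functions in action (all standard C, nothing hypothetical; note that atof returns a double despite the "f" in the name):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int    i = atoi("42");          /* "ASCII" to int    */
    long   l = atol("123456789");   /* "ASCII" to long   */
    double d = atof("3.14");        /* "ASCII" to double */

    printf("%d %ld %f\n", i, l, d);
    return 0;
}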

Converting a Letter to a Number in C [duplicate]

Alright, so pretty simple: I want to convert a letter to a number so that a = 0, b = 1, etc. Now I know I can do
number = letter + '0';
so when I input the letter 'a' it gives me the number 145. My question is: if I run this on a different computer or OS, will it still give me the same number 145 when I input the letter 'a'?
It depends on what character encoding you are using. If you're using the same encoding and compiler on both computers, yes, it will be the same. But if you're using another encoding like EBCDIC on one computer and ASCII on another, you cannot guarantee the results will be the same.
Also, you can use atoi.
If you do not want to use atoi, see: Converting Letters to Numbers in C
It depends on what character encoding you are using.
It is also important to note that if you use ASCII, the value will fit in a byte.
If you are using UTF-8, for example, characters outside the ASCII range won't fit in a byte; they require two or more bytes.
Now, let's assume you make sure you use one specific character encoding; then the value will be the same no matter the system.
Yes, the number used to represent a is defined in the American Standard Code for Information Interchange (ASCII). This is the encoding C compilers on most platforms use by default, so on those systems you will get the same result.
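
For the conversion the question actually wants (a = 0, b = 1, ...), here is a minimal sketch, assuming an ASCII-compatible encoding; note that the C standard guarantees contiguity only for the digits, not for letters, so this breaks on EBCDIC:

#include <stdio.h>

int main(void)
{
    char letter = 'a';

    /* In ASCII (and UTF-8, which matches ASCII here) 'a'..'z' are
       contiguous, so subtracting 'a' maps a->0, b->1, ..., z->25.
       This is NOT guaranteed by the C standard for letters. */
    int number = letter - 'a';

    printf("%d\n", number);  /* prints 0 */
    return 0;
}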

isLetter with accented characters in C

I'd like to create (or find) a C function to check if a char c is a letter...
I can do this for a-z and A-Z easily of course.
However, I get an error when testing c == 'á', 'ã', 'ô', 'ç', 'ë', etc.
Those special characters are probably stored in more than one char...
I'd like to know:
How these special characters are stored, which arguments my function needs to receive, and how to do it?
I'd also like to know whether there is any standard function that already does this.
I think you're looking for the iswalpha() routine:
#include <wctype.h>
int iswalpha(wint_t wc);
DESCRIPTION
The iswalpha() function is the wide-character equivalent of
the isalpha(3) function. It tests whether wc is a wide
character belonging to the wide-character class "alpha".
It does depend upon the LC_CTYPE of the current locale(7), so its use in a program that is supposed to handle multiple types of input correctly simultaneously might not be ideal.
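
A small sketch of its use; the setlocale() call is what enables classification beyond plain ASCII, so the output depends on the environment's locale:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <wctype.h>

int main(void)
{
    /* Switch from the default "C" locale to the environment's
       locale, so iswalpha() can classify non-ASCII letters. */
    setlocale(LC_ALL, "");

    wchar_t tests[] = { L'a', L'7', 0x00E1 /* á */ };
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++)
        wprintf(L"U+%04X: %ls\n", (unsigned)tests[i],
                iswalpha((wint_t)tests[i]) ? L"letter" : L"not a letter");
    return 0;
}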
If you are working with single-byte codesets such as ISO 8859-1 or 8859-15 (or any of the other 8859-x codesets), then the isalpha() function will do the job if you also remember to use setlocale(LC_ALL, ""); (or some other suitable invocation of setlocale()) in your program. Without this, the program runs in the C locale, which only classifies the ASCII characters (8859-x characters in the range 0x00..0x7F).
If you are working with multibyte or wide character codesets (such as UTF-8 or UTF-16), then you need to look to the wide character functions found in <wchar.h> and <wctype.h>.
How these characters are stored is locale-dependent. On most UNIX systems they'll be stored as UTF-8, whereas a Win32 machine will likely represent them as UTF-16. UTF-8 stores a character as a variable number of chars, whereas UTF-16 uses one 16-bit unit for characters in the Basic Multilingual Plane and a surrogate pair for the rest. Incidentally, sizeof(wchar_t) on Windows is only 2 (vs 4 on *nix), so a character outside the BMP needs two wchar_t units there.
As was mentioned, the iswalpha() routine will do this for you, and it should take care of locale-specific issues.
You probably want http://site.icu-project.org/. It provides a portable library with APIs for this.

ASCII char to int conversions in C [duplicate]

I remember learning in a course a long time ago that converting from an ASCII char to an int by subtracting '0' is bad.
For example:
int converted;
char ascii = '8';
converted = ascii - '0';
Why is this considered a bad practice? Is it because some systems don't use ASCII? The question has been bugging me for a long time.
While you probably shouldn't use this as part of a hand-rolled strtol (that's what the standard library is for), there is nothing wrong with this technique for converting a single digit to its value. It's simple and clear, even idiomatic. You should, though, add range checking if you are not absolutely certain that the given char is in range (a range-checked sketch appears below).
It's a C language guarantee that this works.
5.2.1/3 says:
In both the source and execution basic character sets, the value of each character after 0 in the above list [which includes the sequence 0,1,2,3,4,5,6,7,8,9] shall be one greater than the value of the previous.
Character sets may exist where this isn't true but they can't be used as either source or execution character sets in any C implementation.
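
Putting the two points together, here is the conversion plus the range check recommended above, wrapped in a function (digit_value is a hypothetical name):

#include <stdio.h>

/* Convert a decimal digit character to its value, or return -1 if
   the character is not a digit. The comparison and subtraction are
   portable: C guarantees '0'..'9' are consecutive in every source
   and execution character set. */
int digit_value(char c)
{
    return (c >= '0' && c <= '9') ? c - '0' : -1;
}

int main(void)
{
    printf("%d\n", digit_value('8'));  /* 8  */
    printf("%d\n", digit_value('x'));  /* -1 */
    return 0;
}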
Edit: Apparently the C standard guarantees consecutive 0-9 digits.
ASCII is not guaranteed by the C standard, in effect making it non-portable. You should use a standard library function intended for conversion, such as atoi.
However, if you wish to make assumptions about the platform you are running on (for example, an embedded system where space is at a premium), then by all means use the subtraction method. Even on systems not using the US-ASCII code page (UTF-8, other code pages) this conversion will work. It will even work on EBCDIC (amazingly).
This is a common trick taught in C classes primarily to illustrate the notion that a char is a number and that its value is different from the corresponding int.
Unfortunately, this educational toy somehow became part of the typical arsenal of most C developers, partially because C doesn't provide a convenient call for this (any alternative is often platform specific, and I'm not even sure what it would be).
Generally, this code is not portable for non-ASCII platforms, and for future transitions to other encodings. It's also not really readable. At a minimum wrap this trick in a function.

Where did the name `atoi` come from?

In the C language, where did the name atoi for converting a string to an integer come from? The only thing I can think of is "array to integer" as an acronym, but that doesn't really make sense.
It means ASCII to Integer. Likewise, you can have atol for ASCII to Long, atof for ASCII to Float, etc.
A Google search for 'atoi "ascii to integer"' confirms this on several pages.
I'm having trouble finding any official source on it... but this listing of man pages from Third Edition Unix (1973), collected by Dennis Ritchie himself, does contain the line:
atoi(III): convert ASCII to integer
In fact, even the first edition Unix (ca 1971) man pages list atoi as meaning Ascii to Integer.
So even if there isn't any documentation more official than man pages indicating that atoi means Ascii to Integer (I suspect there is and I just haven't been able to locate it), it's been Ascii to Integer by convention at least since 1971.
I believe the function atoi means ASCII to integer.
