So I understand that something like b* accepts epsilon, b, bb, bbb, etc.
However, when I take the union of the two characters a and b and apply the star, i.e. (b U a)*, what kinds of strings are accepted by the language? Is babababa accepted?
(b U a)* means "any string, of any length, made up only of as and bs". Yes, the string you suggest is matched. Any string containing only as and bs (or no symbols at all, i.e. the empty string) is matched.
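If you want to sanity-check this mechanically (not required for the theory question), here is a minimal sketch using POSIX regcomp/regexec; (a|b)* is the POSIX spelling of (b U a)*, and the pattern is anchored so the whole string must consist of as and bs:

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Anchored so the *entire* string must be built from 'a' and 'b'. */
    if (regcomp(&re, "^(a|b)*$", REG_EXTENDED | REG_NOSUB) != 0)
        return 1;

    const char *tests[] = { "", "b", "bbb", "babababa", "bacb" };
    for (int i = 0; i < 5; i++)
        printf("\"%s\" %s\n", tests[i],
               regexec(&re, tests[i], 0, NULL, 0) == 0 ? "matches" : "does not match");

    regfree(&re);
    return 0;
}

Everything except "bacb" (which contains a c) matches.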
I would like to convert a double into a character string and I found gcvt() and _gcvt(). I am just wondering what the difference between them is. Both return a char* and both take the value, the number of digits, and a buffer as parameters.
As per a Google search result:
The _gcvt() function is identical to gcvt(). Use _gcvt() for ANSI/ISO naming conventions.
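A minimal sketch of calling them, assuming a POSIX libc for gcvt() and the Microsoft CRT for _gcvt(); both take the value, the number of significant digits, and a caller-supplied buffer, and return that buffer:

#include <stdio.h>
#include <stdlib.h>   /* gcvt() on POSIX systems; _gcvt() in the Microsoft CRT */

int main(void) {
    char buf[32];

    /* Convert 3.14159 using at most 4 significant digits. */
    gcvt(3.14159, 4, buf);          /* buf now holds "3.142" */
    printf("%s\n", buf);

    /* On Windows the same call is spelled _gcvt(); the behaviour is the same. */
    /* _gcvt(3.14159, 4, buf); */
    return 0;
}

Note that gcvt() is marked obsolete in POSIX; snprintf(buf, sizeof buf, "%g", value) is the usual modern replacement.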
Not sure I am using the right terminology here, but I need the print or deparse methods to use C notation (e.g. "\x05" instead of "\005") when escaping bytes outside the regular character set.
x <- "This is a \x05 symbol"
print(x)
[1] "This is a \005 symbol"
Is there a native way to accomplish this?
I need this for generating BSON: http://bsonspec.org/#/specification. All of the examples explicitly use \x05 notation.
Hacking into the internals of print seems a bad idea. Instead I think you should do the string escaping yourself, and then use cat to print the string without any extra escaping.
You can use encodeString to do the initial escaping, gregexpr to identify octal \0.. escapes, strtoi to convert strings representing octal numbers to those numbers, sprintf to print numbers in hexadecimal, and regmatches to operate on the matched parts. The whole process would look something like this:
inputString <- "This is a \005 symbol. \x13 is \\x13."
x <- encodeString(inputString)                       # escape non-printable chars (octal form)
m <- gregexpr("\\\\[0-3][0-7][0-7]", x)              # locate the octal \0.. escapes
charcodes <- strtoi(substring(regmatches(x, m)[[1]], 2, 4), 8)   # octal digits -> integer codes
regmatches(x, m) <- list(sprintf("\\x%02x", charcodes))          # rewrite them as \xNN
cat(x, "\n")
Note that this approach will convert octal escapes like \005 to hexadecimal escapes like \x05, but other escape sequences like \t or \a won't be affected by this. You might need more code to deal with those as well, but the above should contain all the ingredients you need.
Note that the BSON specification you refer to almost certainly meant raw bytes, so as long as your string contains a character with code 5, which you can write as "\x05" in your input, and you write that string to the desired output in binary mode, it shouldn't matter at all how R prints that string to you. After all, octal \005 and hexadecimal \x05 are just two representations of the same byte you'll write.
Does cat suit your needs? Note, you have to escape the backslash:
> x <- "This is a \\x05 symbol\n"
> cat(x)
This is a \x05 symbol
I want to handle some char variables and would like a list of functions that can do the following tasks when handling chars.
Getting the first character of a char (var_name[1] doesn't seem to work)
Getting the last character of a char
Checking whether char1 matches char2 (e.g. whether "unicorn" matches "bicycle")
I am pretty sure some of these methods exist in libraries such as stdio.h or so, but Google isn't my friend.
EDIT: My 3rd question means not a direct match with strcmp but a single-character match (e.g. "hey" and "hello" have 'e' as a common letter).
Use var_name[0] to get first character (array indexes run from 0 to N - 1, where N is the number of elements in the array).
Use var_name[strlen(var_name) - 1] to get the last character.
Use strcmp() to compare two char strings.
EDIT:
To search for character in a string you can use strchr():
if (strchr("hello", 'e') && strchr("hey", 'e'))
{
    /* both strings contain the letter 'e' */
}
There is also the strpbrk() function, which indicates whether two strings have any characters in common:
if (strpbrk("hello", "hey"))
{
    /* the two strings share at least one character */
}
Assuming you mean a char[], and not a char which is a single character.
C uses 0-based indexing, so var_name[0] gives you the first char.
strlen() gives you the length of the string, which together with my answer to 1. means
char lastchar = var_name[strlen(var_name)-1]; http://www.cplusplus.com/reference/clibrary/cstring/strlen/
strcmp(var_name1, var_name2) == 0. http://www.cplusplus.com/reference/clibrary/cstring/strcmp/
I am pretty sure some of these methods exist in libraries such as
stdio.h or so but google isnt my friend.
The string functions in the C standard library (libc) are described in the header file <string.h>. If you're on a unix-ish machine, try typing man 3 string at a command line. You can then use the man program again to get more information about specific functions, e.g. man 3 strlen. (The '3' just tells man to look in "section 3", which describes the C standard library functions.)
What you're looking for is the string functions in the C runtime library. These are defined in string.h, not stdio.h.
But your list of problems is simple:
var_name[0] works perfectly well for accessing the first char in an array. var_name[1] doesn't give you the first char because arrays in C are zero-based.
The last char in an array is:
char c;
c = var_name[strlen(var_name)-1];
Testing whether the first characters of two strings match is simple:
if (var_name1[0] == var_name2[0])
    ; // they match
C and C++ strings are zero indexed. The memory you need to hold a string of a particular length has to be at least the string length plus one character for the string terminator \0. So, the first character is array[0].
As @Carey Gregory said, the basic string handling functions are in string.h. But these are only primitives for handling strings. C is a low-level enough language that you have the opportunity to build up your own string handling library on top of the functions in string.h.
One example might be that you want to pass a string pointer to a function along with the length of the buffer holding that same string, not just the string length itself.
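As a sketch of that idea (the function and names here are made up for illustration), a helper that receives both the string and the size of the buffer holding it can append to it without ever overrunning the buffer:

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: appends suffix to dest, but never writes past
   dest_size bytes (including the terminating '\0'). */
static void append_bounded(char *dest, size_t dest_size, const char *suffix)
{
    size_t used = strlen(dest);
    if (used + 1 >= dest_size)
        return;                                  /* no room left at all */
    strncat(dest, suffix, dest_size - used - 1); /* copies at most the remaining space */
}

int main(void)
{
    char name[8] = "uni";
    append_bounded(name, sizeof name, "corn");   /* fits exactly: "unicorn" */
    append_bounded(name, sizeof name, "cycle");  /* would overflow, so nothing is appended */
    printf("%s\n", name);                        /* prints "unicorn" */
    return 0;
}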
Rules:
Two strings, a and b, both consisting of ASCII chars and non-ASCII chars (say, GBK-encoded Chinese characters).
If the non-ASCII chars contained in b also show up in a at least as many times as they appear in b, then we say b is similar to a.
For example:
a = "ab中ef日jkl中本" //non-ASCII chars:'中'(twice), '日'(once), '本'(once)
b = "bej中中日" //non-ASCII chars:'中'(twice), '日'(once)
c = 'lk日日日' //non-ASCII chars:'日'(3 times, more than twice in a)
according to the rule, b is similar with a, but c is not.
Here is my question:
We don't know how many non-ASCII chars there are in a and b; probably many.
So to find out how many times a non-ASCII char appears in a and b, am I supposed to use a hash table to store their occurrence counts?
Take string a as an example:
[non-ASCII's hash-value]:[times]
中's hash-val : 2
日's hash-val : 1
本's hash-val : 1
Then check string b: when we encounter a non-ASCII char in b, hash it and look it up in a's hash table; if the char is present, decrement its count by 1.
If the count drops below 0 (to -1), then we say b is not similar to a.
Or is there any better way?
PS:
I read string a byte by byte; if the byte is less than 128, I take it as an ASCII char, otherwise I take it as part of a (multi-byte) non-ASCII char.
This is what I am doing to find out the non-ASCII chars.
Is it right?
You have asked two questions:
Can we count the non-ASCII characters using a hashtable? Answer: sure. As you read the characters (not the bytes), examine the codepoints. For any codepoint greater than 127, put it into a counting hashtable. That is, for a character c, add (c, 1) if c is not in the table, and update (c, x) to (c, x+1) if it is already there.
Is there a better way to solve this problem than your approach of incrementing counts in a and decrementing as you run through b? If your hashtable implementation gives nearly O(1) access, then I suspect not. You are looking at each character in the string exactly once, and for each character you are doing either a hashtable insert or lookup, an addition or subtraction, and a check against 0. With unsorted strings, you have to look at all the characters in both strings anyway, so you've given, I think, the best solution.
The interviewer might be looking for you to say things like, "Hmmmmm, if these strings were actually massive files that could not fit in memory, what would I do?" Or for you to ask, "Well, are the strings sorted? Because if they are, I can do it faster...".
But now let's say the strings are massive. The only thing you are storing in memory is the hashtable. Unicode has only around 1 million codepoints and you are storing an integer count for each, so even if you are getting data from gigabyte sized files you only need around 4MB or so for your hash table (or a small multiple of this, as there will be overhead).
In the absence of any other conditions, your algorithm is nice. Sorting the strings beforehand isn't good; it takes up more memory and isn't a linear-time operation.
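For concreteness, here is a minimal sketch of that count-then-decrement idea in C, assuming the strings are already decoded into wchar_t and that wchar_t is wide enough to hold full codepoints (true with glibc; on Windows you would need a different wide type). A flat array stands in for the hash table just to keep the sketch short:

#include <stdbool.h>
#include <string.h>
#include <wchar.h>

/* One counter per possible codepoint (~1.1 million entries, a few MB). */
static int counts[0x110000];

/* Returns true if every non-ASCII character of b also occurs in a,
   at least as many times as it occurs in b. Assumes valid codepoints. */
static bool similar(const wchar_t *a, const wchar_t *b)
{
    memset(counts, 0, sizeof counts);

    for (; *a; a++)
        if (*a > 127)
            counts[*a]++;                /* count the non-ASCII characters of a */

    for (; *b; b++)
        if (*b > 127 && --counts[*b] < 0)
            return false;                /* b uses this character more often than a */

    return true;
}

With the strings from the question, similar(L"ab中ef日jkl中本", L"bej中中日") is true and similar(L"ab中ef日jkl中本", L"lk日日日") is false.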
ADDENDUM
Since your original comments mentioned the type char as opposed to wchar_t, I thought I'd show an example of using wide strings. See http://codepad.org/B3MXOgqc
Hope that helps.
ADDENDUM 2
Okay, here is a C program that shows exactly how to go through a wide string and work at the character level:
http://codepad.org/QVX3QPat
It is a very short program so I will also paste it here:
#include <stdio.h>
#include <string.h>
#include <wchar.h>

char *s1 = "abd中日";
wchar_t *s2 = L"abd中日";

int main() {
    int i, n;
    printf("length of s1 is %zu\n", strlen(s1));
    printf("length of s2 using wcslen is %zu\n", wcslen(s2));
    printf("The codepoints of the characters of s2 are\n");
    for (i = 0, n = wcslen(s2); i < n; i++) {
        printf("%02x\n", (unsigned) s2[i]);
    }
    return 0;
}
Output:
length of s1 is 9
length of s2 using wcslen is 5
The codepoints of the characters of s2 are
61
62
64
4e2d
65e5
What can we learn from this? A couple things:
If you use plain old char for CJK characters then the string length will be wrong.
To use Unicode characters in C, use wchar_t
String literals have a leading L for wide strings
In this example I defined a string with CJK characters and used wchar_t and a for-loop with wcslen. Please note here that I am working with real characters, NOT BYTES, so I get the correct count of characters, which is 5. Now I print out each codepoint. In your interview question, you will be looking to see if the codepoint is >= 128. I showed them in Hex, as is the culture, so you can look for > 0x7F. :-)
ADDENDUM 3
A few notes in http://tldp.org/HOWTO/Unicode-HOWTO-6.html are worth reading. There is a lot more to character handling than the simple example above shows. In the comments below J.F. Sebastian gives a number of other important links.
One of the things that needs to be addressed is normalization. For example, does your interviewer care whether two strings, one containing just a Ç and the other a C followed by a COMBINING CEDILLA, are considered the same? They represent the same character, but one uses one codepoint and the other uses two.
I have a program which places structures in a linked list based on the 'name' they have stored in them.
To find their place in the list, I need to figure out whether the name I'm inserting is earlier or later in the alphabet than those in the structures beside it.
The names are inside the structures, which I have access to.
I don't need a full comparison if that is more work; even just the first letter is fine.
Thanks for the help!
It's not clear to me what your question is, but something like this would work:
if (node1->name[0] <= node2->name[0]) {
...
} else {
...
}
This will compare the first letter of the name in each of the nodes.
If you have two C strings, a and b, you can simply compare their first elements:
*a == *b
Where == can be any of the six relational operators.
Remember that with C strings, the char* points to the first character in the string.
strcmp() compares two C strings and will tell you what order they're in, or if they're the same. If you don't care about case, you can use strcasecmp(). These functions won't compare any more of the strings than necessary to determine what order to return.
strcmp man page
strcasecmp man page
You can simply iterate through the list and insert the new element in the correct place based on comparisons you do while passing each element. The simplest case-sensitive version can be done just by comparing the numeric values of the letters (e.g. a[0] < b[0]), or you can convert both to a common case if you want to be case-insensitive (see ctype.h). Or you can compare the whole words with strcmp, as sketched below.
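A minimal sketch of that insertion, assuming a node type roughly like the one below (the struct and field names are made up for illustration) and using strcmp to keep the list in ascending alphabetical order:

#include <string.h>

struct node {
    char name[32];
    struct node *next;
};

/* Insert new_node into a list kept in alphabetical order by name,
   returning the (possibly new) head of the list. */
static struct node *insert_sorted(struct node *head, struct node *new_node)
{
    struct node **link = &head;

    /* Walk forward while the existing names sort before the new one. */
    while (*link && strcmp((*link)->name, new_node->name) < 0)
        link = &(*link)->next;

    new_node->next = *link;
    *link = new_node;
    return head;
}

For a case-insensitive order, swap strcmp for strcasecmp (declared in strings.h on POSIX systems).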