construct an FA (theory of automata) - theory

You are required to construct a finite automaton for the language of all those strings whose length is odd, but contain an even number of b’s defined over the alphabet {a,b}.
I have done this
but I know this is wrong, so what is the answer to this question?

You say you are looking for an automaton, but your (wrong) answer is a regular expression. I will provide an automaton. It uses two counters mod 2: one for the length and one for the number of b's. So the states are:
q[0,0], q[0,1], q[1,0], q[1,1]
where, e.g., q[0,1] means that the total length is even (the first 0) while the number of b's is odd (the 1). The final state is q[1,0], while q[0,0] is the initial state.
The transitions are rather obvious, doing the necessary changes for the counters:
q[0,0] reads a -> q[1,0]
q[0,0] reads b -> q[1,1]
q[0,1] reads a -> q[1,1]
q[0,1] reads b -> q[1,0]
q[1,0] reads a -> q[0,0]
q[1,0] reads b -> q[0,1]
q[1,1] reads a -> q[0,1]
q[1,1] reads b -> q[0,0]
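As a sanity check, the machine above can be simulated directly. A small Haskell sketch (the encoding of states as pairs and the function names are mine, not part of the question):

```haskell
-- A state is a pair (length mod 2, number of b's mod 2),
-- mirroring the q[i,j] naming above.
type State = (Int, Int)

step :: State -> Char -> State
step (l, b) 'b' = ((l + 1) `mod` 2, (b + 1) `mod` 2)
step (l, b) _   = ((l + 1) `mod` 2, b)          -- 'a' flips only the length

accepts :: String -> Bool
accepts = (== (1, 0)) . foldl step (0, 0)       -- final state is q[1,0]

main :: IO ()
main = print (map accepts ["a", "abb", "ab", "bb"])  -- [True,True,False,False]
```

Running it confirms the transition table: "a" and "abb" (odd length, even b's) are accepted, "ab" and "bb" are rejected.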


Haskell : Increment index in a loop

I have a function that calculates f(n) in Haskell.
I have to write a loop that starts calculating values from f(0) to f(n), comparing each value f(i) with some fixed value along the way.
I am an expert in OOP, hence I am finding it difficult to think in the functional way.
For example, I have to write something like
while (number < f(i))
How would I write this in Haskell?
The standard approach here is
Create an infinite list containing all values of f(n).
Search this list until you find what you're after.
For example,
takeWhile (number <) $ map f [0..]
If you want to give up after you reach "n", you can easily add that as a separate step:
takeWhile (number <) $ take n $ map f [0..]
or, alternatively,
takeWhile (number <) $ map f [0 .. n]
You can do all sorts of other filtering, grouping and processing in this way. But it requires a mental shift. It's a bit like the difference between writing a for-loop to search a table, versus writing an SQL query. Think about Haskell as a bit like SQL, and you'll usually see how to structure your code.
You can generate the list of those i for which f i is larger than your number:
[ i | i<-[0..] , f i > number ]
Then, you can simply take the first one, if that's all you want:
head [ i | i<-[0..] , f i > number ]
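For instance, with number = 100 and a hypothetical stand-in for f (any Int -> Int will do):

```haskell
f :: Int -> Int
f i = i * i  -- hypothetical stand-in for the asker's function

main :: IO ()
main = print (head [ i | i <- [0..], f i > 100 ])  -- 11, since 11^2 = 121 > 100
```

Laziness is what makes this safe: only as many elements of [0..] are examined as head demands.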
Often, many idiomatic loops in imperative programming can be rephrased as list comprehensions, or expressed through map, filter, foldl, foldr. In the general case, when the loop is more complex, you can always exploit recursion instead.
Keep in mind that a "blind" translation from imperative to functional programming will often lead to non-idiomatic, hard-to-read code, as it would be the case when translating in the opposite direction. Still, I find it relieving that such translation is always possible.
If you are new to functional programming, I would advise against learning it by translating what you know about imperative programming. Rather, start from scratch following a good book (LYAH is a popular choice).
The first thing that's weird from a functional approach is that it's unclear what the result of your computation is. Do you care about the final result of f (i)? Perhaps you care about i itself. Without side effects, everything needs to have a value.
Let's assume you want the final value of the function f (i) as soon as some comparison fails. You can simulate your own while loops using recursion and guards!
while :: Int -> Int -> (Int -> Int) -> Int
while start number f
  | val >= number = val
  | otherwise     = while (start + 1) number f
  where
    val = f start
Instead of explicit recursion, you can use until e.g.
findGreaterThan :: (Int -> Int) -> Int -> Int -> (Int, Int)
findGreaterThan f init max = until (\(v, i) -> v >= max) (\(v, i) -> (f v, i + 1)) (init, 0)
this returns a pair containing the first value that satisfies the comparison and the number of iterations of the given function it took to reach it.

Optimizing a word parser

I have a code/text editor that I'm trying to optimize. Currently, the bottleneck of the program is the language parser that scans all the keywords (there is more than one, but they're written generally the same).
On my computer, the editor delays on files around 1,000,000 lines of code. On lower-end computers, like a Raspberry Pi, the delay starts happening much sooner (I don't remember exactly, but I think around 10,000 lines of code). And although I've never quite seen documents larger than 1,000,000 lines of code, I'm sure they're out there, and I want my program to be able to edit them.
This leads me to the question: what's the fastest way to scan for a list of words within a large, dynamic string?
Here's some information that may affect the design of the algorithm:
the keywords
qualifying characters allowed to be part of a keyword, (I call them qualifiers)
the large string
This is (roughly) the method I'm currently using to parse strings:
// this is just an example, not an excerpt
// I haven't compiled this, I'm just writing it to
// illustrate how I'm currently parsing strings
struct tokens * scantokens (char * string, char ** tokens, int tcount){
    struct tokens * tks = tokens_init ();
    for (int i = 0; string[i]; i++){
        // qualifiers for C are: a-z, A-Z, 0-9, and underscore;
        // if it isn't a qualifier, skip it
        while (string[i] && isnotqualifier (string[i])) i++;
        if (!string[i]) break;
        int result = 0;
        for (int j = 0; j < tcount; j++){
            // string_compare returns 0 for no match,
            // or the length of the keyword if they match
            result = string_compare (&string[i], tokens[j]);
            if (result > 0){ // the keyword matches
                token_push (tks, i, i + result); // add the token
                // token_push (data_struct, where_it_begins, where_it_ends)
                i += result;
                break;
            }
        }
        if (result == 0){
            // no keyword matched: skip to the end of this qualifier run,
            /* ie, go from:
                   'some_id + sizeof (int)'
                    ^ here
               to:
                   'some_id + sizeof (int)'
                           ^ here            */
            while (string[i] && !isnotqualifier (string[i])) i++;
        }
    }
    if (!tks->len){
        free (tks);
        return 0;
    } else return tks;
}
Possible Solutions:
Contextual Solutions:
I'm considering the following:
Scan the large string once, and add a function to evaluate/adjust the token markers every time there is user input (instead of re-scanning the entire document over and over). I expect that this will fix the bottleneck because there is much less parsing involved. But it doesn't completely fix the problem, because the initial scan may still take a really long time.
Optimize token-scanning algorithm (see below)
I've also considered, but have rejected, these optimizations:
Scanning only the code that is on the screen. Although this would fix the bottleneck, it would prevent finding user-defined tokens (ie variable names, function names, macros) that appear earlier in the document than the part shown on screen.
Switching the text into a linked list (a node per line), rather than a monolithic array. This doesn't really help the bottleneck. Although insertions/deletions would be quicker, the loss of indexed access slows down the parser. I think that, also, a monolithic array is more likely to be cached, than a broken-up list.
Hard-coding a scan-tokens function for every language. Although this could be the best optimization for performance, it doesn't seem practical from a software-development point of view.
Architectural solution:
With assembly language, a quicker way to parse these strings would be to load characters into registers and compare them 4 or 8 bytes at a time. There are some additional measures and precautions that would have to be taken into account, such as:
Does the architecture support unaligned memory access?
All strings would have to be of size s, where s % word-size == 0, to prevent reading violations
But these issues seem like they can be easily fixed. The only problem (other than the usual ones that come with writing in assembly language) is that it's not so much an algorithmic solution as it is a hardware solution.
Algorithmic Solution:
So far, I've considered having the program rearrange the list of keywords to make a binary search algorithm more feasible.
One way I've thought about rearranging them for this is by switching the dimensions of the list of keywords. Here's an example of that in C:
// some keywords for the C language
auto // keywords[0]
break // keywords[1]
case char const continue // keywords[2] .. keywords[5]
default do double
else enum extern
float for
if int
register return
short signed sizeof static struct switch
union unsigned
void volatile
/* keywords[i] refers to the i-th keyword in the list */
Switching the dimensions of the two-dimensional array would make it look like this:
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3
1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2
1 | a b c c c c d d d e e e f f g i i l r r s s s s s s t u u v v w
2 | u r a h o o e o o l n x l o o f n o e e h i i t t w y n n o o h
3 | t e s a n n f u s u t o r t t n g t o g z a r i p i s i l i
4 | o a e r s t a b e m e a o g i u r n e t u t e o i d a l
5 | k i u l r t s r t e o i c c d n g t e
6 | n l e n t n d f c t h e n i
7 | u t e f e l
8 | e r d e
// note that, now, keywords[0] refers to the string "abccccdddeeeffgiilrrsssssstuuvvw"
This makes it more efficient to use a binary search algorithm (or even a plain brute-force algorithm). But it only works for the first character of each keyword; after that, nothing can be considered 'sorted'. This may help with small sets of words like a programming language's keywords, but it wouldn't be enough for a larger set of words (like the entire English language).
Is there more than can be done to improve this algorithm?
Is there another approach that can be taken to increase performance?
This question from SO doesn't help me. The Boyer-Moore-Horspool algorithm (as I understand it) is an algorithm for finding a sub-string within a string. Since I'm parsing for multiple strings I think there's much more room for optimization.
Aho-Corasick is a very cool algorithm but it's not ideal for keyword matches, because keyword matches are aligned; you can't have overlapping matches because you only match a complete identifier.
For the basic identifier lookup, you just need to build a trie out of your keywords (see note below).
Your basic algorithm is fine: find the beginning of the identifier, and then see if it's a keyword. It's important to improve both parts. Unless you need to deal with multibyte characters, the fastest way to find the beginning of a keyword is to use a 256-entry table, with one entry for each possible character. There are three possibilities:
The character cannot appear in an identifier. (Continue the scan.)
The character can appear in an identifier but no keyword starts with the character. (Skip the identifier)
The character can start a keyword. (Start walking the trie; if the walk cannot be continued, skip the identifier. If the walk finds a keyword and the next character cannot be in an identifier, skip the rest of the identifier; if it can be in an identifier, try continuing the walk if possible.)
Actually steps 2 and 3 are close enough together that you don't really need special logic.
There is some imprecision with the above algorithm because there are many cases where you find something that looks like an identifier but which syntactically cannot be. The most common cases are comments and quoted strings, but most languages have other possibilities. For example, in C you can have hexadecimal floating point numbers; while no C keyword can be constructed just from [a-f], a user-supplied word might be:
On the other hand, C++ allows user-defined numeric suffixes, which you might well want to recognize as keywords if the user adds them to the list:
Beyond all of the above, it's really impractical to parse a million lines of code every time the user types a character in an editor. You need to develop some way of caching tokenization, and the simplest and most common one is to cache by input line. Keep the input lines in a linked list, and with every input line also record the tokenizer state at the beginning of the line (i.e., whether you're in a multi-line quoted string; a multi-line comment, or some other special lexical state). Except in some very bizarre languages, edits cannot affect the token structure of lines preceding the edit, so for any edit you only need to retokenize the edited line and any subsequent lines whose tokenizer state has changed. (Beware of working too hard in the case of multi-line strings: it can create lots of visual noise to flip the entire display because the user types an unterminated quote.)
Note: For smallish (hundreds of) keywords, a full trie doesn't really take up that much space, but at some point you need to deal with bloated branches. One very reasonable data structure, which can be made to perform very well if you're careful about data layout, is a ternary search tree (although I'd call it a ternary search trie).
It will be hard to beat this code.
Suppose your keywords are "a", "ax", and "foo".
Take the list of keywords, sorted, and feed it into a program that prints out code like this:
switch (pc[0]){
break; case 'a':{
    if (0){
    } else if (strncmp(pc, "a", 1)==0 && !alphanum(pc[1])){
        // push "a"
        pc += 1;
    } else if (strncmp(pc, "ax", 2)==0 && !alphanum(pc[2])){
        // push "ax"
        pc += 2;
    }
}
break; case 'f':{
    if (0){
    } else if (strncmp(pc, "foo", 3)==0 && !alphanum(pc[3])){
        // push "foo"
        pc += 3;
    }
}
// etc. etc.
}
Then if you don't see a keyword, just increment pc and try again.
The point is, by dispatching on the first character, you quickly get into the subset of keywords starting with that character.
You might even want to go to two levels of dispatch.
Of course, as always, take some stack samples to see what the time is being used for.
Regardless, if you have data structure classes, you're going to find that consuming a large part of your time, so keep that to a minimum (throw religion to the wind :)
The fastest way to do it would be a finite state machine built to the word set. Use Lex to build the FSM.
The best algorithm for this problem is probably Aho-Corasick. There already exist C implementations, e.g.,

Why the Haskell sequence function can't be lazy or why recursive monadic functions can't be lazy

With the question Listing all the contents of a directory by breadth-first order results in low efficiency, I learned that the low efficiency is due to the strange behavior of recursive monadic functions.
If we try
sequence $ map return [1..]::[[Int]]
sequence $ map return [1..]::Maybe [Int]
ghci will fall into an endless calculation.
If we rewrite the sequence function in a more readable form like follows:
sequence' [] = return []
sequence' (m:ms) = do {x<-m; xs<-sequence' ms; return (x:xs)}
and try:
sequence' $ map return [1..]::[[Int]]
sequence' $ map return [1..]::Maybe [Int]
we get the same situation, an endless loop.
Try a finite list instead (say, [1..n] for some large finite n):
sequence' $ map return [1..n]::Maybe [Int]
and it will spring out the expected result Just [1,2,3,...,n] after a long wait.
From what we tried, we can conclude that although the definition of sequence' looks lazy, it is actually strict and has to produce all the numbers before the result of sequence' can be printed.
Not only just sequence', if we define a function
iterateM :: Monad m => (a -> m a) -> a -> m [a]
iterateM f x = f x >>= iterateM f >>= return . (x:)
and try
iterateM (Just . (+1)) 0
then the endless calculation occurs again.
As we all know, the non-monadic iterate is defined much like the above iterateM, so why is iterate lazy while iterateM is strict?
As we can see from the above, both iterateM and sequence' are recursive monadic functions. Is there something strange about recursive monadic functions?
The problem isn't the definition of sequence, it's the operation of the underlying monad. In particular, it's the strictness of the monad's >>= operation that determines the strictness of sequence.
For a sufficiently lazy monad, it's entirely possible to run sequence on an infinite list and consume the result incrementally. Consider:
Prelude> :m + Control.Monad.Identity
Prelude Control.Monad.Identity> runIdentity (sequence $ map return [1..] :: Identity [Int])
and the list will be printed (consumed) incrementally as desired.
It may be enlightening to try this with Control.Monad.State.Strict and Control.Monad.State.Lazy:
-- will print the list
Prelude Control.Monad.State.Lazy> evalState (sequence $ map return [1..] :: State () [Int]) ()
-- loops
Prelude Control.Monad.State.Strict> evalState (sequence $ map return [1..] :: State () [Int]) ()
In the IO monad, >>= is by definition strict, since this strictness is exactly the property necessary to enable reasoning about effect sequencing. I think #jberryman's answer is a good demonstration of what is meant by a "strict >>=". For IO and other monads with a strict >>=, each expression in the list must be evaluated before sequence can return. With an infinite list of expressions, this isn't possible.
You're not quite grokking the mechanics of bind:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Here's an implementation of sequence that only works on 3-length lists:
sequence3 (ma:mb:mc:[]) = ma >>= (\a-> mb >>= (\b-> mc >>= (\c-> return [a,b,c] )))
You see how we have to "run" each "monadic action" in the list before we can return the outer constructor (i.e. the outermost cons, or (:))? Try implementing it differently if you don't believe.
This is one reason monads are useful for IO: there is an implicit sequencing of effects when you bind two actions.
You also have to be careful about using the terms "lazy" and "strict". It's true with sequence that you must traverse the whole list before the final result can be wrapped, but the following works perfectly well:
Prelude Control.Monad> sequence3 [Just undefined, Just undefined, Nothing]
Monadic sequence cannot in general work lazily on infinite lists. Consider its signature:
sequence :: Monad m => [m a] -> m [a]
It combines all monadic effects in its argument into a single effect. If you apply it to an infinite list, you'd need to combine an infinite number of effects into one. For some monads this is possible; for some monads it is not.
As an example, consider sequence specialized to Maybe, as you did in your example:
sequence :: [Maybe a] -> Maybe [a]
The result is Just ... iff all elements in the list are Just .... If any of the elements is Nothing then the result is Nothing. This means that unless you examine all elements of the input, you cannot tell whether the result is Nothing or Just ....
The same applies for sequence specialized to []: sequence :: [[a]] -> [[a]]. If any of the elements of the argument is an empty list, the whole result is an empty list, like in sequence [[1],[2,3],[],[4]]. So in order to evaluate sequence on a list of lists, you have to examine all the elements to see what the result will look like.
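Both specializations are easy to check; a quick illustration:

```haskell
main :: IO ()
main = do
  print (sequence [Just 1, Just 2, Nothing])         -- Nothing
  print (sequence [[1], [2,3], [], [4]] :: [[Int]])  -- []
  print (sequence [[1], [2,3], [4]]     :: [[Int]])  -- [[1,2,4],[1,3,4]]
```

In each case the shape of the answer (Nothing vs. Just, empty vs. non-empty) depends on elements arbitrarily deep in the input, which is exactly why these monads cannot sequence an infinite list lazily.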
On the other hand, sequence specialized to the Reader monad can process its argument lazily, because there is no real "effect" on Reader's monadic computation. If you define
inf :: Reader Int [Int]
inf = sequence $ map return [1..]
or perhaps
inf = sequence $ map (\x -> reader (* x)) [1..]
it will work lazily, as you can see by calling take 10 (runReader inf 3).

Is this the correct way for memmove in reverse order?

I'm trying to understand how memmove works. I'm taking an example where I have data in memory laid out in this manner.
Start at 0
First Memory Block(A) of size 10
Hence A->(0,10), where 0 is where it starts and 10 is its length.
Thus B-> (10,20)
C-> (30,50)
D-> (80,10)
Let's say that we have a variable X which records where we can insert next, which would be 90 in the example given above.
Now if I want to delete B, then I would like to move C and D to free space occupied by B.
input is input array.
So input array will look like having first 10 characters belonging to block A, next 20 belonging to block B etc.
This I think can be done using memmove as follows:
memmove(input+start(B), input+start(B)+length(B), 20)
Now I want to try for reverse order.
So we start from behind
Start at 100
First memory block(A) of size 10
A-> (100,10), where 100 is where it starts and 10 is its length
B-> (90,20)
C-> (70,50)
D-> (20,10)
Similar to first example, let's say we have a variable X where we record where we can insert next. This would be 10 for the example in reverse order.
Now if I want to delete block B, then I would like C and D to overlap in B's space. This would be memmove in reverse order.
I think this can be done in this manner:
As per Alex's comment, I think I've not kept the correct ordering of data. The data would be laid out as above, with X being D's starting address, i.e. at 20.
Now if we want to delete B, the memmove would look something like this:
memmove(input+X+length(B), input+X, start(B)-X)
Are there better ways to do this?
Note this is not for homework.
C and D occupy together 50+10=60, so why 20 in memmove(input+start(B), input+start(B)+length(B), 20)?
As for the other part, in C objects don't start with their last byte (the first byte is at the lowest address and the last byte at the highest). This part is confusing.

Iterating with respect to two variables in haskell

OK, continuing with my solving of the problems on Project Euler, I am still beginning to learn Haskell and programming in general.
I need to find the lowest number evenly divisible by all the numbers [1..20].
So I started with:
divides :: Int -> Int -> Bool
divides d n = rem n d == 0
divise n a | divides n a == 0 = n : divise n (a+1)
| otherwise = n : divise (n+1) a
What I want to happen is for it to keep moving up for values of n until one magically is evenly divisible by [1..20].
But this doesn't work, and now I am stuck as to where to go from here. I assume I need to use:
for the value of a but I don't know how to implement this.
Well, having recently solved the Euler problem myself, I'm tempted to just post my answer for that, but for now I'll abstain. :)
Right now, the flow of your program is a bit chaotic, to sound like a feng-shui person. Basically, you're trying to do one thing: increment n until 1..20 divides n. But really, you should view it as two steps.
Currently, your code is saying: "if a doesn't divide n, increment n. If a does divide n, increment a". But that's not what you want it to say.
You want (I think) to say "increment n, and see if it divides [Edit: with ALL numbers 1..20]. If not, increment n again, and test again, etc." What you want to do, then, is have a sub-test: one that takes a number, and tests it against 1..20, and then returns a result.
Hope this helps! Have fun with the Euler problems!
Edit: I really, really should remember all the words.
Well, as an algorithm, this kinda sucks.
But you're getting misled by the list. I think what you're trying to do is iterate through all the available numbers, until you find one that everything in [1..20] divides. In your implementation above, if a doesn't divide n, you never go back and check b < a for n+1.
Any easy implementation of your algorithm would be:
lcmAll :: [Int] -> Maybe Int
lcmAll nums = find (\n -> all (\d -> divides d n) nums) [1..]
(using Data.List.find and Data.List.all).
A better algorithm would be to find the lcm's pairwise, using foldl:
lcmAll :: [Int] -> Int
lcmAll = foldl lcmPair 1

lcmPair :: Int -> Int -> Int
lcmPair a b = lcmPair' a b
  where lcmPair' a' b' | a' < b'   = lcmPair' (a + a') b'
                       | a' > b'   = lcmPair' a' (b + b')
                       | otherwise = a'
Of course, you could use the lcm function from the Prelude instead of lcmPair.
This works because the least common multiple of any set of numbers is the same as the least common multiple of [the least common multiple of two of those numbers] and [the rest of the numbers].
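Using the Prelude's lcm, the pairwise fold becomes a one-liner. The values in the comments are easy to check independently (2520 is the figure the Euler problem itself quotes for [1..10]):

```haskell
lcmAll' :: [Int] -> Int
lcmAll' = foldl lcm 1

main :: IO ()
main = do
  print (lcmAll' [1..10])  -- 2520
  print (lcmAll' [1..20])  -- 232792560
```

This runs instantly, unlike the search over [1..], because each fold step does a gcd-sized amount of work rather than counting up to the answer.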
The function 'divise' never stops; it doesn't have a base case. Both branches call divise, thus they are both recursive. You're also using the function divides as if it returned an Int (like rem does), but it returns a Bool.
I see you have already started to divide the problem into parts, this is usually good for understanding and making it easier to read.
Another thing that can help is to write the types of the functions. If your function works but you're not sure of its type, try :i myFunction in ghci. Here I've fixed the type error in divise (although other errors remain):
*Main> :i divise
divise :: Int -> Int -> [Int] -- Defined at divise.hs:4:0-5
Did you want it to return a list?
Leaving you to solve the problem, try to further divide the problem into parts. Here's a naive way to do it:
A function that checks if one number is evenly divisible by another. This is your divides function.
A function that checks if a number is dividable by all numbers [1..20].
A function that iterates over all numbers and tries them on the function in #2.
Here's my quick, more Haskell-y approach, using your algorithm:
Prelude> let divisibleByUpTo i n = all (\x -> (i `rem` x) == 0) [1..n]
Prelude> take 1 $ filter (\x -> snd x == True) $ map (\x -> (x, divisibleByUpTo x 4)) [1..]
divisibleByUpTo returns True if the number i is divisible by every integer up to and including n, similar to your divides function.
The next line probably looks pretty difficult to a Haskell newcomer, so I'll explain it bit-by-bit:
Starting from the right, we have map (\x -> (x, divisibleByUpTo x 4)) [1..] which says for every number x from 1 upwards, do divisibleByUpTo x 4 and return it in a tuple of (x, divisibleByUpTo x 4). I'm using a tuple so we know which number exactly divides.
Left of that, we have filter (\x -> snd x == True); meaning only return elements where the second item of the tuple is True.
And at the leftmost of the statement, we take 1 because otherwise we'd have an infinite list of results.
This will take quite a long time for a value of 20. Like others said, you need a better algorithm -- consider how for a value of 4, even though our "input" numbers were 1-2-3-4, ultimately the answer was only the product of 3*4. Think about why 1 and 2 were "dropped" from the equation.