Booth's Algorithm on a string in C

I tried solving this problem on SPOJ using Booth's algorithm in O(n) time, but it failed even though it worked on every test case I tried.
Then I solved it by brute force in O(n^2) time, and that passed. I have attached the code for both approaches; can you tell me where I went wrong, or whether Booth's algorithm is even a correct approach for this problem?
Isn't the problem just finding the rotation that gives the lexicographically smallest string?
For first approach, Booth Algorithm : http://ideone.com/J5gl5
For second approach, Brute Force : http://ideone.com/ofTeA

Your algorithm gives the wrong answer for string "ABAED", for example.
Your algorithm returns 7 (even though this is longer than the string!).
The correct answer is 0.
(Note this bug may also be present wherever you found a description of the algorithm! If you look at the history/discussion for the Wikipedia article, there are a lot of edits fixing bugs - both claiming to fix bugs in the original paper and to fix bugs in earlier bugfixes...)
It seems to work a lot better if you replace:
if( lst[i] < lst[ans+i+1] )
with
if( lst[j] < lst[ans+i+1] )
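For reference, here is a minimal sketch of Booth's least-rotation algorithm in C, in the standard failure-function formulation rather than the asker's exact code (variable names are my own, and this is one of several equivalent presentations, so treat it as a cross-check rather than a drop-in fix):

```c
#include <stdlib.h>
#include <string.h>

/* Returns the index of the rotation of s that is lexicographically
 * smallest, in O(n) time (Booth's algorithm, failure-function form). */
int least_rotation(const char *s) {
    int n = (int)strlen(s);
    int *f = malloc(sizeof(int) * 2 * n);   /* failure function over s+s */
    memset(f, -1, sizeof(int) * 2 * n);     /* every byte 0xFF => -1   */
    int k = 0;                              /* best rotation found so far */
    for (int j = 1; j < 2 * n; j++) {
        char sj = s[j % n];
        int i = f[j - k - 1];
        while (i != -1 && sj != s[(k + i + 1) % n]) {
            if (sj < s[(k + i + 1) % n])
                k = j - i - 1;              /* found a smaller rotation */
            i = f[i];
        }
        if (sj != s[(k + i + 1) % n]) {     /* i == -1 here */
            if (sj < s[(k + i + 1) % n])
                k = j;
            f[j - k] = -1;
        } else {
            f[j - k] = i + 1;
        }
    }
    free(f);
    return k;
}
```

On "ABAED" this returns 0, the answer the asker's version got wrong.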


What is wrong with the implementation of this inversion count algorithm?

I am doing a question on www.hackerrank.com and I have been stuck on it for days.
Here is the statement of the question: https://www.hackerrank.com/challenges/insertion-sort. Basically, I have to count how many swaps insertion sort would perform on a given array, in O(n log n) time.
Here is my submitted code: http://paste.ubuntu.com/12637144/. I use merge sort and count how many positions each element is displaced. This code passes more than half of the site's tests. When it fails, it doesn't time out, and it doesn't have a compilation error or a segmentation fault.
Furthermore, when I took the input for one of the failed test cases (here is the input it failed on: http://paste.ubuntu.com/12637165/) and ran it through this variation of my code, http://paste.ubuntu.com/12637127/, which actually runs insertion sort, counts the swaps along the way, and checks the result against the merge sort count, the two counts agree. I have also generated thousands of random test cases, and they all pass this check as well.
I don't think it's a problem on the site's end, because in the discussion for the problem other people seem to be passing the tests just fine, without any questions or complaints. So maybe I am misunderstanding the question, or I am writing both the algorithm and my tests for it incorrectly. Does anyone have any suggestions?
If n can be up to 100,000, then the number of inversions can be about n^2 / 2, which won't fit in a 32-bit integer. Try using a 64-bit integer for the count and for the return value of mergeSort.
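To illustrate the point, here is a minimal merge-sort inversion counter in C that accumulates the count in a 64-bit long long (names and structure are my own, not the asker's pasted code):

```c
#include <stdlib.h>
#include <string.h>

/* Counts inversions while merge-sorting a[lo..hi) in place, using tmp
 * as scratch space. The count is 64-bit: for n = 100000 it can reach
 * n*(n-1)/2, about 5e9, which overflows a 32-bit int. */
static long long merge_count(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return 0;
    int mid = lo + (hi - lo) / 2;
    long long inv = merge_count(a, tmp, lo, mid)
                  + merge_count(a, tmp, mid, hi);
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        if (a[j] < a[i]) {
            inv += mid - i;        /* a[j] jumps over mid - i elements */
            tmp[k++] = a[j++];
        } else {
            tmp[k++] = a[i++];
        }
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof(int));
    return inv;
}

long long count_inversions(int *a, int n) {
    int *tmp = malloc(n * sizeof(int));
    long long inv = merge_count(a, tmp, 0, n);
    free(tmp);
    return inv;
}
```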

Is an optimal algorithm a complete algorithm?

I understand that a complete algorithm is one that finds a solution whenever one exists, and that an optimal algorithm is one that finds a least-cost solution.
But is an optimal algorithm also a complete algorithm? Can you please explain briefly?
Thanks.
Yes, by definition. Finding the optimal solution entails proving optimality. This can be done by finding all solutions or by proving that no solution can have better cost than the one found already. In either case, at least one solution has to be found.
If there is no solution, neither an optimal nor a complete algorithm would find one of course.
The notion of completeness refers to the ability of the algorithm to find a solution if one exists, and if not, to report that no solution is possible.
If an algorithm can find a solution when one exists but is not able to report that there is no solution when none exists, then it is not complete.
Yes. In simple terms:
Completeness: if a solution exists, the algorithm is guaranteed to find it.
Optimality: the algorithm is guaranteed to find the best solution.
So, regarding your question: if an algorithm is optimal, it guarantees that the best solution is found. That automatically makes it complete, because it has already found a solution (as guaranteed).

Why is DFS not optimal while BFS is optimal?

I have had this question in my mind for a long time but have never gotten a reasonable answer:
In artificial intelligence courses, when it comes to search, it is always said that BFS is optimal but DFS is not. Yet I can come up with many examples showing that DFS can even reach the answer faster. Can anyone explain this? Am I missing something?
Optimal as in "produces the optimal path", not "is the fastest algorithm possible". When searching a state space for a path to a goal, DFS may produce a much longer path than BFS. Note that BFS is only optimal when actions are unweighted; if different actions have different weights, you need something like A*.
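A tiny concrete illustration of this point (the graph and code are my own construction, not from the question): on an unweighted graph with a long chain 0-1-2-3 plus a shortcut edge 0-3, a DFS that happens to explore neighbor 1 first returns a path of length 3, while BFS always finds the length-1 shortcut.

```c
#include <string.h>

#define N 4
/* adjacency matrix: 0-1, 1-2, 2-3 form a chain, 0-3 is a shortcut */
static const int adj[N][N] = {
    {0, 1, 0, 1},
    {1, 0, 1, 0},
    {0, 1, 0, 1},
    {1, 0, 1, 0},
};

/* Length of the FIRST path DFS finds from u to goal, exploring
 * neighbors in increasing index order; -1 if no path exists. */
int dfs_first_path(int u, int goal, int *visited) {
    if (u == goal) return 0;
    visited[u] = 1;
    for (int v = 0; v < N; v++) {
        if (adj[u][v] && !visited[v]) {
            int d = dfs_first_path(v, goal, visited);
            if (d >= 0) return d + 1;
        }
    }
    return -1;
}

/* BFS shortest-path length from src to goal; -1 if unreachable. */
int bfs_shortest(int src, int goal) {
    int dist[N], queue[N], head = 0, tail = 0;
    memset(dist, -1, sizeof dist);
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        if (u == goal) return dist[u];
        for (int v = 0; v < N; v++)
            if (adj[u][v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    return -1;
}
```

Here DFS returns 3 and BFS returns 1 for the trip from node 0 to node 3: DFS terminates with the first goal it reaches, which need not lie on a shortest path.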
This is because by optimal strategy they mean the one whose returned solution maximizes the utility.
Regarding this, nothing guarantees that the first solution found by DFS is optimal. Also, BFS is not optimal in a general sense, so your statement as-is is wrong.
The main point here is about being guaranteed that a certain search strategy will always return the optimal result. Any strategy might be the best in a given case, but often (especially in AI), you don't know in what specific case you are, at most you have some hypothesis on that case.
However, DFS is optimal when the search tree is finite, all action costs are identical, and all solutions have the same length. However limiting this may sound, there is an important class of problems that satisfies these conditions: CSPs (constraint satisfaction problems). Maybe all the examples you thought of fall into this (rather common) category.
You can refer to my answer to this question in which I explain why DFS is not optimal and why BFS is not the best choice for solving uninformed state-space search problems.
You can refer to the link below; it works through an example tree with both approaches.
link

PACMAN: a short path for eating all the dots

I am trying to find a solution to the PACMAN problem of finding a short path (not the shortest, but a good one) that eats all the dots in a big maze. I've seen a lot of people talking about TSP, Dijkstra, BFS, and A*. I don't think this is a TSP, since I don't have to return to where I started and I can repeat nodes if I want. And I don't think Dijkstra, BFS, or A* would help, because I'm not looking for the shortest path, and even if I were, they wouldn't give an answer in a reasonable time.
Could anyone give me hints on this? What kind of problem is this? Is it a kind of TSP? What kinds of algorithms approach this problem efficiently? I'd appreciate any hints on implementation.
I take it you're trying to do the contest where you find the shortest path through the big maze in under 30 seconds?
I actually did this last year for fun (my college class didn't do the contest). After weeks of research, I was able to do an exact solution of the maze in under 30 seconds.
The heuristic I used was actually an exact heuristic. I wrote a bunch of code to find the minimal path length using a much more efficient algorithm based on graph decomposition and dynamic programming, and then fed the results back into A* as the 'heuristic' value.
The key thing to realize is that while the graph is very big (273 nodes), it has a low carving width (5), meaning that it can be solved efficiently using a fixed parameter tractable algorithm.
Hopefully that's enough keywords to get you on the right track.
Update: I wrote a blog post explaining the solution
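As a rough illustration of the dynamic-programming side of this (this is my own sketch of a Held-Karp-style subset DP, not the answerer's decomposition code): treat the dots as nodes, precompute pairwise maze distances with BFS, and then DP over subsets of visited dots. The route is an open path, since PACMAN need not return home. This is exponential in the number of dots, so it only works directly for small instances; scaling to hundreds of dots is exactly where the carving-width decomposition mentioned above comes in.

```c
#define MAXD 16
#define INF 1000000000

/* Shortest route that starts at dot `start` and visits every dot,
 * given precomputed pairwise distances in dist (e.g. from BFS on
 * the maze). Classic Held-Karp bitmask DP, open-path variant. */
int shortest_eat_all(int n, int start, int dist[MAXD][MAXD]) {
    static int dp[1 << MAXD][MAXD];   /* dp[mask][i]: best cost to visit
                                         `mask` ending at dot i */
    for (int m = 0; m < (1 << n); m++)
        for (int i = 0; i < n; i++)
            dp[m][i] = INF;
    dp[1 << start][start] = 0;
    for (int m = 0; m < (1 << n); m++)
        for (int i = 0; i < n; i++) {
            if (!(m & (1 << i)) || dp[m][i] == INF) continue;
            for (int j = 0; j < n; j++) {
                if (m & (1 << j)) continue;   /* j already eaten */
                int nm = m | (1 << j);
                if (dp[m][i] + dist[i][j] < dp[nm][j])
                    dp[nm][j] = dp[m][i] + dist[i][j];
            }
        }
    int best = INF, full = (1 << n) - 1;
    for (int i = 0; i < n; i++)        /* may end at any dot */
        if (dp[full][i] < best) best = dp[full][i];
    return best;
}
```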

C spell checking, string concepts, algorithms

This is my first question on Stack Overflow. Some quick background: this is not for a school project, just for fun, practice, and learning. I'm trying to make a spell checker in C. The problem I'm having is coming up with possible words to replace a misspelled word.
I should also point out that in my courses we haven't gotten to higher-level concepts like time complexity or algorithm design. I say that because I have a feeling there are names for the concepts I'm really asking about; I just haven't heard them yet.
In other similar posts here, most people suggest using the Levenshtein distance or traversing patricia tries; would it be a problem to just compare substrings? The (very inefficient) algorithm I've come up with is:
compare the first N characters, where N = (length of the misspelled word) - 1, to dictionary words (read from a system file into a dynamically allocated array)
if the first N characters of the misspelled word and a dictionary word match, add it to a list of suggestions; if no more matches are found, decrement N
continue until 10 suggestions are found or N = 0
It feels clunky and awkward, but it's sort of how our textbook suggests approaching this. I've read wiki articles on traversing trees and calculating all kinds of interesting things for efficiency and accuracy, but they're over my head at this point. Any help is appreciated, and thank you for taking the time to read this.
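The shrinking-prefix steps described above can be sketched roughly like this (the dictionary layout, names, and the duplicate handling are my simplified assumptions, not code from the question):

```c
#include <string.h>

#define MAX_SUGGEST 10

/* Collects up to MAX_SUGGEST dictionary words whose first n characters
 * match the misspelled word, decrementing n until enough suggestions
 * are found or n reaches 0. Returns the number of suggestions written
 * to out. */
int suggest(const char *word, const char **dict, int dict_len,
            const char **out) {
    int found = 0;
    int n = (int)strlen(word) - 1;
    while (found < MAX_SUGGEST && n > 0) {
        for (int i = 0; i < dict_len && found < MAX_SUGGEST; i++) {
            if (strncmp(word, dict[i], (size_t)n) == 0) {
                /* skip duplicates already matched at a longer prefix */
                int dup = 0;
                for (int j = 0; j < found; j++)
                    if (out[j] == dict[i]) dup = 1;
                if (!dup) out[found++] = dict[i];
            }
        }
        n--;
    }
    return found;
}
```

For the misspelling "helo" against {"hello", "help", "hero", "cat"}, the 3-character prefix "hel" picks up "hello" and "help", and dropping to "he" adds "hero"; longer-prefix matches naturally come out first, which is a reasonable ranking for free.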
Modern computers are fast, really fast. It would be worthwhile for you to code this up using the algorithm you describe, and see how well it works for you on your machine with your dictionary. If it works acceptably well, then great! Otherwise, you can try to optimise it by choosing a better algorithm.
All the fancy algorithms you read about have one or both of the following goals:
Speed up the spell checking
Offer better suggestions for corrections
But that's only important if you're seriously concerned about performance. There's nothing wrong with writing your own code for this. It may not be great, but you'll learn a lot more than by jumping straight into implementing an algorithm you don't understand yet.
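For when you do get there: the Levenshtein distance mentioned in the question is shorter to implement than its reputation suggests. Here is a common two-row DP version in C (a standalone sketch; the 256-character buffer is an assumed cap on word length):

```c
#include <string.h>

/* Levenshtein edit distance between a and b: the minimum number of
 * single-character insertions, deletions, and substitutions needed
 * to turn a into b. Uses two rolling rows, so O(strlen(b)) memory.
 * Assumes both strings are shorter than 256 characters. */
int levenshtein(const char *a, const char *b) {
    int la = (int)strlen(a), lb = (int)strlen(b);
    int prev[256], cur[256];
    for (int j = 0; j <= lb; j++) prev[j] = j;   /* distance from "" */
    for (int i = 1; i <= la; i++) {
        cur[0] = i;
        for (int j = 1; j <= lb; j++) {
            int sub = prev[j - 1] + (a[i - 1] != b[j - 1]);
            int del = prev[j] + 1;
            int ins = cur[j - 1] + 1;
            int best = sub;
            if (del < best) best = del;
            if (ins < best) best = ins;
            cur[j] = best;
        }
        memcpy(prev, cur, (lb + 1) * sizeof(int));
    }
    return prev[lb];
}
```

Ranking dictionary words by this distance (smallest first) is the usual upgrade path from the prefix-matching approach in the question.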
