C spell checking, string concepts, algorithms

This is my first question on Stack Overflow. Some quick background: this is not for a school project, just for fun, practice, and learning. I'm trying to make a spell checker in C. The problem I'm having is coming up with possible words to replace a misspelled word.
I should also point out that in my courses we haven't gotten to higher-level programming concepts like time complexity or algorithm development. I say that because I have a feeling there are names for the concepts I'm really asking about, I just haven't heard of them yet.
In other similar posts here most people suggest using the levenshtein distance or traversing patricia trees; would it be a problem to just compare substrings? The (very much inefficient) algorithm I've come up with is:
compare the first N characters, where N = length of the misspelled word - 1, to dictionary words (they would be read from a system file into a dynamically allocated array)
if N characters from the misspelled word and a word from a dictionary match, add it to a list of suggestions; if no more matches are found, decrement N
continue until 10 suggestions are found or N = 0
It feels clunky and awkward, but it's sort of how our textbook suggests approaching this. I've read wiki articles on traversing trees and calculating all kinds of interesting things for efficiency and accuracy, but they're over my head at this point. Any help is appreciated, and thank you for taking the time to read this.

Modern computers are fast, really fast. It would be worthwhile for you to code this up using the algorithm you describe, and see how well it works for you on your machine with your dictionary. If it works acceptably well, then great! Otherwise, you can try to optimise it by choosing a better algorithm.
All the fancy algorithms you read about have one or both of the following goals:
Speed up the spell checking
Offer better suggestions for corrections
But those only matter if you're seriously concerned about performance. There's nothing wrong with writing your own code for this. It may not be great, but you'll learn a lot more than you would by jumping straight into implementing an algorithm you don't understand yet.
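If you do want to prototype it, here is a minimal sketch of the prefix-comparison idea from the question, assuming the dictionary has already been read into an array of strings; the names suggest, dict and MAX_SUGGESTIONS are made up for illustration:

#include <string.h>

#define MAX_SUGGESTIONS 10

/* Collect up to MAX_SUGGESTIONS dictionary words that share a prefix of
   length n with the misspelled word, shrinking n until enough are found. */
static int suggest(const char *word, char **dict, int dict_count,
                   const char **out)
{
    int found = 0;

    for (int n = (int)strlen(word) - 1; n > 0 && found < MAX_SUGGESTIONS; n--) {
        for (int i = 0; i < dict_count && found < MAX_SUGGESTIONS; i++) {
            /* skip words already suggested at a longer prefix */
            int already = 0;
            for (int j = 0; j < found; j++)
                if (out[j] == dict[i]) { already = 1; break; }

            if (!already && strncmp(word, dict[i], (size_t)n) == 0)
                out[found++] = dict[i];
        }
    }
    return found;
}

This is roughly O(word length × dictionary size), which is exactly the kind of thing worth measuring on your own machine before reaching for anything fancier.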

Related

C-Input/Output From File-Insertion Sort

How can I write this program?
Can you give me any hints or advice?
c: Read the file and get the words to be sorted alphabetically (I've done the reading, but not the sorting)
It looks like the problem is worded poorly; is c supposed to direct you to read the words into an unsorted list? That would make sense to me.
Anyway, design your insertionsort function to match the prototype of the standard library's qsort. This way you can reuse your code and move the logic for comparing two words out of your sort function. Determining whether a word "comes before" another word is trivial.
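A sketch of what that might look like; the names insertion_sort and compare_words are illustrative, and it assumes the words are stored as fixed-size char buffers of at most 64 bytes:

#include <string.h>

/* Comparator with the signature qsort expects; each element here is a
   fixed-size char buffer holding one word. */
static int compare_words(const void *a, const void *b)
{
    return strcmp((const char *)a, (const char *)b);
}

/* Insertion sort with the same prototype as the standard qsort, so the
   word-comparison logic stays out of the sorting routine. */
static void insertion_sort(void *base, size_t nmemb, size_t size,
                           int (*compar)(const void *, const void *))
{
    char *arr = base;
    char tmp[64];                       /* assumes size <= 64 */

    for (size_t i = 1; i < nmemb; i++) {
        memcpy(tmp, arr + i * size, size);
        size_t j = i;
        while (j > 0 && compar(arr + (j - 1) * size, tmp) > 0) {
            memcpy(arr + j * size, arr + (j - 1) * size, size);
            j--;
        }
        memcpy(arr + j * size, tmp, size);
    }
}

Because the prototype matches, you can later swap in the library's qsort without touching the rest of the program, which makes comparisons easy.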
For calculating the running time of your algorithm, take a look at the clock function. This does not return the wall-clock running time of your program, but it is a better indicator of how much CPU time your sorting algorithm took. A good way to minimize the running time of your program is to refrain from making system calls and heap allocations inside your loops, if possible. Note that insertion sort has a very bad worst-case time complexity but is very good for almost-sorted data. Selecting the right sorting algorithm for your data set can make a big difference.
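For example, to time just the sorting step (inside main, say; this assumes the insertion_sort/compare_words names from the sketch above and that words, word_count and WORD_LEN are set up elsewhere):

#include <stdio.h>
#include <time.h>

clock_t start = clock();
insertion_sort(words, word_count, WORD_LEN, compare_words);
clock_t end = clock();

/* clock() measures CPU time consumed by the process, not wall-clock time */
printf("sorting took %.3f seconds of CPU time\n",
       (double)(end - start) / CLOCKS_PER_SEC);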

Cache Oblivious Search

Please forgive this stupid question, but I didn't find any hint by googling it.
If I have an array (contiguous memory), and I search sequentially for a given pattern (for example build the list of all even numbers), am I using a cache-oblivious algorithm? Yes it's quite stupid as an algorithm, but I'm trying to understand here :)
Yes, you are using a cache-oblivious algorithm, since the cost in memory transfers is O(N/B) - i.e. the number of block transfers depends on the block size B, but your algorithm doesn't depend on any particular value of the block size. Additionally, this means that you are both cache-oblivious and cache-efficient.
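To make that concrete, a sequential scan like the one in the question never mentions a block or cache-line size anywhere; it just touches memory in order, which is exactly why the O(N/B) transfer bound holds without tuning anything:

#include <stddef.h>

/* Scan the array once, in order, copying the even values to out and
   returning how many were found. Nothing here depends on the block or
   cache-line size, which is what makes it cache-oblivious. */
size_t collect_evens(const int *a, size_t n, int *out)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] % 2 == 0)
            out[count++] = a[i];
    return count;
}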

How do algorithms differ from design patterns?

I am new to C programming; coming from an OOP PHP background.
I find C to be (no wonder) a much more difficult language. In particular, I had a lot of trouble figuring out a couple of things about arrays at first: for example, there is no native associative array.
Now, this part I guess I'm figuring out little by little, but now I have a question regarding a conversation I had just yesterday with a C developer. She was explaining the binary search algorithm to me because I asked her whether there were libraries to do array related stuff in C or not because it seemed like a smarter solution than always re-inventing the wheel.
I would really love to learn more about algorithms in C, in particular what differences are there between algorithms and the design patterns I'm used to using in PHP?
Taking things in order: the extent of C's support for anything like an associative array would be qsort to sort an array of structures based on a key, and bsearch to find one based on a key. There are, of course, quite a few alternatives -- various other libraries have hash tables, balanced trees, etc. Exactly which will suit your purposes is hard to guess though.
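A small sketch of that qsort/bsearch approach, using a made-up record type keyed by a string:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A made-up record type keyed by name. */
struct person {
    char name[32];
    int  age;
};

static int compare_by_name(const void *a, const void *b)
{
    const struct person *pa = a, *pb = b;
    return strcmp(pa->name, pb->name);
}

int main(void)
{
    struct person people[] = {
        { "carol", 41 }, { "alice", 30 }, { "bob", 25 }
    };
    size_t n = sizeof people / sizeof people[0];

    /* Sort once by key... */
    qsort(people, n, sizeof people[0], compare_by_name);

    /* ...then look records up by key with bsearch. */
    struct person key = { "bob", 0 };
    struct person *found = bsearch(&key, people, n, sizeof people[0],
                                   compare_by_name);
    if (found)
        printf("%s is %d\n", found->name, found->age);
    return 0;
}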
Offhand, I don't know of many good books covering algorithms that use C as their primary vehicle for demonstration. A few obvious recommendations for books on algorithms in general (mostly language independent) would be:
The Art of Computer Programming by Donald Knuth. This is pretty much the classic algorithms book. It's now (finally) up to four volumes. Knuth originally started on it in 1967, planning to write 7 volumes. Only three volumes were available for a long time; a fourth was added quite recently. At the rate he's going, it's only going to make it to 7 if Knuth lives to be well past 100 years old. Nonetheless, the parts that are there are extremely good -- but (warning!) he analyzes the algorithms in considerable detail; if you don't know at least a little calculus, a fair amount will probably be hard to follow.
Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. IIRC, there's now a newer edition than I have, which adds yet another author. This is a large book (dropping it on your toes would be quite painful). It uses a fair amount of mathematical notation and such throughout, but if you're willing to work a little at looking up the notation, it's really pretty understandable. It covers quite a bit of important ground (e.g., graph algorithms) that are scheduled for later volumes of Knuth, but not (at least yet) available there.
Data Structures and Algorithms by Aho, Hopcroft and Ullman. This is (by a pretty fair margin) the smallest and lightest of these, and at least for most people probably the easiest to follow.
Though it's only available used anymore, if you can find a copy of Algorithms + Data Structures = Programs by Niklaus Wirth, that's what I'd really suggest. It uses Pascal (no surprise -- Niklaus Wirth invented Pascal), but that's enough like C that it doesn't cause a real problem. It doesn't go into as much depth as Knuth about each algorithm, but still enough to give a good feel for when one is likely to be a good choice versus another. For somebody in your position (some background in programming, but little in this area) it's my top recommendation.
Though I've said it before, I think it bears repeating: IMO, all of Robert Sedgewick's books on algorithms should be avoided. Algorithms in C++ is probably the worst of them, but the others are only marginally better. The code they include (again, especially the C++ version) is truly execrable, and the descriptions of algorithms are often incomplete and/or misleading. The most recent editions have fixed some of the problems, but (IMO) not nearly enough to qualify as something that should ever be recommended. If there was no alternative, you could probably get by with these, but given the number of alternatives that are dramatically superior, the only reason to read these at all is if somebody gives them to you, and you absolutely can't afford anything else.
As far as algorithms versus design patterns goes, the line can get blurry in places, but generally an algorithm is much more tightly defined. An algorithm will normally have a specific, tightly defined input which it processes in a specific way to produce an equally specific result/output. A design pattern tends to be more loosely defined, more generic. An algorithm can be generic as well (e.g., a sorting algorithm might require a type that defines a strict weak ordering) but still has specific requirements on the type.
A design pattern tends to be somewhat more loosely defined. For example, the visitor pattern involves processing groups of objects -- but we don't want to modify the types of those objects when we decide we need to process them in a new and different way. We do that by defining the processes separately from the objects to be processed, along with how we'll traverse the groups of objects, and allow a process to work with each.
To look at it from a rather different direction, you can usually implement an algorithm with a function or a small group of functions. A design pattern tends to be oriented more toward the style in which you write your code, rather than just "here's a function, use it."
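A rough way to see that difference in C terms (all names here are made up): an algorithm is typically a single function with a well-defined input and output, while something pattern-like shows up as a way of structuring code, e.g. separating "how to traverse the elements" from "what to do with each one":

#include <stdio.h>

/* An algorithm: a self-contained function, specific input, specific output. */
static int sum(const int *a, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Closer to a pattern: the traversal is written once and the processing
   step is supplied separately, visitor-style, via a function pointer. */
static void for_each(const int *a, int n, void (*visit)(int))
{
    for (int i = 0; i < n; i++)
        visit(a[i]);
}

static void print_value(int x) { printf("%d\n", x); }

int main(void)
{
    int data[] = { 3, 1, 4, 1, 5 };
    printf("sum = %d\n", sum(data, 5));
    for_each(data, 5, print_value);
    return 0;
}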
"Algorithms in C, Parts 1-5 (Bundle): Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition)"
I cannot stress enough how good that series is.

How does this sort function work?

As part of my job, I'm occasionally called upon to evaluate candidates for programming positions. A code snippet recently passed across my desk and my first thoughts were that I wasn't sure code like this would even compile any more. But compile it does, and it works as well.
Can anyone explain why and how this works? The mandate was to provide a function to sort five integer values.
void order5(arr) int *arr; {
int i,*a,*b,*c,*d,*e;
a=arr,b=arr+1,c=arr+2,d=arr+3,e=arr+4;
L1: if(*a >*b){*a^=*b;*b^=*a;*a^=*b;}
L2: if(*b >*c){*b^=*c;*c^=*b;*b^=*c;goto L1;}
L3: if(*c >*d){*c^=*d;*d^=*c;*c^=*d;goto L2;}
if(*d >*e){*d^=*e;*e^=*d;*d^=*e;goto L3;}
}
Now I can see the disadvantages of this approach (lack of readability and maintainability for anyone born after 1970) but can anyone think of any advantages? I'm hesitant to dismiss it out of hand but, before we decide whether or not to bring this person back in for round 2, I'd like to know if it has any redeeming features beyond job security for the author.
It's a fully unrolled bubble sort with the XOR-swap trick expressed inline. I compiled it with several different options hoping it produced some awesome compact code, but it's really not that impressive. I threw in some __restrict__ keywords so that the compiler would know that none of the *a could alias each other, which does help quite a bit. Overall though, I think the attempted cleverness has gone so far outside the norm that the compiler is really not optimizing the code very well at all.
I think the only advantage here is novelty. It certainly caught your eye! I would have been more impressed with abuses of more modern technology, like sorting with MMX/SSE or the GPU, or using 5 threads which all fight it out to try to insert their elements into the right place. Or perhaps an external merge sort, just in case the 5 element array can't fit in core.
The XOR trick just swaps two integers. The gotos imitate a loop. Advantages? None at all, except showing off how obfuscated code you can write. The parameter declaration after the function's parentheses is a deprecated (K&R-style) feature. And having an array on hand and having 5 distinct pointers pointing at each element of the array is just horrible. To sum it up: Yuck! :)
It's a screwy implementation of Gnome sort for five items.
Here is how a garden gnome sorts a line of flower pots. Basically, he looks at the flower pot next to him and the previous one; if they are in the right order he steps one pot forward, otherwise he swaps them and steps one pot backwards. Boundary conditions: if there is no previous pot, he steps forwards; if there is no pot next to him, he is done.
The "stepping one pot forward" is done by falling through to the next if. The goto immediately after each XOR-swap does the "stepping one pot backwards."
You can't dismiss someone out of hand for an answer like this. It might have been provided tongue-in-cheek.
The question is highly artificial, prompting contrived answers.
You need to find out how the candidate would solve more real-world problems.
lack of readability and maintainability for anyone born after 1970
Are people born before 1970 better at maintaining unreadable code, then? If so, that's good, because I was, and it can only be a selling point.
before we decide whether or not to bring this person back in for round 2, I'd like to know if it has any redeeming features beyond job security for the author.
The code has not one redeeming feature. It bizarrely uses the XOR-swap technique, whose only potential redeeming feature would be saving one integer's worth of stack space. However, even that is negated by the five pointers defined and the unused int. It also makes gratuitous use of the comma operator.
Normally, I'd also say "goto, yuck", but in this case, it has been used in quite an elegant way, once you understand the sort algorithm used. In fact, you could argue that it makes the gnome sort algorithm clearer than using an index variable (except it cannot be generalised to n elements). So there you have the redeeming feature, it makes goto look good :)
As for "do you bring the candidate back for the second interview". If the code fragment was accompanied by a detailed comment explaining how the algorithm worked and the writer's motivation for using it, I'd say definitely yes. If not, I'd probably ring him up and ask those questions.
NB, the code fragment uses K&R style parameter declarations. This means the author probably hasn't programmed in C for 10 to 15 years or he copied it off the Internet.

Did you apply computational complexity theory in real life?

I'm taking a course in computational complexity and have so far had an impression that it won't be of much help to a developer.
I might be wrong but if you have gone down this path before, could you please provide an example of how the complexity theory helped you in your work? Tons of thanks.
O(1): Plain code without loops. Just flows through. Lookups in a lookup table are O(1), too.
O(log(n)): efficiently optimized algorithms. Example: binary tree algorithms and binary search. Usually doesn't hurt. You're lucky if you have such an algorithm at hand.
O(n): a single loop over data. Hurts for very large n.
O(n*log(n)): an algorithm that does some sort of divide and conquer strategy. Hurts for large n. Typical example: merge sort
O(n*n): a nested loop of some sort. Hurts even with small n. Common with naive matrix calculations. You want to avoid this sort of algorithm if you can.
O(n^x) for x > 2: a wicked construction with multiple nested loops. Hurts for very small n.
O(x^n), O(n!) and worse: freaky (and often recursive) algorithms you don't want to have in production code except in very controlled cases, for very small n, and only if there really is no better alternative. Computation time may explode just going from n to n+1.
Moving your algorithm down from a higher complexity class can make your algorithm fly. Think of the Fourier transform, which has an O(n*n) algorithm that was unusable with 1960s hardware except in rare cases. Then Cooley and Tukey made some clever complexity reductions (down to O(n*log(n))) by re-using already calculated values. That led to the widespread introduction of the FFT into signal processing. And in the end it's also why Steve Jobs made a fortune with the iPod.
Simple example: Naive C programmers write this sort of loop:
for (int cnt = 0; cnt < strlen(s); cnt++) {
    /* some code */
}
That's an O(n*n) algorithm, because strlen() is itself O(n) and it is re-evaluated on every iteration. Nesting loops leads to multiplication of complexities inside the big-O. O(n) inside O(n) gives O(n*n). O(n^3) inside O(n) gives O(n^4). In the example, precalculating the string length will immediately turn the loop into O(n). Joel has also written about this.
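The fix is simply to hoist the length calculation out of the loop condition:

size_t len = strlen(s);      /* computed once: O(n) */
for (size_t cnt = 0; cnt < len; cnt++) {
    /* some code */
}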
Yet the complexity class is not everything. You have to keep an eye on the size of n. Reworking an O(n*log(n)) algorithm into an O(n) one won't help if the number of (now linear) instructions grows massively due to the reworking. And if n is small anyway, optimizing won't give much bang, either.
While it is true that one can get really far in software development without the slightest understanding of algorithmic complexity, I find I use my knowledge of complexity all the time; though at this point it is often without realizing it. The two things that learning about complexity gives you as a software developer are a way to compare non-similar algorithms that do the same thing (sorting algorithms are the classic example, but most people don't actually write their own sorts), and, more usefully, a way to quickly describe an algorithm.
For example, consider SQL. SQL is used every day by a very large number of programmers. If you were to see the following query, your understanding of it would be very different depending on whether you've studied complexity.
SELECT User.UserId, COUNT(*) AS OrderCount FROM User JOIN Order ON User.UserId = Order.UserId GROUP BY User.UserId
If you have studied complexity, you'll understand what it means when someone says the query is O(n^2) for a certain DBMS. Without complexity theory, they would have to explain about table scans and so on. If we add an index to the Order table
CREATE INDEX ORDER_USERID ON Order(UserId)
Then the above query might be O(n log n), which would make a huge difference for a large DB, but for a small one, it is nothing at all.
One might argue that complexity theory is not needed to understand how databases work, and they would be correct, but complexity theory gives a language for thinking about and talking about algorithms working on data.
For most types of programming work the theory part and the proofs may not be useful in themselves, but what they're trying to give you is the intuition to be able to immediately say "this algorithm is O(n^2), so we can't run it on these one million data points". Even in the most elementary processing of large amounts of data you'll run into this.
Thinking about it quickly, complexity theory has been important to me in business data processing, GIS, graphics programming and understanding algorithms in general. It's one of the most useful lessons you can take from CS studies compared to what you'd generally self-study otherwise.
Computers are not smart; they will do whatever you instruct them to do. Compilers can optimize code a bit for you, but they can't optimize algorithms. The human brain works differently, and that is why you need to understand Big O. Consider calculating Fibonacci numbers. We all know F(n) = F(n-1) + F(n-2), and starting with 1, 1 you can easily calculate the following numbers without much effort, in linear time. But if you tell the computer to calculate it with that formula (recursively), it won't be linear (at least, in imperative languages). Somehow, our brain optimized the algorithm, but the compiler can't do this. So, you have to work on the algorithm to make it better.
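For instance, the two versions side by side (names are illustrative):

/* Naive recursive version: exponential, because the same values are
   recomputed over and over. */
unsigned long fib_recursive(unsigned n)
{
    if (n < 2)
        return n;
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Iterative version: linear, the way you would do it "by hand". */
unsigned long fib_iterative(unsigned n)
{
    unsigned long prev = 0, curr = 1;
    for (unsigned i = 0; i < n; i++) {
        unsigned long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev;
}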
And then you need training to spot the brain's optimizations which look so obvious, to see when code might be inefficient, to know the patterns for bad and good algorithms (in terms of computational complexity), and so on. Basically, those courses serve several purposes:
understand execution patterns and data structures and what effect they have on the time your program needs to finish;
train your mind to spot potential problems in an algorithm, when it could be inefficient on large data sets, or to understand the results of profiling;
learn well-known ways to improve algorithms by reducing their computational complexity;
prepare yourself to pass an interview at a cool company :)
It's extremely important. If you don't understand how to estimate and figure out how long your algorithms will take to run, then you will end up writing some pretty slow code. I think about computational complexity all the time when writing algorithms. It's something that should always be on your mind when programming.
This is especially true in many cases because while your app may work fine on your desktop computer with a small test data set, it's important to understand how quickly your app will respond once you go live with it, and there are hundreds of thousands of people using it.
Yes, I frequently use Big-O notation, or rather, I use the thought processes behind it, not the notation itself. Largely because so few developers in the organization(s) I frequent understand it. I don't mean to be disrespectful to those people, but in my experience, knowledge of this stuff is one of those things that "sorts the men from the boys".
I wonder if this is one of those questions that can only receive "yes" answers? It strikes me that the set of people that understand computational complexity is roughly equivalent to the set of people that think it's important. So, anyone that might answer no perhaps doesn't understand the question and therefore would skip on to the next question rather than pause to respond. Just a thought ;-)
There are points in time when you will face problems that require thinking about them. There are many real-world problems that require manipulation of large sets of data...
Examples are:
Maps applications... like Google Maps - how would you process the worldwide road-line data and draw it? And you need to draw it fast!
Logistics applications... think traveling salesman on steroids
Data mining... every big enterprise requires it; how would you mine a database containing 100 tables and 10m+ rows and come up with useful results before the trends become outdated?
Taking a course in computational complexity will help you in analyzing and choosing/creating algorithms that are efficient for such scenarios.
Believe me, something as simple as reducing a coefficient, say from T(3n) down to T(2n), can make a HUGE difference when the "n" is measured in days if not months.
There's lots of good advice here, and I'm sure most programmers have used their complexity knowledge once in a while.
However I should say understanding computational complexity is of extreme importance in the field of Games! Yes you heard it, that "useless" stuff is the kind of stuff game programming lives on.
I'd bet very few professionals care about Big-O as much as game programmers do.
I use complexity calculations regularly, largely because I work in the geospatial domain with very large datasets, e.g. processes involving millions and occasionally billions of cartesian coordinates. Once you start hitting multi-dimensional problems, complexity can be a real issue, as greedy algorithms that would be O(n) in one dimension suddenly hop to O(n^3) in three dimensions and it doesn't take much data to create a serious bottleneck. As I mentioned in a similar post, you also see big O notation becoming cumbersome when you start dealing with groups of complex objects of varying size. The order of complexity can also be very data dependent, with typical cases performing much better than general cases for well designed ad hoc algorithms.
It is also worth testing your algorithms under a profiler to see if what you have designed is what you have achieved. I find most bottlenecks are resolved much better with algorithm tweaking than improved processor speed for all the obvious reasons.
For more reading on general algorithms and their complexities I found Sedgewick's work both informative and accessible. For spatial algorithms, O'Rourke's book on computational geometry is excellent.
Even in your normal life, away from a computer, you can apply concepts of complexity and parallel processing. This will allow you to be more efficient. Cache coherency. That sort of thing.
Yes, my knowledge of sorting algorithms came in handy one day when I had to sort a stack of student exams. I used merge sort (but not quicksort or heapsort). When programming, I just employ whatever sorting routine the library offers. (I haven't had to sort a really large amount of data yet.)
I do use complexity theory in programming all the time, mostly in deciding which data structures to use, but also when deciding whether or when to sort things, and for many other decisions.
'yes' and 'no'
yes) I frequently use Big-O notation when developing and implementing algorithms.
E.g. when you need to handle 10^3 items and the complexity of the first algorithm is O(n log(n)) and of the second one O(n^3), you can simply say that the first algorithm is almost real time (roughly 10^4 basic steps) while the second requires considerable calculation (on the order of 10^9 steps).
Sometimes knowledge of NP complexity classes can be useful. It can help you realize that you can stop trying to invent an efficient algorithm when some NP-complete problem can be reduced to the problem you are thinking about.
no) What I have described above is only a small part of complexity theory. As a result it is difficult to say that I use it; I use only a minor part of it.
I should admit that there are many software development projects which don't touch algorithm development or use algorithms in a sophisticated way. In such cases complexity theory is useless. Ordinary users of algorithms frequently operate using the words 'fast' and 'slow', 'x seconds', etc.
#Martin: Can you please elaborate on the thought processes behind it?
It might not be as explicit as sitting down and working out the Big-O notation for a solution, but it creates an awareness of the problem - and that steers you towards looking for a more efficient answer and away from problematic approaches you might otherwise take, e.g. O(n*n) versus something faster, such as searching for words stored in a trie rather than in a list (a contrived example).
I find that it makes a difference with what data structures I'll choose to use, and how I'll work on large numbers of records.
A good example could be when your boss tells you to write some program and you can demonstrate, using computational complexity theory, that what your boss is asking for is not possible.

Resources