I am extremely new to the concept of random number generation, and I need to create my own algorithm in C for work (the built-in random number generator will not work for me).
Can somebody point me to a good introduction to the topic so that I can grasp the concepts? Everything I've found so far seems to explain itself in terms of itself, and that's not very helpful.
I'm looking for a layman's explanation on the topic.
Read chapter 7 of the online Numerical Recipes in C.
For a good place to start learning, the Wikipedia articles are pretty good, and much more up-to-date than, say, Knuth. Also check out this paper by David Jones.
If C's built-in generator isn't good enough for you, also consider an external library like my own public domain ojrandlib, which gives you a choice of algorithms like Marsaglia's MWC, Mersenne Twister, and others.
The first half of Knuth's TAOCP volume 2 ("Seminumerical algorithms") is devoted to random-number generation. He talks about a few pseudo-random number generators, then he spends a long time talking about what properties make a "good" PRNG for most purposes. It's probably worth reading if your job is to produce a PRNG that doesn't suck.
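To give a flavor of what Knuth analyzes, here's a minimal linear congruential generator (LCG), the classic family he spends the most time on. The constants below are the ones he suggests for MMIX; treat this as an illustrative sketch, not a generator to ship:

    #include <stdint.h>

    /* Minimal LCG: state advances as x = a*x + c (mod 2^64).
     * Multiplier and increment are Knuth's MMIX constants. */
    static uint64_t lcg_state = 1;

    void lcg_seed(uint64_t seed) { lcg_state = seed; }

    uint32_t lcg_next(void)
    {
        lcg_state = 6364136223846793005ULL * lcg_state
                  + 1442695040888963407ULL;
        return (uint32_t)(lcg_state >> 32); /* high bits are the better ones */
    }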
You might also want to look at George Marsaglia's work.
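For a concrete taste of Marsaglia's style, here's his xorshift32 generator (from his paper "Xorshift RNGs"); the whole thing is three shift/XOR steps, and (13, 17, 5) is one of the shift triples he lists:

    #include <stdint.h>

    /* Marsaglia's xorshift32: period 2^32 - 1, provided the seed is nonzero. */
    uint32_t xorshift32(uint32_t *state)
    {
        uint32_t x = *state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return *state = x;
    }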
I recently started taking Probabilistic Graphical Models on Coursera, and two weeks after starting I am beginning to believe I am not that great at probability; as a result, I am not even able to follow the first topic (Bayesian networks). That being said, I want to make an effort to learn this course, so can you suggest some other resources, either for PGMs or for probability, that would be helpful in understanding this course?
You could try reading Pearl's 1988 book Probabilistic Reasoning in Intelligent Systems, which gives a lot of background and insight into the Bayesian way of seeing things. Concerning probability theory, you don't really need much theory beyond the three basic laws of probability and the definition of conditional probability, which are simple and usually taught in school.
This book has been very influential on the way AI has developed over the last 20 years. The author was awarded the Turing Award this year.
Also there's a rather new book by Koller and Friedman: Probabilistic Graphical Models (2009). You should already know about this one, since the course is probably held by Daphne Koller again. This book includes many more recent results and covers more ground, in more detail. It can be very demanding in parts. It probably also shares examples with the course.
PGMs are a bit advanced if you don't have a good grasp of probability theory. A more introductory class is Statistics 1; it might be better to start there.
I am new to C programming; coming from an OOP PHP background.
I find C to be (no wonder) a much more difficult language. At first I had a particularly hard time figuring out a couple of things about arrays: for example, there is no native associative array.
Now, this part I guess I'm figuring out little by little, but I have a question from a conversation I had just yesterday with a C developer. She explained the binary search algorithm to me after I asked her whether there were libraries for array-related stuff in C, since using one seemed like a smarter solution than always re-inventing the wheel.
I would really love to learn more about algorithms in C; in particular, what differences are there between algorithms and the design patterns I'm used to using in PHP?
Taking things in order: the extent of C's support for anything like an associative array would be qsort to sort an array of structures based on a key, and bsearch to find one based on a key. There are, of course, quite a few alternatives -- various other libraries have hash tables, balanced trees, etc. Exactly which will suit your purposes is hard to guess though.
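To make that concrete, here's a minimal sketch of the qsort/bsearch approach (the struct and names are just illustrative): sort an array of key/value structs once, then look keys up by binary search with the same comparator:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry {
        const char *key;
        int         value;
    };

    /* Comparator shared by qsort and bsearch: order entries by key. */
    static int cmp_entry(const void *a, const void *b)
    {
        const struct entry *ea = a, *eb = b;
        return strcmp(ea->key, eb->key);
    }

    int main(void)
    {
        struct entry table[] = {
            { "banana", 2 }, { "apple", 1 }, { "cherry", 3 },
        };
        size_t n = sizeof table / sizeof table[0];

        qsort(table, n, sizeof table[0], cmp_entry);

        struct entry key = { "cherry", 0 };
        struct entry *found = bsearch(&key, table, n, sizeof table[0], cmp_entry);
        if (found)
            printf("%s -> %d\n", found->key, found->value);
        return 0;
    }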
Offhand, I don't know of many good books covering algorithms that use C as their primary vehicle for demonstration. A few obvious recommendations for books on algorithms in general (mostly language independent) would be:
The Art of Computer Programming by Donald Knuth. This is pretty much the classic algorithms book. It's now (finally) up to four volumes. Knuth originally started on it in 1967, planning to write 7 volumes. Only three volumes were available for a long time; a fourth was added quite recently. At the rate he's going, it's only going to make it to 7 if Knuth lives to be well past 100 years old. Nonetheless, the parts that are there are extremely good -- but (warning!) he analyzes the algorithms in considerable detail; if you don't know at least a little calculus, a fair amount will probably be hard to follow.
Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. IIRC, there's now a newer edition than mine, which adds yet another author. This is a large book (dropping it on your toes would be quite painful). It uses a fair amount of mathematical notation throughout, but if you're willing to work a little at looking up the notation, it's really pretty understandable. It covers quite a bit of important ground (e.g., graph algorithms) that is scheduled for later volumes of Knuth but not (at least yet) available there.
Data Structures and Algorithms by Aho, Hopcroft and Ullman. This is (by a pretty fair margin) the smallest and lightest of these, and at least for most people probably the easiest to follow.
Though it's now only available used, if you can find a copy of Algorithms + Data Structures = Programs by Niklaus Wirth, that's what I'd really suggest. It uses Pascal (no surprise -- Niklaus Wirth invented Pascal), but that's enough like C that it doesn't cause a real problem. It doesn't go into as much depth as Knuth on each algorithm, but still enough to give a good feel for when one is likely to be a good choice versus another. For somebody in your position (some background in programming, but little in this area) it's my top recommendation.
Though I've said it before, I think it bears repeating: IMO, all of Robert Sedgewick's books on algorithms should be avoided. Algorithms in C++ is probably the worst of them, but the others are only marginally better. The code they include (again, especially the C++ version) is truly execrable, and the descriptions of algorithms are often incomplete and/or misleading. The most recent editions have fixed some of the problems, but (IMO) not nearly enough to qualify as something that should ever be recommended. If there was no alternative, you could probably get by with these, but given the number of alternatives that are dramatically superior, the only reason to read these at all is if somebody gives them to you, and you absolutely can't afford anything else.
As far as algorithms versus design patterns goes, the line can get blurry in places, but generally an algorithm is much more tightly defined. An algorithm will normally have a specific, tightly defined input which it processes in a specific way to produce an equally specific result/output. A design pattern tends to be more loosely defined, more generic. An algorithm can be generic as well (e.g., a sorting algorithm might require a type that defines a strict weak ordering) but still has specific requirements on the type.
The visitor pattern is a good example of that looseness: it involves processing groups of objects, but we don't want to modify the types of those objects every time we decide we need to process them in a new and different way. So we define the processes separately from the objects to be processed, along with how we'll traverse the groups of objects, and allow each process to work with each object.
To look at it from a rather different direction, you can usually implement an algorithm with a function or a small group of functions. A design pattern tends to be oriented more toward the style in which you write your code, rather than just "here's a function, use it."
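If it helps to see that in C terms, here's a hypothetical sketch of the visitor idea (the pattern is usually presented in OO languages, so this is only an approximation): the operation is passed in from outside as a function pointer, so adding a new process never touches the object type:

    #include <stdio.h>

    struct shape {
        const char *name;
        double      area;
    };

    /* A "visitor" is just an operation supplied from outside;
     * the shapes don't know or care what it does. */
    typedef void (*visitor_fn)(const struct shape *);

    static void visit_all(const struct shape *shapes, int n, visitor_fn visit)
    {
        for (int i = 0; i < n; i++)
            visit(&shapes[i]);
    }

    static void print_shape(const struct shape *s)
    {
        printf("%s: area %.2f\n", s->name, s->area);
    }

    int main(void)
    {
        struct shape shapes[] = { { "circle", 3.14 }, { "square", 1.0 } };
        /* A new operation means a new visitor function,
         * not a change to struct shape. */
        visit_all(shapes, 2, print_shape);
        return 0;
    }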
"Algorithms in C, Parts 1-5 (Bundle): Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition)"
I cannot stress enough how good that series is.
I've been hunting on the net periodically for several months for an answer to this, with no joy. I'd be grateful if anyone can shed any light.
I'm interested in work that's been done on simulating the human brain. I could of course mean many things by that. Here's what I do mean, followed by what I don't mean:
I AM interested in simulations of how we think and feel. I'm not talking about down to the level of neurons, but more simulation of the larger modules that are involved. For example one might simulate the 'anger' module as a service that measures the degree one has been disrespected (in some system of representation) and outputs an appropriate measure of anger (again in some system of representation).
I am NOT interested in projects like the Blue Brain etc, where accurate models of neuron clusters are being built. I'm interested in models operating at much higher levels of abstraction, on the level of emotional modules, cognitive reasoning systems etc.
I'm also NOT interested in AI projects that take human mechanisms as their inspiration or paradigm, like Belief-Desire-Intention systems, but which are not actually trying to replicate human behavior. Interesting though these systems are, I'm not interested in making effective systems, but in effectively modelling human thought and emotion.
I've been searching far and wide, but all I've found are papers from the 60s like this one:
Computer Simulation of Human Interaction in Small Groups
It almost appears to me as if psychologists were excited about simulating brains when computers first became available, but now don't do it at all.
Can anyone point me in the direction of more recent research/efforts, if there have been any?
There are a lot of people who've given it some thought, but one of the problems is that as AI research has continued, it increasingly seems that AI leads us to think certain things that seemed hard are actually relatively easy, while the apparently easy stuff is what is hard.
Consider, for example, what an expert does in some field of discourse. We used to think, in the 60's or so, that things like medical diagnosis and chess playing were hard. We now know that as far as anyone can tell, they are simple search problems; it just happens that the meat computer does search relatively fast and with a lot of parallelism.
There are a number of people, like Jeff Hawkins, who are taking a different approach, and think simulation of the brain is the only way to get something more like what we mean by "thinking"; if they're right, then you're making a category error by saying those don't interest you.
The worst problem with the whole issue is that it appears increasingly difficult to say what we mean when we say we "think and feel" at all. John Searle, with his "Chinese Room" analogy, would argue that it's actually not possible for a mechanism to "think" or "be conscious". On the other hand, Alan Turing, with the famous Turing Test, proposed a weaker definition: for Turing, if you can't tell the difference between a "really" thinking and feeling being and a computer simulation of one, then you must assume the simulation is a "thinking and feeling" being.
I tend to come down on Turing's side: after all, I don't know that anyone but me is "really" a thinking and feeling being. (To think about that question, look into the idea of a "philosophical zombie", which isn't -- as you might suspect -- a member of the Undead who wonders if there is Meaning in the eating of brains, but instead a hypothetical entity that isn't conscious, but that perfectly simulates a conscious entity.)
So here's a suggestion: first, think of a way to test, with an effective computation (that is, a halting program or a sequence of tests that is sure to come to a conclusion), whether you have really implemented something that can "think and feel"; once you do that, you'll be a long way toward thinking about how to build it.
You might be interested in work on Affective Computing:
http://en.wikipedia.org/wiki/Affective_computing
http://affect.media.mit.edu/
http://psychometrixassociates.com/bio.htm
You should take a look at neural networks, if you haven't already.
http://en.wikipedia.org/wiki/Neural_network
In the book "On Intelligence", Jeff Hawkins talks a lot about how we need high-level models of the human. He provides a good literature survey of existing (at the time) research on that topic.
ACT-R is a framework used in the cognitive sciences to simulate the cognitive functions of the human mind: memory, recognition, language understanding, and so on. I'm not that familiar with it, so I have to point you to the wiki page.
https://secure.wikimedia.org/wikipedia/en/wiki/ACT-R
The program Eurisko was developed by Douglas Lenat in the late 70s and 80s. It's allegedly adept at learning general patterns and heuristics, and at improving its own performance. Naturally, Lenat has never released the source code, and has published very little information about the exact inner workings of the program. So, in lieu of an official explanation, how might a program like Eurisko be designed? What open source technologies available today might make an implementation more practical?
Actually, Lenat published a fair amount on Eurisko (I was pretty interested in this 20 years ago). If I recall correctly, he published a number of papers in the AI literature; here's a key one: "Why Eurisko appears to work" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.1269&rep=rep1&type=pdf).
Eurisko is based on his PhD thesis on AM ("A Mathematician"), which you can get through Stanford.
I'd go look at those first :-}
I don't know about "open source", but I'd certainly consider using LISP (Lenat did), or Prolog (because it has good symbolic manipulation support), and Eurisko is about symbolic computation.
All of this from past reading, with possible inaccuracies due to Somezheimer's. As I recall, Eurisko grew out of a postdoc with Herbert Simon, in which the decision was made to isolate AM's inference capabilities from those of the underlying Lisp.
Thus, a paper (https://pdfs.semanticscholar.org/4cc4/a5e1591a5a4e81f6ad52e05833b3e750f56e.pdf) described RLL, arguably an early DSL for writing discovery programs, which served as the platform on which Eurisko was written. I think I recall reading that Lenat made portions of the Eurisko code visible to Ken Haase
http://www.kenhaase.com/aboutkh.html
who was working on a rational reconstruction of Eurisko. My view is that it is possible to reconstruct Eurisko if one follows carefully all of the documents about it, though there are probably newer insights which can lead to improvements.
Overall, I believe the key insights are those which relate to complex adaptive systems: feedback, decay, and process. Eurisko, like AM, used an agenda mechanism to organize agent behaviors and used feedback to adjust the priorities of agenda items; Eurisko added decay.
Eurisko had 4 agendas, each playing in a different space but, crucially, sharing the same knowledge base. Thus, feedback from one space might bump an agenda item in another space above the "boredom" threshold, returning that agenda to life.
Under that was a loop first seen in AM:
Find something to do
Do it
Study what you just did
Loop
To me, those key points seem profound and offer a glimpse into going beyond Eurisko.
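Since Lenat never released the code, the following is nothing more than a hypothetical sketch of that loop and the agenda mechanism described above, in C for concreteness. Every name, constant, and the feedback rule here is invented; the real Eurisko manipulated symbolic heuristics in Lisp, not a fixed task list:

    #include <stdio.h>

    #define NTASKS 3
    #define BOREDOM_THRESHOLD 0.2  /* invented cutoff */
    #define DECAY 0.9              /* invented decay factor */

    struct task {
        const char *name;
        double      priority;
    };

    /* "Find something to do": the most promising task still above
     * the boredom threshold, or -1 if the whole agenda is bored. */
    static int pick_best(const struct task *tasks)
    {
        int best = -1;
        for (int i = 0; i < NTASKS; i++)
            if (tasks[i].priority >= BOREDOM_THRESHOLD &&
                (best < 0 || tasks[i].priority > tasks[best].priority))
                best = i;
        return best;
    }

    int main(void)
    {
        struct task tasks[NTASKS] = {
            { "specialize-a-concept", 0.8 },
            { "find-examples",        0.6 },
            { "conjecture",           0.5 },
        };

        for (int step = 0; step < 5; step++) {
            int i = pick_best(tasks);
            if (i < 0) break;                   /* the agenda went "bored" */
            printf("step %d: %s\n", step, tasks[i].name);  /* "do it" */
            /* "Study what you just did": a stand-in feedback rule that
             * decays the old priority and adds the new interest. */
            double interestingness = 0.1;       /* invented measure */
            tasks[i].priority = tasks[i].priority * DECAY + interestingness;
        }
        return 0;
    }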
What is the best place or a link to learn algorithms in C? How do you know when and where to use a given algorithm just by looking at the problem?
Algorithms aren't necessarily tied to a specific language, just to clarify, so any algorithms book will work great as long as you can understand the concepts behind the data structures/algorithms.
That said, this seems like a good choice: Algorithms in C. I have the C++ equivalent on my shelf.
There is also a book that seems language-agnostic (correct me if I'm wrong) called Data Structures & Algorithms, though I hear it's a bit dated, so you'll miss out on more recent structures.
Don't forget the internet has a plethora of information available to you. However, books are usually better for these sorts of things. This is because internet resources tend to focus on one thing at a time. For example, you need to understand what Big-O notation is before you can understand what it means when we say a List has O(1) [constant time] removal.
A book will cover these things in the correct order, but an internet resource will focus on either Big-O notation or data structures, but often won't easily connect the two.
As for when to use which, you'll mostly make the connection by thinking about what you'll be doing with the data.
For example, you might want a vector (array) if you just need ordered elements, but if you need ordered elements and removal from any place (and can sacrifice random access), then a list would be more appropriate, due to its constant-time removal.
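To make that trade-off concrete, here's a minimal sketch of why removal from a doubly linked list is O(1) once you already hold a pointer to the node (an array would have to shift every later element down by one):

    #include <stdlib.h>

    struct node {
        int          value;
        struct node *prev;
        struct node *next;
    };

    /* O(1): just relink the neighbors; no shifting, no traversal.
     * (A real list would also update its head pointer when n is
     * the first node.) */
    static void list_remove(struct node *n)
    {
        if (n->prev) n->prev->next = n->next;
        if (n->next) n->next->prev = n->prev;
        free(n);
    }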
For a reasonable (though far from perfect) book on implementing commonly used algorithms in C, try Sedgewick's Algorithms in C. Note that, as for any technical subject, a paper book is likely to be far superior to any Web resources.
As to how to know when to use a specific algorithm, I'm afraid that is down to experience.
For an algorithms text, Cormen, Leiserson and Rivest's Introduction to Algorithms is a good start. The pseudocode implementations are easy to translate to C. Two web resources with many links to documentation about algorithms and sample implementations are:
Stony Brook Algorithm Repository
NIST Directory of Data Structures and Algorithms
Algorithms in C by Sedgewick is a great place to start the investigation. Once you are familiar with what algorithms are available and what the performance characteristics of each are, you'll be able to see where to use each of them.
This is my collection of mostly math-related algorithms:
List of algorithms
FXT (math related)
Numerical Methods
Numerical Recipes in C
"How do you know when and where to use a given algorithm just by looking at the problem?"
It's called "pattern matching", once you've seen and solved lots of problems you start to recognize common things and you can reuse your previous knowledge.
By the way, I would recommend a good book on algorithms in general before starting with algorithms in C, which are more difficult to implement and more error-prone than in a higher-level language; once you are very confident with the general procedures, you can start to tweak and optimize them in C.
Many good resources have already been named, so I won't repeat them here.
As for how you know which algorithm to use, and when?
You need to have a big enough tool box, which you will obtain by sitting down and slogging through a long list of basic (and then more esoteric) data structures and algorithms. You should try to get all the basics, but really only need a sample of the more specialized ones.
You need to understand what trade-offs are available to you (time, code complexity, memory, single versus multiple passes, in-place versus copy, stable versus unstable sorts, etc., ad nauseam), and how the algorithms you study do on each of these. Again, this is just a case of much studying. Big-O is a place to start, but it is not the be-all and end-all of this.
You need to get a feel for understanding what are the real limits you face when presented with a problem, and how to express these in terms of the algorithm trade offs mentioned above. This requires a degree of intuition, and is generally learned by practice over time.
It is worth implementing some things more than one way as you go along, to learn in your gut what works and what doesn't.
It is worth reading code written by folks more experienced than yourself, to see how they think.
Good luck.
The Wikipedia List of Algorithms is also a very handy reference.
And, if you want to get deeper -- The Art of Computer Programming (wikipedia ref).
Preferably after the Robert Sedgewick book already referred to in multiple answers.
I read Pointers on C by Kenneth Reek recently. I thought I was pretty well versed in C, but this book gave me a few epiphanies, despite being aimed at beginners. The code examples are things of beauty (but not the fastest code on an x86-like CPU). It provides good implementations of many of the most common algorithms and data structures in use, with excellent explanations of why they are implemented as they are (and sometimes code or suggestions for alternative implementations).
On the same page as your question: for patterns for creating reusable code in C (that is what we all want, isn't it?), see C Interfaces and Implementations: Techniques for Creating Reusable Software by David R. Hanson. It has been a few years since I read it, and I don't have a copy to verify what I recall, but if I remember correctly it deals with how to create good C APIs for data structures and algorithms, as well as giving example implementations of some of the most common algorithms.
Off topic: as I have mostly written throw-away programs in C for private use, this one helped me get rid of some bad coding habits, as well as being an excellent C reference: C: A Reference Manual. Reminds me that I ought to buy that one.
One needs experience to know which set of algorithms to use for a particular problem. Defining a goal will help: speed, memory, robustness, solution quality, and so on are all factors in determining which algorithms to use. We could devise different solutions to the same problem given different sets of factors and scenarios.
The Algorithm Design Manual is worth a look.
An easy way to learn algorithms is to use the Wikipedia pages dedicated to the "classical" algorithms, such as searching and sorting. The construction of algorithms rests on the ability to use different data structures, such as linked lists and trees. So first try to implement different data structures, like a simple linked list or a binary tree, and then try to use them in algorithms related to real-life problems.
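As a starting point for that exercise, here's a minimal binary search tree insert in C (a sketch only; error handling and freeing are left out for brevity):

    #include <stdlib.h>

    struct tnode {
        int           key;
        struct tnode *left, *right;
    };

    /* Insert key into the BST rooted at root; duplicates are ignored.
     * Returns the (possibly new) root. */
    static struct tnode *bst_insert(struct tnode *root, int key)
    {
        if (root == NULL) {
            struct tnode *n = malloc(sizeof *n); /* unchecked for brevity */
            n->key = key;
            n->left = n->right = NULL;
            return n;
        }
        if (key < root->key)
            root->left = bst_insert(root->left, key);
        else if (key > root->key)
            root->right = bst_insert(root->right, key);
        return root;
    }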