How to pick a language for Artificial Intelligence programming? [closed]

What is the best programming language for artificial intelligence purposes?
Note that with the suggested language I must be able to employ any AI technique (or at least most of them).

All the cool bearded gurus in what's left of AI research use Lisp :)
There are two big camps: Common Lisp and Scheme. They have different syntax, etc. Lots of good stuff written for both.
Java is a very popular all-purpose language, but a lot of the interesting stuff in AI / functional programming, such as passing closures around as first-class objects, is clumsy to do in Java.
My personal preference would be to stay away from Windowsy languages like C# and F#. Cool people develop under Unix. Or Linux if they're cool but poor.
Some cool but weird people program in Haskell. A reasonably modern FP language with good performance. I tried it once, it made my brain hurt; but you might be smarter than I am.
UPDATE: Answers to Steve's questions.
I wouldn't be the one paying for a Unix variant; that's what corporations and research institutes do. The idea is, you want to be doing AI research for an outfit that sinks millions into their hardware and doesn't balk at paying a few thousand for an operating system. That's the kind of outfit likely to have good food in the cafeteria and/or pay well for doing fun work. But I'm certainly not knocking Linux.
F# may be cool but I see a whole raft of issues getting it to run on Linux or any other Unix (that's what I meant by "windowsy"), and I don't want to work under Windows (that's what I meant by "personal preference").
To elaborate on the "windowsy" theme: You mention that F# is an OCaml variant. From my own admittedly brief research, it seems that F# is missing functors, OCaml-style objects, polymorphic variants and the camlp4 preprocessor. A functional language without functors? Really? If one were disposed to not like Microsoft, as I admittedly am, one could conclude that they had gone ahead and crowbarred a perfectly good functional language, OCaml, into something they could get to run on their CLR so they could claim to "have" a functional language. Finally, because I don't suspect but know that Microsoft always prioritizes market dominance over product quality, I don't plan to touch F#. But this is my personal preference, and clearly identified as such, while we're really more concerned with making a good recommendation for mary.ja45.
I have better reasons to recommend Lisp over F# and even OCaml and Haskell. These are mostly based on the historic preponderance of Lisp over any other language in the AI field.
The bulk of AI literature is based on programs written in Lisp or Prolog. If nothing else, good knowledge of Lisp would allow a student to understand the sample programs. My personal favorite AI megaproject, Cyc, has runtimes in your choice of Common Lisp or C.
In the TIOBE index of programming languages (as seen and used in industry), Lisp takes 15th place while Haskell takes 43rd, and F# and OCaml place below 50th. Presence on the market correlates with employment opportunities, naturally.
That said, it's quite possible that a number of the younger "AI interesting" languages are poised to skyrocket. If some major research institute published some groundbreaking, defining-the-field research in, say, Scala, you'd see Scala's popularity advance sharply in the research community and, with some lag, in industry.
I (obviously) can't comment on F#'s other qualities but you're as welcome to make recommendations as I was.

Python seems to be used a lot in the general scientific community. It has a lot of libraries available and it's easy to learn.

I'll throw Scala into the pot.
it's usable for functional programming
it can be made as fast as Java
it's a modern language with lots of nice aspects
Java seems to be fairly popular in AI too, so you can use all those Java libraries from Scala
I've solved all exercises from a basic AI course in Scala. It worked really well.

If by "all of AI" you also mean machine learning, which I guess, Matlab, R and Python+Scipy should definately be mentioned.

I personally use Clojure for AI programming, and have found it to be a great all-rounder AI language.
Reasons:
It's a Lisp, and Lisps have historically been very strong in the AI field
It's a homoiconic language with powerful macros, so great for code generation and genetic programming. This is a surprisingly useful property for AI programming (and possibly explains some of the success of Lisp in general in this space)
It runs on the JVM and can easily access all the Java libraries for number crunching (Weka, Colt, etc.).
It's good for rapid interactive development - it's very dynamic and you can do pretty much everything interactively in a running Clojure REPL. No need for recompiling etc.

It probably matters whether the programming environment is academic or not, but for most non-academic AI application development I would recommend sticking with a mainstream language like Java or C++. One needs to be able to interface readily with other COTS or open-source software packages, and this can sometimes be difficult or impossible in more "exotic" languages. For academic work this may be a less critical issue.
Additionally, performance can be important for many applications, and mainstream languages generally have the most heavily-optimized compilers, e.g., C++ or Java.
It is true that functional programming languages like LISP, Scheme, etc. have specialized features that may make it easier to implement particular AI methods, but I do not believe this to be true for AI-related programming as a whole, e.g., quantitative machine learning methods usually don't require a functional language. If you need access to both functional constructs and general software packages, there are some tools for LISP to help with this, and the recently-developed Clojure is a LISP variant that runs on the JVM and can access Java libraries. Also, Groovy is another JVM-based language that includes support for closures.
Lastly, some programmers like paradigm flexibility and/or fast prototyping for AI projects. Ruby and Python both see some AI-related usage for this reason as multi-paradigm languages that can also be used for scripting.
Like most things in programming, the best answer for which language to use in AI development will ultimately depend on the needs of your projects.

It really depends on what kind of problem you are looking at, and how "deep" you want to go into AI stuff. If you want to learn from the basics and just implement theoretical AI stuff, go with a higher-level language proven in AI, such as Lisp (functional) or Prolog (logic). If you know what problem set you are dealing with and want efficiency, go with something like Java or C++ and use a toolkit to do the work.
Since you mention machine learning, look into the Weka toolkit in Java for some of this.

Pick a programming language for AI the same way you pick a language for any other project:
What is the problem you are trying to solve?
Is there good support available for the language?
What are the customers' requirements?
I would recommend Prolog as a very good programming language used to implement AI systems.

There is no "best" language. Each one has its merits. When I studied AI, mostly we worked with lisp and prolog, but I've been most productive in AI with Java/C# and F# has a lot to offer.

How about a framework written in Java, supporting "High Level Logic" and agent-style communication?
http://highlevellogic.blogspot.com/2010/11/when-will-we-have-artificial.html

It also depends on the size of your dataset. For web-scale datasets you may want to use Map-Reduce and that implies Hadoop. Hadoop is in Java -- but you could use any language (Python, etc.) for your Map-Reduce functions.
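To make that concrete, here's a rough sketch of a word-count mapper in C for Hadoop Streaming; the file name and build line are just illustrative. Streaming feeds your program raw input on stdin and expects tab-separated key/value pairs on stdout, with Hadoop handling the shuffle/sort between map and reduce:

    #include <stdio.h>

    /* mapper.c -- a Hadoop Streaming mapper in C (hypothetical file name).
       Emit a count of 1 for every whitespace-separated token. */
    int main(void) {
        char word[256];
        while (scanf("%255s", word) == 1)
            printf("%s\t1\n", word);
        return 0;
    }

Compile with something like gcc -o mapper mapper.c and hand the binary to the streaming jar via -mapper, together with a reducer that sums the counts per word.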

There is also a Java framework called Weka, developed by the University of Waikato. I don't know whether it answers your question, but it may help.
Quoting Wikipedia: "Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization" and more.

Related

AI library framework in Ada

I'm looking for an Ada framework for AI. I think Ada would be perfect for implementing temporal and stochastic paradigms due to its tasking and real-time mechanisms, but I did not find anyone who has tried to build such libraries. Actually, I did not find strong implementations in other languages either. For C# I found http://www.c-sharpcorner.com/1/56/, and for C++ I found http://mind.sourceforge.net/cpp.html but neither got much popularity. Maybe Java has good AI libraries too, but I do not know. So, do you know of an Ada implementation? Would it be useful for anyone else? If you know libraries in other languages, it would be useful to know of them and compare the implementation models (in Java, for example). Thanks.
Here's a few resources:
Book, rather old, though (1989): Artificial Intelligence With Ada
Looks like some kind of university student dissertation: MUTANTS: A generic genetic algorithm toolkit for Ada 95
Dmitry Kazakov's AI stuff, mostly fuzzy logic. (Dmitry writes really nice software.)
I once had a school AI project that used the CLIPS AI builder library.
Since I avoid coding in C where I don't have to, I made an Ada binding to it, which I believe is licensed without restriction. If you want it, have at it.
I used it to build an expert system capable of playing a user's opening moves in Empire. All the code is either in Ada, or Clips' expert system specification language.
Here's a potentially useful Java library. I haven't heard of any Ada libraries. Ada is a great language, though.
Here's some genetic stuff.

Would a novice learning C and Scheme simultaneously be considered bad practice? [closed]

I've been on and off with the C language for the past year(?), until two months ago when I decided to take my learning a bit more seriously. In some areas of the language I feel comfortable, but I know that by anyone's terms I'm still considered an amateur and have much, much more to learn.
Recently, I've heard nothing but good things about how working in different paradigms helps you gain perspective, so I figured maybe learning another language would be nothing but beneficial to the areas I'm weak in with the C language, and possibly to my grasp of programming concepts in general.
SICP is considered one of the most influential books every programmer should read, according to Stack Overflow and plenty of Amazon reviews, so naturally I chose it; I just recently purchased the hardback. I'm excited to learn, in hopes of coming out with some much-needed experience, but my only concern is whether it would be a problem for someone at as early a stage as I am to attempt learning two languages with different paradigms at once. I'm hoping learning Scheme and the concepts from this book will help me to think differently and more abstractly with C rather than confusing me.
Any insight would be great - whether it be to continue with these two languages, perhaps choose another language to aid my C or drop the second language for now. I just need insight from a seasoned person on the matter.
Learning both will give you a good appreciation for the strengths and weaknesses of both languages, as well as two very different problem-solving approaches.
Have fun!
I think you'll be okay. It's similar to learning maths and chemistry at the same time, and most people manage just fine. (Except for people who don't understand maths and chemistry, and enroll in liberal arts ;) )
If you are new to programming, I would urge you to start with HTDP. Yes, it is scheme based, and focuses on recursion, but its goal is to give you a framework for approaching problems that is generally applicable. It may seem boring at first, telling you stuff you think you already know, but don't skim. The disciplined approach they take to approaching problems translates easily away from scheme and recursion, and is a useful tool in general.
There is no reason not to learn both languages at the same time. They are sufficiently different that you are unlikely to get confused. If you have time to learn only one, C is probably more generally useful, but they are both (by modern standards) very simple languages, so learning both should not be a problem.
Most academic environments (university, specifically) expect you to handle multiple new languages at the same time. (And there's hardly an earlier stage than "still in school.") Each subject is going to have its own preferred language in terms of features that benefit that subject, and each teacher is going to have their own preferred language and, well, they're the teacher so you just have to deal with it :)
As long as you can keep them separate, it's not really the language itself that's the important part. Focus instead on what that language does and what you can do with it.
Back at my school, it was easy to tell the freshmen from the seniors. The former would talk about what languages they knew; the latter would talk about what designs and abstract concepts they had used.
Remember, the language is just a tool. Development should be more language-agnostic, focusing more on the job at hand and just using the right tool for that job.
I disagree with some of the other answers. This is not like studying two different subjects simultaneously: you generate the same output (a useful computer program) in C or in Scheme, but you go about it in very different ways. Universities may have students taking classes using different languages at the same time, but those curricula are, in theory, curated by a thoughtful department that is trying to avoid confusing students.
While it is certainly possible to study both C and Scheme at the same time, they may turn out to not be complementary for you. I would recommend proceeding as you wish, but as soon as you hit a rough patch you may want to consider focusing on one at a time. Following HtDP is a great idea if you are new to programming. If you are comfortable programming in general and want to learn C -- a good goal to have! -- then you can focus on how you write programs in C. The key is that you first want to learn how to write programs, then you can focus on learning specific languages.
Learning a programming language is no different than learning any other language. If you can handle learning Spanish and French at the same time, you can handle C and Scheme.
Another reason to be familiar with both C and Scheme is the Foreign Function Interfaces (FFI) provided by almost all Scheme implementations. You can quickly prototype a product in Scheme (or some other Lisp) and then you may find that you need to optimize some portion of the code for speed. You can re-write that part in C and invoke your fast C function from Scheme using the FFI. Or you may need to interface some library (GUI, database, etc.) with Scheme. Your C expertise coupled with the FFI will help you here.
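To give a flavour of the C side of that workflow, here's a minimal sketch; the function and file names are made up, and the exact binding incantation depends on which Scheme's FFI you use:

    /* dot.c -- a hypothetical inner loop rewritten in C for speed.
       Build a shared library with, e.g.: gcc -shared -fPIC -o libdot.so dot.c
       A Scheme FFI can then load libdot.so and expose "dot" as a procedure. */
    double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];   /* the hot loop worth hand-tuning */
        return sum;
    }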
Learning C has made me a better Scheme programmer and vice versa. After spending years with Scheme and Common Lisp, I've spent a year programming almost exclusively in C. When I go back to Scheme, it is much easier to express myself because I know the kinds of things the machine is good at; C has helped me develop a good sense for algorithms. Knowing Scheme before going into C allowed me to develop certain nonstandard idioms while learning the standard ones. I think that since the two are completely different ways of doing things, and they both have their advantages, they are the two best languages to master.

Artificial Intelligence Project - What language should I go for? [closed]

I am a computer science student and I am going to work on an artificial intelligence project which will compose a musical tune according to genre and mood inputs. Are the algorithms to be used for this project likely to be very resource-consuming? Would it make any difference (in terms of speed) if I choose to go with Java rather than C++? (Note: I know only these two languages and I am more comfortable with Java than C++.)
NB: Sorry for my poor English. If someone can, please clean up this post wherever necessary. Thanks.
Go with Java since you are more comfortable with it. That will allow you to concentrate on solving the problem, not the programming. Maybe C++ would end with a faster program, maybe not, but getting there will be slower and you don't categorically state that the program must be blazingly fast.
The resource consumption is way more influenced by the algorithmic approach than the language chosen. If you are comfortable with Java, program your application in that language - even though a C++ implementation might be 10% faster.
That being said, you might be interested in artificial intelligence APIs for Java.
In my mind, the language mostly associated with AI is Lisp.
See the answers to Why is Lisp used for AI? - top voted mentions this was the case in the 60s and 70s, but these days dynamic languages are used (ruby, python and such).
It looks to me like you're at the proof-of-concept stage of your project. I'd use whatever language you're most comfortable with. Well-written Java code will run a lot faster than poorly written C.
I would use Common Lisp for a project like this. If you don't know Lisp, I would learn it for this type of project. It would be a great learning experience and since you are a CS student, it will only help you. Lisp is a language that can be a real eye opener.
I did a similar AI project a couple of years ago. I don't know what solution you will be implementing, but AI programs can generally be resource-consuming and may take a long time to run; on the other hand, you'll need a language you're familiar with to get it done in time.
Therefore, my advice is that if you feel you know C++ (or C), go with one of them. If you don't know them, then consider carefully the time you will need to invest in learning a new language before choosing.
If you're starting from scratch, use whatever you know best. If you want to use established libraries to speed up development, you might want to investigate that first - but Java is certain to have some.
In your shoes, I'd pick Java for sure.
I'd go with Clojure for the following reasons:
It's a Lisp, and Lisps are great languages for AI development (partly historical, but also for some real concrete reasons - see this thread and this thread)
Clojure runs on the JVM and has great Java interop, so you can exploit all the great Java AI libraries (e.g. Weka) plus you already have some experience of the Java environment
JVMs have excellent optimizing JIT compilers nowadays; for all practical purposes you will get performance as fast as C/C++ for this kind of application.
My advice is to design everything you need first: every ADT, every algorithm class, every hierarchy, everything. This kind of project can be really hard to design in the C/C++ family of languages; you might choose a language with a less strict typing philosophy. So I encourage you to use a language designed for this kind of problem and better suited to your application, such as the functional paradigm (e.g. Lisp) or the logical paradigm (e.g. Prolog).
My 3rd year dissertation project was an implementation of heuristics for cellular network radio frequency allocation. I chose Java over C++ because it allowed me to visualize the results much more easily than if I'd used C++. I don't believe the performance would have been significantly different in C++; the complexity of your algos is probably going to be the biggest factor.

Should .NET developers *really* be spending time learning C for low-level exposure? [closed]

When Joel Spolsky and Jeff Atwood began the disagreement in their podcast over whether programmers should learn C, regardless of their industry and platform of delivery, it sparked quite an explosive debate within the developer community that probably still rages amongst certain groups today. I have been reading a number of passages from a number of programmer bloggers with their takes on the matter. The arguments from both sides certainly carry weight, but what I did not find is a perspective uniquely angled from the standpoint of developers focused on just the .NET Framework. Practically all of them were commenting from a general programmer standpoint.
What am I trying to get at? Recall Jeff Atwood's opinion that developers working at such high levels would spend most of their time learning the business/domain, on top of whatever is needed to learn the technologies to achieve those domain requirements. In my working experience that is a very accurate description of the work life of many. Now, supposing that .NET developers can spare the time for "extracurricular" learning, should that be C?
For the record, I learnt C back in school myself, and I can absolutely understand and appreciate what the proponents are arguing for. But, when thinking things through, I personally feel .NET developers should not dive straight into C. Because the thing I wish more developers would take some time to learn is MSIL and the CLR.
Maybe I am stuck with an unusual bunch of colleagues, I don't know, but it seems to me many people do not keep a conscious awareness that their C# or VB code compiles to IL first before the JIT comes in and makes it raw machine code. Most do not know IL, and have no interest in how exactly the CLR handles the code they write. Reading Jeffrey Richter's CLR via C# was quite a shocker for me in so many areas; I'm glad I read it despite colleagues dismissing it as "too low level". I am no expert in IL, but with knowledge of the basics I found myself following his text more easily, as I was already familiar with the stack behaviour of IL. I find myself disassembling assemblies to have a look at how the IL turns out when I write certain code.
I learn the CLR and MSIL because I know that is the direct layer below me, the layer that allows me to carry out my own layer of work. C is actually further down. Closer to our "reality" are the CLR and MSIL. That is why I would recommend others to have a go at those, because I do not see enough folks delving into that layer. Or is your team already all conversant with MSIL?
Of course you should. The greatest way to become overly specialized and single-minded (and, correspondingly, have limited marketable skills) is to only work with a single type of language and eschew all others as "not related to your current task."
Every programmer should have some experience with a modern JIT'd OO language (C#/Java), a lower-level simpler language (C, FORTRAN, etc), a very high level interpreted language (Python, Ruby, etc), and a functional language (Scheme, Lisp, Haskell, etc). Even if you don't use all of them on a day-to-day basis, the broadening of your thought process that such knowledge grants is quite useful.
I already know C, and that helped me during the 1.1 days when there were a lot of things not yet in the .NET base libraries and I had to P/Invoke something from the Platform SDK.
My take is that we should always allocate time for learning something we don't know yet. To answer your question, I don't think it is essential for you to learn C, but if you have some time to spare, C is a good language to learn and is just as valid as any other language out there.
True, C is further down the chain. Knowing MSIL can help devs understand how to optimise their apps better. As for learning C or MSIL, why not both? :)
.NET developers should learn about the CLR. But they should also learn C. I don't see how anybody can really understand how the CLR works without some low-level understanding of what happens on the bare metal.
Spending time learning about higher-level concepts is certainly beneficial, but if you concentrate too much on the high-level at the expense of the low-level, you risk becoming one of those "architect" people who can draw boxes and lines on whiteboards but who are unable to write any actual code.
What you learn by learning C will be useful for the remainder of your career. What you learn about the CLR will become obsolete as Microsoft changes their platform.
My take is that learning some compiled language and assembly is a must. Without that, you will not get the versatility required to switch between languages and stacks.
To be more specific -- I think that any good/great programmer must know these things by direct experience:
What is the difference between a register and a variable?
What is DMA?
How is a pixel put on the screen (at low level)?
What are interrupts?
...
Knowing these things is the difference between working with a system you understand and a system that, for all you know, works by magic. :)
To address some comments
You end up having two different kinds of developers:
people that can do one thing in 10 ways in one or two languages
people that can do one thing in one or two ways in 10 different languages
I strongly think that the second group are the better developers overall.
I think of it like this:
Programmers should probably be actually working in the highest-level language appropriate. What's appropriate depends on your scenario. A device driver or embedded system is in a different class from a CRUD desktop app or web page.
You want your programmers to have as much practice as possible in the language in which they are working.
Since most programmers end up working on generic desktop and web apps, you want programming students to move into the higher level languages as soon as possible during school.
However, the higher-level languages obfuscate a few basic programming problems, like pointers. If we apply our principle of using what's appropriate to students as well, those higher level languages may not be appropriate for first year students. That throws out Java, .Net, Python, and many others.
So students should use C (or better yet: C++ since it's "higher-level" and covers most of the same concepts) for the first year or two of school to cover basic concepts, but quickly move up to a higher-level language to enable more difficult programs earlier.
To be sufficiently advanced in writing C#, you need to understand the concepts in C, even if you don't learn the language proper.
More generally though, if you're serious about any skill, you should know what goes on at least one abstraction level below your primary working level.
Coding in jQuery should be paired with an understanding of JavaScript
Designing circuits necessitates knowing physics
Any good basketball player will learn about muscles, bones, and nutrition
A violinist will learn about the interplay of rosin, friction, bow hairs, string, and wood dryness
I like to learn a new language every year. Not necessarily to master it, but to force my brain to think in different ways.
I feel C is a good language for learning about low-level concepts without the pain of coding in assembly.
However I feel that learning lessons from languages like Haskell, python, and even arguably regex (not exactly a language, but you catch my drift?) is as important as the lessons to be gleaned from C.
So I say, learn about the CLR and MSIL on the job if that's your area, and in your spare time, try picking up a different language every so often. If that happens to be C this year, good for you, and enjoy playing with pointers ;)
I don't see any reason why they should. Languages like Java and C# were designed so that you needn't worry about the low-level details. That's the same as asking whether a WinForms developer should spend time learning the Win32 API because that's what's happening underneath.
While it doesn't hurt to learn it, you'd probably gain more from spending more time learning the languages and platforms you are familiar with, unless there's a good need to learn the low-level technical details.
It can't be a bad idea to learn MSIL, though in a way it's just another .NET language with nastier syntax. It is another layer down, though, and I think people should have at least some vague understanding of all the layers.
C, being somewhat like assembly language with nicer syntax, is a nice way to get an idea of what's happening on quite a low level (although some things are still hidden from you).
And from the other end, I think everyone should know a bit of something like Haskell or Lisp to get an idea of higher-level stuff (and see some of the ideas being introduced in C# 3 in a cleaner form)
If you consider yourself a programmer, I would say yes, learn C.
Many people who write code do not consider themselves programmers. I write .NET apps maybe 3 hours a day at work, but I don't label myself a "programmer." I do a lot of things that have nothing to do with programming.
If you spend your whole day programming or thinking about programming, and you are going to make your entire career revolve arround programming, then you better be sure you know your stuff. Learning C would probably help build a base of knowledge that would be helpful if you're going to go very deep in programming skills.
With everything, there are trade-offs. The more languages you learn, and the more time you spend dedicated to technology, the less time you have for learning other skills. For example, would it be better to learn C, or read books on project management? It depends on your goals. You want to be the best programmer EVAR? Learn C. Spend hours and hours writing code and dedicating yourself to the craft. You ever want to manage somebody else instead of coding all day? Use the time you would put into programming and find ways to improve your soft skills.
Should .net developers be learning C? I would say "not necessarily," but we should always be dabbling in some language outside of our professional bailiwick because every language brings with it a new way of thinking about problems. During my professional career as a .net (and before that, VB 2-6) developer, I've written small projects in Pascal, LISP, C, C++, PHP, JavaScript, Ruby, and Python and am currently dabbling in Lua and Perl.
Other than C++, I don't list any of them on my resume because I'm not looking to be a professional in any of them. Instead, I bring back interesting ideas from each of them to use in my .net-based work.
C is interesting in that it really gets you close to the OS, but that's hardly the only level you need to know about to be a good programmer.
The CLR is a virtual machine so if that's all you learn, then you only know what's happening at a virtual level.
Learning C will teach you more about the physical machine as far as memory usage goes, which as you mention is what the CLR uses underneath. Learning how the CLR works isn't going to give you as much insight into, say, garbage collection, as learning C. With C, you really appreciate what's involved in memory management.
Learning CIL, on the other hand, tells you a bit more about execution in .NET than you would learn from C. Still, how IL maps to machine language will remain a mystery for the most part, so knowing some of the high-level opcodes, like the ones for casting types, isn't that helpful in terms of understanding what's really going on, as they're opaque for the most part. Learning C and pointers, however, will enlighten you on some of those aspects.
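To make the memory-management point concrete, here's a minimal C sketch of the bookkeeping a CLR programmer never sees; every allocation has to be released by hand, which is exactly the work the garbage collector automates:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Explicit allocation: the programmer, not a GC, owns this memory. */
        char *copy = malloc(32);
        if (copy == NULL)
            return 1;          /* allocation can fail; a GC'd runtime would throw */
        strcpy(copy, "hello, CLR");
        printf("%s\n", copy);
        free(copy);            /* forget this and the memory leaks */
        return 0;
    }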
Is the issue learning C or MSIL, or is it more fundamental? I'd say that in general, more developers could stand to learn more about how computers, physical or virtual, work. A person can get to be a fairly competent programmer by only understanding a language and API in a box. To take the profession to the next level, I feel that developers really need to understand the whole stack. Not necessarily in detail, but in sufficient generality to help solve problems.
A lot of these skills are being talked about here can be acquired by learning more about compilers and language design. You probably need to learn C to do this (whoops, sneaky), but compiler writing is a great context to learn C in. Steve Yegge talks about this on his blog, and I largely agree with him on this point. My compiler writing course in university was one of the most eye opening courses I've ever taken, and I really wish it had been a 200 level course, instead of a 400 level one.
I posted this on another thread, but it applies here too:
I believe you need a good foundation, but devote most of your time to learning what you will be using.
Learn enough assembler to add two numbers together and display the result on a console. You'll have a much better understanding of what is actually going on with the computer, and it will make sense why we use binary/hex. (This can be done in a day, with debug from cmd.exe.)
Learn enough C to have to allocate some memory and use pointers. A simple linked list is sufficient, as sketched below. (This can be done in a day or two.)
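As a rough sketch of what that exercise looks like (just one way to write it, not the one true linked list):

    #include <stdio.h>
    #include <stdlib.h>

    /* A minimal singly linked list: just enough malloc, free,
       and pointer chasing to make the point. */
    struct node {
        int value;
        struct node *next;
    };

    /* Prepend a value, returning the new head of the list. */
    static struct node *push(struct node *head, int value) {
        struct node *n = malloc(sizeof *n);
        if (n == NULL) {
            perror("malloc");
            exit(EXIT_FAILURE);
        }
        n->value = value;
        n->next = head;
        return n;
    }

    int main(void) {
        struct node *head = NULL;
        for (int i = 1; i <= 3; i++)
            head = push(head, i);
        for (struct node *p = head; p != NULL; p = p->next)
            printf("%d\n", p->value);   /* prints 3, then 2, then 1 */
        while (head != NULL) {          /* free every node we allocated */
            struct node *next = head->next;
            free(head);
            head = next;
        }
        return 0;
    }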
Spend more time learning a language that you are going to use. I would let your interests steer you into which language (C#, Java, Ruby, Python, etc.).

An amnesia patient's "first" functional language? (I really like Clojure...)

I was recently diagnosed with a cascading dissociative disorder that causes retrograde amnesia, in addition to an existing case of possible anterograde amnesia. Many people have tried to remind me of how great a programmer I was before. Right now I get the concepts and the idioms, but I want to teach myself, whether I knew it before or not. I think I can overcome the amnesia problems in part with it.
My question for you, Stack Overflow, is this: I recently found Clojure and it... it feels good to use, even in just copying down the examples from whatever webpage I can find. My goals in learning a functional programming language are to create a simple web server, an IRC AI bot of some variety, and a CouchDB-like database system, all of them lightweight and built specifically for education. What flaws does Clojure have? Is there a better functional programming language to use right now for education and application?
I think Clojure is a very nice language. If I had to point to any defect, it would be that it's very new, and even though the language seems very mature and production-ready, the tools and frameworks around it aren't. So if you are going to make, for instance, a web app, don't expect to fire three commands and have a "Your first web app is running, now read this documentation to create your models" page in your browser.
There aren't that many libraries written in Clojure yet either, but that's not a huge problem if you consider that you can use almost anything written in Java.
Haskell currently has a large following and a growing base of libraries and applications. It's also used for education and research. I find it a very nice language to use.
Haskell, Erlang and Clojure are all good choices. I would personally recommend Clojure; you might be able to do some interesting database stuff with the Software Transactional Memory system that is part of Clojure.
You list CouchDB in your question, and it's written in Erlang, which is meant to be a pretty engrossing language once you get into it.
I have no personal experience with Clojure, but I really recommend F#. It's quite a powerful language in the style of OCaml. I really like it because its debugging tools and IDE are second to none, and you can take advantage of practically every library on the (huge) .NET platform.
