As I learn more about Computer Science, AI, and Neural Networks, I am continually amazed by the cool things a computer can do and learn. I've been fascinated by projects new and old, and I'm curious about the interesting projects/applications other SO users have run into.
The Numenta Platform for Intelligent Computing. They are implementing the type of neuron described in "On Intelligence" by Jeff Hawkins. For an idea of the significance: they are working on software neurons that can visually recognize objects in about 200 steps instead of the thousands upon thousands currently necessary.
Edit: Apparently version 1.6.1 of the SDK is available now. Exciting times for learning software!!
This isn't AI itself, but OpenCyc (and probably its commercial big brother, Cyc) could provide the "common sense" that AI applications need to really understand the world in which they exist.
For example, Cyc could provide enough general knowledge that it could begin to "read" and reason about encyclopedic content such as Wikipedia, or surf the "Semantic Web" acting as an agent to develop some domain-specific knowledge base.
Wikipedia:

Arthur L. Samuel (1901 – July 29, 1990) was a pioneer in the field of computer gaming and artificial intelligence. The Samuel Checkers-playing Program appears to be the world's first self-learning program...
Samuel designed various mechanisms by which his program could become better. In what he called rote learning, the program remembered every position it had already seen, along with the terminal value of the reward function. This technique effectively extended the search depth at each of these positions. Samuel's later programs reevaluated the reward function based on input professional games. He also had it play thousands of games against itself as another way of learning. With all of this work, Samuel's program reached a respectable amateur status, and was the first to play any board game at this high a level.
Samuel: Some Studies in Machine Learning Using the Game of Checkers (21-page PDF). Singularity is near! :)
One of my own favorites is Donald Michie's 1960 project MENACE, the Matchbox Educable Noughts and Crosses Engine. In this project Michie used a collection of matchboxes filled with colored beads that he "taught" to play Tic-Tac-Toe. The point was to demonstrate that machines could, in some sense, learn from their previous successes and failures.
More information, as well as a computer simulation of the experiment, is available here: http://www.adit.co.uk/html/menace_simulation.html
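For anyone curious how the matchboxes translate into an algorithm, here is a minimal sketch of a MENACE-style update rule in Python. It is an illustration rather than Michie's exact procedure: the initial bead counts and the reward scheme (three beads added after a win, one removed after a loss) are assumptions chosen for clarity.

import random
from collections import defaultdict

# One "matchbox" per board state: a bag of beads, one color per legal move.
boxes = defaultdict(dict)

def choose_move(state, legal_moves):
    box = boxes[state]
    for m in legal_moves:
        box.setdefault(m, 4)              # assumed: start each move with 4 beads
    moves, weights = zip(*box.items())
    return random.choices(moves, weights=weights)[0]

def reinforce(history, won):
    # history: list of (state, move) pairs the machine played this game
    for state, move in history:
        if won:
            boxes[state][move] += 3       # add beads: make the move likelier
        else:
            boxes[state][move] = max(1, boxes[state][move] - 1)

Over many games, beads accumulate on the moves that led to wins, so the "machine" drifts toward good play without any explicit model of Tic-Tac-Toe.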
http://alice.pandorabots.com/
- This bot is able to hold a fairly intelligent conversation with you.
http://www.triumphpc.com/johnlennon/
Recreates the personality and thoughts of John Lennon; you can have a chat with him on this site.
http://AngelCog.org is quite interesting. The project is based around the idea that to make a true AI, you must do it in three stages:
1) Try to process logic in general, and be able to describe anything.
2) Logically process code, and process "Stories" about the real world.
3) Logically process its own code, and talk to people.
The project is based around the idea that once a program is logically processing its own code, it is already an AI. Of course it also needs to be able to understand the "real world"; that's the other half.
As far as I'm aware, no one else has a project based on the assumption that to make a proper AI, the AI must understand the language in which it is written. So let's say an AI is written in C++. Then it must master C++ and be able to read, write, and alter C++ programs, especially itself!
It's still a "toy" right now, however, and is still in the first stage of development ("Try to process logic in general, and be able to describe anything"). But the developer is looking for help.
#include <string.h>

char passage[5000][20];            /* one line of text per row */

int punc_check(int line, int pos); /* defined elsewhere: returns 1 if passage[line][pos] ends a sentence */

int para_count(int p)
{
    int i = 0, n, para_sentence = 0, para_no = 7, para[10];

    for (n = 1; n <= para_no; n++)
    {
        while (passage[i][0] != '\n')
        {
            /* strlen needs the row itself, passage[i], not the pointer passage+i */
            if (punc_check(i, strlen(passage[i]) - 1) == 1)
            {
                para_sentence++;
            }
            i++;
        }
        i++;    /* skip the '\n' separator line, or every paragraph after the first reads as empty */
        para[n] = para_sentence;
        para_sentence = 0;
    }
    return para[p];
}
This is my function to count the number of sentences in each paragraph of a passage. It keeps counting sentences until it reaches a line containing only '\n'.
It stores the sentence count for each paragraph in the array para: the number of sentences in the nth paragraph is stored in para[n].
As originally written, it only counted the first paragraph (that value was returned successfully); from para[2] onward the values were all 0, even though para_sentence kept updating. The cause was that i was never advanced past the '\n' separator line, so every paragraph after the first looked empty; the extra i++ after the inner loop (marked above) fixes it.
my example text:
I am most blessed to born in this era of computing. When I was in primary school, I first came across computer through computer games called “Maple Story”,where I showed up as adventurer travelling through a virtual world and killing monsters in the game. I was fascinated by the colourful graphics and amazing adventures, wondering how this could be done which has opened up my curiosity in the computing world. I was totally amazed by it when I first used a computer, it was very novel to me at that time. As I play more, I started to encounter malfunctioning which pushed me to seek for solutions, this is how I learnt more about the functioning of computers. This attracted me to explore more in this field, I started to go beyond just playing games. Besides gaming I am also deeply attracted by the internet world, I truly experience “information at my fingertips”, like a magician with a magic wand. As I continued my exploration on computer technology, the foundation of my passion and skills on computer grew gradually, I got to understand how incredible computing technology was, making it as my interest. Not only that, I started considering computer and technology as my future study or even lifelong career. I promised myself that I will definitely pursue my career in the computing world.
I am deeply impressed by your University. Since its foundation a century ago, HK University has been the top Asian University, many leaders from all walks of life completed their higher education in this prestigious University. I am attracted to the computing course offered by the department of Economics due to its good mix of computer technologies and applications in economics. Many professors in the department are world famous scholars with significant contribution in the field of economics, as well as computing. I deeply desire to join your department and be part of the group.
I have pursued a lot of computing related activities in addition to an academic subject. I took up a part time job as a computer advisor in a graphics design company managed by my uncle. He does not have too much knowledge on computer so I help him to set up his Macbook, like installing different softwares. I also help him out with the problems he encountered related to computers, from minor problems like how to save file to cloud drive, to major problems like Internet failure. Despite just solving the issues, I also give recommendations on how to make his job easier and avoid the problems again. For instance, I suggested and helped him to update his router, including the setup of it. Other than these, I deliver him the latest technology development as well, explaining things like AI, and esports. Through this experience, I understood more about the application of technology and what exactly working environment is and developed my communicative skills.
I have also exposed myself to computer hardware as well . Two years ago, I acquired a desktop computer for myself, but instead of buying an off the shelf product, I bought parts and assembled the machine by myself. My ICT teacher was very helpful and guided me through the whole process of assembly step by step, I gained a lot of hardware knowledge that I can hardly learn at class, like how to look at the specification of CPU and motherboard. It was really a great experience for me to get in touch with and know more about hardwares.
I like computing very much and I truly want to develop my career in this field. Not only your university offers the most suitable course for me, but also the favorable learning environment. I believe that I shall enjoy studying the course very much in your university and open up my potentials.
Output before the fix:
n para[n]
1 10
2 0
3 0
4 0
5 0
I'm a bit confused about artificial intelligence.
I understand it as the capability of a machine to learn new things, or to do different things, without actually executing code already written by someone.
On SO I see many threads about A.I. in games, but IMO that is not A.I., because if it were, every piece of software, even a print command, would have to be called A.I. In games there is just code that is executed. I would call it pseudo-AI.
Am I wrong? Should this also be considered A.I.?
Wikipedia says this:
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" [1], where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
What you are describing is more specifically referred to as Machine Learning, which is indeed a sub-branch of AI. As you can see from the second sentence above, however, the "AI" found in games also fits perfectly well into this definition.
Of course, the actual line between what is AI and what is not is quite blurry. This is also due to the fact that everyone and his mother believes they know what "AI" means.
I suggest you grab yourself a more scientific book (say, the classic Russell & Norvig) to get a more thorough grasp of the different fields that sit under the huge roof of what we simply refer to as "AI".
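To make the textbook definition concrete, here is a minimal sketch of an "intelligent agent" loop in Python. It uses the two-square vacuum world from Russell & Norvig's opening chapters, but the class, percept, and action names here are invented for illustration; the definition only demands perceive, decide, act.

# A minimal "intelligent agent": perceive the environment,
# pick an action that furthers its goal, act, repeat.
class ReflexVacuumAgent:
    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "suck"                  # act to maximize success
        return "right" if location == "A" else "left"

world = {"A": True, "B": True}             # True = dirty
agent, location = ReflexVacuumAgent(), "A"
for _ in range(4):
    action = agent.act((location, world[location]))
    if action == "suck":
        world[location] = False
    else:
        location = "B" if action == "right" else "A"
print(world)                               # {'A': False, 'B': False}

Even this trivial reflex agent satisfies the agent definition above; the interesting research is in agents whose decision step involves learning or search.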
"Minsky and McCarthy, both considered founders of AI, say that artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human had to apply intelligence to accomplish the task."
A more modern definition is to turn this on its head:
Artificial intelligence is anything done by a program or a machine that if a human did the same activity, we would say the human did not need to apply intelligence to accomplish the task.
Intelligence is the ability to do the things that don't require reasoning. Things like understanding and generating language, sequencing your leg muscles as you walk across the floor, or enjoying a symphony. You don't stop to reason for any of those things; you understand, intuitively, how to interpret things in your visual field, language, and all other sensory input. And you can do the right thing without reasoning. You can easily prepare your entire breakfast without any reasoning. :-)
Doing things that "require thought" or reasoning, like playing chess or solving integrals, is something computers can already do.
This misunderstanding about what intelligence really is has cost us 60 years and a million man-years of banging our head against the wall.
Deep learning is currently the most popular expression of an alternative path to a "better kind of AI". Artificial Intuition is a special branch of deep learning tailored to understanding text.
The easiest way to know whether you are dealing with classical (futile) or modern AI is whether the system requires you to supply a model of the world (MOTW). Any MOTW means the AI is limited to operating in the domain specified by that model, and is therefore not a general intelligence. Also, anything with a MOTW is typically not designed to extend that model; this is a very difficult task.
Better to start from a Model of the Mind (MOTM) or a Model of Learning. These can be derived either from neuroscience (difficult) or from epistemology (much easier). A well-done MOTM can then learn anything it needs to know in order to solve problems in any domain.
The main problem for most is to find what's called "a domain-independent method for determining saliency". In other words, all intelligences, natural or artificial, have to spend most of their time answering the question "what matters".
Google my name and "AI" for more.
Frank and Kirt sum up the academic field of AI pretty well. Any difficulty there is defining AI reflects the more general problem of defining real intelligence. If AI has proved anything, it's that we have precious little idea what intelligence is, how amazing organisms are at solving problems, and how difficult it is to get machines to achieve the same results.
As for the use of the term AI in the video games industry, your confusion is justified. The prospect of intelligent characters in games is so compelling that the term long ago took on a life of its own as marketing jargon. However, AI is really just a poorly chosen name for the solving of problems that computers find hard and people find easy. And in that sense, there is plenty of genuine AI work going on in the games industry.
Take a look at AIGameDev.com for a taste of what is currently considered noteworthy in AI game development.
The most important aspect of AI, I believe, is 'curiosity': intelligence arises as a result of curiosity.
There is no precise definition of AI, because intelligence itself is relative and hard to define; many fields, ancient and modern, such as philosophy and neuroscience, serve as the foundations of AI. It depends on what your AI is expected to do.
Artificial Intelligence is the attempt to create intelligence from a computer program.
Regardless of whether it's a toy program or neuroscience, as long as a program is able to mimic human problem-solving skills, or even go beyond them, it is called Artificial Intelligence.
Of course, computer scientists' expectations of how capable a program (or machine) is at solving problems increase over time. Tic-tac-toe-playing programs were once considered intelligent, until chess programs were invented. Now we are attempting to mimic the human brain through neural networks.
A.I. for the layman nowadays is applied in most computer games. It is also used in many machines, like airplane autopilots, or NASA's Mars rover Curiosity (2012), which is able to detect terrain obstacles and drive around them.
Very tricky stuff, A.I. If you design a mind that replies to all the right questions with all the right answers, is it A.I., or just a talking encyclopedia? What if you can teach the A.I. by simply talking to it; do you then consider it an A.I. with a mind, or, again, just a program? Perhaps the question, and the answer, is whether someone someday makes a machine that looks human, acts human, and thinks it is human; and then whether others feel the same, that it is human, if they don't know it's not. And then, what if it passes that test? You see, it's not really about whether the machine is conscious or has a mind, as those questions will never be truly answered. It's all about whether it "seems" conscious: acts conscious, acts like it has a mind, given that that's as close as humanity will ever get to understanding that riddle. If a machine acts like it cares and does helpful things, that's all that really matters, not the rest of the unseen picture. We just have to get that far in the first place. By the way, check out Webo, a teachable A.I.
I need help choosing a project to work on for my master's graduation. The project must involve AI / machine learning or business intelligence, but if there is a suggestion outside these topics, that is OK too. Please help me.
One of the most rapidly growing areas in AI today is computer vision. There are many practical needs where the results of your master's thesis could be helpful. You could try researching something like emotion detection, eye tracking, etc.
An appropriate piece of work for an MS in CS at any good university could survey the current state of research in this field and compare the different approaches and algorithms. As a practical part, it is also a lot of fun when your program recognizes your mood properly :)
Netflix
If you want to work on non-trivial datasets (not Google-sized, but not trivial either, and with real applications), with an objective measure of success, why not work on the Netflix challenge (the first one)? You can get all the data for free, there are many papers about it, and there is a pretty good way to compare your results against other people's (since everyone used exactly the same dataset, and it was not so easy to "cheat", contrary to what happens quite often in the academic literature). While not trivial in size, you can work on it with only one computer (assuming it is recent enough), and depending on the type of algorithms you are using, you can implement them in a language other than C/C++, at least for prototyping (for example, I could get decent results doing things entirely in Python).
Bonus point: it passes the "family" test, i.e. it's easy to tell your parents what you are working on, which is always a pain in my experience :)
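For a flavor of what a pure-Python prototype on that kind of rating data can look like, here is a minimal sketch of matrix factorization trained by stochastic gradient descent, the family of techniques that did well in the Netflix Prize. The toy ratings, factor count, and hyperparameters are all made up for illustration; a serious attempt would vectorize this or drop to C for speed.

import random

# (user, movie, rating) triples; stand-ins for the real Netflix data.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0)]
n_users, n_movies, k = 3, 3, 2            # k latent factors per user/movie

random.seed(0)
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_movies)]

lr, reg = 0.05, 0.02                      # learning rate, L2 regularization
for epoch in range(200):
    for u, m, r in ratings:
        err = r - sum(P[u][f] * Q[m][f] for f in range(k))
        for f in range(k):                # SGD step on both factor vectors
            puf = P[u][f]
            P[u][f] += lr * (err * Q[m][f] - reg * puf)
            Q[m][f] += lr * (err * puf - reg * Q[m][f])

# Predicted rating for an unseen (user, movie) pair:
print(sum(P[2][f] * Q[0][f] for f in range(k)))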
Music-related tasks
A bit more original: something that is both cool and non-trivial, yet not too complicated in data handling, is anything around music, like music genre recognition (classical / electronic / jazz / etc.). You would need to know about signal processing as well, though; I would not advise it if you cannot get easy access to professors who know the topic.
I can use the same answer I used on a previous, similar question:
Russ Greiner has a great list of project topics for his machine learning course, so that's a great place to start.
Both GAs and ANNs are learners/classifiers. So I ask you the question, what is an interesting "thing" to learn? Maybe it's:
Detecting cancer
Predicting the outcome between two sports teams
Filtering spam (see the sketch after this list)
Detecting faces
Reading text (OCR)
Playing a game
The sky is the limit, really!
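To make one item from the list above concrete, here is a minimal naive Bayes spam filter sketch in Python. The toy corpus and the add-one smoothing are assumptions chosen for brevity; any of the other tasks (faces, OCR, game playing) would just swap in different features and labels.

from collections import Counter
import math

# Toy labeled corpus (1 = spam, 0 = ham); a real filter trains on thousands.
train = [("buy cheap pills now", 1), ("meeting at noon", 0),
         ("cheap pills cheap", 1), ("lunch meeting tomorrow", 0)]

word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts[0]) | set(word_counts[1])

def scores(text):
    # log P(class) + sum of log P(word | class), with add-one smoothing
    out = {}
    for c in (0, 1):
        total = sum(word_counts[c].values())
        s = math.log(class_counts[c] / len(train))
        for w in text.split():
            s += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        out[c] = s
    return out

print(scores("cheap pills"))   # the spam class (1) wins comfortably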
Since it has a business tie-in: given some input set, determine probable business fraud from the input (something the SEC seems challenged in doing). We now have several examples (Madoff and others). Or a system to estimate investment risk (there are lots of such systems, apparently, but were any of them accurate in the case of Lehman, for example?).
A starting point might be the Chen book Genetic Algorithms and Genetic Programming in Computational Finance.
Here's an AAAI writeup of an award to the National Association of Securities Dealers for a system that monitors NASDAQ insider trading.
Many great answers have been posted already, but I wanted to add my 2 cents. There is one hot topic into which big companies everywhere are pouring lots of resources, and which is still a very challenging topic with lots of potential: automated detection of fake news.
This is even more relevant nowadays, when most of us connect through social media and there's a huge crisis looming over us.
Fake news, content removal, source reliability... the problem is huge and very exciting. It is, as I said, challenging, because it can be approached from many perspectives (from analysing images to detect fakes using adversarial networks, to detecting fake written news from text content (NLP), to using graph theory to trace sources), and the possibilities for a research project are endless.
I suggest you read some general articles (e.g. this or this) or have a look at research articles from the last couple of years (a quick Google search will turn up a lot of related work).
I wish I had the opportunity to start a project on this topic. I think it's going to be of the utmost relevance in the next few years.
There are many papers about ranged-combat artificial intelligence, like Killzone's (see this paper) or Halo's. But I've not been able to find much about fighting-game AI, except for this work, which uses neural networks to learn how to fight, and that is not exactly what I'm looking for.
Occidental AI in games is heavily focused on FPSs, it seems! Does anyone know which techniques are used to implement a decent fighting AI? Hierarchical finite state machines? Decision trees? They could end up being pretty predictable.
In our research labs, we are using AI planning technology for games. AI planning is used by NASA to build semi-autonomous robots. Planning can produce less predictable behavior than state machines, but planning is a highly complex problem; that is, solving planning problems has huge computational complexity.
AI planning is an old but interesting field. For gaming in particular, people have only recently started using planning to run their engines. The expressiveness is still limited in current implementations, but in theory it is limited "only by our imagination".
Russell and Norvig devote four chapters to AI planning in their book on Artificial Intelligence. Other related terms you might be interested in are Markov Decision Processes and Bayesian Networks; these topics are also given good coverage in that book.
If you are looking for a ready-made engine you can start using easily, I suspect AI planning would be gross overkill. I don't know of any AI planning engine for games, but we are developing one. If you are interested in the long term, we can talk separately about it.
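To give a feel for what "planning" means here, below is a minimal STRIPS-style forward-search planner in Python. The action format (preconditions, add list, delete list) is the classical one; the tiny fighting-flavored domain is invented purely for illustration, not taken from any shipping engine.

from collections import deque

# STRIPS-style actions: (preconditions, facts added, facts deleted)
ACTIONS = {
    "approach": ({"far"},              {"close"},   {"far"}),
    "feint":    ({"close"},            {"opening"}, set()),
    "hit":      ({"close", "opening"}, {"damage"},  {"opening"}),
}

def plan(start, goal):
    # Breadth-first search over world states (sets of facts).
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"far"}, {"damage"}))   # ['approach', 'feint', 'hit']

Production planners for games (F.E.A.R.'s goal-oriented action planning is the well-known example) add action costs and heuristic search on top, but the core search over preconditions and effects is the same.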
You seem to know already the techniques for planning and executing. Another thing that you need to do is predict the opponent's next move and maximize the expected reward of your response. I wrote a blog article about this: http://www.masterbaboon.com/2009/05/my-ai-reads-your-mind-and-kicks-your-ass-part-2/ and http://www.masterbaboon.com/2009/09/my-ai-reads-your-mind-extensions-part-3/ . The game I consider is very simple, but I think the main ideas from Bayesian decision theory might be useful for your project.
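The core of that idea fits in a few lines: estimate a distribution over the opponent's next move from their history, then pick the response with the highest expected payoff. The move names and payoff table below are invented for illustration, not taken from the linked articles.

from collections import Counter

MOVES = ["punch", "kick", "throw"]
# payoff[mine][theirs]: hypothetical reward for my response vs. their move
payoff = {
    "block_high": {"punch": 2,  "kick": -1, "throw": -2},
    "block_low":  {"punch": -1, "kick": 2,  "throw": -2},
    "jump":       {"punch": 0,  "kick": 1,  "throw": 3},
}

def best_response(opponent_history):
    counts = Counter(opponent_history)
    total = sum(counts.values()) or 1
    prob = {m: counts[m] / total for m in MOVES}   # P(their next move)
    expected = {mine: sum(prob[theirs] * payoff[mine][theirs] for theirs in MOVES)
                for mine in payoff}
    return max(expected, key=expected.get)         # maximize expected reward

print(best_response(["throw", "throw", "punch"]))  # 'jump'

A full Bayesian treatment would put a prior over the opponent's strategy and update it, but even this frequency-counting version already adapts to a predictable player.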
I have reverse engineered the routines related to the AI subsystem within the Street Fighter II series of games. It does not incorporate any of the techniques mentioned above. It is entirely reactive and involves no planning, learning, or goals. Interestingly, there is no "technique weight" system of the kind you mention, either. They don't use global weights for decisions to decide the frequency of attack versus block, for example. When taking apart the routines related to how "difficulty" is made to seem to increase, I did expect to find something like that. Alas, it comes down to a number of smaller decisions that can affect those ratios in an emergent way.
Another route to consider is the so-called Ghost AI, as described here & here. As the name suggests, you basically extract rules from actual game play; the first paper does it offline, and the second extends the methodology to online, real-time learning.
Check out the author's webpage as well; there are a number of other interesting papers on fighting games there:
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/index-R.html
It's old, but there are some examples there.
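Here is a minimal sketch of the "ghost" idea in Python: log (situation, action) pairs from real play, then have the ghost replay the most frequent action for each situation it recognizes. The situation encoding and move names are invented for illustration; the papers above do the rule extraction far more carefully.

from collections import defaultdict, Counter

# Observed (situation, action) pairs harvested from a human's matches.
log = [(("close", "low_hp"), "uppercut"),
       (("close", "low_hp"), "uppercut"),
       (("close", "high_hp"), "throw"),
       (("far", "low_hp"), "fireball")]

model = defaultdict(Counter)
for situation, action in log:
    model[situation][action] += 1          # rule extraction = frequency counting

def ghost_act(situation, default="block"):
    if model[situation]:
        return model[situation].most_common(1)[0][0]
    return default                          # unseen situation: fall back

print(ghost_act(("close", "low_hp")))       # 'uppercut', mimicking the human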
Has anyone worked with the programming language Church? Can anyone recommend practical applications? I just discovered it, and while it sounds like it addresses some long-standing problems in AI and machine learning, I'm skeptical. I had never heard of it, and was surprised to find it has actually been around for a few years, having been announced in the paper Church: a language for generative models.
I'm not sure what to say about the matter of practical applications. Does modeling cognitive abilities with generative models constitute a "practical application" in your mind?
The key importance of Church (at least right now) is that it allows those of us working with probabilistic inference solutions to AI problems a simpler way to model. It's essentially a subset of Lisp.
I disagree with Chris S that it is at all a toy language. While some of these inference problems can be replicated in other languages (I've built several in Matlab), they generally aren't very reusable, and you really have to love working four and five for-loops deep (I hate it).
Instead of tackling the problem that way, Church uses the recursive advantages of the lambda calculus, and also allows for something called memoization, which is really useful for generative models, since your generative model is often not the same from one trial to the next, yet for testing you really need repeatability.
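Church's mem is easy to motivate with a sketch. An ordinary random function returns a fresh draw on every call; a memoized random function samples once per argument and then sticks to that value, which is what you want when, say, a character's eye color is random across worlds but must stay consistent within one world. Below is a rough Python analogue (Church itself is Scheme-like; this is only to convey the idea).

import random

def mem(f):
    # Stochastic memoization: the first call for a given argument samples;
    # later calls return the same sample, keeping one "world" consistent.
    cache = {}
    def memoized(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return memoized

eye_color = mem(lambda person: random.choice(["blue", "brown", "green"]))

print(eye_color("bob"), eye_color("bob"))   # same color both times
print(eye_color("alice"))                   # sampled independently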
I would say that if what you're doing has anything to do with Bayesian Networks, Hierarchical Bayesian Models, probabilistic solutions to POMDPs or Dynamic Bayesian Networks then I think Church is a great help. For what it's worth, I've worked with both Noah and Josh (two of Church's authors) and no one has a better handle on probabilistic inference right now (IMHO).
Church is part of the family of probabilistic programming languages, which allow the estimation of a model to be separated from its definition. This makes probabilistic modeling and inference a lot more accessible to people who want to apply machine learning but are not themselves hardcore machine learning researchers.
For a long time, probabilistic programming meant you had to come up with a model for your data and derive the estimation of the model yourself: you have some observed values, and you want to learn the parameters. The structure of the model is closely related to how you estimate the parameters, and you had to have pretty advanced knowledge of machine learning to do the computations correctly. The recent probabilistic programming languages are an attempt to address that and make things more accessible for data scientists, or for people applying machine learning in their work.
As an analogy, consider the following:
You are a programmer and you want to run some code on a computer. Back in the 1970s, you had to write assembly language on punch cards and feed them into a mainframe (on which you had to book time) in order to run your program. It is now 2014, and there are high-level, simple-to-learn languages in which you can write code even with no knowledge of how computer architecture works. It's still helpful to understand how computers work when writing in those languages, but you don't have to, and many more people write code than would if you still had to program with punch cards.
Probabilistic programming languages do the same for machine learning with statistical models. Also, Church isn't the only choice for this. If you aren't a functional programming devotee, you can also check out the following frameworks for Bayesian inference in graphical models:
Infer.NET, written in C# by the Microsoft Research lab in Cambridge, UK
stan, written in C++ by the Statistics department at Columbia
You know what does a better job of describing Church than what I said? This MIT article: http://web.mit.edu/newsoffice/2010/ai-unification.html
It's slightly more hyperbolic, but then, I'm not immune to the optimism present in this article.
Likely, the article was intended to be published on April Fool's Day.
Here's another article, dated late March of last year: http://dspace.mit.edu/handle/1721.1/44963