Are Multi-Agent Systems just hype?

As a researcher, I am curious to hear what people think of Multi-Agent Systems, assuming of course you have come across the idea. Do you believe there is more to them than hype and another buzzword? Can you see any potential uses in business or everyday computing? Or do you think that we can already achieve everything MAS has to offer but with simpler, more elegant solutions?

I am a research professor who has published many articles at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), the main venue for multiagent research.
MAS is a term used by researchers for algorithms and methods for organizing teams of autonomous agents. It was coined around 1995 for the first International Conference on Multiagent Systems (ICMAS), which brought the Distributed Artificial Intelligence (DAI) and Autonomous Agents research communities together under one tent: the MAS tent. Researchers in MAS have developed algorithms for robot soccer (see RoboCup), coordinating autonomous robot rovers (as in the Mars rovers), distributed allocation of resources (deciding who does which task), and many other domains.
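To make "distributed allocation of resources" concrete, here is a minimal sketch of a greedy, contract-net-style auction. Everything in it (the rover names, positions, and the distance-based cost model) is invented for illustration, not taken from any particular system.

```python
# A greedy auction for task allocation: each task is announced, every agent
# bids its cost for that task, and the cheapest bidder wins it.
agents = {"rover1": (0, 0), "rover2": (5, 5)}  # agent name -> position
tasks = {"sample_A": (1, 1), "sample_B": (6, 4), "sample_C": (0, 5)}

def bid(agent_pos, task_pos):
    """An agent's bid for a task: Manhattan distance (lower is better)."""
    return abs(agent_pos[0] - task_pos[0]) + abs(agent_pos[1] - task_pos[1])

assignment = {}
for task, t_pos in tasks.items():
    winner = min(agents, key=lambda name: bid(agents[name], t_pos))
    assignment.setdefault(winner, []).append(task)

print(assignment)
# {'rover1': ['sample_A', 'sample_C'], 'rover2': ['sample_B']}
```

Real contract-net protocols add message passing, re-announcement, and failure handling on top, but the announce/bid/award cycle is the core idea.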
I don't see that there is any "hype" as you describe. You can read all the papers from past conferences, and each one clearly states what the author tried to accomplish, how they tried it, and what the results were. I do not know of anyone making silly claims about the power of these techniques: they are all just algorithms (isn't everything?). No magic here.
The question
do you think that we can already achieve everything MAS has to offer but with simpler, more elegant solutions?
is incorrect in that, if you can solve a MAS problem with a simpler, more elegant solution, your solution is now a MAS solution!
MAS is a problem domain, along with the solutions we found so far. If you find a better solution then, awesome, publish it and join the MAS community.
As an aside, I see this confusion often. Journeyman programmers often don't realize that research communities are (usually) defined by the problem they work on, not by a solution approach.

Compared to many other fields of Artificial Intelligence and Technology, multi-agent systems aren't hyped enough!
I regularly meet people who are active in the fields of technology, programming, and "artificial intelligence" (quoted, since that term is now hype and has effectively lost all meaning) yet who know nothing about multi-agent systems.
I learned about multi-agent systems in 2008 through NetLogo, and it changed my perspective on problem solving with computational technology. I realized at the time that these kinds of programs would require ever-increasing computing power. More recently I have learned all the hype-driven stuff (data science, ML, DNNs, RL, etc.), and I think it will integrate well with MAS in ways that are not yet fully understood. Many people are introduced to this way of thinking through MMO gaming, which has been a huge hit, so there may be a leap yet to come.


Classical AI, ontology, machine learning, Bayesian

I'm starting to study machine learning and Bayesian inference applied to computer vision and affective computing.
If I understand right, there is a big debate between classical AI, ontology, and semantic web researchers on one side, and the machine learning and Bayesian crowd on the other.
I think this is usually framed as strong AI vs. weak AI, related also to philosophical issues like functionalist psychology (the brain as a black box) and cognitive psychology (theory of mind, mirror neurons), but that is not the point in a programming forum like this.
I'd like to understand the differences between the two points of view. Ideally, answers will reference examples and academic papers where one approach gets good results and the other fails. I am also interested in the historical trends: why some approaches fell out of favour and newer ones rose up. For example, I know that exact Bayesian inference is computationally intractable in general (the problem is NP-hard), and that is why probabilistic models were long out of favour in the information technology world. However, they have begun to rise up in econometrics.
I think you have got several ideas mixed up together. It's true that a distinction gets drawn between rule-based and probabilistic approaches to 'AI' tasks; however, it has nothing to do with strong or weak AI, very little to do with psychology, and it's not nearly as clear-cut as a battle between two opposing sides. Also, I think saying Bayesian inference was not used in computer science because inference is NP-complete in general is a bit misleading. That result often doesn't matter much in practice, and most machine learning algorithms don't do real Bayesian inference anyway.
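To see the distinction in miniature, compare a hand-written rule with a count-based Naive Bayes classifier. This is a toy sketch; the training data and words are invented, and real systems would use far more data and features.

```python
from collections import Counter

training = [
    ("buy cheap pills now", "spam"),
    ("cheap flights buy now", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow", "ham"),
]

# Rule-based: a human writes the knowledge down explicitly.
def rule_classify(text):
    return "spam" if "cheap" in text or "buy" in text else "ham"

# Probabilistic: the "knowledge" is estimated from data instead.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
vocab = set()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())
    vocab.update(text.split())

def nb_classify(text):
    def score(label):
        total = sum(word_counts[label].values())
        p = label_counts[label] / sum(label_counts.values())
        for w in text.split():
            # Laplace smoothing so unseen words don't zero the product out.
            p *= (word_counts[label][w] + 1) / (total + len(vocab))
        return p
    return max(("spam", "ham"), key=score)

print(rule_classify("cheap pills"), nb_classify("cheap pills"))  # spam spam
```

Neither of these is doing "real" Bayesian inference in any deep sense; the point is only where the knowledge comes from, hand-coded rules versus parameters estimated from data.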
Having said all that, the history of Natural Language Processing went from rule-based systems in the 80s and early 90s to machine learning systems up to the present day. Look at the history of the MUC conferences to see the early approaches to the information extraction task. Compare that with the current state of the art in named entity recognition and parsing (the ACL wiki is a good source for this), which is all based on machine learning methods.
As for specific references, I doubt you'll find anyone writing an academic paper that says 'statistical systems are better than rule-based systems', because it's often very hard to make such a definite statement. A quick Google for 'statistical vs. rule based' yields papers like this one, which looks at machine translation and recommends using both approaches according to their strengths and weaknesses. I think you'll find that this is pretty typical of academic papers. The only thing I've read that really takes a stand on the issue is 'The Unreasonable Effectiveness of Data', which is a good read.
As for the "rule-based" vs. " probabilistic" thing you can go for the classic book by Judea Pearl - "Probabilistic Reasoning in Intelligent Systems. Pearl writes very biased towards what he calls "intensional systems" which is basically the counter-part to rule-based stuff. I think this book is what set off the whole probabilistic thing in AI (you can also argue the time was due, but then it was THE book of that time).
I think machine learning is a different story (though it's nearer to probabilistic AI than to logic).

How to design the artificial intelligence of a fighting game (Street Fighter or Soul Calibur)?

There are many papers about ranged-combat artificial intelligences, like Killzone's (see this paper) or Halo's. But I've not been able to find much about fighting AI, except for this work, which uses neural networks to learn how to fight, and that is not exactly what I'm looking for.
Western game AI seems heavily focused on FPSs! Does anyone know which techniques are used to implement a decent fighting AI? Hierarchical finite state machines? Decision trees? Those could end up being pretty predictable.
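For concreteness, this is the kind of state machine I mean; a minimal sketch in which every state, observation, and transition is invented:

```python
# transitions[state][observation] -> next state
transitions = {
    "approach": {"in_range": "attack", "opponent_jumps": "anti_air"},
    "attack":   {"blocked": "retreat", "hit": "attack"},
    "anti_air": {"landed": "approach"},
    "retreat":  {"safe_distance": "approach"},
}

state = "approach"
for observation in ["in_range", "blocked", "safe_distance", "opponent_jumps"]:
    state = transitions[state].get(observation, state)  # stay put if no edge
    print(observation, "->", state)
```

Easy to write and debug, but also easy for a human player to read and exploit, which is exactly the predictability problem.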
In our research lab, we are using AI planning technology for games. AI planning is used by NASA to build semi-autonomous robots. Planning can produce less predictable behavior than state machines, but planning is a highly complex problem; that is, solving planning problems has a huge computational complexity.
AI planning is an old but interesting field. Particularly in gaming, people have only recently started using planning to run their engines. The expressiveness of current implementations is still limited, but in theory it is limited "only by our imagination".
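To show what "planning" means concretely, here is a minimal STRIPS-style forward-search planner. The fighting-game actions are invented for the example and are not drawn from any production engine.

```python
from collections import deque

# action name -> (preconditions, add effects, delete effects)
actions = {
    "close_distance": ({"far"}, {"near"}, {"far"}),
    "sweep": ({"near", "opponent_standing"}, {"opponent_down"}, {"opponent_standing"}),
    "throw": ({"near", "opponent_standing"}, {"opponent_down"}, {"opponent_standing"}),
}

def plan(state, goal):
    """Breadth-first search over world states; returns a shortest action list."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, steps = frontier.popleft()
        if goal <= s:                      # all goal facts hold
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= s:                   # action is applicable
                nxt = frozenset((s - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"far", "opponent_standing"}, {"opponent_down"}))
# ['close_distance', 'sweep']
```

The appeal over a hand-wired state machine is that you declare goals and action effects, and the move sequence falls out of the search; new behavior appears when you add actions, not when you rewire transitions.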
Russell and Norvig devote four chapters to AI planning in their book on Artificial Intelligence. Other related terms you might be interested in are Markov Decision Processes and Bayesian Networks; these topics are also given good exposure in that book.
If you are looking for a ready-made engine that is easy to start using, I suspect AI planning would be gross overkill. I don't know of any AI planning engine for games, but we are developing one. If you are interested in the long term, we can talk about it separately.
You seem to already know the techniques for planning and executing. Another thing you need to do is predict the opponent's next move and maximize the expected reward of your response. I wrote blog articles about this: http://www.masterbaboon.com/2009/05/my-ai-reads-your-mind-and-kicks-your-ass-part-2/ and http://www.masterbaboon.com/2009/09/my-ai-reads-your-mind-extensions-part-3/ . The game I consider is very simple, but I think the main ideas from Bayesian decision theory might be useful for your project.
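The core of the idea fits in a few lines: keep frequency counts of the opponent's moves, treat them as a probability distribution, and answer with whichever response maximizes expected reward. The moves and payoff numbers below are invented for illustration.

```python
from collections import Counter

history = ["punch", "punch", "kick", "punch", "throw"]   # opponent's past moves
belief = {m: n / len(history) for m, n in Counter(history).items()}

# reward[my_response][opponent_move]
reward = {
    "block": {"punch": 1, "kick": 1, "throw": -2},
    "jump":  {"punch": 0, "kick": 2, "throw": 2},
    "grab":  {"punch": -1, "kick": -1, "throw": 3},
}

def expected(response):
    return sum(belief[m] * reward[response][m] for m in belief)

print(max(reward, key=expected))  # jump, given this history and payoff table
```

Conditioning the counts on recent context (for instance, on the opponent's previous move) is the natural next step, and smoothing keeps the belief sane for moves that have never been observed.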
I have reverse engineered the routines of the AI subsystem in the Street Fighter II series of games. It does not incorporate any of the techniques mentioned above. It is entirely reactive and involves no planning, learning, or goals. Interestingly, there is no "technique weight" system of the sort you mention, either: they don't use global weights for decisions, for example to set the frequency of attacking versus blocking. When taking apart the routines behind how "difficulty" is made to seem to increase, I did expect to find something like that. Alas, it comes down to a number of smaller decisions that can affect those ratios in an emergent way.
Another route to consider is so-called Ghost AI, as described here and here. As the name suggests, you basically extract rules from actual game play; the first paper does it offline, and the second extends the methodology to online, real-time learning.
Also check out the author's webpage; there are a number of other interesting papers on fighting games there:
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/index-R.html
It's old, but there are some examples there.

Theories of software engineering [closed]

In my career I've come across two broad types of theory: physical theories and educational/management theories:
Physical theories are either correct (under appropriate conditions) or incorrect, as judged by the physical world.
Educational/management theories have the appearance of being like physical theories, but they lack rigorous testing. At best they give new ways of thinking about problems. Multiple theories are useful because one of them may speak to you in the right way.
As a hobbyist student of software engineering, I see that there appear to be a lot of theories of software engineering (such as agile programming, test-driven design, patterns, extreme programming). Should I consider these theories to be physical-like or educational/management-like?
Or have I misunderstood software engineering and find myself in the position of being "not even wrong"?
Software engineering is ultimately about psychology, how humans manage complexity. So software engineering principles are far more like education and management theories than physical principles.
Some software engineering has solid math behind it: O(n log n) sorts are faster than O(n^2) sorts, etc. But mostly software engineering is about how humans think about software. How to organize things so that maintainers don't go crazy, anticipating what is likely to change and what is not, preventing and detecting human errors, etc. It's a branch of psychology or sociology.
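As a throwaway aside, the one claim above that does have solid math behind it is easy to demonstrate. A quick, unscientific timing sketch:

```python
import random
import time

def bubble_sort(xs):
    """A deliberately naive O(n^2) sort."""
    xs = xs[:]
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [random.random() for _ in range(3000)]

t0 = time.perf_counter()
sorted(data)              # built-in Timsort, O(n log n)
t1 = time.perf_counter()
bubble_sort(data)         # O(n^2)
t2 = time.perf_counter()

print(f"sorted(): {t1 - t0:.4f}s   bubble_sort(): {t2 - t1:.4f}s")
```

Notice that nothing comparable exists for "is this codebase maintainable", which is the point.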
I think the appropriate theoretical split is between the "harder" sciences (where there can be proofs) and the softer topics with qualitative answers and few proofs, if any.
Software to me is mostly about language and communication, a topic that is mostly qualitative and subjective. Every now and then we touch on algorithms and other "hard" areas, where proofs and rigorous formalisms exist. So, yes, both please.
Not even wrong.
All the software engineering "theories" seem to be nothing but advice on particular things to try to see if they make you and your team more productive. Even if one could set them up to be falsifiable like scientific theories, there would be very little point to it. That is not to say that it is not worthwhile to learn about them -- au contraire, you should become familiar with as many of them as possible and try to figure out in what kinds of teams and environment they may work better. But be careful: avoid dogma and thinking there are silver bullets.
I wouldn't call agile programming, test-driven design, patterns, extreme programming, etc. "theories"; they're methodologies, or work styles. They don't make any assertions.
Generally the field of Informatics is divided into 4 areas (need to find a link to the source, SWEBOK?), which are distinct although related and interconnected:
Computer Science
Software Engineering
Computer Engineering
Information Systems
There is a good analysis of engineering vs. science in Steve McConnell's "Professional Software Development". Check out his "Software Engineering, Not Computer Science".
Software development is more about engineering, finding practical solutions to practical problems, than anything else. It is true that software engineering relies on computer science, mathematics, complexity theory, systematics, psychology, and other disciplines, but it cannot be equated to any of them, nor is it simply a combination of them.
Besides theories, there are also frameworks, models, and rules of thumb. Ideas, sure, but ones based on a less rigorous foundation, which loosely belong to your education/management category.
Computer Science has some strong foundational theories (physical ones by your definition), but these mostly consist of tying together the smaller elements.
Software Engineering, on the other hand, is a relatively new discipline that involves using computers and occasionally Computer Science to build software systems. Most of the practice in that arena is based entirely on non-rigorous, experimental, and anecdotal evidence. Since the jury is still out on even the simplest of issues, most of what passes for practice could best be described as pure guesswork and irrational preference. It's one of those disciplines where you really do have to know a lot to realize how much is built on a very unstable house of cards.
Paul.
Being intangible, programming is a very difficult activity to relate to another human being, even other programmers. Software engineering tries to add structure where there is none, but such structure is not rooted in the inevitability of reality. So all these approaches become like religions in how groups of people behave when trying to appease their technical gods (or demons).
All these theories and best practices still haven't brought us to the point where we can produce software systems reliably and predictably. The newest of these surveys is dated 2001; Jeff's column from 2006 still laments high failure rates.
It'd be interesting to see if anybody's working on an updated survey.
Avionics and the software running my car don't seem to fail at anything close to the rates quoted for enterprise software. Why don't enterprise developers follow their practices more closely? Maybe we should all be writing Ada....[just kidding]
They're like recipes: they're guidelines, whose success depends:
Partly, on the quality of the recipe
Partly, on the quality of the ingredients
Partly, on the skill of (and time available to) the practitioners
For me, it's my own theory, with many of the others used as a base. I don't know anyone who uses a single specific theory. And that's not a cop-out answer.
Just as there are different languages, theories/practices/methodologies are to be used in distinct situations. The structure, rules, and definitions are all the ways in which people understand how things are to be accomplished, but what is to be accomplished is subjective.
Adapt, knowing the agile, extreme, or other methods, at the discretion of the client, project, programmer, time, and especially what makes you successful/happy. Be a team player and adjust/adapt to what your team is doing for the greater good; just keep in mind to have something that you have defined in your own mind, or it's just chaos.
[SOAPBOX]
I started programming on the Atari 400 with a converted flat keyboard and a 64K upgrade. When I started college, it was VB 1.0, which I saw my economics teacher use to build a teaching tool to help people learn more about economics using graphs and visual inputs. That was cool! And I knew I could do that.
This same economics teacher, who later became an IT teacher too (he was good), asked if I would teach a class on debugging. He said, "I haven't met someone who understands the concepts and has a natural ability to debug as fast as you do; would you teach us what you know and how you do it?" That was a boost to my ego, of course, but more than that it was a chance to teach, mentor, and help others.
Every one of those instances has fueled my desire to help other people. For me, I want a computer to do exactly what I want, and to help other people in business and home life to increase their quality of life, learn more, and get more done.
Someone said to me one time, "You're only as good as your tools". Learn, practice, and grow.
If you've defined something, it's working, has order, and it stretches you and the boundaries, you're not wrong.
Is there such a thing as "software engineering"?
Or: is software development "engineering" at all?
Facts:
Our industry is very young relative to many other branches of engineering.
We still do not have "solid" practices and "theories".
So, to be honest, from the perspective of the more mature engineering disciplines, it is hard to call what we do "engineering".
We have a very bad reputation for failing [our failure rates would not be acceptable in many engineering branches].
Reality or Fantasy? Pick one :-)
Some people say that we do not have "solid" practices and "theories" because we are a young "engineering" branch, and that in time we will. They say we need to work more on "theory" and foundations.
Some people say that software development is an "experimental social activity" because of the nature of our problem domain. We will have practices, theories, methodologies, and processes, but they will always have a second-order effect; the unique people, their feelings and qualities, and their interactions with everyone else are more influential. These people see software development as a Complex Adaptive System.
And there is also another reality:
80% of software development activity really does not need a very brilliant mind; any "average" person can do it.
But the remaining 20% is a very hard, multidisciplinary task.
And there is yet another new perspective, my own. :-)
This view says that software development is not a branch of "Engineering"; it is a branch of the natural and social sciences. So we need Software Anthropology and Software Anthropologists.
Theory: I think a theory is anything that describes "how" a natural system works and, in order to be proven, rests on logical deductions from previous knowledge, substantiated by inductions drawn from experiments.
You call the whole body of these theories and experiments Science.
Software: software is a man-made system, i.e., an engineered system. Engineering applies Science in order to create new systems. In that regard, pure Software Engineering applies the science of discrete mathematical systems.
But Commercial Software Engineering has a different motivation, called Economics.
In that regard, it has to take into account all the factors that affect Economics, chief among them People. So Psychology plays a huge part.
But since Psychology itself is just a theory of "how" the human mind works, based on pattern recognition rather than logical deductions grounded in human biology, it has many flaws, such as assuming correlation implies causation.
So, yeah, I think from the above you can better understand what Commercial Software Engineering is in total.

What are the most important technical skills for a computer software expert witness?

As a computer software expert witness, I am required to analyze a huge range of different software technologies. During my deposition or trial testimony, the opposing expert may direct questions targeted at exposing or revealing my weaknesses. There is no time for research or education.
Given that I can't be an expert in every technology, what are the most versatile and transferable skills or technologies I should learn?
I will start with the obvious:
Databases are omnipresent (but which are the best archetypes?)
C is often involved due to the prevalence of older Windows- and DOS-based systems
What should be added to this list?
I may be misreading your question, but I suspect that if you are being called upon as an expert witness, you already have the expertise they are seeking. Learning more technical aspects of any technology would make you more likely to become an expert witness, but ultimately I would recommend the best skill of all: truthfulness. If you don't know, say so. Any unknown questions can then become the "to be studied" list for later review.
just my 2 cents ...
It would be silly to call you as an expert witness if you cannot be an expert in the line of questioning.
Well, the big thing about being a witness is to listen to the counsel for whom you are testifying. In the computer world, your credibility is not easily impugned. If they were to try, it would be by calling into question your formal education or training as insufficient for an expert. They won't be asking you to explain what a Turing machine is, or how to write a sorting algorithm in LISP, unless it is directly relevant to the matter at hand. They won't be playing "Gotcha!" with difficult technical questions, as it won't resonate with the judge or jury. How many jury members can you picture saying, "I can't BELIEVE that 'expert' doesn't understand database normalization! What a fraud!"? If the jury doesn't understand the question, they won't understand the answer. Any first-year law student will tell you all about this problem (it comes up in all kinds of expert testimony situations).
No, your credibility will be questioned in laymen's terms. If you are being asked to testify, it's because you have the answers that are relevant. Stick to those and don't do any tricks (as your counsel will tell you), and you'll be fine. If your information is correct, and your degree/experience is solid, you may not even be cross-examined (they will just find their own expert to say the opposite of what you said).
Computer software expert witnesses also need a good understanding of networking technology and the ability to explain it to a jury or judge. Because a great deal of software is client/server based, being able to explain how firewalls, IP addresses, HTTP, and internet routers work, and why you can tell that certain pieces of software were definitely used at certain times and locations, is important.
Being familiar with server operating systems and the log files they generate is also helpful.
I would say forget learning new technology beyond understanding industry concepts and how they're really applied in the real world. The key thing you need to be able to do as an expert witness is explain these concepts in terms that can be easily understood by the layman. You already know this stuff, or you wouldn't be the expert witness. You're there because your name and reputation are well thought of, and they [prosecution/defence] need your help.
I think of it like this: the lawyer/barrister/attorney's job is to sell their vision of the truth and get the jury to buy into that vision [skewed as it may or may not be]. Your job is to sell the facts. Either the two are one and the same, or they aren't. Sell the facts to the best of your ability; if you have easily understood examples [by easily understood, I mean by an 8-year-old], all the better.
The key concepts, I would think, are the software systems that people use or exploit either to commit or to cover up a crime:
Networking systems: Common protocols, packet tracing etc.
Firewall systems and common exploits.
Viruses and replication: Worms vs. Trojans etc.
Major Operating Systems: Basic concepts and common exploits.
Web Applications: How they're structured and how they can be exploited.
Common hacking concepts: DoS, OOB attacks, SQL injection, etc. (see the sketch after this list).
Email concepts: transmission, receipt, tracking, header information.
Data storage and recovery concepts and key software.
Surveillance Techniques: Packet analysis, key loggers etc.
I'm sure there are a few others, but none immediately spring to mind.
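Picking one item from the list above, here is a minimal illustration of SQL injection using Python's built-in sqlite3 module. The table, data, and attack string are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"

# Vulnerable: user input is concatenated straight into the query text,
# so the quote in `attack` rewrites the WHERE clause.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '" + attack + "'").fetchall()
print("vulnerable query returned:", leaked)   # every row leaks

# Safe: a parameterized query treats the input as data, never as SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)).fetchall()
print("parameterized query returned:", safe)  # no rows match
```

An expert witness is rarely asked to write exploits, but a two-query demonstration like this is the sort of thing a jury can actually follow.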
Definitely learn about email systems. I'd imagine email communications come into play fairly often in court cases these days. Learn how SMTP and POP3 work. Learn the basics of email servers and what ways they can be manipulated and how difficult it is to do.
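As a sketch of what knowing SMTP buys you in practice: every relay prepends a Received header, so reading those headers bottom-up reconstructs the path a message took. This uses Python's standard email module; the raw message below is invented.

```python
from email import message_from_string

raw = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
\tby mx.example.com; Mon, 1 Jan 2024 10:00:02 +0000
Received: from client.example.org ([192.0.2.5])
\tby mail.example.net; Mon, 1 Jan 2024 10:00:00 +0000
From: alice@example.org
To: bob@example.com
Subject: hello

body text
"""

msg = message_from_string(raw)
# Relays prepend, so the earliest hop is the last Received header.
for hop in reversed(msg.get_all("Received")):
    print(" ".join(hop.split()))  # unfold the header onto one line
```

The caveat worth knowing cold: Received headers added before the message reached servers you trust can be forged, which is exactly the kind of thing a cross-examiner will probe.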
I think you're deceiving yourself: what is a "computer software expert witness"? That's like saying that because you're an electrical engineer, you have the capacity to answer any engineering question, whether from chemical, mechanical, civil, or another specific area of engineering.

How to design and verify distributed systems?

I've been working on a project, which is a combination of an application server and an object database, and is currently running on a single machine only. Some time ago I read a paper which describes a distributed relational database, and got some ideas on how to apply the ideas in that paper to my project, so that I could make a high-availability version of it running on a cluster using a shared-nothing architecture.
My problem is that I don't have experience designing distributed systems and their protocols; I did not take the advanced CS courses on distributed systems at university. So I'm worried about being able to design a protocol that does not suffer from deadlock, starvation, split brain, and other problems.
Question: Where can I find good material about designing distributed systems? What methods are there for verifying that a distributed protocol works correctly? Recommendations of books, academic articles, and other resources are welcome.
I learned a lot by looking at what is published about really huge web-based platforms, and especially at how their systems evolved over time to meet their growth.
Here are some examples I found enlightening:
eBay Architecture: a nice history of their architecture and the issues they had. Obviously they can't use much caching for the auctions and bids, so their story differs on that point from many others. As of 2006, they deployed 100,000 new lines of code every two weeks, and were able to roll back an ongoing deployment if issues arose.
Paper on the Google File System: a nice analysis of what they needed, how they implemented it, and how it performs in production use. After reading it, I found it less scary to build parts of the infrastructure myself to meet exactly my needs, if necessary, and saw that such a solution can and probably should be quite simple and straightforward. There is also a lot of interesting material on the net (including YouTube videos) about BigTable and MapReduce, other important parts of Google's architecture.
Inside MySpace: one of the few really huge sites built on the Microsoft stack. You can learn a lot about what not to do with your data layer.
A great start for finding many more resources on this topic is the Real Life Architectures section on the "High Scalability" website. For example, they have a good summary of Amazon's architecture.
Learning distributed computing isn't easy. It's really a very vast field covering areas of communication, security, reliability, concurrency, etc., each of which would take years to master. Understanding will eventually come through a lot of reading and practical experience. You seem to have a challenging project to start with, so here's your chance. :)
The two most popular books on distributed computing are, I believe:
1) Distributed Systems: Concepts and Design - George Coulouris et al.
2) Distributed Systems: Principles and Paradigms - A. S. Tanenbaum and M. Van Steen
Both of these books give a very good introduction to the current approaches (including communication protocols) used to build successful distributed systems. I've personally used the latter mostly, and I've found it to be an excellent text. If you think the reviews on Amazon aren't very good, it's because most readers compare this book to other books written by A. S. Tanenbaum (who IMO is one of the best authors in the field of Computer Science), which are quite frankly better written.
PS: I really question your need to design and verify a new protocol. If you are working with application servers and databases, what you need is probably already available.
I liked the book Distributed Systems: Principles and Paradigms by Andrew S. Tanenbaum and Maarten van Steen.
At a more abstract and formal level, Communicating and Mobile Systems: The Pi-Calculus by Robin Milner gives a calculus for verifying systems. There are variants of pi-calculus for verifying protocols, such as SPI-calculus (the wikipedia page for which has disappeared since I last looked), and implementations, some of which are also verification tools.
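For a taste of the notation, the core communication rule of the pi-calculus, in which one process sends the name z on channel x and another receives it, can be written as:

```latex
% One reduction step: the sender's continuation P proceeds, and the received
% name z is substituted for y in the receiver's continuation Q.
\bar{x}\langle z \rangle.P \;\mid\; x(y).Q \;\longrightarrow\; P \;\mid\; Q\{z/y\}
```

Because channel names are themselves values that can be passed around, this one rule already captures systems whose communication topology changes at runtime, which is what makes the calculus useful for reasoning about protocols.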
Where can I find good material about designing distributed systems?
I have never been able to finish the famous book by Nancy Lynch. However, I find that the book by Sukumar Ghosh, Distributed Systems: An Algorithmic Approach, is much easier to read, and it points to the original papers when needed.
It is nevertheless true that I haven't read the books by Gerard Tel and Nicola Santoro. Perhaps they are even easier to read...
What methods are there for verifying that a distributed protocol works correctly?
In order to survey the possibilities (and also in order to understand the question), I think it is useful to get an overview of the available tools from the book Software Specification Methods.
My final decision was to learn TLA+. Why? Even if other languages and tools may seem better, I really decided to try TLA+ because the guy behind it is Leslie Lamport: not just a prominent figure in distributed systems, but also the author of LaTeX!
You can get the TLA+ book and several examples for free.
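To give a flavor of what TLA+'s model checker (TLC) does, here is the idea in miniature, as a hypothetical Python sketch: enumerate every reachable state of a tiny invented protocol and assert an invariant in each one.

```python
# Explicit-state checking in miniature: breadth-first search over all
# reachable states of a toy two-process protocol, asserting the
# mutual-exclusion invariant in every state. Not TLC itself, just the idea.

def next_states(state):
    """Yield all successors; each transition is atomic in this model."""
    for i in (0, 1):
        succ = list(state)
        if state[i] == "idle":
            succ[i] = "waiting"
            yield tuple(succ)
        elif state[i] == "waiting" and state[1 - i] != "critical":
            succ[i] = "critical"
            yield tuple(succ)
        elif state[i] == "critical":
            succ[i] = "idle"
            yield tuple(succ)

init = ("idle", "idle")
seen, frontier = {init}, [init]
while frontier:
    state = frontier.pop()
    assert state != ("critical", "critical"), f"invariant violated in {state}"
    for succ in next_states(state):
        if succ not in seen:
            seen.add(succ)
            frontier.append(succ)

print(f"checked {len(seen)} reachable states; mutual exclusion holds")
```

TLA+ adds a real specification language, temporal properties, and much more on top, but "exhaustively walk the state graph and check invariants in every state" is the essential move.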
There are many classic papers by Leslie Lamport (http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html) and Edsger Dijkstra (http://www.cs.utexas.edu/users/EWD/).

For the database side, a mainstream trend is the NoSQL movement; many projects are appearing on the market, including CouchDB (couchdb.apache.org), MongoDB, and Cassandra. These all promise scalability and manageability (replication, fault tolerance, high availability).
One good book is Birman's Reliable Distributed Systems, although it has its detractors.
If you want to formally verify your protocol you could look at some of the techniques in Lynch's Distributed Algorithms.
It is likely that whatever protocol you are trying to implement has been designed and analysed before. I'll just plug my own blog, which covers e.g. consensus algorithms.
