Artificial Intelligence & Von Neumann Model

As we advance further in building AI models, it seems that the Von Neumann architecture has certain limitations. In a real brain, neurons work in bulk and information is stored in networks. Each neuron has thousands of input and output connections to other neurons, some of them weak and others strong. When neurons fire together, a signal based on the weights of the connection paths is created, and that causes a pattern of other neurons firing in response. There are no single units that store information.
The major distinction is that there is no separation between storing/retrieving information and computation, as there is in the Von Neumann model.
Is there any system, currently on the market or in the research sector, that uses a distinctly different architecture?
Can anyone briefly describe or point to, in simple terms, a different framework?
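To make the premise concrete, here is a minimal sketch (Python, with NumPy assumed) of a Hopfield-style associative memory, one classic model in which storing patterns and computing a recall are the same kind of operation over the same weights, with no separate memory unit holding the patterns:

    import numpy as np

    # Minimal Hopfield-style associative memory. Both storing and recalling
    # are matrix arithmetic over the same weights W; the patterns are not
    # kept in any separate, addressable memory cell.

    def store(patterns):
        """Hebbian learning: superimpose outer products of +/-1 patterns."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / len(patterns)

    def recall(W, probe, steps=10):
        """Let the network settle toward the closest stored pattern."""
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    W = store(patterns)
    noisy = np.array([1, -1, 1, -1, 1, 1])  # corrupted copy of pattern 0
    print(recall(W, noisy))                 # recovers [ 1 -1  1 -1  1 -1]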

As a starter, I would recommend a very interesting article that covers this from a broad perspective: https://medium.com/swlh/the-explosion-of-new-architectures-is-fundamentally-changing-computing-f69b7faae89d
On a more technical level, I can recommend the work of Ganguly et al. https://ieeexplore.ieee.org/abstract/document/8697354/
I hope these are of interest to you.

Related

Is a neural network a lazy or eager learning method?

Is a neural network a lazy or eager learning method? Different web pages say different things so I want to get a solid answer with good literature to back it up. The most obvious book to look in would be Mitchell's famous Machine Learning book but skimming through the whole thing I can't see the answer. Thanks :).
Looking at the definitions of the terms lazy and eager learning, and knowing how a neural network works, I believe it is clearly eager. A trained network is a generalisation function: all the weights and paths used to arrive at a classification are entirely determined by the training data, but the training data itself is not retained for the purposes of decision making.
An important distinction is that a lazy system stores its training data and uses it directly to determine a solution, while an eager system derives a function from the training data, after which the training data is no longer required. That is to say, you cannot determine what the training data was from an eager system's function. A neural network certainly fits that description. An eager system can therefore be very storage efficient, but conversely it is opaque, in the sense that it is not possible to determine how or why it arrived at a particular solution, so problems caused by poor or inappropriate training data may be difficult to deal with.
The eager learning article linked above even gives artificial neural networks as an example. You might of course prefer a cited text to Wikipedia, but the page has existed with that assertion since 2007 without contradictory edits, so I'd say it's pretty robust.
Some neural networks are eager learners, and some are lazy. Feedforward neural networks (as are commonly trained by some variant of backpropagation) are eager: they attempt to derive a representation of the underlying relationships in the data at the time of training. Radial basis function networks (such as probabilistic NN or generalized regression NN), on the other hand, are lazy learners (very much like k-nearest neighbors, the classic lazy learner).
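To see the difference in code, here is a small sketch (Python, with scikit-learn assumed) contrasting the two styles: the eager learner compresses the training data into weights at fit() time, while the lazy learner stores the data and defers all the work to prediction time.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])  # XOR: a non-linear toy problem

    # Eager: the work happens in fit(); only weights are kept afterwards.
    eager = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                          random_state=0).fit(X, y)

    # Lazy: fit() essentially just stores X and y; work happens in predict().
    lazy = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    print(eager.predict([[0, 1]]), lazy.predict([[0, 1]]))
    # The fitted MLP retains only its weight matrices (eager.coefs_), not X;
    # the k-NN retains the training set itself internally, hence "lazy".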
A neural network is generally considered to be an "eager" learning method.
"Eager" learning methods build a general model from the training data up front: the model parameters are adjusted as the training examples are processed, and once training is finished, predictions are made from the learned parameters alone. Neural networks are an example of an eager learning method because their parameters are updated iteratively during the training process, allowing the model to generalize before any query is seen.
On the other hand, "lazy" learning methods, also known as instance-based or memory-based learning, defer the work until a new example is presented. The model does not build a general function during training; instead, it memorizes the training data and uses it directly to make predictions. Lazy learning methods typically require little computation at training time but more at prediction time, and they may not perform as well on unseen data.
In general, neural networks are considered eager learning methods because their parameters are updated during the training process.
Here are a few literature references:
"Machine Learning: An Artificial Intelligence Approach", edited by R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (Tioga, 1983). This early volume surveys the machine learning paradigms of the time and provides background for the later eager/lazy distinction.
"Algorithms for Clustering Data" by A. K. Jain and R. C. Dubes (Prentice-Hall, 1988). This book covers the distance and similarity measures that underpin instance-based (lazy) methods, and contrasts them with model-building approaches.
"Machine Learning" by Tom Mitchell (McGraw-Hill, 1997). This book provides a comprehensive introduction to the field, and its chapter on instance-based learning discusses the lazy/eager distinction directly. It covers a wide range of topics, from supervised and unsupervised learning to reinforcement learning.
"Introduction to Machine Learning" by E. Alpaydin (MIT Press, 2010). This book provides an introduction to the field, including the concepts of eager and lazy learning, along with a broad range of machine learning algorithms.
It's also worth noting that this classification into lazy and eager learning is not always clear-cut and can be somewhat subjective; some algorithms can fall into either category, depending on the specific implementation.

A homework exercise about database architectures

I don't need any answers; I am simply looking for a breakdown, in layman's terms, of what this essay is asking for.
I have done part A; it's very straightforward. However, I am not sure what he means when he refers to applications. Part D is also simple enough.
My main problem is that I am struggling with the context of parts B and C. For whatever reason the question (perhaps the wording) is going straight over my head. If anyone could simply break down and explain parts B and C, I would be very grateful and could finish this essay up.
Any links would be an added bonus but are certainly not expected. Thanks guys.
a) Discuss the 2-tier architecture and the 3-tier architecture of database application processing in terms of architectural layout, applications, performance, security, advantages and disadvantages of such environment. (40 marks)
b) Discuss the software environment of both architectures including the use of high level and mark-up languages, servers with interfaces on different platforms, communication links and tools used. (20 marks)
c) Discuss future planned developments in these two architectures environments in terms of hardware configuration, software utilisation and applications areas. (20 marks)
d) Provide a conclusion of the report indicating a summary of your investigation by arguing the suitability of architectural layout software environment and application areas with reasoning. (10 marks)
Okay guys, if anyone is ever tasked with something similar to this coursework: Part A asks for a comparison of the architectural layouts of the 2-tier and 3-tier architectures; discuss the layers of each and the performance and security issues of each. Part B asks you to discuss HTML, SQL and the other languages used to communicate within each architecture, such as a Java application using JDBC; at this point I discussed the SQL API etc. For Part C I discussed the layout of the client and servers in hardware terms, APIs, distributed databases etc. I hope this may help someone in the future. Si
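For anyone who wants the distinction in code rather than prose, here is a minimal sketch in Python (standing in for the Java/JDBC example above; only the standard library's sqlite3 module is used, and the "application server" is just a function where in practice it would sit behind an HTTP or RPC boundary):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users (name) VALUES ('alice')")

    # 2-tier: the client holds the connection and speaks SQL directly.
    def client_two_tier():
        return [row[0] for row in db.execute("SELECT name FROM users")]

    # 3-tier: the client calls an application-server API; only the middle
    # tier knows SQL or holds the database credentials.
    def app_server_get_users():
        return [row[0] for row in db.execute("SELECT name FROM users")]

    def client_three_tier():
        return app_server_get_users()  # no SQL on the client side

    print(client_two_tier(), client_three_tier())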

Are Multi-Agent Systems just hype?

As a researcher I am curious to hear what people think of Multi-Agent Systems, if of course you have come across the idea. Do you believe there is something more to it than hype and another buzzword? Can you see any potential uses in business or everyday computing? Or do you think that we can already achieve everything MAS has to offer with simpler, more elegant solutions?
I am a research professor who has published many articles in the Autonomous Agents and Multiagent Systems Conference (AAMAS): the main vehicle for multiagent research.
MAS is a term used by researchers (coined around 1995, for the first International Conference on Multiagent Systems (ICMAS), which brought the Distributed Artificial Intelligence (DAI) and Autonomous Agents research communities together under one tent: the MAS tent) that refers to algorithms and methods for organizing teams of autonomous agents. Researchers in MAS have developed algorithms for robot soccer (see RoboCup), for coordinating autonomous rovers (as on Mars), for the distributed allocation of resources (who does which task; see the sketch below), and for many other domains.
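As a flavour of what such algorithms look like, here is a toy sketch (Python; all names and numbers are made up) of auction-based task allocation, a simplified cousin of the contract-net protocol: each task is announced, every agent bids its cost, and the cheapest bidder wins.

    import math

    agents = {"rover1": (0, 0), "rover2": (5, 5), "rover3": (9, 0)}
    tasks = [(1, 1), (6, 4), (8, 1), (2, 5)]

    def bid(agent_pos, task_pos):
        """An agent's bid is simply its travel distance to the task."""
        return math.dist(agent_pos, task_pos)

    assignment = {}
    for task in tasks:
        winner = min(agents, key=lambda a: bid(agents[a], task))
        assignment.setdefault(winner, []).append(task)
        agents[winner] = task  # the winner relocates, affecting later bids

    print(assignment)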
I don't see that there is any "hype" as you describe. You can read all the papers from past conferences, and each one clearly states what the author tried to accomplish, how they tried it, and what the results were. I do not know of anyone making silly claims about the power of these techniques: they are all just algorithms (isn't everything?). No magic here.
The question
do you think that we can already achieve everything MAS has to offer with simpler, more elegant solutions?
is incorrect in that, if you can solve a MAS problem with a simpler and more elegant solution, your solution is now a MAS solution!
MAS is a problem domain, along with the solutions we found so far. If you find a better solution then, awesome, publish it and join the MAS community.
As an aside, I see this confusion often. Journeymen programmers don't realize that research communities are (usually) defined by the problem they work on, not a solution approach.
Compared to many other fields of Artificial Intelligence and Technology, multi-agent systems aren't hyped enough!
I meet people who know nothing about multi-agent systems yet are active in the fields of technology, programming, and "artificial intelligence" (quoted since it is now hyped and has effectively lost all meaning).
I learned about multi-agent systems in 2008 through NetLogo, and it changed my perspective on problem solving with computational technology. I realized at the time that these types of programs would require ever-increasing computing power. More recently I have learned all the hype-driven stuff (data science, ML, DNNs, RL, etc.). I think all this hype will integrate well with MAS, in ways that have yet to be fully understood. Many people are introduced to this way of thinking through MMO gaming, which has been a huge hit, so there may be a leap yet to come.

How to design and verify distributed systems?

I've been working on a project, which is a combination of an application server and an object database, and is currently running on a single machine only. Some time ago I read a paper which describes a distributed relational database, and got some ideas on how to apply the ideas in that paper to my project, so that I could make a high-availability version of it running on a cluster using a shared-nothing architecture.
My problem is that I don't have experience designing distributed systems and their protocols - I did not take the advanced CS courses on distributed systems at university. So I'm worried about being able to design a protocol that does not cause deadlock, starvation, split brain and other problems.
Question: Where can I find good material about designing distributed systems? What methods are there for verifying that a distributed protocol works correctly? Recommendations of books, academic articles and other resources are welcome.
I learned a lot by looking at what is published about really huge web-based platforms, and especially at how their systems evolved over time to meet their growth.
Here are some examples I found enlightening:
eBay Architecture: A nice history of their architecture and the issues they had. Obviously they can't use much caching for the auctions and bids, so on that point their story differs from many others. As of 2006, they deployed 100,000 new lines of code every two weeks - and were able to roll back an ongoing deployment if issues arose.
Paper on the Google File System: A nice analysis of what they needed, how they implemented it and how it performs in production use. After reading this, I found it less scary to build parts of the infrastructure myself to meet exactly my needs, if necessary, and saw that such a solution can and probably should be quite simple and straightforward. There is also a lot of interesting material on the net (including YouTube videos) on BigTable and MapReduce, other important parts of Google's architecture.
Inside MySpace: One of the few really huge sites built on the Microsoft stack. You can learn a lot about what not to do with your data layer.
A great starting point for finding many more resources on this topic is the Real Life Architectures section on the "High Scalability" website. For example, they have a good summary of Amazon's architecture.
Learning distributed computing isn't easy. It's really a vast field covering areas such as communication, security, reliability, concurrency etc., each of which would take years to master. Understanding will eventually come through a lot of reading and practical experience. You seem to have a challenging project to start with, so here's your chance :)
The two most popular books on distributed computing are, I believe:
1) Distributed Systems: Concepts and Design - George Coulouris et al.
2) Distributed Systems: Principles and Paradigms - A. S. Tanenbaum and M. Van Steen
Both these books give a very good introduction to the current approaches (including communication protocols) used to build successful distributed systems. I've personally used mostly the latter, and I've found it to be an excellent text. If you think the reviews on Amazon aren't very good, it's because most readers compare this book to the other books written by A. S. Tanenbaum (who IMO is one of the best authors in the field of computer science), which are quite frankly better written.
PS: I really question your need to design and verify a new protocol. If you are working with application servers and databases, what you need is probably already available.
I liked the book Distributed Systems: Principles and Paradigms by Andrew S. Tanenbaum and Maarten van Steen.
At a more abstract and formal level, Communicating and Mobile Systems: The Pi-Calculus by Robin Milner gives a calculus for verifying systems. There are variants of pi-calculus for verifying protocols, such as SPI-calculus (the wikipedia page for which has disappeared since I last looked), and implementations, some of which are also verification tools.
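For a flavour of the calculus, here is the standard channel-mobility example (written in LaTeX notation): a restricted channel c is sent over the public channel a, after which the receiver can communicate on c.

    (\nu c)\big(\overline{a}\langle c\rangle.\,\overline{c}\langle v\rangle.\,0
        \;\mid\; a(x).\,x(y).\,0\big)
    \;\longrightarrow\; (\nu c)\big(\overline{c}\langle v\rangle.\,0 \;\mid\; c(y).\,0\big)
    \;\longrightarrow\; 0

It is this ability to pass channels themselves as messages that makes the calculus suitable for modelling protocols whose communication topology changes at runtime.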
Where can I find good material about designing distributed systems?
I have never been able to finish the famous book by Nancy Lynch. However, I find that the book by Sukumar Ghosh, Distributed Systems: An Algorithmic Approach, is much easier to read, and it points to the original papers where needed.
Admittedly, I haven't read the books by Gerard Tel and Nicola Santoro. Perhaps they are easier still...
What methods are there for verifying that a distributed protocol works correctly?
In order to survey the possibilities (and also in order to understand the question), I think that it is useful to get an overview of the possible tools from the book Software Specification Methods.
My final decision was to learn TLA+. Why? Even if other languages and tools might seem better, I really decided to try TLA+ because the man behind it is Leslie Lamport. That is, not just a prominent figure in distributed systems, but also the author of LaTeX!
You can get the TLA+ book and several examples for free.
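To demystify what such a tool does, here is the core idea of explicit-state model checking in miniature (a Python sketch, not TLA+ itself; the "protocol" is a deliberately naive lock that tests and sets in two separate steps, so the checker should find the mutual-exclusion bug):

    from collections import deque

    # State: (pc of process 0, pc of process 1, lock taken?)
    # pc: 0 = idle, 1 = observed the lock free, 2 = in the critical section
    INIT = (0, 0, False)

    def _set(state, idx, val):
        s = list(state)
        s[idx] = val
        return tuple(s)

    def steps(state):
        """Yield every successor state (one atomic step by one process)."""
        for i in (0, 1):
            pc, lock = state[i], state[2]
            if pc == 0 and not lock:       # test: see that the lock is free
                yield _set(state, i, 1)
            elif pc == 1:                  # set: take the lock, enter CS
                yield _set(_set(state, i, 2), 2, True)
            elif pc == 2:                  # leave CS and release the lock
                yield _set(_set(state, i, 0), 2, False)

    def mutual_exclusion(state):
        return not (state[0] == 2 and state[1] == 2)

    # Breadth-first search over every reachable interleaving.
    seen, frontier = {INIT}, deque([INIT])
    while frontier:
        s = frontier.popleft()
        if not mutual_exclusion(s):
            print("invariant violated in state:", s)
            break
        for nxt in steps(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    else:
        print("invariant holds over", len(seen), "states")

Real tools like TLC add fairness, liveness checking and counterexample traces, but the underlying loop is recognisably this one.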
There are many classic papers written by Leslie Lamport (http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html) and Edsger Dijkstra (http://www.cs.utexas.edu/users/EWD/).
On the database side, one main stream is the NoSQL movement; many projects are appearing in the market, including CouchDB (couchdb.apache.org), MongoDB and Cassandra. These all promise scalability and manageability (replication, fault tolerance, high availability).
One good book is Birman's Reliable Distributed Systems, although it has its detractors.
If you want to formally verify your protocol you could look at some of the techniques in Lynch's Distributed Algorithms.
It is likely that whatever protocol you are trying to implement has been designed and analysed before. I'll just plug my own blog, which covers e.g. consensus algorithms.

What are areas where you can program artificial intelligence? [closed]

Welcome!
I really enjoyed programming artificial intelligence during my studies - neural networks, expert systems and more. But at work I develop mainly web applications.
Now I am thinking about returning to that kind of programming, maybe as a hobby, or maybe at work. Are there areas where AI is commonly used in application development, where a programmer with such skills can look for work?
Or maybe I could sell some ideas to my boss and use AI to extend some of our applications.
What are your experiences and ideas about using AI in applications?
I recently started reading the book Programming Collective Intelligence. It's an excellent book which discusses exactly what you are looking for - using AI techniques in web applications.
The book is written clearly, is easy to understand, and explains everything in terms of real applications (it covers how some commonly used technology works: Google PageRank, Amazon's recommendation system, matchmaking websites, link recommendation systems, Bayesian spam filters and more), and it uses genuinely useful examples with real data (the eBay API, Facebook API etc. are used to collect data). In one chapter it even explains how you can draw graphs (I mean the data structure, not bar/line/etc. charts) optimally (so that no nodes are too close together, with minimal overlapping edges etc.), which could be useful for, for example, mapping social networks.
I would recommend having a look at it and see the different ways AI can be applied to web applications.
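As a taste of the style of technique the book walks through, here is a small sketch of user-based collaborative filtering with cosine similarity (Python standard library only; the data and names are made up):

    import math

    ratings = {
        "alice": {"matrix": 5, "dune": 4, "clueless": 1},
        "bob":   {"matrix": 4, "dune": 5, "alien": 4},
        "carol": {"clueless": 5, "titanic": 4, "matrix": 1},
    }

    def cosine(u, v):
        """Cosine similarity over sparse rating vectors (missing = 0)."""
        shared = set(u) & set(v)
        if not shared:
            return 0.0
        dot = sum(u[k] * v[k] for k in shared)
        return dot / (math.sqrt(sum(x * x for x in u.values())) *
                      math.sqrt(sum(x * x for x in v.values())))

    def recommend(user):
        """Score unseen items by similarity-weighted ratings of other users."""
        scores = {}
        for other, their in ratings.items():
            if other == user:
                continue
            sim = cosine(ratings[user], their)
            for item, r in their.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * r
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(recommend("alice"))  # "alien" ranks above "titanic" for alice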
As a counter-example, parsing data acquired from water testing equipment would probably be a bad place to use artificial intelligence:
The Daily WTF: No, We Need a Neural Network
Just a reminder for all of us to choose the right tool for the right job.
Neural networks are great for working on images, so one area of web applications where you could use AI would be identifying and/or manipulating patterns in images across large data sets. For example, a site like Flickr or Facebook would have some interesting training material for identifying people by their faces, or for associating groupings of pixels (those being the features you work with) with certain items mentioned in captions or tags.
In terms of text manipulation, there's a lot of stuff, but it's usually icing on the cake for other web apps. I'm talking mostly in the areas of automatic completion in search bars and back-end things the user doesn't usually see, like automatic machine translation or improved search capability.
The problem with putting AI at the front of an application's offering is that artificial intelligence is usually not a feature in and of itself, but rather a way of negotiating large data sets effectively without regular prompts from the designer. In general, a user interacts with an application on a one-to-one basis, and therefore judges it only on the quality of a relatively small number of responses.
Email spam filtering systems - definitely.
Any other security applications which need to spot patterns for malicious stuff.
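For instance, the core of a classical Bayesian spam filter fits in a few lines (a toy Python sketch; the word counts are invented, and real filters use far more features and careful training):

    # Per-word counts from previously labelled spam and ham messages.
    spam_counts = {"viagra": 40, "free": 30, "meeting": 1}
    ham_counts = {"meeting": 25, "free": 5, "report": 20}
    spam_total, ham_total = 100, 100

    def spamminess(words):
        """Naive-Bayes odds ratio; > 1 means 'more likely spam'."""
        score = 1.0  # prior odds assumed 1:1
        for w in words:
            p_spam = (spam_counts.get(w, 0) + 1) / (spam_total + 2)  # Laplace
            p_ham = (ham_counts.get(w, 0) + 1) / (ham_total + 2)     # smoothing
            score *= p_spam / p_ham
        return score

    print(spamminess(["free", "viagra"]))     # much greater than 1: spam
    print(spamminess(["meeting", "report"]))  # much less than 1: ham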
You could probably analyze the behavior of your web application's visitors: how they navigate the site, so as to provide a better, optimized interface. It depends on what kind of web applications you're working on. For online shopping, you can come up with suggestions extrapolated from customers' habits.
You can also detect "abnormal" behavior and fraud. Fraud and bot detection can take advantage of AI.
Forecasting, of course.
It has immense value for businesses (e.g. inventory optimization) and is especially valuable in times of global crisis.
Games do need AI.
Expert systems too.
Outside of games, I've seen very few commercial uses of AI.
It could, in theory, be very useful in industrial robotics and imaging, but those fields also tend to be very conservative, and uncomfortable with non-deterministic algorithms.
You might want to research what iRobot does, but even they use rather simple algorithms in their commercial robots.
In the area of cognitive architectures (e.g. Soar, ACT-R, etc.), rather than concentrating on algorithms like A* and on games, researchers investigate models of human behavior, including decision-making, cultural interchange and learning. They often focus on cognitive plausibility, i.e. how closely a model tracks what a human would do, including timing, etc.
These systems tend to be strictly research-based with limited commercial applications. So far anyway. Military applications, especially for training, are fairly common though.
Image processing for detecting cancer! (We actually implement algorithms from IEEE papers; devising the algorithms is much harder than coding them, so we write papers about the performance of the algorithms in other papers.)
Risk assessment is a pretty good case for neural networks, mostly because they're pretty good at pattern matching. Insurance and credit companies use them to some degree for determining the risk of a customer.
I have done some extensive research on using artificial neural networks for the classification of underwater sound sources. The algorithm seemed to work quite well, especially since I devoted a big portion of the work to figuring out which combination of Fourier transform coefficients made up the best feature set for the classification (with Principal Component Analysis).
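A compressed sketch of that pipeline (Python, with NumPy and scikit-learn assumed; the "sounds" are synthetic, and a k-nearest-neighbour classifier stands in for the neural network):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 256, endpoint=False)
    y = rng.integers(0, 2, 100)  # two classes of synthetic "sound" snippets
    X_time = np.array([np.sin(2 * np.pi * (5 if label else 12) * t)
                       + 0.5 * rng.standard_normal(256) for label in y])

    X_freq = np.abs(np.fft.rfft(X_time))               # Fourier coefficients
    X_red = PCA(n_components=8).fit_transform(X_freq)  # best linear mix of them

    clf = KNeighborsClassifier().fit(X_red[:80], y[:80])
    print("held-out accuracy:", clf.score(X_red[80:], y[80:]))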
Anything (seriously):
http://highlevellogic.blogspot.com/2010/09/high-level-logic-rethinking-software.html
The High Level Logic (HLL) Open Source project is about finding and coding the high-level logic under which all other AI (and, in fact, all programming) fits. There are serious concrete ideas and code; HLL is already an application framework.

Resources