OpenAI Gym and Gazebo to test an RL algorithm for robotics? - benchmarking

If I want to investigate an RL algorithm for robotics, how should I be using Gazebo and OpenAI Gym to test, train and benchmark the algorithm? Should I start with OpenAI Gym and take algorithms with good scores into the Gazebo environment for real-world scenarios?

Factors to consider when picking a framework to work with:
How much time will it require to get up to speed with whatever you choose?
Will performance in the environment reliably predict performance on an actual robot?
How much will it cost?
If you want your approach to eventually work on an actual robot, you should be testing in an environment that closely simulates your target environment and platform. OpenAI Gym's first-party robot simulation environments use MuJoCo, which is not free. Further, these simulations are more for toy control setups than actual robotics problems. You would be better served writing ROS nodes and simulating your problem in Gazebo. You might also look at something like Erle Robotics' gym-gazebo tool, which lets you bridge between Gym and ROS.
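gym-gazebo exposes Gazebo/ROS simulations through the standard Gym interface, so the usual Gym training loop applies unchanged. A minimal sketch (the environment name is one of the examples shipped with the gym-gazebo project; substitute whichever matches your robot and world):

```python
import gym
import gym_gazebo  # registers the Gazebo-backed environments with Gym

# Example environment from the gym-gazebo project; replace with your own.
env = gym.make('GazeboCircuit2TurtlebotLidar-v0')

for episode in range(10):
    observation = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # replace with your RL policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode %d: reward %.2f" % (episode, total_reward))

env.close()
```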

Related

AWS SageMaker custom user algorithms: how to take advantage of extra instances

This is a fundamental AWS SageMaker question. When I run training with one of SageMaker's built-in algorithms, I am able to take advantage of the massive speedup from distributing the job to many instances by increasing the instance_count argument of the training algorithm. However, when I package my own custom algorithm, increasing the instance count seems to just duplicate the training on every instance, leading to no speedup.
I suspect that when I package my own algorithm there is something special I need to do to control how it handles the training differently on a particular instance inside my custom train() function (otherwise, how would it know how the job should be distributed?), but I have not been able to find any discussion of how to do this online.
Does anyone know how to handle this? Thank you very much in advance.
Specific examples:
=> It works well with a standard algorithm: I verified that increasing train_instance_count in the first documented SageMaker example speeds things up here: https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-train-model-create-training-job.html
=> It does not work with my custom algorithm. I tried taking the standard sklearn build-your-own-model example, adding a few extra sklearn variants inside the training, and printing out the results to compare. When I increase the train_instance_count that is passed to the Estimator object, it runs the same training on every instance, so the output is duplicated across instances (the printed results are identical) and there is no speedup.
This is the sklearn example base: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb . The third argument of the Estimator object partway down in this notebook is what lets you control the number of training instances.
Distributed training requires having a way to sync the results of the training between the training workers. Most traditional libraries, such as scikit-learn, are designed to work with a single worker and can't simply be used in a distributed environment. Amazon SageMaker distributes the data across the workers, but it is up to you to make sure that the algorithm can benefit from the multiple workers. Some algorithms, such as Random Forest, make it easy to take advantage of the distribution, as each worker can build a different part of the forest, but other algorithms need more help.
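One way to make a bring-your-own container aware of its peers is to read the cluster layout that SageMaker writes into every training container and shard the work by host. A minimal sketch, assuming the documented resourceconfig.json location; load_training_records() and update_model() are hypothetical placeholders for your own code:

```python
import json

# SageMaker writes the cluster layout to this file inside the container;
# "current_host" tells each worker who it is.
RESOURCE_CONFIG = "/opt/ml/input/config/resourceconfig.json"

def get_shard_info():
    with open(RESOURCE_CONFIG) as f:
        config = json.load(f)
    hosts = sorted(config["hosts"])            # e.g. ["algo-1", "algo-2"]
    rank = hosts.index(config["current_host"])
    return rank, len(hosts)

def train():
    rank, world_size = get_shard_info()
    # Naive sharding: each worker trains on every world_size-th record.
    # You still need a way to combine the resulting models afterwards
    # (averaging, bagging, a parameter server, ...), which is the part
    # SageMaker does not do for you with a custom algorithm.
    for i, record in enumerate(load_training_records()):  # hypothetical loader
        if i % world_size == rank:
            update_model(record)                          # hypothetical update
```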
Spark MLlib has distributed implementations of popular algorithms such as k-means, logistic regression, and PCA, but these implementations are not good enough for some cases. Most of them were too slow, and some even crashed when a lot of data was used for training. The Amazon SageMaker team reimplemented many of these algorithms from scratch to benefit from the scale and economics of the cloud (20 hours of one instance costs the same as 1 hour of 20 instances, just 20 times faster). Many of these algorithms are now more stable and much faster, scaling linearly or better. See more details here: https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html
For the deep learning frameworks (TensorFlow and MXNet), SageMaker uses each framework's built-in parameter server, but it takes on the heavy lifting of building the cluster and configuring the instances to communicate with it.
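With the framework estimators this is mostly transparent: raise the instance count and the SDK builds the cluster for you. A hedged sketch against the SageMaker Python SDK of that era (v1 argument names, matching the train_instance_count mentioned above; the entry point, role, and instance type are placeholders):

```python
from sagemaker.tensorflow import TensorFlow

# Hypothetical entry point and role; train_instance_count > 1 makes the
# SDK launch a cluster and wire the workers to a shared parameter server.
estimator = TensorFlow(entry_point='train.py',
                       role='SageMakerRole',
                       training_steps=10000,
                       evaluation_steps=100,
                       train_instance_count=2,
                       train_instance_type='ml.p2.xlarge')

estimator.fit('s3://my-bucket/my-training-data')  # placeholder S3 path
```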

How to make bots learn from experience

I am writing a bot for an RTS game.
I am using fuzzy logic to evaluate the current position (mine and the enemy's) and to issue commands.
I have a couple of fuzzy variables: military_buildings, civilian_building, army_power, enemy_power and distance. I also have a couple of fuzzy linguistic values like VERY_GOOD, GOOD, NORMAL, BAD, VERY_BAD.
My next task is to make the bots learn, so that they don't all behave the same way. Any advice or ideas on how to solve this?
One option is to use a GA for tuning the parameters (but I don't know the ratings of the players, so I can't tell whether the bot won against a weak player or lost to a strong one).
Does anyone have experience with similar problems? (I can change the implementation and replace the fuzzy logic if there is an easier way for bots to learn from experience.)
Have a look at reinforcement learning. Here are a quick preview and a book that can help you.
Based on your description, this is what I'd use :)
The idea of using GAs to tune the parameters of the fuzzy linguistic variables is a good one (I wish I'd thought of it!); the fuzzy logic gives you a nice continuous response curve while the GA searches through a large solution space. I think it's definitely a strategy worth pursuing; you should write up your results.
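A minimal sketch of that strategy: flatten the membership-function boundaries of the fuzzy variables into a genome and let a GA search over them. All numbers are illustrative, and the fitness function is a stand-in for actually playing games with the tuned bot:

```python
import random

# Sketch only: a genome is a flat list of membership-function boundaries,
# e.g. where BAD ends and NORMAL begins for army_power.
GENOME_LENGTH = 10       # e.g. 5 fuzzy variables x 2 boundary points each
POPULATION_SIZE = 20
MUTATION_RATE = 0.1

def fitness(genome):
    # Stand-in for the real fitness: play one or more games with a bot
    # configured from this genome and score the result (e.g. win rate).
    # Here we just reward genomes close to a made-up "ideal" tuning.
    ideal = [0.5] * GENOME_LENGTH
    return -sum((g - i) ** 2 for g, i in zip(genome, ideal))

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)   # one-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [g + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve(generations=50):
    population = [[random.random() for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]   # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```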
If I were you, I would look at the annual AIIDE StarCraft Competition; it is sponsored in part by AAAI, so there are some really high-quality approaches to this problem, particularly if you are concerned with higher-level reasoning like resource management (see the StarCraft Competition site). The competitors' source code is all available open source, so if you want to check out other techniques, I recommend it. FYI, most of the top competitors for this type of problem have historically used some variant of a probabilistic state machine (see the paper on probabilistic FSMs), so this may make a good test bed for parameter tuning. This is also the approach used by some of the top game AI middleware, like XAIT.

Where can I get a very simple introduction to all Artificial Intelligence techniques, with real-world examples?

I know that the Artificial Intelligence field is very vast and there are many books on it. But I just want to know of a resource where I can get a simple introduction to all the Artificial Intelligence techniques.
I would like a 1 or 2 page introduction to each technique, with examples of how it can be applied or for what purpose it can be used. I am interested in:
Backpropagation Algorithm
Hebb's Law
Bayesian networks
Markov Chain Models
Simulated Annealing
Tabu Search
Genetic Algorithms / Evolutionary Algorithms
Now there are many variants and more AI techniques besides, and each one has many books written on it. I am unable to decide which algorithms I can use unless I know what they are capable of doing.
So where can I find a 1-2 page introduction to them, with application examples?
Essentials of Metaheuristics covers several of these - I can't promise it'll cover all of them, but I know there's good stuff on simulated annealing and genetic algorithms in there. Probably at least a few of the others, but I'd have to re-download it to check. It's available for free download.
It can be a bit light on the theory, but it'll give you a straightforward description, some explanation of when you'd want to use each, and a lot of useful pseudocode.
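For a taste of that style, here is a generic simulated annealing skeleton (my own sketch, not code from the book): accept worse solutions with a probability that shrinks as the temperature cools.

```python
import math
import random

def simulated_annealing(initial, neighbor, energy,
                        t_start=1.0, t_end=1e-3, cooling=0.995):
    """Minimize energy() by random local moves with cooling acceptance."""
    current, current_e = initial, energy(initial)
    best, best_e = current, current_e
    t = t_start
    while t > t_end:
        candidate = neighbor(current)
        candidate_e = energy(candidate)
        delta = candidate_e - current_e
        # Always accept improvements; sometimes accept setbacks.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_e = candidate, candidate_e
            if current_e < best_e:
                best, best_e = current, current_e
        t *= cooling
    return best

# Toy usage: minimize f(x) = x^2 over the reals.
result = simulated_annealing(
    initial=10.0,
    neighbor=lambda x: x + random.uniform(-1, 1),
    energy=lambda x: x * x)
print(result)
```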
Here's an image illustrating local search (= tabu search without tabu) in the Drools Planner manual.
I am working on similar images for Greedy algorithms, brute force, branch and bound and simulated annealing.
As an example of a Genetic Algorithms implementation, I can give you this.
It's an API I developed for GAs, with one implementation for each operator and one concrete example problem solved (picking a (good) soccer team from ~600 players under a budget restriction). It's all set up so you can run it with mvn exec:java and watch it evolve in the console output. But you can implement your own problem structure, or even other operator methods (crossover, mutation, selection).

How to design the artificial intelligence of a fighting game (Street Fighter or Soul Calibur)?

There are many papers about ranged-combat artificial intelligences, like Killzone's (see this paper) or Halo's. But I've not been able to find much about fighting AI, except for this work, which uses neural networks to learn how to fight, which is not exactly what I'm looking for.
Western game AI seems heavily focused on FPSs! Does anyone know which techniques are used to implement a decent fighting AI? Hierarchical Finite State Machines? Decision Trees? They could end up being pretty predictable.
In our research labs, we are using AI planning technology for games. AI planning is used by NASA to build semi-autonomous robots. Planning can produce less predictable behavior than state machines, but planning is a highly complex problem; solving planning problems has a huge computational complexity.
AI planning is an old but interesting field. Particularly in gaming, people have only recently started using planning to run their engines. The expressiveness of current implementations is still limited, but in theory it is limited "only by our imagination".
Russell and Norvig devote four chapters to AI planning in their book on Artificial Intelligence. Other related terms you might be interested in are Markov Decision Processes and Bayesian Networks. These topics also receive sufficient exposure in that book.
If you are looking for a ready-made engine that is easy to start using, AI planning would probably be gross overkill. I don't know of any AI planning engine for games, but we are developing one. If you are interested in the long term, we can talk separately about it.
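To make the planning idea concrete, here is a toy STRIPS-style forward-search planner; the facts and action names are invented for illustration, and a real game planner would add heuristics and much richer state:

```python
from collections import deque

# Each action maps a name to (preconditions, add list, delete list).
# States are frozensets of facts.
ACTIONS = {
    "approach":  ({"far"}, {"close"}, {"far"}),
    "light_hit": ({"close"}, {"enemy_staggered"}, set()),
    "throw":     ({"close", "enemy_staggered"}, {"enemy_down"},
                  {"enemy_staggered"}),
}

def plan(start, goal):
    """Breadth-first search over states; returns a list of action names."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"far"}, {"enemy_down"}))  # -> ['approach', 'light_hit', 'throw']
```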
You already seem to know the techniques for planning and execution. Another thing you need to do is predict the opponent's next move and maximize the expected reward of your response. I wrote blog articles about this: http://www.masterbaboon.com/2009/05/my-ai-reads-your-mind-and-kicks-your-ass-part-2/ and http://www.masterbaboon.com/2009/09/my-ai-reads-your-mind-extensions-part-3/ . The game I consider is very simple, but I think the main ideas from Bayesian decision theory could be useful for your project.
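The core of that prediction idea fits in a few lines: count the opponent's observed moves, turn the counts into a distribution, and play the response with the highest expected payoff. A sketch with a made-up move set and payoff matrix:

```python
# Sketch of a Bayesian-flavoured opponent model; the moves and the
# payoff matrix below are invented for illustration.
MOVES = ["attack", "block", "throw"]
PAYOFF = {                      # PAYOFF[mine][theirs] -> my reward
    "attack": {"attack": 0, "block": -1, "throw": 1},
    "block":  {"attack": 1, "block": 0,  "throw": -1},
    "throw":  {"attack": -1, "block": 1, "throw": 0},
}

class OpponentModel:
    def __init__(self):
        # Laplace smoothing: start at 1 so unseen moves keep some probability.
        self.counts = {m: 1 for m in MOVES}

    def observe(self, move):
        self.counts[move] += 1

    def distribution(self):
        total = sum(self.counts.values())
        return {m: c / total for m, c in self.counts.items()}

def best_response(model):
    dist = model.distribution()
    def expected(mine):
        return sum(p * PAYOFF[mine][theirs] for theirs, p in dist.items())
    return max(MOVES, key=expected)

model = OpponentModel()
for seen in ["attack", "attack", "throw", "attack"]:
    model.observe(seen)
print(best_response(model))   # mostly attacks observed -> "block"
```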
I have reverse engineered the routines related to the AI subsystem within the Street Fighter II series of games. It does not incorporate any of the techniques mentioned above. It is entirely reactive and involves no planning, learning or goals. Interestingly, there is no "technique weight" system that you mention, either. They don't use global weights for decisions to decide the frequency of attack versus block, for example. When taking apart the routines related to how "difficulty" is made to seem to increase, I did expect to find something like that. Alas, it relates to a number of smaller decisions that could potentially affect those ratios in an emergent way.
Another route to consider is the so-called Ghost AI, as described here & here. As the name suggests, you basically extract rules from actual gameplay; the first paper does it offline, and the second extends the methodology to online, real-time learning.
Also check out the author's webpage; there are a number of other interesting papers on fighting games there.
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/index-R.html
It's old, but here are some examples.

What Artificial Neural Network or 'Biological' Neural Network library/software do you use?

What do you use?
Fast Artificial Neural Network Library (FANN) is a free open-source neural network library which implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings are available for PHP, C++, .NET, Ada, Python, Delphi, Octave, Ruby, Prolog, Pure Data and Mathematica.
FannTool, a graphical user interface for the library, is also available.
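For a flavour of the library, here is the classic XOR training setup from FANN's Python binding examples ("xor.data" is a training file in FANN's text format that you supply; the hyperparameters are the usual example values):

```python
from pyfann import libfann

connection_rate = 1          # fully connected network
num_neurons = (2, 4, 1)      # input, hidden, output layer sizes
desired_error = 0.0001
max_iterations = 100000
iterations_between_reports = 1000

# Build the network and pick training settings.
ann = libfann.neural_net()
ann.create_sparse_array(connection_rate, num_neurons)
ann.set_learning_rate(0.7)
ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC_STEPWISE)

# Train on a FANN-format data file and save the result.
ann.train_on_file("xor.data", max_iterations,
                  iterations_between_reports, desired_error)
ann.save("xor.net")
```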
There are a lot of different network simulators, depending on how detailed you want your simulation to be and what kind of network you want to simulate.
NEURON and GENESIS are good if you want to simulate full biological networks (which I'm guessing you probably don't), even down to the behaviour of dendrites etc.
NEST and SPLIT and some others are good for doing population simulations, where you create the population on a node-by-node basis and see what the whole population does. This is pretty much the 'industry' standard approach and is used a lot in research and commercial applications, so they are worth looking into. I know that IBM uses SPLIT for some of its research.
MIIND is good if you want to use differential equations to model what a population would do, but this approach is relatively new and computationally expensive (if very cool).
Not sure if that is exactly what you wanted!
(N.B. if you google any of the names in caps along with the word "simulator" you will end up at the relevant web page =)
Whenever I've wanted to play around with any data mining algorithm quickly, I just load up Weka. It's pretty complex but it implements a lot of algorithms (including neural networks) with a lot of customizability. Plus, it has some visualizations for NNs.
It is old, but I have always used NeuroShell 2 when not using my own code. Unfortunately, it is not free. I think the newer NeuroShells are designed only for predicting stocks.
If you're looking to experiment with deep learning, you should look into:
Theano
Pylearn2 (which is based on Theano)
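To get a feel for Theano, here is its well-known logistic regression example (slightly condensed): you declare symbolic variables, ask Theano for the gradients, and compile a training function.

```python
import numpy
import theano
import theano.tensor as T

rng = numpy.random
N, feats = 400, 784                      # toy dataset: 400 random samples
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))

x = T.dmatrix("x")
y = T.dvector("y")
w = theano.shared(rng.randn(feats), name="w")   # model parameters
b = theano.shared(0., name="b")

p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))         # P(y = 1 | x)
xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)
cost = xent.mean() + 0.01 * (w ** 2).sum()      # cross-entropy + L2 penalty
gw, gb = T.grad(cost, [w, b])                   # symbolic gradients

train = theano.function(
    inputs=[x, y],
    outputs=xent.mean(),
    updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])

for _ in range(1000):
    train(D[0], D[1])
```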
