This is a conceptual question. I come from a computer vision background, where object detection DNNs are trained on a predefined dataset such as COCO, NYU-D, etc., and the trained DNN can then predict results for an input image.
However, in the case of deep reinforcement learning, I am unable to find a dataset that can train deep RL networks. Rather, I find resources that talk about environments for training.
So the question is: are deep RL networks required to be trained using environments only, or is it possible to train them similarly to object detection DNNs, i.e., by using some sort of dataset?
This is a very common confusion in the AI community. Long story short, reinforcement learning (RL) methods require feedback (reward, state) from the environment based on the action chosen by the RL agent, and a static dataset cannot provide that feedback. You can think of RL as a closed-loop feedback system, whereas supervised learning (e.g., an object detection DNN) is an open-loop feedforward system.
To help you understand RL better: RL methods learn incrementally from interaction with the environment, in the following steps:
1. Initialize the RL agent's policy and/or value functions.
2. Initialize the state the RL agent starts from.
3. The RL agent determines an action based on the current state.
4. The action is applied to the environment.
5. The environment reacts to the action: the state is updated and a reward is generated.
6. The state and reward from the environment are transmitted to the RL agent.
7. The RL agent updates its policy and/or value functions based on the state and reward feedback.
8. Go back to step 3 and repeat (a minimal code sketch of this loop follows below).
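Here is a minimal sketch of that loop in Python, written against the classic OpenAI Gym API (newer Gym/Gymnasium releases return slightly different tuples from reset() and step()); the agent below is only a placeholder that picks random actions, so you would swap in a real learning rule:

import gym  # classic OpenAI Gym API

class RandomAgent:
    """Placeholder agent: picks random actions and learns nothing.
    A real RL agent would implement a learning rule (e.g. Q-learning) in update()."""
    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, state):
        # Step 3: determine an action based on the current state.
        return self.action_space.sample()

    def update(self, state, action, reward, next_state, done):
        # Step 7: update policy and/or value functions from the feedback.
        pass

env = gym.make("CartPole-v1")          # the environment (example task)
agent = RandomAgent(env.action_space)  # step 1: initialize the agent

state = env.reset()                    # step 2: initial state
for t in range(500):
    action = agent.act(state)                               # step 3
    next_state, reward, done, info = env.step(action)       # steps 4-6: env reacts, returns state and reward
    agent.update(state, action, reward, next_state, done)   # step 7
    state = next_state
    if done:
        state = env.reset()            # episode over: start again from step 2
env.close()

Note how the reward and next state only exist as a reaction to the chosen action; that is the feedback a static dataset cannot supply.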
I suggest you briefly read the RL textbook by Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction. You can download it for free here: https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
Unlike supervised DNN training, RL requires an environment to interact with in order to train. That said, you can look further into offline RL (which learns from a fixed dataset of previously collected interactions) versus online RL (which learns from live interaction).
Related
For my master's thesis, I will have to run inference with a pre-built / pre-trained (with TensorFlow) deep neural network model. I received it in two different formats (HDF5/.h5 and a frozen graph, .pb). The inference is to be done on a cluster; so far we only have a GPU version (with TensorRT and a UFF model) running. So my first job seems to be to run inference on a single CPU before making it usable on the cluster.
We are using the model within computational fluid dynamics (CFD) simulations; that is also my academic background, and as you can imagine, I have only a little knowledge of deep learning. Anyway, it is not my job to change or train the model, but just to use it for inference. Our CFD code is written in C++, which is the only programming language I use at an advanced level (obviously it is no problem to use C, but I have no idea of Python).
After going through many Google searches, I realized that I do not really know how to start. I thought it would be possible to skip all the training and TensorFlow stuff. I know how neural networks work and how they calculate their output values from their input values. I also have the most important theoretical knowledge, but no programming knowledge in this field. Is it somehow possible to use the model they gave me (either hdf5/h5 or frozen graph) and build inference code using exclusively C or C++? I have already found the C API and installed it within a Docker container (where I also have TensorFlow), but I am really not sure what the next step is. What can I do with the C API? How would you write C/C++ code to run inference with a DNN model that is ready to be used for inference?
OpenCV provides tools to run deep learning models, but they are limited to the computer vision field. See here.
You can perform classification, object detection, face detection, text detection, segmentation, and so on using the API provided by OpenCV. These examples are fairly straightforward.
Both Python and C++ versions are available.
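As a rough sketch of what that looks like, here is the Python version of loading a frozen TensorFlow graph and running a forward pass with OpenCV's dnn module. The file names, input size, scale factor, and mean values are placeholders that depend on how your model was trained; the C++ API mirrors these calls (cv::dnn::readNetFromTensorflow, cv::dnn::blobFromImage, Net::setInput, Net::forward):

import cv2  # OpenCV with the dnn module

# Load the frozen TensorFlow graph (.pb). The path is a placeholder.
net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")

# Read and preprocess an input image. Input size, scale factor and mean
# values depend on the model's training pipeline; these are only examples.
image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255.0,
                             size=(224, 224), mean=(0, 0, 0),
                             swapRB=True, crop=False)

# Run a forward pass and inspect the raw output tensor.
net.setInput(blob)
output = net.forward()
print(output.shape)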
If I want to investigate an RL algorithm for robotics, how should I be using Gazebo and OpenAI Gym to test, train and benchmark the algorithm? Should I start with OpenAI Gym and take algorithms with good scores into the Gazebo environment for real-world scenarios?
Factors to consider when picking the framework to work within
How much time will it require to get up to speed with whatever you choose?
Will performance in the environment reliably predict performance on an actual robot?
How much will it cost?
If you want your approach to eventually work on an actual robot, you should be testing in an environment that closely simulates your target environment and platform. OpenAI Gym's first-party robot simulation environments use MuJoCo, which is not free. Further, these simulations are more for toy control setups than actual robotics problems. You would be better served by writing ROS nodes and simulating your problem in Gazebo. You might also look at something like Erle Robotics' gym-gazebo tool, which lets you bridge between Gym and ROS (see the sketch below).
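As a rough illustration of what that bridge buys you (the environment ID below is only a hypothetical example; use whichever IDs your gym-gazebo installation actually registers): the training loop stays the ordinary Gym reset/step loop, while Gazebo and ROS handle the simulation behind the scenes.

import gym
import gym_gazebo  # assumed installed and configured; registers Gazebo-backed environments with Gym

# Hypothetical environment ID for illustration only.
env = gym.make("GazeboTurtlebotLidar-v0")

state = env.reset()
for step in range(100):
    action = env.action_space.sample()             # replace with your RL policy
    state, reward, done, info = env.step(action)   # Gazebo simulates the robot behind this call
    if done:
        state = env.reset()
env.close()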
I am implementing a surveillance system and I am looking for an algorithm or any resource that could help me with activity recognition, for activities like punching, kicking, etc., so that when someone kicks or punches in a recorded video, the system can recognize that activity.
Human activity/action recognition is one of the hottest topics in ML.
There are challenges and benchmarks like THUMOS, ActivityNet, and UCF-101 where this topic is widely studied.
Some of the winners of these challenges have released their code; you can take a look at it. They have also described how to train your own network with a custom dataset and labels. If you don't find the actions you need in those label sets, you can build a dataset and train their networks on it.
CUHK
Activity Localisation
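If you want to experiment before training anything yourself, here is a minimal sketch (assuming torchvision is available; the random tensor stands in for a real preprocessed clip) of running a video action recognition model pretrained on Kinetics-400:

import torch
from torchvision.models.video import r3d_18

# 3D ResNet-18 pretrained on the Kinetics-400 action recognition dataset
# (older torchvision versions use pretrained=True instead of weights=).
model = r3d_18(weights="DEFAULT")
model.eval()

# Placeholder input: one clip of 16 RGB frames at 112x112 pixels, shaped
# (batch, channels, frames, height, width). A real pipeline would load and
# normalize actual video frames here.
clip = torch.randn(1, 3, 16, 112, 112)

with torch.no_grad():
    scores = model(clip)               # one score per Kinetics-400 class
print(scores.argmax(dim=1).item())     # index into the Kinetics-400 label list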
I am wondering what type of machine learning it uses and whether someone could explain it to me. I have researched the different types and am unable to distinguish which type it is due to my lack of knowledge.
While it's hard to say for sure, the fact that they're using deep learning on GPUs, pointing towards neural networks [1], seems to suggest that they're using a combination of unsupervised and supervised learning: the latter for bootstrapping the bot, and the former to learn on the job.
[1] http://www.existor.com/ai-parallel
You could call it weakly supervised learning, since we do not have exact labels but do have intended document types. I don't know exactly how Cleverbot works, but the state of the art takes a large amount of documents and models the sequence relations of words using Recurrent Neural Networks, and LSTMs in particular. It used to be Hidden Markov Models, but Deep Learning has changed the game. If you search for NLP with Recurrent Neural Networks, Google returns plenty of information.
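To make the RNN/LSTM idea concrete, here is a minimal word-level next-word model sketch in Keras; the vocabulary size, sequence length, layer sizes, and the random dummy data are all placeholder assumptions, and a real chatbot pipeline would add tokenization and a decoding strategy on top:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000   # placeholder vocabulary size
seq_len = 20         # placeholder context length in words

# Predict the next word from the previous seq_len words.
model = keras.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=128),
    layers.LSTM(256),                               # models sequential relations between words
    layers.Dense(vocab_size, activation="softmax")  # distribution over the next word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data just to show the expected shapes; real training uses tokenized text.
x = np.random.randint(0, vocab_size, size=(32, seq_len))
y = np.random.randint(0, vocab_size, size=(32,))
model.fit(x, y, epochs=1, verbose=0)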
Is a neural network a lazy or eager learning method? Different web pages say different things so I want to get a solid answer with good literature to back it up. The most obvious book to look in would be Mitchell's famous Machine Learning book but skimming through the whole thing I can't see the answer. Thanks :).
Looking at the definitions of the terms lazy and eager learning, and knowing how a neural network works, I believe it is clear that it is eager. A trained network is a generalisation function: all the weights and paths used to arrive at a classification are entirely determined by the training data, but the training data itself is not retained for the purposes of decision making.
An important distinction is that a lazy system stores its training data and uses it directly to determine a solution. An eager system determines a function from the training data, and thereafter the training data is no longer required. That is to say, you cannot determine what the training data was from an eager system's function. A neural network certainly fits that description. An eager system can therefore be very storage efficient, but conversely it is opaque, in the sense that it is not possible to determine how or why it arrived at a particular solution, so problems of poor or inappropriate training data may be difficult to deal with.
The eager learning article linked above even gives artificial neural networks as an example. You might, of course, prefer a cited text to Wikipedia, but the page has existed with that assertion since 2007 without contradictory edits, so I'd say it is pretty robust.
Some neural networks are eager learners, and some are lazy. Feedforward neural networks (as are commonly trained by some variant of backpropagation) are eager: they attempt to derive a representation of the underlying relationships in the data at the time of training. Radial basis function networks (such as probabilistic NN or generalized regression NN), on the other hand, are lazy learners (very much like k-nearest neighbors, the classic lazy learner).
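The contrast is easy to demonstrate with scikit-learn (a small illustrative sketch on synthetic data): the MLP is eager, distilling the training set into weights at fit time, while k-nearest neighbors is lazy, merely storing the training set and deferring the real work to prediction time.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Eager: fit() does the real work, distilling the data into weight matrices;
# the training set itself is not needed afterwards to predict.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X, y)

# Lazy: fit() essentially just stores X and y; the neighbor search
# (the real work) happens at predict() time.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

print(mlp.predict(X[:5]), knn.predict(X[:5]))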
A neural network is generally considered to be an "eager" learning method.
"Eager" learning methods are models that learn from the training data in real-time, adjusting the model parameters as new examples are presented. Neural networks are an example of an eager learning method because the model parameters are updated during the training process, as the algorithm iteratively processes the training examples. This allows the model to adapt and improve its performance as more examples are seen.
On the other hand, "lazy" learning methods, also known as instance-based or memory-based learning, only learn from the training data when a new example is presented. The model does not update its parameters during the training process but instead, it memorizes the training data and uses it to make predictions. Lazy learning methods typically require less computation time to make predictions than eager learning methods, but they may not perform as well on unseen data.
In general, neural networks are considered eager learning methods because their parameters are updated during the training process.
Here are a few literature references:
"Eager Learning vs. Lazy Learning" by R. S. Michalski, J. G. Carbonell, and T. M. Mitchell. This paper provides a comprehensive overview of the distinction between eager and lazy learning, and discusses the strengths and weaknesses of each approach. It was published in Machine Learning, 1983.
"An overview of instance-based learning algorithms" by A. K. Jain and R. C. Dubes. This book chapter provides an overview of the main concepts and techniques used in instance-based or lazy learning, and compares them to other types of learning algorithms, such as decision trees and neural networks. It was published in "Algorithms for Clustering Data" by Prentice-Hall, Inc. in 1988.
" Machine Learning" by Tom Mitchell. This book provides a comprehensive introduction to the field of machine learning, including the concepts of eager and lazy learning. It covers a wide range of topics, from supervised and unsupervised learning to deep learning and reinforcement learning. It was published by McGraw-Hill Education in 1997.
"Introduction to Machine Learning" by Alpaydin, E. This book provides an introduction to the field of machine learning, including the concepts of eager and lazy learning, as well as a broad range of machine learning algorithms. It was published by MIT press in 2010
It's also worth noting that this classification into lazy and eager learning is not always clear-cut and can be somewhat subjective; some algorithms can belong to either category, depending on the specific implementation.