Using Neural Networks Without Training Them

My task for my university assignment is to create AI for a "MOBA" style strategy game. I have looked into using neural networks for this. I cannot see any need to train the network beforehand.
In other words, would it still be considered a neural network if I hard-code the weights and simply apply minor weight changes at runtime?

A neural network is one thing (a structure of neurons, synapses, etc.) and the learning algorithm is a completely different thing.
So if you are asking whether a NN without a learning algorithm can still be called a NN, I think the answer is yes, it can.
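As a minimal illustration of that distinction, here is a sketch in Python/NumPy of a tiny feedforward network whose weights are hard-coded rather than learned, plus a helper that nudges them at runtime. The layer sizes, weight values and the "attack/retreat" interpretation are all made up for the example, not taken from the question.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Hand-picked weights and biases for a 3-input, 4-hidden, 2-output network.
    # There is no training loop anywhere: the behaviour lives in these numbers.
    W1 = np.array([[ 0.5, -0.2,  0.1,  0.7],
                   [-0.3,  0.8,  0.4, -0.1],
                   [ 0.2,  0.1, -0.6,  0.3]])
    b1 = np.array([0.0, 0.1, -0.1, 0.0])
    W2 = np.array([[ 0.6, -0.4],
                   [-0.2,  0.5],
                   [ 0.3,  0.3],
                   [-0.5,  0.2]])
    b2 = np.array([0.0, 0.0])

    def forward(inputs):
        """Plain forward pass: inputs -> hidden -> outputs (e.g. attack/retreat scores)."""
        hidden = sigmoid(inputs @ W1 + b1)
        return sigmoid(hidden @ W2 + b2)

    def nudge_weights(scale=0.01):
        """Minor runtime weight changes, as the question describes (random here)."""
        global W1, W2
        W1 = W1 + np.random.normal(0.0, scale, W1.shape)
        W2 = W2 + np.random.normal(0.0, scale, W2.shape)

    print(forward(np.array([1.0, 0.5, -0.2])))

The structure and the forward pass are exactly those of an ordinary neural network; only the mechanism that chooses the weights (a designer's hand plus small runtime nudges, instead of a learning algorithm) differs.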

Related

Is there a concept of neural networks that edit neural networks?

I think that neural networks could edit other neural networks, and if we combine them with evolutionary algorithms, we could make a strong artificial intelligence.
Is there a concept of a neural network editing itself? Can neural networks edit themselves?
There's really no point to this. Gradient descent is already a very good optimizer. There is something like this, though, called AutoML, which basically uses another neural network to tune a neural network's hyperparameters...
If a neural network's only input were its own weights, then how would it know that the error of the neural network is supposed to go down?
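To make the "evolutionary algorithms editing weights" idea concrete, here is a toy sketch (not how AutoML actually works): an outer loop proposes random mutations to a tiny network's weights and keeps a change only if it lowers the error, so no gradient is used and the network itself never "knows" its own error. The XOR data, layer sizes and mutation scale are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny network: 2 inputs -> 4 hidden -> 1 output, fit to XOR by random hill climbing.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(w, x):
        h = np.tanh(x @ w["W1"] + w["b1"])
        return 1.0 / (1.0 + np.exp(-(h @ w["W2"] + w["b2"])))

    def error(w):
        return np.mean((forward(w, X) - y) ** 2)

    weights = {"W1": rng.normal(size=(2, 4)), "b1": np.zeros(4),
               "W2": rng.normal(size=4), "b2": 0.0}

    # The "editor" is this outer loop, not the network: it mutates the weights
    # and keeps the mutation only if the error drops (roughly a (1+1) evolution strategy).
    for step in range(5000):
        candidate = {k: v + rng.normal(scale=0.1, size=np.shape(v)) for k, v in weights.items()}
        if error(candidate) < error(weights):
            weights = candidate

    print(error(weights), forward(weights, X).round(2))

The point of the answer stands: on problems where gradients are available, gradient descent will usually beat this kind of blind search by a wide margin.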

Master project: Video processing and data science: Finding a pattern on a dataframe of video

I am a master's student (in applied maths) and in one year I will start my master's project (which I am willing to continue into a PhD thesis).
The question is about the feasibility of my project and how I can improve or modify it!
My project is about sport videos (example: freestyle snowboard).
There are a lot of professional snowboarders who upload their tricks to the internet (which constitutes a huge database), and what I want to do is collect all the videos (I guess that won't be a problem) and try to find a pattern in the tricks (the figures made by the riders). By 'analyse them', I mean create a kind of artificial intelligence that first recognises the trick (I will construct a model for each trick) and then tries to give advice on how to improve it (by analysing the position you have before the jump and the position of your body in the air).
This AI could be useful for judges in contests and for learning snowboarders.
I tried to imagine how to do it even though I have not yet finished my master's, which is why I am asking the question here: Is it a totally impossible algorithm (because of the time it would require, or for other reasons)? Should I focus on one part of this project (I guess this project will mix different topics, so maybe I should just do one step of it)?
Sorry for the long post; thank you for reading this unusual question and I hope someone will have the answer to my problem.
In a broader perspective, you might want to use video intelligence. A video is just a number of frames, or images. These images could be fed to a convolutional neural network, but the network also needs to remember what it saw in earlier frames, so you need a recurrent neural network as well.
A hybrid of the above networks would be a deep convolutional recurrent neural network:
Place some Conv2D layers and feed them frames of the video.
Add LSTM layers.
Add Dense layers and the output layer.
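A rough Keras-style sketch of that recipe follows. The TimeDistributed wrapper, the frame count, frame size and number of trick classes are assumptions added for the example, not something stated in the answer:

    from tensorflow.keras import layers, models

    NUM_FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3   # assumed clip format
    NUM_TRICKS = 10                                        # assumed number of trick classes

    model = models.Sequential([
        # Conv2D layers applied to every frame of the clip independently.
        layers.TimeDistributed(layers.Conv2D(32, (3, 3), activation="relu"),
                               input_shape=(NUM_FRAMES, HEIGHT, WIDTH, CHANNELS)),
        layers.TimeDistributed(layers.MaxPooling2D((2, 2))),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM layer so the network remembers what it saw in earlier frames.
        layers.LSTM(64),
        # Dense layers and the output layer (one score per trick).
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_TRICKS, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()

Training such a model end to end needs a lot of labelled clips, which is why focusing first on trick recognition alone (before the "give advice" part) is a reasonable way to scope the project.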

Does ALICE use supervised or unsupervised machine learning

I am wondering what type of machine learning it uses and whether someone could explain it to me. I have researched the different types and am unable to distinguish which type it is due to my lack of knowledge.
While it's hard to say for sure, the fact that they're using deep learning on GPUs, pointing towards neural networks [1], seems to suggest that they're using a combination of unsupervised and supervised learning: the latter for bootstrapping the bot, and the former to learn on the job.
[1] http://www.existor.com/ai-parallel
I would call it weakly-supervised learning, since we do not have exact labels but do have intended document types. I don't know Cleverbot exactly, but the state of the art takes a large amount of documents and models the sequence relations between words using recurrent neural networks, LSTMs in particular. It used to be Hidden Markov Models, but deep learning has changed the game. If you search for NLP with recurrent neural networks, Google turns up plenty of information.
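For a concrete, if heavily simplified, picture of "modelling sequence relations between words with an LSTM", here is a hedged Keras sketch of a next-word predictor. The vocabulary size is a placeholder, and this is an illustration of the general technique, not of Cleverbot's actual model:

    from tensorflow.keras import layers, models

    VOCAB_SIZE = 10000   # assumed vocabulary size (word -> integer index)

    # Input: batches of integer word-index sequences (the preceding words);
    # output: a probability distribution over which word comes next.
    model = models.Sequential([
        layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),  # word vectors
        layers.LSTM(256),                                        # models word-order relations
        layers.Dense(VOCAB_SIZE, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")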

Is it theoretically possible to emulate a human brain on a computer? [closed]

Our brain consists of billions of neurons which basically work with all the incoming data from our senses, and handle our consciousness, emotions and creativity, as well as our hormone system, etc.
So I'm completely new to this topic, but doesn't each neuron have a fixed function? E.g.: if a signal of strength x enters, and the last signal was x ms ago, redirect it.
From what I've learned in biology about our nervous system (which includes our brain, since both consist of simple neurons), it seems to me that our brain is one big, complicated computer.
Maybe so complicated that things such as intelligence and cognition become possible?
As the most complicated things about a neuron are pretty much the chemical aspects of generating an electric signal, keeping itself alive, and eventually dividing, it should be pretty easy to emulate one on a computer, shouldn't it?
You won't have to worry about keeping your virtual neuron alive, will you?
If you can emulate a single neuron on a computer, which shouldn't be too hard, could you theoretically emulate more than 1000 billion of them, recreating intelligence, cognition and maybe even creativity?
In my question I'm leaving out the following aspects:
Speed of our current (super) computers
Actually writing a program for emulating neurons
I don't know much about this topic, please tell me if I got anything wrong :)
(My secret goal: Make a copy of my brain and store it on some 10 million TB HDD and make someone start it up in the future)
A neuron-like circuit can be built with a handful of transistors. Let's say it takes about a dozen transistors on average. (See http://diwww.epfl.ch/lami/team/vschaik/eap/neurons.html for an example.)
A brain-sized circuit would require 100 billion such neurons (more or less).
That's 1.2 trillion transistors.
A quad-core Itanium has 2 billion transistors.
You'd need a server rack with 600 quad-core processors to be brain-sized. Think $15M US to purchase the servers. You'll need power management and cooling, plus real estate, to support this mess.
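For what it's worth, the back-of-the-envelope arithmetic behind those numbers (using this answer's own estimates) is just:

    transistors_per_neuron = 12       # "about a dozen" per neuron-like circuit
    neurons = 100e9                   # rough neuron count of a human brain
    transistors_per_cpu = 2e9         # quad-core Itanium

    total_transistors = transistors_per_neuron * neurons    # 1.2 trillion
    cpus_needed = total_transistors / transistors_per_cpu   # 600
    print(total_transistors, cpus_needed)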
One significant issue in simulating the brain is scale. The actual brain only dissipates a few watts. Power consumption is 3 square meals per day. A pint of gin. Maintenance is 8 hours of downtime. Real estate is a 42-foot sailboat (22 Net Tons of volume as ships are measured) and a place to drop the hook.
A server cage with 600 quad-core processors uses a lot more energy, cooling and maintenance. It would require two full-time people to keep this "brain-sized" server farm running.
It seems simpler to just teach the two people what you know and skip the hardware investment.
Roger Penrose presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing-machine type of digital computer. If that's the case, you can forget about building a brain with a computer...
Simulating a neuron is possible and therefore theoretically simulating a brain is possible.
The two things that always stump me are input and output, though.
We have a very large number of nerve endings that all provide input to the brain. Without them the brain is useless. How can we simulate something as complicated as the human brain without also simulating the entire human body?
Output: once the brain has "dealt" with all of the inputs it gets, what is then the output from it? How could you say that the "copy" of your brain was actually you without again hooking it up to a real human body that could speak and tell you?
All in all, a fascinating subject!!!!
The key problem with simulating neural networks (and the human brain is a neural network) is that they function continuously, while digital computers function in cycles. So in a neural network different neurons operate independently in parallel, while on a computer you can only simulate discrete system states.
That's why adequately simulating real neural networks is very problematic at the moment and we're very far from it.
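To make the continuous-versus-discrete point concrete, here is a sketch of the usual workaround: a single leaky integrate-and-fire neuron whose continuous membrane equation is stepped forward in small discrete time slices. All the constants are illustrative, not biologically calibrated:

    # Euler integration of dV/dt = (-(V - V_rest) + R * I) / tau
    dt = 0.1          # ms, the discrete time step the computer is forced to use
    tau = 10.0        # ms, membrane time constant
    V_rest, V_reset, V_threshold = -65.0, -70.0, -50.0   # mV
    R = 10.0          # membrane resistance (arbitrary units)
    I = 2.0           # constant input current

    V = V_rest
    spike_times = []
    for step in range(1000):
        t = step * dt
        V += dt * (-(V - V_rest) + R * I) / tau
        if V >= V_threshold:      # threshold crossings are only noticed at step boundaries
            spike_times.append(t)
            V = V_reset

    print(f"{len(spike_times)} spikes in {1000 * dt:.0f} ms")

Everything that happens between two steps is invisible to the simulation, and simulating billions of such units in parallel means choosing between very small time steps (slow) or coarse ones (inaccurate).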
Yes, the Blue Brain Project is getting close, and I believe Moore's Law has a $1000 computer getting there by 2049.
The main issue is that our brains are based largely on controlling a human body, which means that our language comprehension and production, the basis of our high-level reasoning and semantic object recognition, is strongly tied to its potential and practiced outputs to a larynx, tongue, and face muscles. Further, our reward systems are tied to signals that indicate sustenance and social approval, which are not the goals we generally want a brain-based AI to have.
An exact simulation of the human brain will be useful in studying the effects of drugs and other chemicals, but I think that the next steps will be in isolating pathways that let us do things that are hard for computers (e.g. visual system, fusiform gyrus, face recognition), and developing new or modifying known structures for representing concepts.
Short: yes, we will surely be able to reproduce artificial brains someday, but no, it may not be with our current computing model (Turing machines), because we simply don't yet know enough about the brain to tell whether we need new kinds of computers (super-Turing machines or biologically engineered brains) or whether current computers (with more power/storage space) are enough to simulate a whole brain.
Long:
Disclaimer: I work in computational neuroscience research and I am interested in both the neurobiological side and the computational (artificial intelligence) side.
Most of the answers take as true the OP's postulate that simulating neurons is enough to capture the whole brain state and thus simulate a whole brain.
That's not true.
The brain is more than just neurons.
First, there is the connectivity, the synapses, which is of paramount importance, maybe even more so than the neurons.
Secondly, there are glial cells, such as astrocytes and oligodendrocytes, that also possess their own connectivity and communication systems.
Thirdly, neurons are heterogeneous, which means that there is not just one template model of a neuron that we could scale up to the required number to simulate a brain; we also have to define multiple types of neurons and place them pertinently in the right places. Plus, the types can be continuous, so in fact you can have neurons that are halfway between 3 different types...
Fourthly, we don't know much about the rules of the brain's information processing and management. Sure, we discovered that the cerebellum works pretty much like an artificial neural network using stochastic gradient descent, and that the dopaminergic system works like TD-learning, but then we have no clue about the rest of the brain; even memory is out of reach (we guess it's something close to a Hopfield network, sketched below, but there's no precise model yet).
Fifthly, there are so many other examples from current research in neurobiology and computational neuroscience showing the complexity of the brain's objects and network dynamics that this list could go on and on.
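For readers who haven't met it, a minimal Hopfield-style associative memory looks roughly like this; it is a toy sketch with made-up binary patterns, not a model of biological memory:

    import numpy as np

    # Store two +/-1 patterns with the Hebbian outer-product rule.
    patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                         [ 1,  1, -1, -1,  1,  1]])
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections

    def recall(state, steps=20):
        """Asynchronous updates: each unit flips to match the sign of its input."""
        state = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(n):
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    # A corrupted version of the first pattern is pulled back to the stored one.
    noisy = np.array([ 1, -1,  1,  1,  1, -1])
    print(recall(noisy))

The appeal of the analogy is content-addressable recall: a partial or noisy cue settles into the nearest stored pattern. Whether real memory circuits actually work this way is, as said above, still an open question.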
So in the end, your question cannot be answered, because we simply do not yet know enough about the brain to tell whether our current computers (Turing machines) are enough to reproduce the complexity of biological brains and give rise to the full spectrum of cognitive functions.
However, biology is getting closer and closer to computer science, as you can see with biologically engineered viruses and cells that are programmed pretty much like you develop a computer program, and gene therapies that basically re-engineer a living system based on its "class" template (the genome). So I dare say that once we know enough about the brain's architecture and dynamics, the in-silico reproduction won't be an issue: if our current computers cannot reproduce the brain because of theoretical constraints, we will devise new computers. And if only biological systems can reproduce the brain, we will be able to program an artificial biological brain (we can already 3D-print functional bladders, skin, veins, hearts, etc.).
So I would dare to say (even if it may be controversial; this is my own claim) that yes, artificial brains will surely be possible someday, but whether it will be as a Turing-machine computer, a super-Turing computer or a biologically engineered brain remains to be seen, depending on our progress in understanding the brain's mechanisms.
I don't think they are remotely close enough to understanding the human brain to even begin thinking about replicating it.
Scientists would have you think we are nearly there, but with regards to the brain we're not much further along than Dr. Frankenstein.
What is your goal? Do you want a program that can make intelligent decisions, or a program that provides a realistic model of how the human brain actually works? Artificial intelligence can be approached from the perspective of psychology, where the goal is to simulate the brain and thereby get a better understanding of how humans think, or from the perspective of mathematics, optimization theory, decision theory, information theory, and computer science, in which case the goal is to create a program that is capable of making intelligent decisions in a computationally efficient manner. The latter, I would say, is pretty much solved, although advances are definitely still being made. When it comes to a realistic simulation of the brain, I think we were only recently able to simulate a cat's brain semi-realistically; when it comes to humans, it would not be very computationally feasible at present.
Researchers far smarter than most reckon so; see Blue Brain from IBM and others.
The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.
Theoretically the brain can be modeled using a computer (as software and hard/wetware are compatible or mutually expressible). The question isn't a theoretical one as far as computer science goes, but a philosophical one:
Can we model the (chaotic) way in which a brain develops? Is a brain's power its hardware, or the environment that shapes the development and emergent properties of that hardware as it learns?
Even more mental:
If I modeled my own brain with 100% accuracy and then started the simulation, and that brain had my memories (as it has my brain's physical form)... is it me? If not, what do I have that it doesn't?
I think that by the time we are ever in a position to emulate the brain, we should already have been working on logical systems based on biological principles, with better applications than the brain itself.
We all have a brain, and we all have access to its amazing power already ;)
A word of caution. Current projects on brain simulation work on a model of a human brain. Your idea of storing your mind on a hard disk is crazy: if you want a replica of your mind you'll need two things. First, another "blank" brain. Second, a method to perfectly transfer all the information contained in your brain, down to the quantum states of every atom in it.
Good luck with that :)
EDIT: The dog ate part of my text.

What Artificial Neural Network or 'Biological' Neural Network library/software do you use?

What do you use?
Fast Artificial Neural Network Library (FANN) is a free open-source neural network library which implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings are available for PHP, C++, .NET, Ada, Python, Delphi, Octave, Ruby, Prolog, Pure Data and Mathematica.
FannTool, a graphical user interface for the library, is also available.
There are a lot of different network simulators, depending on how detailed you want your simulation to be and what kind of network you want to simulate.
NEURON and GENESIS are good if you want to simulate full biological networks (which I'm guessing you probably don't), even down to the behaviour of dendrites, etc.
NEST and SPLIT and some others are good for doing population simulations, where you create the population on a node-by-node basis and see what the whole population does. This is pretty much the 'industry' standard approach, and is used a lot in research and commercial applications, so they are worth looking into. I know that IBM use SPLIT for some of their research.
MIIND is good if you want to use differential equations to model what a population would do, but this approach is relatively new and computationally expensive (if very cool).
Not sure if that is exactly what you wanted!
(N.B. if you google any of the names in caps along with the word "simulator" you will end up at the relevant web page =)
Whenever I've wanted to play around with any data mining algorithm quickly, I just load up Weka. It's pretty complex but it implements a lot of algorithms (including neural networks) with a lot of customizability. Plus, it has some visualizations for NNs.
It is old, but I have always used NeuroShell 2 when not using my own code. Unfortunately, it is not free. I think the newer NeuroShells are designed only for predicting stocks.
If you're looking to experiment with deep learning, you should look into
Theano
Pylearn2 (which is based on Theano)
