backpropagation with multiple outputs - artificial-intelligence

I am currently writing a neural network module and I already understand how everything works with just one output. But with multiple outputs I was told to sum up the error of each output in order to calculate the loss function, which doesn't make sense to me, because then we don't really know which synapse/weight is responsible for the error.
For example, take a NN with the shape 2|1|2 (inputs, hidden, outputs)...
So the neuron in the hidden layer is connected to each output neuron by some weight. If we now propagate forward, receive an error for each output neuron and sum these errors up, then every weight connected to the hidden neuron is adjusted by exactly the same amount. Does someone know whether I am mistaken or have misunderstood something?

I think you misunderstood: for backpropagation the error is usually calculated individually for each output, so each output weight gets its own gradient. If you want to know the total error of the network to track your progress, then I suppose you could use the sum of the per-output errors for that.
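To make this concrete, here is a minimal numpy sketch of a 2|1|2 network (all numbers and variable names are made up for illustration, not taken from any particular module). It shows that each hidden-to-output weight receives its own gradient, because every output keeps its own error; the summing only happens when the error signal is propagated back to the hidden neuron.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x  = np.array([0.5, -0.2])        # 2 inputs
t  = np.array([1.0, 0.0])         # 2 targets
W1 = np.array([[0.1, 0.4]])       # 1x2 weights: inputs -> hidden
W2 = np.array([[0.3], [-0.7]])    # 2x1 weights: hidden -> outputs

h = sigmoid(W1 @ x)               # hidden activation, shape (1,)
y = sigmoid(W2 @ h)               # two outputs, shape (2,)

# One error/delta per output (quadratic cost), NOT a single summed error:
delta_out = (y - t) * y * (1 - y)         # shape (2,)
grad_W2   = np.outer(delta_out, h)        # each output weight gets its own gradient

# The summation only appears when propagating back to the hidden neuron:
delta_hid = (W2.T @ delta_out) * h * (1 - h)
grad_W1   = np.outer(delta_hid, x)

print(grad_W2)   # two different values, so the two weights are not adjusted equally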

Related

XOR neural network seems to converge around 0.5

I have been trying to code an XOR neural network for some weeks, but I always face the same problem. First of all, you have to know that I spent hours and hours trying everything I found on the net, but nothing worked.
After trying to do it with the 3Blue1Brown videos on the subject without success, I am now using this: http://neuralnetworksanddeeplearning.com/chap2.html. I coded a matrix library with all the necessary functions.
My network has 3 layers: 2 input neurons, 2 hidden neurons, 1 output neuron.
Moreover, I have 2 biases pointing to the hidden neurons and one pointing to the output neuron. I use the sigmoid function to get values between 0 and 1, and the quadratic cost function. Every time I train the network (i.e. every time I run backpropagation) I choose a random input with its corresponding output.
The problem is that no matter how many times I train it, the output is never even close to 0 or 1 but always hovers around 0.5, and my cost function is stuck around 0.14.
ANY HINT OR HELP IS APPRECIATED -- I really don't get where the problem is; I feel like I've tried everything. PS: I did not show any code here; if needed, don't hesitate to ask.
I managed to resolve my problem by adding layers to my network. Moreover, when I later improved it to code an OCR, I added a learning rate to escape from the local minima that were partly the reason my network kept getting stuck.
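For reference, here is a minimal numpy sketch of an XOR network with sigmoid activations, quadratic cost and an explicit learning rate (a single hidden layer with a few extra neurons is used here; the sizes, seed and values are illustrative, not the original poster's code):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # 4 hidden neurons
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 1 output neuron
lr = 0.5                                        # learning rate

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    y = sigmoid(h @ W2 + b2)
    delta_out = (y - T) * y * (1 - y)           # derivative of the quadratic cost
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

print(y.round(3))   # should move toward [0, 1, 1, 0] instead of sticking at 0.5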

changing HM reference software to display some information about the bitstream

I am very new to the HM HEVC (and the JEM) reference software, and I am currently trying to understand the source code. I want to add some lines to display, for each component: the name of the algorithm (i.e. inter/intra algorithms), the length of the bitstream, and the position in the output bin file.
The goal is to know which component costs more bits to code and how the codec is working. I want to do the same thing for the JEM after that.
My first problem is that I am unable to understand a lot of the functions there, and the comments are not sufficient, so is there any reference for understanding the code? (I already read the manual; it doesn't help.)
Second, I don't know where and how exactly to add these lines: is it in TEncGOP, TEncSlice or TEncCU? PS: I don't think it is in TEncGOP.compressGOP, so maybe in the two other classes.
(I am putting the answer to the comment that #Mourad posted four hours ago here, because it will be long.)
I assume that you have managed to find where the actual encoding after the RDO loop is implemented. As you correctly mentioned, xEncodeCU is the function you need to refer to, to make sure you are no longer inside the RDO.
Now you need to find the exact function in xEncodeCU that is responsible for your target codec tool.
For instance, if you want to count the number of bits for coefficient coding, you should be looking at m_pcEntropyCoder->encodeCoeff() (it's a JEM function and may have a different name in the HM). Once you find this line in xEncodeCU, you can do the following to get the number of bits written inside the encodeCoeff() function:
UInt b_before = m_pcEntropyCoder->getNumberOfWrittenBits();  // bits written so far
m_pcEntropyCoder->encodeCoeff( ... );                        // encode the coefficients
UInt b_after  = m_pcEntropyCoder->getNumberOfWrittenBits();  // bits written afterwards
UInt writtenBitsCoeff = b_after - b_before;                  // bits spent on coefficient coding
One important point: as you can see, the function getNumberOfWrittenBits() gives you integer rates, obtained by rounding the sum of the fractional rates of all syntax elements coded inside encodeCoeff(). This error might or might not be acceptable, depending on your problem. For example, if instead of the coefficient coding rate you wanted to know the rate of the CBF, this error would not be acceptable at all, because the CBF rate is mostly less than one bit. If that is your case, you would need to calculate the fractional bits one by one, which is entirely different and considerably more complicated than this.
Point 1: There is a rule of thumb that logging coding decisions (e.g. pred mode, MV, IPM, block size) is much easier at the decoder side than at the encoder. This is because the encoder side has a very complicated RDO process in which you can easily get lost in the loops, whereas at the decoder side everything appears only once. However, if you insist on doing it at the encoder side, you may find some tips here: Get some information from HEVC reference software
Point 2: Unlike coding decisions, logging the rate (i.e. the number of written bits for different syntax elements) is more complicated at the decoder side than at the encoder. This is particularly true for the fractional bits associated with anything encoded in non-EP mode (i.e. with CABAC contexts). So you may do this part at the encoder side, but I am afraid it is not easy.
Point 3: I think the best way to understand the code is to read it line by line. It is very time-consuming, but if you know the standard(s) theoretically, you will probably be able to distinguish the important parts and ignore the rest.
PS: I think there are too many questions, mostly too general, in your post, which makes it a bit difficult for me to answer them all together. So I'll wait for you to take your next step and ask more precise questions.

Understanding AForge SOM implementation

Hello neural enthusiasts out there,
I am a little bit confused about the SOM learning algorithm in AForge.
I figured out that the implementation assumes the most common case, a 2-dimensional SOM.
When I look at other SOM graphics on the web, it appears that the positions of the neurons change over time, with similar neurons ending up close together.
I took a look at the source code and found that the positions of the neurons in the map seem to be fixed:
int wx = neuronIndex % width;   // column of the neuron in the fixed 2-D grid
int wy = neuronIndex / width;   // row of the neuron in the fixed 2-D grid
Is this just another type of SOM with fixed positions, or am I misinterpreting something?
I also thought that the main point of a SOM is to get an informational graphic out of it, but there are no publicly available methods to retrieve the position of a neuron.
Not familiar with AForge, but....
EDIT: At first I thought that the weights are 2D and taught to resemble a grid, but here is an even more educated guess: the moving grid of neurons you've seen is still not the SOM's nodes. The position of a SOM node is constant. The SOM is taught to abstract some dataset, and a Sammon's mapping is likely used as a visualization method for the nodes' weights. The result is something like this and can easily be confused with the original SOM lattice, in which the nodes or "neurons" never move.
Note that this is still just an educated guess.
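To illustrate the distinction, here is a short Python sketch of one SOM training step (not AForge's code; all sizes and parameter values are made up). The grid coordinates (wx, wy) of each node are fixed, computed from the node index exactly as in the snippet above; only the weight vectors move toward the input.

import numpy as np

width, height, dim = 8, 8, 3
rng = np.random.default_rng(1)
weights = rng.random((width * height, dim))            # one weight vector per node

# Fixed grid positions, never updated during training:
grid = np.array([(i % width, i // width) for i in range(width * height)])

def train_step(x, lr=0.1, radius=2.0):
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))          # best matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)              # grid distance to the BMU
    influence = np.exp(-dist2 / (2 * radius ** 2))             # neighborhood function
    weights[:] = weights + lr * influence[:, None] * (x - weights)   # only the weights move

train_step(rng.random(dim))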

Is there a C lib to find peaks in noisy data equivalent to findPeaks.m

I have a noisy set of data and want to find the peaks in it. There is a MATLAB function for this exact task which also includes smoothing of the data; it is called findpeaks.m.
Now, as I'm working in C, I would either have to code this myself or use some functions that I'm not aware of. I hope you can tell me whether they exist and where I can find them, as this is a very common problem.
To be clear about what I'm searching for: a function that first smooths my data and then finds the peaks, both preferably with some parameters for the smoothing method, peak width, etc.
Thanks!
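For reference, the pipeline being asked for (smooth the data, then pick local maxima subject to height/width constraints) can be sketched in a few lines. This is Python/SciPy rather than the requested C library, purely to make the parameters concrete; a C implementation would need the same two stages.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

t = np.linspace(0, 10, 1000)
noisy = np.sin(t) + 0.3 * np.random.randn(t.size)                # synthetic noisy data

smoothed = savgol_filter(noisy, window_length=51, polyorder=3)   # smoothing step
peaks, props = find_peaks(smoothed, height=0.5, width=20)        # peak detection with constraints

print(peaks, props["peak_heights"])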

Signal classification - recognise a signal with AI

I have a problem with recognising a signal. Let's say the signal is quasiperiodic and its period has finite limits. The "shape" of the signal must match some criteria, so the current algorithm uses signal processing techniques such as filtering, differentiating the signal, and looking for maximum and minimum values. It has a good rate at finding the good signals, but the problem is that it also detects wrong shapes.
So I want to use Artificial Intelligence -- mainly neural networks -- to overcome this problem. I thought of a multi-layer network with some averaged inputs (the signal can be reduced) and one output that would show the "match" from 0..1. However, the problem is that I have never done such a thing, so I am asking for help: how can I achieve something like this? How do I teach the neural network to get the expected results? (Let's say I have input vectors which should give 1 as output.)
Or is this whole idea the wrong approach to the problem? I am open to any learning algorithm or idea to learn and use to overcome this problem.
Here is a figure of the measured signal(s) (values and time are not a concern now); you can see a lot of "wrong" ones, although most of the detected signals are good, as mentioned above.
Your question can only be answered in a broad manner. You should consider editing it to prevent it from being closed.
But anyway, MATLAB has a lot of built-in functions and toolboxes to support artificial intelligence, with a lot of sample code available which you can modify and refer to. You can find some on the MATLAB File Exchange.
And I know reading a lot of technical papers on artificial intelligence is a daunting task, so good luck!
You can try to build a neural network using Neuroph. You can take inspiration from http://neuroph.sourceforge.net/TimeSeriesPredictionTutorial.html.
On the other hand, it is possible to approximate the signal using a Fourier transform.
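As a small illustration of the Fourier idea (a sketch, not part of the answer above): keep only the strongest frequency components of the signal and use the reconstruction as a smoothed approximation, or the component magnitudes as compact features for a classifier.

import numpy as np

def fourier_approx(signal, k=5):
    spectrum = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spectrum))[-k:]        # indices of the k strongest components
    filtered = np.zeros_like(spectrum)
    filtered[keep] = spectrum[keep]
    return np.fft.irfft(filtered, n=len(signal))    # smoothed approximation of the signal

t = np.linspace(0, 1, 512, endpoint=False)
noisy = np.sin(2 * np.pi * 5 * t) + 0.4 * np.random.randn(t.size)
approx = fourier_approx(noisy, k=3)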
You can try 1D convolution. The basic idea is to give a label (0: bad, 1: good) to each signal value at each timestamp. After this, you can build a model like the following:
# Imports assumed for this sketch (TensorFlow/Keras):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Bidirectional, LSTM, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', padding='same', input_shape=(1, 1)))
model.add(Bidirectional(LSTM(20, return_sequences=True)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', padding='same'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(100, activation='sigmoid'))
model.add(Dense(2, activation='softmax'))
# A 2-unit softmax output pairs with categorical_crossentropy (one-hot labels):
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Train the model and then give it a new signal to predict. It will classify the given series into 0 and 1 values; if the count of 0s is larger than the count of 1s, the signal is not good.
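A minimal usage sketch of the above, assuming (as described) that each timestamp is one sample of shape (1, 1) with a one-hot 0/1 label; the data here is random placeholder data:

import numpy as np
from tensorflow.keras.utils import to_categorical

X_train = np.random.rand(1000, 1, 1)                                    # placeholder training samples
y_train = to_categorical(np.random.randint(0, 2, size=1000), num_classes=2)

model.fit(X_train, y_train, epochs=10, batch_size=32)

# Classify a new signal timestamp-by-timestamp and take a majority vote:
X_new = np.random.rand(200, 1, 1)
pred = model.predict(X_new).argmax(axis=1)
is_good = (pred == 1).sum() > (pred == 0).sum()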
