MATLAB ANFIS: more than one output?

Can ANFIS (Adaptive Neuro-Fuzzy Inference System) in MATLAB have more than one output?
By the way, is it a good idea to use ANFIS to decide whether to switch a fan and lights ON/OFF?
According to the example on the MATLAB website, I can see there is only one output, but the documentation doesn't say whether more are possible. Does anyone know anything about it?

I don't know ANFIS specifically, but couldn't you just train a separate network for each output?
Arguably, each output could be determined independently of the others, couldn't it?
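The one-network-per-output idea can be sketched as follows. This is plain Python with a trivial least-squares fit standing in for a real ANFIS trainer (the stand-in is my assumption, purely to show the data flow); in MATLAB you would call anfis once per output column:

```python
import numpy as np

def fit_model(inputs, target):
    # Least-squares fit of target ~ inputs @ w.
    # Stand-in for a real ANFIS trainer; only the data flow matters here.
    w, *_ = np.linalg.lstsq(np.asarray(inputs, float),
                            np.asarray(target, float), rcond=None)
    return w

def train_per_output(inputs, outputs):
    # outputs has one column per quantity (e.g. fan state, light state);
    # build an independent model for each column, since ANFIS accepts
    # only a single output column.
    outputs = np.asarray(outputs, float)
    return [fit_model(inputs, outputs[:, j]) for j in range(outputs.shape[1])]
```

At prediction time you evaluate every per-output model on the same input vector, which recovers multi-output behaviour from single-output trainers.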

As mentioned here
(http://www.researchgate.net/post/How_to_generate_multiple_output_for_ANFIS2), ANFIS supports just one output, which is the last column of the training data.

Did you try it like this: anfis_input = [input1 input2 [output1 output2]]?

How to get the designed serial numbers of parts like "FrontFacingCameraModuleSerialNumber"?

How to get the designed serial numbers of parts like "FrontFacingCameraModuleSerialNumber"?
And second question:
how to get serial number of touchID?
I will answer my own question; maybe it will help other people.
I found a hash that works with iPhones older than the iPhone 7:
mobilegestalt jWdMTTxiAZc+KNO6Bz2jNg
Could you extract FrontFacingCameraModuleSerialNumber from AppleDiagnosticDataSysCfg?

How to repeat a command on different values of the same variable using SPSS LOOP?

Probably an easy question:
I want to run this piece of syntax:
SUMMARIZE
/TABLES=AGENCY
PIN
AGE
GENDER
DISABILITY
MAINSERVICE
MRESAGENCY
MRESSUPPORT
/FORMAT=LIST NOCASENUM TOTAL
/TITLE='Case Summaries'
/MISSING=VARIABLE
/CELLS=COUNT.
for 264 different agencies which are all values contained in the variable 'AGENCY'.
I want to create a different table for each agency outlining the above information for them.
I think I can do this using a DO REPEAT or LOOP on SPSS.
Any advice would be much appreciated.
Thank you :)
Note: I have Googled and read endless amounts about looping; I am just a little unsure which method is what I am looking for.
Take a look at SPLIT FILE, which meets your needs.
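A sketch of how SPLIT FILE would wrap the SUMMARIZE command from the question (variable names taken from the question); SEPARATE BY produces one table per agency, and the file must be sorted on the split variable first:

```spss
SORT CASES BY AGENCY.
SPLIT FILE SEPARATE BY AGENCY.
SUMMARIZE
  /TABLES=AGENCY PIN AGE GENDER DISABILITY MAINSERVICE MRESAGENCY MRESSUPPORT
  /FORMAT=LIST NOCASENUM TOTAL
  /TITLE='Case Summaries'
  /MISSING=VARIABLE
  /CELLS=COUNT.
SPLIT FILE OFF.
```

Use LAYERED instead of SEPARATE if you prefer all agencies stacked in a single table.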

Find-S concept learning algorithm

I am implementing and analysing the Find-S algorithm (which I understood quite well). However, for the testing part, I am not sure whether the order of the examples in the training set affects the output.
Is this known or still unproven?
The order of examples will not affect the output if the function that expands the hypothesis is order-independent -- that is, if f(f(h0, x1), x2) = f(f(h0, x2), x1) for all h0, x1, x2.
The order of instances will affect your output because when FIND-S builds the maximally specific hypothesis, it looks at attributes and their values one example at a time. This is discussed in Tom Mitchell's Machine Learning book under the heading '2.4 FIND-S: Finding a Maximally Specific Hypothesis'.
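A minimal runnable Find-S sketch (conjunctive hypotheses over nominal attributes, as in Mitchell's formulation) that you can use to probe the order question empirically; the data below is Mitchell's EnjoySport example from section 2.4:

```python
def find_s(examples):
    """Find-S over conjunctive hypotheses.

    examples: list of (attribute_tuple, label) pairs; label True = positive.
    Starts maximally specific and generalizes each attribute only as far
    as the positive examples force it to ('?' = any value).
    """
    h = None  # None stands for the maximally specific hypothesis
    for attrs, positive in examples:
        if not positive:
            continue  # Find-S ignores negative examples
        if h is None:
            h = list(attrs)  # first positive example taken verbatim
        else:
            h = [hi if hi == ai else '?' for hi, ai in zip(h, attrs)]
    return h

# Mitchell's EnjoySport training data:
data = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'), True),
]
print(find_s(data))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```

For this conjunctive hypothesis space the per-attribute generalization is commutative, so reversing the example order here yields the same final hypothesis, even though the intermediate hypotheses differ.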

What's a good way to store simple data that's used by a Command Line?

I'm new to both Mac and the C programming language, but I recently installed Xcode and have been trying to figure some things out.
Currently I want to work on a small command-line game, but I need to find a good way to store simple data, like strings and integers.
So, is there a way to store information in an XML file through C? If so, would that be a good way to go about things? If not, what do you suggest?
For simple data you can use NSUserDefaults.
Example, where 32 is the score to be stored:
NSInteger score = 32;
[[NSUserDefaults standardUserDefaults] setInteger:score forKey:@"CurrentScore"];
and later:
score = [[NSUserDefaults standardUserDefaults] integerForKey:@"CurrentScore"];
See the documentation for NSUserDefaults, it can handle many types of data.

Matching with SIFT (Conceptual)

I have two images of the real world. (IMPORTANT) I approximately know the transformation from one image to the other. Due to a texture problem I don't get enough matches between the two images. How can I take this transformation information into account to get more, and more correct, matches using SIFT?
Any idea will be helpful.
Have you tried other alternatives? Are you sure SIFT is the answer? First, OpenCV provides SIFT, among other tools. (At the moment, I can't speak highly enough of OpenCV).
If I were solving this problem, I would first try:
Downsample your two images to reduce the influence of "texture", i.e. cvPyrDown.
Perform some feature detection: edge detection, etc. OpenCV provides a Harris corner detector, among others. Google "cvGoodFeaturesToTrack" for some detail.
If you have good confidence in your transformations, take advantage of your a priori information and look for features in neighborhoods corresponding to the transformed locations.
If you still want to look at SIFT or SURF, OpenCV provides those capabilities, as well.
If you know the transform, then apply the transform and then apply SURF/SIFT to the transformed image. That's one standard way to extend the robustness of feature descriptors/matchers across large perspective changes.
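The "use the known transform" idea in the answers above can be sketched without any SIFT machinery: map each keypoint from image 1 through the approximate homography and only accept candidate matches in image 2 that land near the predicted location. Pure NumPy, with a made-up homography and point sets purely for illustration:

```python
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

def gated_matches(H, kps1, kps2, radius):
    """Keep pairs (i, j) where H maps kps1[i] within `radius` of kps2[j]."""
    predicted = project(H, kps1)
    pairs = []
    for i, p in enumerate(predicted):
        d = np.linalg.norm(kps2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:
            pairs.append((i, j))
    return pairs

# Illustration: a pure translation by (5, 0) as the "known" transform.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
kps1 = np.array([[0.0, 0.0], [10.0, 10.0]])
kps2 = np.array([[5.0, 0.0], [15.0, 10.0], [100.0, 100.0]])
print(gated_matches(H, kps1, kps2, radius=2.0))  # [(0, 0), (1, 1)]
```

In practice you would apply this geometric gate to SIFT keypoint locations and compare descriptors only inside the gate, which both speeds up matching and discards geometrically impossible pairs.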
There is another alternative:
In the SIFT parameters, the contrast threshold is set to 0.04 by default. If you reduce it to a lower value (0.02, 0.01), SIFT will find more matches:
SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)
The first step I think is to try with the settings of the SIFT algorithm to find the best efficiency with respect to your problem.
Another way to use SIFT more effectively is to add COLOR information to it: append the colour (RGB) of the points used in the descriptor. For instance, if your descriptor matrix is 10x128, you are using 10 keypoints; you can extract three extra columns and make the size 10x(128+3) [R-G-B for each point]. This way the SIFT algorithm works more effectively. But remember, you need to weight your descriptor so that the last three columns count for more than the other 128 columns. I do not know what your images look like, but this method helped me a lot, and this modification makes SIFT a stronger method than before.
A similar implementation can be found here.
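The augmentation described above amounts to appending weighted RGB columns to the descriptor matrix. A small NumPy sketch; the weight value is an arbitrary assumption for illustration, not a canonical choice:

```python
import numpy as np

def add_color_to_descriptors(desc, rgb, weight=3.0):
    """Append weighted RGB columns to a descriptor matrix.

    desc:   (N, 128) SIFT descriptors, one row per keypoint.
    rgb:    (N, 3) colour at each keypoint.
    weight: scales the colour columns so they outweigh the 128
            gradient columns during distance computation
            (a tunable assumption).
    """
    return np.hstack([desc, weight * np.asarray(rgb, float)])

desc = np.zeros((10, 128))        # e.g. 10 keypoints
rgb = np.random.rand(10, 3)
aug = add_color_to_descriptors(desc, rgb)
print(aug.shape)  # (10, 131)
```

Matching then proceeds exactly as before (e.g. nearest neighbour on the 131-dimensional rows), with colour contributing to the distance in proportion to the chosen weight.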