How is gene ranking done for microarray data using information gain and chi-square statistics? Please illustrate with a simple example.
You could use the open source machine learning software Weka. Load your dataset and go to the "Select attributes" tab. Use the following attribute evaluators:
ChiSquaredAttributeEval : Evaluates the worth of an attribute by computing the value of the chi-squared statistic with respect to the class.
InfoGainAttributeEval : Evaluates the worth of an attribute by measuring the information gain with respect to the class.
...using Ranker as the "Search Method". That way the attributes are ranked by their individual evaluations.
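If you want to see the same kind of ranking computed outside Weka, here is a minimal sketch in Python with scikit-learn; the toy expression matrix, class labels and gene names are made up for illustration, and scikit-learn's mutual information stands in for the information-gain criterion (information gain of an attribute with respect to the class is exactly their mutual information):

```python
# Toy illustration (not real expression data): rank a handful of "genes"
# by chi-square and by mutual information with respect to a class label.
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif

# Rows = samples, columns = genes; values are discretised expression levels (0/1/2).
X = np.array([
    [2, 0, 1, 0],
    [2, 1, 1, 0],
    [0, 0, 2, 1],
    [0, 1, 2, 1],
    [1, 0, 0, 0],
    [2, 1, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 1])                  # class label, e.g. tumour vs. normal
genes = ["gene_A", "gene_B", "gene_C", "gene_D"]  # hypothetical gene names

chi2_scores, _ = chi2(X, y)                       # chi-squared statistic per gene
mi_scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)

for name, scores in [("chi-square", chi2_scores), ("information gain", mi_scores)]:
    ranking = sorted(zip(genes, scores), key=lambda g: g[1], reverse=True)
    print(name, "ranking:", [(g, round(float(s), 3)) for g, s in ranking])
```

The genes that separate the two classes most cleanly rise to the top of both lists, which mirrors what Ranker does inside Weka: score each attribute individually against the class, then sort.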
I don't exactly understand your question, but a very successful package for analyzing microarray data can be found here:
BioConductor
This is a software project that has a variety of different modules for reading data from microarrays and performing statistical analysis. This is very useful, because the file formats for microarray data are constantly changing as the technology develops, and the algorithms for analyzing microarray data have advanced significantly as well.
You can use InfoGainAttributeEval to calculate information gain,
and for more information check this answer.
I made use of BERT document embeddings to perform information retrieval on the CACM dataset. I achieved a very low accuracy score of around 6%. However, when I used the traditional BM-25 method, the result was a lot closer to 40%, which is close to the average accuracy found in the literature for this dataset. This is all being performed within Apache Solr.
I also attempted to perform information retrieval using Doc2Vec and achieved similarly poor results as with BERT. Is it not advisable to use document embeddings for IR tasks such as this one?
Many people find document embeddings work really well for their purposes!
If they're not working for you, possible reasons include:
insufficiency of training data
problems in your unshown process
different end-goals than others (what's your idea of 'accuracy'?)
It's impossible to say what's affecting your process, and the raw perception of its usefulness, without far more details on what you're aiming to achieve, and then doing.
Most notably, if there's other published work using the same dataset, with a similar definition of 'accuracy', that claims a far better result using the same methods that give worse results for you, then it's more likely that there are errors in your implementation.
You'd have to name the results you're trying to match (ideally with links to the exact writeups), and show the details of what your code does, for others to have any chance of guessing what's happening for you.
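As a side note on the 'accuracy' point above: results reported for datasets like CACM are usually ranking metrics rather than classification accuracy. Here is a minimal, self-contained sketch (plain Python; the document IDs and relevance judgments are made up) of two common ones, precision@k and average precision, so you can check which definition the numbers you are comparing against actually use:

```python
# Toy illustration of two common retrieval metrics; the ranked list and the
# relevant-document set below are made up for the example.
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

def average_precision(ranked_ids, relevant_ids):
    """Mean of the precision values at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_ids, start=1):
        if doc in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

ranked = ["d3", "d7", "d1", "d9", "d4"]   # system output, best first (hypothetical)
relevant = {"d1", "d3", "d8"}              # judged relevant for this query (hypothetical)
print("P@5 =", precision_at_k(ranked, relevant, 5))
print("AP  =", round(average_precision(ranked, relevant), 3))
```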
Business case:
Forecasting fuel consumption at site.
Say fuel consumption C depends on various factors x1, x2, ..., xn. So, mathematically speaking, C = F(x1, x2, ..., xn). I do not have any equation for this.
I do have a historical dataset from which I can get a correlation of C to x1, x2, etc. C, x1, x2, ... are all quantitative. Finding the correlation for an n-variable relationship seems tough for a person like me with limited statistical knowledge.
So, I was thinking of employing some supervised machine learning techniques for this. I will train a classifier with the historical data to get a prediction of the next consumption.
Question: Am I thinking in the right way?
Question: If this is correct, my system should be an evolving one, so the more real data I feed to the system, the better its next prediction should be. Is this a correct understanding?
If the above statements are true, will the AdaptiveLogisticRegression algorithm, as present in Mahout, be of help to me?
Requesting advice from the experts here!
Thanks in advance.
Ok, correlation is not a forecasting model. Correlation simply ascribes some relationship between the datasets based on covariance.
In order to develop a forecasting model, what you need to perform is regression.
The simplest form of regression is linear univariate, where C = F (x1). This can easily be done in Excel. However, you state that C is a function of several variables. For this, you can employ linear multivariate regression. There are standard packages that can perform this (within Excel for example), or you can use Matlab, etc.
Now, we are assuming that there is a "linear" relationship between C and the components of X (the input vector). If the relationship were not linear, then you would need more sophisticated methods (nonlinear regression), which may very well employ machine learning methods.
Finally, some series exhibit auto-correlation. If this is the case, then it may be possible for you to ignore the C = F(x1, x2, x3...xn) relationships, and instead directly model the C function itself using time-series techniques such as ARMA and more complex variants.
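If it helps to see the multivariate linear case in code, here is a minimal sketch in Python with pandas and scikit-learn; the file name, column names and example values are assumptions, not something from the question. Refitting the model as new rows are appended to the history gives the "evolving" behaviour the question asks about:

```python
# Minimal multivariate linear regression sketch; file name and columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

data = pd.read_csv("fuel_history.csv")   # hypothetical historical dataset
X = data[["x1", "x2", "x3"]]             # predictor columns (temperature, load, ...)
y = data["consumption"]                  # fuel consumption C

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))

# Predict the next consumption from new factor values, then periodically refit the
# model on the enlarged history so it keeps "evolving" as real data comes in.
new_row = pd.DataFrame([[21.5, 0.8, 130.0]], columns=["x1", "x2", "x3"])  # example values
print("Forecast C:", model.predict(new_row)[0])
```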
I hope this helps,
Srikant Krishna
I'm starting to look into doing some machine translation of search queries, and have been trying to think of different ways to rate my translation system between iterations and against other systems. The first thing that comes to mind is getting translations of a set of search terms from a bunch of people on Mechanical Turk and having them judge whether each is valid, or something along those lines, but that would be expensive and possibly prone to people putting in bad translations.
Now that I'm trying to think of something cheaper or better, I figured I'd ask StackOverflow for ideas, in case there's already some standard available, or someone has tried to find one of these before. Does anyone know, for example, how Google Translate rates various iterations of their system?
There is some information here that might be useful, as it provides a basic explanation of the BLEU scoring technique that developers often use to measure the quality of an MT system.
The first link provides a basic overview of BLEU and the second points out some of BLEU's limitations.
http://kv-emptypages.blogspot.com/2010/03/need-for-automated-quality-measurement.html
and
http://kv-emptypages.blogspot.com/2010/03/problems-with-bleu-and-new-translation.html
There is also some very specific, pragmatic advice on how to develop a useful test set on the AsiaOnline.Net site, in the November newsletter. I am unable to put that link in as there is a limit of two links.
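If you want to try BLEU directly rather than only read about it, here is a minimal sketch using NLTK's implementation; the reference and hypothesis sentences are made up, and the smoothing is there because very short segments such as queries rarely contain higher-order n-grams:

```python
# Minimal BLEU sketch with NLTK; the hypothesis/reference sentences are made up.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = ["cheap", "flights", "to", "new", "york"]   # human translation (tokenized)
hypothesis = ["cheap", "flight", "to", "new", "york"]    # system output (tokenized)

# Short segments rarely contain 4-grams, so smooth to avoid a zero score.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print("BLEU:", round(score, 3))
```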
I'd suggest refining your question. There are a great many metrics for machine translation, and it depends on what you're trying to do. In your case, I believe the problem is simply stated as: "Given a set of queries in language L1, how can I measure the quality of the translations into L2, in a web search context?"
This is basically cross-language information retrieval.
What's important to realize here is that you don't actually care about providing the user with the translation of the query: you want to get them the results that they would have gotten from a good translation of the query.
To that end, you can simply measure the discrepancy of the results lists between a gold translation and the result of your system. There are many metrics for rank correlation, set overlap, etc., that you can use. The point is that you need not judge each and every translation, but just evaluate whether the automatic translation gives you the same results as a human translation.
As for people proposing bad translations, you can assess whether the putative gold standard candidates have similar results lists (i.e. given 3 manual translations do they agree in results? If not, use the 2 that most overlap). If so, then these are effectively synonyms from the IR perspective.
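As a concrete sketch of the result-list comparison described above, the following Python snippet (the document IDs are made up, and SciPy's Kendall tau is just one of several possible rank-correlation measures) compares the results returned for a gold human translation with those returned for the machine translation:

```python
# Compare the results list for a gold (human) query translation with the list
# for the machine translation; the IDs below are hypothetical.
from scipy.stats import kendalltau

gold_results = ["d12", "d03", "d77", "d45", "d09"]   # top-5 for the human translation
mt_results   = ["d12", "d77", "d03", "d45", "d31"]   # top-5 for the machine translation

# Set overlap of the top-k (Jaccard): how many of the same documents come back.
overlap = len(set(gold_results) & set(mt_results)) / len(set(gold_results) | set(mt_results))

# Rank correlation over the documents that appear in both lists.
shared = [d for d in gold_results if d in mt_results]
tau, _ = kendalltau([gold_results.index(d) for d in shared],
                    [mt_results.index(d) for d in shared])

print("Jaccard overlap:", round(overlap, 2), " Kendall tau on shared docs:", round(tau, 2))
```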
In our MT evaluation we use the hLEPOR score (see the slides for details).
Do you have an example or an explanation of ANFIS (Adaptive Neuro-Fuzzy Inference System)? I have read that it could be applied to classify some diseases. What do you think about it?
Usually in order to develop a fuzzy system you have to determine the if-then rules, suitable membership functions, and their parameters. This is not always a trivial task, especially the development of correct if-then rules may be time consuming as we first have to "extract" the expert knowledge somehow.
This is where ANFIS comes into play: Under certain circumstances it can automatically determine suitable parameters for the membership functions. This is the case in particular when we already have a set of input and related output variables and values. Like in an artificial neural network the ANFIS system is able to adapt its nodes and connections between them "automatically".
To your question: you could of course create an ANFIS system for your disease classification, as long as you already have input and output data available for system training. But it's not necessarily tied to such systems; you can see ANFIS more as an approach usable under the mentioned circumstances than as a tool for a specific problem. It all depends on the requirements for the system you want to create, as well as the known (external) preconditions...
Hope that helps!
As Matthias said, ANFIS is not tied to a particular problem; you use it on the basis of the problem's requirements. Where to use ANFIS: with any problem where something is ambiguous.
Actually, that is a property of any FIS (Fuzzy Inference System); the adaptive part comes into play as Matthias explained.
For example, take the familiar classification problem: assigning an input to a class is not always perfectly determined, it is somewhat ambiguous. There, ANFIS may give better results than other classification algorithms, depending on whether you are able to model the system correctly with it.
But ANFIS is computationally expensive compared to other, non-fuzzy approaches. To make the FIS model your problem well, you add the "AN" part, which only makes the membership-function selection adaptive. What about the if-then rules? For those you have to do unsupervised rule selection from the complete set of possible rules (this is basically a kind of unsupervised clustering problem, where you try to group all the rules whose effect would be the same).
So far I have found a university, Monash, that explains ANFIS (based on the guide to Matlab's Fuzzy Logic Toolbox).
The fuzzy inference system that we have considered is a model that maps:
input characteristics to input membership functions,
input membership functions to rules,
rules to a set of output characteristics,
output characteristics to output membership functions, and
the output membership functions to a single-valued output, or a decision associated with the output.
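To make that mapping concrete, here is a minimal sketch in Python of a single first-order Sugeno-style fuzzy inference step, the kind of structure whose membership-function and consequent parameters an ANFIS would learn from data; every number in it (centres, widths, consequent coefficients, the input value) is made up for illustration:

```python
# Tiny first-order Sugeno fuzzy inference step; all parameters are illustrative.
# An ANFIS would learn the membership-function and consequent parameters from data.
import numpy as np

def gaussian_mf(x, centre, sigma):
    """Gaussian membership function: degree to which x belongs to the fuzzy set."""
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

x = 6.0  # crisp input, e.g. a symptom intensity on a 0-10 scale (hypothetical)

# Rule layer: two rules, "x is LOW" and "x is HIGH", each with a linear consequent.
w_low  = gaussian_mf(x, centre=2.0, sigma=1.5)   # firing strength of rule 1
w_high = gaussian_mf(x, centre=8.0, sigma=1.5)   # firing strength of rule 2
f_low  = 0.2 * x + 0.1                            # consequent of rule 1 (p*x + r)
f_high = 0.9 * x + 0.5                            # consequent of rule 2

# Normalise the firing strengths and take the weighted sum: the single-valued output.
total = w_low + w_high
output = (w_low * f_low + w_high * f_high) / total
print("membership low/high:", round(w_low, 3), round(w_high, 3), " output:", round(output, 3))
```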
Yes, it can be used for disease classification.
The idea of ANFIS is to combine a fuzzy system with the architecture of an ANN, which gives ANFIS two main benefits.
First, you can use fuzzy variables, which support linguistic values; this fits disease symptoms, which are commonly used as the system's inputs (for example, pain level: low, mid, high).
Second, since the architecture is mapped onto ANN layers, ANFIS can run a training process that aims to produce more accurate results (for example, using backpropagation).
I am trying to do the following:
We are trying to design a fraud detection system for the stock market.
I know the specifications for the frauds (they are like templates),
so I want to know if I can design a template and find all records that match that template.
Note:
I can't use traditional queries because the templates are complex.
For example, one of my frauds is circular trading. It works like this:
A bought from B, B bought from C, and C bought from A (it's a cycle),
and this cycle can include 4 or 5 persons.
Is there any good suggestion for this situation?
I don't see why you can't use "traditional queries" as you've stated. SQL can be used to write extraordinarily complex queries. For that matter I'm not sure that this is a hugely challenging question.
Firstly, I'd look at the behavior you have described as very transactional, so I'd treat the transactions as the model. I'd likely have a transactions table with columns like buyer, seller, amount, etc.
You could alternatively give the shares their own table and store, say, the previous 100 owners of each share in that table using STI (Single Table Inheritance), by putting all the primary keys of the owners into an "owners" column in your shares table, like 234/823/12334/1234/... That way you can do complex queries to see whether a share was owned by the same person, or look for patterns in the string really easily and quickly.
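If you do model trades as buyer/seller rows, here is a minimal sketch in Python (using networkx; the list of trades is made up) of finding circular-trading rings of up to 5 participants by treating the trades as a directed graph and looking for cycles; the equivalent can be done with self-joins in SQL, this just shows the pattern:

```python
# Detect circular trading (A -> B -> C -> A) in a list of trades; the data is hypothetical.
import networkx as nx

trades = [  # (seller, buyer) pairs, e.g. rows from a transactions table
    ("A", "B"), ("B", "C"), ("C", "A"),   # a 3-person cycle
    ("C", "D"), ("D", "E"),               # not part of any cycle
]

G = nx.DiGraph()
G.add_edges_from(trades)

# simple_cycles yields every elementary cycle; keep the ones with 2-5 participants.
suspicious = [cycle for cycle in nx.simple_cycles(G) if 2 <= len(cycle) <= 5]
print("possible circular-trading rings:", suspicious)
```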
-update-
I wouldn't suggest making up a "small language". I don't see why you'd want to do something like that when you have a huge selection of wonderful languages and databases to choose from, all of which have well-refined and well-tested methods to solve exactly what you are doing.
My best advice is to pop open your IDE (thumbs up for TextMate), pick your favorite language (Ruby in my case), find some sample data, create your database and start writing some code! You can't go wrong experimenting like this; it will expose better ways to go about it than we can dream up here on Stack Overflow.
Definitely Data Mining. But as you point out, you've already got the models (your templates). Look up fraud DETECTION rather than prevention for better search results?
I know some banks use SPSS PASW Modeler for fraud detection. It is very intuitive and you can see what you are doing as you play around with the data, so you can implement your templates. I agree with Joseph: you need to get playing and make some new data structures.
Maybe a timeseries model?
Theoretically you could develop a "Small Language" first, something with a simple syntax (that makes expressing the domain - in your case fraud patterns - easy) and from it generate one or more SQL queries.
As with most solutions, this could be thought of as a slider: at one extreme there is the "full Fraud Detection Language"; at the other, you could just build stored procedures for the most common cases, and write new stored procedures that use the more "basic" blocks you wrote before to implement the various patterns.
What you are trying to do falls under the Data Mining umbrella, so you could also try to learn more about it: maybe you can find a data mining package for your specific DB (you didn't specify which) and see if it helps you find common patterns in your data.