Hello, I am new to making predictions of multiple targets and not sure how to go about it. I have a total of 12 columns and 3 of them are my target variables. What are the possible models for this kind of problem? I have been doing single-target predictions but am not sure how to approach this one. Below is a sample of the table; my targets are the three coloured columns in sample_data: pitch, yaw and degree. Is there a model for this kind of situation, or do I have to predict the targets one after the other?
My goal is to train a model and then predict the three targets on fresh data.
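For illustration, a minimal sketch of one way to handle several targets at once with scikit-learn, assuming the three targets are numeric and the data lives in a hypothetical CSV file (RandomForestRegressor natively supports multi-output regression; sklearn.multioutput.MultiOutputRegressor can wrap single-target estimators the same way):

# Sketch only: "sample_data.csv" is a hypothetical file name; the target column
# names (pitch, yaw, degree) are taken from the question above.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("sample_data.csv")
targets = ["pitch", "yaw", "degree"]
X = df.drop(columns=targets)          # the other 9 columns as features
y = df[targets]                       # all 3 target columns at once

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# RandomForestRegressor handles a 2-D y (multi-output) directly.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)         # shape: (n_samples, 3)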
(I struggled a bit to phrase the title - please feel free to suggest another title).
I have a text dataset which I need to classify; say there are three classes. I need to create the targets by manually setting the labels based on the text (say the three classes are dog, cat and bird).
When I do so I notice we have, say, 70% dog, 20% cat and 10% bird.
Since a lot of machine learning models struggle with imbalanced data, my first thought would be to force the dataset to be balanced by simply ignoring some of the dog and cat texts (i.e. "undersampling"), thus ending up with an (almost) balanced dataset that is easier to train the model on.
My concern, though, is that if we want to train e.g. a neural network and get a probability for each class, wouldn't training on something other than the true distribution of the data result in over- or under-confident predictions?
Indeed, if your dataset is imbalanced, there is a risk of hurting the performance of your classifier.
You'll find plenty of libraries to help you deal with this problem (see below), and the bottom line is that if classes are equally represented in your dataset, it can only help prevent bias in your classifier:
https://imbalanced-learn.org/stable/auto_examples/index.html#general-examples
https://github.com/ufoym/imbalanced-dataset-sampler
https://github.com/MaxHalford/pytorch-resample etc...
(but you can also do that sampling yourself; it shouldn't be too difficult, e.g. libraries like pandas have such functionality; a sketch follows below)
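For instance, a sketch of manual undersampling with pandas (toy data; the column names are made up, and imbalanced-learn's RandomUnderSampler does the same job with more options):

# Sketch: undersample so every class is reduced to the size of the smallest class.
import pandas as pd

df = pd.DataFrame({
    "text":  ["woof"] * 70 + ["meow"] * 20 + ["tweet"] * 10,   # toy data
    "label": ["dog"]  * 70 + ["cat"]  * 20 + ["bird"]  * 10,
})

n_min = df["label"].value_counts().min()
balanced = df.groupby("label").sample(n=n_min, random_state=0)

print(balanced["label"].value_counts())    # 10 dog, 10 cat, 10 bird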
As a safeguard, split your dataset into 3 (a sketch of such a split follows this list):
Training (e.g. 70% of your data): the bulk of the data, used for learning
Validation (e.g. 20%): what you use during training to tune the classifier and guard against overfitting
Test (e.g. 10%): this data is NEVER exposed to your classifier for learning purposes; you keep it separate and use it only at the end, on your final model, to evaluate its true performance (you call predict and compare with the expected classes).
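A sketch of such a 70/20/10 split with scikit-learn, using toy placeholder data:

# Sketch: chain two calls to train_test_split to get train/validation/test sets.
from sklearn.model_selection import train_test_split

# toy placeholders; in practice X = your features, y = your labels
X = list(range(100))
y = ["dog"] * 70 + ["cat"] * 20 + ["bird"] * 10

# First split off 70% for training, then split the remaining 30% into 20%/10%.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=1/3, stratify=y_tmp, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 70 20 10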
This should be a good starting point.
I'm working on a fantasy turn-based game.
I now have to create the database structure for my spells. The problem is that I don't really have a good idea of how to create it. Maybe the effects of those spells should not be stored in a database at all?
For instance, effects could be: increase attack, pull an enemy, heal, teleport, hide, place a mine and so on... Effects are pretty different, and I would like the database structure to be extensible.
Edit:
It's a turn-based game: time is measured in turns and distance in squares.
Some examples of what I mean below.
Let's say we have Incinerate:
it can target only 1 enemy (not ally)
it can be cast at a distance of 3 squares
it deals 5 damage per turn
it lasts 3 turns
Now we can take Shock Wave:
it travels in a line for 4 squares
it starts from a square near the caster
it damages the first target it hits (ally or enemy)
it deals 5 damage to the target and knocks it back 1 square
And the last one Rain Call:
it can be cast at any distance
it's a cloud the size of a 5x5 square
it can target both allies and enemies
only fire creatures take damage
while casting the caster is immobilized and it loses 5 mana/turn
As you can see there are a lot of possible columns: the distance it travels, turns, casting distance, type (damage, heal, armor, etc), value (+2), target (enemy, ally, both), size, etc.
I would not use a relational database for storing spells. Relational databases are good in cases when most of the following conditions apply:
you have a very large amount of data,
the data can logically be organized as n-ary relations (tables, rows, columns),
you have many users that access the data concurrently,
you need ACID properties,
et cetera
Databases are like trucks. They are big. They are difficult to use. They are expensive. (in terms of needed expertise, maintenance time, run time efficiency, etc. if not monetarily) They are very good at what they are good at, but not at anything else. Don't use a truck when a bicycle would suffice.
Let's come to your problem. The number of different types of spells is surely bounded and known at compile time, so why not define an interface ISpell and let each spell type be a class that implements ISpell? (You can also define an abstract class for common code.) Then a SpellFactory can construct and provide access to all the spells when the program starts. Do you really need the spells to be accessible from outside, independently of your code?
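A minimal sketch of that idea, here in Python for illustration (the spell names and attributes are taken loosely from the question; the game-engine methods called inside cast are hypothetical):

# Sketch of the ISpell / SpellFactory idea; names and attributes are illustrative.
from abc import ABC, abstractmethod

class ISpell(ABC):
    """Common interface every spell implements."""
    @abstractmethod
    def cast(self, caster, target):
        ...

class Incinerate(ISpell):
    RANGE = 3            # squares
    DAMAGE_PER_TURN = 5
    DURATION = 3         # turns

    def cast(self, caster, target):
        # apply_damage_over_time is a hypothetical game-engine method
        target.apply_damage_over_time(self.DAMAGE_PER_TURN, self.DURATION)

class ShockWave(ISpell):
    TRAVEL = 4           # squares travelled in a line

    def cast(self, caster, target):
        # apply_damage / knock_back are hypothetical game-engine methods
        target.apply_damage(5)
        target.knock_back(1)

class SpellFactory:
    """Constructs every spell once, when the program starts."""
    def __init__(self):
        self._spells = {cls.__name__: cls() for cls in (Incinerate, ShockWave)}

    def get(self, name):
        return self._spells[name]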
If hard-coding a SpellFactory is not flexible enough for your purposes, you can use XML configuration files: <spell type="blind" description="bla bla" picture="file.jpg"> <effects> <effect .. /> .. </effects> <range>5</range> etc. I don't know much about computer games, but this is what they did in Sid Meier's Civilization, for example. Then, instead of hard-coding the different spells in the SpellFactory, you can let it read them from the configuration file at start-up.
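A sketch of such a loader using Python's standard library (the file layout follows the <spell> snippet above; field names are illustrative):

# Sketch: load spell definitions from an XML file at start-up.
import xml.etree.ElementTree as ET

def load_spells(path):
    """Parse a file of <spell> elements into plain dicts the SpellFactory can consume."""
    spells = []
    root = ET.parse(path).getroot()
    for node in root.findall("spell"):
        effects_node = node.find("effects")
        spells.append({
            "type": node.get("type"),
            "description": node.get("description"),
            "range": int(node.findtext("range", default="0")),
            "effects": [e.attrib for e in effects_node] if effects_node is not None else [],
        })
    return spells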
As far as I can see, using configuration files instead of a database has the following advantages:
It is a fast, easy, lightweight solution,
It is much more flexible than having all the spells share the same set of columns (most of which will not make sense for any specific spell),
It is much easier to have more than one version of the set of spells at the same time, for experiments, variations, etc.,
You can let end users access and manipulate xml files for customizing the game without letting them access the database that would also contain sensitive data,
et cetera.
The disadvantages:
More people know about relational databases than about the XML format, so you might need a couple of hours to learn how to read and manipulate XML "elements".
Your question is pretty broad. It depends on a lot of things: are you going to load the spells at runtime, or will you load them all at the beginning of the game? What database will you be using?
Amit Bhargava's suggestion is good and has the advantage of being user-understandable. However, strings are pretty slow, so what you could do is use flags in your spell table. Then, based on the flags, you know which type of spell it is.
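To illustrate the flag idea (a sketch; the flag names are invented), Python's enum.Flag packs several boolean properties into a single integer that fits in one column:

# Sketch: encode spell properties as bit flags stored in a single integer column.
from enum import Flag, auto

class SpellFlag(Flag):
    DAMAGE = auto()
    HEAL = auto()
    TARGETS_ALLY = auto()
    TARGETS_ENEMY = auto()
    IMMOBILIZES_CASTER = auto()

incinerate = SpellFlag.DAMAGE | SpellFlag.TARGETS_ENEMY
stored_value = incinerate.value          # integer you can put in the spell table

# Reading it back from the database:
loaded = SpellFlag(stored_value)
print(SpellFlag.DAMAGE in loaded)        # True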
I'm implementing a nonlinear SVM and I want to test my implementation on simple data that is not linearly separable. Google didn't help me find what I want. Can you please advise me where I can find such data? Or at least, how can I generate such data manually?
Thanks,
Well, SVMs are two-class classifiers--i.e., these classifiers place data on either side of a single decision boundary.
Therefore, I would suggest a data set comprised of just two classes (that's not strictly necessary, because of course an SVM can separate more than two classes by passing the classifier multiple times (in series) over the data, but it's cumbersome to do this during initial testing).
So for instance, you can use the iris data set, linked to in Scott's answer; it's comprised of three classes: Class I is linearly separable from Classes II and III, while Classes II and III are not linearly separable from each other. If you want to use this data set, for convenience's sake you might prefer to remove Class I (approximately the first 50 data rows), so what remains is a two-class system in which the two remaining classes are not linearly separable.
The iris data set is quite small (150 x 4, or 50 rows/class x four features); depending on where you are with your SVM prototype testing, this might be exactly what you want, or you might want a larger data set.
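A quick sketch of that pruning step, using scikit-learn's bundled copy of the iris data (just one convenient way to get it; the original answer refers to the raw data set):

# Sketch: keep only iris classes 1 and 2 (versicolor, virginica), which are not
# linearly separable, giving a small two-class test set for an SVM prototype.
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
mask = y != 0                      # drop class 0 (setosa), the linearly separable one
X_two, y_two = X[mask], y[mask]
print(X_two.shape)                 # (100, 4)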
An interesting family of data sets that are comprised of just two classes and that are definitely non-linearly separable are the anonymized data sets supplied by the mega-dating site eHarmony (no affiliation of any kind). In addition to the iris data, I like to use these data sets for SVM prototype evaluation because they are large data sets with quite a few features, yet still comprised of just two non-linearly-separable classes.
I am aware of two places from which you can retrieve this data. The first site has a single data set (PCI Code downloads, chapter9, matchmaker.csv) comprised of 500 data points (rows) and six features (columns). Although this set is simpler to work with, the data is more or less in a 'raw' form and will require some processing before you can use it.
The second source for this data contains two eHarmony data sets, one of which is comprised of over half a million rows and 59 features. In addition, these two data sets have undergone substantial processing, such that the only task required before feeding them to your SVM is routine rescaling of the features.
The particular data set you need will depend highly on your choice of kernel function, so it seems the easiest method is simply creating a toy data set yourself.
Some helpful ideas (a small generator sketch follows this list):
Concentric circles
Spiral-shaped classes
Nested banana-shaped classes
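A tiny NumPy generator for the second idea, two interleaved spirals, could look like this (a sketch; all parameters are arbitrary):

# Sketch: generate two interleaved spirals, a classic non-linearly-separable toy set.
import numpy as np

def two_spirals(n_per_class=200, noise=0.2, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.sqrt(rng.uniform(0, 1, n_per_class)) * 3 * np.pi   # angle along the spiral
    r = theta                                                     # radius grows with the angle
    spiral_a = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    spiral_b = -spiral_a                                          # second spiral, rotated 180 degrees
    X = np.vstack([spiral_a, spiral_b]) + rng.normal(0, noise, (2 * n_per_class, 2))
    y = np.hstack([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X, y = two_spirals()
print(X.shape, y.shape)    # (400, 2) (400,)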
If you just want a random data set which is not linearly separable, may I suggest the Iris dataset? It is a multivariate data set where at least a couple of the classes in question are not linearly separable.
Hope this helps!
You can start with simple datasets like Iris or two-moons both of which are linearly non-separable. Once you are satisfied, you can move on to bigger datasets from the UCI ML repository, classification datasets.
Be sure to compare and benchmark against standard SVM solvers like libSVM and SVM-light.
If you program in Python, you can use a few functions from the sklearn.datasets.samples_generator module to manually generate nested moon-shaped data sets, concentric circle data sets, etc. Here is a page of plots of these data sets.
And if you don't want to generate data sets manually, you can refer to this website, where in the "shape sets" section you can download these data sets and test on them directly.
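For example (a sketch; note that in recent scikit-learn versions these generators are imported from sklearn.datasets directly, as samples_generator has since been removed):

# Sketch: two standard non-linearly-separable toy sets from scikit-learn.
from sklearn.datasets import make_moons, make_circles

X_moons, y_moons = make_moons(n_samples=300, noise=0.1, random_state=0)
X_circles, y_circles = make_circles(n_samples=300, noise=0.05, factor=0.5, random_state=0)

print(X_moons.shape, X_circles.shape)   # (300, 2) (300, 2)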
I need to carry out a data capture exercise, which is looking like a large task that unfortunately may end up being done in Excel. I believe a database is more suitable, but its structure is probably very complicated.
I've created 4 categories per Unit (30 units). Each category has 8 graphs/dimensions. Each graph/dimension has a scale that I've visually broken down into 4 major interval points (Interval1, Interval2, etc.). I'm intending to put a figure in a box that represents the change against these 4 interval points. Therefore, 4 (categories) * 8 (dimensions) = 32. Then 32 * 4 (intervals) = 128. This means per unit I need to record 128 changes.
...and the best thing: there are 3 distinct scales. 4 of the graphs use one scale, 2 use another and the last 2 use a final one.
Like I said, this is a monster of a task and doing it in Excel is possible, but doesn't give me the flexibility I think I need when it comes to comparing the data.
30 Units (tblInventory)
4 Categories per Unit (tblCategories1, tblCategories2, etc)
8 Dimensions/Graphs per category (Dim1, Dim2, etc)
3 Scales (tblScale1, tblScale2, etc)
I'm trying to figure out where the actual data would be captured. Would I have a single table called tblIntervalData that is related to a linking table that connects to each of the 3 tblScales, which in turn are linked to the tblDimensions?
Below is a screen grab of what I've done, but it doesn't feel right. Your views and advice will be much appreciated.
A higher-resolution image can be seen here.
I can't see your pic behind my stupid corporate firewall, but...
A). Since Excel only really handles two (arguably three) dimensions of data at all well, it's very unlikely that skipping the DB route is the right call if you have any kind of relations to deal with.
B). Stop using Hungarian notation, by which I mean drop the "tbl" prefixes.
C). I'd agree that it sort of sounds like you want a table (or similar tables) "Intervals" (avoid the word "data": everything is data) which will have FK relationships to Units, Scales, etc., but it's hard for me to be sure without seeing your diagram, I think. Limited help, I know.
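To make that concrete, here is a sketch of one possible normalized layout; the table and column names are guesses based on the question, and Python's built-in sqlite3 is used only so the snippet runs without a server:

# Sketch of one possible normalized schema (names are guesses based on the question).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Units      (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Scales     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Dimensions (id INTEGER PRIMARY KEY,
                         category_id INTEGER REFERENCES Categories(id),
                         scale_id    INTEGER REFERENCES Scales(id),
                         name TEXT);
-- one row per unit/dimension/interval combination: 30 * 32 * 4 = 3840 rows in total
CREATE TABLE Intervals  (id INTEGER PRIMARY KEY,
                         unit_id      INTEGER REFERENCES Units(id),
                         dimension_id INTEGER REFERENCES Dimensions(id),
                         interval_no  INTEGER,    -- 1..4
                         change       REAL);
""")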
In relation to my previous question where I was asking for some database suggestions, it just occurred to me that I don't even know whether what I'm trying to store there is appropriate for a database, or whether some other data storage method should be used.
I have some physical model testing data (let's say wind tunnel data, or something similar) where for every model (M-1234) I have:
name (M-1234)
length L
breadth B
height H
L/B ratio
L/H ratio
...
lot of other ratios and dimensions ...
force versus speed curve given in the form of a lot of points for x-y plotting
...
few other similar curves (all of them of type x-y).
Now, what I'm trying to accomplish is to store all that in some reasonable way, so that the user of the database can come and see which ten models are closest to, say, L/B = 2.5 (or some similar query), and then somehow get all the data of those models, including the curve data (in a plain-text file format).
Is an SQL database (or any other, for that matter) an appropriate way of handling something like this? Or should I take some other approach?
I have about a month to finish this, and in that time I have to learn enough about databases as well, so ... give your suggestions, please, bearing that in mind. Assume no previous knowledge on the subject, whatsoever.
I think what you're looking for is possible. I'm using PostgreSQL here, but any database should work. This is my test database:
CREATE TABLE test (
id serial primary key,
ratio double precision
);
COPY test (id, ratio) FROM stdin;
1 0.29999999999999999
2 0.40000000000000002
3 0.59999999999999998
4 0.69999999999999996
\.
Then, to find the nearest values to a particular ratio
select id,ratio,abs(ratio-0.5) as score from test order by score asc limit 2;
In this case, I'm looking for the 2 nearest to 0.5
I'd probably do a data model where you have one table for the main data (the ratios and so on), and then a second table which holds the curve points, as I'm assuming that the curves aren't always the same size.
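A sketch of that two-table layout (names are illustrative; sqlite3 is used here only so the snippet runs without a server, but essentially the same SQL works in PostgreSQL):

# Sketch: one row per model, many curve-point rows per model.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE model (
    id       INTEGER PRIMARY KEY,
    name     TEXT,            -- e.g. 'M-1234'
    length   REAL,
    breadth  REAL,
    lb_ratio REAL             -- L/B, stored so it can be searched on
);
CREATE TABLE curve_point (
    model_id INTEGER REFERENCES model(id),
    curve    TEXT,            -- e.g. 'force_vs_speed'
    x        REAL,
    y        REAL
);
""")

# Ten models closest to L/B = 2.5, same idea as the query in the previous answer:
rows = con.execute(
    "SELECT id, name, ABS(lb_ratio - 2.5) AS score FROM model ORDER BY score LIMIT 10"
).fetchall()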
Yes, a database is probably the best approach for this.
A relational database (which usually uses SQL for data access) is suitable for data that is more or less structured as tables.
To give you an idea:
You could have a main table model with fields name, width, etc. Then subtable(s) for any values which can appear more than once, each referring back to model (look up "foreign key").
Then a subtable for your actual curves, again referring back to model.
How to actually model the curves in the DB I don't know, as I don't know how you model them. But if it's lots of numbers, it can go into the DB.
It seems you know little about relational DBMSs. Consider reading something on Wikipedia, or doing a few simple DBMS tutorials (PostgreSQL has some: http://www.postgresql.org/docs/8.4/interactive/tutorial.html, but there are many others). Then pick a DBMS to try out (PostgreSQL is probably not a bad choice, but again there are many others).
Then try implementing a simple table schema, and get back to us with any detail questions (which you'll probably have).
One more thing: Those questions are probably more appropriate to serverfault.com.
This is arguably scientific data: you might find libraries/formats intended for arbitrary scientific data useful, e.g. HDF5 (http://www.hdfgroup.org/) (note: I am not an expert).
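For completeness, a sketch of what the HDF5 route could look like with the h5py package (group, attribute and dataset names are made up; one group per model is just one possible layout):

# Sketch: one HDF5 group per model, scalar ratios as attributes, curves as datasets.
import numpy as np
import h5py

with h5py.File("models.h5", "w") as f:
    grp = f.create_group("M-1234")
    grp.attrs["L"] = 12.0
    grp.attrs["B"] = 4.8
    grp.attrs["L/B"] = 2.5
    # force-versus-speed curve stored as an N x 2 array of (x, y) points
    curve = np.array([[0.0, 0.0], [1.0, 2.1], [2.0, 4.4]])
    grp.create_dataset("force_vs_speed", data=curve)

# Reading it back:
with h5py.File("models.h5", "r") as f:
    print(dict(f["M-1234"].attrs))
    print(f["M-1234"]["force_vs_speed"][:])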