I've started learning recommendation-system theory and want to move on to practice. Unfortunately, I've run into the following problem: I want to test my code on a reasonably large data set (around 100 MB to 1 GB), but I haven't found any open sources for such data. I did find a few test-data generators, but I'd rather have an example of "real life data" than purely artificial data.
Where can I find such data sets?
Thanks in advance!
I used BERT document embeddings to perform information retrieval on the CACM dataset and achieved a very low accuracy score of around 6%. However, when I used the traditional BM-25 method, the result was much closer to 40%, which is about the average accuracy reported in the literature for this dataset. This is all being done within Apache Solr.
I also attempted information retrieval with Doc2Vec and achieved similarly poor results as with BERT. Is it not advisable to use document embeddings for IR tasks such as this one?
Many people find document embeddings work really well for their purposes!
If they're not working for you, possible reasons include:
insufficient training data
problems in your unshown process
end goals (what's your idea of 'accuracy'?) that differ from others'
It's impossible to say what's affecting your process, and your perception of its usefulness, without far more detail on what you're aiming to achieve and what you're actually doing.
Most notably, if there's other published work using the same dataset and a similar definition of 'accuracy', and it claims far better results with the same methods that give worse results for you, then it's more likely that there are errors in your implementation.
You'd have to name the results you're trying to match (ideally with links to the exact write-ups) and show the details of what your code does for others to have any chance of guessing what's happening for you.
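As a sanity check, it could also help to compare your Solr pipeline against a minimal dense-retrieval baseline run outside Solr. Something like the sketch below, where the model name and the toy corpus are placeholders, since we haven't seen your setup:

```python
# A minimal dense-retrieval baseline (a sketch, not your pipeline): embed
# documents and queries with a sentence-transformers model, rank by cosine
# similarity, and compare the ranking quality against BM-25 on the same data.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus; substitute the CACM documents and queries here.
docs = ["algorithms for parsing context-free languages",
        "time sharing operating system scheduling",
        "evaluation measures for information retrieval systems"]
queries = ["evaluation of IR systems"]

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder model choice
doc_vecs = model.encode(docs)                     # shape: (n_docs, dim)
query_vecs = model.encode(queries)                # shape: (n_queries, dim)

scores = cosine_similarity(query_vecs, doc_vecs)  # one row per query
ranking = scores[0].argsort()[::-1]               # best-matching docs first
print([docs[i] for i in ranking])
```

If even this kind of bare-bones setup beats what you see inside Solr, the problem is more likely in how the embeddings are indexed or scored there than in the embeddings themselves.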
I've just started my residency as a radiation oncologist. I have a little background in programming (Python, VBA).
I'd like your insights on an issue I have at work.
The issue: For each patient, the radiation oncologist needs to do contouring. Basically, he contours the main structures (like the aorta, the heart, and the lungs) on a CT scan. This is essential for computing the spatial distribution of the radiation (because you want to avoid those structures). The contouring is done within 3rd-party software (called Isogray). The CT scans come from the hospital database, and the radiation distribution is computed in another program.
It takes at least one hour to do a complete contouring. Multiply that by each patient (maybe a dozen per week) and by each oncologist (we are a team of 15) and you can see that it represents hundreds (maybe even thousands) of man-hours every year.
Software exists that does this automatically, but the hospital doesn't want to rent or buy it. But seriously, how hard can a little automation be? Can't I do this myself?
My plan of action: Here I'd like your insights. How can I automate this task? The first thing is that I can't change anything within Isogray, so I need to do the automation externally. What I think I should do:
Create a database of the historical contourings: this means I need to be able to read the files Isogray produces as output (see the sketch after this list)
Design an automatic model: I'm thinking deep learning models here. I don't know if there's a better option than training a deep learning model on the contoured CT scans I already have
Create a small piece of software: based on the automatic model, it will take a 'not contoured' Isogray file and turn it into a 'contoured' file. The oncologist then only needs to load the new file into Isogray and validate the contouring
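A sketch of what step 1 could look like, assuming Isogray can export standard DICOM-RT structure sets (common for radiotherapy planning software, but something I would need to verify); the file name is a placeholder:

```python
# Read the organ contours out of a hypothetical DICOM-RT structure set export.
import pydicom
import numpy as np

ds = pydicom.dcmread("patient_001_rtstruct.dcm")   # hypothetical export

# Map ROI numbers to organ names (aorta, heart, lungs, ...)
roi_names = {roi.ROINumber: roi.ROIName for roi in ds.StructureSetROISequence}

# Collect the contour points for each organ as (N, 3) arrays of x, y, z in mm
contours = {}
for roi_contour in ds.ROIContourSequence:
    name = roi_names[roi_contour.ReferencedROINumber]
    points = [np.array(c.ContourData).reshape(-1, 3)
              for c in roi_contour.ContourSequence]
    contours[name] = points

print({name: len(slices) for name, slices in contours.items()})
```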
What do you think? Do you see an easier way to do it? I don't know anything about Isogray (I just know how to use it). Do you think this is doable? What information do I need before I start this project?
Any insights will be welcomed :)
From what I understand, this is a semantic segmentation problem.
You have an input image with N channels (or grayscale) and you use a neural network to indicate which regions correspond to a specific organ.
You can use an architecture like the U-Net for this task: https://medium.com/#keremturgutlu/semantic-segmentation-u-net-part-1-d8d6f6005066
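If you go that route, a stripped-down PyTorch sketch of the idea (a single encoder/decoder stage with one skip connection; the real U-Net stacks several, and this is not the exact architecture from the linked article) looks like this:

```python
# A minimal U-Net-style segmentation model: downsample, process, upsample,
# concatenate with the skip connection, and predict a per-pixel class map.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.enc = double_conv(1, 32)            # CT slices are single-channel
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = double_conv(64, 32)           # 32 upsampled + 32 from skip
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([u, e], dim=1)))

# One class per structure (aorta, heart, lungs, ...) plus background.
logits = TinyUNet(n_classes=5)(torch.randn(1, 1, 128, 128))  # (1, 5, 128, 128)
```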
What I do not know is whether the reliability would be high enough; that depends on many factors.
Neural networks look for discriminating patterns to separate regions, and the most important cues are shape and color. That is why it is harder when the color and shape of a structure vary a lot.
On the other hand, you will need a lot of images, but you can use a process called data augmentation to generate more (artificial) ones, as sketched below.
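For segmentation, the important detail is that the same random geometric transform has to be applied to the image and to its contour mask; a sketch with torchvision (the sizes are placeholders):

```python
# Joint augmentation of a CT slice and its segmentation mask: the same flip
# and rotation are applied to both so the labels stay aligned.
import random
import torch
import torchvision.transforms.functional as TF

def augment(image, mask):
    """image and mask are (C, H, W) tensors for one slice."""
    if random.random() < 0.5:           # random horizontal flip
        image, mask = TF.hflip(image), TF.hflip(mask)
    angle = random.uniform(-10, 10)     # small random rotation in degrees
    image = TF.rotate(image, angle)
    mask = TF.rotate(mask, angle)       # nearest-neighbour by default
    return image, mask

img, msk = augment(torch.rand(1, 128, 128), torch.zeros(1, 128, 128))
```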
Another approach currently used is to work in reverse: we know image segmentation is hard, but you can design a program that simulates realistic images for which the segmentation is known perfectly.
These are only a few key points; I hope this helps.
EDIT:
Semantic segmentation in a biomedical context: https://towardsdatascience.com/review-u-net-biomedical-image-segmentation-d02bf06ca760
You need to provide more background on the specifics of the contouring, especially given that this is for medical diagnosis. Truthfully, I wouldn't try to automate this, for liability reasons.
If you make an error, it could cause a misdiagnosis, which as you already know can lead to numerous problems, including lawsuits and death. The nice thing about 3rd-party products is that they have already been tested robustly against numerous scenarios and approved for medical use, for liability reasons.
I'm pretty sure you could make a master's thesis out of something like this.
That being said, there is a nice GitHub repo for problems like this that I think you could start generating ideas from.
I have to develop a personality/job-suitability online test for an HR department. Basically, users will answer questions on a scale of 0-10, for example, and after, say, 50 questions, I want to translate that into a rating on 5 different personality/job-suitability characteristics.
I don't have any real data to start with, so first: is it even worth using a recommendation engine like MyMediaLite (on GitHub)? How many samples will I need to train it to decent performance?
I previously built a training-course recommender by simply doing a hand-weighted sum, where each question increased the weight of several courses related to that question. It was an expert system, built like a feed-forward neural network, where I personally tuned all the weights based on my knowledge of the questions and the courses' content.
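In code, that earlier approach boiled down to something like this (the shapes and weights here are made up for illustration):

```python
# A sketch of the hand-weighted-sum "expert system": each 0-10 answer
# multiplies a hand-tuned weight per course; courses are ranked by total score.
import numpy as np

n_questions, n_courses = 50, 8
weights = np.random.rand(n_questions, n_courses)   # in reality, hand-tuned
answers = np.random.randint(0, 11, size=n_questions)

course_scores = answers @ weights                  # one score per course
recommended = course_scores.argsort()[::-1][:3]    # top-3 courses
print(recommended)
```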
This time around I would like to use a recommender system, but I'm wondering how many times I would have to take the 50-question test and then assign the results manually. Would 100 examples do? That could be possible. 1000 would take too long. How can I know ahead of time?
Though it may not be the answer you want, it isn't possible to give a definite number. You should focus on the learning curve as you add new samples.
You can process the samples by hand and with the engine in parallel and compare the results from both. Once the measurements (e.g. recall and precision) of the engine's results meet your expectations, you have enough samples.
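As a sketch of what watching that learning curve could look like (scikit-learn stands in for MyMediaLite here, and random data stands in for the hand-labelled tests):

```python
# Track how validation performance changes as labelled samples accumulate.
# Random data stands in for the 50 answers (X) and 5 trait ratings (y);
# Ridge regression stands in for whatever model is actually used.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.integers(0, 11, size=(200, 50)).astype(float)  # 200 completed tests
y = rng.random((200, 5))                                # 5 trait ratings each

sizes, train_scores, val_scores = learning_curve(
    Ridge(), X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# When the validation score stops improving as the training size grows,
# collecting more hand-labelled samples is unlikely to help much.
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(n, round(score, 3))
```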
Hope this is helpful!
I'm a grad student in astrophysics. I run big simulations using codes mostly developed by others over a decade or so. For examples of these codes, you can check out gadget http://www.mpa-garching.mpg.de/gadget/ and enzo http://code.google.com/p/enzo/. Those are definitely the two most mature codes (they use different methods).
The outputs from these simulations are huge. Depending on your code, your data is a bit different, but it's always big data. You usually take billions of particles and cells to do anything realistic. The biggest runs are terabytes per snapshot and hundreds of snapshots per simulation.
Currently, it seems that the best way to read and write this kind of data is to use HDF5 http://www.hdfgroup.org/HDF5/, which is basically an organized way of using binary files. It's a huge improvement over unformatted binary files with a custom header block (they still give me nightmares), but I can't help but think there could be a better way to do this.
I imagine the sheer data size is the issue here, but is there some sort of datastore that can handle terabytes of binary data efficiently, or are binary files the only way at this point?
If it helps, we typically store data columnwise. That is, you have a block of all particle IDs, a block of all particle positions, a block of particle velocities, etc. It's not the prettiest, but it is the fastest for doing something like a particle lookup in some volume.
edit: Sorry for being vague about the issues. Steve is right that this might just be an issue of data structure rather than the data storage method. I have to run now, but I will provide more details late tonight or tomorrow.
edit 2: So the more I look into this, the more I realize that this probably isn't a datastore issue anymore. The main issue with unformatted binary was all the headaches reading the data correctly (getting the block sizes and order right and being sure about it). HDF5 pretty much fixed that and there isn't going to be a faster option until the file system limitations are improved (thanks Matt Turk).
The new issues probably come down to data structure. HDF5 is as performant as we can get, even if it is not the nicest interface to query against. Being used to databases, I thought it would be really interesting/powerful to be able to query something like "give me all particles with velocity over x at any time". You can do something like that now, but you have to work at a lower level. Of course, given how big the data is and depending on what you are doing with it, it might be a good thing to work at a low level for performance's sake.
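For anyone curious, the columnwise layout and a "velocity above x" style query look roughly like this with h5py (toy sizes, and the dataset names are illustrative rather than the real snapshot format):

```python
# Write a toy columnwise snapshot, then select particles above a speed cut.
import h5py
import numpy as np

n = 1_000_000
with h5py.File("snapshot_000.hdf5", "w") as f:
    f.create_dataset("PartType0/ParticleIDs", data=np.arange(n, dtype=np.uint64))
    f.create_dataset("PartType0/Coordinates", data=np.random.rand(n, 3))
    f.create_dataset("PartType0/Velocities", data=np.random.randn(n, 3))

with h5py.File("snapshot_000.hdf5", "r") as f:
    v = f["PartType0/Velocities"][:]             # read the whole column
    speed = np.linalg.norm(v, axis=1)
    fast_ids = f["PartType0/ParticleIDs"][:][speed > 3.0]
    print(len(fast_ids), "particles above the speed cut")
```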
MongoDB: http://www.mongodb.org/
Netezza data warehouse appliance products: http://www.netezza.com/data-warehouse-appliance-products/skimmer.aspx
Hadoop: http://hadoop.apache.org/
Wikipedia's list of distributed file systems: http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_file_systems
EDIT
Rationale for my lack of explanation / etc.:
OP says: "[HDF5]'s a huge improvement over unformatted binary files with a custom header block (still give me nightmares), but I can't help but think there could be a better way to do this."
What does "better" mean? Better structured? He seems to allude to the "unformatted binary files" as being an issue - so maybe that's what he means by better. If so, he'll need something with some structure - hence the first couple suggestions.
OP says: "I imagine the sheer data size is the issue here, but is there some sort of datastore that can handle terabytes of binary data efficiently, or are binary files the only way at this point?"
Yes, there are several. Both structured, and "unstructured" - does he want structure, or is he happy to leave them in some sort of "unformatted binary format"? We still don't know - so I suggest checking out some Distributed File Systems.
OP says: "If it helps, we typically store data columnwise. That is, you have a block of all particle id's, block of all particle positions, block of particle velocites, etc. It's not the prettiest, but it is the fastest for doing something like a particle lookup in some volume."
Again, does the OP want better structure, or doesn't he? It seems like he wants both - better structure AND faster... maybe scaling OUT will give him this. This further reinforces the first few options I listed.
OP says (in comments): "I don't know if we can take the hit on io though."
Are there IO requirements? Cost restrictions? What are they?
We can't get something for nothing here. There is no "silver-bullet" storage solution. All we have to go on here for requirements is "lots of data" and "I don't know if I like the lack of structure, but I'm not willing to increase my IO to accommodate any additional structure"... so I don't know what kind of answer he's expecting. He hasn't listed a single complaint about the current solution he has other than the lack of structure - and he's already said he's not willing to pay any overhead to do anything about that... so.... ?
Are there any good technical solutions for extremely long term archiving of data, for example for 25 to 100 years?
Somehow I just don't have a lot of confidence that a SQL 2000 backup file will be usable in court cases or for historians in 25 to 100 years.
This is a customer requirement, not just speculation.
This is comparable to trying to do something useful with a backup from ENIAC or reading Atari Writer word-processing files. The hardware doesn't necessarily exist anymore, the storage media is likely corrupt, the professionals who used the technology probably don't exist anymore, etc.
Actually, printing on acid-free paper is probably a much better solution than any more advanced technological one. It is much more likely that the IT of 100 years from now will be able to high-speed scan and load printed pages than read digital data that depends on 100-year-old media-access hardware, technology, and standards, 100-year-old disk/file format standards, and 100-year-old data encoding standards.
Disagree? I've got a whole attic full of vinyl records, CDs, 8-tracks, cassette tapes, and floppy disks (four different densities!) that argue otherwise. And they are only 20 years old! (OK, the 8-tracks are closer to 30.)
The fact is that only one data storage and archiving technology has ever withstood the test of time over 100 years or more and still been cost-effectively retrievable, and that's writing/printing on physical media.
My advice? Don't trust any archival strategy until it's been tested, and there's only one that has passed the 100-year test so far.
You'll need to convert to text - perhaps XML.
Then upload it to the cloud, make archival copies etc.
I think you need to pick a multi-modal approach.
If you have the budget: http://www.archives.gov/era/papers/thic-04.html
<joke>Print it.</joke>
Script the data into flat files (either one file per table or multiple tables summarized into one file) and write them to high-end archival CDs. In 100 years they will have to load this data into whatever "database" they have, so some manual conversion will be necessary; a nice schema script dumped into a single file would help the poor guy trying to read these files and make the proper joins.
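As a sketch of the kind of export script I mean (Python, with SQLite standing in for whatever database server is actually involved):

```python
# Dump every table to its own CSV file, plus one schema file so whoever reads
# this in 100 years can rebuild the tables and the joins.
import csv
import sqlite3

conn = sqlite3.connect("legacy.db")              # placeholder database
cur = conn.cursor()

tables = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

for table in tables:
    rows = cur.execute(f"SELECT * FROM {table}")
    with open(f"{table}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([col[0] for col in rows.description])  # header row
        writer.writerows(rows)

with open("schema.sql", "w") as out:             # CREATE TABLE statements only
    for (ddl,) in cur.execute(
            "SELECT sql FROM sqlite_master WHERE type='table'"):
        out.write(ddl + ";\n")

conn.close()
```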
EDIT
Offer the client a service contract where you make sure they are up to date with the latest archival technology on a yearly basis. This could be a good thing $
I suggest you consult a specialist company in this field.
You might also be interested in this article:
Strategies for long-term data retention
It might help to speak to one of those companies/organisations.
I don't know if anyone still reads this thread, but there is a really good solution for this.
There is a new company called Millenniata that has a product called M-Disc. The M-Disc is essentially a DVD made out of rock-like materials, which give it an estimated shelf life of 1,000+ years. You have to have a special DVD burner to burn the discs, but it is not that expensive, and any normal DVD reader can read them. A professor of mine at BYU helped form this company; it is some pretty cool technology. Good luck.
Link to M-Disc Website