I am working with data from the Mathematics Genealogy Project. I collect all the information about students and advisors and run queries on that data. To be precise, I crawl all the HTML pages starting from the root URL of the Mathematics Genealogy Project, http://www.genealogy.ams.org/, collect the information I need, and query it. For experimental purposes, I need more data available on the web in a similar format.
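For context, the crawl is roughly along the lines of the sketch below; the id.php?id=N URL pattern, the h2 heading, and the other parsing details are illustrative assumptions rather than a description of the site's actual markup.

```python
# Rough crawl sketch: follow links between records starting from one page and
# collect names. The URL pattern and parsing are illustrative assumptions.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE = "http://www.genealogy.ams.org/"

def crawl(start_id, limit=100):
    queue = deque([urljoin(BASE, f"id.php?id={start_id}")])
    seen, records = set(), {}
    while queue and len(records) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        heading = soup.find("h2")                    # assumed: the mathematician's name
        records[url] = heading.get_text(strip=True) if heading else None
        for a in soup.find_all("a", href=True):      # queue links to other records
            if "id.php?id=" in a["href"]:
                queue.append(urljoin(BASE, a["href"]))
    return records

# records = crawl(start_id=1)  # start from some record id
```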
Can anybody suggest good websites that I could crawl for interesting information? Data other than genealogy is also welcome, but it should have at least some hierarchy.
Thanks for all your suggestions.
There is a list of such sites at http://en.wikipedia.org/wiki/Academic_genealogy. For instance, http://academictree.org/.
I feel like I should almost give a synopsis to these lengthy questions.
I apologize if all of these questions have been answered specifically in a previous question/answer post, but I have been unable to locate any that specifically addresses all of the following queries.
This question involves data extraction from the web (i.e. web scraping, data mining, etc.). I have spent almost a year researching these fields and how they can be applied to a certain industry. I have also familiarized myself with PHP and MySQL/phpMyAdmin.
In a nutshell, I am looking for a way to extract information from a site (probably several gigabytes' worth) as fast and efficiently as possible. I have tried web scraping programs like Scrapy and WebHarvy, and I have also experimented with programs like HTTrack. All have their strengths and weaknesses. I have found that WebHarvy works pretty well, but it has limitations when scraping images stored in gallery widgets, and many of the sites I am extracting from use other methods that make mining the data a pain. It would take months to extract the data using WebHarvy, which I can't complain about too much, given that I'd be extracting millions of rows of data exported in CSV format into Excel. But again, images and certain AJAX widgets throw the program off when extracting image files.
So my questions are as follows:
Are there any quicker ways to extract said data?
Is there any way to get around the WebHarvy image limitations (i.e. only being able to extract one image within a gallery widget, and not being able to follow sub-page links on sites that embed things oddly and try to get cute with their coding)?
Are there any ways to bypass site search form parameters that limit the number of search results (i.e. obtaining all business listings within an entire state instead of being limited to one county per search by the form's restrictions)?
Also, this is public information, so it cannot be copyrighted; anybody can take it :) (case in point: Feist Publications v. Rural Telephone Service). Extracting information is extracting information. It's legal to extract as long as we are talking about facts/public information.
So with that said, wouldn't the most efficient method (grey area here) of extracting this "public" information (assuming vulnerabilities existed) be through the use of SQL injection... if one were so inclined? :)
As a side question, just how effective is Tor at obscuring one's IP address? Lol
Any help, feedback, suggestions or criticism would be greatly appreciated. I am by no means an expert in any of the above mentioned fields. I am just a motivated individual with a growing interest in programming and automation who has a lot of crazy ideas. Thank you.
You may be better off writing your own Linux command-line scraping program using either a headless browser library like PhantomJS (JavaScript), or a test framework like Selenium WebDriver (Java).
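For example, here is a minimal headless scrape along those lines using Selenium's Python bindings (the Java and JavaScript options work the same way); the target URL and CSS selector are placeholders, not anything from the question:

```python
# Minimal headless-browser scrape sketch using Selenium's Python bindings.
# The target URL and CSS selector are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless")            # run without a visible browser window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://example.com/listings")                 # placeholder URL
    rows = driver.find_elements(By.CSS_SELECTOR, ".listing")   # placeholder selector
    for row in rows:
        print(row.text)   # rendered text, including JavaScript-generated content
finally:
    driver.quit()
```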
Once you have your scraping program completed, you can scale it up by installing it on a cloud server (e.g. Amazon EC2, Linode, Google Compute Engine or Microsoft Azure) and duplicating the server image across as many instances as required.
I'm with a company that is building a venue/artist database for live music, and I recently came across Freebase. It looks very compelling, even if the data isn't there for new, up-and-coming bands. For those of you who have worked with Freebase, I have a couple of questions:
Are there downsides to integrating all of the data entry with Freebase? We are not looking to sell or privatize this information.
What are the weaknesses of Freebase, with regards to usability?
Disclosure: I work on Freebase at Google.
The music data in Freebase is one of our strongest areas and is going to continue to get broader and richer as we continue to load more datasets. For example, we import data from MusicBrainz, clean it up and match the topics against existing topics in Freebase to avoid duplicates.
In terms of downsides, you should be prepared to work with a lot of data. For example, Freebase currently has 4 musical artists named "John Smith", which may or may not be an issue for your application, but you'll still need to figure out which one(s) map to the John Smith that your users are interested in. We call this "reconciliation", and it's necessary so that your app knows precisely which topics to query the API for.
Since you mentioned music venues I should also point out that while Freebase has a lot of data about places, we don't yet have a geosearch API so you'd need to roll your own if that's something you need.
Since anyone can edit Freebase, you should also consider using as_of_time to protect your site against vandalism.
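To make the reconciliation and as_of_time points concrete, here is a rough sketch of an MQL read call for the "John Smith" example; the endpoint and parameter names follow the Freebase v1 mqlread API, and the API key and date are placeholders:

```python
# Sketch: list musical artists named "John Smith" so the app can decide which
# mid (machine id) it actually wants. The key and as_of_time value are placeholders.
import json
import requests

query = [{
    "type": "/music/artist",   # restrict to musical artists
    "name": "John Smith",
    "mid": None,               # ask Freebase to fill in the machine id
}]

params = {
    "query": json.dumps(query),
    "key": "YOUR_API_KEY",          # placeholder
    "as_of_time": "2013-01-01",     # pin results to a point in time (vandalism guard)
}
resp = requests.get("https://www.googleapis.com/freebase/v1/mqlread", params=params)
for topic in resp.json().get("result", []):
    print(topic["mid"], topic["name"])
```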
Freebase is great for developers because you can easily jump in and clean up bad data or add missing topics. However, one area that has always been a challenge is loading large amounts of data from outside of Google. We've built OpenRefine, which allows folks to upload datasets, but these datasets must pass a QA process that takes some time to complete. It's necessary to have these QA processes to maintain the level of quality in Freebase, but it does slow down the loading of large datasets.
I really hope that you choose to make use of Freebase music data to build your company. I know that there are already a number of music startups happily using our data.
I hope someone can help me out with this topic, even if it's not a specific programming question.
I'm writing a bachelor thesis in which I compare MySQL to MongoDB, and I want to write something about YouTube, as the platform has to handle many requests with a heavy data load.
The only good resource which I found was this video: Seattle Conference on Scalability: YouTube Scalability
As the conference was in 2007, I imagine there have been some updates to the database since then.
The last information I have from this talk is that the thumbnails are stored in a BigTable database and the metadata in MySQL. Have there been any changes since then?
Where are the videos stored? Is there an entry in a MySQL table that refers to the stored video?
Thanks in advance for the answer!
According to this, YouTube still uses MySQL: http://code.google.com/p/vitess/wiki/ProjectGoals
I am not sure how things are at YouTube, but I am in the process of developing a similar application for our client. What we are doing is making use of the best of both worlds, i.e. SQL and NoSQL.
We store the videos on disk and keep the paths to these videos in a MySQL table. Then we have a separate table which holds the genre/video mapping, i.e. which video belongs to which genre.
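Roughly, the relevant part of the schema looks like the sketch below; the table and column names are illustrative, not our actual schema, and SQLite stands in for MySQL just to keep the sketch self-contained:

```python
# Illustrative schema: videos stores the on-disk path, video_genres maps
# videos to genres. SQLite is used here only to keep the sketch runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE videos (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    file_path TEXT NOT NULL              -- path to the video file on disk
);
CREATE TABLE genres (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE video_genres (              -- which video belongs to which genre
    video_id INTEGER REFERENCES videos(id),
    genre_id INTEGER REFERENCES genres(id),
    PRIMARY KEY (video_id, genre_id)
);
""")
```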
Today, with a vast pool of user data, we are in a position to leverage that data like never before, so things are now very different than in 2007. With the popularity of, and people's dependency on, sites like YouTube, we have vast sets of unstructured data which, if used properly, can give great results. So in our project we store the site admin and reporting data (user db, video locations, genre mappings, etc.) in MySQL, and we store the unstructured data about user interaction in a NoSQL database. We then use the NoSQL data to do all the analytics and serve appropriate results to the user.
They are using MySQL together with a big data store.
The user information, such as who uploaded a file, and the file metadata are stored in MySQL, while the data itself is stored in the big data store.
I think they are using a database that supports FileTable.
I want to store crawled sites (HTML code) in a database. There will be millions of sites, and I will be searching those sites for particular strings.
Right now I am using PostgreSQL, but I have doubts whether a relational database is the right fit. Maybe some NoSQL solution?
What solution do you recommend?
I have used Apache Nutch for the same purpose (crawling, storing and searching millions of sites) with success. It is based on Lucene and it scales (thanks to Hadoop).
It does the work out of the box.
http://nutch.apache.org/
http://lucene.apache.org/
After you fetch a web page, you need to strip out the extra, low-value content (ads, unrelated text, ...). Using this strategy you will decrease the page size you have to store in the database, and your search results will contain more relevant information.
I suggest you write a program that extracts the valuable information and stores it in the database (if you don't need the original page); after that you can build a Lucene index on top of it to search your information.
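A rough sketch of that extraction step, assuming requests and BeautifulSoup and a simplistic list of boilerplate tags; the cleaned text is what you would store and feed to the Lucene index:

```python
# Strip boilerplate (scripts, styles, navigation) from a fetched page before
# storing it. The tag list is a simplifying assumption; real pages need more rules.
import requests
from bs4 import BeautifulSoup

def extract_main_text(url):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()                    # drop elements with little searchable value
    return " ".join(soup.get_text(separator=" ").split())   # whitespace-normalized text

# Example: store extract_main_text(url) instead of the raw HTML.
```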
If you want more accurate results, you can analyze each page and store some features (content direction, category, links to external resources, the ratio of valuable information to all text, ...) to compute a rank for the page; these are text-mining techniques.
My application needs to retrieve information about any published book based on a provided ISBN, title, or author. This is hardly a unique requirement: sites like Amazon.com, Chegg.com, and even software like Book Collector seem to be able to do this easily. But I have not been able to replicate it.
To clarify, I do not need to search the entire database of books, only a limited subset that has been entered, as in a book collection. The database would simply allow me to tag the entered books with the necessary metadata to enable search on that subset of books. So scale is not the issue here; getting the metadata is.
The options I have tried are:
Scrape Amazon. Scraping the regular Amazon pages was not very robust to things like missing authors, and while scraping the smaller mobile pages was faster, they shared the same issues with robustness of extraction. Plus, building this into an application is a clear violation of Amazon's Terms of Service.
Scrape the Library of Congress. While this seems to have fewer legal ramifications, ease and robustness were again issues.
ISBNdb.com API. While the service is free up to a point, and does a good job of returning the necessary metadata, I need to do this for over 500 books on a daily basis, at which point this service costs money proportional to use. I'd prefer a free or one-time payment solution that allows me to do the same.
Google Book Data API. While this seems to provide the information I need, I cannot display the book preview as their terms of service requires.
Buy a license to a database of books. For example, companies like Ingram or Baker & Taylor provide these catalogs to retailers and libraries. This solution is obviously expensive, so I'm hoping that there's a more elegant solution I've missed. But if not, and someone on SO has had a good experience with a particular database, I'm willing to go with that.
I've tried to describe my approach in detail so others with fewer books can take advantage of the above solutions. But given my requirements, I'm at my wits' end for retrieving book metadata.
Since it is unlikely that you have to retrieve the same 500 books every day: store the data retrieved from isbndb.com in a database and fill it up book by book.
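A minimal sketch of that cache-aside approach; fetch_from_isbndb is a hypothetical helper standing in for the actual isbndb.com call, and the table layout is just an example:

```python
# Cache-aside sketch: metadata already fetched is kept in a local table, so the
# paid API is hit at most once per ISBN. fetch_from_isbndb() is hypothetical.
import sqlite3

conn = sqlite3.connect("books.db")
conn.execute("""CREATE TABLE IF NOT EXISTS books (
                    isbn    TEXT PRIMARY KEY,
                    title   TEXT,
                    authors TEXT)""")

def get_book(isbn, fetch_from_isbndb):
    row = conn.execute("SELECT isbn, title, authors FROM books WHERE isbn = ?",
                       (isbn,)).fetchone()
    if row:
        return row                               # cached: no API call needed
    title, authors = fetch_from_isbndb(isbn)     # hypothetical API helper, called once per ISBN
    conn.execute("INSERT INTO books VALUES (?, ?, ?)", (isbn, title, authors))
    conn.commit()
    return (isbn, title, authors)
```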
Instead of scraping Amazon, you can use the API they expose for their affiliate program: https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
It allows about 3k requests per hour and returns well-formed XML. It requires you to set a link to the book that you show the information about, and you must state that you are an affiliate partner.
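One way to call it from Python is through a request-signing helper such as bottlenose; treat this as a sketch with placeholder credentials and ISBN, and check the exact parameters and response fields against the Product Advertising API documentation:

```python
# Sketch: ISBN lookup through the Product Advertising API via bottlenose, which
# handles request signing. Credentials and the ISBN are placeholders.
import bottlenose

amazon = bottlenose.Amazon("ACCESS_KEY", "SECRET_KEY", "ASSOCIATE_TAG")

# ISBN lookups require SearchIndex="Books"; the response comes back as raw XML to parse.
xml = amazon.ItemLookup(ItemId="9780321751041",
                        IdType="ISBN",
                        SearchIndex="Books",
                        ResponseGroup="ItemAttributes")
print(xml[:500])
```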
This might be what you're looking for. They even offer a complete download!
https://openlibrary.org/data
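For individual lookups, the Open Library Books API can also be queried by ISBN; here is a small sketch (the field names follow the jscmd=data response format, so treat them as assumptions to verify):

```python
# Sketch: look up a book's metadata on Open Library by ISBN.
import requests

def openlibrary_lookup(isbn):
    resp = requests.get(
        "https://openlibrary.org/api/books",
        params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
        timeout=30,
    )
    data = resp.json().get(f"ISBN:{isbn}", {})
    return {
        "title": data.get("title"),
        "authors": [a.get("name") for a in data.get("authors", [])],
        "publish_date": data.get("publish_date"),
    }

print(openlibrary_lookup("9780140328721"))
```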
It seems that a lot of libraries and other organisations make information such as ISBN records available through MAchine-Readable Cataloging, aka MARC; you can find more information about it here as well.
Now that I knew the "right" term to search for, I discovered WorldCat.org.
Maybe this whole MARC thing gives you some new ideas :)