I want a .csv list, MySQL database, or any other list of all U.S. states and cities, including which cities are in which state. From this list I will generate a MySQL database with the following fields:
states:
- id (int, auto_increment, primary)
- name (varchar 255)
cities:
- id (int, auto_increment, primary)
- stateId (id of the state from states table to which this city belongs)
- name (varchar 255)
Thanks in advance.
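As a sketch of the schema described above, here it is built with Python's sqlite3 so the example is self-contained (in MySQL you would use INT AUTO_INCREMENT and VARCHAR(255) rather than SQLite's INTEGER PRIMARY KEY and TEXT):

```python
import sqlite3

# Self-contained sketch of the states/cities schema from the question.
# SQLite is used here as a stand-in; MySQL column types differ slightly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE states (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL
    );
    CREATE TABLE cities (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        stateId INTEGER NOT NULL REFERENCES states(id),
        name    TEXT NOT NULL
    );
""")

# Insert a state, then a city that references it via stateId.
conn.execute("INSERT INTO states (name) VALUES ('Texas')")
state_id = conn.execute(
    "SELECT id FROM states WHERE name = 'Texas'").fetchone()[0]
conn.execute("INSERT INTO cities (stateId, name) VALUES (?, ?)",
             (state_id, "Austin"))

# Join cities back to their states.
row = conn.execute("""
    SELECT c.name, s.name
      FROM cities c
      JOIN states s ON s.id = c.stateId
""").fetchone()
```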
You can get city/state information in tab-separated value format from GeoNames.org. The data is free, comprehensive and well structured. For US data, grab the US.txt file at the free postal code data page. The readme.txt file on that page describes the format.
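Once you have US.txt, extracting unique city/state pairs is a short script. A minimal sketch (the column positions below follow the readme.txt on the download page, but verify them against the file you actually download):

```python
def load_city_state_pairs(path):
    """Parse a GeoNames postal-code dump (tab-separated) into sorted,
    unique (city, state) pairs.

    Column positions are an assumption taken from the readme.txt:
    field 2 = place name, field 3 = admin name1 (full state name).
    """
    pairs = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 3 and fields[2] and fields[3]:
                pairs.add((fields[2], fields[3]))
    return sorted(pairs)
```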
I spent a while looking for such a file and ended up building one myself; you can get it from here:
https://github.com/grammakov/us_cities_and_states/tree/master
Check out the MySQL world sample database. This database is used by the MySQL documentation as a sample to test queries on.
It already has the 'cities' table you are looking for.
Are you ready to pay for this content?
If YES, then you can find it at uscities.trumpetmarketing.net
I have also seen this information provided along with some programming books, especially ones dealing with .NET database programming. Let me refer to my library and get back to you on this.
You can also refer the following:
http://www.world-gazetteer.com/wg.php?x=1129163518&men=stdl&lng=en&gln=xx&dat=32&srt=npan&col=aohdq
http://www.geobytes.com/FreeServices.htm
Please don't bother voting for this answer. There is no information here that cannot be obtained via a simple Google search!
Someone has posted a list here:
http://mhinze.com/archive/list-of-us-cities-all-city-names-and-states-regex-groups/
I use the US city and county database for this purpose, and I just checked that it was updated in August. They claim to include 198,703 populated places (the GNIS term for a city or village). Since you need full state names, note that these are included in a free database called the US state list.
Both are CSV files, and they come with very detailed instructions on how to import them into both local and remote MySQL servers. You can join them in a SELECT statement to pull records with full state names for your needs.
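The join mentioned above could look like the following sketch (using Python's sqlite3 so it is self-contained; the table and column names here are hypothetical, not taken from those products, so check the import instructions that ship with them):

```python
import sqlite3

# Hypothetical layout: a city table carrying the two-letter state
# abbreviation, joined to a state list mapping abbreviations to
# full names. All names below are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE us_cities (city TEXT, state_abbr TEXT);
    CREATE TABLE us_states (abbr TEXT PRIMARY KEY, full_name TEXT);
""")
conn.execute("INSERT INTO us_cities VALUES ('Portland', 'OR')")
conn.execute("INSERT INTO us_states VALUES ('OR', 'Oregon')")

# Pull each city together with its full state name.
rows = conn.execute("""
    SELECT c.city, s.full_name
      FROM us_cities c
      JOIN us_states s ON s.abbr = c.state_abbr
""").fetchall()
```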
You can find CSV, SQL, and HTML formats at the website below. They have cities and states for several countries, including the USA:
http://www.countrystatecity.com/. They keep the site updated and are doing a good job; hopefully this will help other developers too.
For the USA specifically, check the link below:
http://www.countrystatecity.com/USAStatesCities.php
That's a tall order. Consider creating one by scraping the links off this page:
WP: List of cities, towns, and villages in the US. It is much simpler to scrape the wiki markup than the raw HTML.
It will require some skill with regexes, or at least with parsers, but it should be doable.
This helped me a great deal: http://www.farinspace.com/us-cities-and-state-sql-dump/
It has a .zip file with three .sql files which you can run directly in SSMS.
You may need to replace some of the weirdly encoded single quotes with double quotes.
There are over 200 documents at this link (http://goo.gl/IdvhMf); each document has over a hundred pages of questions and answers, and each document represents the answers from one respondent. I want to create a table in a db (not dependent on any particular db technology) with a schema something like this:
Respondent | Question Number | Answer
e.g
UBS, 1, "Our opinion is that..."
I can then query the db to say, for example: "show me all responses to Question 34 for respondents A, B, C"
The step after that might include some form of sentiment analysis on the responses.
So what is the best way for someone who is not a programmer by day to do this? Are there any off-the-shelf configurable tools I could use?
Split your problem in two.
The first is how you parse out the question and answer pairs from the document.
Storing those in the database is a second unrelated problem.
Addressing the first problem (and not having looked at the documents), that would typically be done on the basis of styles (e.g. a Question style and an Answer style), magic text ("Question", "Answer"), or formatting (e.g. questions are bold/red). If I had control over the creation of the documents, I'd probably use content controls.
How you do this in code depends to some extent on your preferred language, but things are easier if the documents are in docx format (as opposed to legacy binary doc format, or RTF). Assuming docx format, there are Open XML toolsets for most languages.
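To illustrate why docx makes this easier: a .docx file is just a zip archive, with the body text in word/document.xml. A minimal sketch of pulling the paragraphs out with nothing but the Python standard library (a full Open XML toolset would handle tables, styles, and so on, which this does not):

```python
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_paragraphs(path):
    """Return the plain text of each non-empty paragraph in a .docx.

    A .docx is a zip archive; body text lives in word/document.xml as
    <w:p> paragraph elements containing <w:t> text runs.
    """
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    paragraphs = []
    for p in root.iter(W + "p"):
        text = "".join(t.text or "" for t in p.iter(W + "t"))
        if text.strip():
            paragraphs.append(text)
    return paragraphs
```

From there, pairing up question and answer paragraphs (by style, magic text, or formatting, as described above) is the document-specific part.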
Does this implementation scale well?
The requirements are:
The system must support x languages, where x = as many languages as the business can translate.
All system-maintained values (page content, email content, values stored in the hundreds of user-facing lookup tables) need to support multiple languages.
My implementation:
Tables: (sample names used)
local_text_table
language_lookup_table
Content_table_1
Content_table_2
Content_table_3
Content_table_4
....
Plan:
language_lookup_table has list of all possible languages
lang_id lang_name
local_text_table has a list of all text used in the system (emails, page content, menu labels, footer text, etc.), along with one column for each language the system will support - FK to the language_lookup_table.
text_id
eng_text
spanish_text
arabic_text
...
This way all translations are stored in one table for the entire system. I can enable/disable/update/edit/add/remove translations in one step. In the code, all text is referenced by a keyword (text_id). The system detects which language the user's session is running and pulls the text from the corresponding column for that keyword. If a particular cell is NULL (not translated), it defaults to the English text column.
Is this good?
Of course this will not work for the lookup values stored in the hundreds of tables; for those I have no plan yet, apart from giving each table its own columns for each language. I also need to let users translate their own content (blogs, comments, etc.), for which I don't have a plan either. But I want to first focus on the system text and finalize that.
Your design is flawed in that you won't be able to add a new language without adding a column to local_text_table.
A better design for that table would be:
text_id
lang_id (foreign key to language_lookup_table)
translated_text
Now you can add a language to language_lookup_table and then start adding translations to local_text_table without making any changes to your relational model. If you have the means to enter this data via a UI (or even directly in the database), you should be able to add new languages directly in production.
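As a sketch of this design (using Python's sqlite3 so the example is self-contained; MySQL types such as INT and VARCHAR differ slightly), including the English fallback the original poster described:

```python
import sqlite3

# One row per (text_id, lang_id) pair: adding a language is just
# new rows, never a schema change.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE language_lookup_table (
        lang_id   INTEGER PRIMARY KEY,
        lang_name TEXT NOT NULL
    );
    CREATE TABLE local_text_table (
        text_id         TEXT NOT NULL,
        lang_id         INTEGER NOT NULL
                        REFERENCES language_lookup_table(lang_id),
        translated_text TEXT NOT NULL,
        PRIMARY KEY (text_id, lang_id)
    );
""")
conn.executemany("INSERT INTO language_lookup_table VALUES (?, ?)",
                 [(1, "English"), (2, "Spanish")])
conn.executemany("INSERT INTO local_text_table VALUES (?, ?, ?)",
                 [("menu.home", 1, "Home"),
                  ("menu.home", 2, "Inicio"),
                  ("footer.copy", 1, "All rights reserved")])

def lookup(text_id, lang_id, english_id=1):
    """Fetch a translation, falling back to English when missing."""
    row = conn.execute("""
        SELECT COALESCE(
            (SELECT translated_text FROM local_text_table
              WHERE text_id = ? AND lang_id = ?),
            (SELECT translated_text FROM local_text_table
              WHERE text_id = ? AND lang_id = ?))
    """, (text_id, lang_id, text_id, english_id)).fetchone()
    return row[0]
```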
Clearly you will need an intersection table between the language table and every table you want multi-language support for. I also recommend an "installed language" flag in the languages table, since a particular deployment may use only some of the languages; the flag lets you display only the languages of interest rather than all of them. For populating the language table, you can find the full list of LCID codes on Microsoft's sites; LCID is a widely used standard code.
Where can I find historical raw weather data for a project I am doing, with a focus on the USA and Canada? I need temperatures mainly, but other details would be nice. I am having a very hard time finding this data, and I really don't want to have to scrape a weather site.
I found myself asking this same question, and will share my experience for future Googlers.
Data sources
I wanted raw data, and lots of it... an API wouldn't do. I needed to head directly to the source. The best source for all of that data seemed to be either the NCEP or NCDC NOMADS servers:
http://nomads.ncdc.noaa.gov/dods/ <- good for historical data
http://nomads.ncep.noaa.gov/dods/ <- good for recent data
(Note: A commenter indicated that you must now use https rather than http. I haven't tested it yet, but if you're having issues, try that!)
To give an idea of the amount of data, their data goes all the way back to 1979! If you're looking for Canada and the US, the North American Regional Reanalysis dataset is probably your best answer.
Using the data
I'm a big python user, and either pydap or NetCDF seemed like good tools to use. For no particular reason, I started playing around with pydap.
To give an example of how to get all of the temperature data for a particular location from the nomads website, try the following in python:
from pydap.client import open_url
# set up the connection (dd/hh in the URL are day/hour placeholders
# in the NARR file-naming scheme)
url = 'http://nomads.ncdc.noaa.gov/dods/NCEP_NARR_DAILY/197901/197901/narr-a_221_197901dd_hh00_000'
modelconn = open_url(url)
tmp2m = modelconn['tmp2m']
# grab the data at one grid point
lat_index = 200  # you could tie this to tmp2m.lat[:]
lon_index = 200  # you could tie this to tmp2m.lon[:]
print(tmp2m.array[:, lat_index, lon_index])
The above snippet will get you a time series (every three hours) of data for the entire month of January, 1979! If you needed multiple locations or all of the months, the above code would easily be modified to accommodate.
To super-data... and beyond!
I wasn't happy stopping there. I wanted this data in a SQL database so that I could easily slice and dice it. A great option for doing all of this is the python forecasting module.
Disclosure: I put together the code behind the module. The code is all open source -- you can modify it to better meet your needs (maybe you're forecasting for Mars?) or pull out little snippets for your project.
My goal was to be able to grab the latest forecast from the Rapid Refresh model (your best bet if you want accurate info on current weather):
from forecasting import Model
rap = Model('rap')
rap.connect(database='weather', user='chef')
fields = ['tmp2m']
rap.transfer(fields)
and then to plot the data on a map of the good ol' USA. The data for the plot came directly from SQL, and the query could easily be modified to pull out any type of data desired.
If the above example isn't enough, check out the documentation, where you can find more examples.
At the United States National Severe Storms Laboratory Historical Weather Data Archive (note: this has since been retired).
Also, the United States National Climatic Data Center Geodata Portal.
The United States National Climatic Data Center Climate Data Online.
The United States National Climatic Data Center Most Popular Products.
wunderground.com has a good API. It is free for 500 calls per day.
http://www.wunderground.com/weather/api/
We have an IBM UniData server. I just installed UniObject .net. It looks like you just issue unidata queries through the .net classes.
Where can I learn the query language/syntax and to work with UniData in general? What books, sites, or videos do you recommend?
The best resource is going to be Rocket Software's UniData library.
Rocket recently acquired the U2 family of products, which includes UniData and UniVerse, from IBM. They've got a pretty extensive catalog of documentation for UniData. You might want to check out the "Using UniQuery" document, which covers UniQuery in particular.
Unfortunately, you won't find many books, screencasts, or programming communities devoted to UniData because it's pretty esoteric. If you run into anything specific that you've got questions on, it can't hurt to post here using the UniData tag and I'll do my best.
You can find a lot of information at the U2UG (U2 User Group) site. There is a learner pack:
http://212.241.202.162/cms/cmsview.wsp?id=learner_pack
This will help.
International Spectrum has webinars that cover the Query language, and can put in touch with a trainer if you are interested:
http://www.intl-spectrum.com/
Besides the Using UniQuery document, the UniQuery Commands Reference is also useful.
The general structure of the query is
verb table filter order display
SORT CUSTOMER IF HATSIZE = "7" BY SHOESIZE NAME CITY STATE ZIP
Where
verb = SORT
table = CUSTOMER
filter = IF HATSIZE = "7" (you can have multiple filters)
order = BY SHOESIZE (you can have multiple order elements)
display = ID NAME CITY STATE ZIP (ID isn't on the list, but it is implied)
For this to work, the TABLE (also called a FILE) named CUSTOMER has to exist.
CUSTOMER must have a dictionary (schema/view repository) which defines HATSIZE SHOESIZE NAME CITY STATE and ZIP.
A more coherent example:
SORT CUSTOMER IF ORDER.LIMIT > "12,000.00" AND WITHOUT STATUS "INACTIVE" BY-DSND ORDER.LIMIT BY ZIP ORDER.LIMIT ZIP NAME STATUS
Which would select CUSTOMERs with a $12K or greater ORDER.LIMIT who are not INACTIVE, and sort them from biggest limit to least... you get the idea.
This question is based on my plan at the thread.
I have the following table
(screenshot of the table: http://files.getdropbox.com/u/175564/table-problem-2.png)
where kysymys is a question in English.
I would like to know how I should store the data of an user's question:
in a separate table where I have the parameters question-id and question-body OR
in the current table where I have other parameters too
I also need to sanitize the question body at some point so that a user cannot submit code which breaks my system.
How would you store the data of the user's text?
This will depend:
You mention: "where kysymys is a question in English."
Are you planning to have the same question in other languages?
If that's the case, normalize the question and question body out to another table. That way, given a language and a question ID, you can retrieve the right one.
However, if the question is only going to be in English, just leave it in the same table. That's perfectly fine.
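A sketch of the normalized, multi-language layout described above (using Python's sqlite3 so it is self-contained; table and column names are illustrative, not from the post). It also shows parameterized statements, which keep user-supplied text from being interpreted as SQL, one part of the sanitization concern in the question (escaping on display is a separate, additional step):

```python
import sqlite3

# Question text keyed by (question_id, lang), so the same question
# can exist in several languages. Names below are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE question (
        question_id INTEGER PRIMARY KEY
    );
    CREATE TABLE question_text (
        question_id INTEGER NOT NULL REFERENCES question(question_id),
        lang        TEXT NOT NULL,
        body        TEXT NOT NULL,
        PRIMARY KEY (question_id, lang)
    );
""")
conn.execute("INSERT INTO question VALUES (1)")

# Parameterized placeholders mean this hostile-looking input is
# stored as plain text, not executed as SQL.
user_input = "what's your name?'; DROP TABLE question; --"
conn.execute("INSERT INTO question_text VALUES (?, ?, ?)",
             (1, "en", user_input))

row = conn.execute(
    "SELECT body FROM question_text WHERE question_id = ? AND lang = ?",
    (1, "en")).fetchone()
```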
Are you planning to store revisions of a question ? e.g. StackOverflow allows you to revise the question text and it stores the history.
If this is the case I would store the text separately. You would store answers/comments referenced against the question-id, but the question text would be held in a separate table.
Your data neutralisation issue (above) is orthogonal to this; it is a separate problem of data sanitisation/cleansing.