Missing states when creating county map visuals using Python 3 Vincent/Vega maps

I am using Vincent to plot a county-level map of the US, using example data from the 2016 elections. However, it doesn't plot some states, such as California. I have checked the data and the FIPS codes seem to exist, but those states still render blank. Any ideas what may be going on? I got the county data from topo.json.
geo_data_c2 = [{'name': 'counties',
                'url': county_topo,
                'feature': 'us_counties.geo'}]

vis_election_counties = vincent.Map(data=merged, geo_data=geo_data_c2, scale=1000,
                                    projection='albersUsa', data_bind='per_dem',
                                    data_key='combined_fips',
                                    map_key={'counties': 'properties.FIPS'})
# Change our domain to an even integer range
vis_election_counties.scales['color'].domain = [0, 1]
vis_election_counties.legend(title='per_dem')
vis_election_counties.to_json('vega.json')
vis_election_counties.display()

The FIPS codes for counties in the first ~7 states alphabetically need to be zero-padded to 5 characters: those states have single-digit state codes (California is 06), so as plain integers their county codes lose the leading zero and never match the TopoJSON keys.
For example, Arapahoe County, CO has FIPS code 8005, which is represented as "08005" in https://raw.githubusercontent.com/jgoodall/us-maps/master/topojson/county.topo.json:
merged['combined_fips'] = merged['combined_fips'].map(lambda i: str(i).zfill(5))
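If some counties still come up blank after padding, a quick sanity check is to compare the padded codes against the FIPS keys actually present in the TopoJSON. This is a minimal sketch, assuming merged is the DataFrame from the question and that the topology's object key matches the 'us_counties.geo' feature name with each geometry carrying properties.FIPS, as the map_key above implies:
import json
from urllib.request import urlopen

url = 'https://raw.githubusercontent.com/jgoodall/us-maps/master/topojson/county.topo.json'
topo = json.load(urlopen(url))

# Collect every FIPS key present in the TopoJSON geometries.
topo_fips = {g['properties']['FIPS']
             for g in topo['objects']['us_counties.geo']['geometries']}

# Any padded code left over here has no matching county shape.
unmatched = set(merged['combined_fips']) - topo_fips
print(len(unmatched), 'codes have no matching county:', sorted(unmatched)[:10])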


SPSS produces 1 scatter plot with split file

I am working with data where I need to create multiple scatter plots for different populations. I also recently upgraded from SPSS v26 to v28, and the code I used for this worked in v26 but no longer works correctly in v28. Instead of producing multiple scatter plots as it is supposed to, it now produces a single plot in v28, presumably for the first subpopulation in the split. I tested the split file function with DESCRIPTIVES and it worked as intended.
I have scoured the GUI menus for any kind of setting that might be ticked by default and came up with nothing. I also tried using a filter based on the criterion variables in my SPLIT FILE command and running a scatter plot, but that gave me a graph of the whole population instead of the subpopulation I filtered on. Any guidance on what could be going on with scatter plots and the split file function in SPSS v28 would be greatly appreciated.
Here is my code for reference:
SORT CASES BY exit_yr service sigscore.
SPLIT FILE SEPARATE BY exit_yr service sigscore.
*Chart Builder.
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=wait.time[name="wait_time"]
score.change[name="score_change"] service.ideal[name="service_ideal"] MISSING=VARIABLEWISE
REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE
/FITLINE TOTAL=NO SUBGROUP=NO.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: wait_time=col(source(s), name("wait_time"))
DATA: score_change=col(source(s), name("score_change"))
DATA: service_ideal=col(source(s), name("service_ideal"), unit.category())
GUIDE: axis(dim(1), label("wait.time: Difference in days between referral submission ",
"and referral acceptance"))
GUIDE: axis(dim(2), label("score.change: score change from pre to post"))
GUIDE: legend(aesthetic(aesthetic.color.interior), label("service.ideal"))
GUIDE: text.title(label("Grouped Scatter of score.change: score change from pre to ",
"post by wait.time: Difference in days between referral submission and referral ",
"acceptance by service.ideal"))
ELEMENT: point(position(wait_time*score_change), color.interior(service_ideal))
END GPL.
SPLIT FILE OFF.

Biblatex doesn't compile. Probably .bib file not recognised

I've spent many hours trying to get my bibliography working, unsuccessfully. I suspect that, somehow, my .bib file isn't being recognised.
Help would be greatly appreciated.
MWE:
\documentclass[a4paper, 12pt]{article}
\usepackage{array}
\usepackage{lscape}
\usepackage[paper=portrait,pagesize]{typearea}
\usepackage[showframe=false]{geometry}
\usepackage{changepage}
\usepackage{tabularx}
\usepackage{graphicx}
\usepackage{adjustbox}
\usepackage[utf8]{inputenc}
\usepackage{babel,csquotes,xpatch}
\usepackage[backend=biber,style=authoryear, natbib]{biblatex}
\addbibresource{test.bib}
\usepackage{xurl}
\usepackage[colorlinks,allcolors=blue]{hyperref}
\begin{document}
This is a test... test test\\
\cite{glaeser_gyourko}\\
\cite{hsieh-moretti:2019}\\
\cite{glaeser_gyourko}\\
\printbibliography
\end{document}
test.bib file:
@article{hsieh-moretti:2019,
  Author = {Hsieh, Chang-Tai and Moretti, Enrico},
  Title = {Housing Constraints and Spatial Misallocation},
  Journal = {American Economic Journal: Macroeconomics},
  Volume = {11},
  Number = {2},
  Year = {2019},
  Month = {4},
  Pages = {1-39},
  DOI = {10.1257/mac.20170388},
  URL = {https://www.aeaweb.org/articles?id=10.1257/mac.20170388}
}

@article{glaeser_gyourko,
  Author = {Glaeser, Edward and Gyourko, Joseph},
  Title = {The Economic Implications of Housing Supply},
  Journal = {Journal of Economic Perspectives},
  Volume = {32},
  Number = {1},
  Year = {2018},
  Month = {2},
  Pages = {3-30},
  DOI = {10.1257/jep.32.1.3},
  URL = {https://www.aeaweb.org/articles?id=10.1257/jep.32.1.3}
}
In the PDF it looks like this: (screenshot omitted)
I get the following information in the source viewer:
Process started
INFO - This is Biber 2.14
INFO - Logfile is 'test.blg'
INFO - Reading 'test.bcf'
INFO - Found 2 citekeys in bib section 0
INFO - Processing section 0
INFO - Globbing data source 'test.bib'
INFO - Globbed data source 'test.bib' to test.bib
INFO - Looking for bibtex format file 'test.bib' for section 0
INFO - LaTeX decoding ...
INFO - Found BibTeX data source 'test.bib'
Process exited with error(s)
I use Texmaker 5.0.4 on macOS. (configuration screenshots omitted)
I really have very little idea of what is going on. Today I started a work session, added a new source, and it didn't work. I deleted the new source so that the bibliography would be the same as before my change, and it didn't work either. So this leads me to assume that, somehow, the program doesn't understand where the bibliography is. The .bib file and the document are in the same folder.
What I tried:
Triple-checked the code in the bibliography using tools such as https://biblatex-linter.herokuapp.com/
Cleared the cache of all documents.
Changed natbib in the command \usepackage[backend=biber,style=authoryear, natbib]{biblatex} to biber -> doesn't seem to work.
Left out natbib and got the same result: \usepackage[backend=biber,style=authoryear, natbib]{biblatex} => \usepackage[backend=biber,style=authoryear]{biblatex}
Added the command \usepackage{natbib} in addition to biblatex, but this produces compatibility issues.
Added \usepackage[utf8]{inputenc} and \usepackage{babel,csquotes,xpatch} because they are recommended by this biblatex cheat sheet: http://tug.ctan.org/info/biblatex-cheatsheet/biblatex-cheatsheet.pdf. Didn't change anything.
Thanks for your time!
I had a similar problem; what helped me was looking up the articles and rewriting the entries using the Google Scholar BibTeX version.
In my case the problem arose when I changed an entry name manually. This produces an error that is not clearly reported, and it threw me into researching exactly the same kind of ".bib file not recognised" error.
Your housing article should be formatted like this:
@article{hsieh2019housing,
  title={Housing constraints and spatial misallocation},
  author={Hsieh, Chang-Tai and Moretti, Enrico},
  journal={American Economic Journal: Macroeconomics},
  volume={11},
  number={2},
  pages={1--39},
  year={2019}
}
I found another source of this problem: Citavi generates invalid BibTeX syntax. Often the year field is not filled in correctly, or special characters are not escaped properly. These may be data errors that originate in the sources rather than in Citavi, but nonetheless Citavi often does not export valid BibTeX.

Advice request: MS Access hyperlink comparison script

I'm brand new to MS Access and have a few guideline questions.
My organization uses MS Access to track a large electronic-part inventory. These parts have a hyperlink field that links to the product webpage. Here's an example record:
Part Number: UMK105CG100DV-F
Part Type: Ceramic
Value: 10pF
Description: CAP CER 10PF 50V NP0 0402
Component_Height: 0.35 MM
Voltage: 50V
Tolerance: ±0.5pF
Schematic Part: xxxxx\C_NP,xxxxx\C_NP_Small
Layout PCB Footprint: c_0402
Manufacturer Part Number: UMK105CG100DV-F
Manufacturer: Taiyo Yuden
Distributor Part Number: 587-1947-2-ND
Distributor: Digi-Key
Price: 0.00378
Availability: In Stock
Link: http://www.digikey.com/product-detail/en/UMK105CG100DV-F/587-1947-2-ND/1473246
Links Here:
http://www.digikey.com/product-detail/en/UMK105CG100DV-F/587-1947-2-ND/1473246
Nearly all of our hyperlinks point to the supplier Digi-Key.
Right now the verification flow goes like this:
Every month or so, a large group of us sits down and one by one copies each hyperlink into Google.
We then open the corresponding webpage and verify component availability, etc.
We have nearly 1000 components, and this process takes hours. All I'm looking for is advice on how to improve our workflow. I was hoping there was, say, a way to write an "open hyperlink with default browser and search for string" macro or scripting interface. The pseudo-script would then check that the quantity after the string "Quantity Available" was greater than 1, and if it wasn't (the part is out of stock), mark the part as obsolete.
Any advice would be greatly appreciated, I'm really aiming to optimize our workflow.
You can traverse the DOM of the web page. A quick look at the page shows a table with an id of product-details.
So the following VBA code would load the sample web page, and pull out the values.
Option Compare Database
Option Explicit

Enum READYSTATE
    READYSTATE_UNINITIALIZED = 0
    READYSTATE_LOADING = 1
    READYSTATE_LOADED = 2
    READYSTATE_INTERACTIVE = 3
    READYSTATE_COMPLETE = 4
End Enum

Sub GetWebX()
    Dim ie As New InternetExplorer
    Dim HTML As HTMLDocument
    Dim Htable As Object        ' the product-details table element
    Dim strURL As String
    Dim i As Integer

    strURL = "http://www.digikey.com/product-detail/en/UMK105CG100DV-F/587-1947-2-ND/1473246"

    ' Load the page and wait until it has finished rendering.
    ie.Navigate strURL
    Do While ie.ReadyState < READYSTATE_COMPLETE
        DoEvents
    Loop

    ' Pull out the product-details table and print each label/value row.
    Set HTML = ie.Document
    Set Htable = HTML.getElementById("product-details")
    For i = 0 To Htable.Rows.Length - 1
        With Htable.Rows(i)
            Debug.Print Trim(.Cells(0).innerText), Trim(.Cells(1).innerText)
        End With
    Next i

    ie.Quit
    Set ie = Nothing
End Sub
Output of the above:
Digi-Key Part Number 587-1947-2-ND
Quantity Available 230,000
Can ship immediately
Manufacturer Taiyo Yuden
Manufacturer Part Number UMK105CG100DV-F
Description CAP CER 10PF 50V NP0 0402
Expanded Description 10pF ±0.5pF 50V Ceramic Capacitor C0G, NP0 0402(1005 Metric)
Lead Free Status / RoHS Status Lead free / RoHS Compliant
Moisture Sensitivity Level (MSL) 1 (Unlimited)
Manufacturer Standard Lead Time 11 Weeks
Since the above is just a table of values, you could place a button right on the form and, with a few extra lines of VBA, write the values into the form. A user would just have to go to the given record/form in Access and press the button, and the above values would be copied right into the form.
The above VBA code requires references to:
Microsoft Internet Controls
Microsoft HTML Object Library
I would suggest that after testing you use late binding for the above two libraries.

How to use the BOM API for weather, tide and swell

I have searched a lot around the BOM (Bureau of Meteorology) API for Australia. There is no easy way to get weather details like wind, temperature, humidity, etc. They provide data by FTP in .xml format; there is almost no JSON, though in a few places they do provide data in JSON format. Below is a link to the JSON response:
http://www.bom.gov.au/fwo/IDW60801/IDW60801.94802.json
But the biggest problem is with the product ID: IDW60801 is the product ID for Western Australia's "observations" data. It has the following information: weather, swell, pressure and wind, but it has the previous day's details, not forecast details.
There is a WMO id: 94802
I got some WMO ids from somewhere, but not for every location in Australia. I want to access the weather forecast for all locations in Australia in JSON or .xml format.
If anybody knows how to get all the details, please let me know.
Here are the FTP links for the products:
ftp://ftp.bom.gov.au/anon/sample/catalogue/
ftp://ftp.bom.gov.au/anon/sample/catalogue/Observations/
ftp://ftp.bom.gov.au/anon/sample/catalogue/Forecasts/
ftp://ftp.bom.gov.au/anon/sample/catalogue/Tide/
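For browsing these catalogues programmatically, here is a minimal sketch using Python's standard ftplib (assuming anonymous login works, as the anon/ paths suggest):
from ftplib import FTP

# List a few entries from each sample catalogue directory.
ftp = FTP('ftp.bom.gov.au')
ftp.login()  # anonymous login
for sub in ('Observations', 'Forecasts', 'Tide'):
    print(sub)
    for name in ftp.nlst('/anon/sample/catalogue/' + sub)[:5]:
        print('  ', name)
ftp.quit()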
I also got the AAC identifier list for Australia's cities from the BOM. If the details can be retrieved by AAC identifier, please let me know the URL for that so I can retrieve them with it.
Thanks
You described your problem as having searched the BOM API at length and finding no easy way to get weather details like wind, temp and humidity, with the data provided by FTP in .xml format and no JSON. If I understand your need, you want to pull weather data from BOM in JSON format?
So the first thing is to identify a product ID near you. In this case, for Melbourne, it's IDV60901.
So here's the JSON request: http://www.bom.gov.au/fwo/IDV60901/IDV60901.95936.json
You can find these under "Observations - individual stations" on http://www.bom.gov.au/catalogue/data-feeds.shtml
The response includes a header and then the following data on a half-hourly basis (where sort_order 0 is the most recent observation). Note that because this location is not on the coast, it doesn't provide ocean/bay conditions. However, if you select a station where that data is relevant, you will find observations for ocean conditions:
{
  "sort_order": 0,
  "wmo": 95936,
  "name": "Melbourne (Olympic Park)",
  "history_product": "IDV60901",
  "local_date_time": "12/12:30pm",
  "local_date_time_full": "20171012123000",
  "aifstime_utc": "20171012013000",
  "lat": -37.8,
  "lon": 145.0,
  "apparent_t": 12.4,
  "cloud": "-",
  "cloud_base_m": null,
  "cloud_oktas": null,
  "cloud_type_id": null,
  "cloud_type": "-",
  "delta_t": 5.9,
  "gust_kmh": 28,
  "gust_kt": 15,
  "air_temp": 16.6,
  "dewpt": 4.0,
  "press": 1014.7,
  "press_qnh": 1014.7,
  "press_msl": 1014.7,
  "press_tend": "-",
  "rain_trace": "0.0",
  "rel_hum": 43,
  "sea_state": "-",
  "swell_dir_worded": "-",
  "swell_height": null,
  "swell_period": null,
  "vis_km": "-",
  "weather": "-",
  "wind_dir": "WNW",
  "wind_spd_kmh": 15,
  "wind_spd_kt": 8
}
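To pull this feed programmatically, here is a minimal Python sketch. It assumes the standard layout of these files (an observations object holding the header and data lists) and sends a browser-like User-Agent, since the BOM server may reject requests without one:
import json
from urllib.request import Request, urlopen

url = 'http://www.bom.gov.au/fwo/IDV60901/IDV60901.95936.json'
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
data = json.load(urlopen(req))

# sort_order 0 is the most recent half-hourly observation.
latest = data['observations']['data'][0]
print(latest['name'], latest['local_date_time'])
print('air temp:', latest['air_temp'], 'C, humidity:', latest['rel_hum'], '%')
print('wind:', latest['wind_dir'], latest['wind_spd_kmh'], 'km/h')
print('swell height:', latest['swell_height'])  # null at inland stations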
It wasn't immediately clear to me how to do this; here's what I found:
You can use the endpoint http://www.bom.gov.au/fwo/<PARAMS...> to retrieve the last ~72 hours of weather observations for a particular site.
I got a full list of available weather stations (IDs, labels and coordinates) here:
ftp://ftp.bom.gov.au/anon2/home/ncc/metadata/sitelists/stations.zip
http://www.bom.gov.au/climate/cdo/about/site-num.shtml
the request format is: http://www.bom.gov.au/fwo/ID<STATE>60701/ID<STATE>60701.<STATION_ID>.json
an example query for Cape Bruny in Tasmania: http://www.bom.gov.au/fwo/IDT60701/IDT60701.95967.json
the 95967 in my example above is the station ID
the IDT60701 part is for the state of Tasmania; for other states:
QLD: IDQ60701
NSW: IDN60701
VIC: IDV60701
NT: IDD60701 (slightly inconsistent convention on this one..)
etc.
There's also a bunch of product codes here: http://www.bom.gov.au/catalogue/anon-ftp.shtml which might also be useful.
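Putting the URL pattern together, here is a small hypothetical helper; the product-code mapping below covers only the states listed above, and station IDs come from the stations list linked earlier:
# Hypothetical helper based on the URL pattern above.
PRODUCT_CODES = {
    'QLD': 'IDQ60701',
    'NSW': 'IDN60701',
    'VIC': 'IDV60701',
    'TAS': 'IDT60701',
    'NT':  'IDD60701',
}

def observation_url(state, station_id):
    code = PRODUCT_CODES[state]
    return 'http://www.bom.gov.au/fwo/{0}/{0}.{1}.json'.format(code, station_id)

# Example: Cape Bruny, Tasmania (station 95967, as above).
print(observation_url('TAS', 95967))
# -> http://www.bom.gov.au/fwo/IDT60701/IDT60701.95967.json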
The AAC codes can be linked with the précis forecasts given in the XML files, which specify the AAC code for each forecast location.
I've been working on an R package with a couple of other people, bomrang, that does this and a couple of other things. It's still under development, but it is installable from GitHub if you use R.
The current weather is served in json files and can be retrieved and returned in a data frame.
The forecast can be retrieved and linked to location names via AAC codes and returned as a data frame.
https://github.com/ToowoombaTrio/bomrang

Interactive zoom maps in R

I am doing a project where I am looking to create a US map with counties. I currently have a map successfully created, but am looking to refine it. I want to generate a map that shows the US with states; if I clicked on a state, it would zoom in and show that state with its counties listed. Then I want to be able to click on a county to get the corresponding information from a data set in R. Is this possible in R? If so, any help would be greatly appreciated.
This is the code I am currently using:
library(devtools)
find_rtools()
devtools::install_github("hafen/housingData")
library(housingData)
library(datadr)  # provides divide()
head(geoCounty)
geo <- divide(geoCounty, by = c("state", "county"))
geo[[1]]
install.packages("maps")
install.packages("maptools")
library(maps)
library(maptools)
US <- map("state", fill = TRUE, plot = FALSE)
US.names <- US$names
US.IDs <- sapply(strsplit(US.names, ":"), function(x) x[1])
US_poly_sp <- map2SpatialPolygons(US, IDs = US.IDs,
                                  proj4string = CRS("+proj=longlat +datum=WGS84"))
plot(US_poly_sp, col = "white", axes = TRUE)
points(geoCounty$lon, geoCounty$lat)
Thanks,
Joey
Joey,
how about this?
library(mapview)
library(raster)
counties <- getData("GADM", country = "USA", level = 2)
mapview(counties)
Best
Tim
