geom_sf: plot multiple series - sf

I have an sf polygon data frame with multiple series (T1, T2, T3, all on the same scale: they're observations at different time points). I can plot, say, T1 with
ggplot(map) + geom_sf(aes(fill = T1))
What I'd like to do is plot all three (T1, T2 and T3) as facets (separate maps) in the same drawing. I'm sure there's a way to do this, but I can't find it. Can anyone tell me how? Thanks!
ADDED: Two additional notes on this question.
First, the data structure described above is one that could be plotted using spplot, with the T's passed via spplot's zcol argument. So in this connection, my question amounts to asking how to convert an spplot-ready structure into one usable by geom_sf.
Second, suppose I use sf to read in a shp file for, say, 20 polygons. I also have a data frame consisting of stacked observations for these same polygons, say for 3 periods, so the data frame has 60 rows. How do I merge these in order to be usable? Can I just stack 3 copies of the sf structure, and then cbind the data frame (assuming the rows match up correctly)?

At least in one sense this turns out to be very simple. Given a data structure (ds_sp) that can be plotted with spplot, you can just do the following:
library(sf)                 # provides st_as_sf()
ds_sf <- st_as_sf(ds_sp)    # convert the sp object to sf form
plot(ds_sf[c("T1", "T2")])  # plot the desired series as separate panels
This isn't quite the same as using facet_wrap with ggplot, but at least it gives you something to work with.
ANOTHER LATER ADDITION: As to the longitudinal + facet_wrap issue, the following seems to work:
If necessary, create a data frame (df1) with the longitudinal data (longit), an area indicator (fips) and a time indicator (date) which will be used for faceting, plus anything else you may need.
If necessary, create an sf-compatible version of the spatial geometry via st_as_sf, as new_poly. This will be of classes "sf" and "data.frame" and should have a spatial indicator matching fips in df1.
Merge the two:
data_new <- dplyr::inner_join(df1, new_poly, by = "fips")
(Note that inner_join doesn't take merge's all.x argument; use left_join instead if you want to keep unmatched rows of df1.)
Now produce the plot
ggplot(data_new) + geom_sf(aes(fill = longit, geometry = geometry)) + facet_wrap(~date)
and make adjustments from there.
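Alternatively, for the original wide layout (columns T1, T2, T3 in a single sf object), here is a minimal sketch of the pivot-then-facet route, assuming map is the sf data frame from the question:
library(sf)
library(tidyr)
library(ggplot2)
# Pivot the three series into long form: one row per polygon per period.
map_long <- pivot_longer(map, cols = c(T1, T2, T3),
                         names_to = "period", values_to = "value")
# If pivoting drops the sf class, mapping the geometry column explicitly
# still lets geom_sf() draw it.
ggplot(map_long) +
  geom_sf(aes(fill = value, geometry = geometry)) +
  facet_wrap(~period)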

Related

How to efficiently access Microsoft.Maui.Devices.Sensor.Locations in SQL Server

This is more a design question, so please bear with me.
I have a system that stores locations consisting of the ID, Longitude and Latitude.
I need to compare the distance between my current location and the locations in the database, and only choose ones that are within a certain distance.
I have the formula that calculates the distance between 2 locations based on the long/lat, and that works great.
My issue is I may have tens of thousands of locations in the database and don't want to loop through them all every time I need a list of locations close by.
Not sure what other datapoint I can store with the location to make it so I only have to compare a smaller subset.
Thanks.
As was mentioned in the comments, SQL Server has had support for geospatial since (iirc) SQL 2008. And I know that there is support within .NET for that as well so you should be able to define the data and query it from within your application.
Since the datatype is index-able, k nearest neighbor queries are pretty efficient. There's even a topic in the documentation for that use case. Doing a lift and shift from that page:
DECLARE @g geography = 'POINT(-121.626 47.8315)';
SELECT TOP(7) SpatialLocation.ToString(), City
FROM Person.Address
WHERE SpatialLocation.STDistance(@g) IS NOT NULL
ORDER BY SpatialLocation.STDistance(@g);
If you need all the points within that radius, omit the TOP clause and change the predicate on STDistance() to something like SpatialLocation.STDistance(@g) < 1000 (the SRID I typically use has meters as the unit of measure, so this would say 'within 1 km').
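A sketch of that radius variant, against the same documentation table:
DECLARE @g geography = 'POINT(-121.626 47.8315)';
SELECT SpatialLocation.ToString(), City
FROM Person.Address
WHERE SpatialLocation.STDistance(@g) < 1000; -- within 1 km, given a meter-based SRID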
https://gis.stackexchange.com/ is a good place for in-depth advice on this topic.
A classic approach to quickly locating "nearby" values is to "grid" the area of interest:
Associate each location with a "grid cell", where each cell is a convenient size. Pick a cell edge length such that most cells will hold a small number of values and/or that is similar to the distance range you typically query.
If the cell edge is 1 km, and you need locations within 2 km, then get data from the 5x5 block of cells centered at the "target" location. This is guaranteed to include all data within 2 km of any location in the central cell.
Apply the distance formula to each returned location; some will be beyond 2 km.
I've only done this in memory, not from a DB. I think you would add two columns, one for the X cell number and one for the Y cell number, with indexes on both, so you can efficiently get a range of Xs by a range of Ys. Not sure if a combined (X, Y) index helps or not.
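For example, the lookup might look like this (hypothetical table and column names; CellX and CellY would be precomputed as the floor of a projected coordinate divided by the 1 km cell edge):
SELECT Id, Longitude, Latitude
FROM Locations
WHERE CellX BETWEEN @cx - 2 AND @cx + 2  -- 5x5 block of 1 km cells
  AND CellY BETWEEN @cy - 2 AND @cy + 2; -- centered on the target's cell
-- then apply the exact distance formula to each row and discard anything beyond 2 km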

How to multiply values within a nested array...times values in an another array (in Google Sheets)?

This is hard to explain so my title sucks, and is just my best guess at how I might be able to approach this. I have a Google Sheet of sales data for cases of various bottle sizes of kombucha. Column E is the sale date, Column G contains the item code, and column J is the quantity sold of said cases. See my (vastly simplified) sample data:
https://docs.google.com/spreadsheets/d/17-LzGrNJtBr-FwOZtdaoCws3ayeGOHu_TdtGOfXj4cA/edit?usp=sharing
See my current test code below (also present in the Formula tab of the linked spreadsheet). It successfully gives me the combined number of cases sold of half-liter bottles and Growlers. B4 and B5 are cells containing my start and end dates, respectively, so I'm constraining the results only to those which fall within a certain date range.
This code works, but now I need to figure out a way to sum the total number of bottles sold instead of # of cases. The data set is already massive and pushing the limits of google sheets, so adding a column to the source data sheet with # of bottles per case is not an option. Half liter cases hold 13 bottles, and growlers hold 5. Is there any way to do this with my current approach, using another array perhaps? Or any other approach that keeps the formula as simple as possible?
FYI, the current formula is a proof of concept; I will be adding many additional types of cases to it, each containing a different number of bottles per case, and using it as part of a larger dynamic formula that lets you switch between showing # of cases vs. # of bottles vs. # of actual liters sold. This is why I'm hoping to find an array-based approach that avoids an absurdly long and complex formula of nested IF statements.
=SUMPRODUCT(--((XeroInvoiceData!$E$3:$E>=B4)*(XeroInvoiceData!$E$3:$E<=B5)), (--(ISNUMBER(MATCH(XeroInvoiceData!$G$3:$G, {"HalfLiterCase","GrowlerCase"}, 0)))), XeroInvoiceData!$J$3:$J)
I would be eternally grateful for any assistance.
Here is my solution:
https://docs.google.com/spreadsheets/d/1ig0krumJu4Lj9-nIKJyRfPLTYbU-mzOL0JokRUDEqNc/edit?usp=sharing
My idea was to filter your table on date and sum by the type of container.
I also wanted to allow new types of containers that contain smaller units (bottles or liters).
I divided this job into 3 stages.
First, we have to filter the table according to the selected dates and container types.
I prepared a list that may be extended (all you need is to extend the filter range).
Then we have to vlookup the number of units in each container, and I try to do it inside the same formula.
The general idea is
={[query results], ARRAYFORMULA(IFNA(VLOOKUP([first column of query], $C$21:$D$26, 2, 0) * [second column of query]))}
I divided it into 2 stages. The first stage refers to the query results in an adjacent table; the second stage uses indexes of the query, so the formula is quite long.
Tell me if it solves your problem.
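For the specific multipliers in the question (13 bottles per half-liter case, 5 per growler case), an untested sketch of the same idea keeps everything inside the original SUMPRODUCT, supplying the bottles-per-case factor as an inline lookup array (non-matching item codes become 0, which also replaces the ISNUMBER/MATCH filter):
=SUMPRODUCT((XeroInvoiceData!$E$3:$E>=B4)*(XeroInvoiceData!$E$3:$E<=B5), ARRAYFORMULA(IFNA(VLOOKUP(XeroInvoiceData!$G$3:$G, {"HalfLiterCase",13;"GrowlerCase",5}, 2, FALSE), 0)), XeroInvoiceData!$J$3:$J)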

Using multiple dataset in the same chart with different values

I am using react-chartjs-2. I have a Line chart with x: time and y: value, and 2 different datasets. The first dataset's values are as low as 0.00000145, while the second dataset's values run into the millions. When I use these in the same chart, the first dataset sits at the bottom and looks like a single straight line, since the chart draws both datasets on the same scale. Time is the same for both datasets. So what is the best way to compare the two datasets against each other?
I think the best way is to make a single chart which shows the correlation between the two datasets: the division of the first dataset's values by the other's. I think it may be representative.
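A minimal sketch of that idea (hypothetical series1/series2 arrays, aligned on the same time axis as the question states):
// plot the element-wise ratio instead of the two raw series
const ratioDataset = {
  label: 'series1 / series2',
  data: series1.map((v, i) => v / series2[i]),
};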

How to format data in (a) CSV file(s) so that it can easily be imported in R?

Edit:
So, this format would work:
featureID charge xcoordinate ycoordinate
1 2 5105.9217 336.125209180674
1 2 5108.7642 336.124751115092
2 0 2434.9217 145.893331325278
But what if I have two columns with multiple values that are linked? Say a column MachineQuality has a machine and a quality linked, and the column looks like this:
MachineQuality
[[{1:1224}, {2:3453}], [{1:2242}, {2:4142}]]
Now, if I want to split that up like I did with the coordinates of the convex hull, I would need 2 rows instead of 1. But wouldn't I need 2 rows for every row that is already in (so 4, because there are already 2 extra for the coordinates), like this:
featureID charge xcoordinate ycoordinate quality1 quality2
1 2 5105.9217 336.125209180674 1224 3453
1 2 5105.9217 336.125209180674 2242 4142
1 2 5108.7642 336.124751115092 1224 3453
1 2 5108.7642 336.124751115092 2242 4142
[...]
Would it have to be like this?
I'm very new to R, my knowledge doesn't go much further than knowing how to make a vector and some simple plots. I'm going to use R for an internship project the next couple of months and during this time I will (hopefully) learn some of the ins and outs of R. However, before I start I need to produce the data that I'm going to do the statistics on. I need to know beforehand how I should format my output CSV data so that I can easily read it in once I start my R analysis.
One thing that I've been asked to do is make a CSV file out of the data so that it can be read in by R. The example CSV files for importing with R that I've seen all look like this
featureID Charge value
1 2 10
2 0 9
However, my data mostly consists of columns whose values themselves contain multiple values. To clarify:
As an example, my data consists of "features" that, amongst other information, have a "convexhull". This convexhull consists of paired x and y coordinates. So what I could have for data is (only showing two coordinates; there can be many)
featureID Charge Convexhull
1 2 [[{'y': '336.125209180674'}, {'x': '5105.9217'}], [{'y': '336.124751115092'}, {'x': '5108.7642'}]]
Is it possible to get this into one CSV file and read it into R correctly (so that the paired x and y coordinates are preserved)? If so, what should the CSV file look like? For example, I've seen examples of CSV files with multiple values that look like this:
featureID charge xcoordinate ycoordinate
1 2 5105.9217 336.125209180674
5108.7642 336.124751115092
2 0 2434.9217 145.893331325278
But I can't find out whether this is easily imported by R.
If this is not doable in one CSV file, can the CSV files be easily imported independently, with a primary-key idea, like database linking?
The only critical things are that you have a unique character separating your data columns and that each column is the same length. As long as the second row in your last example is filled in that will import fine.
You need to consider what you want to do with the data after it's in R to decide how you might want any other special formatting beforehand. But, as long as the column separator is a unique character and the columns are of equal length then it will import.
(You can violate the unique separator requirement if your entries are wrapped in quotes. And if you want to get really fancy you could "import" almost anything. But if someone's asking you to format the data then they probably want a rectangular data.frame compatible layout. They probably want unique values in each column (no columns of points). But that's between you and them.)
long vs. wide form. Your last example is known as long form (except all cells should be filled in) and your first example is roughly wide form as discussed on the ?reshape page and illustrated in the examples at the end of that page. You likely want to stick with long form. For an alternative see the reshape2 package.
save & load. Note that if you are only writing it out to read it back in to R later (as opposed to communicating it to some other software) you could use save and load which don't require any change to the object at all.
json. Another possibility given the form of your example is that you might want to look at the rjson package .
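Putting the long-form advice together, a minimal sketch (assuming a hypothetical features.csv laid out like the long-form example above, with every cell filled in):
# read the long-form file: one row per coordinate pair
df <- read.csv("features.csv")
# long -> wide with base R (see ?reshape): number the pairs within each
# feature, then spread them into xcoordinate.1, ycoordinate.1, ... columns
df$pair <- ave(seq_len(nrow(df)), df$featureID, FUN = seq_along)
wide <- reshape(df, idvar = c("featureID", "charge"),
                timevar = "pair", direction = "wide")
# and if the data only ever goes back into R, skip CSV entirely:
save(df, file = "features.RData")
load("features.RData")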

Calculate distance between Zip Codes... AND users.

This is more of a challenge question than something I urgently need, so don't spend all day on it guys.
I built a dating site (long gone) back in 2000 or so, and one of the challenges was calculating the distance between users so we could present your "matches" within an X mile radius. To just state the problem, given the following database schema (roughly):
USER TABLE
UserId
UserName
ZipCode
ZIPCODE TABLE
ZipCode
Latitude
Longitude
With USER and ZIPCODE being joined on USER.ZipCode = ZIPCODE.ZipCode.
What approach would you take to answer the following question: What other users live in Zip Codes that are within X miles of a given user's Zip Code.
We used the 2000 census data, which has tables for zip codes and their approximate latitude and longitude.
We also used the Haversine formula to calculate distances between any two points on a sphere... pretty simple math, really.
The question, at least for us, being the 19-year-old college students we were, really became how to efficiently calculate and/or store distances from all members to all other members. One approach (the one we used) would be to import all the data and calculate the distance FROM every zip code TO every other zip code. Then you'd store and index the results. Something like:
SELECT User.UserId
FROM ZipCode AS MyZipCode
INNER JOIN ZipDistance ON MyZipCode.ZipCode = ZipDistance.MyZipCode
INNER JOIN ZipCode AS TheirZipCode ON ZipDistance.OtherZipCode = TheirZipCode.ZipCode
INNER JOIN User AS User ON TheirZipCode.ZipCode = User.ZipCode
WHERE ( MyZipCode.ZipCode = 75044 )
AND ( ZipDistance.Distance < 50 )
The problem, of course, is that the ZipDistance table is going to have a LOT of rows in it. It isn't completely unworkable, but it is really big. It also requires complete pre-work on the whole data set, which is not unmanageable either, but not necessarily desirable.
Anyway, I was wondering what approach some of you gurus might take on something like this. Also, I think this is a common issue programmers have to tackle from time to time, especially if you consider problems that are just algorithmically similar. I'm interested in a thorough solution that includes at least HINTS on all the pieces needed to do this really quickly and efficiently. Thanks!
Ok, for starters, you don't really need to use the Haversine formula here. For large distances where a less accurate formula produces a larger error, your users don't care if the match is plus or minus a few miles, and for closer distances, the error is very small. There are easier (to calculate) formulas listed on the Geographical Distance Wikipedia article.
Since zip codes are nothing like evenly spaced, any process that partitions them evenly is going to suffer mightily in areas where they are clustered tightly (east coast near DC being a good example). If you want a visual comparison, check out http://benfry.com/zipdecode and compare the zipcode prefix 89 with 07.
A far better way to deal with indexing this space is to use a data structure like a Quadtree or an R-tree. This structure allows you to do spatial and distance searches over data which is not evenly spaced.
A quadtree recursively subdivides space into four quadrants, with each larger cell holding an index of the smaller cells within it. To search over it, you drill down through each larger cell using that index. Wikipedia explains it more thoroughly.
Of course, since this is a fairly common thing to do, someone else has already done the hard part for you. Since you haven't specified what database you're using, the PostgreSQL extension PostGIS will serve as an example. PostGIS includes the ability to do R-tree spatial indexes which allow you to do efficient spatial querying.
Once you've imported your data and built the spatial index, querying for distance is a query like:
SELECT zip
FROM zipcode
WHERE geom && expand(transform(PointFromText('POINT(-116.768347 33.911404)', 4269), 32661), 16093)
  AND distance(transform(PointFromText('POINT(-116.768347 33.911404)', 4269), 32661), geom) < 16093;
I'll let you work through the rest of the tutorial yourself.
http://unserializableone.blogspot.com/2007/02/using-postgis-to-find-points-of.html
Here are some other references to get you started.
http://www.bostongis.com/PrinterFriendly.aspx?content_name=postgis_tut02
http://www.manning.com/obe/PostGIS_MEAPCH01.pdf
http://postgis.refractions.net/docs/ch04.html
I'd simply create a zip_code_distances table and pre-compute the distances between all 42K zip codes in the US which are within a 20-25 mile radius of each other.
create table zip_code_distances
(
from_zip_code mediumint not null,
to_zip_code mediumint not null,
distance decimal(6,2) default 0.0,
primary key (from_zip_code, to_zip_code),
key (to_zip_code)
)
engine=innodb;
Only including zipcodes within a 20-25 mile radius of each other reduces the number of rows you need to store in the distance table from its maximum of 1.7 billion (42K^2 - 42K) to a much more manageable 4 million or so.
I downloaded a zipcode datafile from the web which contained the longitudes and latitudes of all the official US zipcodes in csv format:
"00601","Adjuntas","Adjuntas","Puerto Rico","PR","787","Atlantic", 18.166, -66.7236
"00602","Aguada","Aguada","Puerto Rico","PR","787","Atlantic", 18.383, -67.1866
...
"91210","Glendale","Los Angeles","California","CA","818","Pacific", 34.1419, -118.261
"91214","La Crescenta","Los Angeles","California","CA","818","Pacific", 34.2325, -118.246
"91221","Glendale","Los Angeles","California","CA","818","Pacific", 34.1653, -118.289
...
I wrote a quick and dirty C# program to read the file and compute the distances between every zipcode but only output zipcodes that fall within a 25 mile radius:
using (var sw = new StreamWriter(path))
{
    foreach (ZipCode fromZip in zips)
    {
        foreach (ZipCode toZip in zips)
        {
            // skip self-pairs and anything beyond the 25 mile radius
            if (toZip.ZipArea == fromZip.ZipArea) continue;
            double dist = ZipCode.GetDistance(fromZip, toZip);
            if (dist > 25) continue;

            sw.WriteLine("{0}|{1}|{2}", fromZip.ZipArea, toZip.ZipArea, dist);
        }
    }
}
The resultant output file looks as follows:
from_zip_code|to_zip_code|distance
...
00601|00606|16.7042215574185
00601|00611|9.70353520976393
00601|00612|21.0815707704904
00601|00613|21.1780461311929
00601|00614|20.101431539283
...
91210|90001|11.6815708119899
91210|90002|13.3915723402714
91210|90003|12.371251171873
91210|90004|5.26634939906721
91210|90005|6.56649623829871
...
I would then just load this distance data into my zip_code_distances table using load data infile and then use it to limit the search space of my application.
For example if you have a user whose zipcode is 91210 and they want to find people who are within a 10 mile radius of them then you can now simply do the following:
select
p.*
from
people p
inner join
(
select
to_zip_code
from
zip_code_distances
where
from_zip_code = 91210 and distance <= 10
) search
on p.zip_code = search.to_zip_code
where
p.gender = 'F'....
Hope this helps
EDIT: extended the radius to 100 miles, which increased the number of zipcode distances to 32.5 million rows.
A quick performance check for zipcode 91210: runtime 0.009 seconds.
select count(*) from zip_code_distances
count(*)
========
32589820
select
to_zip_code
from
zip_code_distances
where
from_zip_code = 91210 and distance <= 10;
0:00:00.009: Query OK
You could shortcut the calculation by just assuming a box instead of a circular radius. Then when searching, you simply calculate the lower/upper bounds of lat/long for a given point + "radius", and as long as you have an index on the lat/long columns you can pull back all records that fall within the box pretty easily.
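A sketch of that box test (hypothetical zipcode(zip_code, latitude, longitude) table, @lat/@lon center, 10 mile radius, using roughly 69 miles per degree of latitude):
SELECT z.zip_code
FROM zipcode z
WHERE z.latitude  BETWEEN @lat - (10.0 / 69.0) AND @lat + (10.0 / 69.0)
  AND z.longitude BETWEEN @lon - (10.0 / (69.0 * COS(RADIANS(@lat))))
                      AND @lon + (10.0 / (69.0 * COS(RADIANS(@lat))));
-- degrees of longitude shrink by cos(latitude), hence the wider east/west window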
I know that this post is TOO old, but while doing some research for a client I found some useful functionality in the Google Maps API, and it is so simple to implement: you just pass the origin and destination ZIP codes in the URL, and it calculates the distance, even taking traffic into account. You can use it with any language:
origins = 90210
destinations = 93030
mode = driving
http://maps.googleapis.com/maps/api/distancematrix/json?origins=90210&destinations=93030&mode=driving&language=en-EN&sensor=false%22
Following the link, you can see that it returns JSON. Remember that you need an API key to use this on your own hosting.
source:
http://stanhub.com/find-distance-between-two-postcodes-zipcodes-driving-time-in-current-traffic-using-google-maps-api/
You could divide your space into regions of roughly equal size -- for instance, approximate the earth as a buckyball or icosahedron. The regions could even overlap a bit, if that's easier (e.g. make them circular). Record which region(s) each ZIP code is in. Then you can precalculate the maximum distance possible between every region pair, which has the same O(n^2) problem as calculating all the ZIP code pairs, but for smaller n.
Now, for any given ZIP code, you can get a list of regions that are definitely within your given range, and a list of regions that cross the border. For the former, just grab all the ZIP codes. For the latter, drill down into each border region and calculate against individual ZIP codes.
It's certainly more complex mathematically, and in particular the number of regions would have to be chosen for a good balance between the size of the table vs. the time spent calculating on the fly, but it reduces the size of the precalculated table by a good margin.
I would use latitude and longitude. For example, if you have a latitude of 45 and a longitude of 45 and were asked to find matches within 50 miles, then you could do it by moving 50/69 degrees up in latitude and 50/69 degrees down in latitude (1 degree of latitude ~ 69 miles). Select zip codes with latitudes in this range. Longitude is a little different, because degrees of longitude get smaller as you move closer to the poles.
But at 45 degrees, 1 degree of longitude ~ 49 miles, so you could move 50/49 degrees west in longitude and 50/49 degrees east, and select all zip codes from the latitude set within this longitude range. This gives you all zip codes within a square a hundred miles on a side. If you wanted to be really precise, you could then use the Haversine formula which you mentioned to weed out zips in the corners of the box, giving you a true circular radius.
Not every possible pair of zip codes is going to be used. I would build ZipDistance as a 'cache' table: for each request, calculate the distance for that pair and save it in the cache. When a request for a distance pair comes in, first look in the cache, then compute it if it's not available.
I do not know the intricacies of distance calculations, so I would also check whether computing on the fly is cheaper than looking it up (also taking into consideration how often you have to compute).
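A sketch of that cache check, reusing the ZipDistance layout from the question:
SELECT Distance
FROM ZipDistance
WHERE MyZipCode = @from AND OtherZipCode = @to;
-- no row back? compute the distance in application code, INSERT it, and return it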
I have the problem running great, and pretty much everyone's answer got used. I was thinking about this in terms of the old solution instead of just "starting over." Babtek gets the nod for stating it in the simplest terms.
I'll skip the code because I'll provide references to derive the needed formulas, and there is too much to cleanly post here.
1. Consider Point A on a sphere, represented by latitude and longitude. Figure out the North, South, East, and West edges of a box 2X miles across with Point A at the center.
2. Select all points within the box from the ZipCode table. This includes a simple WHERE clause with two BETWEEN statements limiting by Lat and Long.
3. Use the haversine formula to determine the spherical distance between Point A and every point B returned in step 2.
4. Discard all points B where distance A -> B > X.
5. Select users where ZipCode is in the remaining set of points B.
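For step 3, a sketch of the haversine math as a SQL scalar function (hypothetical T-SQL, in the spirit of the "couple of functions" mentioned below; distance in miles):
CREATE FUNCTION dbo.Haversine
    (@lat1 float, @lon1 float, @lat2 float, @lon2 float)
RETURNS float
AS
BEGIN
    -- great-circle distance in miles (Earth radius ~ 3959 mi)
    DECLARE @dlat float = RADIANS(@lat2 - @lat1);
    DECLARE @dlon float = RADIANS(@lon2 - @lon1);
    DECLARE @a float =
        SIN(@dlat / 2) * SIN(@dlat / 2) +
        COS(RADIANS(@lat1)) * COS(RADIANS(@lat2)) * SIN(@dlon / 2) * SIN(@dlon / 2);
    RETURN 2 * 3959 * ASIN(SQRT(@a));
END;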
This is pretty fast for > 100 miles. Longest result was ~ 0.014 seconds to calculate the match, and trivial to run the select statement.
Also, as a side note, it was necessary to implement the math in a couple of functions and call them from SQL. Once I got past a certain distance, the matching number of ZipCodes was too large to pass back to SQL and use in an IN statement, so I had to use a temp table and join the resulting ZipCodes to User on the ZipCode column.
I suspect that using a ZipDistance table will not provide a long-term performance gain. The number of rows just gets really big. If you calculate the distance from every zip code to every other zip code (eventually), then the resultant row count from 40,000 zip codes would be ~ 1.6B. Whoah!
Alternately, I am interested in using SQL's built in geography type to see if that will make this easier, but good old int/float types served fine for this sample.
So... final list of online resources I used, for your easy reference:
Maximum Difference, Latitude and Longitude.
The Haversine Formula.
Lengthy but complete discussion of the whole process, which I found from Googling stuff in your answers.
