GFS model and clouds - weather

I have the task of producing a cloudiness forecast from the GFS model. I found a way to get GFS data and manipulate it with MetPy, but there are multiple cloud-related variables, for example:
Pressure_convective_cloud_bottom
Total_cloud_cover_convective_cloud
Pressure_high_cloud_bottom_6_Hour_Average
etc
How can I produce a meaningful forecast from them? Any tips to point me in the right direction?
Thank you

GFS output includes a value for the total cloud cover over the depth of the entire atmosphere. When accessing this data from a THREDDS server, it's called Total_cloud_cover_entire_atmosphere.
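For instance, here is a minimal sketch that pulls this variable for one point with siphon (the THREDDS client that pairs well with MetPy); the catalog URL and dataset name are assumptions based on Unidata's public server, and the exact variable name can carry interval suffixes depending on the dataset:

from datetime import datetime, timedelta

from siphon.catalog import TDSCatalog

# Assumed public Unidata catalog for 0.25-degree GFS output
cat = TDSCatalog('https://thredds.ucar.edu/thredds/catalog/'
                 'grib/NCEP/GFS/Global_0p25deg/catalog.xml')
best = cat.datasets['Best GFS Quarter Degree Forecast Time Series']

# Ask the NetCDF Subset Service for one point over the next 24 hours
ncss = best.subset()
query = ncss.query()
query.lonlat_point(14.5, 46.0)  # hypothetical location (lon, lat)
query.time_range(datetime.utcnow(), datetime.utcnow() + timedelta(hours=24))
query.variables('Total_cloud_cover_entire_atmosphere')
query.accept('netcdf4')

data = ncss.get_data(query)
print(data.variables['Total_cloud_cover_entire_atmosphere'][:])  # percent, 0-100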

Related

Kats Ensemble Models: how to retrieve the results after the prediction

Time series newbie here! I was following Section 2 of this tutorial about Kats.
While I was able to finish it, I am still not able to display the whole time series with m.plot() as they show in the tutorial. I can display only the forecasted values.
Whenever I try to retrieve any information after the prediction, I get a
'KatsEnsemble' object is not subscriptable
Here are my questions:
How can I access the results as a dataframe?
Is there also a way to retrieve the parameters used to create the forecast?

Neo4j output format

After working with Neo4j and now coming to the point of considering making my own entity manager (object manager) to work with the fetched data in the application, I wonder about Neo4j's output format.
When I run a query, the result is always returned as tabular data. Why is this?
Sure, tables keep a big place in data and processing, but it seems strange that a graph database can only output in this format.
Now when I want to create an object graph in my application, I would have to hydrate all the objects, and this is not really good for performance and doesn't leverage true graph performance.
Consider MATCH (A)-->(B) RETURN A, B when there is one A and three B's; it would return:
A B
1 1
1 2
1 3
That's the same A passed down three times over the database connection, while I only need it once and I know this before the data is fetched.
Something like this seems great: http://nigelsmall.com/geoff
load2neo is nice; a load-from-neo would also be nice! Either in the Geoff format or any other format out there: https://gephi.org/users/supported-graph-formats/
Each language could then implement its own functions to create the objects directly.
To clarify:
Relations between nodes are lost in tabular data
Redundant (non-optimal) format for graphs
Edges (relations) and vertices (nodes) are usually not in the same table. (makes queries more complex?)
Another consideration (which might deserve its own post): what's a good way to model relations in an object graph? As objects? Or as data/methods inside the node objects?
@Kikohs
Q: What do you mean by "Each language could then implement its own functions to create the objects directly"?
A: With a (partial) graph provided by the database (as the result of a query), a language such as PHP could provide a factory method (in C preferably) to construct the object graph (this is usually an expensive operation). But only if the object graph is well defined in a standard format (because this function should be simple and universal).
Q: Do you want to export the full graph or just the result of a query?
A: The result of a query. However, a query like MATCH (n) OPTIONAL MATCH (n)-[r]-() RETURN n, r should return the full graph.
Q: Do you want to dump to disk the subgraph created from the result of a query?
A: No, existing interfaces like REST are preferred to get the query result.
Q: Do you want to create the subgraph which comes from a query in memory and then request it in another language?
A: No, I want the result of the query in a format other than tabular (examples mentioned above).
Q: You make a query which only returns the name of a node, in this case, would you like to get the full node associated or just the name ? Same for the edges.
A: Nodes don't have names. They have properties, labels and relations. I would like enough information to retrieve A) the node ID, its labels, and its properties, and B) the relations to other nodes which are in the same result.
Note that the first part of the question is not a concrete "how-to" question but rather "why is this not possible?" (or, if it is possible, I'd like to be proven wrong). The second part is a real "how-to" question, namely "how to model relations". The two questions have in common that they both try to answer "how to get graph data efficiently into PHP".
@Michael Hunger
You have a point when you say that not all result data can be expressed as an object graph. It is reasonable to say that an alternative output format would only complement the table format, not replace it.
I understand from your answer that the natural (raw-ish) output format from the database is the result format with duplicates in it ("streams the data out as it comes"). In that case I understand that it's left to another program (in the dev stack) to do the mapping. So my conclusion on Neo4j implementing something like this:
Pros: not having to do this in every implementation language (of the application)
Cons: 1) no application-specific mapping is possible, 2) no performance gain if the implementation language is fast
"Even if you use geoff, graphml or the gephi format you have to keep all the data in memory to deduplicate the results."
I don't understand this point entirely. Are you saying that these formats are not able to hold deduplicated results (in certain cases)? So, in fact, there is no possible textual format in which a graph can be described without duplication?
"There is also the questions on what you want to include in your output?"
I was under the assumption that the Cypher language was powerful enough to specify this in the query, and so the output format would contain whatever the database can provide as a result.
"You could just return the paths that you get, which are unique paths through the graph in themselves".
Useful suggestion, I'll play around with this idea :)
"The dump command of the neo4j-shell uses the approach of pulling the cypher results into an in-memory structure, enriching it".
Does the enriching process fetch additional data from the database or is the data already contained in the initial result?
There is more to it.
First of all as you said tabular results from queries are really commonplace and needed to integrate with other systems and databases.
Secondly oftentimes you don't actually return raw graph data from your queries, but aggregated, projected, sliced, extracted information out of your graph. So the relationships to the original graph data are already lost in most of the results of queries I see being used.
The only time people need or use the raw graph data is when they export subgraph data from the database as a query result.
The problem of doing that as a deduplicated graph is that the db has to fetch all the result data into memory first to deduplicate it, extract the needed relationships, etc.
Normally it just streams the data out as it comes and uses little memory with that.
Even if you use geoff, graphml or the gephi format you have to keep all the data in memory to deduplicate the results (which are returned as paths with potential duplicate nodes and relationships).
There is also the question of what you want to include in your output. Just the nodes and rels returned? Or additionally all the other rels between the nodes that you return? Or all the rels of the returned nodes (but then you also have to include the end-nodes of those relationships)?
You could just return the paths that you get, which are unique paths through the graph in themselves:
MATCH p = (n)-[r]-(m)
WHERE ...
RETURN p
Another way to address this problem in Neo4j is to use sensible aggregations.
E.g. what you can do is use collect to aggregate data per node (i.e. a kind of subgraph):
MATCH (n)-[r]-(m)
WHERE ...
RETURN n, collect([r,type(r),m])
or use the new literal map syntax (Neo4j 2.0):
MATCH (n)-[r]-(m)
WHERE ...
RETURN {node: n, neighbours: collect({ rel: r, type: type(r), node: m})}
The dump command of the neo4j-shell uses the approach of pulling the cypher results into an in-memory structure, enriching it and then outputting it as cypher create statement(s).
A similar approach can be used for other output formats too if you need it. But so far there hasn't been the need.
If you really need this functionality, it makes sense to write a server extension that uses Cypher for query specification but doesn't allow RETURN statements. Instead you would always use RETURN *, aggregate the data into an in-memory structure (SubGraph in the org.neo4j.cypher packages), and then render it in a suitable format (e.g. JSON or one of those listed above).
These could be starting points for that:
https://github.com/jexp/cypher-rs
https://github.com/jexp/cypher_websocket_endpoint
https://github.com/neo4j-contrib/rabbithole/blob/master/src/main/java/org/neo4j/community/console/SubGraph.java#L123
There are also other efforts, like GraphJSON from GraphAlchemist: https://github.com/GraphAlchemist/GraphJSON
And the d3 json format is also pretty useful. We use it in the neo4j console (console.neo4j.org) to return the graph visualization data that is then consumed by d3 directly.
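For reference, the d3 force-layout JSON that such tools consume commonly has this shape (a sketch with made-up values, not necessarily the console's exact output); note that each node appears exactly once and relationships reference nodes by id, so nothing is duplicated:

# Illustrative d3 force-layout structure, expressed as a Python literal
graph = {
    "nodes": [
        {"id": "a", "labels": ["Person"], "properties": {"name": "Alice"}},
        {"id": "b", "labels": ["Person"], "properties": {"name": "Bob"}},
    ],
    "links": [
        {"source": "a", "target": "b", "type": "KNOWS"},
    ],
}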
I've been working with Neo4j for a while now, and I can tell you that if you are concerned about memory and performance you should drop Cypher altogether and use indexes and the other graph-traversal methods instead (e.g. retrieve all the relationships of a certain type from or to a start node, and then iterate over the found nodes).
As the documentation says, Cypher is not intended for in-app usage, but more as an administration tool. Furthermore, in production-scale environments, it is VERY easy to crash the server by running the wrong query.
Secondly, there is no mention in the docs of an API method to retrieve the output as a graph-like structure. You will have to process the output of the query and build it yourself.
That said, in the example you give you say that there is only one A and that you know it before the data is fetched, so you don't need to do:
MATCH (A)-->(B) RETURN A, B
but just
MATCH (A)-->(B) RETURN B
(you don't need to receive A three times, because you already know the returned B's are the nodes connected to A)
or better (if you need info about the relationships) something like
MATCH (A)-[r]->(B) RETURN r
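To illustrate the client-side hydration discussed above, here is a minimal sketch with the official neo4j Python driver (5.x, where element_id is available); the URI, credentials, and query are placeholders:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

nodes = {}   # element_id -> {labels, properties}; deduplicated across rows
links = []   # (start element_id, relationship type, end element_id)

with driver.session() as session:
    for record in session.run("MATCH (a)-[r]->(b) RETURN a, r, b"):
        for node in (record["a"], record["b"]):
            # setdefault keeps one entry per node, however many rows repeat it
            nodes.setdefault(node.element_id,
                             {"labels": list(node.labels), "properties": dict(node)})
        rel = record["r"]
        links.append((rel.start_node.element_id, rel.type, rel.end_node.element_id))

driver.close()
print(len(nodes), "unique nodes,", len(links), "relationships")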

How to get the bounding coordinates for a US postal(zip) code?

Is there a service/API that will take a postal/zip code and return the bounding(perimeter) coordinates so I can build a Geometry object in a MS SQL database?
By bounding coordinates, I mean I would like to retrieve a list of GPS coordinates that construct a polygon that defines the US zip code.
An elaboration of my comment, that ZIP codes are not polygons....
We often think of ZIP codes as areas (polygons) because we say, "Oh, I live in this ZIP code..." which gives the impression of a containing region, and maybe the fact that ZIP stands for "Zone Improvement Plan" helps the false association with polygons.
In actuality, ZIP codes are lines which represent, in a sense, mail carrier routes. Geometrically, lines do not have area. Just as lines are strings of points along a coordinate plane, ZIP code lines are strings of delivery points in the abstract space of USPS-designated addresses.
They are not correlated to geographical coordinates. What you will find, though, is that they appear to be geographically oriented, because it would be inefficient for carriers to have a route completely unrelated to distance and location.
What is this "abstract space of USPS-designated addresses"? That's how I am describing the large and mysterious database of deliverable locations maintained by the US Postal Service. Addresses are not allotted based on geography, but on the routes that carriers travel which usually relates to streets and travelability.
Some 5-digit ZIP codes are only a single building, or a complex of buildings, or even a single floor of a building (yes, multiple zip codes can be at a single coordinate because their delivery points are layered vertically). Some of these -- among others -- are "unique" ZIPs. Companies and universities frequently get their own ZIP codes for marketing or organizational purposes. For instance, the ZIP code "12345" belongs to General Electric up in Schenectady, NY. (Edit: In a previous version of Google Maps, when you follow that link, you'd notice that the placement marker was hovering, because it points to a ZIP code, which is not a coordinate. While most US ZIP codes used to show a region on Google Maps, these types cannot because the USPS does not "own" them, so to speak, and they have no area.)
Just for fun, let's try verifying an address in a unique ZIP code. Head over to SmartyStreets and punch in a bogus address in 12345, like:
Street: 999 Sdf sdf
ZIP Code: 12345
When you try to verify that, notice that... it's VALID! Why? The USPS will deliver a piece to the receptacle for that unique ZIP code, but at that point, it's up to GE to distribute it. Pretty much anything internal to the ZIP code is irrelevant to the USPS, including the street address (technically "delivery line 1"). Many universities function in a similar manner. Here's more information regarding that.
Now, try the same bogus address, but without a ZIP code, and instead do the city/state:
Street: 999 Sdf sdf
City: Schenectady
State: NY
It doesn't validate. This is because even though Schenectady contains 12345, where the address is "valid," it geometrically intersects with the "real" ZIP codes for Schenectady.
Take another instance: military. Certain naval ships have their own ZIP codes. Military addresses are an entirely different class of addresses using the same namespace. Ships move. Geographical coordinates don't.
ZIP precision is another fun one. 5-digit ZIP codes are the least "precise" (though the term "specific" might be more meaningful here, since ZIP codes don't pinpoint anything). 7- and 9-digit ZIP codes are the most specific, often down to block or neighborhood level in urban areas. But since each ZIP code is a different size, it's really hard to tell what actual distances you're talking about.
A 9-digit ZIP code might be portioned to a floor of a building, so there you have overlapping ZIP codes for potentially hundreds of addresses.
Bottom line: ZIP codes don't, contrary to popular belief, provide geographical or boundary data. They vary widely and are actually quite unhelpful unless you're delivering mail or packages... but the USPS's job is to design efficient carrier routes, not to partition the population into coordinate regions.
That's more the job of the Census Bureau. They've compiled a list of cartographic boundaries, since ZIP codes are "convenient" to work with. To do this, they sectioned bunches of addresses into census blocks. Then they aggregated USPS ZIP code data to find the relation between their census blocks (which have some rough coordinate data) and the ZIP codes. Thus, we have approximations of what it would look like to plot a ZIP code as a polygon. (Apparently, they converted a 1D line into a 2D polygon by fitting a polygon around the linear delivery data -- for each non-unique, regular ZIP code.)
From their website (link above):
A ZIP Code tabulation area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit or three-digit ZIP Code. ZCTAs are aggregations of census blocks that have the same predominant ZIP Code associated with the addresses in the U.S. Census Bureau's Master Address File (MAF). Three-digit ZCTA codes are applied to large contiguous areas for which the U.S. Census Bureau does not have five-digit ZIP Code information in its MAF. ZCTAs do not precisely depict ZIP Code delivery areas, and do not include all ZIP Codes used for mail delivery. The U.S. Census Bureau has established ZCTAs as a new geographic entity similar to, but replacing, data tabulations for ZIP Codes undertaken in conjunction with the 1990 and earlier censuses.
The USCB's dataset is incomplete, and at times inaccurate. Google still has holes in their data too (12345 is a somewhat good example), but Google will patch it eventually by going over each address and ZIP code by hand. They do this already, but haven't made all their map data perfect quite yet. Naturally, access to this data is limited by API terms, and raising those limits is very expensive.
Phew. I'm beat. I hope that helps clarify things. Disclaimer: I used to be a developer at SmartyStreets. More information on geocoding with address data.
Even more information about ZIP codes.
What you are asking for is a service to provide "free ZIP code geocoding". There are a few out there with varying quality. You're going to have a bad time coding something like this yourself for a few reasons:
Zip codes can be assigned to a single building or to a post office.
Zip codes are NOT considered a polygonal area. Projecting zip codes onto polygonal areas requires you to make an educated guess where the boundary is between one zipcode and the next.
ZIP code address data specifies only a center location for the ZIP code. Zip code data provides the general vicinity of an address. Mailing addresses that exist between one zipcode and another can be in dispute as to which zipcode they actually belong to.
A mailing address may be physically closer to zipcode 11111, yet its official zip code is the more distant zip code point 11112.
Google Maps has a geocoding API:
The Google Maps API is client-side JavaScript. You can directly query the geocoding system from PHP using an HTTP request. However, Google Maps only gives you what the United States Postal Service gives them: a point representing the center of the zipcode.
https://developers.google.com/maps/#Geocoding_Examples
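A quick sketch of such a request (the endpoint is Google's documented Geocoding API; the key and the response handling are illustrative), showing that you get back a single center point rather than a polygon:

import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "12345", "key": "YOUR_API_KEY"},  # placeholder key
)
location = resp.json()["results"][0]["geometry"]["location"]
print(location)  # e.g. {'lat': ..., 'lng': ...} -- one point, no boundary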
map city/zipcode polygons using google maps
Thoughts on projecting a zipcode to its lat/long bounding box
There are approximately 43,000 ZIP Codes in the United States. This number fluctuates from month to month, depending on the number of changes made. The zipcodes used by the USPS are not represented as polygons and do not have hard and fast boundaries.
The USPS (United States Postal Service) is the authority that defines each zipcode's lat/long. Any software which resolves a zipcode to a geographical location would need weekly updates. One company called AlignStar provides demographics and GIS data for zipcodes (http://www.alignstar.com/data.html).
Given a physical (mailing) address, find the geographical coordinates in order to display that location on a map.
If you want to reliably project what shape a zipcode has, you are going to need to brute-force it: ask "give me every street address by zipcode", then paint boxes around those misshapen blobs. Then you can get a general feel for which geographical areas the zipcodes cover.
http://vterrain.org/Culture/geocoding.html
If you were to throw millions of mailing address points into an algorithm resolving every one to a lat/long, you might be able to build a rudimentary blob bounding box for each zipcode. You would have to re-run this algorithm periodically, and it would theoretically heal itself whenever the zipcode numbers move.
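A minimal sketch of that idea with shapely, assuming you have already geocoded the delivery addresses of one zipcode into (lon, lat) pairs:

from shapely.geometry import MultiPoint

# Hypothetical geocoded delivery points (lon, lat) for one zipcode
points = [(-73.95, 42.81), (-73.94, 42.82), (-73.96, 42.83), (-73.93, 42.80)]

blob = MultiPoint(points).convex_hull   # rough polygon around the "blob"
print(blob.wkt)      # the approximate zipcode shape
print(blob.bounds)   # (minx, miny, maxx, maxy) bounding box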
Other ideas
http://shop.delorme.com/OA_HTML/DELibeCCtpSctDspRte.jsp?section=10075
http://www.zip-codes.com/zip-code-map-boundary-data.asp
Step 1: download cb_2018_us_zcta510_500k.zip from
https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
If you want to keep them in MySQL:
Step 2: in MySQL, create a database named spatialdata, then run this command:
ogr2ogr -f "MySQL" MYSQL:"spatialdata,host=localhost,user=root" -nln "map" -a_srs "EPSG:4683" cb_2018_us_zcta510_500k.shp -overwrite -addfields -fieldTypeToString All -lco ENGINE=MyISAM
I uploaded the file on GitHub (https://github.com/sahilkashyap64/USA-zipcode-boundary/blob/master/USAspatialdata.zip).
In the your "spatialdata db" there will be 2 table named map & geometry_columns .
In 'map' there will be a column named "shape".
shape column is of type "geometry" and it contains polygon/multipolygon files
In 'geometry_columns' there will will be srid defined
How to check if a point falls in a polygon:
SELECT * FROM map WHERE ST_Contains( map.SHAPE, ST_GeomFromText( 'POINT(63.39550 -148.89730 )', 4683 ) )
And to show the boundary on a map:
select zcta5ce10 as zipcode, ST_AsGeoJSON(SHAPE) sh from map where ST_Contains( map.SHAPE, ST_GeomFromText( 'POINT(34.1116 -85.6092 )', 4683 ) )
"ST_AsGeoJSON" this returns spatial data as geojson.
Use http://geojson.tools/
"HERE maps" to check the shape of geojson
If you want to generate TopoJSON:
mapshaper converts a shapefile to TopoJSON (no need to convert it to a KML file):
npx -p mapshaper mapshaper-xl cb_2018_us_zcta510_500k.shp snap -simplify 0.1% -filter-fields ZCTA5CE10 -rename-fields zip=ZCTA5CE10 -o format=topojson cb_2018_us_zcta510_500k.json
If you want to convert the shapefile to KML:
ogr2ogr -f KML tl_2019_us_zcta510.kml -mapFieldType Integer64=Real tl_2019_us_zcta510.shp
I have used Mapbox GL to display two zipcodes.
Example: https://sahilkashyap64.github.io/USA-zipcode-boundary/
Code: https://github.com/sahilkashyap64/USA-zipcode-boundary
SQL Server Solution
Download the Shape files from the US Census:
https://catalog.data.gov/dataset/2019-cartographic-boundary-shapefile-2010-zip-code-tabulation-areas-for-united-states-1-500000
I then found this repository to import the shape file into SQL Server; it was very fast and required no additional coding: https://github.com/xfischer/Shape2SqlServer
Then I could write my own script to find out which zip codes are in a polygon I created:
DECLARE @polygon GEOMETRY;
DECLARE @isValid bit = 0;
DECLARE @p nvarchar(2048) = 'POLYGON((-120.1547 39.2472,-120.3758 39.1950,-120.2124 38.7734,-119.6590 38.8162,-119.6342 39.3672,-120.1836 39.2525,-120.1547 39.2472))';
SET @polygon = GEOMETRY::STPolyFromText(@p, 4326);
SET @isValid = @polygon.STIsValid();
IF (@isValid = 0)
    SET @polygon = @polygon.MakeValid();
SET @isValid = @polygon.STIsValid();
IF (@isValid = 1)
BEGIN
    SELECT * FROM cb_2019_us_zcta510_500k
    WHERE geom.STIntersects(@polygon) = 1;
END
ELSE
    SELECT 'Polygon not valid';
I think this is what you need; it uses the US Census as its repository: US Zipcode
Boundaries API: https://www.boundaries-io.com
The above API shows US boundaries (GeoJSON) by zipcode, city, and state. You should use the API programmatically to handle large results.
Disclaimer: I work here.
I think a world GeoJSON file and the Google Maps Geocoding API can help you.
Example: you can use the Geocoding API to geocode the ZIP; you will get the city, state, and country. Then you search the world and US GeoJSON to get the boundary. I have an example of a US state boundary, like dsdlink.

Multi-location entity query solution with geographic distance calculation

In my project we have an entity called Trip. This trip has two points: start and finish. Start and finish are geo coordinates with some added properties like address, etc.
What I need is to query for all Trips that satisfy search criteria for both start and finish,
something like
select from trips where start near 16,16 and finish near 18,20 and type = type
So my question is: which database can offer such functionality?
What I have tried:
I have explored MongoDB, which has support for geo indexes but does not support this use case. The current solution stores the points as separate documents which have a reference to a Trip. We run two separate queries for starts and finishes, extract the ids of their associated trips, select the trip ids that are found in both starts and finishes, and finally return a collection of trips.
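A minimal pymongo sketch of that two-query workaround (collection and field names are made up for illustration; it assumes a 2dsphere index on loc):

from pymongo import MongoClient

db = MongoClient()["demo"]

def trip_ids_near(kind, lon, lat, max_meters=5000):
    cursor = db.points.find({
        "kind": kind,  # "start" or "finish"
        "loc": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [lon, lat]},
            "$maxDistance": max_meters,
        }},
    }, {"trip_id": 1})
    return {doc["trip_id"] for doc in cursor}

# Trips whose start is near (16,16) AND whose finish is near (18,20)
ids = trip_ids_near("start", 16, 16) & trip_ids_near("finish", 18, 20)
trips = list(db.trips.find({"_id": {"$in": list(ids)}}))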
On a small sample this works fine, but with a larger collection it gets slow, and it's like scratching my left ear with my right hand.
So I am looking for a better solution.
I know about Neo4j and its spatial plugin, but I couldn't even make it work on Windows. Would it support our use case?
Or are there any better solutions? Preferably with an object mapper written in PHP.
Like edze already said, Postgres (PostGIS) or SQLite (SpatiaLite) is what you're looking for:
SELECT
  *
FROM
  trips
WHERE
  ST_Distance(ST_StartPoint(way), ST_GeomFromText('POINT(16 16)', 4326)) < 5
  AND ST_Distance(ST_EndPoint(way), ST_GeomFromText('POINT(18 20)', 4326)) < 5
  AND type = 'type'
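A note on that query: ST_DWithin can use a spatial index, unlike comparing ST_Distance to a threshold. Here is a hedged sketch of the indexed variant run from Python with psycopg2 (table and column names as in the query above; the connection string is a placeholder):

import psycopg2

conn = psycopg2.connect("dbname=demo")  # placeholder connection string
with conn.cursor() as cur:
    cur.execute("""
        SELECT *
        FROM trips
        WHERE ST_DWithin(ST_StartPoint(way), ST_GeomFromText('POINT(16 16)', 4326), 5)
          AND ST_DWithin(ST_EndPoint(way),   ST_GeomFromText('POINT(18 20)', 4326), 5)
          AND type = %s
    """, ("type",))
    rows = cur.fetchall()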

How to convert other source maps to OBF (e.g. Navteq, Garmin, TomTom)

I am looking for a solution to create a better OsmAnd map for a region, as the existing OSM maps are not complete.
If I get a map in Navteq/Garmin/TomTom format, will I be able to convert it to the OsmAnd OBF format and use it as a replacement in OsmAnd?
There seem to be few references on this topic.
I would expect Navteq/Garmin/TomTom to provide maps in shapefile format, and OSM can take data from shapefiles per http://wiki.openstreetmap.org/wiki/Shapefiles#Obtaining_OSM_data_from_shapefiles
HOWEVER... Navteq, Garmin, and TomTom are commercial organizations which value their data; I would be very surprised if their licenses allowed you to give it to OSM for free use. Furthermore, I expect that OSM would not allow you to give it to them. So I would be very surprised if this approach worked.
