PostGIS OSM query

I am new to geospatial problems and need some advice on using PostGIS.
I imported OpenStreetMap data into PostgreSQL with Osmosis and now need to query it.
Essentially, I have a location from GPS and want to retrieve the tags of all nodes, ways, and areas that lie at least partly within some distance of me (say, 100 meters).
I do not really know where to start.
Thanks.

If you have a latitude and a longitude, you could try something like the following (substitute your actual values for lat and lng):

SELECT *
FROM nodes_with_tags
WHERE ST_Distance(
        geography(geom),
        geography(ST_GeomFromEWKT('SRID=4326;POINT(' || lng || ' ' || lat || ')'))
      ) <= 100;

Converting with the geography() cast slows things down, but it returns the distance in meters rather than unintelligible degrees.
You can run the same query against the ways table, replacing the geom column with ways.linestring.

Check this link: LINK
It works with ZIP codes, but if you are familiar with SQL you can adapt it easily.
Note: on a large database you should also use a bounding box (see the Box3D function); it helps decrease query time.
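To illustrate the bounding-box idea, here is a small Python sketch that computes an approximate box of ±100 m around a point (the sample coordinates are made up, and a spherical Earth is assumed, which is plenty accurate at this scale). In SQL you could then pre-filter with something like geom && ST_MakeEnvelope(min_lng, min_lat, max_lng, max_lat, 4326) before the exact distance check:

```python
import math

def bounding_box(lat, lng, radius_m):
    """Approximate a lat/lng box enclosing a circle of radius_m meters
    around (lat, lng), assuming a spherical Earth."""
    earth_radius = 6371000.0  # meters
    dlat = math.degrees(radius_m / earth_radius)
    # One degree of longitude shrinks with latitude, so divide by cos(lat).
    dlng = math.degrees(radius_m / (earth_radius * math.cos(math.radians(lat))))
    return (lat - dlat, lng - dlng, lat + dlat, lng + dlng)

# Made-up point: roughly central Vienna, 100 m radius.
min_lat, min_lng, max_lat, max_lng = bounding_box(48.2082, 16.3738, 100)
```

The box is cheap to test against a spatial index, while the exact geography distance is only evaluated for the few rows that survive the pre-filter.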

Related

How do I achieve this output in a Google Data Studio report, given the raw data?

I have a small set of test records in Google Data Studio and am attempting to create a table that breaks down particular values relative to the total number of values, by dimension. I think the image below describes the need clearly. I have tried an approach I saw online: create a calculated field like this:
case when Action = 'Clicked' then 1 else 0 end
and then create a metric based on that field using 'Percentage of Total - Relative to Corresponding Data'. But this produces incorrect numbers and is really cumbersome (it needs one calculated field per distinct value). My client wants this exact tabular presentation, not a chart.
How do I achieve the desired report?
Thanks!
The solution entails creating calculated fields like Opened, which output 1 when Action = 'Opened' and 0 otherwise. Then create fields like Opened Rate, e.g. SUM(Opened) / Record Count, and format those as percentages.
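The per-dimension rate logic those calculated fields implement can be sketched in plain Python (the campaign names and actions here are invented sample data, not from the question):

```python
from collections import defaultdict

# Hypothetical records mirroring the Data Studio table:
# each row has a dimension value (e.g. a campaign) and an Action.
rows = [
    ("A", "Opened"), ("A", "Clicked"), ("A", "Opened"),
    ("B", "Opened"), ("B", "Bounced"),
]

totals = defaultdict(int)   # Record Count per dimension value
opened = defaultdict(int)   # the CASE-style 0/1 field, summed

for campaign, action in rows:
    totals[campaign] += 1
    opened[campaign] += 1 if action == "Opened" else 0

# "Opened Rate" = SUM(Opened) / Record Count, per dimension value
rates = {c: opened[c] / totals[c] for c in totals}
```

Formatted as a percentage, campaign A opens about 67% of the time and B 50% with this sample data.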

Google Sheets Stacked Chart with data structured as a DB table

On paper it looked simple, but in the end I'm stuck.
Below is the chart I would like to produce (I chose a simple example to illustrate the goal):
On this chart, the series for each product are properly colored and stacked for each cart.
Now, on to my problem: the data structure.
In my world, the data comes from a database, so it is structured like this:
I managed to aggregate the values, but I lose the colors of my series, and I absolutely need to be able to identify the different product volumes visually.
Is there any way to achieve the first stacked-chart layout with the second dataset format?
Yes: transform your data first and build the chart from the transformed range (I don't think there's a native solution for this, but I'm happy to learn otherwise).
For example, use =QUERY([data],"select [cart], sum([qty]) group by [cart] pivot [product] label sum([qty]) ''",1) replacing the terms in the brackets with the respective column letters (e.g. B if cart is in column B).
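The reshaping that QUERY does (long database rows pivoted to one column per product, quantities summed) can be illustrated in Python; the cart and product values are made-up sample data:

```python
from collections import defaultdict

# Hypothetical DB-style rows: (cart, product, qty), one row per line item.
rows = [
    ("cart1", "apples", 3), ("cart1", "pears", 2),
    ("cart2", "apples", 1), ("cart2", "pears", 4),
    ("cart1", "apples", 2),
]

# Pivot: one row per cart, one column per product, qty summed,
# which is the wide shape a stacked chart needs for per-series colors.
pivot = defaultdict(lambda: defaultdict(int))
for cart, product, qty in rows:
    pivot[cart][product] += qty
```

Each inner dict becomes one chart row, and each product key becomes one consistently colored series.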

How to visualize geodata (points) in the Databricks built-in map?

I simply have a dataframe of lat and lon coordinates that I would like to visualize on a map.
I am trying to view my data on the map, but nothing is displayed.
In my case, the mmsi is the id.
Please note that I have around 2M points to display. Is that possible in Databricks?
If there is no way around plotting 2,000,000 points in Databricks, what tool can handle that amount of data?
Any help is much appreciated!
The built-in map does not support latitude/longitude data points.
Delivering 2 million data points directly to a browser is problematic: Databricks has a protective limit of 20 MB for HTML display, and viewers cannot visually process that level of detail anyway.
I recommend a filter/summarization strategy to enable display within Databricks notebooks. Some ideas can be taken from this Databricks post: https://databricks.com/blog/2019/12/05/processing-geospatial-data-at-scale-with-databricks.html
This Anaconda post also gives an excellent survey of data-visualization packages; some (like Datashader and Vaex) implement a visual summarization strategy before rendering images: https://www.anaconda.com/blog/python-data-visualization-2018-why-so-many-libraries
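As a rough illustration of the summarization idea (not Databricks-specific, and the sample points are made up), one could bin the points onto a coarse grid and send only per-cell counts to the browser instead of 2 million rows:

```python
from collections import Counter

# Hypothetical (lat, lon) points; in practice these would be aggregated
# server-side (e.g. in Spark) before anything reaches the display layer.
points = [(48.21, 16.37), (48.22, 16.38), (48.21, 16.37), (51.51, -0.13)]

def bin_points(points, cell_deg=0.01):
    """Snap each point to a grid cell of cell_deg degrees and count
    points per cell, so only cell centroids and counts are plotted."""
    counts = Counter()
    for lat, lon in points:
        cell = (round(lat / cell_deg) * cell_deg,
                round(lon / cell_deg) * cell_deg)
        counts[cell] += 1
    return counts

binned = bin_points(points)
```

Two million raw points typically collapse to a few thousand cells, which is well within what a notebook can render.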
The error message says it clearly: "Unrecognizable values in the first column. The values should be either country codes in ISO 3166-1 alpha-3 format (e.g. 'GBR') or US state abbreviations (e.g. 'TX')."
Note: To plot a graph of the world, use country codes in ISO 3166-1 alpha-3 format as the key.
A Map Graph is a way to visualize your data on a map.
Plot Options... was used to configure the graph below.
Keys should contain the field with the location.
Series groupings is always ignored for World Map graphs.
Values should contain exactly one field with a numerical value.
Since there can be multiple rows with the same location key, choose Sum, Avg, Min, Max, or Count as the way to combine the values for a single key.
Different values are denoted by color on the map, and ranges are always spaced evenly.
Reference: Databricks - Charts and Graphs
Hope this helps.

How to use superset's mapbox view when longitude and latitude are in postgis point format?

I am trying to make an apache superset chart with map box view.
I have to set latitude and longitude columns, but this data lives in a PostgreSQL + PostGIS database, where latitude and longitude are stored together in a single location column. An SQL query would look like this:
SELECT ST_X(location), ST_Y(location) FROM Address
How can I make superset get latitude with the function ST_X()?
Mapbox also supports other geometry flavours, such as WKT (Well-Known Text) and GeoJSON, so it is normally not necessary to split them into X,Y or Y,X pairs.
To retrieve GeoJSON from PostgreSQL (+PostGIS):
SELECT ST_AsGeoJSON(location) FROM t;
And as WKT:
SELECT ST_AsText(location) FROM t;
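To show why the WKT form is easy to consume downstream, here is a minimal, hypothetical Python parser for a WKT point (the coordinates are made up; this is not something Superset itself requires):

```python
import re

def parse_wkt_point(wkt):
    """Parse 'POINT(x y)' into (x, y) floats.
    Note: in WKT, longitude (x) comes before latitude (y)."""
    m = re.match(r"\s*POINT\s*\(\s*(\S+)\s+(\S+)\s*\)\s*$", wkt, re.IGNORECASE)
    if not m:
        raise ValueError(f"not a WKT point: {wkt!r}")
    return float(m.group(1)), float(m.group(2))

x, y = parse_wkt_point("POINT(16.3738 48.2082)")
```

Any client that accepts WKT can get back the X/Y pair this way without the database having to split the column.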
Superset makes it possible to create "virtual" columns in tables, so I created a latitude column with this expression:
ST_X(location)
And that worked.
Thanks to the Superset team:
https://github.com/apache/incubator-superset/issues/4640#event-1527353388

How can I compute the locationGeo column in pgAdmin 3 with PostGIS functions?

My table is named map and has latitude (double), longitude (double), and locationGeo (geometry) columns. I want the locationGeo value to be computed automatically from the given latitude and longitude values when I save a record. This is the format I want:
Point(latitude,longitude),SRID
I can't find documentation for this on the web.
Using a default value is not adequate: what if you insert a record with no latitude or longitude? Instead, you should either construct the geometry when you insert the record, or use a trigger to validate and compute it for you.
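As an illustration of the validate-and-compute step, here is a Python sketch that builds the EWKT text for the point (the coordinates and SRID 4326 are assumptions for the example). One detail worth stressing: PostGIS points take longitude (x) first, then latitude (y), the opposite of the Point(latitude,longitude) order in the question:

```python
def location_ewkt(lat, lng, srid=4326):
    """Build the EWKT string for a PostGIS point.
    EWKT order is longitude (x) first, then latitude (y)."""
    if lat is None or lng is None:
        raise ValueError("latitude and longitude are both required")
    return f"SRID={srid};POINT({lng} {lat})"

ewkt = location_ewkt(48.2082, 16.3738)  # made-up coordinates
```

A BEFORE INSERT trigger could do the equivalent in SQL by assigning ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326) to NEW.locationGeo, raising an error when either coordinate is NULL.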
