I would like to build a realistic scenario using a real road network and real traffic data, for testing purposes in some research work.
I know how to extract a map from OSM following the tutorial on the SUMO website. I have two questions:
1. Where can I obtain real traffic data corresponding to the extracted map, such as average road speeds, the number of vehicles on each road, vehicle density, etc.?
2. How can I use that data to generate routes to be used in the scenario?
Thanks
There is no general open data source for traffic demand data. You can generate demand from population statistics using SUMO's activitygen, or from traffic counts using the dfrouter or the flowrouter. There are also tools available for importing origin-destination matrices, but most of these sources have to be obtained from the city or region in question.
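To make the counts-based workflow concrete, here is a minimal sketch of the netconvert + dfrouter pipeline. The file names (map.osm, detectors.xml, flows.csv, etc.) are placeholders; this snippet only assembles the command lines, which you would run with subprocess once SUMO is installed and the detector and count files exist.

```python
# Sketch of a SUMO demand-generation pipeline driven from Python.
# All input file names are hypothetical placeholders.

def netconvert_cmd(osm_file, net_file):
    # Convert the OSM extract into a SUMO network file.
    return ["netconvert", "--osm-files", osm_file, "-o", net_file]

def dfrouter_cmd(net_file, detector_file, flow_file):
    # Derive routes and vehicle emitters from induction-loop style counts.
    return ["dfrouter",
            "--net-file", net_file,
            "--detector-files", detector_file,
            "--measure-files", flow_file,
            "--routes-output", "routes.rou.xml",
            "--emitters-output", "emitters.add.xml"]

cmds = [
    netconvert_cmd("map.osm", "net.net.xml"),
    dfrouter_cmd("net.net.xml", "detectors.xml", "flows.csv"),
]
for c in cmds:
    print(" ".join(c))
```

Each assembled list can be passed to `subprocess.run(c, check=True)`; the resulting routes.rou.xml and emitters.add.xml then plug into your sumocfg.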
I started exploring Overpass Turbo and Mapbox with hopes of building my travel app. I can query some data in Overpass Turbo and get towns or islands, no problem; I understand the whole process of querying and exporting as GeoJSON.
But for learning purposes, I always do queries within a small area so I don't get too much data back.
Also, various resources mention that OSM data for the whole planet is huge. For example, https://wiki.openstreetmap.org/wiki/Downloading_data says: "The entire planet is a huge amount of data. Start with a regional extract to make sure your setup works properly. Common tools like Osmosis or various import tools for database imports and converters take hours or days to import data, depending largely on disk speed."
But when I go to apps like AllTrails, Maps.me or Mapbox, they seem to show a huge amount of data, certainly the main POIs.
Here's an example screenshot from AllTrails.
Can someone briefly explain how this is done? Do they actually download all of the data, or fetch it little by little depending on the current bounding box? Any pointers for further research would be appreciated!
Thanks
P.S. I am hoping to build my app with Node.js, if that makes a difference.
Several reasons:
They don't always display everything. You will only ever see a limited region, never the whole world in full detail. If you zoom in, you see a smaller region with more detail. If you zoom out, you see a larger region with reduced detail (fewer or no POIs, smaller roads and waterways disappear, etc.).
They don't contain all the available data. OSM data is very diverse. OSM contains roads, buildings, landuse, addresses, POI information and much more. For each of these elements, additional information is available. Roads, for instance, carry maxspeed information, lane counts, surface information, whether they are lit, and whether they have sidewalks or cycleways; buildings may have information about the number of levels, the building color, roof shape and color, and so on. Not all of this information is required by the apps you listed, and so it can be removed from the data.
They perform simplifications. It isn't always necessary to show roads, buildings, waterways and landuse in full detail. Instead, special algorithms reduce the polygon count so that the data becomes smaller while keeping sufficient details. This is often coupled with the zoom level, i.e. roads and lakes will become less detailed if zoomed out.
They never ship the whole world offline. Depending on the app, the map is available online or offline, or both. If online, the server has to store the huge amount of data, not the client device. If offline, the map is split into smaller regions that can be handled by the client. This usually means that a given map only covers a state, a region, or sometimes a city, but rarely a whole country except for smaller countries. If you want to store whole countries offline, you will need a significant amount of storage.
They never access OSM directly. Apps and websites that display OSM maps don't obtain this information live from OSM. Instead, they either maintain their own local database containing the required data, periodically updated from the main OSM database via planet dumps, or they use a third-party map provider (such as Mapbox in your screenshot) to display a base map with their own layers on top. In the latter case they don't have to store much on their server, just the things they want to show on top of OSM.
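The polygon-count reduction mentioned above is typically done with a line-simplification algorithm such as Ramer-Douglas-Peucker. A minimal sketch (the sample coordinates are made up for illustration):

```python
import math

def perpendicular_distance(p, a, b):
    # Distance from point p to the infinite line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(points, epsilon):
    # Ramer-Douglas-Peucker: recursively keep only points that deviate
    # more than epsilon from the chord between the endpoints.
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] <= epsilon:
        return [points[0], points[-1]]      # everything in between is dropped
    split = imax + 1
    left = simplify(points[:split + 1], epsilon)
    right = simplify(points[split:], epsilon)
    return left[:-1] + right                # avoid duplicating the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(simplify(line, 1.0))
```

Renderers run this kind of pass with a larger epsilon at lower zoom levels, which is why coastlines and roads look coarser when you zoom out.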
None of the above is specifically for OSM. You will find similar mechanisms in other map apps and for other map sources.
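The "limited region at a given zoom" mechanism works because tiled maps address the world in zoom-dependent tiles. The standard slippy-map conversion from latitude/longitude to the z/x/y tile indices used by OSM-style tile servers looks like this (the London coordinates are just an example):

```python
import math

def latlon_to_tile(lat, lon, zoom):
    # Web Mercator ("slippy map") tile indices: each zoom level doubles
    # the tile grid in both directions, so zoom z has 2^z x 2^z tiles.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile containing central London at zoom 12.
print(latlon_to_tile(51.5074, -0.1278, 12))
```

A client only ever requests the handful of tiles covering the current viewport at the current zoom, which is why it never needs the whole planet.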
Can we use the Hazelcast data grid to store and organize tracker data for display in a bar graph? Below are the points I need to confirm in order to build the application with my hardware:
- I am using a temperature sensor interfaced with an Arduino Yun and want to upload the sensor's readings to a Hazelcast server.
- The data uploaded to the Hazelcast server should then be read back through an Arduino MKR1000.
- Link the data to different development tools to design different types of dashboards, such as pie charts, bar charts, line charts, etc.
Please suggest the best way to create and link the database in the data grid.
How you use data on your dashboard will basically depend on how you have modelled it - one map or multiple maps, etc. You can then retrieve data through single key-based lookups or by running queries, and feed the results to your dashboard. You can also define the lifetime of the data - be it a few minutes, hours, or days. See eviction: http://docs.hazelcast.org/docs/3.10.1/manual/html-single/index.html#map-eviction
If you decide to use a visualisation tool for your dashboard that can use JMX, then you can latch on to Hazelcast's exposed JMX beans, which give you information about the data stored in the cluster and a lot more. Check out: http://docs.hazelcast.org/docs/3.10.1/manual/html-single/index.html#monitoring-with-jmx
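The key-based lookup plus lifetime/eviction behaviour can be illustrated with a plain-Python stand-in (this is not the real Hazelcast client - for that you would use hazelcast-python-client against a running cluster - it just shows the semantics):

```python
import time

class TtlMap:
    # Toy stand-in for a Hazelcast IMap with a per-entry time-to-live.
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}              # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self.store[key]      # lazily expire on read
            return None
        return value

readings = TtlMap(ttl_seconds=60)
readings.put("sensor-1", 21.5)       # e.g. latest temperature reading
print(readings.get("sensor-1"))
```

In the real grid, eviction and TTL are configured on the map rather than implemented by hand, and the dashboard performs the same get-by-key or query against the cluster.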
You can configure Hazelcast to use a MapLoader/MapStore to persist the cached data to any back-end persistence mechanism; relational or NoSQL databases are good choices.
On your first point, I wouldn't expect anything running on the Arduino to update the database directly, but the MKR1000 will give you connectivity, so you can use Kafka/MQTT/…; take a look at https://blog.hazelcast.com/hazelcast-backbone-iot-internet-things/.
If you choose this route, you’d set up a database that is accessible to all cluster members, create the MapLoader/MapStore class (see the example code, for help) and configure the cluster to read/write.
Once the data is in the cluster, access is easy and you can use a dashboard tool of your choice to present the data.
(edit) - to your question about presenting historical data on your dashboard:
Rahul’s blog post describes a very cool implementation of near/real-time data management in a Hazelcast RingBuffer. In that post, I think he mentioned collecting data every second and buffering two minutes worth.
The ring buffer has a configured capacity, but note that he is overwriting on add; this is more or less a given for real-time systems, since the choice is between losing older data and crashing.
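The overwrite-on-add behaviour can be sketched with a plain-Python fixed-capacity buffer (Hazelcast's Ringbuffer behaves similarly when configured to overwrite, but is distributed across the cluster):

```python
class RingBuffer:
    # Fixed-capacity buffer that overwrites the oldest item on add.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = [None] * capacity
        self.head = 0                # total number of adds so far

    def add(self, item):
        self.items[self.head % self.capacity] = item
        self.head += 1

    def snapshot(self):
        # Oldest-to-newest view of whatever is still retained.
        n = min(self.head, self.capacity)
        return [self.items[i % self.capacity]
                for i in range(self.head - n, self.head)]

buf = RingBuffer(capacity=3)
for reading in [1, 2, 3, 4, 5]:
    buf.add(reading)
print(buf.snapshot())  # → [3, 4, 5]; the oldest readings 1 and 2 were overwritten
```

With one reading per second and two minutes buffered, the real system would simply use capacity=120.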
For a generalized query-tool approach, I think you'd augment this. Off the top of my head, I could see using the ring buffer in conjunction with a distributed map. You could (but wouldn't need to) populate the map, using a map-event interceptor to populate the ring buffer. That should leave the existing functionality intact. The map, though, would allow you to configure a map-store/map-loader, so that your data is saved in a backing store. The map would support queries, but keep in mind that IMDG queries do not read through to the backing store.
This would give you flexibility, at the cost of some complexity. The real-time data in the ring buffer would always be available, quickly and easily. Data returned from querying the map would be very quick, too. For 'historical' data, you can query your backing store, which is slower but will probably have relatively large storage capacity. The trick here is to know when to query each. The most recent data is a given, with its fixed capacity. You need to know how much is in the cluster, i.e. how far back your in-memory history goes. I think it best to configure the expiry to a useful limit and provision the storage so that data leaves the map by expiration, not eviction. In this way, you know where the beginning of the in-memory history is. Monitoring eviction events would tell you whether your cluster has a complete view of data back to a known time.
I was making an Angular front end for a real-time application
(a mock global open-market simulation in a fictitious world, with fake countries and fake maps; it will generate a large, though not huge, amount of data in near real time and perform complex market-economy analysis, predictions and such).
I found out about Kibana literally 40 minutes before writing this, so I have no real idea of it yet.
I see that Kibana uses things like geo_points and geohashes on top of a real-world map. I could make a custom-designed world map for my needs in QGIS by georeferencing some PNG images, and also store the result on my server as tile maps or PostGIS datasets.
How do I load a custom base map into Kibana from my server?
And what base map does Kibana use by default, anyway?
From what I can see, there's no way to upload multiple training sets to the new Watson NLC tooling. I need to manage separate training sets and their associated classifiers. What am I missing?
Preferred option: Provision an NLC service instance for each set of training data you'd like to work with and separately access the tooling for each.
Workaround: Currently, the flow for managing multiple training sets in one NLC service instance is as follows:
(Optional to start fresh) Go to the training data page and click on the garbage icon to delete all training data.
Upload a training set on the training data page using the upload icon.
Manipulate the data as necessary. Add texts and classes, tag texts with classes, etc.
Create a classifier. When you create a classifier, it is essentially a snapshot of your current training data since you are able to retrieve it later from the classifiers page.
Repeat steps 1-4 as necessary until you have uploaded all of your training data sets and created the corresponding classifiers.
When you want to continue working on a previous training set:
Clear your training data (step 1 from above).
Go to the classifiers page.
Click on the download icon for the classifier which contains the training data you'd like to work with.
Return to the training data page and upload the file downloaded from step 3.
The best way to manage multiple training sets is to use a different NLC service instance for each training set.
The current beta NLC tooling is not intended to manage separate training sets within a single service instance. For example, the tool makes suggestions when you add texts without classes; these are based on the most recently trained classifier, which won't make sense if that was based on a completely different training set.
The workaround suggested by John Bufe will work if you have a hard limit on the number of NLC services you can use for some reason, e.g. you have reached your limit of Bluemix services. Cost is not a factor here, as additional NLC service instances will not increase the overall price: the monthly charge is per trained classifier instance. For example, if you have four service instances with a single classifier in each, you'll see three charged and one free.
If you want to use the NLC beta tooling to manage your training data, I would recommend using separate NLC services for each training set you require.
I am wondering if someone can provide some insight on an approach for Google Maps. I am currently developing a visualization with the Google Maps API v3. This visualization will map out polygons for countries, states, zip codes, cities, etc., as well as three other marker types (balloon, circle, etc.). The data is dynamically driven by an underlying report, which can have filters applied and can be drilled down through many levels. The biggest problem I am running into is dynamically rendering the polygons: the data necessary to generate a polygon with Google Maps v3 is large, and it also requires a good deal of processing at runtime.
My thought is that, since my visualization will never allow the user to return very large data sets (e.g. all zip codes for the USA), I could employ dynamically created Fusion Tables.
Let's say each run of my report returns 50 states or 50 zip codes. Users can drill from state > zip.
On the first run of the visualization, users will run a report and it will return the state names and 4 metrics. Would it be possible to dynamically create a Fusion Table based on this information? Would I be able to pass through the 4 metrics and the formatting for all of the different markers to be drawn on the map?
On the second run, the user will drill from state to zip code. The report will then return 50 zip codes and 4 metrics. Could the initial table be dropped and another table created, with the same requirements as above, providing the Fusion Table zip codes (22054, 55678, ...) and 4 metric values plus formatting?
Sorry for being long-winded. Even after reading the Fusion Tables documentation I am not 100% certain about this.
Fully-hosted solution
If you can upload the full dataset and get Google to do the drill-down, you could check out the Google Maps Engine platform. It's built to handle big sets of geospatial data, so you don't have to do the heavy lifting.
Product page is here: http://www.google.com/intl/en/enterprise/mapsearth/products/mapsengine.html
API docs here: https://developers.google.com/maps-engine/
Details on hooking your data up with the normal Maps API here: https://developers.google.com/maps/documentation/javascript/mapsenginelayers
Dynamic hosted solution
However, since you want to do this dynamically it's a little trickier. Neither the Fusion Tables API nor the Maps Engine API at this point in time supports table creation via the API, so your best option is to model your data in a consistent schema so you can create your tables (on either platform) ahead of time and use the API to upload and delete data on demand.
For example, you could create a table in Maps Engine ahead of time for each drill-down level (e.g. one for states, one for zip codes) and use the batchInsert method to add data at run-time.
If you prefer Fusion Tables, you can use insert or importRows.
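For the Fusion Tables route, importRows takes a CSV body POSTed to the upload endpoint. A small sketch of assembling that payload in Python (the zip codes and metric values mirror the hypothetical drill-down above; the actual HTTP call, authentication, and table ID are omitted):

```python
import csv
import io

def rows_to_csv(rows):
    # Build the CSV body that the Fusion Tables importRows call expects
    # (POST .../upload/fusiontables/v2/tables/{tableId}/import).
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical drill-down payload: zip code plus four metrics.
rows = [
    ["22054", 10, 20, 30, 40],
    ["55678", 11, 21, 31, 41],
]
body = rows_to_csv(rows)
print(body)
```

On drill-down you would delete the previous rows (or truncate via SQL-style DELETE through the query endpoint) and POST the new CSV, reusing the same pre-created table schema rather than creating tables on the fly.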
Client-side solution
The above solutions are fairly complex, and you may be better off generating your shapes using the Maps v3 API drawing features (e.g. simple polygons).
If your data mapping is quite complex, you may find it easier to bind your data to a Google Map using D3.js. There's a good example here. Unfortunately, this does mean investigating yet another API.