Get Distance per Country with GraphHopper

We have a running Graphhopper Server in our company for getting routes.
Is there a way to find out how far we are driving in each country? E.g. if I'm driving from Munich to Vienna, I would like to know how many km we are driving in Germany and how many in Austria.

This is not yet possible out of the box. You'll have to create a feature request for this or implement it yourself: grab the country boundaries, identify and mark the edges/nodes in the graph, and then either create additional instructions or store the point indices so you can later calculate the distances/times for every path.
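For illustration only, a cruder alternative to marking edges in the graph is to post-process the returned route geometry against country polygons. Below is a minimal sketch using turf.js; it assumes you already have country boundaries as GeoJSON and request the route with points_encoded=false so the coordinates are available directly (all names and data shapes here are assumptions, not GraphHopper API):

// Minimal sketch: split a route's distance by country.
// routePoints: array of [lon, lat] pairs from paths[0].points.coordinates
// countries: array of { code: 'DE', polygon: <GeoJSON Feature<Polygon|MultiPolygon>> }
const turf = require('@turf/turf');

function distancePerCountry(routePoints, countries) {
  const totals = {};
  for (let i = 1; i < routePoints.length; i++) {
    const from = turf.point(routePoints[i - 1]);
    const to = turf.point(routePoints[i]);
    const segmentKm = turf.distance(from, to); // kilometres by default
    // Attribute the whole segment to the country of its midpoint (approximation).
    const mid = turf.midpoint(from, to);
    const country = countries.find(c => turf.booleanPointInPolygon(mid, c.polygon));
    const code = country ? country.code : 'default';
    totals[code] = (totals[code] || 0) + segmentKm;
  }
  return totals; // e.g. { DE: 461.2, AT: 123.4 }
}

Attributing each segment to its midpoint's country is only an approximation near borders; the in-graph approach described above is more precise.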

We have now built our own solution that shows the distance per country and also breaks it down by road type:
commercial_info: {
  DE: {
    distance: 461221.4580000006,
    distance_per_highway: {
      trunk: "801.1859999999999",
      motorway_link: "6521.838000000001",
      motorway: "447824.3810000005",
      primary: "5922.418000000001"
    }
  },
  RS: {
    distance: 491452.9660000001,
    distance_per_highway: {
      secondary: "17662.566",
      unclassified: "4864.14",
      primary_link: "273.186",
      service: "398.826",
      motorway_link: "727.6659999999999",
      motorway: "445850.3279999998",
      primary: "21407.297000000002"
    }
  },
  default: {
    distance: 79.43371162874004,
    distance_per_highway: {
      nan: "39.71685581437002"
    }
  }
}

Related

Is a bucket pattern in MongoDb the best way to handle large unbounded arrays?

I'm implementing social features to a MERN stack app (follow/unfollow users), and trying to come up with a good MongoDB solution for avoiding issues with potentially large unbounded arrays of followers. Specifically I'm hoping to avoid:
MongoDB having to move a large follower array on disk and rebuild indexes as it grows larger
hitting the 16mb bson limit if a user ever hits a very large number of followers (> 1 million)
slow performance when querying/returning followers to display via pagination, or when calculating/displaying follower count
From everything I've researched, it seems like using a bucket pattern approach is the best solution. Two good articles I found on this:
https://www.mongodb.com/blog/post/paging-with-the-bucket-pattern--part-1
https://www.mongodb.com/blog/post/paging-with-the-bucket-pattern--part-2
I've started to approach it like this...
Follower model:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const FollowerSchema = new Schema({
  user: {
    type: Schema.Types.ObjectId,
    ref: 'user',
  },
  // creating an array of followers
  followers: [
    {
      user: {
        type: Schema.Types.ObjectId,
        ref: 'user',
      },
      datefol: {
        type: Date,
        default: Date.now,
      },
    },
  ],
  count: {
    type: Number,
  },
  createdate: {
    type: Date,
    default: Date.now,
    required: true,
  },
});

module.exports = Follower = mongoose.model('follower', FollowerSchema);
Upsert in the Node.js API to add a follower to an array bucket (each bucket will contain 100 followers):
const follow = await Follower.updateOne(
  { user: req.params.id, count: { $lt: 100 } },
  {
    $push: {
      followers: {
        user: req.user.id,
        datefol: Date.now(),
      },
    },
    $inc: { count: 1 },
    $setOnInsert: { user: req.params.id, createdate: Date.now() },
  },
  { upsert: true }
);
Basically every time a follower is added, this will add them to the first bucket found that contains less than 100 followers (tracked by the count).
Is this the best approach for handling potentially large arrays? My concerns are:
if someone unfollows a user and the app runs a $pull to remove the follower from the array in one of the buckets, multiple buckets could then contain fewer than 100 followers. New followers would no longer always land in the most recent bucket, so later, when querying and trying to return followers ordered by bucket createdate, some of the newest followers might sit in an older bucket and not be returned correctly. The articles above mention expressive update instructions introduced in MongoDB 4.2 that solve this problem, but it's not really clear to me how.
if I corrected for that by returning all of a user's follower buckets and sorting by follow date, it seems like that could become very slow if someone had tons of followers
if I want to be able to paginate and return 100 followers per page, starting with the latest, how would that work with this approach? Should I add a pageNumber field to the model and somehow have it incremented each time a bucket is created (the first bucket gets pageNumber 1, the next pageNumber 2, etc.), so that when a user jumps to follower page 500 on the front end, a query pulls bucket 500?
The bucket pattern is not a perfect match for the case you describe.
The pattern that best fits your needs is the outlier pattern: https://www.mongodb.com/blog/post/building-with-patterns-the-outlier-pattern
Your case is practically the same as the example in that article.
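For illustration, a rough sketch of how the outlier pattern could look with Mongoose (model names such as User/ExtraFollower, the followerCount field and the 1000 threshold are assumptions, not a prescribed design): most users keep their followers embedded in their own document, and only the rare outliers spill into an overflow collection.

// Hypothetical sketch of the outlier pattern (not the questioner's schema).
const FOLLOWER_LIMIT = 1000; // embedded-array threshold, tune for your data

async function addFollower(userId, followerId) {
  // Push into the embedded array while the user is still below the threshold.
  const res = await User.updateOne(
    { _id: userId, followerCount: { $lt: FOLLOWER_LIMIT } },
    {
      $push: { followers: { user: followerId, datefol: Date.now() } },
      $inc: { followerCount: 1 },
    }
  );

  if (res.modifiedCount === 0) { // nModified in older Mongoose versions
    // Outlier: flag the user and store the extra follower in an overflow collection.
    await User.updateOne(
      { _id: userId },
      { $set: { hasExtraFollowers: true }, $inc: { followerCount: 1 } }
    );
    await ExtraFollower.create({ user: userId, follower: followerId, datefol: Date.now() });
  }
}

Reads then stay a single document fetch for the vast majority of users, and only flagged outliers need an extra query against the overflow collection.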

How to model this NoSQL data structure in Firestore (Review my first approach)

I am a fairly new web developer and would need your help with a project I am currently working on. I have worked in the past on a very simple Realtime Database example and have little to no experience with Firestore or NoSQL in general.
I want to create a system that allows end users to get an email once a week containing a list of special offers from the bars they have subscribed to. The offers change each day of the week. Bar owners can fill out a form in a Vue.js web application every week with their weekly special offers.
Every Monday morning a cron job has to look up which end users have subscribed to which bars, then aggregate the data and send it via email.
The question is how would you structure the data so that I can easily compose the email and send it via a cloud function?
My approach would be to have three main collections: RestaurantOwner, EndUser, SpecialOfferings
Please see the graphic for an example process:
BarOwner and EndUser are pretty straightforward. However, the difficult part is how to structure the SpecialOffers so they can be queried the right way.
My idea would be to structure it based on the calendar week and link it to the uid from the barOwner:
specialOffers: {
  2019_CW27: {
    barUID001: {
      mon: {
        title: 'Banana Daiquir',
        price: 4.99,
      },
      tue: {
        title: 'After Five',
        price: 2.99,
      },
      wed: {
        title: 'Cool Colada',
        price: 6.99
      },
      thu: {
        title: 'Crantini',
        price: 5.99
      },
      fri: {
        title: 'French Martini',
        price: 4.99
      }
    },
    barUID002: {
      mon: {
        title: 'Gin & Tonic',
        price: 8.99,
      },
      tue: {
        title: 'Cratini',
        price: 4.99,
      },
      wed: {
        title: 'French Martini',
        price: 4.99
      },
      thu: {
        title: 'After Five',
        price: 3.99
      },
      fri: {
        title: 'Cool Colada',
        price: 6.99
      }
    }
  },
  2019_CW28: {
    barUID01: {~~~},
    barUID02: {~~~}
  }
}
The disadvantage of this approach is that it creates a deeply nested object when you imagine that there are 52 calendar weeks, e.g. 100 signed-up bars with 5 special offers each per week, and I am not sure whether I would be able to query it the way I need to.
Is this approach reasonable or what would you do differently?
Thank you so much for your help! I highly appreciate it.
I'm assuming the following scenarios:
1) The bar owners make modifications to their offers very often.
2) The bar owners should be the only ones allowed to modify each bar's offers.
If you have these two scenarios, I would recommend a sub-collections approach here.
When to use sub-collections:
1) When there are a lot of fields in a document. Cloud Firestore has a 20,000 field limit per document. (Relevant if the number of bars can exceed 20,000 fields.)
2) When updating the parent collection is a common operation. Firestore only lets you update a document at a rate of about 1 write/second. (Relevant if the SpecialOffers information of each bar is modified very often: if two bar owners modify their offers at the same time, only one write succeeds and the second write operation waits until the first is completed. This can delay updating offers, particularly at the end of a week when almost all the bars update their offers.)
3) When you want to limit access to particular fields of a document. (Relevant if you want to restrict access to a bar's offers to the bar owner alone. You can restrict access to each document in the Bars sub-collection according to its owner using Firestore Security Rules.)
So I would recommend a sub-collection Bars under the main collection SpecialOffers. This way the design becomes scalable and you can add restaurants and super-markets as other similar sub-collections in the future without heavily altering your design.
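As an illustration of that layout (a SpecialOffers document per calendar week with a Bars sub-collection), a minimal sketch with the Firebase Admin SDK; the collection and field names are assumptions based on the structure above:

// Hypothetical sketch: specialOffers/{week}/bars/{barUid}
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// A bar owner writes this week's offers (called from your Vue app's backend).
async function saveWeeklyOffers(week, barUid, offers) {
  // offers: { mon: { title, price }, tue: { ... }, ... }
  await db
    .collection('specialOffers')
    .doc(week)                 // e.g. '2019_CW27'
    .collection('bars')
    .doc(barUid)
    .set(offers);
}

// The Monday cron job reads all bars' offers for the week in one query.
async function getOffersForWeek(week) {
  const snapshot = await db
    .collection('specialOffers')
    .doc(week)
    .collection('bars')
    .get();
  return snapshot.docs.map(doc => ({ barUid: doc.id, ...doc.data() }));
}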
Another advantage is that sub-collections are basically collections, and they don't have a limit on the number of documents they can hold. So even if the number of registered bars goes above 20,000, which is the field limit for a Firestore document, your sub-collection won't have a problem, whereas a single document would run out of fields to save the offers for new bars.
Ultimately the choice depends on your use cases.
Hope this helps.

Mongodb schema best storage of Achievement system [closed]

I'm going to create an achievement system in MongoDB, but I'm not sure how I would format/store it in the database.
Each user should have a progress value stored for every achievement, and I'm really not sure what the best way to do this is without running into performance issues.
What should I do? What I had in mind was maybe something like this:
Should I store each achievement as a unique document in an Achievement collection, with a user array within that document containing objects with the user ID and that user's achievement progress?
Would I then get a performance issue when there are 1000+ achievements that are being checked fairly often?
Or should I do something else?
Example schema for the option above:
{
  name: {
    type: String,
    default: 'Achievement name'
  },
  users: [
    {
      userid: {
        type: String,
        default: ' users id here'
      },
      progress: {
        type: Number,
        default: 0
      }
    }
  ]
}
Even though the question is specifically about the database design, I will give a solution for the tracking/awarding logic as well to establish more accurate context for the db design.
I would store the achievements progress separately from the already awarded achievements for cleaner tracking and discovery.
The whole logic is event based and has multiple layers of event handling. This gives you TONS of flexibility on how you track your data and also gives you a pretty good mechanism to track history. Basically, you can look at it as a form of logging.
Of course, your system design and contracts depend heavily on the information you're going to track and its complexity. A simple progress field may not suffice in every case (you might want to track something more complex than a simple number between X and Y). There is also the case of tracking data that updates quite frequently (such as distance travelled in games). You didn't give any context on the topic of your achievement system, so we'll stick with a generic solution. These are just a couple of things to take note of, as they will affect the design.
Okay, so let's start from the top and track the entire flow for a tracked piece of data and its eventual achievement progress. Let's say we're tracking consecutive days of user login and we're going to award an achievement when the user reaches 10.
Note that everything below is just pseudo-code.
So, let's say today is [8th of July, 2017]. For now, our User entity looks like this:
User: {
  id: 7,
  trackingData: {
    lastLogin: 7 of July, 2017 (should be full DateTime object, but using this for brevity),
    consecutiveDays: 9
  },
  achievementProgress: [
    {
      achievementID: 10,
      progress: 9
    }
  ],
  achievements: []
}
And our achievements collection contains the following entity:
Achievement: {
  id: 10,
  name: '10 Consecutive Days',
  rewardValue: 10
}
The user tries to log in (or visit the site). The application handler takes note of that and, after handling the login logic, fires an event of type ACTION:
ACTION_EVENT = {
  type: ACTION,
  name: USER_LOGIN,
  payload: {
    userID: 7,
    date: 8 of July, 2017 (should be full DateTime object, but using this for brevity)
  }
}
We have an ActionHandler which listens for events of type ACTION:
ActionHandler.handleEvent(actionEvent) {
  subscribersMap = Map<eventName, handlers>;
  subscribersMap[actionEvent.name].forEach(subscriber => subscriber.execute(actionEvent.payload));
}
subscribersMap gives us a collection of handlers that should respond to each specific action (this resolves to USER_LOGIN for us). In our case we can have one or two handlers that concern themselves with updating the lastLogin and consecutiveDays tracking properties in the user entity. The handlers will update the tracking information and fire new events further down the line.
Once again, for brevity, we're gonna incorporate both into one:
updateLoginHandler: function(payload) {
  user = db.getUser(payload.userID);
  let eventType;
  let eventValue;

  if (payload.date - user.trackingData.lastLogin > 1 day) {
    user.trackingData.consecutiveDays = 1;
    eventType = 'PROGRESS_RESET';
    eventValue = 1;
  }
  else {
    const newValue = user.trackingData.consecutiveDays + 1;
    user.trackingData.consecutiveDays = newValue;
    eventType = 'PROGRESS_INCREASE';
    eventValue = newValue;
  }

  user.trackingData.lastLogin = payload.date;

  /* DISPATCH NEW EVENT OF TYPE ACHIEVEMENT_PROGRESS */
  AchievementProgressHandler.dispatch({
    type: ACHIEVEMENT_PROGRESS,
    name: eventType,
    payload: {
      userID: payload.userID,
      achievementID: 10,
      value: eventValue
    }
  });
}
Here, PROGRESS_RESET has the same contract as PROGRESS_INCREASE but a different semantic meaning, and I would keep them separate for history/tracking purposes. If you wish, you can combine them into a single PROGRESS_UPDATE event.
Basically, we update the tracked fields that depend on the lastLogin date and fire a new ACHIEVEMENT_PROGRESS event, which should be handled by a separate handler following the same pattern (AchievementProgressHandler). In our case:
ACHIEVEMENT_PROGRESS_EVENT = {
  type: ACHIEVEMENT_PROGRESS,
  name: PROGRESS_INCREASE,
  payload: {
    userID: 7,
    achievementID: 10,
    value: 10
  }
}
Then, in AchievementProgressHandler we follow the same pattern:
AchievementProgressHandler: function(event) {
  achievementCheckers = Map<achievementID, achievementChecker>;

  /* update user.achievementProgress code */

  switch (event.name) {
    case 'PROGRESS_INCREASE':
      achievementCheckers[event.payload.achievementID].execute(event.payload);
      break;
    case 'PROGRESS_RESET':
      ...
  }
}
achievementCheckers contains a checker function for each specific achievement that decides if the achievement has reached its desired value(a progress of 100%) and should be awarded. This enables us to handle all kinds of complex cases. If you only track a single X out of Y scenario, you can share the function between all achievements.
The handler basically does this:
achievementChecker: function(payload) {
  achievementAwardHandler;
  achievement = db.getAchievement(payload.achievementID);

  if (payload.value >= achievement.rewardValue) {
    achievementAwardHandler.dispatch({
      type: ACHIEVEMENT_AWARD,
      name: ACHIEVEMENT_AWARD,
      payload: {
        userID: payload.userID,
        achievementID: payload.achievementID,
        awardedAt: [current date]
      }
    });

    /* Here you can clear the entry from user.achievementProgress as you no longer need it.
       You can also move this inside the achievementAwardHandler. */
  }
}
We once again dispatch an event and use an event handler - achievementAwardHandler. You can skip the event creation step and award the achievement to the user directly but we keep it consistent with the whole history logging flow.
An added benefit here is that you can use the handler to defer the achievement awarding to a specific later time, effectively batching awards for multiple users, which serves a couple of purposes, including better performance.
Basically, this pseudo-code handles the flow from [a user action] to [achievement rewarding] with all intermediate steps included. It's not set in stone, and you can modify it as you like, but all in all it gives you a clean separation of concerns, cleaner entities and good performance, and it lets you add complex checks and handlers that are easy to reason about, while at the same time providing a great history log of the user's overall progress.
Regarding the DB schema entities, I would suggest the following:
User: {
  id: any,
  trackingData: {},
  achievementProgress: {} || [],
  achievements: []
}
Where:
trackingData: an object that contains everything you're willing to track about the user. The beauty is that the properties here are independent from achievement data. You can track whatever you want and eventually use it for achievement purposes.
achievementProgress: a map of <key: achievementID, value: data> or an array containing the current progress for each achievement.
achievements: an array of awarded achievements.
and Achievement:
Achievement: {
  id: any,
  name: any,
  rewardValue: any (or any other field/fields. You have complete freedom to introduce any kind of tracking with the approach above),
  users?: [
    {
      userID: any,
      awardedAt: date
    }
  ]
}
users is a collection of the users who have been awarded the given achievement. This is optional and is here only if you have a use for it and query this data frequently.
What you might be looking for is a badge-style implementation, just like Stack Overflow rewards its users with badges for specific achievements.
Method 1: You can have flags in the user profile for each badge. Since you're doing it in a NoSQL database, you just have to set a flag for each badge.
const badgeSchema = new mongoose.Schema({
  badgeName: {
    type: String,
    required: true,
  },
  badgeDescription: {
    type: String,
    required: true,
  }
});

const userSchema = new mongoose.Schema({
  userName: {
    type: String,
    required: true,
  },
  badges: {
    type: [Object],
    required: true,
  }
});
If your application architecture is event-based, you can trigger awarding badges to users. That operation is just inserting a Badge object with its progress into the user's badges array:
{
  badgeId: ObjectId("602797c8242d59d42715ba2c"),
  progress: 10
}
The update operation will find and update the badges array with the progress percentage number.
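For example (assuming the userSchema above is registered as the User model), the Method 1 progress update could be sketched like this:

// Hypothetical sketch: bump the progress of one badge inside the user's badges array.
async function updateBadgeProgress(userId, badgeId, progress) {
  // The positional operator ($) targets the array element matched in the filter.
  await User.updateOne(
    { _id: userId, 'badges.badgeId': badgeId },
    { $set: { 'badges.$.progress': progress } }
  );
}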
And while displaying user achievements in the user interface, you can just loop over the badges array to show the badges the user has achieved and their progress.
Method 2: Have a separate Mongo collection for the badge and user mapping. Whenever a user achieves a badge you insert a record into that collection. It will be a one-to-one mapping of user _id, badge _id and progress value. But as the collection grows huge, you will need an index to efficiently query the user and badge mapping.
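A sketch of what the Method 2 mapping collection and its index could look like (schema and field names are assumptions):

// Hypothetical user-badge mapping collection for Method 2.
const userBadgeSchema = new mongoose.Schema({
  user: { type: mongoose.Schema.Types.ObjectId, ref: 'user', required: true },
  badge: { type: mongoose.Schema.Types.ObjectId, ref: 'badge', required: true },
  progress: { type: Number, default: 0 },
});

// Compound index so "badges of a user" and "has user X got badge Y" stay fast as the collection grows.
userBadgeSchema.index({ user: 1, badge: 1 }, { unique: true });

module.exports = mongoose.model('userBadge', userBadgeSchema);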
You will have to analyze the best approach for your specific use case.
MongoDB is flexible enough to allow teams to develop applications quickly and evolve their model with little friction as the application needs change. In cases where you need a robust model from day one, there is a flexible methodology that can guide you through the process of modeling your data.
The methodology is composed of:
Workload: This stage is about gathering as much information as possible to understand your data. This will allow you to formulate assumptions about your data size and the operations that will be performed against it (reads and writes), and to quantify and qualify those operations.
You can get this by:
Scenarios
Prototype
Production Logs & Stats (if you are migrating).
Relationships: Identify the relationship between the different entities in your data, quantify those relationships and apply embedding or linking. In general you should prefer embedding by default, but remember that arrays should not grow without bound (6 Rules of Thumb for MongoDB Schema Design: Part 3).
Patterns: Apply schema design patterns. Take a look at Building with Patterns: A Summary, it presents a matrix that highlights the pattern that could be useful for a given use case.
Finally, the goal of this methodology is to help you create a model that can scale and perform well under stress.
If you design the achievement schema like this:
{
  name: {
    type: String,
    default: "Achievement name",
  },
  userid: {
    type: String,
    default: " users id here",
  },
  progress: {
    type: Number,
    default: 0,
  },
}
When an achievement is gained, you just add another entry.
For computing achievements, map-reduce is a good candidate: you can run it on a less regular basis, using it for offline computation of the data that you want.
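As an illustration, using the schema from the question (an achievement document holding a users array), an offline aggregation could compute per-achievement completion counts roughly like this (the completion threshold and output collection name are assumptions):

// Hypothetical sketch (mongo shell): count users who completed each achievement.
db.achievements.aggregate([
  { $unwind: '$users' },
  { $match: { 'users.progress': { $gte: 100 } } },      // assumed completion threshold
  { $group: { _id: '$name', completed: { $sum: 1 } } },
  { $out: 'achievement_stats' }                          // persist the offline result
]);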

Firebase database structure

I'm just starting to experiment with Firebase. It's a real head bender when you're used to relational databases!
I'm trying to design an app that will allow users to search for meals by barcode or name and retrieve the number of calories. Additionally, I need to be able to store the meals eaten by a user, and finally retrieve the food eaten by a user each day, week or month.
I was thinking each meal would have a unique ID (e.g. M1234 for Pizza), then I'd have 2 lookup sections - one by barcode and one by name, so that should hopefully cover the search functionality.
Each user would have the meals eaten stored in the eaten 'table' (what is the correct term for 'table' in a Firebase database?) by date, just referencing the meal by ID.
This is how I've designed the database.
{
  // Here are the users.
  "users": {
    "mchen": {
      "name": "Mary Chen",
      "email": "mary@chen.com"
    },
    ...
  },

  // Here are the meals eaten by date.
  "eaten": {
    "mchen": {
      // index Mary's meals in her profile: /eaten/mchen/meals/20161217 should return 'M1234' (pizza) and 'M8765' (chips)
      "meals": {
        "20161217": {
          "M1234": true,
          "M8765": true
        },
        "20161218": {
          "M2222": true,
          "M8765": true
        }
      },
      ...
    }
  },

  // Here are the meals with calorie information.
  "meals": {
    "M1234": {
      "name": "Pizza",
      "calories": 400
    },
    "M2222": {
      "name": "Curry",
      "calories": 250
    },
    "M8765": {
      "name": "Chips",
      "calories": 100
    }
  },

  // Here is the barcode lookup
  "barcode-lookup": {
    "12345678": {
      "id": "M1234"
    },
    "87654321": {
      "id": "M2222"
    },
    "11223344": {
      "id": "M8765"
    }
  },

  // Here is the name lookup
  "name-lookup": {
    "Chips": {
      "id": "M8765"
    },
    "Pizza": {
      "id": "M1234"
    },
    "Curry": {
      "id": "M2222"
    }
  }
}
Does it seem reasonable or are there any obvious flaws?
You will want to leverage .childByAutoId() and let Firebase create the parent key names. It's best practice to disassociate your child data from the parent node, and letting Firebase create 'random' keys for the parents will make that work.
Along with that, it's customary to create a /users node where the parent node for each user is the uid that Firebase created when the user was first registered.
In your original structure, there's a barcode and name lookup which I have integrated into the following structure to reduce complexity.
users
  uid_0
    name: "Mary Chen"
    email: "mary@chen.com"
  uid_1
    name: "Larry David"
    email: "ldavid@david.com"
and then the dining
dining
  -Yuiia09skjspo
    dining_timestamp: "20161207113010"
    Y79joa90ksss: true
    Yjs9990kokod: true
    user: uid_0
    uid_timestamp: "uid_0_20161207113010"
  -Yi9sjmsospkos
    dining_timestamp: "20161207173000"
    Y79joa90ksss: true
    Yjs9990kokod: true
    user: uid_1
    uid_timestamp: "uid_1_20161207173000"
and the meals the user can choose from
meal
  -Y79joa90ksss
    name: "Pizza"
    calories: "400"
    barcode: "008481816164"
  -Yjs9990kokod
    name: "Burger"
    calories: "520"
    barcode: "991994411815"
As you can see, the dining node contains a dining event for each user (so all of the dining events are in one node)
This enables you to query for all kinds of things:
All dining for all users by date or range of dates.
All dining that contain a certain meal
All meals by a user
->The cool one<- all dining for a specific user within a date range.
The one omission is a search for dining that contains two meals, however, the solution to that is also in this answer.
All in all, your structure is sound - just needs a little tweaking.
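For the last query in that list (all dining for a specific user within a date range), a minimal sketch with the Realtime Database JS SDK, assuming the uid_timestamp composite key shown above (an .indexOn rule for that child would also be needed):

// Hypothetical sketch: all dining events for uid_0 in December 2016.
firebase.database().ref('dining')
  .orderByChild('uid_timestamp')
  .startAt('uid_0_20161201000000')
  .endAt('uid_0_20161231235959')
  .once('value')
  .then(snapshot => {
    snapshot.forEach(child => {
      console.log(child.key, child.val().dining_timestamp);
    });
  });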
The structure looks fine (though I would let Firebase generate the IDs). The only thing that won't work the way you expect is searching. Based on your data, if I searched for pizza you couldn't write a query that would return the Pizza entry. My suggestion would be either to use Algolia (or something similar) for searching, or to store another key with the name lower-cased so a query can work against it. The only issue with rolling your own is that you won't be able to search for things like izz and have Pizza turn up. See my answer to "Firebase - How can I filter similarly to equalTo() but instead check if it contains the value?" for how to do a search.
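If you do roll your own, the lower-cased key idea could be sketched like this (the nameLower field is an assumption; this gives prefix matching only, not substring search):

// Hypothetical sketch: store nameLower alongside each meal, then query it by prefix.
function searchMealsByPrefix(term) {
  const q = term.toLowerCase();
  return firebase.database().ref('meals')
    .orderByChild('nameLower')
    .startAt(q)
    .endAt(q + '\uf8ff') // '\uf8ff' sorts after any real character, closing the prefix range
    .once('value');
}

// searchMealsByPrefix('piz').then(snap => snap.forEach(c => console.log(c.val().name)));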

How to change fusion table maps markers' size?

I have a table with a location column and "count" column (with values from 1 to 100).
I'd like to map the records with markers that change in size, i.e. the bigger the count value is, the bigger the marker is.
Is that possible in Google Fusion? How would you suggest to do that?
Thanks.
Currently there are only two sizes of icons available, small and large. I put together a little example to show how to use them together with the FusionTablesLayer, a special layer for Google Maps that you can use to query your Google Fusion Tables.
FusionTablesLayer allows you to apply a style to your data (markers, lines or polygons); it boils down to this:
layer = new google.maps.FusionTablesLayer({
  query: {
    select: 'Location',
    from: '3609183'
  },
  styles: [
    {
      where: "Number > 1000",
      markerOptions: {
        iconName: 'large_green'
      }
    },
    {
      where: "Number <= 1000",
      markerOptions: {
        iconName: 'large_red'
      }
    },
    {
      where: "Number <= 100",
      markerOptions: {
        iconName: 'small_purple'
      }
    }
  ]
});
If two sizes are not enough, then maybe you can play around with different colors/icons (there is a list with supported icons). Otherwise you have to retrieve your data and create custom markers with images of different size.
Javram pointed to one approach, but the list of available marker icons in Fusion Tables is limited and, AFAIK, there is no way to vary the icon size. Another approach might be to use the JSONP support provided by Fusion Tables to retrieve your data and create your own markers. This blog post explains how to do it.
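A rough sketch of that custom-marker route with the Maps JavaScript API; the rows array stands in for data you have already fetched from Fusion Tables yourself, and the icon URL is a placeholder:

// Hypothetical sketch: scale marker icons by the "count" column.
// rows: [{ lat, lng, count }] retrieved separately (e.g. via the Fusion Tables JSONP API).
function addScaledMarkers(map, rows) {
  rows.forEach(row => {
    const size = 16 + Math.round((row.count / 100) * 24); // 16-40px for counts 1-100
    new google.maps.Marker({
      map: map,
      position: new google.maps.LatLng(row.lat, row.lng),
      icon: {
        url: 'https://maps.google.com/mapfiles/ms/icons/red-dot.png', // placeholder icon
        scaledSize: new google.maps.Size(size, size),
      },
    });
  });
}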
The answer is here: http://support.google.com/fusiontables/bin/answer.py?hl=en&answer=185991. Basically, you need to add a column to your table whose value is the name of the marker type you want to use for that location.
