This is a design question.
Imagine this: I have two tables.
|user|
------
|id|
|username|
|team_id|

|team|
------
|id|
|name|
So when receiving a POST /users, should I send:
{
"username": "newUser",
"name": "myTeam" /
}
and look up the team id first (or use includes if using an ORM),
or
{
"username": "newUser",
"team_id": 1 // references the "myTeam"
}
and insert it directly, failing if the team_id doesn't exist?
Which one is best, and why?
This is just an example with only one relationship; in practice the user table could have many relationships.
It depends on which data is important and which isn't. If your front end wants to show the name of the team, then send the name of the team; if it just wants to show the id, then just send the id.
In my opinion, you should send the name, because it is much clearer to the user than just the id.
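For example, with the name-based payload the server resolves the team before inserting the user. Here's a rough sketch with Node/Express and Sequelize (a stack the question doesn't specify, so the model names and file paths here are hypothetical):

const express = require('express');
const { Team, User } = require('./models'); // hypothetical Sequelize models

const app = express();
app.use(express.json());

app.post('/users', async (req, res) => {
  const { username, name } = req.body;
  // Resolve the team name to its id before inserting the user.
  const team = await Team.findOne({ where: { name } });
  if (!team) {
    return res.status(400).json({ error: `unknown team: ${name}` });
  }
  const user = await User.create({ username, team_id: team.id });
  return res.status(201).json(user);
});

The id-based payload skips the lookup entirely and relies on the foreign key constraint to reject an unknown team_id, at the cost of the client having to know the id.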
I have an Account object in Salesforce with a custom field called ExternalText. I have marked the field as an External Id and
"Set this field as the unique record identifier from an external system"
There are 2 accounts that have this field set to a value of E1 in Salesforce.
I want to do an upsert from a csv file using DataLoader and the csv looks something like this:
External,Description
E1,Description 1
E1,Description 2
But when I do the upsert I get the error:
ExternalTest: more than one record found for external id field: [<id1>, <id2>]
I would have expected the Description field to be updated first to Description 1 and then to Description 2, so that if I view the object in Salesforce the Description field says Description 2.
How can I do this?
You can't do it like that. Upsert has to find 0 or exactly 1 record with that external id. On 0 it'll try to create, on 1 it'll try to update; anything else is an error.
For most normal usages you'll want fields marked as external id to also be marked unique. If the value isn't unique at the source, you need a different value in your field, or bite the bullet, learn Salesforce record IDs, and do a plain old query + update, for example.
There's one edge case that explains why marking a field as external id doesn't automatically mark it unique, but if you rely on that technicality I'd say you have bigger problems. Imagine a system where both the UK and Germany created customer ID 123 and want to push it to Salesforce. Both claim they were first and absolutely won't change their unique ID. The trick is that you can pull it off with the right sharing rules: an upsert done by a user that only sees UK data will work and update only the UK customer. As I said, it's a technicality, in the "you think you're clever but you just made the admin's job trickier" area.
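If you go the query + update route, here's a rough sketch using the jsforce Node library (the field API name and the credentials are assumptions, not from the question):

const jsforce = require('jsforce');

(async () => {
  const conn = new jsforce.Connection();
  await conn.login(process.env.SF_USER, process.env.SF_PASS);

  // Find every Account that shares the external id value...
  const result = await conn.query(
    "SELECT Id FROM Account WHERE ExternalText__c = 'E1'"
  );

  // ...then update each one by its real Salesforce record Id.
  for (const record of result.records) {
    await conn.sobject('Account').update({
      Id: record.Id,
      Description: 'Description 2',
    });
  }
})();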
I am trying to build a schema for a chat application in MongoDB. I have two types of user models - Producer and Consumer. A Producer and a Consumer can have conversations with each other. My ultimate goal is to fetch all the conversations for any producer or consumer and show them in a list, just like all the messaging apps (e.g. Facebook) do.
Here is the schema I have come up with:
Producer: {
  _id: 123,
  name: "Sam"
}

Consumer: {
  _id: 456,
  name: "Mark"
}

Conversation: {
  _id: 321,
  producerId: 123,
  consumerId: 456,
  lastMessageId: 1111,
  lastMessageDate: "7/7/2018"
}

Message: {
  _id: 1111,
  conversationId: 321,
  body: "Hi"
}
Now I want to fetch, say, all the conversations of Sam and show them in a list just like Facebook does, grouping them by Consumer and sorting by time.
I think I need to do the following queries for this:
1) Get all Conversations where producerId is 123 sorted by lastMessageDate.
I can then show the list of all Conversations.
2) If I want to know all the messages in a conversation, I query Message and get all messages where conversationId is 321.
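In mongo-shell terms (with collection names assumed to be lowercase plurals of the models above), those two queries would look roughly like:

// 1) All of Sam's conversations, newest activity first
db.conversations.find({ producerId: 123 }).sort({ lastMessageDate: -1 })
// 2) All messages of one conversation
db.messages.find({ conversationId: 321 })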
Now, for each new message I also need to update the conversation with the new messageId and date every time. Is this the right way to proceed, and is it optimal considering the number of queries involved? Is there a better way to proceed? Any help would be highly appreciated.
Design:
I wouldn't say it's bad. For the case you've described, it's actually pretty good. Such denormalization of the last message date and ID is great, especially if you plan a view with a list of all conversations - you get the last message date in the same query. Maybe go one step further and add the last message text, if it's applicable in that view.
You can read more on pros and cons of denormalization (and schema modeling in general) on the MongoDB blog (parts 1, 2 and 3). It's not that fresh but not outdated.
Also, if such multi-document updates scare you with possible inconsistencies, MongoDB v4 has you covered with transactions.
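For example, a minimal sketch of the "insert message + bump conversation" pair inside a transaction, using the Node.js driver (client is assumed to be a connected MongoClient; collection names assumed from your schema):

const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // Insert the new message...
    const { insertedId } = await db.collection('messages')
      .insertOne({ conversationId: 321, body: 'Hi' }, { session });
    // ...and update the denormalized fields on the conversation atomically.
    await db.collection('conversations').updateOne(
      { _id: 321 },
      { $set: { lastMessageId: insertedId, lastMessageDate: new Date() } },
      { session }
    );
  });
} finally {
  session.endSession();
}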
Querying:
On one hand, you can involve multiple queries, and that's not bad at all (especially when a few of them are easily cacheable, like the producer or consumer data). On the other hand, you can use aggregations to fetch all these things at once if needed.
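For example, a single aggregation can pull the conversation list together with each consumer and last message via $lookup (a sketch; collection names assumed):

db.conversations.aggregate([
  { $match: { producerId: 123 } },
  { $sort: { lastMessageDate: -1 } },
  // Join the consumer's data for display in the list
  { $lookup: { from: 'consumers', localField: 'consumerId', foreignField: '_id', as: 'consumer' } },
  // Join the last message so its text can be shown too
  { $lookup: { from: 'messages', localField: 'lastMessageId', foreignField: '_id', as: 'lastMessage' } }
])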
Recently I moved my data model from Firebase to Firestore. All my code is working, but I'm having some ugly trouble with the nested queries needed to retrieve some data. Here is the point:
Right now my data model for this part looks like this (yes, another followers/feed example):
{
"Users": { //Collection
"UserId1" : { //Document
"Feed" : { //Subcollection of Id of posts from users this user Follow
"PostId1" : { //Document
"timeStamp" : "SomeDate"
},
"PostId2" : {
"timeStamp" : "SomeDate"
},
"PostId3" : {
"timeStamp" : "SomeDate"
}
}
//Some data
}
},
"Posts":{ //Collection
"PostId1":{ //Document
"Comments" :{ //Subcollection
"commentId" : { //Document
"authorId": "UserId1"
//comentsData
}
},
"Likes" : { //Subcollection
"UserId1" : { //Document
"liked" : true
}
}
}
}
}
My problem is that to retrieve the posts of a user's feed, I have to query in the following way:
Get the last X documents ordered by timeStamp from my Feed:
feedCol(userId).orderBy(CREATION_DATE, Query.Direction.DESCENDING).limit(limit)
After that I have to do a single query for each post retrieved from the list: workoutPostCol.document(postId)
Now I have the data of each post, but I want to show the username, picture, points, etc. of the author, which live in a different Document, so again I have to do another single query for each authorId retrieved from the list of posts: userSocial(userId).document(toId)
Finally, and no less important, I need to know if my current user has already liked each post, so I have to do a single query per post (again) and check if my userId is inside posts/likes/{userId}.
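That per-post check looks roughly like this (postsCol is assumed to point at the Posts collection, in the same style as the snippets above):

postsCol.document(postId)
    .collection("Likes")
    .document(currentUserId)
    .get() // snapshot.exists() tells me whether this user liked the post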
Right now everything is working, but given that Firestore's price depends on the number of database calls, and that this doesn't make my queries any simpler, I don't know whether my data model is just not good for this kind of database and I should move to normal SQL, or just go back to Firebase again.
Note: I know that EVERYTHING would be a lot easier if I moved these subcollections of likes, feed, etc. to array lists inside my user or post documents, but the limit of a Document is 1 MB, and if these grow too much, it will break in the future. On the other hand, Firestore doesn't allow subdocument queries (yet) or an OR clause using multiple whereEqualTo.
I have read a lot of posts from users who have trouble finding a simple way to store this kind of ID relationship to make joins and queries over their Collections; using array lists would be awesome, but the 1 MB limit rules them out.
Hope that someone will be able to clarify this, or at least teach me something new; maybe my model is just crap and there is a simpler, easier way to do this? Or maybe my model is just not possible in a non-SQL database.
Not 100% sure if this solves the problem entirely, since there may be edge cases for your usage, but after five minutes of quick thinking, I feel like the following could solve your problem:
You can consider using a model similar to Instagram's. If my memory serves me well, what they use is an events-based collection. By events in this specific context I mean all the actions a user takes. So a comment is an event, a like is an event, etc.
This would make it so that you'll need three main collections in total.
users
-- userID1
---- userdata (profile pic, bio etc.)
---- postsByUser : [postID1, postID2]
---- followedBy : [userID2, ... ]
---- following : [userID2, ... ]
-- userID2
---- userdata (profile pic, bio etc.)
posts
-- postID1 (timestamp, so it's sortable)
---- contents
---- author : userID1
---- authorPic : authorPicUrl
---- authorPoints : 12345
---- taggedUsers : []
---- comments
------ comment1 : { copy of comment event }
---- likes : [userID1, userID2]
-- postID2 (timestamp)
---- contents
...
events
-- eventID1
---- type : comment
---- timestamp
---- byWhom : userID
---- toWhichPost : postID
---- contents : comment-text
-- eventID2
---- type : like
---- timestamp
---- byWhom : userID
---- toWhichPost : postID
For your user-bio page, you would query users.
For the news feed you would query posts for all posts by the userIDs your user is following in the last day (or any given timespan); see the sketch after this list.
For the activity feed page (comments/likes etc.) you would query events that are relevant to your userID, limited to the last day (or any given timespan).
Finally, query further days back for posts/events as the user scrolls (or if there's no new activity in the days already shown).
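A rough sketch of that news-feed query with the Firestore web SDK (assuming the model above; note Firestore's in operator accepts at most 10 values per query, so a long following list has to be chunked, and the filter + order combination needs a composite index):

const db = firebase.firestore();
db.collection('posts')
  .where('author', 'in', followingIds.slice(0, 10)) // first chunk of followed userIDs
  .orderBy('timestamp', 'desc')
  .limit(20)
  .get()
  .then(snapshot => snapshot.docs.forEach(doc => render(doc.data()))); // render is hypothetical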
Again, this is merely a quick thought; I know the elders of SOF have a habit of crucifying these usually, so forgive me, fellow members of SOF, if this answer has flaws :)
Hope it helps, Francisco.
Good luck!
I'm building an application using LoopBack as the backend and AngularJS as the frontend, with MySQL as the database of choice.
The LoopBack version is 2.22.0; the LoopBack Angular SDK version is 1.5.0.
There are models Person and Post. Both have "id" fields auto-generated by LoopBack (i.e. "idInjection": true).
They are related as Person hasMany Post and Post belongsTo Person, linked by a foreign key on the personId column of the Post model.
Suppose there are already some records in both tables.
I generated lbServices.js file by using lb-ng command.
So now when I try to use the function
Person.posts.create({
content: "Some content",
id: $rootScope.currentUser.id
})
it gives me a duplicate entry error.
I investigated this and found out that it's because the REST API URL "/People/:id/posts" in the lbServices.js file has an id parameter, and the Post model also has an id column, which is its primary key.
So the id is passed into both of them and the call fails; an ambiguity is formed.
For this example, $rootScope.currentUser.id=1 and there already exists a row in the Post table with id=1.
Now, when I change the Post model's property to "idInjection": false and create a custom auto-incremented primary key column "uid",
I'm able to insert with
Person.posts.create({
content: "Some content",
id: $rootScope.currentUser.id
})
So I want to know: am I inserting into a related model in the correct way, or is this an issue with LoopBack? Or is there a better way to insert from the AngularJS frontend?
I really want to avoid changing the primary key column names of every model to something other than "id".
Please help.
I figured out what I was doing wrong.
The correct way to insert should be:
Person.posts.create(
{id: $rootScope.currentUser.id},
{
content: "Some content",
title: "Some title"
})
As the id field is an autogenerated number, your call should be:
Person.posts.create({
content: "Some content",
personId: $rootScope.currentUser.id
})
I would like to store some information as follows (note, I'm not wedded to this data structure at all, but this shows you the underlying information I want to store):
{ user_id: 12345, page_id: 2, country: 'DE' }
In these records, user_id is a unique field, but the page_id is not.
I would like to translate this into a Redis data structure, and I would like to be able to run efficient searches as follows:
For user_id 12345, find the related country.
For page_id 2, find all related user_ids and their countries.
Is it actually possible to do this in Redis? If so, what data structures should I use, and how should I avoid the possibility of duplicating records when I insert them?
It sounds like you need two key types: a HASH key to store your user's data, and a LIST for each page that contains a list of related users. Below is an example of how this could work.
Load Data:
> RPUSH page:2:users 12345
> HMSET user:12345 country DE key2 value2
Pull Data:
# All users for page 2
> LRANGE page:2:users 0 -1
# All users for page 2 and their countries
> SORT page:2:users BY nosort GET # GET user:*->country GET user:*->key2
Remove User From Page:
> LREM page:2:users 0 12345
Repeat GETs in the SORT to retrieve additional values for the user.
I hope this helps; let me know if there's anything you'd like clarified or if you need further assistance. I also recommend reading the command list and documentation available at the Redis web site, especially concerning the SORT operation.
Since user_id is unique and maps to a single country, keep them in a simple key-value pair; querying for a user is O(1) in such a case. Then keep some Redis sets, with the page_id as key and all the user_ids as members.
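A sketch of that layout in the same command style as the answer above (key names assumed):

> SET user:12345:country DE
> SADD page:2:users 12345
# Country for user 12345
> GET user:12345:country
# All user_ids for page 2 (fetch each one's country with GET, or use SORT ... GET as shown above)
> SMEMBERS page:2:users

Since SET overwrites and SADD ignores members already in the set, re-inserting the same record can't create duplicates, which also answers the duplication concern in the question.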