I need to find a document in MongoDB using its ID. This operation needs to be as fast as possible, so I want to fetch exactly the document that has the given ID. Is there any way to do that? I am a beginner here, so I would be very thankful if you could give an in-depth answer.
Okay, so you really are a beginner.
The first thing you should know is that getting any kind of record from a database is done by querying the database, and this is called a search.
It simply means that when you want any data from your database, the database engine has to search for it using the query you provided.
So whenever you ask the database (using a query) to give you some records, it will perform a search based on the conditions you provided. It doesn't matter whether
you provide a condition with a single unique key or a complex combination of columns or joins across multiple tables, or whether
your database contains no records or billions of records:
it still has to search the database.
As far as I know, this holds true for pretty much every database.
Now coming to MongoDB.
Referring to the explanation above, the MongoDB engine queries the database to get a result.
Now the main question is: how do you get that result fast?
And I think that's what your main concern should be.
So query speed (search speed) mainly depends on two things:
Query.
Number of records in your database.
1. Query
The factors affecting it are:
a. Nature of the parameters used in a query (indexed or unindexed)
If you use indexed parameters in your query, the search operation will always be faster for the database.
For example, the _id field is indexed by default in MongoDB, so searching for a document in a collection by _id alone will always be a fast search.
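You can see the difference in the mongo shell with explain() (the collection and field names here are hypothetical, just for illustration):
// _id is indexed by default, so this query is served by an index scan (IXSCAN)
db.users.find({ _id: ObjectId("89e6dd2eb4494ed008d595bd") }).explain("executionStats")
// an unindexed field forces a full collection scan (COLLSCAN)
db.users.find({ nickname: "john" }).explain("executionStats")
// creating an index turns the second query into an index scan as well
db.users.createIndex({ nickname: 1 })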
b. Combination of parameters with operators
This refers to the number of parameters used in the query (the more parameters, the slower the search) and to the kind of query operators you use (simple query operators return results faster than aggregation operators with pipelines).
c. Read Preferences
Read preference describes how MongoDB routes read operations to the members of a replica set. It effectively describes your preference regarding how much confidence you need in the data you are getting.
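For example, a read can be routed to a secondary member (possibly returning slightly stale data) straight from the shell; this is just a minimal illustration with a hypothetical collection:
// prefer a secondary for this read if one is available
db.users.find({ city: "NYC" }).readPref("secondaryPreferred")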
Those are the two main factors, but there are many other things that matter, such as:
The schema of your collection,
Your understanding of the schema (specifically the data types of the documents),
Your understanding of the query operators you use, for example when to use the $or and $and operators and when to use the $in and $nin operators (see the sketch below).
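To illustrate that last point (collection name hypothetical): when testing one field against several values, $in is generally preferred over the equivalent $or:
// preferred: a single-field membership test
db.users.find({ status: { $in: ["A", "P"] } })
// equivalent, but more verbose
db.users.find({ $or: [{ status: "A" }, { status: "P" }] })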
2. Number of records in your database.
This matters when you have an enormous amount of data: with a single database server, more records mean slower queries.
In such cases, sharding (clustering) your data across multiple database servers will give you faster search performance.
MongoDB has the mongos component, which routes your query to the right database server in the cluster. To perform this routing it uses config servers, which store metadata about your collections in the form of indexes and the shard key.
Hence, in a sharded environment, choosing a proper shard key plays an important role in getting fast query responses.
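As a minimal sketch (database, collection, and field names are hypothetical), sharding a collection on a hashed key looks like this in the mongo shell:
// enable sharding for the database, then shard the collection on a hashed key
sh.enableSharding("mydb")
sh.shardCollection("mydb.users", { userId: "hashed" })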
I hope this gives you a decent idea of how a search is actually affected by various parameters.
I will improve this answer over time.
It's pretty straightforward; you can try the following:
var id = "89e6dd2eb4494ed008d595bd";
Model.findById(id, function (err, user) { ... } );
with mongoose:
router.get("/:id", (req, res) => {
if (!mongoose.Types.ObjectId.isValid(req.params.id)) { // checking if the id is valid
return res.send("Please provide valid id");
}
var id = mongoose.Types.ObjectId(req.params.id);
Item.findById({ _id: id })
.then(item=> {
res.json(item);
})
.catch(err => res.status(404).json({ success: false }));
});
Related
How is aggregation achieved with DynamoDB? MongoDB and Couchbase have map-reduce support.
Let's say we are building a tech blog where users can post articles, and articles can be tagged.
user
{
    id : 1235,
    name : "John",
    ...
}
article
{
    id : 789,
    title : "dynamodb use cases",
    author : 1235, // userId
    tags : ["dynamodb", "aws", "nosql", "document database"]
}
In the user interface we want to show, for the current user, the tags and their respective counts.
How can the following aggregation be achieved?
{
    userid : 1235,
    tag_stats : {
        "dynamodb" : 3,
        "nosql" : 8
    }
}
We will provide this data through a REST API that will be called frequently, since this information is shown on the app's main page.
I can think of extracting all the documents and doing the aggregation at the application level, but I feel my read capacity units would be exhausted.
We could use tools like EMR, Redshift, BigQuery, or AWS Lambda, but I think those are meant for data-warehousing purposes.
I would like to know other, better ways of achieving the same thing.
How are people achieving dynamic, simple queries like these when they have chosen DynamoDB as their primary data store, considering cost and response time?
Long story short: DynamoDB does not support this. It's not built for this use case. It's intended for quick data access with low latency, and it simply does not support any aggregation functionality.
You have three main options:
Export DynamoDB data to Redshift or EMR Hive. Then you can execute SQL queries on stale data. The benefit of this approach is that it consumes RCUs just once, but you will be stuck with outdated data.
Use the DynamoDB connector for Hive and query DynamoDB directly. Again you can write arbitrary SQL queries, but in this case the queries access the data in DynamoDB directly. The downside is that they consume read capacity on every query you run.
Maintain aggregated data in a separate table using DynamoDB Streams. For example, you can have a table with UserId as the partition key and a nested map with tags and counts as an attribute. On every update to your original data, DynamoDB Streams will execute a Lambda function (or some code on your hosts) to update the aggregate table. This is the most cost-efficient method, but you will need to implement additional code for each new query.
Of course you can extract data at the application level and aggregate it there, but I would not recommend doing that. Unless you have a small table, you will need to think about throttling, using only part of your provisioned capacity (you want aggregation to consume, say, 20% of your RCUs, not 100%), and how to distribute the work among multiple workers.
Both Redshift and Hive already know how to do this. Redshift relies on multiple worker nodes when executing a query, while Hive is built on top of MapReduce. Also, both Redshift and Hive can use a predefined percentage of your RCU throughput.
DynamoDB is a pure key/value store and does not support aggregation out of the box.
If you really want to do aggregation using DynamoDB, here are some hints.
For your particular case, let's have a table named articles.
To do the aggregation we need an extra table, user-stats, holding userId and tag_stats.
Enable DynamoDB Streams on the articles table.
Create a new Lambda function, user-stats-aggregate, which is subscribed to the articles DynamoDB stream and receives NEW_AND_OLD_IMAGES on every create/update/delete operation on the articles table.
The Lambda performs the following logic (a sketch follows below):
If there is no old image, take the current tags and increment each one's count by 1 for this user. (Keep in mind there may be no initial record in user-stats for this user yet.)
If there is an old image, check whether each tag was added or removed and apply a +1 or -1 change accordingly for each affected tag of the given user.
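A minimal sketch of such a Lambda handler; all names here (user-stats, author, tags, cnt) are hypothetical, the stream is assumed to be configured with NEW_AND_OLD_IMAGES, and tags are assumed to be stored as a string set. Since DynamoDB's ADD action works only on top-level attributes, this sketch keeps one item per (userId, tag) pair rather than a nested map:
const AWS = require("aws-sdk");
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const oldImage = record.dynamodb.OldImage;
    const newImage = record.dynamodb.NewImage;
    const oldTags = oldImage && oldImage.tags ? oldImage.tags.SS : [];
    const newTags = newImage && newImage.tags ? newImage.tags.SS : [];
    const userId = Number((newImage || oldImage).author.N);

    // +1 for tags that appear only in the new image, -1 for removed ones
    const deltas = {};
    for (const t of newTags) if (!oldTags.includes(t)) deltas[t] = 1;
    for (const t of oldTags) if (!newTags.includes(t)) deltas[t] = -1;

    for (const [tag, delta] of Object.entries(deltas)) {
      // ADD atomically creates the counter if it does not exist yet
      await dynamo.update({
        TableName: "user-stats",
        Key: { userId: userId, tag: tag },
        UpdateExpression: "ADD cnt :d",
        ExpressionAttributeValues: { ":d": delta },
      }).promise();
    }
  }
};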
Stand up an API service that retrieves these user stats.
Usually, aggregation in DynamoDB is done using DynamoDB Streams, Lambdas that perform the aggregation, and extra tables keeping the aggregated results at different granularities (minutes, hours, days, years, ...).
This gives you near-realtime aggregation without having to compute it on the fly for every request; you query the already-aggregated data.
Basic aggregation can also be done using scan() and query() in a Lambda.
I am thinking about a smart workaround for the "no unique constraint" problem in Elasticsearch.
I can't use _id to store my unique field, because I am using _id for another purpose.
I crawl Internet pages and store them in an Elasticsearch index. My rule is that the url must be unique (only one document with a given url in the index). Since Elasticsearch doesn't allow setting a unique constraint on a field, I must query the index before inserting a new page to check whether there is already a page with the given url.
So adding a new page to the index looks like this:
Query (match) the ES index to check whether there is a document with the given url field.
If not, insert the new document.
This solution has two disadvantages:
I must execute an extra query to check whether a document with the given url already exists. It slows down the inserting process and generates extra load.
If I try to add two documents with the same url within a short span of time, and the index doesn't refresh before the second document is added, the check for the second document reports that no document with the given url exists, and I end up with two documents with the same url.
So I am looking for something else. Please tell me if you have any ideas, or tell me what you think about these solutions:
Solution 1
Use another database system (or maybe another ES index with the url in _id) where I store only urls, and query it to check whether the url already exists. A sketch of the ES-index variant follows below.
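For example (a sketch with the legacy elasticsearch JavaScript client; the index and type names are just placeholders), a hash of the url could be the _id, and the create operation fails with a 409 conflict if that id already exists:
var elasticsearch = require("elasticsearch");
var crypto = require("crypto");
var client = new elasticsearch.Client({ host: "localhost:9200" });

var url = "http://example.com/page";
// use a hash of the url as the _id in a dedicated "urls" index
var id = crypto.createHash("sha1").update(url).digest("hex");

client.create({ index: "urls", type: "url", id: id, body: { url: url } })
  .then(function () { /* first time this url is seen: safe to index the page */ })
  .catch(function (err) {
    if (err.status === 409) { /* duplicate url: skip it */ }
  });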
Solution 2
Queue documents before inserting, and disable index refreshing while another process works through the queue and adds the queued documents to the index.
You've hit upon one of the things that Elasticsearch does not do well (secondary indexes and constraints) compared to some other NoSQL solutions. In addition to Solution 1 and Solution 2, I'd suggest you look at Elasticsearch rivers:
Rivers
A river is a pluggable service running within elasticsearch cluster
pulling data (or being pushed with data) that is then indexed into the
cluster.
For example, you could use the MongoDB river and insert your data into MongoDB first. MongoDB supports secondary unique indexes, so you could prevent the insertion of duplicate urls. The river will then take care of pushing the data to Elasticsearch in real time.
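As a concrete illustration (collection name hypothetical), a unique secondary index in MongoDB makes the database itself reject duplicates:
// a second insert with the same url now fails with a duplicate-key error
db.pages.createIndex({ url: 1 }, { unique: true })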
https://github.com/richardwilly98/elasticsearch-river-mongodb
ES supports the CouchDB river officially, and there are a number of other databases that have rivers too.
I'm currently speccing out a project that stores threaded comment trees.
For those of you unfamiliar with what I'm talking about: every comment has a parent comment, rather than just belonging to a thread. Currently I'm working on a relational SQL Server model for storing this data, simply because it's what I'm used to. It looks like this:
Id int --PK
ThreadId int --FK
UserId int --FK
ParentCommentId int --FK (relates back to Id)
Comment nvarchar(max)
Time datetime
What I do is select all of the comments by ThreadId, then in code, recursively build out my object tree. I'm also doing a join to get things like the User's name.
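For reference, a minimal version of that query (table names are placeholders; the columns follow the schema above) might look like:
SELECT c.Id, c.ParentCommentId, c.Comment, c.Time, u.Name AS UserName
FROM Comments c
JOIN Users u ON u.Id = c.UserId          -- join to get the user's name
WHERE c.ThreadId = @ThreadId
ORDER BY c.Time;                         -- the tree is rebuilt from ParentCommentId in code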
It just seems to me that maybe a document store like MongoDB, which is NoSQL, would be a better choice for this sort of model. But I don't know anything about it.
What would be the pitfalls if I do choose MongoDB?
If I'm storing it as a Document in MongoDB, would I have to include the User's name on each comment to prevent myself from having to pull up each user record by key, since it's not "relational"?
Do you have to aggressively cache "related" data on the objects you need them on when you're using MongoDB?
EDIT: I did find this article about storing trees of information in MongoDB. Given that one of my requirements is the ability to show a logged-in user a list of his recent comments, I'm now strongly leaning towards just using SQL Server, because I don't think I'll be able to do anything clever with MongoDB that would result in real performance benefits. But I could be wrong. I'm really hoping an expert (or two) on the matter will chime in with more information.
The main advantage of storing hierarchical data in Mongo (and other document databases) is the ability to store multiple copies of the data in ways that make queries more efficient for different use cases. In your case, it would be extremely fast to retrieve the whole thread if it were stored as a hierarchical nested document, but you'd probably also want to store each comment un-nested or possibly in an array under the user's record to satisfy your 2nd requirement. Because of the arbitrary nesting, I don't think that Mongo would be able to effectively index your hierarchy by user ID.
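For illustration (the field names are made up), a thread stored as one nested document might look like this; the whole tree then comes back in a single read by _id:
{
    _id : 555,
    title : "Threaded comments example",
    comments : [
        {
            userId : 1, userName : "John", text : "Nice post!",
            replies : [
                { userId : 2, userName : "Jane", text : "Agreed.", replies : [] }
            ]
        }
    ]
}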
As with all NoSQL stores, you get more benefit by being able to scale out to lots of data nodes, allowing for many simultaneous readers and writers.
Hope that helps
I am creating a new database, which I am designing basically for logging/history purposes. I'll have around 8-10 tables in this database, which will keep the data I'll retrieve to show history information to the user.
I am creating the database in SQL Server 2005, and I can see that there is a "Use full-text indexing" checkbox. I am not sure whether to check it or leave it unchecked. As I am not very familiar with databases, please advise: by checking it, will it increase the retrieval performance of my database?
I think that is the checkbox for FULLTEXT indexing.
You turn it on only if you plan to run natural-language queries or a lot of text-based queries.
See here for a description of what it is used to support:
http://msdn.microsoft.com/en-us/library/ms142571.aspx
From that base link, you can follow through to http://msdn.microsoft.com/en-us/library/ms142547.aspx (amongst others). This quote in particular is interesting:
Comparison of LIKE to Full-Text Search
In contrast to full-text search, the LIKE Transact-SQL predicate works
on character patterns only. Also, you cannot use the LIKE predicate to
query formatted binary data. Furthermore, a LIKE query against a large
amount of unstructured text data is much slower than an equivalent
full-text query against the same data. A LIKE query against millions
of rows of text data can take minutes to return; whereas a full-text
query can take only seconds or less against the same data, depending
on the number of rows that are returned.
There is a cost for this, of course, which lies in storing the patterns and relationships between words in the same record. It is really useful if you are storing articles, for example, where you want to enable searching by "contains a, b and c". A LIKE pattern would be complicated and extremely slow to process, like '%A%B%C%' OR '%B%A%C%' OR ... and all the permutations for the order of appearance of A, B and C.
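For example (table and column names are hypothetical, and a full-text index is assumed to exist on the Body column):
-- full-text version: order-independent and served by the full-text index
SELECT Id, Title
FROM Articles
WHERE CONTAINS(Body, '"apple" AND "banana" AND "cherry"');

-- LIKE version: also order-independent when written with ANDs,
-- but it forces a scan of every row
SELECT Id, Title
FROM Articles
WHERE Body LIKE '%apple%' AND Body LIKE '%banana%' AND Body LIKE '%cherry%';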
I am trying to visualize how to create a search for an application that we are building. I would like a suggestion on how to approach searching through large sets of data.
For instance, this particular search would run against a table of at least 750k records, containing product SKUs, sizing, material type, create date, etc.
Is anyone aware of a 'plugin' solution for ColdFusion to do this? I envision a Google-like single-entry search where a customer can type in the part number, or the sizing, etc., and get hits on any or all relevant results.
Currently, if I run a LIKE comparison query, it seems to take ages (OK, a few seconds, but still), and that is too long, at times making a user sit there and wait up to 10 seconds for queries and page loads.
Or are there any SQL techniques to help accomplish this? I want to use a proven method to search the data, not just a simple SQL LIKE or = comparison operation.
So this is a multi-approach question: should I attack this at the SQL level (as it ultimately looks like I should), or is there a plugin/module for ColdFusion that I can grab that will give me speedy, advanced search capability?
You could try indexing your DB records with a Verity (or Solr, if you're on CF9) search.
I'm not sure it would be faster, and whether it's even worth trying depends a lot on how often you update the records you need to search. If you update them rarely, you could run a Verity index update whenever you do. If you update the records constantly, that's going to be a drag on the webserver and will certainly mitigate any possible gains in search speed.
I've never indexed a database via Verity, but I've indexed large collections of PDFs, Word docs, etc., and I recall the search being pretty fast. I don't know if it will help your current situation, but it might be worth further research.
If your slowdown is specifically in the search of textual fields (as I surmise from your mention of LIKE), the best solution is building an index table (not to be confused with DB table indexes, which are also part of the answer).
Build an index table mapping the unique ID of the records in your main table to a set of words (one word per row) from the textual field. If it matters, add the field of origin as a third column in the index table, and if you want relevance features you may want to track word counts as well.
Populate the index table either with a trigger (using splitting) or from your app; the latter might be better: simply call a stored procedure with both the actual data to insert/update and the list of words already split up.
This will immediately and drastically speed up textual search, as it will no longer do LIKE, and it will be able to use the indexes on the index table (no pun intended) without interfering with indexing on SKU and the like on the main table. A sketch follows below.
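A minimal sketch of such an index table (all names are hypothetical, assuming one row per distinct product/word pair):
-- one row per distinct (product, word) pair extracted from the textual fields
CREATE TABLE ProductWordIndex (
    ProductId   int NOT NULL,            -- FK to the main product table
    Word        nvarchar(50) NOT NULL,
    SourceField varchar(30) NULL         -- optional: which column the word came from
);
CREATE INDEX IX_ProductWordIndex_Word ON ProductWordIndex (Word, ProductId);

-- search: exact, indexed word matches instead of LIKE scans;
-- HAVING enforces that all search words are present
SELECT w.ProductId
FROM ProductWordIndex w
WHERE w.Word IN ('widget', 'blue')
GROUP BY w.ProductId
HAVING COUNT(DISTINCT w.Word) = 2;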
Also, ensure that all the relevant fields are fully indexed, not necessarily in the same compound index (SKU, sizing, etc.). Any field that is searched as a range (sizing or date) is a good candidate for a clustered index, as long as the records are inserted in the approximate order of that field's increase, or you don't care as much about insert/update speed.
For anything more detailed, you will need to post your table structure, existing indexes, the queries that are slow, and the query plans you currently have for those slow queries.
Another item is to ensure that as few of the fields as possible are textual, especially ones that are "decodable". Your comment mentioned "is it boxed" in the set of text fields; if so, I assume the values are "yes"/"no" or some other very limited data set. In that case, simply store a numeric code for the valid values, do the encoding/decoding in your app, and search by the numeric code. Not a tremendous speed improvement, but still an improvement.
I've done this using SQL Server's full-text indexes. It requires very few application changes and no changes to the database schema except for the addition of the full-text index.
First, add the full-text index to the table. Include in the full-text index all of the columns the search should run against. I'd also recommend having the index auto-update; this shouldn't be a problem unless your SQL Server is already heavily taxed.
Second, to do the actual search, you need to convert your query to use a full-text search. The first step is to convert the search string into a full-text search string. I do this by splitting the search string into words (using the Split method) and then building a search string formatted as:
"Word1*" AND "Word2*" AND "Word3*"
The double quotes are critical; they tell the full-text engine where the words begin and end.
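For illustration, the same transformation as a JavaScript sketch (the original presumably used .NET's Split):
// "red widget" -> '"red*" AND "widget*"'
function toFullTextQuery(search) {
  return search
    .split(/\s+/)
    .filter(function (w) { return w.length > 0; })
    .map(function (w) { return '"' + w + '*"'; })
    .join(" AND ");
}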
Next, to actually execute the full-text search, use the CONTAINSTABLE function in your query:
SELECT *
FROM CONTAINSTABLE(Bugs, *, '"Word1*" AND "Word2*" AND "Word3*"')
This will return two columns:
KEY - the column identified as the unique key of the full-text index
RANK - a relative rank of the match (1-1000, with a higher rank meaning a better match).
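In practice you usually join the result back to the source table on the KEY column to get the full rows, for example (assuming Id is the table's full-text key column):
SELECT b.*, ft.[RANK]
FROM CONTAINSTABLE(Bugs, *, '"Word1*" AND "Word2*" AND "Word3*"') AS ft
JOIN Bugs b ON b.Id = ft.[KEY]
ORDER BY ft.[RANK] DESC;  -- best matches first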
I've used approaches similar to this many times and I've had good luck with it.
If you want a truly plug-in solution, then you should just go with Google itself. It sounds like you're doing some kind of e-commerce or commercial site (given the use of the term 'SKU'), so you probably have a catalog of some kind with product pages. If you have consistent markup, then you can configure a Google appliance or service to do exactly what you want. It will send a bot in to index your pages and find your fields. No SQL, little coding; it will not be dependent on your database, or even ColdFusion. It will also be quite fast and familiar to customers.
I was able to do this with a ColdFusion site in about 6 hours, done! The only thing to watch out for is that Google's index is limited to what the bot can see, so if you have a situation where you want to limit access based on a user's role, permissions, or group, then it may not be the solution for you (although you can configure a permission service for Google to check with).
Because SQL Server is where your data is, that is where your search performance issue is likely to be. Make sure you have indexes on the columns you are searching on. If you use LIKE, an index can't be used for a query like SELECT * FROM TABLEX WHERE last_name LIKE '%FR%',
but it can use an index if you write it like SELECT * FROM TABLEX WHERE last_name LIKE 'FR%'. The key is to keep wildcards away from the first characters of the pattern.
Here is a link to a site with some general tips. https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=173