Currently, I am working on a personal project: I want to build an online test.
I'm using Firestore (NoSQL) to store Tests and Questions.
This is my current schema:
{
  "id": "Test ID",
  "name": "Test Name",
  "number_of_question": 20, // Number of questions to fetch from question_bank
  "question_bank": [
    {
      "id": "Question ID",
      "name": "Question Name 1 ?",
      "answer": ["A", "B", "C", "D"],
      "correct_answer": ["A", "B"]
    },
    {
      "id": "Question ID 2",
      "name": "Question Name 2 ?",
      "answer": ["A", "B", "C", "D"],
      "correct_answer": ["A"]
    },
    ...
  ]
}
In the future there is a possibility that question_bank becomes very large (1,000 questions).
Is there a way, or a better schema, to tell NoSQL to fetch a random subset of questions from question_bank, limited to number_of_question?
(I really want to hit the database only once for this action.)
Firestore always returns the whole document, so you cannot fetch just a few items from that array. question_bank can instead be a sub-collection in which each question is its own document. Then you can specify the number of documents to query from that sub-collection:
const snap = await db.collection(`quizzes/${quizId}/questions`).limit(20).get()
// can add more query clauses if required
If you want to fetch random documents from that sub-collection, check out:
Firestore: How to get random documents in a collection
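As a rough illustration of one technique from that link (not the only one), you could store a random number on each question document when it is written, and then start the query at a freshly generated value. The field name random and the helper below are assumptions, and the collection path reuses the one from the snippet above:

const getRandomQuestions = async (quizId, count) => {
  const questions = db.collection(`quizzes/${quizId}/questions`)
  const r = Math.random() // each question doc is assumed to store its own 'random' value

  // Take up to `count` questions whose stored random value is >= r ...
  const snap = await questions.orderBy('random').startAt(r).limit(count).get()
  if (snap.size === count) return snap.docs

  // ... and wrap around to the start of the range if we ran off the end.
  const rest = await questions
    .orderBy('random')
    .endBefore(r)
    .limit(count - snap.size)
    .get()
  return [...snap.docs, ...rest.docs]
}

Note that the wrap-around case costs a second read, so in the worst case this is two round trips rather than one.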
It sounds like you'll want to use a subcollection for the question_bank of each test. With a subcollection you can query the questions for a specific test, retrieving a subset of them.
I recommend checking out the Firebase documentation on the hierarchical data model of Firestore, and on performing queries.
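For reference, a minimal sketch of what that restructuring could look like; the collection names and IDs simply reuse the ones from the question and are not prescribed by either answer:

const testRef = db.collection('tests').doc('Test ID')

// Test metadata stays on the parent document ...
await testRef.set({
  name: 'Test Name',
  number_of_question: 20
})

// ... and each question becomes its own document in a sub-collection,
// so questions can be queried and limited independently of the test document.
await testRef.collection('question_bank').doc('Question ID').set({
  name: 'Question Name 1 ?',
  answer: ['A', 'B', 'C', 'D'],
  correct_answer: ['A', 'B']
})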
Related
I am building a site using CouchDB and ReactJS.
One of my pages displays a list of up to 10,000 financial transactions, each txn consisting of:
date
in amount
out amount
payee
category item
notes
I have a pagination strategy and only load and display 100 transactions at a time.
At any one time, I want to be able to search a single column - I use a drop down to tell the search functionality which index to use for searching.
I also want to be able to sort each column.
So far I have used multiple views and I have all of the above functionality working.
During development I used a string for the category item. Now that I have worked out how to get all of the above to work, I need to properly tackle the category item column entry.
A category item belongs to a category; a category can have one or more category items, so there is a one-to-many relationship between a category and its items.
Each txn can have one and only one category item.
A category is made up of a small number of fields.
A category item is made up of a small number of fields.
I am struggling to find the best approach to this.
I have considered each of the approaches described in https://docs.couchbase.com/server/5.0/data-modeling/modeling-relationships.html.
At this point, I am considering one of the following approaches and I was wondering if anyone had any advice. I have included examples of the txns, cats and cat items at the end of this post.
Embed the cat item in the txn and hopefully suss how to both search and sort on the cat item.name
Abandon pagination and load all the txns into the virtual dom, and sort and search the dom directly
Currently each distinct item is a separate document and I use referencing to maintain the relationship. I have considered using the id to store searching and sorting data but I don't see how this would work to give me all that I need.
Txn
{
  "_id": "1",
  "type": "txn",
  "date": "2020-01-20",
  "cat": "3",
  "notes": "xxxx",
  "out": 10,
  "in": 0
}
Category
{
  "_id": "2",
  "type": "cat",
  "name": "Everyday Expenses",
  "weight": 2
}
Category Item
{
  "_id": "3",
  "type": "catitem",
  "cat": "2",
  "name": "Groceries (£850)",
  "weight": 0,
  "notes": "blah, blah, blah"
}
I am running ReactJS on Node.js and I am using PouchDB.
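In case it helps to make the current referencing approach concrete, here is a rough sketch of a PouchDB map/reduce view that pages transactions by date and pulls in the referenced category item via the CouchDB linked-documents feature (emit a value of {_id: ...} and query with include_docs). The design document and function names below are mine, not part of the app:

const designDoc = {
  _id: '_design/txns',
  views: {
    by_date: {
      // Emit the txn date as the key and link to the referenced cat item;
      // with include_docs the linked document comes back as row.doc.
      map: function (doc) {
        if (doc.type === 'txn') {
          emit(doc.date, { _id: doc.cat })
        }
      }.toString()
    }
  }
}

async function loadTxnPage (db, startkey, pageSize) {
  await db.put(designDoc).catch(() => {}) // ignore the conflict if it already exists
  const result = await db.query('txns/by_date', {
    startkey,
    limit: pageSize,
    include_docs: true
  })
  // row.id is the transaction's _id, row.key its date,
  // and row.doc is the linked category item document.
  return result.rows
}

Note that this still cannot sort or search on the cat item's name, because the map function only sees the transaction document; that part would need either embedding or a second lookup.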
While faceting, Azure Search returns the count for each facet field by default. How do I also get other searchable fields for every facet?
For example, when I facet by area, I want something like this (description is a searchable field):
{
  "area": [
    {
      "count": 1,
      "description": "Acrylics",
      "value": "ACR"
    },
    {
      "count": 1,
      "description": "Power",
      "value": "POW"
    }
  ]
}
Can someone please help with the extra parameters I need to send in the query?
Unfortunately there is no good way to do this, as there is no direct support for nested faceting in Azure Search (you can upvote it here). To achieve the result you want, you would need to store the data together as a composite value, as described in this workaround.
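A rough sketch of what that workaround could look like, assuming the index gains a single facetable field (here called areaFacet) whose value combines the code and the description with a separator; the index name, field name, separator and API version are all placeholders:

const res = await fetch(
  `${serviceUrl}/indexes/products/docs/search?api-version=2020-06-30`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'api-key': apiKey },
    body: JSON.stringify({ search: '*', facets: ['areaFacet'] })
  }
)
const body = await res.json()

// Each facet value comes back as e.g. "ACR|Acrylics"; split it client-side
// to recover both the code and the description.
const area = body['@search.facets'].areaFacet.map(f => {
  const [value, description] = f.value.split('|')
  return { value, description, count: f.count }
})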
Suppose I have an index called "posts" with the following properties:
{
  "uid": "<user id>",
  "date": "<some date>",
  "message": "<some message>"
}
And another index called "users" with the following properties:
{
  "uid": "<user id>",
  "gender": "Male"
}
Now, I'm searching for posts posted by people who are males. How can I do that?
I definitely don't want to have a "user" property in a post and store the gender of the user in there. Because when a user updates his/her gender, I'd have to go to every single post that he/she has ever posted to update the gender.
Elasticsearch doesn't support relations across indices. There is a 'join' datatype, but it only relates documents within the same index.
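Since there is no cross-index join, the usual fallback is to do it in two queries from the application side. A minimal sketch against the REST API; the host, the page sizes and the assumption that gender is indexed as a keyword are mine:

const es = 'http://localhost:9200'

const search = async (index, body) => {
  const res = await fetch(`${es}/${index}/_search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  })
  return res.json()
}

// Step 1: collect the uids of male users (capped at 10,000 here for simplicity).
const users = await search('users', {
  size: 10000,
  _source: ['uid'],
  query: { term: { gender: 'Male' } }
})
const uids = users.hits.hits.map(h => h._source.uid)

// Step 2: fetch posts whose uid is in that list.
const posts = await search('posts', {
  query: { terms: { uid: uids } }
})

This keeps gender out of the posts index, at the cost of a second round trip and a limit on how many uids a single terms query can carry.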
Sometimes when using Azure Search's paging there may be duplicate documents in the results. Here is an example of a paging request:
GET /indexes/myindex/docs?search=*&$top=15&$skip=15&$orderby=rating desc
Why is this possible? How can it happen? Are there any consistency guarantees when paging?
The results of paginated queries are not guaranteed to be stable if the underlying index is changing, or if you are relying on sorting by relevance score. Paging simply changes the value of $skip for each page, but each query is independent and operates on the current view of the data (i.e. – there is no snapshotting or other consistency mechanism like you’d find in a general-purpose database).
Here is an example of how you might get duplicates. Assume an index with four documents:
{ "id": "1", "rating": 5 }
{ "id": "2", "rating": 3 }
{ "id": "3", "rating": 2 }
{ "id": "4", "rating": 1 }
Now assume you want to page through the results with a page size of two, ordered by rating. You’d execute this query to get the first page:
$top=2&$skip=0&$orderby=rating desc
And get these results:
{ "id": "1", "rating": 5 }
{ "id": "2", "rating": 3 }
Now you insert a fifth document into the index:
{ "id": "5", "rating": 4 }
Shortly thereafter, you execute a query to fetch the second page of results:
$top=2&$skip=2&$orderby=rating desc
And get these results:
{ "id": "2", "rating": 3 }
{ "id": "3", "rating": 2 }
Notice that you’ve fetched document 2 twice. This is because the new document 5 has a greater value for rating, so it sorts before document 2 and lands on the first page.
In situations where you're relying on document score (either you don't use $orderby or you're using $orderby=search.score()), paging can return duplicate results because each query might be handled by a different replica, and that replica may have different term and document frequency statistics -- enough to change the relative ordering of documents at page boundaries.
For these reasons, it’s important to think of Azure Search as a search engine (because it is), and not a general-purpose database.
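For reference, a minimal sketch of how such $skip-based paging might be issued against the REST API (index name, key and API version are placeholders); each request is an independent query over the index as it exists at that moment, which is exactly why results can shift between pages:

const pageSize = 2
for (let page = 0; page < 3; page++) {
  const url =
    `${serviceUrl}/indexes/myindex/docs?api-version=2020-06-30&search=*` +
    `&$orderby=rating%20desc&$top=${pageSize}&$skip=${page * pageSize}`
  const res = await fetch(url, { headers: { 'api-key': apiKey } })
  const { value } = await res.json()
  console.log(value.map(d => d.id)) // a document can appear twice if the index changed between pages
}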
I have a database with documents like these:
{_id: "1", module:["m1"]}
{_id: "2", module:["m1", "m2"]}
{_id: "3", module:["m3"]}
There is a search index created for these documents with the following index function:
function (doc) {
  doc.module && doc.module.forEach &&
    doc.module.forEach(function (module) {
      index("module", module, {"store": true, "facet": true});
    });
}
The index uses "keyword" analyzer on module field.
The sample data is quite small (11 documents, 3 different module values)
I have two issues with queries that use the group_field=module parameter:
Not all groups are returned; I get 2 out of the 3 groups that I expect. It seems that if a document with ["m1", "m2"] is returned in the "m1" group, there is no "m2" group. When I use counts=["module"] I do get the complete list of distinct values.
I'd like to be able to get something like:
{
  "total_rows": 3,
  "groups": [
    { "by": "m1",
      "total_rows": 2,
      "rows": [ {"_id": "1", "module": "m1"},
                {"_id": "2", "module": "m1"}
              ]
    },
    { "by": "m2",
      "total_rows": 1,
      "rows": [ {"_id": "2", "module": "m2"} ]
    },
    ....
  ]
}
When using group_field, bookmark is not returned, so there is no way to get the next chunk of the data beyond 200 groups or 200 rows in a group.
Cloudant Search is based on Apache Lucene, and hence inherits its properties and limitations.
One limitation of grouping is that "the group field must be a single-valued indexed field" (Lucene Grouping), hence a document can only be in one group.
Another limitation/property of grouping is that topNGroups and maxDocsPerGroup need to be provided in advance; in Cloudant's case the maximums are 200 and 200 (they can be set lower by using the group_limit and limit parameters).
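For illustration, a grouped query against an index like the one above might look like this (the account URL, database, design document and search index names are placeholders); group_limit caps the number of groups returned and limit caps the rows per group, both at 200 at most:

const url =
  `${accountUrl}/mydb/_design/myddoc/_search/mysearch` +
  `?q=*:*&group_field=module&group_limit=200&limit=200&include_docs=true`
const res = await fetch(url, { headers: { Authorization: `Basic ${credentials}` } })
const { groups } = await res.json() // each group has { by, total_rows, rows }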