Show ALL items in order in MongoDB database

For some reason when I run db.products.find().pretty(), it doesn't list all the items in my database, and the ones it does list are not in order. Any idea why or how to list everything? It does give me the option to run 'it' after to show more, but it still doesn't show them all or in order. I just want to see all 100 products in order and pretty().
I can understand it not being in order of productId, because I may not know how to do that, but can I at least get it to list everything?

For sorting, you can use sort():
db.sortData.find().sort({ id: -1 }).pretty()
Here, -1 means descending order and 1 means ascending order on the id field of the collection.
By default, the mongo shell returns 20 documents at a time and you have to type it to show more. If you want to change the batch size, you can run this command:
DBQuery.shellBatchSize = 30
Now the mongo shell returns 30 documents per batch instead of 20.
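Putting the two together for your case, a minimal sketch (assuming the sort field is called productId, which may be named differently in your documents):
DBQuery.shellBatchSize = 100   // big enough to show all 100 products in one batch
db.products.find().sort({ productId: 1 }).pretty()   // 1 = ascending order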

How to stop Heap Analytics grouping assets into "OTHER" Category

I think this might be very simple.
I wrote a query in heap to tell me which users were part of an event and how many times they engaged in it during the year.
The result is a simple table with username and number of occurrences.
It worked. However, Heap has this weird behavior of choosing multiple results (maybe at random?) and throwing them into a single "Other (X other results)" category, where X is the number of other results.
So I end up with a table of 20 or maybe 30 users and occurrences, plus one row of "Other (X other results)".
I shrunk the query to see results from a smaller subset of dates and the "Other" category disappeared.
I really need to see every individual row in my query results! Even if it's paginated.
Help! Thank you
You can export the result as a CSV. The downloaded file will contain all the results (every individual entry, without the grouped "Other" row).
In the current UI, you can find Export to CSV at the top of the report view.

How do I display only orders where all items are complete?

I am new to programming so please be kind.
I do not even know where to start with this problem...
I am trying to write a SQL view to display only orders that are complete.
I have a table that looks something like this
The result should display orders 1 and 3, since all of their items are complete. Order 2 should not be displayed since one of its items is still "F". I only want to show each order once, regardless of how many items it has.
Can anyone please point me in the right direction?
Thanks
software - SQL Server 2005
You can use GROUP BY with HAVING to express the condition you want:
SELECT Order_nbr
FROM yourtable
GROUP BY Order_nbr
HAVING MIN(completed) = 'P'
For a completed order, every row's completed value is 'P', so MIN(completed) is 'P'.
For a non-completed order, the completed column contains at least one 'F', and since 'F' sorts before 'P', MIN(completed) returns 'F' and the HAVING clause filters that order out. For example, if order 2 has completed values 'P', 'F', 'P', then MIN(completed) = 'F' and order 2 is excluded.

couchdb map / reduce multiple keys filtering by date

I have a view set up with a map/reduce. Right now this code works great:
function(doc) {
  if (doc.type == 'test') {
    if (doc.trash != 1) {
      for (var id in doc.items) {
        emit([id, doc.items[id].name], 1);
      }
    }
  }
}
function(keys, prices) {
  return sum(prices);
}
I get a return and when using the group parameter, it condenses everything just fine.
My issue/question: I want to add a third key, DATE, so I can reduce only records from certain dates. So for example:
function(doc) {
  if (doc.type == 'test') {
    if (doc.trash != 1) {
      for (var id in doc.items) {
        // date = however the document's date is stored
        emit([date, id, doc.items[id].name], 1);
      }
    }
  }
}
My issue is that since date is at the beginning of the array, the reduce groups by date, then id, etc. I know I can use group_level to take just the first key, or the first two keys, from the array, but that doesn't help either because, as far as I know, group_level goes from left to right in the array. I could put the date at the end of the emit array, but that doesn't help either, because I need to have values at the beginning of my startkey and endkey to search on.
Here is an example of the output of data:
{"key":["2012-03-13","356752b8a5f6871f3","Apple"],"value":1},
{"key":["2012-03-20","123752b8a76986857","Pear"],"value":1},
{"key":["2012-04-12","3013531de05871194","Grapefruit"],"value":1},
{"key":["2012-04-12","356752b8a5f6871f3","Apple"],"value":1},
I want Apple to be added up in one row; here it's adding up apples by date first. I was able to add up all the apples successfully if I remove DATE as the first key in the array, but then I can't search by date range.
Any ideas on how to accomplish this?
If I correctly understand what you want to do, then you'd want to put the date as the first element of your array, and use group_level as well as start_key and end_key.
Eg. startkey=[1, "someid"] endkey=[1,"someid",{}] group_level=2
Will get you all items from date 1 (obviously choose your own format here), with id "someid" and any name. It seems funny that you emit ids before names, and without more information about what you're actually trying to accomplish, it's hard to advise on your general data model. If ID is a "type" id, meaning that many items share the same ID, then this makes sense. If ID is a unique per-item ID, then it does not; in that case, you'd want to emit "name" before ID...
Edit 1
As per your comment, to do a range of dates you do this:
startkey=[1] endkey=[5,{}] group_level=2
You will get everything from date 1 to date 5 grouped by id, i.e. apples, oranges, etc. I use this exact technique in a very large-scale production application. I actually format the dates as easily human-readable integers in the format yyyymmdd, so 20140624 would sort to the top. If I want everything from the start of the month till now, grouped by my group ids, I call
startkey=[20140601] endkey=[20140624,{}] group_level=2
It works perfectly and as far as I can tell that's what you're looking to do. I also have a third key layer "detail" which allows me to provide a deeper level of grouping for items that need it. I can then call
startkey=[20140601, "someid"] endkey=[20140624, "someid",{}] group_level=3
To drill to the detail level for a particular id, or just use the previous query with group_level=3 if I want the details for every id. I'm certain you can make this work - I've solved this exact problem in a production application using the techniques described.
Edit 2
If you want to group all apples regardless of date, then you'll need to let apples be the first element in the key. You can then get all apples over all time as a single row in the view result using group_level=1, and Apples over a date range using group_level=2. The difference here is that you'll only be able to do the group_level=2 query on a single item type at a time. If you want the best of both worlds, you unfortunately just need to make 2 views. That's just how key ordering works... If you need fast response times for both types of queries, all item types over a date range, and all of a particular item not grouped by date, I believe 2 views is the only way to achieve that.
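For example, a sketch of what that second view's map function could look like; the doc.date field name is an assumption, since I don't know where your documents keep their date:
function(doc) {
  if (doc.type == 'test' && doc.trash != 1) {
    for (var id in doc.items) {
      // name first: group_level=1 sums all Apples over all time,
      // group_level=2 sums Apples per date
      emit([doc.items[id].name, doc.date, id], 1);
    }
  }
}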
Note
Another thing to note is about your reduce function. Wherever possible it is highly recommended that you use the built-in reduce functions. They're implemented in Erlang and are highly optimized compared to custom JavaScript reduce functions.
In your case, just replace your reduce function with this
_sum
Easy hey?
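For reference, the built-in reduce is just a string in the design document; a minimal sketch (the design doc and view names here are made up):
{
  "_id": "_design/items",
  "views": {
    "by_date": {
      "map": "function(doc) { /* your map function from above */ }",
      "reduce": "_sum"
    }
  }
}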
If you post more info about your application, data model etc. then I'd be happy to help out more with your database design.

Returning only specific rows (eg. every 10th: #1, #11, #21...) from query

I need to fetch only specific rows (kind of "every nth row") from a Solr index. For example, if the full result contains 10000 rows, I want to receive only the first and last row of each 100-item bucket.
items 1 and 100
items 101 and 200
items 201 and 300...
This grouping is dynamic and dependent on the number of results. So, if there are only 5000 total result rows, bucket size is 50 instead of 100. I can calculate the actual indexes but the problem is how to fetch those from Solr.
There are no indexed fields that could be used directly as query parameters. In practice, I am doing a search "name starts with A" (or some other letter) and want to receive the 1st item starting with A, the 100th item starting with A, the 101st item starting with A, etc...
Query parameters http://wiki.apache.org/solr/CommonQueryParameters have "rows" and "start", but these can't skip items, so I would need to get each item with a separate query, which is inefficient. I was also thinking about implementing a Filter Query which would just filter out items 2...99, 102...199, but I do not know how to implement that.
I don't know of an easy way to do this, but this will reduce the amount of data that needs to be passed back and forth: Do a regular query with the usual start and rows parameters, but tell Solr to only return the ID field of each document (via the fl parameter). In your client code, store the IDs of the first and last documents, and repeat the query with the next value for start. Once you reach the end of the search results, you have a list of the document IDs you want. Run a new query and give it the list of document IDs you want returned, and this time get the full documents.
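A rough sketch of that approach in JavaScript; the Solr core URL and the unique key field name id are placeholders, not something taken from your setup:
const SOLR = 'http://localhost:8983/solr/mycore/select';   // placeholder core URL

async function solrQuery(params) {
  const res = await fetch(SOLR + '?' + new URLSearchParams({ wt: 'json', ...params }));
  return (await res.json()).response;
}

// First pass: page through the results asking only for the id field,
// keeping the first and last id of every bucket.
async function fetchBucketEdges(query, bucketSize) {
  const ids = [];
  let start = 0, numFound = Infinity;
  while (start < numFound) {
    const page = await solrQuery({ q: query, fl: 'id', start: start, rows: bucketSize });
    numFound = page.numFound;
    if (page.docs.length === 0) break;
    ids.push(page.docs[0].id);                        // first doc of this bucket
    ids.push(page.docs[page.docs.length - 1].id);     // last doc of this bucket
    start += bucketSize;
  }
  // Second pass: fetch the full documents for just those ids.
  const full = await solrQuery({
    q: 'id:(' + ids.map(function (id) { return '"' + id + '"'; }).join(' OR ') + ')',
    rows: ids.length
  });
  return full.docs;
}
For very long id lists you would batch that second query (or send it as a POST) to stay under URL length limits.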

How do I store this in Redis?

I have many products (product_id). Users (user_id) view the products.
I want to query which users viewed whatever product in the last 24 hours. (In other words, I want to keep a list of user_ids attached to that product_id...and when 24 hours is up for a user, that user pops off that list and the record disappears)
How do I store this in Redis? Can someone give me a high-level schema, because I'm new to Redis?
For something similar I use a sorted set with values being user ids and score being the current time. When updating the set, remove older items with ZREMRANGEBYSCORE as well as updating the time score for the current user.
Update with code:
Whenever a new item is added:
ZREMRANGEBYSCORE recentitems 0 [DateTime.Now.AddMinutes(-10).Ticks]
ZADD recentitems [DateTime.Now.Ticks] [item.id]
To get the ids of items added in the last 10 minutes:
ZREVRANGEBYSCORE recentitems [DateTime.Now.Ticks] [DateTime.Now.AddMinutes(-10).Ticks]
Note that you could also use
ZREVRANGE recentitems 0 -1
if you don't mind that the set could include older items if nothing has been added recently.
That gets you a list of item ids. You then use GET/MGET/HGET/HMGET as appropriate to retrieve the actual items for display.
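Adapted to your case (users per product, 24-hour window), a minimal sketch using the ioredis client for Node; the product:views:<product_id> key name is made up:
const Redis = require('ioredis');
const redis = new Redis();

const DAY_MS = 24 * 60 * 60 * 1000;

// Record that userId viewed productId just now.
async function recordView(productId, userId) {
  const key = 'product:views:' + productId;
  const now = Date.now();
  await redis.zremrangebyscore(key, 0, now - DAY_MS);  // drop views older than 24h
  await redis.zadd(key, now, userId);                  // add or refresh this user's score
}

// List the user ids that viewed productId in the last 24 hours.
async function recentViewers(productId) {
  return redis.zrangebyscore('product:views:' + productId, Date.now() - DAY_MS, '+inf');
}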
If you want Redis keys to drop off automatically then you'll probably want to use a Redis key for every user_id-to-product_id pair. So, you would write something like redis.set "user-to-products:user_id:product_id", timestamp followed by redis.expire "user-to-products:user_id:product_id" 86400 (24 hours, in seconds).
To retrieve the current list you should be able to do redis.keys "user-to-products:user_id:*"
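A quick sketch of that expiring-key variant with the same ioredis client (key naming follows the answer above):
// Each view gets its own key that Redis expires automatically after 24 hours.
async function recordViewWithExpiry(redis, userId, productId) {
  const key = 'user-to-products:' + userId + ':' + productId;
  await redis.set(key, Date.now(), 'EX', 86400);
}

async function productsViewedBy(redis, userId) {
  return redis.keys('user-to-products:' + userId + ':*');   // as in the answer above
}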
