Get delta at cursor (no selection) without splitting it - quill

I have a delta with an attribute applied to part of the text.
Trying to get the delta with
editor.getContents(range.index, range.length);
returns
Delta: {
  ops: []
}
which is expected - range.length is 0.
Is there a way of returning the entire delta (from left to right) so it looks like this:
Delta: {
  ops: [
    {
      attributes: { test: '123' },
      insert: 'A selection'
    },
    ...
  ]
}

To disambiguate, assume a slightly more complex example in which the test=123 attribute is implemented with a class Attributor, given the document:
<div class="ql-editor">
  <p><strong>ab</strong><span class="ql-test-123">cd<em>ef</em></span></p>
</div>
I think what you are asking, however, is how to get the Delta for the "cdef" text when the user's cursor sits between "e" and "f", so your range is index: 5 with length: 0.
This is an experimental/undocumented API, but quill.scroll.path(5) will get you an array like [[blockBlot, 0], [inlineBlot, 2], [italicBlot, 1]]. The blot you want in this case is the second one, so by summing the offsets up to it you get 2 (0 + 2), and you can then call quill.getContents(2, blot.length()).
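A minimal sketch of that approach (hedged: scroll.path is undocumented, and which entry of the path you want depends on the blot you are after):
// Assumes the document from above with the cursor at index 5 (between "e" and "f").
const range = quill.getSelection();
const path = quill.scroll.path(range.index); // e.g. [[blockBlot, 0], [inlineBlot, 2], [italicBlot, 1]]
const [blot] = path[1];                      // the inline blot wrapping "cdef"
const offset = path[0][1] + path[1][1];      // sum the offsets up to it: 0 + 2 = 2
const delta = quill.getContents(offset, blot.length());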
If the class is unique (or you can access the DOM node some other way) you can also do:
const Parchment = Quill.import("parchment");
let node = document.querySelector('.ql-test-123');
let blot = Parchment.find(node);
let offset = quill.scroll.offset(blot);
quill.getContents(offset, blot.length());

Related

insertAt using fp-ts to insert element at specific index

Trying to simply insert an element at a specific index using the insertAt util function from fp-ts, but insertAt returns Option<T[]> | NonEmptyArray, which yields an object like
const x = insertAt(1, 100)([0, 10]) // => { _tag: "someTag", value: [0, 100, 10] }
and I can't just print x.value because the Option and NonEmptyArray types have no 'value' key. How can I get access to the whole array, for example to render it in a view? How can I iterate through it? The fp-ts documentation gives me absolutely zero knowledge about how it works.
It's not easy to get started with fp-ts, and this answer doesn't aim to explain it in depth, but in short: you need a way to get the value out of that Option, if there is any.
Here's an example with a brief explanation:
import { insertAt } from 'fp-ts/lib/Array';
import { pipe } from 'fp-ts/lib/function';
import { option as O } from 'fp-ts';
const x = pipe( // 1
  [0, 10],
  insertAt(1, 100),
  O.getOrElse(() => []) // 2
);
I used pipe (1) to transform the initial value and apply all the needed functions.
You could also use flow (also exported from 'fp-ts/lib/function'), which returns a function instead; you then pass your array as an argument:
const x = flow(
  insertAt(1, 100),
  O.getOrElse(() => [])
)([0, 10]);
Here you can see an example of how to retrieve the value (2). Since an Option's value can be "none" or "some", you need a way to supply a fallback value.
An example for the "none" case:
const x = pipe(
  [0, 10],
  insertAt(4, 100), // NOTE: cannot insert at index 4 since the array has 2 items
  O.getOrElse(() => [1000])
);
// x === [1000]
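If you'd rather branch on both cases explicitly instead of supplying a fallback, Option also exposes fold. A small sketch (the string rendering is just for illustration):
import { insertAt } from 'fp-ts/lib/Array';
import { pipe } from 'fp-ts/lib/function';
import { option as O } from 'fp-ts';

const rendered = pipe(
  insertAt(1, 100)([0, 10]),
  O.fold(
    () => 'could not insert', // the "none" case
    (arr) => arr.join(', ')   // the "some" case
  )
);
// rendered === '0, 100, 10'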

Ruby on Rails - Ruby, How to add the values from two hashes with same key, without overwriting the values?

First of all, thank you for helping me with my SQL question.
Now I'm struggling with another thing; it makes me think I should quit being a programmer, to be honest.
Anyway, my problem is: I have an array of hashes (and inside each, another hash) like this:
[
{"A1"=>{:month=>1.0, :balance=>"0.0000", :price=>"9.0000"}},
{"A1"=>{:month=>7.0, :balance=>"34030.0000", :price=>"34030.0000"}},
{"A3"=>{:month=>4.0, :balance=>"34030.0000", :price=>"34030.0000"}},
...
]
What I'm trying to accomplish: if there are two values with the same key, e.g. "A1", merge those values into one hash without overwriting the old values, using the month as a key. Desired output:
[
  {"A1"=>{
    1 => {:balance=>"0.0000", :price=>"9.0000"},
    7 => {:balance=>"34030.0000", :price=>"34030.0000"}
  }},
  and so on...
]
Is this possible?
Given the current format of your data, you'll need more than a couple of transformations, most of them based on transforming the values of the resulting hash after grouping the hashes in the array by their only key:
data
.group_by { |hash| hash.keys.first } # (1)
.transform_values { |value| value.flat_map(&:values) } # (2)
.transform_values { |values| values.index_by { |value| value[:month] } } # (3)
The first transformation groups the array of hashes by each hash's only key (hence the keys.first), resulting in:
{
"A1"=>[
{"A1"=>{:month=>1.0, :balance=>"0.0000", :price=>"9.0000"}},
{"A1"=>{:month=>7.0, :balance=>"34030.0000", :price=>"34030.0000"}}
],
"A3"=>[{"A3"=>{:month=>4.0, :balance=>"34030.0000", :price=>"34030.0000"}}]
}
The second extracts only the values from each hash in the resulting hash, leaving arrays of hashes:
{
"A1"=>[
{:month=>1.0, :balance=>"0.0000", :price=>"9.0000"},
{:month=>7.0, :balance=>"34030.0000", :price=>"34030.0000"}
],
"A3"=>[{:month=>4.0, :balance=>"34030.0000", :price=>"34030.0000"}]
}
Then all that remains is to transform each array of hashes into a single hash whose keys are the values of :month:
{
"A1"=>{
1.0=>{:month=>1.0, :balance=>"0.0000", :price=>"9.0000"},
7.0=>{:month=>7.0, :balance=>"34030.0000", :price=>"34030.0000"}
},
"A3"=>{4.0=>{:month=>4.0, :balance=>"34030.0000", :price=>"34030.0000"}}
}
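One caveat: index_by comes from ActiveSupport, so it is available in Rails but not in plain Ruby. A sketch of the same pipeline in plain Ruby (2.6+), replacing index_by with to_h:
data
  .group_by { |hash| hash.keys.first }
  .transform_values { |hashes| hashes.flat_map(&:values) }
  .transform_values { |values| values.to_h { |v| [v[:month], v] } }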
@Sebastian's answer is excellent. For variety, let's also consider an iterative approach. I'm not sure whether it's more efficient or easier to understand, but it's always good to see multiple perspectives.
Setting up the input data you gave us:
arr = [
  {"A1"=>{:month=>1.0, :balance=>"0.0000", :price=>"9.0000"}},
  {"A1"=>{:month=>7.0, :balance=>"34030.0000", :price=>"34030.0000"}},
  {"A3"=>{:month=>4.0, :balance=>"34030.0000", :price=>"34030.0000"}}
]
We create a new empty hash for our results.
new_hash = {}
And now iterating over the original data. We're going to make some assumptions about the form of the data.
# We know each thing is going to be a hash.
arr.each do |hsh|
  # Set up some convenient variables for keys and
  # values we'll need later.
  key = hsh.keys.first
  value = hsh.values.first
  month = value[:month]
  # If the output hash doesn't yet have the key,
  # give it the key and assign an empty hash to it.
  new_hash[key] ||= {}
  # Assign the value to the hash, keyed to the current month.
  new_hash[key][month] = value
  # ... and get rid of the month key that's now redundant.
  new_hash[key][month].delete(:month)
end
And the result is:
{"A1"=>{1.0=>{:balance=>"0.0000", :price=>"9.0000"},
7.0=>{:balance=>"34030.0000", :price=>"34030.0000"}},
"A3"=>{4.0=>{:balance=>"34030.0000", :price=>"34030.0000"}}}
Arguably it would be more useful for the desired return value to be a hash:
h = {"A1"=>{1=>{:balance=> "0.0000", :price=> "9.0000"},
7=>{:balance=>"34030.0000", :price=>"34030.0000"}},
"A3"=>{4=>{:balance=>"34030.0000", :price=>"34030.0000"}}}
That way you could write, for example:
require 'bigdecimal'
BigDecimal(h['A1'][7][:price])
#=> 0.3403e5
See BigDecimal. BigDecimal is generally used in financial calculations because it avoids round-off errors.
This result can be obtained by changing the values of :month to integers in arr:
arr = [
{"A1"=>{:month=>1, :balance=> "0.0000", :price=> "9.0000"}},
{"A1"=>{:month=>7, :balance=>"34030.0000", :price=>"34030.0000"}},
{"A3"=>{:month=>4, :balance=>"34030.0000", :price=>"34030.0000"}}
]
and by computing:
h = arr.each_with_object({}) do |g,h|
  k,v = g.flatten
  (h[k] ||= {}).update(v[:month]=>v.reject { |k,_| k == :month })
end
See Hash#flatten, Hash#update (aka merge!) and Hash#reject.
One could alternatively write:
h = arr.each_with_object(Hash.new { |h,k| h[k] = {} }) do |g,h|
  k,v = g.flatten
  h[k].update(v[:month]=>v.reject { |k,_| k == :month })
end
See the form of Hash::new that takes a block.
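For reference, a minimal illustration of that form of Hash::new: the block runs on a missing key, and the value it assigns becomes the entry:
h = Hash.new { |hash, key| hash[key] = {} }
h[:a][:x] = 1
h #=> {:a=>{:x=>1}}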

Terraform how to loop over count (number type)

I am fairly new to Terraform. I have a map that looks like this:
{ small: 2, medium: 1 }
I need to create a local list that looks like the one below, so I can easily loop over it to create VMs:
[
small,
small,
medium
]
So far, I have tried something like this:
([for k, v in var.webservers : [
  for s in v : v]
])
My logic was to loop count (the value) times for each key-value pair to generate the list, but Terraform expects a collection to iterate over, not a number.
Please help!
You can do this as follows:
variable "webservers" {
default = {
small = 2,
medium = 1
}
}
output "test" {
value = flatten([for k, v in var.webservers :
[
for t in range(v): k
]
])
}
The order may be different from what you expect, but that is because maps have no order; Terraform iterates map keys lexicographically.
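For the webservers variable above, the output should look like this (a hedged sketch of the expected result; since map keys iterate lexically, medium sorts before small):
test = [
  "medium",
  "small",
  "small",
]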

Groovy dictionary (map) - how to sort according to a map key's value when the value is in x.x.x.x format - numerically sort version values containing the . character

I have the following dictionary, aka map, in Groovy:
list = [
    [
        name: 'ProductA-manifest-file.json',
        path: 'ProductA',
        properties: [
            [key: 'release', value: 'RC1.0'],
            [key: 'PIPELINE_VERSION', value: '1.0.0.11']
        ],
        repo: 'some-generic-repo-local',
    ],
    [
        name: 'ProductA-manifest-file.json',
        path: 'ProductA',
        properties: [
            [key: 'release', value: 'RC1.0'],
            [key: 'PIPELINE_VERSION', value: '1.0.0.75']
        ],
        repo: 'some-generic-repo-local',
    ],
    [
        name: 'ProductA-manifest-file.json',
        path: 'ProductA',
        properties: [
            [key: 'release', value: 'RC1.0'],
            [key: 'PIPELINE_VERSION', value: '1.0.0.1104']
        ],
        repo: 'some-generic-repo-local',
    ],
    // ... more similar entries here ...
]
I'm trying to sort this list according to the value of the properties entry whose key is PIPELINE_VERSION, which is in the x.x.x.x format (i.e. the 4-digit-set case).
I tried the following command, but it's not giving me the entry which contains 1.0.0.1104 as PIPELINE_VERSION. It's giving me 1.0.0.75 (which looks like some kind of string sort).
// Sort the list entries according to pipeline version
def sortedList = list.sort { it.properties.PIPELINE_VERSION.value }
println "###### sortedList" + sortedList
println "\n^^^^\n"
println sortedList.last() // this should return the entry which contains 1.0.0.1104, but I'm getting 1.0.0.75
I also tried using .toInteger(), as def sortedList = list.sort { it.properties.PIPELINE_VERSION.toInteger().value }, but that didn't work and gave an error:
17:07:22 Caught: groovy.lang.MissingMethodException: No signature of method: java.util.ArrayList.toInteger() is applicable for argument types: () values: []
17:07:22 Possible solutions: toUnique(), toUnique()
17:07:22 groovy.lang.MissingMethodException: No signature of method: java.util.ArrayList.toInteger() is applicable for argument types: () values: []
17:07:22 Possible solutions: toUnique(), toUnique()
I also tried list.sort { it.value.tokenize('.').last() }, but that didn't do it either.
A smaller example would be:
map = ['a':'1.0.0.11', 'd':'1.0.0.85', 'b':'1.0.0.1104', 'c':'1.0.0.75']
println " before sorting : " + map
//map = map.sort { it.value } // this doesn't work if the value is not a pure number, i.e. it's in the x.x.x.x format, so let's try the following
map = map.sort { it.value.tokenize('.').last() } // that didn't work either
println " after sorting : " + map
Questions:
How can I get the entry which has the highest PIPELINE_VERSION value?
How can I get the Nth array index entry which contains the highest PIPELINE_VERSION in its value?
How can I handle any number of digit sets, e.g. 1.0.0 or 1.2 or 1.0.0.12 or 1.4.1.9.255?
The below should work (assuming the X.X.X.X format always has X as a number):
def sortClosure = { a, b ->
    // Extract the pattern
    def extract = {
        it.properties.find { it.key == 'PIPELINE_VERSION' }?.value?.tokenize(/./)
    }
    // Transpose the numbers to compare;
    // gives [[1, 1], [0, 0], [0, 0], [11, 1104]] for example
    def transposed = [extract(a), extract(b)].transpose()
    // Then compare on the first non-zero comparison result (-1 or 1)
    def compareInt = transposed.collect {
        it[0].toInteger() <=> it[1].toInteger()
    }.find()
    compareInt ?: 0
}
list.sort(sortClosure)
This one-liner solution worked for the smaller example:
def versions = ['a':'1.0.0.11', 'd':'1.0.0.85', 'b':'1.0.0.1104', 'c':'1.0.0.75']
versions = versions.sort { it.value.tokenize('.').last().toInteger() }
OK, I found the shenzi (one-liner) solution for the complex structure (hint taken from dmahapatro's answer), i.e. a map containing an array containing another map for PIPELINE_VERSION:
println "\n\n before sorting : " + list
list = list.sort {it.properties.find { it.key == 'PIPELINE_VERSION' }?.value?.tokenize('.').last().toInteger() }
println " after sorting : " + list
println "\n\n The last entry which contains the sorted shenzi is: " + map.last()
NOTE: The above solution, and the other answers so far, only work if the first three digit sets of PIPELINE_VERSION are 1.0.0, i.e. they decide the highest number based on the 4th digit set alone (.last()). It would be fun to use a similar one-liner to find the highest PIPELINE_VERSION that actually covers all 4 (or N) digit sets.
def versions = ['a':'1.0.0.11', d:'1.0.0.85', 'b':'1.0.0.1104', 'c':"1.0.0.75"]
//sort:
def sorted = versions.sort{ (it.value=~/\d+|\D+/).findAll() }
result:
[a:1.0.0.11, c:1.0.0.75, d:1.0.0.85, b:1.0.0.1104]
Given this:
def map = ['a':'1.0.0.11', d:'1.0.0.85', 'b':'1.0.0.1104', 'c':"1.0.0.75"]
map = map.sort { a, b ->
  compareVersion(a.value, b.value)
}
the goal becomes to write a compareVersion function that satisfies these (incomplete) tests:
assert 0 == compareVersion('1.0.0.0', '1.0.0.0')
assert 1 == compareVersion('1.1.0.0', '1.0.0.0')
assert -1 == compareVersion('1.1.0.0', '1.2.0.0')
assert 1 == compareVersion('1.1.3.0', '1.1.2.0')
assert 1 == compareVersion('1.1.4.1104', '1.1.4.11')
Here is one implementation. It's not the shortest, but it is quite "Groovy" in style:
//
// e.g. a = '1.0.0.11', b = '1.0.0.85'
//
def compareVersion = { a, b ->
    // e.g. [1, 0, 0, 11]
    def listA = a.tokenize('.').collect { it as int }
    // e.g. [1, 0, 0, 85]
    def listB = b.tokenize('.').collect { it as int }
    // e.g. [0, 0, 0, -1]
    def compareList = [listA, listB].transpose().collect { it[0] <=> it[1] }
    // return the first non-zero value in compareList, or 0 if there are none
    compareList.inject(0) { result, item ->
        (result) ?: item
    }
}
Output of the original map, and sorted:
$ groovy Q.groovy
before sorting : [a:1.0.0.11, d:1.0.0.85, b:1.0.0.1104, c:1.0.0.75]
after sorting : [a:1.0.0.11, c:1.0.0.75, d:1.0.0.85, b:1.0.0.1104]
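To address question 3 (an arbitrary number of digit sets): transpose truncates to the shorter list, so the answers above never compare versions of unequal length over their full length. A hedged sketch that zero-pads the shorter token list first:
// Sketch: zero-pad the token lists so versions of unequal length compare correctly.
def compareVersions = { String a, String b ->
    def la = a.tokenize('.')*.toInteger()
    def lb = b.tokenize('.')*.toInteger()
    def n = Math.max(la.size(), lb.size())
    la += [0] * (n - la.size())
    lb += [0] * (n - lb.size())
    [la, lb].transpose().collect { it[0] <=> it[1] }.find() ?: 0
}
assert compareVersions('1.2', '1.2.0') == 0
assert compareVersions('1.4.1.9.255', '1.4.1.9') > 0
assert compareVersions('1.0.0.75', '1.0.0.1104') < 0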

How to do a Fisher–Yates shuffle with values in a MongoDB document? [duplicate]

I am looking to get a random record from a huge collection (100 million records).
What is the fastest and most efficient way to do so?
The data is already there, and there is no field in which I can generate a random number to obtain a random row.
Starting with the 3.2 release of MongoDB, you can get N random docs from a collection using the $sample aggregation pipeline operator:
// Get one random document from the mycoll collection.
db.mycoll.aggregate([{ $sample: { size: 1 } }])
If you want to select the random document(s) from a filtered subset of the collection, prepend a $match stage to the pipeline:
// Get one random document matching {a: 10} from the mycoll collection.
db.mycoll.aggregate([
{ $match: { a: 10 } },
{ $sample: { size: 1 } }
])
As noted in the comments, when size is greater than 1, there may be duplicates in the returned document sample.
Do a count of all records, generate a random number between 0 and the count, and then do:
db.yourCollection.find().limit(-1).skip(yourRandomNumber).next()
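Spelled out in the mongo shell (a small sketch; variable names are illustrative):
var count = db.yourCollection.count();
var yourRandomNumber = Math.floor(Math.random() * count);
db.yourCollection.find().limit(-1).skip(yourRandomNumber).next();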
Update for MongoDB 3.2
3.2 introduced $sample to the aggregation pipeline.
There's also a good blog post on putting it into practice.
For older versions (previous answer)
This was actually a feature request: http://jira.mongodb.org/browse/SERVER-533 but it was filed under "Won't fix."
The cookbook has a very good recipe to select a random document out of a collection: http://cookbook.mongodb.org/patterns/random-attribute/
To paraphrase the recipe, you assign random numbers to your documents:
db.docs.save( { key : 1, ..., random : Math.random() } )
Then select a random document:
rand = Math.random()
result = db.docs.findOne( { key : 2, random : { $gte : rand } } )
if ( result == null ) {
result = db.docs.findOne( { key : 2, random : { $lte : rand } } )
}
Querying with both $gte and $lte is necessary to find the document with a random number nearest rand.
And of course you'll want to index on the random field:
db.docs.ensureIndex( { key : 1, random :1 } )
If you're already querying against an index, simply drop it, append random: 1 to it, and add it again.
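For example, assuming an existing index on { key: 1 }:
db.docs.dropIndex({ key: 1 })
db.docs.ensureIndex({ key: 1, random: 1 })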
You can also use MongoDB's geospatial indexing feature to select the documents 'nearest' to a random number.
First, enable geospatial indexing on a collection:
db.docs.ensureIndex( { random_point: '2d' } )
To create a bunch of documents with random points on the X-axis:
for ( i = 0; i < 10; ++i ) {
db.docs.insert( { key: i, random_point: [Math.random(), 0] } );
}
Then you can get a random document from the collection like this:
db.docs.findOne( { random_point : { $near : [Math.random(), 0] } } )
Or you can retrieve several documents nearest to a random point:
db.docs.find( { random_point : { $near : [Math.random(), 0] } } ).limit( 4 )
This requires only one query and no null checks, plus the code is clean, simple and flexible. You could even use the Y-axis of the geopoint to add a second randomness dimension to your query.
The following recipe is a little slower than the mongo cookbook solution (add a random key on every document), but returns more evenly distributed random documents. It's a little less-evenly distributed than the skip( random ) solution, but much faster and more fail-safe in case documents are removed.
function draw(collection, query) {
  // query: mongodb query object (optional)
  var query = query || {};
  query['random'] = { $lte: Math.random() };
  var cur = collection.find(query).sort({ random: -1 });
  if (!cur.hasNext()) {
    delete query.random;
    cur = collection.find(query).sort({ random: -1 });
  }
  var doc = cur.next();
  doc.random = Math.random();
  collection.update({ _id: doc._id }, doc);
  return doc;
}
It also requires you to add a "random" field to your documents, so don't forget to add it when you create them; you may need to initialize your collection as shown by Geoffrey:
function addRandom(collection) {
  collection.find().forEach(function (obj) {
    obj.random = Math.random();
    collection.save(obj);
  });
}
db.eval(addRandom, db.things);
Benchmark results
This method is much faster than the skip() method (of ceejayoz), and it generates more uniformly random documents than the "cookbook" method reported by Michael:
For a collection with 1,000,000 elements:
This method takes less than a millisecond on my machine.
The skip() method takes 180 ms on average.
The cookbook method will cause large numbers of documents to never get picked, because their random number does not favor them.
This method will pick all elements evenly over time.
In my benchmark it was only 30% slower than the cookbook method.
The randomness is not 100% perfect, but it is very good (and it can be improved if necessary).
This recipe is not perfect - the perfect solution would be a built-in feature, as others have noted.
However it should be a good compromise for many purposes.
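Example use of the draw() function above in the mongo shell (the collection and query are illustrative):
var doc = draw(db.links, { status: "A" });
printjson(doc);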
Here is a way using the default ObjectId values for _id and a little math and logic.
// Get the "min" and "max" timestamp values from the _id in the collection and the
// diff between.
// 4-bytes from a hex string is 8 characters
var min = parseInt(db.collection.find()
.sort({ "_id": 1 }).limit(1).toArray()[0]._id.str.substr(0,8),16)*1000,
max = parseInt(db.collection.find()
.sort({ "_id": -1 })limit(1).toArray()[0]._id.str.substr(0,8),16)*1000,
diff = max - min;
// Get a random value from diff and divide/multiply be 1000 for The "_id" precision:
var random = Math.floor(Math.floor(Math.random(diff)*diff)/1000)*1000;
// Use "random" in the range and pad the hex string to a valid ObjectId
var _id = new ObjectId(((min + random)/1000).toString(16) + "0000000000000000")
// Then query for the single document:
var randomDoc = db.collection.find({ "_id": { "$gte": _id } })
.sort({ "_id": 1 }).limit(1).toArray()[0];
That's the general logic in shell representation, and it's easily adaptable.
So in points:
Find the min and max primary key values in the collection.
Generate a random number that falls between the timestamps of those documents.
Add the random number to the minimum value and find the first document that is greater than or equal to that value.
This uses "padding" from the timestamp value in "hex" to form a valid ObjectId value, since that is what we are looking for. Using integers as the _id value is essentially simpler, but the same basic idea applies.
Now you can use the $sample aggregation stage.
Example:
db.users.aggregate(
[ { $sample: { size: 3 } } ]
)
See the doc.
In Python using pymongo:
import random

def get_random_doc():
    count = collection.count()
    return collection.find()[random.randrange(count)]
Using Python (pymongo), the aggregate function also works.
collection.aggregate([{'$sample': {'size': sample_size }}])
This approach is a lot faster than running a query for a random number (e.g. collection.find()[random_int]). This is especially the case for large collections.
It is tough if there is no data there to key off of. What are the _id fields? Are they MongoDB ObjectIds? If so, you could get the highest and lowest values:
lowest = db.coll.find().sort({_id:1}).limit(1).next()._id;
highest = db.coll.find().sort({_id:-1}).limit(1).next()._id;
then if you assume the ids are uniformly distributed (they aren't, but at least it's a start), in pseudocode:
unsigned long long L = first_8_bytes_of(lowest)
unsigned long long H = first_8_bytes_of(highest)
V = (H - L) * random_from_0_to_1();
N = L + V;
oid = N concat random_4_bytes();
randomobj = db.coll.find({_id:{$gte:oid}}).limit(1);
You can pick a random timestamp and search for the first object that was created afterwards.
It will only scan a single document, though it doesn't necessarily give you a uniform distribution.
var randRec = function() {
  // replace with your collection
  var coll = db.collection;
  // get the unixtime of the first and last record
  var min = coll.find().sort({_id: 1}).limit(1)[0]._id.getTimestamp() - 0;
  var max = coll.find().sort({_id: -1}).limit(1)[0]._id.getTimestamp() - 0;
  // allow additional query params to be passed
  return function(query) {
    if (typeof query === 'undefined') query = {};
    var randTime = Math.round(Math.random() * (max - min)) + min;
    var hexSeconds = Math.floor(randTime / 1000).toString(16);
    var id = ObjectId(hexSeconds + "0000000000000000");
    query._id = {$gte: id};
    return coll.find(query).limit(1);
  };
}();
My solution in PHP:
/**
 * Get random docs from Mongo
 * @param $collection
 * @param $where
 * @param $fields
 * @param $limit
 * @author happy-code
 * @url happy-code.com
 */
private function _mongodb_get_random (MongoCollection $collection, $where = array(), $fields = array(), $limit = false) {
    // Total docs
    $count = $collection->find($where, $fields)->count();
    if (!$limit) {
        // Get all docs
        $limit = $count;
    }
    $data = array();
    for ($i = 0; $i < $limit; $i++) {
        // Skip documents
        $skip = rand(0, ($count - 1));
        if ($skip !== 0) {
            $doc = $collection->find($where, $fields)->skip($skip)->limit(1)->getNext();
        } else {
            $doc = $collection->find($where, $fields)->limit(1)->getNext();
        }
        if (is_array($doc)) {
            // Catch document
            $data[ $doc['_id']->{'$id'} ] = $doc;
            // Ignore current document when making the next iteration
            $where['_id']['$nin'][] = $doc['_id'];
        }
        // Every iteration, catch a document and decrease the total number of documents
        $count--;
    }
    return $data;
}
In order to get a determined number of random docs without duplicates:
first get all ids
get the number of documents
loop, getting a random index and skipping duplicates
number_of_docs = 7
db.collection('preguntas').find({}, { _id: 1 }).toArray(function(err, arr) {
  count = arr.length
  idsram = []
  rans = []
  while (number_of_docs != 0) {
    var R = Math.floor(Math.random() * count);
    if (rans.indexOf(R) > -1) {
      continue
    } else {
      rans.push(R)
      idsram.push(arr[R]._id)
      number_of_docs--
    }
  }
  db.collection('preguntas').find({ _id: { $in: idsram } }).toArray(function(err1, doc1) {
    if (err1) { console.log(err1); return; }
    res.send(doc1)
  });
});
The best way in Mongoose is to make an aggregation call with $sample.
However, Mongoose does not hydrate aggregation results into Mongoose documents - especially not if populate() is to be applied as well.
For getting a "lean" array from the database:
/*
  The Sample model should be init first:
  const Sample = mongoose …
*/
const samples = await Sample.aggregate([
  { $match: {} },
  { $sample: { size: 33 } },
]).exec();
console.log(samples); // a lean Array
For getting an array of mongoose documents:
const samples = (
  await Sample.aggregate([
    { $match: {} },
    { $sample: { size: 27 } },
    { $project: { _id: 1 } },
  ]).exec()
).map(v => v._id);
const mongooseSamples = await Sample.find({ _id: { $in: samples } });
console.log(mongooseSamples); // an Array of mongoose documents
I would suggest using map/reduce, where you use the map function to only emit when a random value is above a given probability.
function mapf() {
    if (Math.random() <= probability) {
        emit(1, this);
    }
}
function reducef(key, values) {
    return {"documents": values};
}
res = db.questions.mapReduce(mapf, reducef, {"out": {"inline": 1}, "scope": {"probability": 0.5}});
printjson(res.results);
The reducef function above works because only one key ('1') is emitted from the map function.
The value of "probability" is defined in the "scope" when invoking mapReduce(...).
Using mapReduce like this should also be usable on a sharded db.
If you want to select exactly n of m documents from the db, you could do it like this:
function mapf() {
    if (countSubset == 0) return;
    var prob = countSubset / countTotal;
    if (Math.random() <= prob) {
        emit(1, {"documents": [this]});
        countSubset--;
    }
    countTotal--;
}
function reducef(key, values) {
    var newArray = new Array();
    for (var i = 0; i < values.length; i++) {
        newArray = newArray.concat(values[i].documents);
    }
    return {"documents": newArray};
}
res = db.questions.mapReduce(mapf, reducef, {"out": {"inline": 1}, "scope": {"countTotal": 4, "countSubset": 2}});
printjson(res.results);
Where "countTotal" (m) is the number of documents in the db, and "countSubset" (n) is the number of documents to retrieve.
This approach might give some problems on sharded databases.
You can pick a random _id and return the corresponding object:
db.collection.count(function(err, count) {
  db.collection.distinct("_id", function(err, result) {
    if (err)
      res.send(err)
    var randomId = result[Math.floor(Math.random() * (count - 1))]
    db.collection.findOne({ _id: randomId }, function(err, result) {
      if (err)
        res.send(err)
      console.log(result)
    })
  })
})
Here you don't need to spend space on storing random numbers in the collection.
The following aggregation operation randomly selects 3 documents from the collection:
db.users.aggregate(
[ { $sample: { size: 3 } } ]
)
https://docs.mongodb.com/manual/reference/operator/aggregation/sample/
MongoDB now has $rand (available since version 4.4.2).
To pick n non-repeating items, aggregate with { $addFields: { _f: { $rand: {} } } }, then $sort by _f and $limit n.
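A sketch of that pipeline for n = 3 (hedged: $rand requires MongoDB 4.4.2 or later; _f is just a throwaway field name):
db.mycoll.aggregate([
  { $addFields: { _f: { $rand: {} } } },
  { $sort: { _f: 1 } },
  { $limit: 3 }
])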
I'd suggest adding a random int field to each object. Then you can just do a
findOne({random_field: {$gte: rand()}})
to pick a random document. Just make sure you ensureIndex({random_field:1})
When I was faced with a similar problem, I backtracked and found that the business request was actually for creating some form of rotation of the inventory being presented. In that case, there are much better options, which have answers from search engines like Solr, not data stores like MongoDB.
In short, with the requirement to "intelligently rotate" content, what we should do instead of a random number across all of the documents is to include a personal q-score modifier. To implement this yourself, assuming a small population of users, you can store a document per user that has the productId, impression count, click-through count, last-seen date, and whatever other factors the business finds meaningful for computing a q-score modifier. When retrieving the set to display, typically you request more documents from the data store than the end user asked for, then apply the q-score modifier, take the number of records requested, and randomize that page of results - a tiny set - so you simply sort the documents in the application layer (in memory).
If the universe of users is too large, you can categorize users into behavior groups and index by behavior group rather than user.
If the universe of products is small enough, you can create an index per user.
I have found this technique to be much more efficient, but more importantly more effective in creating a relevant, worthwhile experience of using the software solution.
None of the solutions worked well for me, especially when there are many gaps and the set is small.
This worked very well for me (in PHP):
$count = $collection->count($search);
$skip = mt_rand(0, $count - 1);
$result = $collection->find($search)->skip($skip)->limit(1)->getNext();
My PHP/MongoDB sort/order-by-RANDOM solution. Hope this helps someone.
Note: I have numeric IDs within my MongoDB collection that refer to MySQL database records.
First I create an array with 10 randomly generated numbers:
$randomNumbers = [];
for ($i = 0; $i < 10; $i++) {
    $randomNumbers[] = rand(0, 1000);
}
In my aggregation I use the $addFields pipeline stage combined with $arrayElemAt and $mod (modulus). The modulus operator gives me a number from 0 to 9, which I then use to pick a number from the array of randomly generated numbers.
$aggregate[] = [
    '$addFields' => [
        'random_sort' => ['$arrayElemAt' => [$randomNumbers, ['$mod' => ['$my_numeric_mysql_id', 10]]]],
    ],
];
After that you can use the $sort pipeline stage.
$aggregate[] = [
    '$sort' => [
        'random_sort' => 1
    ]
];
My simplest solution to this ...
db.coll.find()
.limit(1)
.skip(Math.floor(Math.random() * 500))
.next()
where your collection has at least 500 items.
If you have a simple id key, you could store all the id's in an array, and then pick a random id. (Ruby answer):
ids = @coll.find({}, fields: { _id: 1 }).to_a
@coll.find(ids.sample).first
Using Map/Reduce, you can certainly get a random record, just not necessarily very efficiently depending on the size of the resulting filtered collection you end up working with.
I've tested this method with 50,000 documents (the filter reduces it to about 30,000), and it executes in approximately 400ms on an Intel i3 with 16GB ram and a SATA3 HDD...
db.toc_content.mapReduce(
    /* map function */
    function() { emit(1, this._id); },
    /* reduce function */
    function(k, v) {
        var r = Math.floor(Math.random() * v.length);
        return v[r];
    },
    /* options */
    {
        out: { inline: 1 },
        /* Filter the collection to "A"ctive documents */
        query: { status: "A" }
    }
);
The Map function simply creates an array of the id's of all documents that match the query. In my case I tested this with approximately 30,000 out of the 50,000 possible documents.
The Reduce function simply picks a random integer between 0 and the number of items (-1) in the array, and then returns that _id from the array.
400 ms sounds like a long time, and it really is; if you had fifty million records instead of fifty thousand, the overhead could grow to the point where this becomes unusable in multi-user situations.
There is an open issue for MongoDB to include this feature in the core... https://jira.mongodb.org/browse/SERVER-533
If this "random" selection was built into an index-lookup instead of collecting ids into an array and then selecting one, this would help incredibly. (go vote it up!)
This works nicely. It's fast, it works with multiple documents, and it doesn't require pre-populating a rand field, which will eventually populate itself:
add an index to the .rand field on your collection
use find and refresh, something like:
// Install packages:
//   npm install mongodb async
// Add index in mongo:
//   db.ensureIndex('mycollection', { rand: 1 })

var mongodb = require('mongodb')
var async = require('async')

// Find n random documents by using the "rand" field.
function findAndRefreshRand (collection, n, fields, done) {
  var result = []
  var rand = Math.random()

  // Append documents to the result based on criteria and options; if options.limit is 0, skip the call.
  var appender = function (criteria, options, done) {
    return function (done) {
      if (options.limit > 0) {
        collection.find(criteria, fields, options).toArray(
          function (err, docs) {
            if (!err && Array.isArray(docs)) {
              Array.prototype.push.apply(result, docs)
            }
            done(err)
          }
        )
      } else {
        async.nextTick(done)
      }
    }
  }

  async.series([
    // Fetch docs with uninitialized .rand.
    // NOTE: You can comment out this step if all docs have initialized .rand = Math.random()
    appender({ rand: { $exists: false } }, { limit: n - result.length }),
    // Fetch on one side of the random number.
    appender({ rand: { $gte: rand } }, { sort: { rand: 1 }, limit: n - result.length }),
    // Continue the fetch on the other side.
    appender({ rand: { $lt: rand } }, { sort: { rand: -1 }, limit: n - result.length }),
    // Refresh the fetched docs, if any.
    function (done) {
      if (result.length > 0) {
        var batch = collection.initializeUnorderedBulkOp({ w: 0 })
        for (var i = 0; i < result.length; ++i) {
          batch.find({ _id: result[i]._id }).updateOne({ rand: Math.random() })
        }
        batch.execute(done)
      } else {
        async.nextTick(done)
      }
    }
  ], function (err) {
    done(err, result)
  })
}

// Example usage
mongodb.MongoClient.connect('mongodb://localhost:27017/core-development', function (err, db) {
  if (!err) {
    findAndRefreshRand(db.collection('profiles'), 1024, { _id: true, rand: true }, function (err, result) {
      if (!err) {
        console.log(result)
      } else {
        console.error(err)
      }
      db.close()
    })
  } else {
    console.error(err)
  }
})
P.S. The question "How to find random records in MongoDB" is marked as a duplicate of this one. The difference is that this question asks explicitly about a single record, while the other asks explicitly about getting random documents (plural).
For me, I wanted to get the same records in a random order, so I created an empty array used to sort, then generated random numbers between 1 and 7 (I have seven fields). So each time I get a different value, I assign a different random sort.
It is 'layman', but it worked for me.
//generate a random number
const randomval = some random value;
//declare the sort array and initialize it to empty
const sort = [];
//write a conditional if/else to decide which sort to use
if (randomval == 1) {
  sort.push(...['createdAt', 1]);
} else if (randomval == 2) {
  sort.push(...['_id', 1]);
}
....
else if (randomval == n) {
  sort.push(...['n', 1]);
}
If you're using Mongoid, the document-to-object wrapper, you can do the following in
Ruby. (Assuming your model is User.)
User.all.to_a[rand(User.count)]
In my .irbrc, I have
def rando klass
  klass.all.to_a[rand(klass.count)]
end
so in the Rails console, I can do, for example,
rando User
rando Article
to get documents randomly from any collection.
You can also use shuffle-array after executing your query:
var shuffle = require('shuffle-array');
Accounts.find(qry, function(err, results_array) {
  newIndexArr = shuffle(results_array);
});
What works efficiently and reliably is this:
Add a field called "random" to each document and assign a random value to it, add an index for the random field, and proceed as follows:
Let's assume we have a collection of web links called "links" and we want a random link from it:
link = db.links.find().sort({random: 1}).limit(1)[0]
To ensure the same link won't pop up a second time, update its random field with a new random number:
db.links.update({_id: link._id}, {$set: {random: Math.random()}})
