Promises or async/await in mongosh? - mongodb-shell

I have a mongosh script that should execute things synchronously. Can I use promises or async/await in mongosh? It seems like I can't. Is there a way to ensure things don't get executed out of order?
For example:
(1) db.clients.find({}).forEach((client) => {
        db.addresses.insertMany([
          { ...
        }
(2) db.addresses.find({}).forEach(...)
(2) gets executed by mongosh while (1) is still looping. Any thoughts?

You can split the work into two separate script files; mongosh finishes the first before running the second:
mongosh $MONGODB_URI queries/first.js queries/second.js
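As a sketch, using the collection names from the question (the exact documents inserted are placeholders, since that part of the original script was elided):

// queries/first.js - the loop from (1); mongosh runs this file to completion first
db.clients.find({}).forEach((client) => {
  db.addresses.insertMany([
    { clientId: client._id }  // placeholder document shape
  ]);
});

// queries/second.js - the follow-up from (2); runs only after first.js has finished
db.addresses.find({}).forEach((address) => {
  printjson(address);  // stand-in for whatever (2) actually does
});

Because both file names are passed to a single mongosh invocation, the shell evaluates them in order.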

Related

Why is my synchronous code executing asynchronously?

I'm trying to run some synchronous function which should wait for a certain time and then return control. But for some reason, it keeps executing asynchronously, or at least that's how it looks.
This is the code:
function pause(ms) {
  var dt = new Date();
  while (new Date() - dt <= ms) {
    /* Do nothing */
  }
}
console.log("start calculation..");
pause(3000);
console.log("calculation finished!");
This immediately shows the following in the console:
start calculation..
calculation finished!
There is no delay at all, as if the pause function is executed asynchronously.
I've tried 3 different versions of the pause function; it doesn't make any difference.
You can see this in the sandbox I put the code in.
(you must go to preferences - sandbox.config.json and turn off "Infinite loop protection" for it to work)
Why isn't the code running synchronously, first showing the "start calculation..." message, and then the second message after a 3 second delay?
Is there a better way to simulate running a time-expensive synchronous function?
Since my question has only been partially answered, I will provide the solution myself, in case other people have similar issues:
First of all, this code now works as intended in the console of Chrome, Chromium and Firefox. It didn't before, so perhaps there's been an upgrade to the javascript engine in this regard.
When running the code in the provided CodeSandbox though, it still logs the two lines simultaneously, after a 3 seconds delay.
As people have pointed out, two things could be causing this:
the compiler removing empty loops
the console messages not refreshing as long as the loop is running
This code will fix both issues:
function pause(ms, start = new Date()) {
  while (new Date() - start <= ms) {
    setTimeout(() => pause(ms, start));
  }
}
console.log("start calculation..");
pause(3000);
console.log("calculation finished!");
Calling setTimeout (even with a 0 ms delay) causes the rest of the execution (here, a recursive function call) to be put back on the event loop for later processing, after allowing queued event callbacks to run first. This ensures other events - such as logging to the console - are not blocked.
It also puts some actual code inside the loop, so the loop will not be removed by compiler optimisation.
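If the goal is only to simulate a delay (rather than actually keeping the CPU busy), a non-blocking alternative is to defer the second log instead of spinning. A minimal sketch in plain JavaScript:

function sleep(ms) {
  // Resolve a Promise after ms milliseconds instead of busy-waiting
  return new Promise((resolve) => setTimeout(resolve, ms));
}

console.log("start calculation..");
sleep(3000).then(() => console.log("calculation finished!"));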

What is the purpose of next('r') in the context of an RxJS Subject

I'm still fairly new to the RxJS world (please pardon my semantics), but I've seen a few examples of code that creates a Subject to do some work, and then calls next(0), or next('r') on the subscription. It appears to re-run the stream, or rather fetch the next value from the stream.
However, when I tried using this to call an API for some data, it completely skips over the work it's supposed to do as defined in the stream (assuming it would "run" the stream again and get new data from the server), and instead my subscriber gets the 'r' or zero value back when I try to call next like that.
I get that making the subscription "starts execution of the stream", so to speak, but if I want to "re-run" it, I have to unsubscribe, and resubscribe each time.
Is it a convention of some kind to call next with a seemingly redundant value? Am I just using it in the wrong way, or is there a good use-case for calling next like that? I'm sure there's something fundamental that I'm missing, or my understanding of how this works is very wrong.
It's a good question; I definitely recommend reading about hot and cold Observables.
Cold Observables execute each time someone subscribes to them:
const a$ = of(5).pipe(tap(console.log))
a$.subscribe(); // the 'tap' will be executed here
a$.subscribe(); // and here, again.
hot Observables do not care about subscriptions in terms of execution:
const a$ = of(5).pipe(
  tap(console.log),
  shareReplay(1)
);
a$.subscribe(); // the 'tap' will be executed here
a$.subscribe(); // but not here! console.logs only once
In your example you are using a Subject, which represents a cold Observable.
You can try a BehaviorSubject or ReplaySubject - both of them are hot, but be aware that they behave differently.
In your example you can modify your Subject like the following:
const mySubject = new Subject();
const myStream$ = mySubject.pipe(
  shareReplay(1)
);
myStream$.subscribe(x => console.log(x))
mySubject.next(1);
mySubject.next(2);
mySubject.next(3);
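If the intent is to re-run an API call every time next() is pushed, a common pattern is to treat the Subject purely as a trigger and switchMap into the request. A sketch (fetchData() is a hypothetical function returning an Observable of the HTTP response, and switchMap is assumed to be imported from rxjs/operators):

const reload$ = new Subject();

const data$ = reload$.pipe(
  switchMap(() => fetchData())  // the trigger value ('r', 0, ...) is ignored here
);

data$.subscribe((data) => console.log(data));

reload$.next('r');  // fetches and emits fresh data
reload$.next('r');  // fetches again, without resubscribing

Seen this way, the seemingly redundant 'r' or 0 values are just signals to fire the pipeline; the operators downstream decide what actually happens.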

RxJS: how to reflect an array of observable results in order in the UI?

I have an array of observables. Each of them will make a http call to a REST endpoint and return a result so I can update the UI.
I am using zip to run them all like this:
Observable.zip(allTransactions).subscribe(result=> {blab});
In subscribe, I update a page-level collection, so the UI gets updated via two-way binding (Angular).
However, there are a few problems:
1) When I construct each observable in the array, I added .delay(1000) to it, so I expected each one to run at least 1 second after the previous one. In fact, that's not the case: based on my log, all the transactions were fired at the same time, and only the subscribe was delayed by one second. I really need them to run in sequence, in the order I set up the array, because some of the transactions depend on each other. Running them all together won't work for me.
2) zip doesn't seem to guarantee that the results come back in order, so my UI ends up in completely random order, because I was doing this.items.push(result), where items is a variable bound to the UI.
I am currently trying to merge all the transactions and insert an empty observable with a delay between every two transactions (still working on it).
Can anyone suggest any other alternatives, or a better way to try?
Thanks
1) You are correct that adding .delay(1000) to all the observables will not make each one wait for the previous one. The delay operator delays execution from the moment you subscribe, and since you subscribe to them all at the same time and delay them by the same amount, they all execute at the same time.
If you want to execute the observables in sequence, waiting for one to finish before proceeding to the next, use the flatMap operator:
obs1.get()
  // First call
  .flatMap(response1 => {
    return obs2.get(response1.something);
  })
  .subscribe(response2 => {
    // Result from second call
  });
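For the original case of an array of observables that must run one after another and come back in the original order, a sketch in the same RxJS 5 dot-chaining style as the question (assuming allTransactions is the array of observables and the relevant operators have been imported):

Observable.from(allTransactions)
  .concatMap(transaction$ => transaction$)  // subscribes to each one only after the previous completes
  .toArray()                                // collects the results in the original order
  .subscribe(results => {
    // results[i] corresponds to allTransactions[i]
  });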
2) Looking at the zip documentation, the results should come back as an ordered list:
Merges the specified observable sequences or Promises into one observable sequence by using the selector function whenever all of the observable sequences have produced an element at a corresponding index. If the result selector function is omitted, a list with the elements of the observable sequences at corresponding indexes will be yielded.
But as you have noted: the calls are all executed simultaneously, and one observable does not wait for the previous one to finish before starting.
So the call:
Observable.zip(obs1, obs2, obs3).subscribe(result => console.log(result));
Will log:
[response1, response2, response3]

Objectify async: at what point is the RPC call made?

Quite often I want to make two or more independent queries to fetch entities from Datastore. But I'm not sure if they are really parallel. For example:
loadResult1 = ofy().load().key(Key.create(Foo.class, 1));
loadResult2 = ofy().load().key(Key.create(Bar.class, 1));
loadResult1.now();
loadResult2.now();
Is there any benefit of arranging the code like this?
The same goes for search queries:
iterable1 = ofy().load().type(Foo.class).iterable();
iterable2 = ofy().load().type(Bar.class).iterable();
iterable1.hasNext();
iterable2.hasNext();
Will the iterable2 load in parallel with iterable1?
Side question: is .iterable() in this regard any different from .list()?
I tried to debug the code, but it doesn't look like the call is made until the call to .now(), or the first call to .next()/.hasNext() on the iterable. Is that really so?
Yes - until you materialize a result, the queries proceed asynchronously in parallel.

Issue with task system - tasks will run exactly twice (re-post once)

I've attempted to write a simple task system for my AVR. The code for this is here. (Sadly, this is also the MWE.)
The basic idea behind the system is that a periodic timer interrupt sets a flag, which the main application loop then checks in order to run a task. The task processing function is re-entrant, so it will be executed exactly once per iteration for each pending task:
while (1) {
    if (flTask) {
        flTask = task_process_next();
    }
    // Do other awesome stuff in the loop
}
In order to keep the design simple, a task which wants to run periodically is required to re-post itself.
So a simple heartbeat task might be added like this:
task_add(heartbeat_task, 0);
And its code might look like this:
void heartbeat_task(void)
{
    task_add(heartbeat_task, 10000); // Re-post task
    led_toggle(LEDGreen);
    xbee_send_heartbeat(BASE_STATION_ID);
}
My problem is this: each periodic task will run exactly twice.
I have confirmed through toggling pins (as you can see in the code I linked to) that the task_add method is called during each task's first and repeat execution.
However, despite apparently adding the task the second time, it never runs.
I have further tried simplifying the code in task_process_next considerably (including by adding a loop to process all tasks in one call, and by changing the run condition to ignore overflow). Neither of these modifications proved successful.
My question is this: have I messed up some detail of my linked-list implementation which could cause re-posted tasks to be ignored?
In particular, have I accidentally made it so that nodes in the list can be skipped over without being evaluated or run?
I understand that it is difficult to debug this sort of problem without running on the hardware, but I'm hoping that another set of eyes will see what I've missed.
I'm happy to provide any additional information / do any tests which are necessary.
The queue was being corrupted when the task at the end of the queue was removed: tasks.tail was left pointing at the removed node, so tasks appended afterwards (such as a re-posted task) were linked onto a node that was no longer in the list and never ran:
if (prev) {
    prev->next = task->next;
} else {
    tasks.head = task->next;
}
// Adding these lines fixed the problem
if (task == tasks.tail) {
    tasks.tail = prev;
}
