How to track time with an Observable? - timer

I have an http observable and I want to run a timer in parallel with it. For example, I want to show a loading indicator for at least 3s, so if the http call responds before that, I want to wait until the timer emits after 3s. But if the http call takes too long, say more than 5s, I'd like to throw a timeout error.
My code looks like this:
const tracker$ = timer(3e3).pipe(take(1));
const timeout$ = timer(5e3).pipe(take(1), map(() => { throw new Error('timeout!!!') }));
const query$ = combineLatest(http$, tracker$).pipe(map(([httpRes, tracker]) => (httpRes)));
this.loadData$ = race(timeout$, query$).pipe(/* do next */);
I'm using combineLatest and it works. But if http$ errors in less than 3s (before tracker$ emits a value), then combineLatest completes immediately with the error, without waiting for tracker$ to emit.
Piping catchError onto http$ seems to keep it going, but I also want the error to be thrown, because I want to catch it after the race and do something about it.
How can I achieve this 3s effect?
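For reference, one way the catch-hold-and-rethrow idea described above could look is sketched below. This is an illustration only, not the original author's solution; it assumes of, map and catchError are imported from rxjs / rxjs/operators, and it wraps the http result so the error is only re-thrown once tracker$ has fired:

const wrapped$ = http$.pipe(
  map(res => ({ ok: true, res })),
  catchError(err => of({ ok: false, err })) // swallow the error for now so combineLatest keeps waiting
);

const query$ = combineLatest([wrapped$, tracker$]).pipe(
  map(([r]) => {
    if (!r.ok) { throw r.err; }              // re-throw only after tracker$ has emitted (>= 3s)
    return r.res;
  })
);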

Related

Does requestAnimationFrame belong to microtasks or macrotasks in main-thread task management? If not, how can we categorize this kind of render-side task?

How does React schedule effects? I made some tests; it seems the effect hook is called after requestAnimationFrame, but before setTimeout. So I was wondering what the real implementation of the scheduler is. I checked the React source code; it seems to be built upon the MessageChannel API.
Also, how does the event loop run the macrotask sequence, for instance setTimeout/script etc.?
import React, { useState, useEffect } from 'react';

const addMessageChannel = (performWorkUntilDeadline: any) => {
  const channel = new MessageChannel();
  const port = channel.port2;
  channel.port1.onmessage = performWorkUntilDeadline;
  port.postMessage(null);
};

const Component1 = () => {
  const [value] = useState('---NOT INITIALISED');

  requestIdleCallback(() => {
    console.log('requestIdleCallback---');
  });

  useEffect(() => {
    console.log('useEffect---');
  }, []);

  Promise.resolve().then(() => {
    console.log('promise---');
  });

  setTimeout(() => {
    console.log('setTimeout---');
  });

  addMessageChannel(() => {
    console.log('addMessageChannel---');
  });

  requestAnimationFrame(() => {
    console.log('requestAnimationFrame---');
  });

  return <div>{value}</div>;
};

export default Component1;
browser console result:
promise---
requestAnimationFrame---
addMessageChannel---
useEffect---
setTimeout---
requestIdleCallback---
I'm not sure about the useEffect so I'll take your word they use a MessageChannel and consider both addMessageChannel and useEffect a tie.
First the title (part of it at least):
[Does] requestAnimationFrame belong to microtask or macrotask[...]?
Technically... neither. requestAnimationFrame (rAF)'s callbacks are ... callbacks.
Friendly reminder that there is no such thing as a "macrotask": there are "tasks" and "microtasks", the latter being a subset of the former.
Now, while microtasks are tasks, they do have a peculiar processing model, since they have their own microtask queue (which is not a task queue), and this queue gets visited several times during each event-loop iteration. There are multiple "microtask checkpoints" defined in the event-loop processing model, and every time the JS callstack is empty this microtask queue gets visited too.
Then there are tasks, colloquially called "macro-tasks" here and there to differentiate them from the microtasks. Only one of these tasks gets executed per event-loop iteration, selected at its first step.
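As a quick illustration of that ordering (an aside added for reference, not part of the original answer):

setTimeout(() => console.log("task (setTimeout)"));     // a task, runs in a later event-loop iteration
Promise.resolve().then(() => console.log("microtask")); // drained as soon as the callstack is empty
console.log("script");
// Logs "script", then "microtask", then "task (setTimeout)".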
Finally there are callbacks. These may be called from a task (e.g. when the task is to fire an event), or in some particular event-loop iterations, called "painting frames".
Indeed, the step labelled update the rendering is to be called once in a while (generally when the monitor sends its V-Sync update), and it will run a series of operations, calling callbacks, among which are our dear rAF callbacks.
Why is this important? Because this means that rAF (and the other callbacks of the "painting frame") have a special place in the event loop where they may seem to be called with the highest priority. Actually they don't participate in the task prioritization system per se (which happens in the first step of the event loop); they may even be called in the same event-loop iteration as the task that queued them, as in this snippet, where the rAF callback queued from inside the first timeout may get called before the second timeout's callback:
setTimeout(() => {
  console.log("timeout 1");
  requestAnimationFrame(() => console.log("rAF callback"));
  const now = performance.now();
  while (performance.now() - now < 1000) {} // lock the event loop for 1s
});
setTimeout(() => console.log("timeout 2"));
Which we can compare with this other snippet where we start the whole thing from inside a rAF callback:
requestAnimationFrame(() => {
  setTimeout(() => {
    console.log("timeout 1");
    requestAnimationFrame(() => console.log("rAF callback"));
  });
  setTimeout(() => console.log("timeout 2"));
});
While this may seem like an exceptional case to have our task called in a painting frame, it's actually quite common, because browsers have recently decided to make the first call to rAF trigger a painting frame instantly when the document is not being animated.
So any test with rAF should start long after the document has started, with an rAF loop already running in the background...
Ok, so the rAF result may be a fluke. What about your other results?
Promise first, yes. It's not part of the task prioritization either; as said above, the microtask queue gets visited as soon as the JS callstack is empty, as part of the clean up after running script step.
rAF: a fluke, as explained above.
addMessageChannel: see this answer of mine. Basically, in Chrome it's due to both setTimeout having a minimum timeout of 1ms and the message task source having a higher priority than the timeout task source.
setTimeout: it currently has a 1ms minimum delay in Chrome and a lower priority than MessageEvents; still, it would not be against the specs to have it called before the message (a quick check of this ordering is sketched below).
requestIdleCallback: that one is a bit more complex, but given that it waits until the event loop has not done anything for some time, it will be last.
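As a quick check of the message-vs-timeout ordering mentioned above (an illustration added for reference, not part of the original answer; the result noted in the comment is what current Chrome logs, and the spec would also allow the opposite order):

const { port1, port2 } = new MessageChannel();
port1.onmessage = () => console.log("message");
setTimeout(() => console.log("timeout"), 0);
port2.postMessage(null);
// Current Chrome logs "message" first, then "timeout".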

Why do my React tests fail in the CI pipeline due to "not wrapped in act()", while working fine locally?

I have a test suite containing 37 tests that test one of my views. Locally, all tests pass without any issues, but when I push my code, the test suite fails in our pipeline (we are using GitLab).
The error output from the logs in CI is extremely long (thousands of lines; it even exceeds the limit set by GitLab). The errors consist of many "not wrapped in act()" and "nested calls to act() are not supported" warnings (mostly triggered by useTranslation() from i18next and components like Tooltip from Material-UI).
My guess is that async data from the API (mocked using msw) triggers a state update after a call to act() has completed, but I'm not sure how to prove this, or even figure out which tests are actually failing.
Has anyone experienced something similar, or does anyone know what's up?
Example of a failing test:
it.each([
  [Status.DRAFT, [PAGE_1, PAGE_11, PAGE_2, PAGE_22, PAGE_3]],
  [Status.PUBLISHED, [PAGE_1, PAGE_12, PAGE_2, PAGE_21, PAGE_22, PAGE_221]],
])('should be possible to filter nodes by status %s', async (status, expectedVisiblePages) => {
  renderComponent();

  await waitFor(() => {
    expect(screen.queryByRole('progressbar')).not.toBeInTheDocument();
  });

  userEvent.click(screen.getByLabelText('components.FilterMenu.MenuLabel'));

  const overlay = await screen.findByRole('presentation');
  await waitFor(() => expect(within(overlay).queryByRole('progressbar')).not.toBeInTheDocument());

  userEvent.click(within(overlay).getByText(`SiteStatus.${status}`));
  userEvent.keyboard('{Esc}');

  const items = await screen.findAllByRole('link');
  expect(items).toHaveLength(expectedVisiblePages.length);
  expectedVisiblePages.forEach((page) => expect(screen.getByText(page.title)).toBeInTheDocument());
});
Update 1
Okay. So I've narrowed it down to this line:
const items = await screen.findAllByRole('link');
There seems to be a lot of stuff happening while waiting for things to appear. I believed that the call to findAllByRole was already wrapped in act() and that this would make sure all updates have been applied.
Update 2
It seems to be a problem partly caused by tests timing out.
I believe multiple calls to waitFor(...) and find[All]By(...) in the same test, in addition to a slow runner, collectively exceed the timeout for the test (5000ms by default). I've tried to adjust this limit by running the tests with --testTimeout 60000, and now some of the tests are passing. I'm still struggling with the "act()" warnings, though. These might be caused by a different problem entirely...
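As a side note, the timeout can presumably also be raised per query instead of globally, via the waitForOptions argument of the findBy queries (a sketch; the 15000ms value is arbitrary and not from the original post):

const items = await screen.findAllByRole('link', {}, { timeout: 15000 });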
The bug hunt continues...
After many attempts, I finally found the answer. The CI server only has 2 CPUs available, and running the tests with --maxWorkers=2 --maxConcurrent=2, instead of the default --maxWorkers=100% --maxConcurrent=5, solved the problem.
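For reference, roughly the same limits could also be pinned in the Jest configuration instead of on the command line (a sketch only; the values mirror the flags above and are assumptions, not part of the original post):

// jest.config.js
module.exports = {
  maxWorkers: 2,      // cap the worker count to the CI runner's 2 CPUs
  testTimeout: 60000, // give slow CI runners more headroom per test
};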
This is a common issue ;)
I guess you see this problem on the CI server because of the environment (less CPU/memory/etc.).
The warning appears because you start some async action but don't wait for it to complete (because it's async).
You can read more about this issue in this article: https://kentcdodds.com/blog/fix-the-not-wrapped-in-act-warning
The best solution is to wait for the operation to finish, for example by adding a loading indicator and waiting for the element to be removed.
For example:
it('should show empty table', async () => {
  const [render] = createRenderAndStore()
  mockResponse([])

  const { container } = render(<CrmClientsView />) // this view does an async request on first render

  await waitForElementToBeRemoved(screen.queryByRole('test-loading'))
  await waitFor(() => expect(container).toHaveTextContent('There is no data'))
})

Best/Quickest way to execute Promises in-parallel? (React)

Suppose I need to fetch data to create a card. What is the quickest way to get this data using promises? This is how I'm currently doing it:
async function getCards() {
  const promises = []
  for (let i = 0; i < 10; i++) {
    promises.push(getCard(i))
  }
  const cards = await Promise.allSettled(promises)
  setCards(cards)
}

async function getCard(i) {
  const property1 = await getProperty1(i)
  const property2 = await getProperty2(i)
  const property3 = await getProperty3(i)
  const card = (
    <div>
      <div>Property 1: {property1}</div>
      <div>Property 2: {property2}</div>
      <div>Property 3: {property3}</div>
    </div>
  )
  return card
}
For my purposes, I don't need Promise.allSettled, since I don't need to wait for all 10 cards to finish (I may just create a component); I can render each one as it completes. But I'd still like it to be parallel and execute as fast as possible. What other options do I have? And is there a better way to handle what I'm doing in getCard?
If the getPropertyN() functions are indeed asynchronous operations (such as networking requests), then getCards() will run all the calls in your for loop in parallel, such that they are all in flight at the same time, and it will generally reduce the end-to-end time vs. running them serially.
There are some other factors in play, such as what the receiving host does when it receives a bunch of requests at once. If it only handles them one at a time, then you may not gain a whole lot. But, if the host has any parallelism, then you will definitely see a speedup by putting multiple requests in flight at the same time.
Note that your getCard(i) implementation is serializing the three calls to getProperty1(), getProperty2() and getProperty3(), which could perhaps also be done in parallel with something like:
const [property1, property2, property3] = await Promise.all([
  getProperty1(i),
  getProperty2(i),
  getProperty3(i)
]);
Instead of this:
const property1 = await getProperty1(i)
const property2 = await getProperty2(i)
const property3 = await getProperty3(i)
Another thing to keep in mind is that a browser will only make N simultaneous requests (such as fetch() calls) to the same host, where N is around 6. Once you exceed that number of in-flight requests to the same host, the browser will queue the rest until one of the previous ones finishes. The way it's implemented, it doesn't slow things down to make more than the max requests, but you don't gain any more parallelism beyond the browser's limit. If you were running this code in a different JavaScript environment such as nodejs, that limit would not apply, as this is a browser-specific thing.
Note: the key thing to achieving the parallelism is launching multiple requests so they are in flight at the same time. There is no requirement that you use Promise.allSettled() before acting on any results, unless you need to get all the results, in order, before you can process them.
If the results can be processed individually as they finish and can be processed in any order, you can also write the code that way without using Promise.allSettled() such as:
getProperty(1).then(processResult).catch(processErr);
getProperty(2).then(processResult).catch(processErr);
getProperty(3).then(processResult).catch(processErr);
Note: I also don't see any error handling in your code. Any outside network request can fail and you must have some handler for rejected promises.
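Putting those pieces together for the card case, a rough sketch could look like the following. It reuses the setCards state setter and the getCard(i) helper from the question and simply appends each card as soon as its own promise settles; it is an illustration, not the original code:

function getCards() {
  for (let i = 0; i < 10; i++) {
    getCard(i)
      .then(card => setCards(prev => [...prev, card]))        // render each card as it finishes
      .catch(err => console.error(`card ${i} failed:`, err)); // handle rejected promises
  }
}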

Wait for a response from a function

I tried making a command in my discord.js bot; however, the embed with the info gets sent BEFORE all the variables are set (this is because I use Firestore, and it takes a second to get the data).
The function looks something like this:
function getData() {
  var level = 0
  let levels = database.collection('guilds').doc(message.guild.id).collection('levels').doc(person.id)
  levels.get().then((q) => {
    if (q.exists) {
      let data = q.data()
      level = data.level
      return level
    }
  })
}

message.channel.send(getData())
// would send 0 because the function didn't get the data in time
How can I call a function and then wait for a response?
I tried messing around with async and await, but I don't really know how to use them, which kept throwing errors.
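For what it's worth, a minimal async/await version of the idea could look like the sketch below. It reuses the database, message, and person objects from the question and is only an illustration, not the original author's code:

async function getData() {
  const doc = await database
    .collection('guilds').doc(message.guild.id)
    .collection('levels').doc(person.id)
    .get();
  return doc.exists ? doc.data().level : 0; // fall back to 0 when there is no document
}

// The caller also has to wait for the promise before sending:
getData().then(level => message.channel.send(`Level: ${level}`));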

Akka.net - Streams with parallelism, backpressure and ActorRef

Trying to learn how to use Akka.NET Streams to process items in parallel from a Source.Queue, with the processing done in an actor.
I've been able to get it to work by calling a function with Sink.ForEachParallel, and it works as expected.
Is it possible to process items in parallel with Sink.ActorRefWithAck (as I would prefer it to utilize back-pressure)?
I was about to press Post when I tried to combine previous attempts and voila!
Previous attempts with ForEachParallel failed when I tried to create the actor within it, as I couldn't do so in an async function. If I used a single actor declared previously, then the Tell would work, but I couldn't get the parallelism I desired.
I got it to work with a router with a round-robin configuration.
var props = new RoundRobinPool(5).Props(Props.Create<MyActor>());
var actor = Context.ActorOf(props);

flow = Source.Queue<Element>(2000, OverflowStrategy.Backpressure)
    .Select(x => {
        return new Wrapper() { Element = x, Request = ++cnt };
    })
    .To(Sink.ForEachParallel<Wrapper>(5, (s) => { actor.Tell(s); }))
    .Run(materializer);
The Request = ++cnt is for console output, to verify the requests are being processed as desired.
MyActor has a long delay on every 10th request, to verify that the backpressure is working.
