I am using Camel's resequencer in 'stream' mode to ensure that files are processed in the correct order:
.resequence(new MySequencingExpression()).stream().timeout(60000))
The timeout has to be on the order of minutes, as we occasionally get files that are completely out of order.
When I run this I have noticed that the processing of the first file is delayed by the full timeout period, which is totally unacceptable for us.
Having examined the code, the initial delay occurs because the resequencer is essentially trying to compare the first file to its non-existent predecessor and then timing out. What it should be doing is something like: 'have I received a file within the last timeout period, and if so, was it the correct predecessor?'
Is there any workaround for this?
Thanks
Richard
This is not a bug; please refer to the ResequencerEngine documentation:
If the last-delivered element is null, i.e. the resequencer was newly created, the first arriving element needs timeout milliseconds in any case for becoming ready-for-delivery.
Regarding your comment:
Given the sequence 1, 2, 3, 4, and given that 1 and 2 have arrived, at this point the code could start processing
What if 0 arrives after 1 and 2?
I think it would be great if the developers allowed users to set the first expected value, because usually we know what it is and it would save the unnecessary delay at resequencer startup, but I don't think it's currently possible.
The problem I want to solve is as follows:
Each task (the green bar) represents a pair of turn-on (green dashed line) and turn-off (red dashed line) commands. Tasks may or may not overlap with one another. This is a real-time application where we don't know when, or if, another task is coming. The goal is to turn the valve on if it's not already on, and to avoid turning it off prematurely.
What I mean by not turning the valve off prematurely is this: if we turn the valve off at time off-1, that's wrong because the valve should still be on at that point in time; the correct action is to turn it off at time off-4.
Regarding the implementation details: each task is an async task (via the GLib async API). I simulate waiting for the task duration with the sleep function; a timer is probably more appropriate. Right now the tasks run independently and there is no coordination between them, so the valve is being turned off prematurely.
I searched around for a similar problem, but the closest I found is interval scheduling, whose goal is different. Has anyone encountered a similar problem before and can give me some pointers on how to solve it?
It seems like this could be solved with a simple counter: increment the counter for each opening command and decrement it for each closing command; when the count reaches zero, close the valve.
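If the turn-on and turn-off handlers can run concurrently, the counter also needs locking. Here is a minimal sketch of the idea, assuming GLib threading primitives; valve_on() and valve_off() are hypothetical placeholders for whatever actually drives the hardware:

#include <glib.h>

extern void valve_on(void);     /* hypothetical: physically opens the valve */
extern void valve_off(void);    /* hypothetical: physically closes the valve */

static GMutex lock;             /* protects active_tasks */
static int active_tasks = 0;    /* tasks currently requiring the valve to be open */

/* Call this when a turn-on command arrives. */
void task_started(void)
{
    g_mutex_lock(&lock);
    if (active_tasks++ == 0)
        valve_on();             /* first active task: open the valve */
    g_mutex_unlock(&lock);
}

/* Call this when a task's turn-off command (or timer) fires. */
void task_finished(void)
{
    g_mutex_lock(&lock);
    if (--active_tasks == 0)
        valve_off();            /* last active task: now it is safe to close */
    g_mutex_unlock(&lock);
}

With this, turning off at time off-1 in your example merely decrements the counter; the valve only closes at off-4, when the last overlapping task finishes.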
I'm very new to the whole OpenCL world, so I'm following some beginner tutorials. I'm trying to combine this and this to compare the time required to add two arrays together on different devices. However, I'm getting confusing results. Since the code is too long to post here, I made this GitHub Gist.
On my Mac I have 1 platform with 3 devices. When I manually set the j in
cl_command_queue command_queue = clCreateCommandQueue(context, device_id[j], 0, &ret);
to 0, it seems to run the calculation on the CPU (about 5.75 seconds). When I set it to 1 or 2, the calculation time drops drastically (0.01076 seconds), which I assume is because the calculation is being run on my Intel or AMD GPU. But then there are some issues:
I can set j to any higher number and it still seems to run on a GPU.
When I put all the calculations in a loop, the time measured for every device is the same as calculating on the CPU (as I presume).
The times required to do the calculation for all j > 0 are suspiciously close; I wonder if they are really being run on different devices.
I clearly have no clue about OpenCL, so I would appreciate it if you could take a look at my code and let me know what my mistake(s) are and how to fix them. Or maybe point me towards a good example that runs a calculation on different devices and compares the times.
P.S. I have also posted this question here on Reddit.
Before submitting a question for an issue you are having, always remember to check for errors (specifically, in this case, that every API call returns CL_SUCCESS). The results are meaningless otherwise.
In this specific case, the problem in your code is that when getting the device IDs, you're only requesting one device ID (line 60, third argument), meaning that everything else in the buffer is bogus and the results for j > 0 are meaningless.
The only surprising thing is that it doesn't crash.
Also, when checking runtimes, use OpenCL events, not host-side clock times. In your case you're at least measuring after the clFinish, so you are ensuring that the kernel execution has terminated, but you're essentially counting the time needed for all the setup rather than just the copy time.
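To illustrate both points, here is a hedged sketch (not a drop-in fix for the gist; the function name, the fixed array size of 8, and the CHECK macro are my own): it requests all available device IDs instead of just one, checks every return code, and times the kernel with an event, which requires the queue to have been created with CL_QUEUE_PROFILING_ENABLE:

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

#define CHECK(err, msg) \
    do { if ((err) != CL_SUCCESS) { fprintf(stderr, "%s failed: %d\n", (msg), (err)); return -1; } } while (0)

int enumerate_and_time(cl_platform_id platform, cl_command_queue queue,
                       cl_kernel kernel, size_t global_size)
{
    /* Ask for up to 8 devices, not 1, so the whole buffer is valid. */
    cl_device_id devices[8];
    cl_uint num_devices = 0;
    cl_int err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL,
                                8, devices, &num_devices);
    CHECK(err, "clGetDeviceIDs");
    printf("found %u device(s); only indices below that are meaningful\n", num_devices);

    /* Time the kernel itself with a profiling event. */
    cl_event evt;
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size,
                                 NULL, 0, NULL, &evt);
    CHECK(err, "clEnqueueNDRangeKernel");
    err = clWaitForEvents(1, &evt);
    CHECK(err, "clWaitForEvents");

    cl_ulong start = 0, end = 0;
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START, sizeof(start), &start, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END, sizeof(end), &end, NULL);
    printf("kernel time: %.3f ms\n", (end - start) * 1e-6);
    clReleaseEvent(evt);
    return 0;
}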
When using snd_pcm_writei() in non-blocking mode everything works perfectly for a while, but eventually the audio gets choppy. It sounds like the ring buffer pointers are getting out of sync (i.e. sometimes I can tell that the audio is playing out of order). How long it takes for the problem to start is hardware dependent: on a Gentoo box on real hardware it seldom happens, but on a buildroot system running on QEMU it happens after about 5 minutes. In both cases, draining the pcm stream fixes the problem. I have verified that I'm writing the samples correctly by also writing them to a file and playing them with aplay.
Currently I'm setting avail_min to the period size (1024 frames) and calling snd_pcm_wait() before writing chunks of the period size, but I have tried a number of different variations (different chunk sizes, checking avail myself and using pthread_cond_timedwait() instead of snd_pcm_wait(), etc.). The only thing that works reliably is blocking mode, but I cannot use that.
You can see the current source code here: https://bitbucket.org/frodzdev/mediabox/src/5a6471316c7ae481b329e7e0d4af1bb68a32e71d/src/audio.c?at=staging&fileviewer=file-view-default (it needs a little cleanup since I'm trying all kinds of things). The code that does the actual IO starts at line 375.
Edit:
I think I have a solution, but I don't understand why it seems to work. It seems that it does not matter whether I'm using non-blocking mode; the problem appears when I wait to make sure there's room in the buffer (whether through snd_pcm_wait(), pthread_cond_timedwait(), or usleep()).
The version that seems to work is here: https://bitbucket.org/frodzdev/mediabox/src/c3eb290087d9bbe0d5f37653a33a1ba88ef0628b/src/audio.c?fileviewer=file-view-default. I switched to blocking mode while still waiting before calling snd_pcm_writei() and it didn't make a difference. Then I added a call to snd_pcm_avail() before calling snd_pcm_status() in avbox_audiostream_gettime(). This function is called constantly by another thread to get the stream clock, and it only uses snd_pcm_status() to get the timestamps. Now it seems to work (at least it is a lot less likely to happen), but I don't understand exactly why. I understand that snd_pcm_avail() will synchronize the pointers with the kernel, but I don't really understand when it needs to be called, or the difference between snd_pcm_state() et al. and snd_pcm_status(). Does snd_pcm_status() also synchronize anything? It seems not, because sometimes snd_pcm_status_get_state() will return RUNNING when snd_pcm_avail() returns -EPIPE. The ALSA documentation is really vague. Perhaps understanding these things will help me understand my problem?
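In outline, the pattern I'm describing looks something like this (a simplified sketch, not the exact code from the repository; function and variable names are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <alsa/asoundlib.h>

/* Returns the playback timestamp in microseconds, or -1 on error. */
static int64_t stream_gettime_usec(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_pcm_status_alloca(&status);

    /* snd_pcm_avail() forces the hw pointer to be refreshed from the kernel
     * before we query the status; its return value is only checked for errors. */
    snd_pcm_sframes_t avail = snd_pcm_avail(pcm);
    if (avail < 0)
        fprintf(stderr, "snd_pcm_avail: %s\n", snd_strerror(avail));

    if (snd_pcm_status(pcm, status) < 0)
        return -1;

    snd_htimestamp_t ts;
    snd_pcm_status_get_htstamp(status, &ts);
    return (int64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}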
Now, when I say that it seems to be working, I mean that I cannot reproduce the problem on real hardware. It still happens on QEMU, though far less often. But considering that on the next commit I switched to blocking mode without waiting (which I've used in the past and never had a problem with on real hardware) and it still happens in QEMU, and also that this is a common issue with QEMU, I'm starting to think that I may have fixed the issue on my end and what remains is just a QEMU problem. Is there any way to determine whether the problem is a bug on my end that is easier to trigger on the emulator, or just an emulator problem?
Edit: I realize that I should fill the buffer before waiting, but at this point my concern is not to prevent underruns but to make sure that my code can handle them when they happen. Besides, the buffer does fill up after a few iterations. I confirmed this by outputting avail, buffer_size, etc. before writing each packet, and the numbers I get don't make perfect sense: they show an error of 1 or 2 periods about every 8th period. Also (and this is the main problem) I'm not detecting any underruns; the audio gets choppy but all writes succeed. In fact, if the problem starts happening and I trigger an underrun by overloading the CPU, it corrects itself when the pcm is reset.
In line 505: you're using time as the argument to malloc.
In line 568: weren't you playing audio? In that case you should wait only after you have written the frames. Let's think it through...
The audio device generates an interrupt when it finishes processing a period.
| period A | period B |
           ^          ^
          irq        irq
Before you start the pcm, the audio device doesn't generate any interrupts. Notice here that you're waiting even though you haven't started the pcm yet; you only start it when you call snd_pcm_writei().
When you wait for audio data you will only be woken up once the current period has been fully processed (and at your first wait, the first period hadn't even been written). So in a comfortable situation you should write the whole buffer, wait for the first interrupt, then write the just-processed period, and so on.
Initially, the buffer is empty:
|            |            |
write():
|############|############|
wait():
..............
When we wake up:
|            |############|
write():
|############|############|
I think the problem is that you're writing audio just before it is due to be played, so sometimes it arrives in the buffer too late.
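In outline, the playback loop should look something like this (a minimal sketch, assuming one period of interleaved S16_LE stereo per write; get_frames() is a hypothetical callback that fills the next chunk):

#include <stdint.h>
#include <alsa/asoundlib.h>

/* Hypothetical: fills buf with 'frames' frames, returns 0 when there is no more audio. */
extern int get_frames(int16_t *buf, snd_pcm_uframes_t frames);

static void playback_loop(snd_pcm_t *pcm, snd_pcm_uframes_t period_frames)
{
    int16_t buf[period_frames * 2];               /* 2 channels, S16_LE */

    for (;;) {
        if (!get_frames(buf, period_frames))
            break;

        /* Write first: the pcm is not started (and produces no period
         * interrupts) until the first snd_pcm_writei(). */
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, period_frames);
        if (n == -EPIPE) {
            /* Underrun: recover and write this chunk again. */
            snd_pcm_prepare(pcm);
            n = snd_pcm_writei(pcm, buf, period_frames);
        }
        if (n < 0)
            break;                                /* give up on other errors */

        /* Only now wait for the next period interrupt before writing more. */
        snd_pcm_wait(pcm, 1000);
    }
}

Real code would also need to handle short writes (n smaller than period_frames) and other error codes, but the key point is that the wait happens after the write that starts the stream, not before it.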
I have a high-voltage control VI and I'd like it to increase the output voltage by a user-set increment every x seconds. At the moment I have a timed sequence outside the main while loop, but it never starts. When it's inside the while loop it delays all the other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code I am guessing that you have the two loops in series (i.e. the starting of the while loop depends upon an output of the timed loop; this is the only way that one loop might block another). If this is the case, then decouple the two loops so that they are not directly dependent on each other.
If the while loop depends on user input, then use an event structure and pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines and it can disrupt the deterministic features of a real-time system. Given that you are looking to send out a signal on the order of seconds, it is absolutely not necessary.
Anyway, if I am off base, please post the code in question so that we can review it.
Cheers, Matt
I need to provide a way to perform actions repeatedly at specific dates/times. Basically it should work like cron, and I'm thinking about how to manage the execution times.
One solution could be to run a loop in each job/process and constantly check (every minute or second) whether the current time is the one we are waiting for.
Another solution could be to work with timers, waiting until the next execution: we calculate the difference between now and the next execution time and supply that delay to the timer. But since the execution times should be manageable, we would need a way to interrupt that timer and create a new one, or we could simply kill that process and start a fresh one.
Does anyone have any thoughts on how this should be done properly, or are there any libraries for accomplishing this particular scenario?
Here are 4 libs you could take a look at:
https://github.com/erlware/erlcron
https://github.com/b3rnie/crontab
https://github.com/jeraymond/leader_cron
https://github.com/zhongwencool/ecron