I want to execute a task on the calling thread only, but after some delay.
I see the Timer API as one option, but it creates another thread. What would be the best possible solution?
I am implementing a thread library in C using makecontext(), getcontext() and swapcontext(). I need to implement a scheduler which is invoked every 5 ms in order to switch contexts with another thread (round robin). How could I implement this timer functionality? If I put the timer in the scheduler, then it will be impossible for time to be incremented while the scheduler is not running. Is there a way to associate a timer with a particular process that updates no matter what context is active?
A solution based on the setitimer() system call could do the job. It can be programmed to deliver a periodic SIGALRM signal, and a signal handler attached to that signal can then invoke the scheduler.
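In a user-level thread library that could look roughly like the sketch below (scheduler() is a placeholder for whatever routine picks the next thread and calls swapcontext(); strictly speaking, swapcontext() from a signal handler is not async-signal-safe, but it is the usual approach for this kind of exercise):

    #include <signal.h>
    #include <sys/time.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder: in the real library this would pick the next thread
     * in the round-robin queue and swapcontext() to it. */
    static void scheduler(int sig)
    {
        (void)sig;
    }

    static void start_preemption(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = scheduler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        if (sigaction(SIGALRM, &sa, NULL) == -1) {
            perror("sigaction");
            exit(1);
        }

        /* Deliver SIGALRM every 5 ms, starting 5 ms from now. */
        struct itimerval it;
        it.it_interval.tv_sec  = 0;
        it.it_interval.tv_usec = 5000;
        it.it_value = it.it_interval;
        if (setitimer(ITIMER_REAL, &it, NULL) == -1) {
            perror("setitimer");
            exit(1);
        }
    }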
I'm trying to use the Windows Thread Pool API as part of my C program. In my scenario, I want to submit many tasks to the thread pool and wait until all tasks have finished and there are no more pending tasks. After all tasks have finished, I want to close the thread pool and continue execution of my program.
I believe the way to achieve this is with the wait objects described in the link above, but I can't really figure out the right way to do it. Does anyone know whether it's possible, and what the right way to do it is?
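For what it's worth, here is a minimal sketch of that pattern with the Vista-era thread pool API: a single work object is submitted once per task, and WaitForThreadpoolWorkCallbacks() blocks until every submitted callback has completed (the callback body and the task count are placeholders):

    #include <windows.h>
    #include <stdio.h>

    /* Placeholder task body; Context could carry per-submission data. */
    static VOID CALLBACK WorkCallback(PTP_CALLBACK_INSTANCE Instance, PVOID Context, PTP_WORK Work)
    {
        UNREFERENCED_PARAMETER(Instance);
        UNREFERENCED_PARAMETER(Context);
        UNREFERENCED_PARAMETER(Work);
        /* ... do one unit of work ... */
    }

    int main(void)
    {
        /* NULL environment: use the process-default thread pool. */
        PTP_WORK work = CreateThreadpoolWork(WorkCallback, NULL, NULL);
        if (work == NULL) {
            fprintf(stderr, "CreateThreadpoolWork failed: %lu\n", GetLastError());
            return 1;
        }

        for (int i = 0; i < 100; ++i)
            SubmitThreadpoolWork(work);       /* queue one callback execution per task */

        /* Block until every queued callback has run (FALSE = don't cancel pending ones). */
        WaitForThreadpoolWorkCallbacks(work, FALSE);

        CloseThreadpoolWork(work);
        return 0;
    }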
I have an application which uses WPF for its GUI but, on command, kicks off a very heavy processing load.
I noticed that my GUI was rather sluggish when the engine (heavy processing) was running and when using the 'Application Timeline' tool in VS2015, I noticed that some of my engine code was being run on the UI thread.
The engine is started with the following line which, if I understand the LongRunning flag correctly, creates a new thread and runs the given function on that thread.
rootTask = Task.Factory.StartNew(DoWork, TaskCreationOptions.LongRunning);
The DoWork method referenced above repeatedly uses Parallel.For to queue up hundreds of tasks.
Is it possible that the dispatcher thread is 'helping-out' by running tasks from the TaskScheduler queue? If so, is it possible to prevent this and keep the GUI responsive (albeit to the detriment of the background tasks)?
Is it possible that the dispatcher thread is 'helping-out' by running tasks from the TaskScheduler queue?
No, as far as I know, that's not possible. If some code that comes from the task really executes on the dispatcher thread, that means the task had to explicitly schedule it there.
Let's assume that I have a File Consumer that polls a directory every 10 seconds and does some sort of processing on the files it finds there.
This processing may take 40 seconds for each file. Does this mean that during that interval the Consumer will poll the directory again and start another, similar process?
Is there any way I can avoid that, and not allow the Consumer to poll if the previous poll has not finished?
The file consumer is single threaded, so it will not poll while it is already processing files.
When the consumer finishes, it will delay for 10 s before polling again. This is controlled by the useFixedDelay option, which you can read more about under the JDK ScheduledExecutorService, which Camel uses as its scheduler.
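For reference, those options go on the endpoint URI, along these lines (the directory name is just an example, and option names may differ slightly between Camel versions):

    file:inbox?delay=10000&useFixedDelay=true

With useFixedDelay=true the 10-second delay is measured from the end of one poll to the start of the next, matching ScheduledExecutorService.scheduleWithFixedDelay().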
I have a C program which communicates with PHP through Unix sockets. The procedure is as follows: PHP accepts a file upload from the user, then sends a "signal" to the C program, which then forks another process to unzip the file (I know this could be handled by PHP alone; it is just an example, and the whole problem is more complex).
The problem is that I don't want more than, say, 4 processes running at the same time. I think this could be solved like this: when the C program gets a new "task" from PHP, it puts the task on a queue and handles the queued tasks one by one (ensuring that no more than 4 are running) while still listening on the socket.
I'm unsure how to achieve this, though, as I cannot do that in the same process (or can I)? I thought I could have another child process for managing the queue, which the parent could access through shared memory, but that seems overly complicated. Is there any other way around it?
Thanks in advance.
If you need a separate process for each task handler, then you might consider having five separate processes. The first is the listener: it handles new incoming tasks and places them on a queue. Each task handler sends a request for work when it starts and again whenever it finishes processing a task. When the listener receives such a request, it delivers the next task from the queue to that handler, or, if the task queue is empty, places the handler on a handler queue. When the task queue goes from empty to non-empty, the listener checks whether there is a ready task handler in the handler queue; if so, it takes that handler off the handler queue and delivers the task from the task queue to it.
The PHP process would put tasks to the listener, while the task handlers would get tasks from the listener. The listener simply waits for put or get requests and processes them. You can think of the listener as a simple web server, except that the socket connections to the PHP process and to each task handler can be persistent.
Since the sockets are few and persistent, any of the multiplexing calls could work (select, poll, epoll, kqueue, or whatever is best or available on your system), but it may be easiest to use a separate thread to handle each socket synchronously. The ready-handler queue would then be a semaphore or a condition variable on the task queue. The thread that handles puts from the PHP process would place a task on the task queue and up the semaphore; each thread that serves a ready task handler would down the semaphore and then take a task off the task queue. The task queue itself may need mutual-exclusion protection, depending on how it is implemented.
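To make that concrete, here is a minimal sketch of the put/get discipline with POSIX threads and a counting semaphore (the task record, queue size, and function names are illustrative, and overflow of the fixed-size buffer is not checked for brevity; the real threads would be reading from and writing to their sockets around these calls):

    #include <pthread.h>
    #include <semaphore.h>

    #define QUEUE_CAP 64

    /* Illustrative task record: whatever PHP sends over the socket. */
    struct task {
        char path[256];
    };

    static struct task queue[QUEUE_CAP];
    static int head, tail;                      /* circular-buffer indices */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static sem_t tasks_available;               /* counts queued tasks; sem_init(&tasks_available, 0, 0) at startup */

    /* Called by the thread that reads "put" requests from the PHP socket. */
    static void put_task(const struct task *t)
    {
        pthread_mutex_lock(&lock);
        queue[tail] = *t;
        tail = (tail + 1) % QUEUE_CAP;
        pthread_mutex_unlock(&lock);
        sem_post(&tasks_available);             /* "up": wake one handler-serving thread */
    }

    /* Called by each thread that serves a task handler's socket. */
    static void get_task(struct task *out)
    {
        sem_wait(&tasks_available);             /* "down": block until a task is queued */
        pthread_mutex_lock(&lock);
        *out = queue[head];
        head = (head + 1) % QUEUE_CAP;
        pthread_mutex_unlock(&lock);
    }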