Using C/C++ <thread> in my program? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I want to be sure whether I should use <thread> in my code.
I am writing a simple console game: the letters I input fall from the top of the console, and I can remove them one by one by typing each letter in real time (the purpose is to memorize names quickly).
I think it should be split into two parts: one prints the random letters falling toward the bottom (if a letter passes the bottom, subtract 1 from the life gauge), and the other waits for typed letters, which are used for removal.
Both seem to need their own processing to run on the console in real time, and message passing is necessary for matching letters.

Note that even though your application needs to do seemingly parallel things, this does not automatically imply that you have to use multiple threads. Your task can easily be solved in a single thread, using an event loop for example.
Usually multithreading is useful in just a couple of situations: 1) you need computing performance, doing computations in parallel on a multi-core system; 2) there are blocking operations (e.g. network or disk I/O) that would otherwise stall other parts of the application (GUI responsiveness, for instance).

You could use multithreading for this, but frankly, I wouldn't.
What you need to remember is that a time period short enough to feel simultaneous to a human is a small eternity to a computer. Or even a long one, depending on the application.
What'll probably make your life a heck of a lot easier is something like this (pseudocode):
while (true) {
// moves letters down, spawns them at the top, checks if they've fallen off the screen
updateFallingLetters();
// polls for input, updates game state accordingly
checkIfIHaveInput();
// waits until one thirtieth or one sixtieth or whatever of a second has passed.
sleepUntilFrameRateAfterLastIWasHere();
}
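The frame-pacing step of the loop above can be sketched concretely. Here is a minimal C version (POSIX assumed; the function name is mine, not from the answer): sleeping until an absolute deadline keeps the rate steady even when a frame's work takes a variable amount of time.

```c
#define _POSIX_C_SOURCE 200112L
#include <time.h>

/* Sleep until the absolute deadline *next (on CLOCK_MONOTONIC), then
   advance it by one frame period.  Using an absolute deadline instead of
   a relative sleep keeps the loop rate steady regardless of how long the
   frame's own work took. */
static void sleep_until_next_frame(struct timespec *next, long frame_ns)
{
    next->tv_nsec += frame_ns;
    if (next->tv_nsec >= 1000000000L) {   /* carry nanoseconds into seconds */
        next->tv_nsec -= 1000000000L;
        next->tv_sec += 1;
    }
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, next, NULL);
}
```

The game loop then reads the current time into `next` once before entering the loop, and each iteration does its updates, polls input, and calls `sleep_until_next_frame(&next, 1000000000L / 30)` for roughly 30 frames per second.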

Related

A program in C language (linux) which does the same thing as the nice -n [number] [process] [closed]

I need to write a C program that launches another program with a modified priority, much as the nice command does. To do that, I would like to find the PID of a process given as an argument (how can I do that?) and modify its priority level (how can I do that?).
Example: The command line might be ./a.out 5 sleep 500 &, and this should produce the same effect as nice -n 5 sleep 500&.
You are focusing on the wrong thing and therefore approaching the problem with the wrong idea. The key requirement is that your program must execute a specified command. Focusing on that will lead you toward how to achieve the process priority goal, at least by helping you frame the question more usefully. For example, you don't need to find any PID, because you don't need to adjust the niceness of an arbitrary process.
So how do you programmatically launch another command? The typical way is to use one of the functions from the exec family. Since the program name and arguments are coming from the command line, execvp() is probably your best choice.
If you read their docs, you will find that the exec functions replace the process image in the current process. That is, they make the process in which they are called start and run a different program in place of the one it was running before. If the command you're going to launch will run in the current process, then it's the current process whose niceness you want to adjust, and for that there is nice().
You shouldn't need much more than those two functions and a little command-line parsing. Do read those functions' documentation carefully, however, especially execvp()'s, to make sure you set up the arguments correctly.
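The two calls fit together in a few lines. A minimal sketch in C (POSIX assumed; the function name is mine): raise the current process's niceness with nice(), then replace the process image with execvp(). On success execvp() never returns.

```c
#include <errno.h>
#include <unistd.h>

/* Launch `cmd` (an argv-style, NULL-terminated vector) with the niceness
   of the current process raised by `inc`, like `nice -n inc cmd...`.
   On success execvp() never returns; on failure this returns -1. */
static int launch_with_nice(int inc, char *const cmd[])
{
    errno = 0;
    if (nice(inc) == -1 && errno != 0)  /* -1 can also be a valid niceness */
        return -1;
    return execvp(cmd[0], cmd);         /* replaces the current process */
}
```

After parsing `./a.out 5 sleep 500`, main() would call something like `launch_with_nice(atoi(argv[1]), &argv[2])`.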

Life expectancy of usb stick when datalogging [closed]

I know that on average a flash drive has a life expectancy of roughly 100,000 write cycles. This raises a question for me.
I have written a program that writes some values to a CSV file on a USB stick every 6 seconds. Every day a new file is created. The machine is a Sigmatek PLC programmed in Structured Text (similar to Pascal) with a C library for file handling. The code looks something like: fopen (opens today's file), write some values to the stream along with a timestamp, then fclose (close the file).
I heard someone say this could mean my USB stick will not last long, since I'm opening and closing the file every 6 seconds. He suggested I open the file, write values every 6 seconds as usual, and then close it after 10 or 20 minutes; this way the USB stick would last a lot longer. His reasoning is that the USB stick is only actually written to at the moment you close the file with fclose. Can someone confirm this?
Or will this perhaps not become a problem at all, even if I'm opening and closing every 6 seconds? The USB stick has 16 GB of memory and will only run out after a very long time (one file is 500 KB max, one file created every day), so I'm only writing, not writing and erasing. Is the 100,000 write cycle lifetime based purely on writing, or on writing, erasing, and re-writing?
First, regarding an fclose() every 10-20 minutes: this depends on the buffering mode (for C, see setvbuf). In fully buffered mode, what you were told is correct: any buffered data is written out at the time of the fclose(). However, there is an increased risk of losing data (e.g. a sudden power loss means the unwritten buffer is lost).
We've also made embedded systems using writable flash (not USB). "100,000 write cycles" is hugely variable; it means P/E (program/erase) cycles. If you're only appending data at the rate you cite, I would not worry too much about it. If you're doing other things, like erasing or compressing log files, which could result in the same storage location being written multiple times, then you need to think harder about it. You'd also need to look at what the OS is doing; for example, any kind of auto-defrag should preferably be disabled.
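The buffering point can be demonstrated in a few lines of C (POSIX stat() is used here just to observe the file size; the path and log line are illustrative): with full buffering, fwrite() only fills a user-space buffer, and the bytes reach the file when the buffer fills, on fflush(), or on fclose().

```c
#include <stdio.h>
#include <sys/stat.h>

static long size_before_close, size_after_close;

static long file_size(const char *path)
{
    struct stat st;
    return stat(path, &st) == 0 ? (long)st.st_size : -1;
}

/* Write one 12-byte log line through a fully buffered stream and record
   the observed file size before and after fclose(). */
static void demo_buffered_write(const char *path)
{
    static char buf[8192];                /* buffer must outlive the stream */
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return;
    setvbuf(f, buf, _IOFBF, sizeof buf);  /* full buffering */
    fwrite("12:00:00,42\n", 1, 12, f);    /* lands in buf, not the file */
    size_before_close = file_size(path);  /* typically still 0 */
    fclose(f);                            /* flushes buf to the file */
    size_after_close = file_size(path);   /* now 12 */
}
```

This is the user-space side only; the OS page cache and the flash controller's wear levelling add further layers between fclose() and the physical medium.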

Kernel Scheduler (Linux) - Is a task a function? [closed]

I've looked into the Linux kernel source, and I was wondering what the kernel sees as a task. The CPU obviously runs machine instructions, so I thought the scheduler fetches, say, the memory address of a program's main function and puts it on the CPU. Is that at least roughly correct? When I click on an executable program, what actually happens inside the scheduler?
EDIT:
I saw several task-related structs in the source code that store a bunch of integers and floats (flags, priority, etc.), but I am wondering how the scheduler finds the machine instructions of my programs.
At a minimum, a task is a set of register values. One of them is the program counter. When the kernel switches tasks, it stores the current values of all registers in the task structure of the old task. It then loads all the register values from the new task's structure, loading the program counter last. This resumes the execution of that task.
Now the hard part to understand: in most kernels, the program counter isn't actually loaded at all during a task switch. So how can that switch tasks?
The trick is that all task switching is done in one and the same function, which must be written in assembly. So the program counter of the old task is always exactly the program counter of the new task, and it doesn't need to be loaded at all. Execution simply continues where it is, but the code now runs in the context of the new task. When the function returns, it returns to wherever the new task previously called the task-switching function. Maybe it's simpler to say that the new task's program counter is loaded from the stack when the task-switch function returns.
Anyway, what the scheduler does is switch the whole CPU state from one task to the other. It's much more than just a function pointer in C. If you want a C equivalent then look at setjmp() + longjmp().
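Since the answer points at setjmp() + longjmp() as the C-level analogy, here is a minimal sketch (function names are mine): setjmp() snapshots the registers, in effect including the program counter, into a jmp_buf, and longjmp() later restores them, resuming execution at the setjmp() site, much like the scheduler resumes a suspended task.

```c
#include <setjmp.h>

static jmp_buf ctx;    /* the "task structure": a saved register set */

static void resume_saved_context(void)
{
    longjmp(ctx, 1);   /* restore the saved registers; never returns here */
}

/* Returns 1: the first setjmp() return saves the context (like a task
   being suspended), and the longjmp() "switch" resumes execution at the
   setjmp() site with a nonzero return value (like a task being resumed). */
static int save_then_resume(void)
{
    if (setjmp(ctx) == 0) {
        /* Context saved; simulate a switch back to it. */
        resume_saved_context();
        return -1;     /* never reached */
    }
    return 1;          /* resumed here via longjmp() */
}
```

Unlike a real context switch, longjmp() can only unwind within one stack; the kernel's switch function also swaps the stack pointer, which is what lets two independent tasks coexist.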

How to present a monitor implementation of a counting semaphore without busy waiting? [closed]

I am currently studying operating systems. This question is from my mid-term exam.
Present a monitor implementation of a counting semaphore. Apart from
initialization, the monitor implements two entry procedures, P() and V(). Indicate clearly the regular
variables used and the condition variables used. Do not neglect to present how the ordinary variables
are initialized. Reminder: busy waiting inside the monitor is not allowed.
I know what a monitor and a counting semaphore are. Specifically, I understand the Peterson and Dekker algorithms. However, I think both of them cause busy waiting. Is there a way to avoid it? Or am I misunderstanding the question? (My English is poor.)
You are right that both of those algorithms busy wait. But they busy wait outside the monitor; your assignment asks you not to busy wait inside the monitor.
Side note: while busy waiting might seem like a bad idea, it is very important for systems that need to avoid latency. In such cases, busy waiting is the best way to go, and the programmers will have a thorough understanding of the platform before they implement it.
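The standard shape of the answer can be sketched with POSIX threads (names are mine; a textbook monitor hides the lock, but a mutex plays that role here): one ordinary variable `count`, one condition variable, and a P() that waits on the condition variable instead of spinning, so there is no busy waiting inside the monitor.

```c
#include <pthread.h>

/* Counting semaphore as a monitor: the mutex is the monitor's mutual
   exclusion, `count` is the ordinary variable, and `nonzero` is the one
   condition variable, signalled whenever count becomes positive. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int             count;
} csem;

void csem_init(csem *s, int initial)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = initial;        /* ordinary variable initialized here */
}

void csem_P(csem *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)      /* wait, don't spin: cond_wait releases */
        pthread_cond_wait(&s->nonzero, &s->lock);  /* the lock and sleeps */
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void csem_V(csem *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);   /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}
```

The `while` (not `if`) around pthread_cond_wait matters: condition variables permit spurious wakeups, so the waiter must re-check the condition after being woken.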

What are the problems/issues that I can face when working with threads? [closed]

I have learned about the following problems that I can face when working with threads:
When you write a value to a variable in memory, the value is not necessarily written to the memory location (the value can be written to cache), and so if another thread is reading this variable, it will not read the value that the other thread just wrote.
Also, when you read from a variable in memory, the value is not necessarily read from the memory location (the value can be read from cache), and so if another thread wrote a value to this variable, and your thread is trying to read it, it will not read the value that the other thread just wrote.
You need to be careful because some tasks need to be "atomic": for example, if two threads are doing calculations on a shared variable, you must not allow them to do their calculations at the same time (one thread must wait for the other to finish its calculations).
The compiler and/or the CPU can execute your program instructions out of order.
You can have a deadlock (if each thread is waiting for the other thread to signal it before continuing).
Are there other problems that I can face when working with threads?
When you write a value to a variable in memory, the value is not necessarily written to the memory location (the value can be written to cache)
You're thinking about it at the wrong level of abstraction. What you say is true, but it's mostly of interest to the developers of the programming language toolchain that you use. From the perspective of an application developer, it's better to say that a value written to memory by one thread does not immediately become visible to other threads.
The compiler and/or the CPU can execute your program instructions out of order
Better to say that, when one thread writes several values to memory in sequence, other threads do not necessarily see the new values appear in the same sequence.
Within any single thread, the compiler and the CPU both must ensure that everything appears to happen in program order.
...some tasks need to be "atomic", so for example if two threads are doing calculations on a variable, you must not allow these two threads to do their calculations at the same time
True again, but that's not enough information to be useful. You need to know when and why two different threads can or can not do their calculations at the same time.
The key concept is invariants. An invariant is any condition that is always assumed to be true. E.g., if you are implementing a linked list structure, one invariant is that every "next" pointer either points to a member of the list or to NULL. If you are implementing a ring of linked nodes, one invariant says that if you follow the chain of "next" pointers far enough, it will always take you back to where you started.
It's often the case that there is no way to perform some operation without temporarily breaking an invariant. E.g., you may be unable to insert something into some data structure without temporarily putting the structure into an invalid state.
You said, "some tasks needs to be 'atomic'". Better to say, some tasks require mutual exclusion (mutexes) to prevent one thread from seeing an invariant in a temporarily broken state caused by the action of some other thread.
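As a concrete illustration (POSIX threads; the list is a made-up example): the mutex ensures no other thread can observe the instant when the "every next pointer points to a list member or NULL" invariant is mid-update.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct node { int value; struct node *next; } node;

static struct { node *head; pthread_mutex_t lock; } list = {
    NULL, PTHREAD_MUTEX_INITIALIZER
};

void list_push(int value)
{
    node *n = malloc(sizeof *n);
    if (n == NULL)
        return;
    n->value = value;
    pthread_mutex_lock(&list.lock);
    /* Between the next two statements the new node points into the list
       but is not yet reachable from head: the invariant is temporarily
       broken, and the lock hides that half-done state from other threads. */
    n->next = list.head;
    list.head = n;
    pthread_mutex_unlock(&list.lock);
}

int list_length(void)
{
    int len = 0;
    pthread_mutex_lock(&list.lock);   /* readers take the lock too */
    for (node *p = list.head; p != NULL; p = p->next)
        len++;
    pthread_mutex_unlock(&list.lock);
    return len;
}
```

Note that readers must also take the lock: the invariant is only guaranteed to hold for threads that observe the structure from inside the mutual exclusion.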
