I want to detect a power failure on a PC using C programming. There are a few identifiers such as CTRL_LOG_OFF and CTRL_SHUTDOWN in wincon.h. How can I handle an unexpected power failure using C?
The only way to recover from an unexpected power failure is to regularly save state data, so that your program can either a) simply restart when the system is operational again, or b) recover when the system is operational again.
When a power failure occurs, the system just stops. Sometimes hard disks manage to flush their internal cache buffers so the filesystem stays consistent. But in general there is not enough power left for the system to shut down, or even to let your program know the power went off. And even if it could tell your program, and the 100 other programs running on your system, there wouldn't be enough power left for them to take any action that would enable recovery later. At best the system could dump memory and state to the hibernation file, but even for that there won't be enough power.
However, if your system is a laptop with a battery, Windows will tell your program when the power status changes. The program receives the WM_POWERBROADCAST message for this. Then check the wParam parameter for the type of change. See the Windows documentation for this message.
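For instance, a minimal sketch of handling this message in a window procedure might look like the following (MyWndProc is an illustrative name; the PBT_* values are a few of the documented wParam codes):

    #include <windows.h>

    LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_POWERBROADCAST:
            switch (wParam) {
            case PBT_APMPOWERSTATUSCHANGE:
                /* AC/battery status changed; query details with
                   GetSystemPowerStatus() */
                break;
            case PBT_APMSUSPEND:
                /* system is about to suspend: save state now */
                break;
            case PBT_APMRESUMEAUTOMATIC:
                /* system has resumed from a low-power state */
                break;
            }
            return TRUE;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }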
When a computer loses power, the CPU (along with all other system peripherals) also loses power and stops executing instructions. Hence C (or any other language) code cannot be relied upon to catch such a catastrophic event, let alone take any action as the event occurs.
A common method is to note the status of a program in a file. For example, when the program initializes, it could write a "running" status to the file. When the program terminates, it would write a "terminated" status to the file. This would also allow the program to check the file status prior to initialization to see if the program terminated normally previously.
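A minimal sketch of that idea (the file name "status.txt" and the helper names are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define STATUS_FILE "status.txt"

    static void write_status(const char *s)
    {
        FILE *f = fopen(STATUS_FILE, "w");
        if (f) {
            fprintf(f, "%s\n", s);
            fclose(f);
        }
    }

    /* Returns 1 if the previous run ended cleanly (or there is no file yet). */
    static int previous_run_clean(void)
    {
        char buf[32] = "";
        FILE *f = fopen(STATUS_FILE, "r");
        if (!f)
            return 1;                       /* first run: nothing to check */
        if (!fgets(buf, sizeof buf, f))
            buf[0] = '\0';
        fclose(f);
        return strncmp(buf, "terminated", 10) == 0;
    }

    int main(void)
    {
        if (!previous_run_clean()) {
            /* last run never wrote "terminated": recover here */
        }
        write_status("running");
        /* ... do the real work ... */
        write_status("terminated");
        return 0;
    }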
Update:
What does SIGPWR do in Linux?
The SIGPWR signal indicates a change in the system's power status.
For example, a SIGPWR signal might be issued by a process associated with an "uninterruptible power supply" (UPS) that supplies power to the Linux system. The same UPS process writes status information to the file:
/var/run/powerstatus (or the now-deprecated /etc/powerstatus). The one-letter value in this file indicates the state of the system's power. States include:
'F' Failing power. UPS is providing emergency power from its battery backup due to an external (or building) power failure.
'L' Low power. UPS is providing emergency power from its battery backup due to an external (or building) power failure. However, the UPS battery backup is critically low. The system should shut down immediately (within the next two minutes).
'O' Power OK. External (or building) power has been restored.
Other examples of SIGPWR might be associated with a system that has multiple (backup/redundant) power supplies, or a laptop that has its own battery power supply.
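On Linux, a sketch of catching SIGPWR might look like this (the handler only sets a flag, which keeps it async-signal-safe; the main loop then reads /var/run/powerstatus and reacts to the one-letter states above):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t power_event = 0;

    static void on_sigpwr(int signo)
    {
        (void)signo;
        power_event = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigpwr;
        sigaction(SIGPWR, &sa, NULL);

        for (;;) {
            pause();                        /* sleep until a signal arrives */
            if (power_event) {
                power_event = 0;
                char state = '?';
                FILE *f = fopen("/var/run/powerstatus", "r");
                if (f) {
                    state = (char)fgetc(f);
                    fclose(f);
                }
                if (state == 'F' || state == 'L') {
                    /* save state and prepare to shut down */
                }
            }
        }
    }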
Related
Looking for a quick clarification on why unrecoverable errors and page faults must be non-maskable interrupts? What happens when they aren't?
Interrupts and exceptions are very different kinds of events.
An interrupt is an event external to the CPU that arrives at the processor asynchronously (its moment of arrival does not depend on the currently executing program).
An exception is an event internal to the CPU that happens as a side effect of executing an instruction.
Consider the processor as an overcomplex, unstoppable automaton with well-defined and strictly specified behavior. It continuously fetches, decodes, and executes instructions, one by one. As it executes each instruction, it applies the result to the state of the automaton (registers and memory) according to the instruction's type. It moves without pauses or interruptions; you can only change the direction of this continuous instruction crunching using calls and jumps.
Such an automaton-like model, supported by well-defined and strictly specified instruction behavior, makes the processor extremely predictable and convenient to program, both for compilers and for software engineers. When you look at an assembler listing, you can say precisely what the processor will do when it executes the program. However, under some specific circumstances the execution of an instruction can fall outside this well-defined model, and in such cases the CPU literally does not know what to do next or how to react. For example, the program tries to divide by zero. What reaction do you expect? What value should be placed into the target register as the result of the division? How can the CPU report to the program that something went wrong? Now imagine another case: the program jumps to some virtual address that has no physical address mapped to it. How should the CPU proceed with its unstoppable fetch-decode-execute job? From where should it take the next instruction to execute? Which instruction should it execute? Or should it simply hang in response? There is no way out of such states.
An exception is a tool that lets the CPU get out of such situations gracefully and restore its unstoppable movement. At the same time, it is a tool to report the encountered error to the operating system and ask for help in handling it. If you could turn off exceptions, you would take that tool away from the CPU and put all of the above issues back on the table. CPU designers do not have good answers for them and do not want to see them, so they make exceptions non-maskable.
I created a program that monitors for events.
I want to log these events "in the right way".
Currently I have a string array, log[500][100].
Each line is a string of characters (up to 100) that report something about the event.
I have it set up so that only the last 500 events are saved in the array.
After that, new events overwrite the oldest events.
Currently I just keep revolving through the array until the program terminates, then I write the array to a file.
Going forward I would like to view the log in real time, any time I wish, without disturbing the event processing and logging process.
I considered opening the file for "appending" but here are my concerns:
(1) The program is running on a Raspberry Pi which has a flash memory as a "disk drive". I believe flash memories have a limited number of write cycles before problems can occur. This program runs 24/7 "forever" so I am afraid the "disk drive" will "wear out".
(2) I am using pretty much all the CPU capacity of the RPi so I don't want to add a lot of overhead/CPU cycles.
How would experienced programmers attack this problem?
Please go easy on me, this is my first C program.
[EDIT]
I began reviewing all the information and I became intrigued by Mark A's suggestion of tmpfs. I looked into it more and I am sure this answers my question. It allows the creation of files in RAM rather than on the SD card. They are lost on power-down, but I don't care.
In order to keep the files from growing too large, I created a double-buffer approach. First I write 500 events to file A, then switch to file B. When 500 events have been written to file B, I close and reopen file A (to delete its contents and start again at 0 events) and switch back to writing to file A. I found I needed to fflush(file...) after each write, or else the file appeared empty until fclose.
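For reference, a minimal sketch of the A/B rotation just described (the paths, which are assumed to sit on a tmpfs mount, and the helper name are illustrative; no error handling):

    #include <stdio.h>

    #define EVENTS_PER_FILE 500
    static const char *paths[2] = { "/tmp/log_a.txt", "/tmp/log_b.txt" };

    static FILE *logf = NULL;
    static int which = 0, count = 0;

    void log_event(const char *msg)
    {
        if (logf == NULL) {                  /* first call: open file A */
            logf = fopen(paths[which], "w");
        } else if (count == EVENTS_PER_FILE) {
            fclose(logf);
            which = !which;                  /* switch A <-> B */
            logf = fopen(paths[which], "w"); /* "w" truncates the old contents */
            count = 0;
        }
        if (logf == NULL)
            return;                          /* open failed; drop the event */
        fprintf(logf, "%s\n", msg);
        fflush(logf);  /* without this, readers see an empty file until fclose */
        count++;
    }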
Normally that would be OK, but right now I am fighting a nasty segmentation fault, so I want as much insight as possible into what is going on. When I hit the fault, I never get to my fclose statements.
Welcome to Stack Overflow and to C programming! A wonderful world of possibilities awaits you.
I have several thoughts in response to your situation.
The very short summary is to use stdout and delegate the output-file management to the shell.
The longer, rambling answer full of my personal musing is as follows:
1 : A very typical thing for C programs is to not be in charge of where their output is kept. You might have heard of the "built in" file handles, stdin, stdout, and stderr. These file handles are (under normal circumstances) always available to your program for input (from stdin) and output (stdout and stderr). As you might guess from their names, stdout is customarily used for regular output and stderr is customarily used for error/exception output. It is exceedingly typical for a C program to simply read from stdin and write to stdout and stderr, and let something else (e.g., the shell) take care of what those actually are.
For example, reading from stdin means that your program can be used for keyboard entry and for file reading, without needing to change your program's code. The same goes for stdout and stderr; simply output to those file handles, and let the user decide whether those should go to the screen or be redirected to a file. And, because stdout and stderr are separate file handles, the user can have them go to separate 'destinations'.
In your case, to implement this, drop the array entirely, and simply
fprintf(stdout, "event notice : %s\n", eventdetailstring);
(or similar) every time your program has something to say. Take a look at fflush(), too, because of potential output buffering.
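For instance, a small wrapper might look like this (report_event is an illustrative name; eventdetailstring is the event text):

    #include <stdio.h>

    void report_event(const char *eventdetailstring)
    {
        fprintf(stdout, "event notice : %s\n", eventdetailstring);
        fflush(stdout);     /* stdout is fully buffered when redirected to a
                               file, so flush to make each line visible
                               immediately (e.g. to `tail -f`) */
    }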
2a : This gets you continuous output. This itself can help with your concern about memory wear on the Pi's flash disk. If you do something like:
eventmonitor > logfile
then logfile will be appended to for the lifetime of your program, which will tend to write to new parts of the flash disk. Of course, if you only ever append, you will eventually run out of space on the disk, so you might set up a cron job to kill the currently running eventmonitor and restart it every day at midnight. Done with the above command, that would cause it to overwrite logfile once per day. This prevents endless growth, and it might even use a new physical area of the flash drive for the new file (even though it's the same name; underneath, it's a different file, with a different inode, etc.). But even if it reuses the exact same area of the flash drive, you are now down to worrying about whether it will last more than 10,000 days, instead of 10,000 writes. I'm betting that within 10,000 days, new options will be available -- worst case, you buy a new Pi every 27 years or so!
There are other possible variations on this theme, as well. E.g., you could have a sophisticated script kicked off by cron every day at midnight that kills any currently running eventmonitor, deletes output files older than a week, and starts a new eventmonitor writing to a file whose name is based partly on the date, so that past days' files aren't overwritten. But all of this is in the realm of using your program. You can make your program easier to use by writing it to use stdin, stdout, and stderr.
2b : Or, you can just have stdout go to the screen, which is typically how it already is when a program is started from an interactive shell / terminal window. I imagine you could have the Pi running headless most of the time, and when you want to see what your program is outputting, hook up a monitor. Generally, things will stay running between disconnecting and reconnecting your monitor. This avoids affecting the flash drive at all.
3 : Another approach is to have your event monitoring program send its output somewhere off-system. This is getting into more advanced programming territory, so you might want to save this for a later enhancement, after you've mastered more of the basics. But, your program could establish a network connection to, say, a JSON API and send event information there. This would let you separate the functions of event monitoring from event reporting.
You will discover as you learn more programming that this idea of separation of concerns is an important concept, and applies at various levels of a program or a system of interoperating programs. In this case, the Pi is a good fit for the data monitoring aspect because it is a lightweight solution, and some other system with more capacity and more stable storage can cover the data collection aspect.
I am developing a simple Operating System only to know its internals better. On developing a Boot loader and a simple kernel that runs on 16-bit Real Mode, I came across the unfamiliar term System Call and a familiar Interrupt.
I have been Googling the terms since, only to find that the concepts are still unclear to me. As far as I have understood, system calls are used by application programs running in the least privileged mode to request a service from the kernel running in a higher privileged mode (Ring 0).
I am still unclear of How the System Calls are implemented.
Say, I am writing a Simple C program to print a word and compiling it. Now, I am left with an executable file that contains a System Call to print the given word on screen. My questions corresponding to the given scenario are as follows:
Question 1:
As soon the Program is executed, the system call informs the kernel of the request - What exactly happens here in terms of low level programming?
Question 2:
Can an Interrupt be a System Call or vice versa?
If it seems that I have not understood the concepts clearly, kindly explain the concept of a system call to me.
Thanking you.
On most systems, interrupts and system calls (and exception handlers) are implemented in the same way.
As soon the Program is executed, the system call informs the kernel of the request - What exactly happens here in terms of low level programming?
Usually, system calls are wrappers around assembly language routines. The sequence of events (see the sketch after this list) is:
Call to System Routine
System Routine unpacks parameters and loads them into registers.
System Routine forces an exception (identified by a number) by executing a change mode instruction (to some mode higher than user mode).
The CPU handles the exception by dispatching to an exception handler in the system dispatch table.
The handler performs the system service.
The handler executes a return-from-exception or return-from-interrupt instruction, returning the process to user mode (or whatever mode it was called from) and to the system service routine.
The system service routine unpacks the return values from registers and updates the parameters.
Return to the calling function.
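On Linux you can see the wrapper layer explicitly: the C library's syscall() function performs the register-loading and change-mode steps above for whatever system call number you hand it. A sketch:

    #define _GNU_SOURCE
    #include <sys/syscall.h>    /* SYS_write */
    #include <unistd.h>         /* syscall() */

    int main(void)
    {
        const char msg[] = "hello via a raw system call\n";
        /* syscall() loads the arguments into registers, executes the
           change-mode instruction, and unpacks the kernel's return value. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }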
Can an Interrupt be a System Call or vice versa?
No, although they are dispatched in the same way.
Presumably an operating system could map system calls and interrupts to the same handler but that would be screwy.
System Calls are like function calls to the operating system, that perform operations that cannot or should not be handled manually by the programs and fall in the task scope of the operating system, e.g. file manipulation, writing to screen etc.
The x86 handles interrupts through a kind of callback mechanism. Every kind of external interrupt is assigned an interrupt number. The operating system sets up a table (the interrupt vector table in real mode, the interrupt descriptor table in protected mode) that stores pointers to the functions that handle the corresponding interrupts. For example, suppose the key-press interrupt is assigned to int 21h. Upon receiving the interrupt from the interrupt controller, the CPU stores the current code segment, instruction pointer, flags, and stack, then examines entry 21h in the interrupt table and reads out the address where the handler is located. It then executes the handler and resumes normal execution.
However, this behavior of calling a handler from the interrupt table can be triggered not only by real hardware interrupts, but also by internal exceptions (like dividing by zero, hitting an undefined opcode, etc.). Exceptions are assigned interrupt numbers that are hopefully different from the ones used by hardware interrupts.
Finally, any interrupt can also be triggered directly by the currently executing program using the "int n" instruction.
This last feature is often used for system calls. The reason is that the user program only needs to know the interrupt number, which is usually standardized (DOS mainly uses 21h, Linux mainly 80h), and the operating system can locate the interrupt handler wherever it likes and store its address in the corresponding interrupt table entry.
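As an illustration, on 32-bit x86 Linux the "int n" mechanism can be invoked directly from C with GCC inline assembly (this sketch assumes an i386 build, where __NR_write is 4 and the arguments go in ebx/ecx/edx):

    int main(void)
    {
        const char msg[] = "hello via int 0x80\n";
        long ret;
        __asm__ volatile ("int $0x80"
                          : "=a"(ret)           /* return value in eax */
                          : "a"(4),             /* eax = __NR_write on i386 */
                            "b"(1),             /* ebx = fd (stdout) */
                            "c"(msg),           /* ecx = buffer */
                            "d"(sizeof msg - 1) /* edx = count */);
        return ret < 0;
    }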
Keep in mind that there are other ways to implement system calls. For example, in protected mode the x86 provides call gates, which are special segments that cause a system call if you try to load them into CS using a far call. Newer processors provide special syscall instructions that are faster than interrupts.
I can't understand what this function aio_fsync does. I've read man pages and even googled but can't find an understandable definition. Can you explain it in a simple way, preferably with an example?
aio_fsync is just the asynchronous version of fsync; when either has completed, all data is written back to the physical drive media.
Note 1: aio_fsync() simply starts the request; the fsync()-like operation is not finished until the request is completed, similar to the other aio_* calls.
Note 2: only the aio_* operations already queued when aio_fsync() is called are included.
As your comment mentioned, if you don't use fsync or aio_fsync, the data will still appear in the file after your program ends. However, if the machine were abruptly powered off, the data would very likely not be there.
This is because when you write to a file, the OS actually writes to the page cache, which is a copy of disk sectors kept in RAM, not to the disk itself. Of course, even before it is written back to the disk, you can still see the data in RAM. When you call fsync() or aio_fsync(), it ensures that write()s, aio_write()s, etc. to all parts of that file are written back to the physical disk, not just to RAM.
If you never call fsync() etc., the OS will eventually write the data back to the drive whenever it has spare time to do so. Or an orderly OS shutdown will do it as well.
I would say you should usually not worry about calling these manually unless you need to ensure that your data, say a log record, is flushed to the physical disk and needs to be more likely to survive an abrupt system crash. Clearly, database engines do this for transactions and journals.
However, there are other reasons the data may not survive a crash, and it is very complex to ensure absolute consistency in the face of failures. So if your application does not absolutely need it, it is perfectly reasonable to let the OS manage this for you. For example, if the compiler's .o output ended up incomplete or corrupt because you power-cycled the machine in the middle of a compile or shortly after, it would not surprise anyone; you would just restart the build.
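To make the sequencing concrete, here is a minimal sketch (illustrative, without error handling; POSIX AIO, so link with -lrt on older Linux systems). It queues an asynchronous write, then requests that the already-queued data be flushed to the medium:

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        static const char rec[] = "one log record\n";

        struct aiocb wr;
        memset(&wr, 0, sizeof wr);
        wr.aio_fildes = fd;
        wr.aio_buf    = (void *)rec;
        wr.aio_nbytes = sizeof rec - 1;
        aio_write(&wr);

        /* Covers only the operations queued before this call (Note 2 above). */
        struct aiocb sync;
        memset(&sync, 0, sizeof sync);
        sync.aio_fildes = fd;
        aio_fsync(O_SYNC, &sync);

        while (aio_error(&sync) == EINPROGRESS)
            usleep(1000);       /* poll; real code would use aio_suspend() */
        close(fd);
        return 0;
    }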
Assume that a large file is saved on disk and I want to run a computation on every chunk of data contained in the file.
The C/C++ code that I would write to do so would load part of the file, then do the processing, then load the next part, then do the processing of this next part, and so on.
If I am, however, interested in doing this in the shortest possible time, I could do the following: first, tell the DMA controller to load the first part of the file. When this part is loaded, tell the DMA controller to load the second part (into some other area of memory) and immediately start processing the first part.
If I get an interrupt from the DMA during processing the first part, I finish the first part and afterwards tell the DMA to overwrite it with the third part of the file; then I process the second part.
If I do not get an interrupt from the DMA during processing the first part, I finish the first part and wait for the interrupt of the DMA.
Depending on how long the processing takes relative to the disk read, this should be up to twice as fast. In reality, of course, one would have to measure. But that is not the question I am asking.
The question is: Is it possible to do this a) in C using some non-standard extension, or b) in assembly? Or do operating systems not allow such things in general? The question is meant primarily in a single-threaded context, although I would also be interested to know how to do it with two threads. Also, I am not interested in specific code; this is more of a theoretical question.
You're right that you will not get the benefit of this by default, because a blocking read stops your thread from doing any processing. Hans is right that modern OSes already take care of all the little details of DMA and interrupt completion routines.
You need to use the architecture you've described: issue a request in advance of when you will use the data. Issue asynchronous I/O requests (on Windows these are called OVERLAPPED). Then the flow will go exactly as you envision, but the DMA and interrupts are handled in the drivers.
On Windows, take a look at FILE_FLAG_OVERLAPPED (to CreateFile) and ReadFile (if you like events) or ReadFileEx (if you like callbacks). If you don't have to process the data in any particular order, then add a completion port to the mix, which queues the completion responses.
On Linux, OSX, and many other Unix-like OSes, look at aio_read. Or fadvise. Or use mmap with madvise.
And you can get these benefits without even writing native code. .NET recently added the ReadAsync method to its FileStream, which can be used with continuation-passing style in the form of Task objects, with async/await syntactic sugar in the C# compiler.
Typically, in a multi-mode (user/system) operating system, you do not have access to direct DMA or to interrupts. In systems that extend those features from kernel (system) mode down to user mode, the overhead eliminates the benefit of using them.
Ignoring the fact that what you're asking to do requires a very specialized environment to support it, the idea is sound and common: declare two (or more) buffers so that DMA can fill the next one while you process the first. When two buffers are used, they're sometimes referred to as ping-pong buffers.
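A sketch of that pattern using POSIX AIO (illustrative; no error handling; the file name is hypothetical): start a read into one buffer, process the other, then swap.

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK 65536
    static char buf[2][CHUNK];

    static void process(const char *p, ssize_t n) { (void)p; (void)n; /* compute */ }

    int main(void)
    {
        int fd = open("bigfile.dat", O_RDONLY);
        off_t off = 0;
        int cur = 0;

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_nbytes = CHUNK;

        cb.aio_buf = buf[cur];              /* prime the pump: first read */
        cb.aio_offset = off;
        aio_read(&cb);

        for (;;) {
            const struct aiocb *list[1] = { &cb };
            while (aio_error(&cb) == EINPROGRESS)
                aio_suspend(list, 1, NULL); /* wait for the outstanding read */
            ssize_t n = aio_return(&cb);
            if (n <= 0)
                break;                      /* EOF or error */

            int done = cur;
            cur = !cur;                     /* swap ping-pong buffers */
            off += n;

            cb.aio_buf = buf[cur];          /* start the next read ... */
            cb.aio_offset = off;
            aio_read(&cb);

            process(buf[done], n);          /* ... while processing this one */
        }
        close(fd);
        return 0;
    }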