Returning From Catching A Floating Point Exception - c

So, I am trying to return from a floating point exception, but my code keeps looping instead. I can actually exit the process, but what I want to do is return and redo the calculation that causes the floating point error.
The FPE occurs because I have a random number generator that generates coefficients for a polynomial. Using some LAPACK functions, I solve for the roots and do some other things. Somewhere in this math-intensive chain, a floating point exception occurs. When this happens, I want to increment the random number generator state and try again until the coefficients are such that the error doesn't occur. It usually doesn't, but very rarely it does, and the results are catastrophic.
So I wrote a simple test program to learn how to work with signals. It is below:
In exceptions.h
#ifndef EXCEPTIONS_H
#define EXCEPTIONS_H
#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <math.h>
#include <errno.h>
#include <float.h>
#include <fenv.h>
void overflow_handler(int);
#endif // EXCEPTIONS_H //
In exceptions.c
#include "exceptions.h"
void overflow_handler(int signal_number)
{
    if (feclearexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_DIVBYZERO | FE_INVALID)){
        fprintf(stdout, "Nothing Cleared!\n");
    }
    else{
        fprintf(stdout, "All Cleared!\n");
    }
    return;
}
In main.c
#include "exceptions.h"
int main(void)
{
    int failure;
    float oops;

    //===Enable Exceptions===//
    failure = 1;
    failure = feenableexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_DIVBYZERO | FE_INVALID);
    if (failure){
        fprintf(stdout, "FE ENABLE EXCEPTIONS FAILED!\n");
    }

    //===Create Error Handler===//
    signal(SIGFPE, overflow_handler);

    //===Raise Exception===//
    oops = exp(-708.5);
    fprintf(stdout, "Oops: %f\n", oops);

    return 0;
}
The Makefile
#===General Variables===#
CC=gcc
CFLAGS=-Wall -Wextra -g3 -Ofast

#===The Rules===#
all: makeAll

makeAll: makeExceptions makeMain
	$(CC) $(CFLAGS) exceptions.o main.o -o exceptions -ldl -lm

makeMain: main.c
	$(CC) $(CFLAGS) -c main.c -o main.o

makeExceptions: exceptions.c exceptions.h
	$(CC) $(CFLAGS) -c exceptions.c -o exceptions.o

.PHONY: clean
clean:
	rm -f *~ *.o
Why doesn't this program terminate when I am clearing the exceptions, supposedly successfully? What do I have to do in order to return to main and exit?
If I can do this, I can put code in between returning and exiting, and do something after the FPE has been caught. I think I will set some sort of flag, clear the most recent entries in the data structures, and redo the calculation based on whether or not that flag is set. The point is, the real program must not abort or loop forever; it must handle the exception and keep going.
Help?

"division by zero", overflow/underflow, etc. result in undefined behaviour in the first place. If the system, however, generates a signal for this, the effect of UB is "suspended". The signal handler takes over instead. But if the handler returns, the effect of UB will "resume".
Therefore, the standard disallows returning from such a situation.
Just think: How would the program have to recover from e.g. DIV0? The abstract machine has no idea about FPU registers or status flags, and even if - what result would have to be generated?
C also has no provisions to unroll the stack properly like C++.
Note also, that generating signals for arithmetic exceptions is optional, so there is no guarantee a signal will actually be generated. The handler is mostly meant to notify about the event and possibly clean up external resources.
Behaviour is different for signals which do not origin from undefined behaviour, but just interrupt program execution. This is well defined as the program state is well-defined.
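For illustration, a minimal sketch of that well-defined case: an asynchronous signal handler that only sets a volatile sig_atomic_t flag, which the main loop checks. Nothing here is from your program; it is just the standard pattern.
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* The only object a strictly portable handler may write to is a
 * volatile sig_atomic_t. */
static volatile sig_atomic_t got_sigint = 0;

static void sigint_handler(int signum)
{
    (void)signum;
    got_sigint = 1;   /* just record the event and return */
}

int main(void)
{
    signal(SIGINT, sigint_handler);
    while (!got_sigint) {
        sleep(1);     /* stand-in for the program's normal work */
    }
    puts("Interrupted; shutting down cleanly.");
    return 0;
}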
Edit:
If you have to rely on the program continuing under all circumstances, you have to check all arguments of arithmetic operations before doing the actual operation and/or use safe operations only (re-order, use larger intermediate types, etc.). One example for integers would be to use unsigned instead of signed integers, as their overflow behaviour is well defined (wraparound), so an overflowing intermediate result will not cause trouble as long as it is corrected afterwards and the wraparound is not too large. (Disclaimer: that does not always work, of course.)
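As a concrete sketch of the "check the arguments before doing the operation" part (my own example, not something from your code):
#include <limits.h>
#include <stdio.h>

/* Returns 1 and stores a + b in *out if the sum fits in an int,
 * returns 0 otherwise -- so no signed overflow can ever occur. */
static int safe_add(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 0;               /* would overflow: refuse */
    *out = a + b;
    return 1;
}

int main(void)
{
    int sum;
    if (safe_add(INT_MAX, 1, &sum))
        printf("sum = %d\n", sum);
    else
        puts("overflow avoided");
    return 0;
}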
Update:
While I am still not completely sure, according to the comments the standard might allow, at least for a hosted environment, the use of LIA-1 traps and recovery from them (see Annex H). As these are not necessarily precise, I suspect recovery is not possible under all circumstances. Also, math.h might present additional aspects which have to be carefully evaluated.
Finally: I still think nothing is gained by such an approach; it only adds uncertainty compared to using safe algorithms. It would be different if there were not so many different components involved. For a bare-metal embedded system, the picture might be completely different.

I think you're supposed to mess around with the calling stack frame if you want to skip an instruction or break out of exp() or whatever. This is high voodoo and bound to be unportable.
The GNU C library lets you call sigsetjmp() outside of a signal handler and then siglongjmp() back to it from inside the handler. This seems like a better way to go. Here is a self-contained modification of your program showing how to do it:
#define _GNU_SOURCE   /* needed so <fenv.h> declares feenableexcept() */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <setjmp.h>
#include <math.h>
#include <errno.h>
#include <float.h>
#include <fenv.h>

sigjmp_buf oh_snap;

void overflow_handler(int signal_number)
{
    (void)signal_number;   /* unused */
    if (feclearexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_DIVBYZERO | FE_INVALID)){
        fprintf(stdout, "Nothing Cleared!\n");
    }
    else{
        fprintf(stdout, "All Cleared!\n");
    }
    siglongjmp(oh_snap, 1);
}

int main(void)
{
    int failure;
    float oops;

    failure = 1;
    failure = feenableexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_DIVBYZERO | FE_INVALID);
    if (failure){
        fprintf(stdout, "FE ENABLE EXCEPTIONS FAILED!\n");
    }

    signal(SIGFPE, overflow_handler);

    if (sigsetjmp(oh_snap, 1)) {
        printf("Oh snap!\n");
    } else {
        oops = exp(-708.5);
        fprintf(stdout, "Oops: %f\n", oops);
    }

    return 0;
}
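To tie this back to the original goal (catch the FPE, bump the random state, and redo the calculation), here is a rough sketch of wrapping the sigsetjmp()/siglongjmp() pattern in a retry loop. generate_coefficients() and solve_roots() are made-up stand-ins for your RNG/LAPACK chain, not real APIs; seed 0 deliberately produces a division by zero so the retry path gets exercised.
/* Compile with: gcc retry.c -lm  (feenableexcept is a GNU extension) */
#define _GNU_SOURCE
#include <fenv.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf retry_point;
static double coeff;                  /* stand-in for the "random" coefficients */

static void fpe_handler(int signum)
{
    (void)signum;
    feclearexcept(FE_ALL_EXCEPT);
    siglongjmp(retry_point, 1);       /* back to the retry loop in main() */
}

/* Hypothetical stand-ins for the RNG + LAPACK chain of the real program. */
static void generate_coefficients(unsigned seed)
{
    coeff = (seed == 0) ? 0.0 : 2.0;  /* pretend seed 0 produces a bad value */
}

static void solve_roots(void)
{
    volatile double r = 1.0 / coeff;  /* traps with SIGFPE when coeff == 0 */
    (void)r;
}

int main(void)
{
    volatile unsigned seed = 0;       /* volatile so it survives siglongjmp */
    volatile int done = 0;

    feenableexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID);
    signal(SIGFPE, fpe_handler);

    while (!done) {
        if (sigsetjmp(retry_point, 1)) {
            seed++;                   /* FPE caught: bump the RNG state */
            continue;                 /* and try again */
        }
        generate_coefficients(seed);
        solve_roots();                /* may raise SIGFPE and jump back */
        done = 1;
    }
    printf("calculation succeeded with seed %u\n", seed);
    return 0;
}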

Related

Segmentation fault before main when using key args [closed]

I get a segmentation fault before main() when I try to start the program with ./somename.o -s 4.
It works fine when run as ./somename.o without any option arguments.
main.c
#include <stdio.h>
#include <stdlib.h>
#include "input.h"
#include "output.h"

int main(int argc, char** argv) {
    input_handler(argc, argv);
    pretty_print();
    return 0;
}
input.h
#include "data.h"
#include"func.h"
#include <getopt.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
void input_handler(int argc, char** argv);
data.h
#pragma once

void (*func)(void);
void (*input)(void);

static struct Matrix {
    int size;
    int** A;
} matrix;
GitHub:
https://github.com/sandderson/lab2
EDIT:
added include guards
Also some useful info:
I use Windows Subsystem for Linux.
I compile with a makefile, using the following sequence:
gcc -c func.c
gcc -c input.c
gcc -c main.c
gcc -c output.c
gcc main.o func.o input.o output.o -o Lab2.o
Your call to getopt_long uses "sdi" as the options string, which means that -s, -d and -i are possible options, and that none of them take an argument (since none are followed by a colon). See man getopt for details.
But when you are handling the -s option, you do:
matrix.size = atoi(optarg);
which assumes optarg will be set up to point to an argument. It isn't, because as far as getopt_long is concerned, -s doesn't take an argument. Thus, it has its initial value (NULL) and atoi attempts to use that as a string. Unsurprisingly, a segmentation fault results.
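For illustration, here is a sketch of an options string that makes -s take an argument; the long-option names are hypothetical, since I don't know your exact option definitions:
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* "s:" means -s requires an argument; getopt then fills in optarg. */
    static const struct option long_opts[] = {
        {"size", required_argument, NULL, 's'},
        {0, 0, 0, 0}
    };
    int size = 0;
    int opt;

    while ((opt = getopt_long(argc, argv, "s:di", long_opts, NULL)) != -1) {
        switch (opt) {
        case 's':
            size = atoi(optarg);   /* optarg is now guaranteed non-NULL */
            break;
        case 'd':
        case 'i':
            break;
        default:
            fprintf(stderr, "usage: %s [-s N] [-d] [-i]\n", argv[0]);
            return EXIT_FAILURE;
        }
    }
    printf("size = %d\n", size);
    return 0;
}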
Moreover, your attempt to bracket the error by inserting printf calls fails because you have failed to ensure that the printf is flushed to the actual output device. Stdio buffering makes printf a notoriously inaccurate tool for demonstrating the sequence of actions inside a program; you really cannot assume that an error preceded a call to printf just because the output from the printf was not visible.
Ideally, you should do both of the following (although either one would be sufficient in most cases):
Send debugging output to stderr using fprintf
Terminate debugging lines with a newline character
Eg: fprintf(stderr, "%s\n", "dlfkg");, although you could use a better message.
(Even if you do that, it is possible that the line output to the terminal is overwritten or otherwise fails to be presented as a result of a segfault which occurs soon afterwards. But your odds of seeing the message are a lot better.)
But if you do neither of those things, then the most likely outcome is that the printed characters will only be placed in the stdio buffer, where they will stay until the buffer becomes full or a newline is printed (if the device is line-buffered, for which there is no guarantee). When the program blows up as a result of the segfault, the stdio buffers vanish into thin air, so nothing ever gets printed. Thus the non-appearance of the line tells you precisely nothing about the sequence of events.
The small amount of extra typing would have been a lot less than asking this question here and responding to the resulting comments. Just sayin'

How does a 32-bit system call table entry point map to SYSCALL_DEFINE on x86_64?

I am digging deeper into system calls. I added a system call to both syscall_32.tbl and syscall_64.tbl:
syscall_32.tbl
434 i386 hello sys_hello __ia32_sys_hello
syscall_64.tbl
434 common hello __x64_sys_hello
Definition:
SYSCALL_DEFINE0(hello)
{
    pr_info("%s\n", __func__);
    pr_info("Hello, world!\n");
    return 0;
}
User space code:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    long return_value = syscall(434);
    printf("return value from syscall: %ld, errno: %d\n", return_value, errno);
    return 0;
}
When I run this user space code on x86_64, I get the following output in dmesg:
$ gcc userspace.c -o userspace
[ 800.837360] __x64_sys_hello
[ 800.837361] Hello, world!
But when I compile it for 32-bit, I get:
$ gcc userspace.c -o userspace -m32
[ 838.979286] __x64_sys_hello
[ 838.979286] Hello, world!
How come the entry point present in syscall_32.tbl (__ia32_sys_hello) maps to __x64_sys_hello?
On a 64-bit kernel, SYSCALL_DEFINE0 defines the compat (32-bit) and other ABI (e.g. x32 on x86_64) syscall entry points as aliases for the real 64-bit function. It does not define (and has no way to define; that's not how the preprocessor works) multiple functions built from a single body appearing after the ) of the macro evaluation. So __func__ expands to the name of the actual function that has __func__ written in it, not the name of the alias.
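A small userspace illustration of the aliasing point (using GCC's alias attribute, not the kernel's actual macros): calling the alias still reports the name of the real function, because __func__ belongs to the one function body that exists.
#include <stdio.h>

static long __x64_sys_hello(void)
{
    printf("%s\n", __func__);   /* always prints "__x64_sys_hello" */
    return 0;
}

/* __ia32_sys_hello is just another name for the same function body. */
static long __ia32_sys_hello(void) __attribute__((alias("__x64_sys_hello")));

int main(void)
{
    __x64_sys_hello();    /* prints __x64_sys_hello */
    __ia32_sys_hello();   /* also prints __x64_sys_hello */
    return 0;
}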
For SYSCALL_DEFINEx with x>0, it's more complicated since arguments have to be converted, and I believe wrappers are involved.
You can find all the magic in arch/x86/include/asm/syscall_wrapper.h (under the top-level kernel tree).
If you really want/need there to be separate functions, I believe there's a way to skip the magic and do it. But it makes your code harder to maintain since it may break when the mechanisms behind the magic break. It's likely preferable to probe whether the calling (current) userspace process is 32-bit or 64-bit and act differently according to that.
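For that last suggestion, one common way to branch on the caller's ABI inside the single handler is in_compat_syscall(); a sketch, assuming a reasonably recent kernel that provides it via <linux/compat.h>:
#include <linux/compat.h>
#include <linux/kernel.h>
#include <linux/syscalls.h>

SYSCALL_DEFINE0(hello)
{
    if (in_compat_syscall())
        pr_info("Hello from a 32-bit (compat) caller\n");
    else
        pr_info("Hello from a native 64-bit caller\n");
    return 0;
}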

Interleaved usleep() functions being executed together. Is this compiler optimization?

I have a code that does something similar to the following repeatedly over a loop:
$ cat test.c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
int main()
{
char arr[6] = {'h','e','l','l','o','!'};
for(int x=0; x<6 ; x++){
printf("%c",arr[x]);
usleep(1000000);
printf("%c",arr[x]);
usleep(1000000);
}
printf("\n");
return 0;
}
I see that the printf() calls appear to execute one after the other WITHOUT any delay (despite the usleep), and then the program sleeps for the total usleep time before the next iteration. It seems like all the usleep() calls happen together at the end.
I tried the -O0 flag in gcc, because I suspected this was the effect of compiler optimization. But I guess -O0 does not disable whatever optimization category this case falls under (if my guess about the compiler being the reason for this behavior is correct at all).
I am trying to understand the reason for this behavior and how to achieve the desired behavior from my program.
Note: I know it might be possible to replace usleep() with some compute-heavy function call that takes an equivalent amount of time, but that is not the solution I am looking for.
You are using usleep() wrong. Use sleep(1) instead.
From man usleep:
EINVAL usec is greater than or equal to 1000000. (On systems where that is considered an error.)
Once you fix that, you should call fflush(stdout) after printf() to avoid another surprise with output buffering.
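Putting both fixes together, the loop might look like this (sleep(1) for the one-second pause, plus fflush(stdout) so each character shows up before the sleep):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char arr[6] = {'h','e','l','l','o','!'};
    for (int x = 0; x < 6; x++) {
        printf("%c", arr[x]);
        fflush(stdout);   /* push the character out before sleeping */
        sleep(1);
        printf("%c", arr[x]);
        fflush(stdout);
        sleep(1);
    }
    printf("\n");
    return 0;
}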

Self-replicating code, how to implement different behavior in first iteration vs following ones?

So I'm having a tough time with a school project. The goal is to make a self-replicating program named Sully.c. That program must output its own source code (it's a quine) into a program named Sully_x.c, where x is an integer in the source code, then compile that program and execute it iff x > 0. x must decrement from one copy to the next, but not from the original Sully.c to Sully_5.c.
Here is my code so far:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int k = 5;
#define F1 int main(void){int fd = open("Sully_5.c", 0);if(fd != -1){close(fd);k-=1;}char buff[62];(sprintf)(buff, "Sully_%d.c", k);FILE *f = fopen(buff, "w");fprintf(f, "#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\nint k = %d;\n#define F1 %s\n#define F2(x) #x\n#define F3(x) F2(x)\nconst char *s = F3(F1);\nF1\n", k, s);fclose(f);(sprintf)(buff, "gcc -Wall -Wextra -Werror Sully_%d.c -o Sully_%d", k, k);system(buff);if (k != 0){(sprintf)(buff, "./Sully_%d", k);system(buff);}return 0;}
#define F2(x) #x
#define F3(x) F2(x)
const char *s = F3(F1);
F1
That code works and checks all the requirements for the program. However, I'm using a method that checks something other than the code itself: I'm checking whether Sully_5.c already exists. If it doesn't, x doesn't move; if it does, then x is decremented.
Another method would have been to use argv[0] or the macro __FILE__, but both these options are explicitly forbidden for the assignment and considered cheating.
But apparently there are other methods that don't require any of the above techniques. I can't think of any, because if Sully.c and Sully_5.c need different behaviors from the same source code, then there must be something external that influences the code's behavior, or so goes my hypothesis.
Am I right? Wrong? How else could this be done?
... there must be an external variable that needs to influence the code behavior
How else could this be done?
You can define (or not define) some preprocessor macros on the command line (e.g. -Daze or -Daze=12, etc.) to generate different code via conditional compilation, without changing the source.
The execution can also use the argument(s) given to the program when it is run to change its behavior
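A minimal illustration of the first idea (the macro name FIRST_RUN is made up for this example, and the quine machinery is omitted):
#include <stdio.h>

int main(void)
{
/* Build the original with: gcc -DFIRST_RUN Sully.c -o Sully
 * and have the generated compile command omit -DFIRST_RUN,
 * so the same source behaves differently in the two cases. */
#ifdef FIRST_RUN
    puts("original run: keep x unchanged");
#else
    puts("copy: decrement x");
#endif
    return 0;
}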

How does C handle complex equations if using REALs

This is probably an easy one for you guys, but I couldn't find a definitive answer and I just want to be sure I'm not overlooking anything. I have an equation which I know permits complex solutions, but I've programmed it in C using "double" and/or "float". Does C simply ignore the complex part if I don't use "complex" types? In other words, does it simply return the real part? Will it generate any errors by not using "complex"? Thanks.
There are 'complex' and 'imaginary' data types in C. However, since they were introduced relatively recently (C99), some older systems might not support them, so it's best to handle that kind of solution explicitly.
If you perform an operation like sqrt(-1), whose result is not real, it will generate a domain error rather than a complex result.
The following post most probably answers your queries better: How to work with complex numbers in C?
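If you actually want the complex result rather than an error, C99's <complex.h> gives it to you directly; a tiny example using csqrt() (link with -lm on most systems):
#include <complex.h>
#include <stdio.h>

int main(void)
{
    double complex z = csqrt(-1.0);   /* 0 + 1i, no domain error */
    printf("csqrt(-1) = %f + %fi\n", creal(z), cimag(z));
    return 0;
}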
The documentation for sqrt() (if you read it) tells you it returns a domain error.
You can find this out for yourself with a test case:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    double foo = -1.234;
    double foo_sqrt = sqrt(foo);

    if (errno == EDOM) {
        fprintf(stderr, "Error: EDOM - Mathematics argument out of domain of function (POSIX.1, C99)\n");
        return EXIT_FAILURE;
    }

    /* we never get here */
    fprintf(stdout, "sqrt(%f) = %f\n", foo, foo_sqrt);
    return EXIT_SUCCESS;
}
Then compile and run:
$ gcc -lm -std=c99 -Wall sqrt_test.c -o sqrt_test
$ ./sqrt_test
Error: EDOM - Mathematics argument out of domain of function (POSIX.1, C99)
$ echo $?
1
