I've created a publisher-subscriber communication scheme with ZeroMQ and I noticed one small issue with my server and client programs.
I know there is no try/catch in C (according to my brief research); however, having the two while(1) loops below without any exception handling seems dangerous to me.
Taking into account the following code snippets, what would be the most correct way to handle an error (inside the while)? With the structure I have right now (as you can see below), the zmq_close and zmq_ctx_destroy calls will never execute, but I want them to run in case of a program error/exception (whatever the origin).
Note: In this architecture I have one client listening to multiple publishers, thus the for cycles in the Client code.
Server
(...inside main)
while (1) {
    char update[20];
    sprintf(update, "%s", "new_update");
    s_send(publisher, update);
    sleep(1);
}
zmq_close(publisher);
zmq_ctx_destroy(context);
return 0;
Client
(...inside main)
while (1) {
    for (c = 1; c < server_num; c = c + 1) {
        char *msg = s_recv(subscribers[c]);
        if (msg) {
            printf("%s\n", msg);
            free(msg);
        }
        sleep(1);
    }
}
for (c = 0; c < server_num; c = c + 1)
    zmq_close(subscribers[c]);
zmq_ctx_destroy(context);
return 0;
In case one has never worked with ZeroMQ,
or has never met the concept of the art of the Zen-of-Zero,
one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
The error-handling tag being present ... :
Q : what would be the most correct way to handle an exception (inside the while)?
The best strategy is error-prevention rather than any kind of "reactive" (ex-post exception) handling.
Always assume things may and will wreak havoc, and let them fail cheaply. The cheaper the cost of a failure, the sooner the system can return to its intended behaviour.
That said, in modern low-latency distributed systems, and all the more in real-time systems, an exception is an extremely expensive, disruptive element of the designed code-execution flow.
For these reasons, and to allow sustained levels of the utmost performance too,
ZeroMQ has always taken a very different approach:
0)
Better use zmq_poll() as the cheapest possible detection of the presence (or absence) of any readable message (already delivered and ready to be received), before ever, if at all, calling zmq_recv() to fetch such data from inside the Context()-instance's internal storage into your application-level code's hands.
1)
Depending on your language binding (wrapper), best enjoy the non-blocking forms of the .poll(), .send() and .recv() methods. The native API is the most straightforward in always going in this mode with retCode = zmq_recv( ..., ZMQ_NOBLOCK ); (spelled ZMQ_DONTWAIT in current libzmq).
2)
Always analyse the retCode - be it silently, or in an explanatory assert( retCode == 0 ), reporting zmq_errno() on failure, or otherwise.
3)
Best review and fine-tune all configuration attributes of the instantiated tools available from the ZeroMQ framework, and harness all their hidden strengths to best match your application domain's needs. Many native API settings may help mitigate, if not principally avoid, lots of colliding requirements right inside the Context()-engine instance, so do not hesitate to learn all the details of the possible settings and use them to your code's best advantage.
Without doing all of the above, your code is not making the best of the Zen-of-Zero.
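For illustration, a minimal sketch of points 0) to 2), assuming an already-connected SUB socket named subscriber (a name assumed for the example, not taken from the question's code):

// Poll first, then receive non-blockingly; error handling reduced to a minimum.
zmq_pollitem_t items[] = { { subscriber, 0, ZMQ_POLLIN, 0 } };

int rc = zmq_poll( items, 1, 100 );                   // wait at most 100 ms
if ( rc > 0 && ( items[0].revents & ZMQ_POLLIN ) )
{
    char buf[256];
    int  n = zmq_recv( subscriber, buf, sizeof( buf ) - 1, ZMQ_DONTWAIT );
    if ( n >= 0 )
    {   // zmq_recv() may report a size larger than the buffer on truncation,
        // so cap the index before terminating the string:
        buf[ n < (int) sizeof( buf ) - 1 ? n : (int) sizeof( buf ) - 1 ] = '\0';
        printf( "%s\n", buf );
    }
    else if ( zmq_errno() != EAGAIN )
    {
        // a real error: record retCode / zmq_errno() and leave the loop
    }
}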
Q : With the structure I have right now (...), the zmq_close and zmq_ctx_destroy will never execute, but I want them to, in case of a program error/exception (whichever the origin).
It is fair enough to set an explicit flag:
bool DoNotExitSoFar = true;        /* needs #include <stdbool.h> */

while ( DoNotExitSoFar ) {
   // Do whatever you need
   // Record return-codes, always
      retCode = zmq_...( ... );

   // Test/Set the explicit flag upon a context of retCode and zmq_errno()
      if ( retCode == -1
        && zmq_errno() == EPROTONOSUPPORT ) {
           // take all due measures needed
           ...
           // FINALLY: Set
           DoNotExitSoFar = false;
      }
}
// --------------------------------------------- GRACEFUL TERMINATION .close()
if ( zmq_close( ... ) == -1
  && zmq_errno() == ENOTSOCK ) { ...; }
...
// --------------------------------------------- GRACEFUL TERMINATION .term()
retCode = zmq_ctx_term( ... );
if ( retCode == -1 && zmq_errno() == EINTR  ) { ...; }
if ( retCode == -1 && zmq_errno() == EFAULT ) { ...; }
...
Other tooling, like int atexit(void (*func)(void));, may serve as a last resort for calling zmq_close() or zmq_ctx_term() as late as possible (ALAP).
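A minimal sketch of that last-resort route, assuming publisher and context are moved to file scope so the registered handler can reach them:

#include <stdlib.h>
#include <zmq.h>

static void *context   = NULL;   // set in main()
static void *publisher = NULL;   // set in main()

static void last_resort_cleanup( void )
{
    if ( publisher ) zmq_close( publisher );
    if ( context )   zmq_ctx_term( context );
}

int main( void )
{
    atexit( last_resort_cleanup );   // runs on return from main() and on exit()
    // ... create context / publisher, run the work-loop ...
    return 0;
}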
You are correct: C has no concept of try/catch, but that shouldn't be an issue. It just means you need to handle errors in the s_send() and s_recv() routines (so, for example, if something unexpected happens, like malloc() returning NULL, you have to deal with it and either continue processing or return).
I would also suggest you look at the poll() or select() system calls for your client instead of doing a looping poll. It's much more elegant to only service the file descriptors that have data waiting to be read.
The idiomatic way in C to check for errors is to look at the return value, and then check errno if the return value is negative.
// ... your previous code
int ret = zmq_ctx_destroy(context);
if (ret < 0) {
    // Process your error here
    printf("My error message is: %s\n", strerror(errno));
}
You may need to add #include <errno.h> and #include <string.h> if they're not already in your program.
You can also read strerror documentation.
Now to address this part of the question:
Taking into account the following code snippets, what would be the most correct way to handle an exception (inside the while)? With the structure I have right now (as you can see below), the zmq_close and zmq_ctx_destroy will never execute, but I want them to, in case of a program error/exception (whichever the origin).
All zmq_* functions will return an error and set errno. Check every function and break when an error occurs. In that scenario, polling with non-blocking calls is the best way to break out of your while loop when an error occurs.
On Linux, you can also set a signal handler and execute a clean-up routine when a signal is raised (for example, it is very common to catch SIGINT to properly exit a program on UNIX on Ctrl+C in the console). See this answer.
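A minimal sketch of that approach, applied to the question's server loop (the flag and handler names are illustrative):

#include <signal.h>
#include <zmq.h>

static volatile sig_atomic_t keep_running = 1;

static void on_sigint( int sig )
{
    (void) sig;
    keep_running = 0;   // only set a flag; do the clean-up outside the handler
}

int main( void )
{
    signal( SIGINT, on_sigint );

    void *context   = zmq_ctx_new();
    void *publisher = zmq_socket( context, ZMQ_PUB );
    // ... bind, etc. ...

    while ( keep_running ) {
        // ... s_send()/s_recv(), checking return codes; a blocking call
        //     interrupted by the signal returns -1 with errno == EINTR
    }

    zmq_close( publisher );        // now reached on Ctrl+C as well
    zmq_ctx_destroy( context );
    return 0;
}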
Related
I'm trying to understand how curl_multi_perform works.
The documentation says that:
This function performs transfers on all the added handles that need
attention in an non-blocking fashion. The easy handles have previously
been added to the multi handle with curl_multi_add_handle.
When an application has found out there's data available for the
multi_handle or a timeout has elapsed, the application should call
this function to read/write whatever there is to read or write right
now etc.
Question 1: What does "the application should call" mean? How can an application call something? Did you mean the programmer should call?
OK, I found two simple usage examples of curl_multi_perform:
1 - https://everything.curl.dev/libcurl/drive/multi
int transfers_running;
do {
    curl_multi_wait(multi_handle, NULL, 0, 1000, NULL);
    curl_multi_perform(multi_handle, &transfers_running);
} while (transfers_running);
2 - a second example:
int still_running;
do {
    CURLMcode mc = curl_multi_perform(multi_handle, &still_running);

    if (!mc && still_running)
        /* wait for activity, timeout or "nothing" */
        mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL);

    if (mc) {
        fprintf(stderr, "curl_multi_poll() failed, code %d.\n", (int)mc);
        break;
    }
    /* if there are still transfers, loop! */
} while (still_running);
Firstly:
In the first example, curl_multi_perform is called after curl_multi_wait.
In the second example, curl_multi_perform is called before the wait call (curl_multi_poll).
Nothing is clear.
Secondly:
Why do I need to call curl_multi_perform in a loop? I do not understand.
Why is one call not enough?
Question 1: What does "the application should call" mean? How can an application call something? Did you mean the programmer should call?
Programmers don't call functions. Programmers write programs that tell the computer what to do. So this means that the programmer should write code that tells the application to call the function.
In the first example, curl_multi_perform is called after curl_multi_wait.
In the second example, curl_multi_perform is called before the wait call (curl_multi_poll).
Either order works. As the documentation says:
This function does not require that there actually is any data available for reading or that data can be written, it can be called just in case.
If there's nothing available, it will simply return immediately, updating transfers_running.
Why do I need to call curl_multi_perform in a loop? I do not understand.
Because multiple transfers are in progress. curl_multi_wait() returns as soon as data is available on any of them. After you process that data, you need to continue waiting for other transfers.
Also, curl_multi_perform doesn't wait for transfers to complete; it processes partial data as it arrives. So you have to keep calling it until everything has been sent or received.
I've gotten ideas for multiple projects recently that all involve reading IP addresses from a file. Since they are all supposed to be able to handle a large number of hosts, I've attempted to implement multi-threading, or to create a pool of sockets and select() from them, in order to achieve some form of concurrency for better performance. On multiple occasions, reading from the file seems to be the bottleneck in enhancing performance. The way I understand it, reading from a file with fgets or similar is a synchronous, blocking operation. So even if I successfully implemented a client that connects to multiple hosts asynchronously, the operation would still be synchronous, because I can only read one address at a time from the file.
/* partially pseudo code */
/* getaddrinfo() stuff here */
while (fgets(ip, sizeof(ip), file)) {
    FD_ZERO(&readfds);
    /* create n sockets here in a for loop */
    for (i = 0; i < socket_num; i++) {
        if (newfd > fd[i]) newfd = fd[i];
        FD_SET(fd[i], &readfds);
    }
    /* here's where I think I should connect n sockets to n addresses from file
     * but I'm only getting one IP at a time from file, so I'm not sure how to connect to
     * n addresses at once with fgets
     */
    for (j = 0; j < socket_num; j++) {
        if ((connect(socket, ai->ai_addr, ai->ai_addrlen)) == -1)
            ; // error
        else {
            freeaddrinfo(ai);
            FD_SET(socket, &master);
            fdmax = socket;
            if (select(socket+1, &master, NULL, NULL, &tv) == -1)
                ; // error
            if ((recvd = read(socket, banner, RECVD)) <= 0)
                ; // error
            if (FD_ISSET(socket, &master))
                ; // print success
        }
    }
    /* clear sets and close sockets and stuff */
}
I've pointed out my issues with comments, but just to clarify: I'm not sure how to perform asynchronous I/O operations on multiple target servers read from a file, since reading entries from a file seems to be strictly synchronous. I've run into similar issues with multithreading, with a marginally better degree of success.
void *function_passed_to_pthread_create(void *opts)
{
    while (fgets(ip_addr, sizeof(ip_addr), opts->file)) {
        /* speak to ip_addr and get response */
    }
}
main()
{
    /* necessary stuff */

    for (i = 0; i < thread_num; i++) {
        pthread_create(&tasks, NULL, above_function, opts);
    }

    for (j = 0; j < thread_num; j++)
        /* join threads */

    return 0;
}
This seems to work, but since multiple threads are all processing the same file the results aren't always accurate. I imagine it's because multiple threads may process the same address from file at the same time.
I've considered loading all the entries from the file into an array/into memory, but if the file were particularly large I imagine that could cause memory issues. On top of that, I'm not sure that it even makes sense to do anyway.
As a final note; if the file I'm reading from happens to be a particularly large file with a huge amount of IPs then I do not believe either solution scales well. Anything is possible with C though, so I imagine there is some way to achieve what I'm hoping to.
To sum this post up: I'd like to find a way to improve a client-side application's performance using asynchronous I/O or multi-threading when reading entries from a file.
Several people have hinted at a good solution to this in their comments, but it's probably worth spelling it out in more detail. The full solution has quite a lot of details and is pretty complicated code, so I'm going to use pseudocode to explain what I'd recommend.
What you have here is really a variation on a classic producer/consumer problem: You have a single thing producing data, and many things trying to consume that data. In your case, it must be a "single thing" producing that data, because the lengths of each line of the source file are unknown: You can't just jump forward 'n' bytes and somehow be at the next IP. There can only be one actor at a time moving the read pointer toward the next unknown position of the \n, so you by definition have a single producer.
There are three general ways to attack this:
Solution A involves having each thread pulling a little more out of a shared file buffer, and kicking off an asynchronous (nonblocking) read every time the last read completes. There are a whole host of headaches getting this solution right, as it's very sensitive to timing differences between the filesystem and the work being performed: If the file reads are slow, the workers will all stall waiting for the file. If the workers are slow, the file reader will either stall or fill up memory waiting for them to consume the data. This solution is likely the absolute fastest, but it's also incredibly difficult synchronization code to get right with about a zillion caveats. Unless you're an expert in threading (or extremely clever abuse of epoll_wait()), you probably don't want to go this route.
Solution B has a "master" thread, responsible for reading the file, and populating some kind of thread-safe queue with the data it reads, with one IP address (one string) per queue entry. Each of the worker threads just consumes queue entries as fast as it can, querying the remote server and then requesting another queue entry. This requires a little care to get right, but is generally a lot safer than Solution A, especially if you use somebody else's queue implementation.
Solution C is pretty hacktastic, but you shouldn't dismiss it out-of-hand, depending on what you're doing. This solution just involves using something like the Un*x sed command (see Get a range of lines from a file given the start and end line numbers) to slice your source file into a bunch of "chunky" source files in advance — say, twenty of them. Then you just run twenty copies of a really simple single-thread program in parallel using &, each on a different "slice" of file. Mushed together with a little shell script to automate it, this can be a "good enough" solution for a lot of needs.
Let's take a closer look at Solution B — a master thread with a thread-safe queue. I'm going to cheat and assume you can construct a working queue implementation (if not, there are StackOverflow articles on implementing a thread-safe queue using pthreads: pthread synchronized blocking queue).
In pseudocode, this solution is then something like this:
main()
{
    /* Create a queue. */
    queue = create_queue();

    /* Kick off the master thread to read the file, and give it the queue. */
    master_thread = pthread_create(master, queue);

    /* Kick off a bunch of workers with access to the queue. */
    for (i = 0; i < 20; i++) {
        worker_thread[i] = pthread_create(worker, queue);
    }

    /* Wait for everybody to finish. */
    pthread_join(master_thread);
    for (i = 0; i < 20; i++) {
        pthread_join(worker_thread[i]);
    }
}

void master(queue q)
{
    FILE *fp = fopen("ips.txt", "r");
    char buffer[BIGGER_THAN_ANY_IP];

    /* Inhale the file as fast as we can, and push each line we
       read onto the queue. */
    while (fgets(buffer, sizeof(buffer), fp) != NULL) {
        char *next_ip = strdup(buffer);
        enqueue(q, next_ip);
    }

    /* Add some final messages in the queue to let the workers
       know that we're out of data. There are *much* better ways
       of notifying them that we're "done", but in this case,
       pushing a bunch of NULLs equal to the number of threads is
       simple and probably good enough. */
    for (i = 0; i < 20; i++) {
        enqueue(q, NULL);
    }
}

void worker(queue q)
{
    char *ip;

    /* Inhale messages off the queue as fast as we can until
       we get a "NULL", which means that it's time to stop.
       The call to dequeue() *must* block if there's nothing
       in the queue; the call should only return NULL if the
       queue actually had NULL pushed into it. */
    while ((ip = dequeue(q)) != NULL) {
        /* Insert code to actually do the work here. */
        connect_and_send_and_receive_to(ip);
    }
}
There are plenty of caveats and details in a real implementation (like: how do we implement the queue, ring buffers or a linked list? what if the text isn't all IPs? what if the char buffer isn't big enough? how many threads is enough? how do we deal with file or network errors? will malloc performance become a bottleneck? what if the queue gets too big? can we do better to overlap the network I/O?).
But, caveats and details aside, the pseudocode I presented above is a good enough starting point that you likely can expand it into a working solution.
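If you do want to build the queue yourself, here is one minimal sketch of the blocking enqueue()/dequeue() pair the pseudocode assumes: a fixed-capacity ring buffer guarded by a mutex and two condition variables (capacity and names are illustrative, and error checking is omitted):

#include <pthread.h>

#define QUEUE_CAP 1024

typedef struct {
    char           *items[QUEUE_CAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} queue_t;

void queue_init(queue_t *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

void enqueue(queue_t *q, char *item)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAP)              /* block while full */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

char *dequeue(queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                      /* block while empty */
        pthread_cond_wait(&q->not_empty, &q->lock);
    char *item = q->items[q->head];            /* may be the NULL sentinel */
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return item;
}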
Read IPs from the file, have worker threads, and keep handing IPs to the worker threads; let all socket communication happen in the worker threads. Also, if the IPv4 addresses are stored in hex format instead of ASCII, you can probably read several of them in a single shot, and it would be faster.
If you just want to read asynchronously you can use getch() from ncurses with the delay set to 0. It is part of the curses standard, so you don't need any unusual dependencies. You also have unlocked_stdio.
On the other hand, I have to wonder why fgets() is a bottleneck. As long as there is data in the file it should not block. And even if the data is huge (like 1 MB, or 100k IP addresses), reading it into a list at startup should take less than 1 second.
And why are you opening socket_num connections to every IP in the list? You end up with socket_num multiplied by the number of IP addresses open at the same time. Since every socket is a file descriptor on Linux, you will hit system limits when you try to open more than several thousand files (see ulimit -Sn). Can you confirm that the issue is not in connect() in that case?
Is this the correct way to do error handling in OpenSSL?
And what is the difference between SSL_get_error and ERR_get_error?
The docs are quite vague in this regard.
int ssl_shutdown(SSL *ssl_connection)
{
    int rv, err;

    ERR_clear_error();

    rv = SSL_shutdown(ssl_connection);
    if (rv == 0)
        SSL_shutdown(ssl_connection);

    if (rv < 0)
    {
        err = SSL_get_error(ssl_connection, rv);
        if (err == SSL_ERROR_SSL)
            fprintf(stderr, "%s\n", ERR_error_string(ERR_get_error(), NULL));
        fprintf(stderr, "%s\n", SSL_state_string(ssl_connection));
        return 1;
    }

    SSL_free(ssl_connection);
    return 0;
}
SSL_get_error:
SSL_get_error() returns a result code (suitable for the C "switch"
statement) for a preceding call to SSL_connect(), SSL_accept(),
SSL_do_handshake(), SSL_read(), SSL_peek(), or SSL_write() on ssl. The
value returned by that TLS/SSL I/O function must be passed to
SSL_get_error() in parameter ret.
ERR_get_error:
ERR_get_error() returns the earliest error code from the thread's
error queue and removes the entry. This function can be called
repeatedly until there are no more error codes to return.
So the latter is for more general use, and the two shouldn't be mixed carelessly, because:
The current thread's error queue must be empty before the TLS/SSL I/O operation is attempted, or SSL_get_error() will not work reliably.
So you have to read all of the errors using ERR_get_error and handle them (or ignore them by removing them, as you did in your code sample with ERR_clear_error), and then perform the I/O operation. Your approach seems to be correct, although I can't check all aspects of it myself at the moment.
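For example, a minimal sketch that drains the queue before the I/O call instead of silently clearing it:

#include <openssl/err.h>
#include <stdio.h>

unsigned long e;
while ((e = ERR_get_error()) != 0)        /* empties the thread's error queue */
    fprintf(stderr, "stale OpenSSL error: %s\n", ERR_error_string(e, NULL));
/* ... now perform the TLS I/O call; SSL_get_error() is reliable afterwards */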
Refer to this answer and this post for more information.
EDIT: according to this tutorial, BIO_ routines may generate errors and affect the error queue:
The third field is the name of the package that generated the error,
such as "BIO routines" or "bignum routines".
And what is the difference between SSL_get_error and ERR_get_error?
There are two logical parts to OpenSSL. First is the SSL library, libssl.a (and libssl.so), and it includes the communication related stuff. Second is the cryptography library, libcrypto.a (and libcrypto.so), and it includes big numbers, configuration, input/output, etc.
libssl.a depends upon libcrypto.a, and that's why the link command is ordered as -lssl -lcrypto.
You use SSL_get_error to retrieve most errors from the SSL portion library, and you use ERR_get_error to retrieve errors not in the SSL portion of the library.
Is this the correct way to do error handling in OpenSSL?
The code you showed is closer to "how do you shut down an SSL socket". Ultimately, the gyrations control two cases. The first is a half-open connection, when the client shuts down without sending the close_notify message. The second is your program's behavior when sending the close_notify message.
It's hard to answer "is it correct" because we don't know the behavior you want. If you don't care whether the close_notify is sent, then I believe you only need to call SSL_shutdown once, regardless of what the client does.
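A minimal sketch of that simpler variant, in which we send our close_notify but do not wait for the peer's:

int rv = SSL_shutdown(ssl_connection);   /* sends our close_notify */
if (rv < 0)
{
    /* a real error: inspect SSL_get_error(ssl_connection, rv) */
}
/* rv == 0 just means the peer's close_notify has not arrived yet,
   which is fine if we don't care about the bidirectional shutdown */
SSL_free(ssl_connection);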
When writing code I often have checks to see if errors occurred. An example would be:
char *x = malloc( some_bytes );
if( x == NULL ){
    fprintf( stderr, "Malloc failed.\n" );
    exit(EXIT_FAILURE);
}
I've also used strerror( errno ) in the past.
I've only ever written small desktop applications where it doesn't matter if the program exit()ed in case of an error.
Now, however, I'm writing C code for an embedded system (Arduino) and I don't want the system to just exit in case of an error. I want it to go to a particular state/function where it can power down systems, send error reports and idle safely.
I could simply call an error_handler() function, but I could be deep in the stack and very low on memory, leaving error_handler() inoperable.
Instead, I'd like execution to effectively collapse the stack, free up a bunch of memory and start sorting out powering down and error reporting. There is a serious fire risk if the system doesn't power down safely.
Is there a standard way that safe error handling is implemented in low memory embedded systems?
EDIT 1:
I'll limit my use of malloc() in embedded systems. In this particular case, the errors would occur when reading a file, if the file was not of the correct format.
Maybe you're waiting for the Holy and Sacred setjmp/longjmp, the one who came to save all the memory-hungry stacks of their sins?
#include <setjmp.h>

jmp_buf jumpToMeOnAnError;

void someUpperFunctionOnTheStack() {
    if (setjmp(jumpToMeOnAnError) != 0) {
        // Error handling code goes here
        // Return, abort(), while(1) {}, or whatever here...
    }

    // Do routine stuff
}

void someLowerFunctionOnTheStack() {
    if (theWorldIsOver)
        longjmp(jumpToMeOnAnError, -1);
}
Edit: Prefer not to do malloc()/free() on embedded systems, for the same reasons you said. It's simply unmanageable. Unless you use a lot of return codes/setjmp()s to free the memory all the way up the stack...
If your system has a watchdog, you could use:
char *x = malloc( some_bytes );
assert(x != NULL);
The implementation of assert() could be something like:
#define assert(condition) \
    if (!(condition)) while (true) {}
In case of a failure, the watchdog would trigger and the system would reset. At restart, the system would check the reset reason; if the reset reason was "watchdog reset", the system would go to a safe state.
update
Before entering the while loop, assert could also output an error message, print the stack trace, or save some data in non-volatile memory.
Is there a standard way that safe error handling is implemented in low memory embedded systems?
Yes, there is an industry de facto way of handling it. It is all rather simple:
For every module in your program you need to have a result type, such as a custom enum, which describes every possible thing that could go wrong with the functions inside that module.
You document every function properly, stating what codes it will return upon error and what code it will return upon success.
You leave all error handling to the caller.
If the caller is another module, it too passes on the error to its own caller. Possibly renames the error into something more suitable, where applicable.
The error handling mechanism is located in main(), at the bottom of the call stack.
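A minimal sketch of what such a per-module result type might look like (the "sensor" module and its codes are purely illustrative):

#include <stdint.h>

/* Every possible failure of this module, enumerated in one place. */
typedef enum {
    SENSOR_OK,
    SENSOR_ERR_TIMEOUT,
    SENSOR_ERR_CRC,
} sensor_result_t;

/* Documented to return SENSOR_OK on success; the caller, not this
   module, decides how to react to the error codes. */
sensor_result_t sensor_read(uint16_t *out_value)
{
    /* ... talk to the hardware; on a bus timeout we would
       return SENSOR_ERR_TIMEOUT; ... */
    *out_value = 0;
    return SENSOR_OK;
}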
This works well together with classic state machines. A typical main would be:
void main (void)
{
    for (;;)
    {
        serve_watchdog();

        result = state_machine();

        if (result != good)
        {
            error_handler(result);
        }
    }
}
You should not use malloc in bare-metal or RTOS microcontroller applications; not so much for safety reasons, but simply because it doesn't make any sense whatsoever to use it. Apply common sense when programming.
Use setjmp(3) to set a recovery point, and longjmp(3) to jump to it, restoring the stack to what it was at the setjmp point. It won't free malloc'ed memory.
Generally, it is not a good idea to use malloc/free in an embedded program if it can be avoided. For example, a static array may be adequate, or even using alloca() is marginally better.
To minimize stack usage:
Write the program so that the calls are in parallel rather than nested (function calls sub-function, which calls sub-function, which calls sub-function...). I.e. the top-level function calls a sub-function, the sub-function promptly returns with status info, then the top-level function calls the next sub-function, etc.
The (bad for stack-limited systems) nested method of program architecture:

top level function
    second level function
        third level function
            fourth level function

should be avoided in embedded systems.
The preferred method of program architecture for embedded systems is:

top level function (the reset event handler)
    (variations in the following depending on if 'warm' or 'cold' start)
    initialize hardware
    initialize peripherals
    initialize communication I/O
    initialize interrupts
    initialize status info
    enable interrupts
    enter background processing

interrupt handler
    re-enable the interrupt
    using a 'scheduler'
        select a foreground function
        trigger dispatch for selected foreground function
    return from interrupt

background processing
    (this can be, and often is, implemented as a 'state' machine rather than a loop)
    loop:
        if status info indicates need to call second level function 1
            second level function 1, which updates status info
        if status info indicates need to call second level function 2
            second level function 2, which updates status info
        etc
    end loop

Note that, as much as possible, there is no 'third level function x'.
Note that the foreground functions must complete before they are again scheduled.
Note: there are lots of other details that I have omitted in the above, like
    kicking the watchdog,
    the other interrupt events,
    'critical' code sections and use of mutexes,
    considerations between 'soft real-time' and 'hard real-time',
    context switching,
    continuous BIT, commanded BIT, and error handling,
    etc.
For example, if a call to msgsnd/msgrcv fails:
How should the errno be handled; what is the best way?
What principles apply to a commercial product?
Do I have to cover all of the possible errno values?
What kinds of errors must be handled? Do I have to write a signal handler for EINTR, or something like that?
Here's my straw-man code:
RetVal = msgrcv( ... );
if( RetVal == -1 )
{
    switch (errno)
    {
        case E2BIG:
            ...
        case EAGAIN:
            ...
        case EFAULT:
            ...
        case EIDRM:
            ...
        case EINTR:
            ...
        case EINVAL:
            ...
        case ENOMEM:
            ...
        default:
            ...
    }
}
This depends on the coding standards you want to apply, and how you might reasonably respond to the failures.
You should always check errors, but you might commonly only handle one or two of them such as EINTR. I would at least try to print some kind of diagnostic last-gasp message before violently exiting in the case of unexpected errors.
The more critical the software, the more carefully designed it needs to be, and more comprehensive error handling is part of that.
Since your tags are "C" and "Linux", I assume you're using glibc, in which case have a look at the handy %m in printf.
Obviously this is too simple for some cases, but until your program is finished, something like this is a good stub to have.
if (RetVal == -1) {
    perror("message receive");
    exit(1);
}
Typically, one only looks at the exact error if a specific recovery is called for in that case. Until you have some code that needs to be conditional on the exact type of error, you should simply decide between...
Silently ignore the error
Warn, and then continue
Complain, and then exit
See also...
the nonstandard-but-useful err(3).
setjmp, longjmp, sigsetjmp, et al
It depends on your code, what you can do about the error (similar to an exception), and what error you have received. For example, EAGAIN is not strictly an error (it denotes that you tried a non-blocking operation and it would block).
If it is a quick program you may do nothing (say, you're just playing with the API). If it has a GUI it might display a message (say, "disk is full" or "cannot connect to network"), etc.
If the question had an ultimate answer, there would be no need for errno: the system call could handle the error itself.
The basic Linux system calls almost universally return -1 on error, and 0 or a positive value on success; errno is set to one of the predefined values. So checking system calls for failure is pretty easy and should be done consistently. Checking errno for the type of error should be done for the errors you can handle in your program itself. For other errors, it is best to inform the user that something went wrong and show them the error message. strerror() in string.h takes the error number as a parameter and returns a pointer to a string describing the error.
#include <string.h>

char *strerror(int errnum);
After reporting the error, whether to continue running the program or to exit it with
exit(1);
depends on the severity of the error.
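Putting the pieces together, a minimal sketch for the question's msgrcv() case (msqid and buf are assumed to be set up elsewhere):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/msg.h>

ssize_t n = msgrcv(msqid, &buf, sizeof(buf.mtext), 0, 0);
if (n == -1) {
    if (errno == EINTR) {
        /* interrupted by a signal: usually just retry the call */
    } else {
        fprintf(stderr, "msgrcv: %s\n", strerror(errno));   /* report ...  */
        exit(1);                                            /* ... give up */
    }
}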