In my code, I need SIGALRM to be raised after 2 seconds. However, when I use ualarm(2000000, 0) it doesn't work. Using ualarm with values under 1 second works, whereas alarm(2) works. Is there a reason why ualarm should ever be used over alarm? Is there any way to get ualarm to work for over 1 second?
ualarm() is obsolete, and in fact has been removed from POSIX. Do not use it.
If you insist on using it anyway, the Linux manual page for it notes this:
The type useconds_t is an unsigned integer type capable of holding integers in the range [0,1000000].
I take that to be a reference to the onetime POSIX specification. What you should take from it is that POSIX ualarm() was never guaranteed to be able to handle a first argument larger than 1000000. It's unclear whether any implementations ever did handle larger values, but the fact that POSIX specified it as it did suggests that at least some implementations did not.
Is there any way to get ualarm to work for over 1 second?
Since there is not (any longer) any standard for ualarm(), the answer is necessarily implementation-dependent. Based on what you presented, I'm inclined to think that with your implementation, the answer is "no".
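If all you actually need is a SIGALRM two seconds from now, one alternative is setitimer(), which is itself marked obsolescent in POSIX.1-2008 (in favour of timer_settime()) but is far more widely supported than ualarm() and takes seconds plus microseconds. A minimal sketch; the helper name is mine:

#include <sys/time.h>

/* Arm a one-shot SIGALRM 2 seconds from now. */
static void arm_two_second_alarm(void)
{
    struct itimerval tv = { 0 };

    tv.it_value.tv_sec  = 2;    /* first (and only) expiry after 2 s */
    tv.it_value.tv_usec = 0;
    /* it_interval left zero => the timer does not repeat */

    setitimer(ITIMER_REAL, &tv, NULL);   /* ITIMER_REAL delivers SIGALRM */
}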
Related
The title field isn't long enough to capture a detailed question, so for the record, my actual question defines "unreasonable" in a specific way:
Is it legal¹ for a C implementation to have an arithmetic right shift operator that returns different results, over time, for identical argument values? That is, must >> be a true function?
Imagine you wanted to write portable code using right-shift >> on signed values in C. Unfortunately for you, the most efficient implementation of some critical part of your algorithm relies on signed right shifts filling in the sign bit (i.e., arithmetic right shifts). Since this behavior is implementation-defined, you are kind of screwed if you want to write portable code that takes advantage of it.
Just reading the compiler's documentation (which the standard requires it to provide, but which may or may not actually be easy to access, or even exist) is great if you know all the compilers you run on and will ever run on, but since that is often impossible, you might look for a way to do this portably.
One way I thought of is to simply test the compiler's behavior at runtime: if it appears to implement arithmetic² right shift, use the optimized algorithm, but if not, use a fallback that doesn't rely on it. Of course, just checking that, say, (short)0xFF00 >> 4 == 0xFFF0 isn't enough, since it doesn't exclude that perhaps char or int values work differently, or even the weird case that it fills for some shift amounts or values but not for others³.
So given that, a comprehensive approach would be to exhaustively check all shift values and amounts. For all 8-bit char inputs that's only 2^8 LHS values and 2^3 RHS values, for a total of 2^11, while short (typically, but let's say int16_t if you want to be pedantic) totals only 2^20, which could be validated in a fraction of a second on modern hardware⁴. Doing all 2^37 32-bit combinations would take a few seconds on decent hardware, but is still possibly reasonable. 64-bit is out for the foreseeable future, however.
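For what it's worth, a brute-force check of the 16-bit case along those lines might look something like this sketch (it assumes two's complement, and the function name is made up):

#include <stdbool.h>
#include <stdint.h>

/* Returns true if >> sign-fills for every negative int16_t value and every
   shift amount 0..15. "expected" is computed using only shifts of
   non-negative values, which the standard fully defines. (After integer
   promotion the shift really operates on int.) */
static bool rshift16_is_arithmetic(void)
{
    for (int32_t v = INT16_MIN; v < 0; v++) {
        int16_t x = (int16_t)v;
        for (int s = 0; s < 16; s++) {
            int expected = ~((~x) >> s);   /* floor(x / 2^s), i.e. sign-filling */
            if ((x >> s) != expected)
                return false;
        }
    }
    return true;
}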
Let's say you did that and found that the >> behavior exactly matches the desired arithmetic shift behavior. Are you safe to rely on it according to the standard: does the standard constrain the implementation not to change its behavior at runtime? That is, does the behavior have to be expressible as a function of its inputs, or can (short)0xFF00 >> 4 be 0xFFF0 one moment and then 0x0FF0 (or any other value) the next?
Now this question is mostly theoretical, but violating this probably isn't as crazy as it might seem, given the presence of hybrid architectures such as big.LITTLE that dynamically move processes from one CPU to another with possible small behavioral differences, or live VM migration between, say, chips manufactured by Intel and AMD with subtle (usually undocumented) instruction behavior differences.
1 By "legal" I mean according to the C standard, not according to the laws of your country, of course.
2 I.e., it replicates the sign bit for newly shifted in bits.
3 That would be kind of insane, but not that insane: x86 has different behavior (overflow flag bit setting) for shifts of 1 rather than other shifts, and ARM masks the shift amount in a surprising way that a simple test probably wouldn't detect.
4 At least anything better than a microcontroller. The inner validation loop is a few simple instructions, so a 1 GHz CPU could validate all ~1 million short values in ~1 ms at one instruction-per-cycle.
Let's say you did that and found that the >> behavior exactly matches the desired arithmetic shift behavior. Are you safe to rely on it according to the standard: does the standard constrain the implementation not to change its behavior at runtime? That is, does the behavior have to be expressible as a function of its inputs, or can (short)0xFF00 >> 4 be 0xFFF0 one moment and then 0x0FF0 (or any other value) the next?
The standard does not place any requirements on the form or nature of the (required) definition of the behavior of right shifting a negative integer. In particular, it does not forbid the definition to be conditional on compile-time or run-time properties beyond the operands. Indeed, this is the case for implementation-defined behavior in general. The standard defines the term simply as
unspecified behavior where each implementation documents how the choice is made.
So, for example, an implementation might provide a macro, a global variable, or a function by which to select between arithmetic and logical shifts. Implementations might also define right-shifting a negative number to do less plausible, or even wildly implausible things.
Testing the behavior is a reasonable approach, but it gets you only a probabilistic answer. Nevertheless, in practice I think it's pretty safe to perform just a small number of tests -- maybe as few as one -- for each LHS data type of interest. It is extremely unlikely that you'll run into an implementation that implements anything other than standard arithmetic (always) or logical (always) right shift, or that you'll encounter one where the choice between arithmetic and logical shifting varies for different operands of the same (promoted) type.
The authors of the Standard made no effort to imagine and forbid every unreasonable way implementations might behave. In the days prior to the Standard, implementations almost always defined many behaviors based upon how the underlying platform worked, and the authors of the Standard almost certainly expected that most implementations would do likewise. Since the authors of the Standard didn't want to rule out the possibility that it might sometimes be useful for implementations to behave in other ways (e.g. to support code which targeted other platforms), however, they left many things pretty much wide open.
The Standard is somewhat vague as to the level of precision with which "Implementation Defined" behaviors need to be documented. I think the intention is that a person of reasonable intelligence reading the specification should be able to predict how a piece of code with Implementation-Defined behavior would behave, but I'm not sure that defining an operation as "yielding whatever happens to be in register 7" would be non-conforming.
From a practical perspective, what should matter is whether one is targeting quality implementations for platforms that have been developed within the last quarter century, or whether one is trying to force even deliberately-obtuse compilers to behave in controlled fashion. If the former, it may be worth using a static assert to ensure that -1>>1==-1, but quality compilers for modern hardware where that test passes are going to use arithmetic right shift consistently. While targeting code to obtuse compilers might possibly have some purpose, it's not generally possible to guard against all the ways that a pathological-but-conforming compiler could sabotage one's code, and effort spent attempting to do so could often be more effectively spent on getting a quality compiler.
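For completeness, the static assert mentioned above can be a one-liner (C11 or later; the shift result is implementation-defined but still a constant expression):

_Static_assert((-1 >> 1) == -1,
               "this code assumes arithmetic (sign-filling) right shift");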
I have a multithreaded application where I have one producer thread (main) and multiple consumers.
Now from main I want to have some sort of percentage of how far into the work the consumers are. Implementing a counter is easy, since the work is done in a loop. However, since this loop repeats a couple of thousand times, maybe even more than a million times, I don't want to mutex this part. So I went looking into some atomic options for writing to an int.
As far as I understand I can use the builtin atomic functions from gcc:
https://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
however, it doesn't have a function for just reading the variable I want to work on.
So basically my question is:
can I read from the variable safely from my producer, as long as I use the atomic builtins for writing to that same variable in the consumer,
or
do I need some sort of different function to read from the variable, and if so, which one?
Define "safely".
If you just use a regular read, on x86, for naturally aligned 32-bit or smaller data, the read is atomic, so you will always read a valid value rather than one containing some bytes written by one thread and some by another. If any of those things are not true (not x86, not naturally aligned, larger than 32 bits...) all bets are off.
That said, you have no guarantee whatsoever that the value read will be particularly fresh, or that the sequence of values seen over multiple reads will be in any particular order. I have seen naive code that used volatile to stop the compiler optimising away the read entirely, but no other synchronisation mechanism, literally never see an updated value due to CPU caching.
If any of these things matter to you, and they really should, you should explicitly make the read atomic and use the appropriate memory barriers. The intrinsics you refer to take care of both of these things for you: you could call one of the atomic intrinsics in such a way that there is no side effect other than returning the value:
__sync_val_compare_and_swap(ptr, 0, 0)
or
__sync_add_and_fetch(ptr, 0)
or
__sync_sub_and_fetch(ptr, 0)
or whatever
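To make that concrete, here is a sketch of the whole pattern; the variable and function names are mine, and the writer side uses the same builtin family so that both accesses are atomic and carry full barriers:

static int progress;                     /* shared counter, illustrative name */

static void consumer_tick(void)          /* consumer thread, once per work item */
{
    __sync_fetch_and_add(&progress, 1);
}

static int read_progress(void)           /* producer/main thread */
{
    return __sync_add_and_fetch(&progress, 0);   /* atomic read with full barrier */
}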
If your compiler supports it, you can use C11 atomic types. They are introduced in section 7.17 of the standard, but they are unfortunately optional, so you will have to check whether __STDC_NO_ATOMICS__ is defined to at least throw a meaningful error if they're not supported.
With gcc, you apparently need at least version 4.9, because earlier versions don't ship the <stdatomic.h> header (here is a SO question about this, but I can't verify because I don't have GCC 4.9).
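Assuming <stdatomic.h> is available, the whole thing reduces to something like this sketch (names are illustrative; relaxed ordering is enough for a progress counter that only needs to be approximately current):

#include <stdatomic.h>

static atomic_uint progress;               /* shared counter, zero-initialised */

static void consumer_tick(void)            /* consumer thread */
{
    atomic_fetch_add_explicit(&progress, 1u, memory_order_relaxed);
}

static unsigned read_progress(void)        /* producer/main thread */
{
    return atomic_load_explicit(&progress, memory_order_relaxed);
}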
I'll answer your question, but you should know upfront that atomics aren't cheap. The CPU has to synchronize between cores every time you use atomics, and you won't like the performance results if you use atomics in a tight loop.
The page you linked to lists atomic operations for the writer, but says nothing about how such variables should be read. The answer is that your other CPU cores will "see" the updated values, but your compiler may "cache" the old value in a register or on the stack. To prevent this behavior, I suggest you declare the variable volatile to force your compiler not to cache the old value.
The only safety issue you will encounter is stale data, as described above.
If you try to do anything more complex with atomics, you may run into subtle and random issues with the order atomics are written to by one thread versus the order you see those changes in another thread. Unfortunately you're not using a built-in language feature, and the compiler builtins aren't designed perfectly. If you choose to use these builtins, I suggest you keep your logic very simple.
If I understood the problem, I would not use any atomic variable for the counters. Each worker thread can have a separate counter that it updates locally; the master thread can read the whole array of counters for an approximate snapshot value, so this becomes a 1-producer, 1-consumer problem for each counter. The memory can be made visible to the master thread, for example, every 5 seconds, by using __sync_synchronize() or similar.
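A sketch of that arrangement (layout and names are mine; the padding is just to keep each worker's counter on its own cache line, and the sums are only approximate unless you add the synchronisation mentioned above):

#define NUM_WORKERS 4

struct padded_counter {
    volatile unsigned long count;           /* volatile: don't cache in a register */
    char pad[64 - sizeof(unsigned long)];   /* assume 64-byte cache lines */
};

static struct padded_counter done[NUM_WORKERS];

/* worker i, inside its loop:
       done[i].count++;      only this thread ever writes slot i */

static unsigned long approx_total(void)     /* main thread */
{
    unsigned long sum = 0;
    for (int i = 0; i < NUM_WORKERS; i++)
        sum += done[i].count;               /* approximate snapshot */
    return sum;
}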
I'm working on a program with critical sections, so I am using semaphores. Specifically, the POSIX semaphores: http://www.kernel.org/doc/man-pages/online/pages/man3/sem_close.3.html
According to http://www.sbin.org/doc/glibc/libc_34.html (search for the macro SEM_VALUE_MAX), there is a maximum value that a general semaphore can be set to. On my system, this is about 32K.
Unfortunately, I'm dealing with some time-sensitive code (reading from an Arduino via a serial port at ~1 Mbit/s), so I'd like to have larger semaphores, because of some implementation details. Ideally, I'd like them to be able to go up to at least 2^20, but I'm a little unclear on why there is an upper limit anyway.
Is there any way to exceed this SEM_VALUE_MAX, and get a semaphore with a larger value? I could only think of:
Redefining SEM_VALUE_MAX
probably a horrible idea; I think those POSIX folks know what they're doing
Having a semaphore refer to more than one 'chunk' of data
right now, each up() or down() only acquires/releases a single 'chunk' -- an unsigned short int.
I imagine dealing with multiple at a time could cause deadlock.
Implementing my own semaphores.
time consuming / redundant work
less portable
Asking you wonderful folks what you think!
Thanks a bunch in advance!
Couldn't you just use the semaphore to protect access to another counter that you use to track the allocations? That way you don't need any more semaphore values than you have accessors.
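A minimal sketch of that idea, with the semaphore used purely as a lock (all names are mine):

#include <semaphore.h>

static sem_t lock;                 /* initialised elsewhere: sem_init(&lock, 0, 1); */
static unsigned long available;    /* can grow far beyond SEM_VALUE_MAX */

static void produce_chunk(void)
{
    sem_wait(&lock);
    available++;
    sem_post(&lock);
}

/* Returns 1 if a chunk was claimed, 0 if none were available. */
static int try_consume_chunk(void)
{
    int got = 0;
    sem_wait(&lock);
    if (available > 0) { available--; got = 1; }
    sem_post(&lock);
    return got;
}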
How portable do you need your program to be? _POSIX_SEM_VALUE_MAX is the minimum value of SEM_VALUE_MAX that is POSIX-conforming. Larger values are always allowed. glibc and other real-world implementations I'm familiar with have SEM_VALUE_MAX defined much larger, usually equal to INT_MAX. The only way you need to worry about this is if you want your program to be portable to other POSIX systems that have a very low SEM_VALUE_MAX.
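It's easy to check what your own platform allows; POSIX puts SEM_VALUE_MAX in <limits.h>, and if it is omitted there (permitted when the value is indeterminate) you can ask sysconf() at run time:

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
#ifdef SEM_VALUE_MAX
    printf("SEM_VALUE_MAX = %ld\n", (long)SEM_VALUE_MAX);
#else
    printf("SEM_VALUE_MAX (runtime) = %ld\n", sysconf(_SC_SEM_VALUE_MAX));
#endif
    return 0;
}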
I'm using this function:
__delay_cycles(var);
and I get the following error:
Argument to _delay_cycles must be a constant expression
Fair enough! But how can I bypass this? I have to delay my program by a different value every time. I receive my data over RS232 and I store it in an int variable. I have to use this function and I can't modify its structure. I'm using an ATmega16.
One suggestion that immediately springs to mind is to call __delay_cycles() with a constant argument, but do it in a loop, and vary the number of loop iterations.
The loop will add some overhead, so if you need precision you'll have to subtract the (constant) cost of one loop iteration from the (constant) argument to __delay_cycles().
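Something along these lines, assuming __delay_cycles() behaves as on the poster's toolchain; CHUNK and the per-iteration overhead are numbers you would have to tune against your compiler's generated code:

#define CHUNK            100     /* cycles burned per loop pass, tune to taste */
#define LOOP_OVERHEAD      4     /* hypothetical cycles spent on the loop itself */

static void delay_about(unsigned int cycles)
{
    while (cycles >= CHUNK) {
        __delay_cycles(CHUNK - LOOP_OVERHEAD);   /* constant argument, as required */
        cycles -= CHUNK;
    }
}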
Don't use that function. It is apparently some non-standard Texas junk that doesn't behave according to the rules of the C language. Write your own delay function using on-chip timers instead, or find one on the net. Takes less than 1 hour of work, which is no doubt less time than you will spend pondering the meaning of various non-standard junk.
The real reason why the embedded industry has so many crappy compilers is that embedded programmers accept being constantly fed non-standard junk, even when there is no reason whatsoever to deviate from the C standard.
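If the toolchain is avr-gcc/avr-libc (a reasonable guess for an ATmega16), one ready-made building block is _delay_loop_2() from <util/delay_basic.h>, which burns roughly 4 CPU cycles per iteration and, unlike __delay_cycles(), takes a runtime argument. A sketch, with the wrapper name being mine:

#include <stdint.h>
#include <util/delay_basic.h>   /* avr-libc */

static void delay_cycles_runtime(uint32_t cycles)
{
    while (cycles >= 4) {
        uint32_t iters = cycles / 4;
        uint16_t n = (iters > 65535UL) ? 65535U : (uint16_t)iters;
        _delay_loop_2(n);                /* ~4 cycles per iteration */
        cycles -= 4UL * n;
    }
}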
if (var == 1)
    __delay_cycles(1);
else if (var == 2)
    __delay_cycles(2);
else if (var == 3)
    __delay_cycles(3);
...and so on.
In a current project I dared to do away with the old 0 rule, i.e. returning 0 on success of a function. How is this seen in the community? The logic that I am imposing on the code (and therefore on the co-workers and all subsequent maintenance programmers) is:
>0: for any kind of success/fulfillment, that is, a positive outcome
==0: for signalling no progress or busy or unfinished, which is zero information about the outcome
<0: for any kind of error/infeasibility, that is, a negative outcome
Sitting in between a lot of hardware units with unpredictable response times in a realtime system, many of the functions need to convey exactly this ternary logic, so I decided it was legitimate to throw the minimalistic standard return logic away, at the cost of a few WTFs on the programmers' side.
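To make the intended protocol concrete, a call site ends up reading something like this (all names are hypothetical):

/* >0: number of items handled, 0: busy / no progress yet, <0: negated error code */
int poll_unit(int unit_id);

void service(int unit_id)
{
    int rc = poll_unit(unit_id);
    if (rc > 0) {
        /* positive outcome: consume rc items */
    } else if (rc == 0) {
        /* no progress yet: try again later */
    } else {
        /* negative outcome: -rc is the error code */
    }
}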
Opinions?
PS: on a side note, the Roman empire collapsed because the Romans, with their number system lacking the 0, never knew when their C functions succeeded!
"Your program should follow an existing convention if an existing convention makes sense for it."
Source: The GNU C Library
By deviating from such a widely known convention, you are creating a high level of technical debt. Every single programmer who works on the code will have to ask the same questions, and every consumer of a function will need to be aware of the deviation from the standard.
http://en.wikipedia.org/wiki/Exit_status
I think you're overstating the status of this mythical "rule". Much more often, it's that a function returns a nonnegative value on success indicating a result of some sort (number of bytes written/read/converted, current position, size, next character value, etc.), and that negative values, which otherwise would make no sense for the interface, are reserved for signalling error conditions. On the other hand, some functions need to return unsigned results, but zero never makes sense as a valid result, and then zero is used to signal errors.
In short, do whatever makes sense in the application or library you are developing, but aim for consistency. And I mean consistency with external code too, not just your own code. If you're using third-party or library code that follows a particular convention and your code is designed to be closely coupled to that third-party code, it might make sense to follow that code's conventions so that other programmers working on the project don't get unwanted surprises.
And finally, as others have said, whatever your convention, document it!
It is fine as long as you document it well.
I think it ultimately depends on the customers of your code.
In my last system we used more or less the same coding system as yours, with "0" meaning "I did nothing at all" (e.g. calling Init() twice on an object). This worked perfectly well and everybody who worked on that system knew this was the convention.
However, if you are writing an API that can be sold to external customers, or writing a module that will be plugged into an existing, "standard-RC" system, I would advise you to stick to the 0-on-success rule, in order to avoid future confusion and possible pitfalls for other developers.
And as per your PS: when in Rome, do as the Romans do :-)
I think you should follow the Principle Of Least Astonishment
The POLA states that, when two elements of an interface conflict, or are ambiguous, the behaviour should be that which will least surprise the user; in particular a programmer should try to think of the behavior that will least surprise someone who uses the program, rather than that behavior that is natural from knowing the inner workings of the program.
If your code is for internal consumption only, you may get away with it, though. So it really depends on the people your code will impact :)
There is nothing wrong with doing it that way, assuming you document it in a way that ensures others know what you're doing.
However, as an alternative, it might be worth exploring the option to return an enumerated type defining the codes. Something like:
enum returnCode {
    SUCCESS, FAILURE, NO_CHANGE
};
That way, it's much more obvious what your code is doing, self-documenting even. But might not be an option, depending on your code base.
It is a convention only. I have worked with many APIs that abandon the principle when they want to convey more information to the caller. As long as you're consistent with this approach, any experienced programmer will quickly pick up the convention. What is hard is when each function uses a different approach, e.g. the Win32 API.
In my opinion (and that's the opinion of someone who tends to do out-of-band error messaging thanks to working in Java), I'd say it is acceptable if your functions are of a kind that require strict return-value processing anyway.
So if the return value of your method has to be inspected at all points where it's called, then such a non-standard solution might be acceptable.
If, however, the return value might be ignored or just checked for success at some points, then the non-standard solution produces quite a problem (for example, you can no longer use the if(!myFunction()) ohNoesError(); idiom).
What is your problem? It is just a convention, not a law. If your logic makes more sense for your application, then it is fine, as long as it is well documented and consistent.
On Unix, exit status is unsigned, so this approach won't work if you ever have to run your program there, and it will confuse all your Unix programmers to no end. (I looked it up just now to make sure, and discovered to my surprise that Windows uses a signed exit status.) So I guess it will probably only mostly confuse your Windows programmers. :-)
I'd find another method to pass status between processes. There are many to choose from, some quite simple. You say "at the cost of a few WTFs on the programmers' side" as if that's a small cost, but it sounds like a huge cost to me. Re-using an int in C is a minuscule benefit to be gained from confusing other programmers.
You need to go on a case by case basis. Think about the API and what you need to return. If your function only needs to return success or failure, I'd say give it an explicit type of bool (C99 has a bool type now) and return true for success and false for failure. That way things like:
if (!doSomething())
{
    // failure processing
}
read naturally.
In many cases, however, you want to return some data value, in which case some specific unused or unlikely to be used value must be used as the failure case. For example the Unix system call open() has to return a file descriptor. 0 is a valid file descriptor as is theoretically any positive number (up to the maximum a process is allowed), so -1 is chosen as the failure case.
In other cases, you need to return a pointer. NULL is an obvious choice for failure of pointer returning functions. This is because it is highly unlikely to be valid and on most systems can't even be dereferenced.
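Both idioms in one place, as typically written (the file name is just an example):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

void examples(void)
{
    int fd = open("data.bin", O_RDONLY);
    if (fd == -1) {            /* -1 is free: every valid descriptor is >= 0 */
        /* handle the error */
    }

    char *buf = malloc(4096);
    if (buf == NULL) {         /* NULL is free: never a valid object pointer */
        /* handle the error */
    }

    free(buf);
    if (fd != -1)
        close(fd);
}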
One of the most important considerations is whether the caller and the called function or program will be updated by the same person at any given time. If you are maintaining an API where a function will return the value to a caller written by someone who may not even have access to your source code, or when it is the return code from a program that will be called from a script, only violate conventions for very strong reasons.
You are talking about passing information across a boundary between different layers of abstraction. Violating the convention ties both the caller and the callee to a different protocol increasing the coupling between them. If the different convention is fundamental to what you are communicating, you can do it. If, on the other hand, it is exposing the internals of the callee to the caller, consider whether you can hide the information.