Please can you help me with this problem, which I am rarely presented with?
The execution of the JMS message listener failed. Caused by: java.io.IOException: Premature EOF
thank you very much!
This is most probably caused by an incorrect route definition. I have encountered this error when I was missing some '&' characters between parameters, or when I didn't have 'jms:queue' or 'jms:topic' at the beginning.
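For illustration (the queue name and parameters here are hypothetical, assuming an Apache Camel-style endpoint URI), a well-formed route endpoint of the kind described looks like:

```
jms:queue:orders?concurrentConsumers=2&transacted=true
```

Note the 'jms:queue:' prefix at the beginning and the '&' separating each parameter.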
I'm implementing a little HTTP client using OpenSSL, and I'm trying to handle "connection timed out" errors gracefully. By gracefully, I mean I want to print a nice, human-readable message that says "Connection Timed Out." Unfortunately, the error handling in OpenSSL isn't making sense to me. Here's what happens:
I create a nonblocking socket and deliberately connect to a port that I know won't respond, in order to test the error handling of "connection timed out." I make the socket into a nonblocking SSL channel.
Then, I call SSL_connect. It returns -1, as expected. I call SSL_get_error to get more information about the error. It returns SSL_ERROR_WANT_WRITE, as expected: it's waiting for the connection to time out, and that takes a while. So far so good.
I keep calling SSL_connect until finally, SSL_get_error returns SSL_ERROR_SYSCALL. Again, this is what I expect. I am expecting the connect system call to fail. So far so good.
Finally, and this is the part that isn't working for me, I try to get the actual error message. Here is the code I'm using:
const char *file, *data;
int line, flags;
unsigned long code =
    ERR_get_error_line_data(&file, &line, &data, &flags);
To my surprise, this is returning zero, meaning there are no more errors in the error queue. I wasn't expecting that. What I was expecting was an error code with the property ERR_GET_REASON(code) == ETIMEDOUT, so that I could then pass the ETIMEDOUT to strerror, to get the actual error message. It seems weird to me that there's nothing at all in the error queue. I don't understand that.
I have read this thread:
Relationship slow system call with signal
and not everything is clear to me. In particular, I don't understand this part of the answer, because I don't see a problem with the included source code:
Please explain.
Thanks in advance.
Anyway, back to the question. If you're wondering why the read doesn't
fail with EINTR the answer is SA_RESTART. On most Unix systems a few
system calls are automatically restarted in the event of a signal.
The OP was expecting the read call to return an error code because it was interrupted by a signal. In the case of the read system call, the OS automatically restarts this system call in the event of a signal, so no error was returned.
I cannot seem to find solid info on whether assert is usable in a multithreaded context.
Logically, it seems to me that if an assertion fails, the failing thread gets shut down, but not the other threads. Or does the entire process get killed?
So basically my question is: is it safe to use assert in a multithreaded environment without leaking resources?
If you look at the man page of assert(), it clearly states:
The purpose of this macro is to help the programmer find bugs in his
program. The message "assertion failed in file foo.c, function
do_bar(), line 1287" is of no help at all to a user.
This means it's only useful [and should be used] in a development environment, not in production software. IMO, in the development stage, you need not worry about leaks caused by assert(). YMMV.
Once you have finished debugging your code, you can simply switch off the assert() functionality by defining [#define] NDEBUG.
I'd say more than yes. If I saw multithreaded code without asserts, I wouldn't trust it. If you simplify its implementation a bit to something like:
#define assert(x) if( !(x) ) abort()
you'll see that it does nothing special for thread safety, and nothing thread-specific. It's your responsibility to keep the asserted condition race-free, and if the assertion fails, the whole process is aborted.
The entire process gets killed. assert() will print the expression, source file name and line number to stderr and then call abort(). abort() terminates the entire process.
I was reading DJB's "Some thoughts on security after ten years of Qmail 1.0" and he listed this function for moving a file descriptor:
int fd_move(to,from)
int to;
int from;
{
  if (to == from) return 0;
  if (fd_copy(to,from) == -1) return -1;
  close(from);
  return 0;
}
It occurred to me that this code does not check the return value of close, so I read the man page for close(2), and it seems it can fail with EINTR, in which case the appropriate behavior would seem to be to call close again with the same argument.
Since this code was written by someone with far more experience than I in both C and UNIX, and additionally has stood unchanged in qmail for over a decade, I assume there must be some nuance that I'm missing that makes this code correct. Can anyone explain that nuance to me?
I've got two answers:
He was trying to make a point about factoring out common code, and such examples often omit error checking for brevity and clarity.
close(2) may return EINTR, but does it in practice? And if so, what would you reasonably do? Retry once? Retry until success? What if you get EIO? That could mean almost anything, so you really have no reasonable recourse except logging it and moving on. If you retry after an EIO, you might get EBADF; then what? Assume that the descriptor is closed and move on?
Every system call can return EINTR, especially one that blocks, like read(2) waiting on a slow human. This is a more likely scenario, and a good "get input from terminal" routine will indeed check for it. That also means that write(2) can fail, even when writing a log file. Do you try to log the error that the logger generated, or should you just give up?
When a file descriptor is dup'd, as it is in the fd_copy or dup2 function, you will end up with more than one file descriptor referring to the same thing (i.e. the same struct file in the kernel). Closing one of them will simply decrement its reference count. No operation is performed on the underlying object unless it is the last close. As a result, conditions such as EINTR and EIO are not possible.
Another possibility is that his function is used only in an application (or a part of one) which has done something to ensure that the call will not be interrupted by a signal. If you aren't going to do anything important with signals, then you don't have to be responsive to them, and it might make sense to mask them all out rather than wrap every single blocking system call in an EINTR retry loop. Except, of course, for the ones that will kill you: SIGKILL, and frequently SIGPIPE if you handle it by quitting, along with SIGSEGV and similar fatal errors, which in any case will never be delivered to a correct user-space app.
Anyway, if all he's talking about is security, then quite possibly he doesn't have to retry close. If close failed with EIO, he would not be able to retry it; it would be a permanent failure. Therefore, it is not necessary for the correctness of his program that close succeed. It may well be that it is not necessary for the correctness of his program that close be retried on EINTR, either.
Usually you want your program to make a best effort to succeed, and that means retrying on EINTR. But this is a separate concern from security. If your program is designed so that some function failing for any reason isn't a security flaw, then in particular the fact that it happens to have failed EINTR, rather than for a permanent reason, isn't a flaw. DJB has been known to be fairly opinionated, so I would not be at all surprised if he has proved some reason why he doesn't need to retry, and therefore doesn't bother, even if doing so would allow his program to succeed in flushing the handle in certain situations where maybe it currently fails (like being explicitly sent a harmless signal with kill by the user at a crucial moment).
Edit: it occurs to me that retrying on EINTR could potentially itself be a security flaw under certain conditions. It introduces a new behaviour to that section of code: it can loop indefinitely in response to a signal flood, where previously it would make one attempt to close and then return. I don't know for sure that this would cause qmail any problems (after all, close itself makes no guarantees how soon it will return). But if giving up after one attempt does make the code easier to analyse then it could plausibly be a smart move. Or not.
You might think that retrying prevents a DoS flaw, where a signal causes a spurious failure. But retrying allows another (more difficult) DoS flaw, where a signal flood causes an indefinite stall. In terms of binary "can this app be DoSed?", which is the kind of absolute security question DJB was interested in when he wrote qmail and djbdns, it makes no difference. If something can happen once, then normally that means it can happen many times.
Only broken Unices ever return EINTR without you explicitly asking for it. The sane semantics for signal() enable restartable system calls ("BSD style"). When building a program on a system with the SysV semantics (interrupting signals), you should always replace calls to signal() with calls to bsd_signal(), which you can define yourself in terms of sigaction() if it doesn't exist.
It's further worth noting that no system will return EINTR on signal receipt unless you have installed signal handlers. If the default action is left in place, or if the signal is set to no action, it's impossible for system calls to be interrupted.
When I do a "dbus_connection_close", do I need to flush the message queue?
In other words, do I need to continue with "dbus_connection_read_write_dispatch" until I receive the "disconnected" indication or is it safe to stop dispatching?
Update: I need to close the connection to D-Bus in a clean manner. From reading the documentation, all the clean-up must be done prior to "unreferencing" the connection, and this process isn't very well documented, IMO.
After some more digging, it appears that there are two types of connection: shared and private.
The shared connection must not be closed, only unreferenced. Furthermore, it does not appear that the connection must be flushed and dispatched unless outgoing messages must be delivered.
In my case, I just needed to end the communication over DBus as soon as possible without trying to salvage any outgoing messages.
Thus the short answer is: NO - no flushing / no dispatching needs to be done prior to dbus_connection_unref.
Looking at the documentation for dbus_connection_close(), the only thing that may be invoked is the dispatch status function to indicate that the connection has been closed.
So, ordering here is something you probably want to pay attention to, i.e. getting notified of a closed / dropped connection before handling whatever is left in the message queue.
Looking at the source of the function, it looks like the only thing it's going to do is return on failure (i.e. an invalid connection / NULL pointer); otherwise, it seems to just hang up.
This means yes, you probably should flush the message queue prior to hanging up.
Disclaimer: I've only had to talk to dbus a few times, I'm not by any means an authority on it.