VxWorks: connect() fails at startup but succeeds later - C

I'm using VxWorks 5.4 and attempting to connect to a server via TCP - a server I'll be sending logs to. For some reason, at boot the connection attempt either fails or takes up to 6 seconds, and it blocks the task the attempt was made in, which is obviously a big no-no.
I checked whether the problem is on the server side by writing a simple C program on Windows that connects to the same server, and it connects in no time at all (milliseconds).
I have "solved" the problem with a task that attempts connectWithTimeout() every 1-2 seconds, and it does work (the connection is established after about 2 failures, in around 20 ms). But I don't really like this approach: I would rather make a single connection attempt once whatever I'm missing is up and running, instead of polling. A sketch of the retry task is shown below.

After investigating what the issue could be, I eventually found that the problem was in how a session is closed between my system and the server.
When a client app runs on Windows (or some other desktop system) and you shut it down, the system goes through a procedure that closes the session properly.
That is not the case on my system, where shutting down essentially means unplugging the wire - so the system never goes through a shutdown process that properly closes the session.
After the system is up again, connect() cannot complete, because my system tries to establish the same session as the "dead" one, which the server still believes is running.
Solving the problem from the server side was easy: add keepalive functionality - if the client doesn't respond within an interval you choose, close the session. A minimal sketch of that idea follows.
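The fix can be as simple as the server time-stamping each session on every received message and periodically reaping silent ones. A minimal sketch, assuming a hypothetical session array (the threshold value is arbitrary):

    #include <time.h>
    #include <unistd.h>

    #define KEEPALIVE_TIMEOUT 30   /* arbitrary: seconds of silence allowed */

    struct session {
        int    fd;                 /* client socket, -1 when the slot is free */
        time_t last_activity;      /* updated on every successful recv() */
    };

    /* Call periodically from the server's main loop. Closing the stale
     * socket frees the slot so the rebooted client can reconnect at once. */
    void reap_dead_sessions(struct session *s, int count)
    {
        time_t now = time(NULL);
        int    i;

        for (i = 0; i < count; i++) {
            if (s[i].fd >= 0 && now - s[i].last_activity > KEEPALIVE_TIMEOUT) {
                close(s[i].fd);
                s[i].fd = -1;
            }
        }
    }

Alternatively, TCP-level keepalive (the SO_KEEPALIVE socket option, sketched under a later question on this page) achieves much the same without application-level changes.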

Websockets on Stellaris board running lwIP 1.3.2

What I'm doing
I'm implementing a WebSocket server on a Stellaris board, as the title says. At the moment I'm able to establish a connection to the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like master-slave communication. Whenever the client sends a string, the server decodes it and answers. At the moment I'm simply responding to the character 'e', which is designed to be just a counter. I implemented the client side so that it sends 'e' whenever it receives a message, and then displays the message on the page.
The problem
The problem is that after about 15 transactions I can see the communication being retransmitted to and from the Stellaris board, and then the connection closes. After the connection closes, I noticed that I can't access any other page on the board. It simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and that there may be a bug in my implementation, in lwIP, or in the hardware (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine what I need is to throttle the string being sent to the microcontroller so that it is sent once a second, or even less often; at the moment it was doing something like 1000 transactions per second, and sometimes more.
The question
So... after my trials, I still have a few questions that need answering. Do websockets need this kind of relationship, where the client asks and the server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my supposition correct that slowing down my rates will fix this?
Do websockets need this kind of relationship [request-response]? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The WebSocket protocol doesn't require a request-response model (except for the connection-establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
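For what it's worth, a ping is cheap to emit from the server side - it is a two-byte frame. A minimal sketch of the raw bytes (per RFC 6455, server-to-client frames are unmasked; how you write them out depends on your stack):

    /* An unmasked WebSocket ping frame: FIN=1, opcode 0x9, no payload.
     * The client must reply with a pong frame (opcode 0xA). */
    static const unsigned char ws_ping[2] = { 0x89, 0x00 };

    /* e.g. with the lwIP raw API, where pcb is the connection's tcp_pcb:
     *   tcp_write(pcb, ws_ping, sizeof(ws_ping), TCP_WRITE_FLAG_COPY);
     */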
Is my supposition that slowing down my rates will work?
I guess this question becomes less relevant given the answer to the first one... however, I should note that the WebSocket client (often a browser) will have limited resources and a different memory-management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.

Programmatically detect if local web server has hung

I realise that I'll get at least one answer along the lines of "(re)write the code so it doesn't hang" but let's assume we don't live in that shiny happy utopia just yet...
In our embedded system we have a big SDK including a web-server (Boa) which is the primary method of user interaction.
It's possible, during certain phases of the moon, that something causes the web server to hang or become otherwise stuck, in such a way that the process appears to be running normally (not crashed/dead/using 100% CPU) but does not serve any web pages.
So, the question is, how do we test/detect this situation?
To test whether the server is hung, create a TCP socket and connect to port 80 at the loopback address 127.0.0.1. Then send the following text over the socket:
GET / HTTP/1.1\r\n\r\n
Most servers will interpret that as a request for index.html. Alternatively, you could implement an undocumented URL for testing (which allows for a shorter, predetermined response), e.g.
GET /test/fdoaoqfaf12491r2h1rfda HTTP/1.1\r\n\r\n
You then need to read the response from the server. This involves calling select() with a reasonable timeout to determine whether any data came back from the server, and if so, using recv() to read it. The response from the server consists of a header followed by content. The header consists of lines of text, with a blank line at the end; lines end with \r\n, so the end of the header is \r\n\r\n.
Getting the content involves calling select() and recv() until recv() returns 0. This assumes that the server sends the response and then closes the socket. Some sophisticated servers will leave a socket open to allow multiple requests over it; a simple embedded server should not be doing that. (If your server is trying to use the same socket for multiple requests, you need to figure out how to turn that feature off.) A sketch of the whole check is below.
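Putting the above together, a minimal sketch of the check (Linux/POSIX sockets; for simplicity it only verifies that some response arrives within the timeout, which is enough to distinguish a hung server from a live one):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Returns 0 if the local server answered within timeout_sec,
     * -1 if it appears hung (or the connection itself failed). */
    int check_webserver(int timeout_sec)
    {
        const char request[] = "GET / HTTP/1.1\r\n\r\n";
        struct sockaddr_in addr;
        struct timeval tv;
        fd_set rfds;
        char buf[512];
        int sock, rc = -1;

        sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(80);
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");

        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0 &&
            send(sock, request, sizeof(request) - 1, 0) == (int)(sizeof(request) - 1))
        {
            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            tv.tv_sec  = timeout_sec;
            tv.tv_usec = 0;

            /* Hung server: select() times out and rc stays -1. */
            if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0 &&
                recv(sock, buf, sizeof(buf), 0) > 0)
                rc = 0;   /* got at least part of a response: server is alive */
        }

        close(sock);
        return rc;
    }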
That's all very well and good, but you really need to rewrite your code so it doesn't hang.
The most likely cause of the problem is that the server has a bunch of dangling sockets, i.e. connections from clients that were never properly cleaned up. Dangling sockets will eventually prevent the server from accepting more connections, either because the server has a limit on the number of open connections, or because the server process uses up all of its file descriptors.
The first thing to check is the TCP timeout value. One project that I worked on had a default timeout of 5 hours, which meant that dangling sockets stayed open for 5 hours. A reasonable timeout is 1 minute.
Then you need to create a client that deliberately misbehaves (a sketch of the abrupt-close case appears below). Clients can misbehave by
leaving a socket open without reading the server's response
abruptly closing the socket while reading the response
gracefully closing the socket while reading the response
The first situation should be handled by the TCP timeout. The other two need to be handled properly by the server code. Graceful versus abrupt socket closure is controlled via the SO_LINGER socket option (set with setsockopt()) and the shutdown() function. After the client misbehaves, check the number of open file descriptors in the server process to verify that the server has handled the situation correctly.
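A minimal sketch of the abrupt-close client behavior: with SO_LINGER enabled and a zero timeout, close() aborts the connection with an RST instead of the normal FIN handshake:

    #include <unistd.h>
    #include <sys/socket.h>

    /* Make close() send an RST, simulating a client that vanishes
     * mid-response. For the graceful variant, call
     * shutdown(sock, SHUT_WR) instead and skip the linger option. */
    void abort_connection(int sock)
    {
        struct linger lg;

        lg.l_onoff  = 1;   /* linger enabled ...           */
        lg.l_linger = 0;   /* ... with zero timeout -> RST */
        setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
        close(sock);
    }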

Linux & C: Communicating with X server from outside of X?

I'm working on a small server program that takes data received from the network and performs various actions. One of these actions is to open a connection to the X server running on the system and simulate key presses. This works fine when my server is started from a terminal inside X, but I want my program to start as a system service when the system boots, and then communicate with X when requested by the clients.
The basic problem I have is that a call to XOpenDisplay(NULL) in a process that was not started from inside X fails. As far as I understand, I can't open an X display from outside of X, so the best workaround I can think of is to write a separate program, started when a user logs in to X, that waits for a signal or message from the server and then performs the requested action. It is perfectly okay to assume that the server can send an error back to the client if this helper program isn't running or has failed for some reason.
So question: Is what I described above the best (albeit messy) solution, or is there a better way? Is there, in fact, a way to open an X display from outside of X? Thanks!
Being "inside of X" is just a matter of having the DISPLAY environment variable set. You can do this from anywhere.
If the X server in question is being run for a different user, you may need to also deal with authentication tokens such as Xauthority tickets.
However -- for the use case you describe, I'd strongly recommend running your own X server process, independent of the system's actual display hardware. This could be Xvnc if you want to be able to connect and inspect it interactively, or a headless implementation such as Xvfb if you don't need a visible display at all. This approach also prevents your software from needing to restart when users log in and out, which would otherwise be the case.
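To illustrate the first point, a minimal sketch (link with -lX11; the display name ":0" and the Xauthority path are assumptions to be replaced with whatever matches the target session):

    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy;

        setenv("DISPLAY", ":0", 1);                        /* which X session */
        setenv("XAUTHORITY", "/home/user/.Xauthority", 1); /* auth token, if needed */

        dpy = XOpenDisplay(NULL);   /* NULL means "use $DISPLAY" */
        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        /* ... simulate key presses, e.g. via the XTest extension ... */
        XCloseDisplay(dpy);
        return 0;
    }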
It is possible to connect to an X display from any process running on the machine - you need the DISPLAY variable set to indicate which X session you want to connect to, and may need the correct XAuthority token.
However, this would be considered the "messy" solution for your case, since you'd need to essentially guess the display number and work around the authorization issue. You'd also have to handle the case where the X server hasn't started yet when your daemon starts, or when the X server gets restarted while your daemon is running (the X client library isn't really designed to handle the case of the X server going away and coming back again).
The "clean" solution is actually the one you've suggested as a workaround - a client running within the X session that connects to your daemon over a UNIX domain socket or similar.

How to get a more stable socket connection in Linux/C

I'm running a game website where users connect using an Adobe Flash client to a C server running on a Fedora Linux box.
Often users complain about disconnects. Usually they are "Connection reset by peer" disconnects.
Is there any way to make the connection more stable or does it all depend on the route from the user host to my server?
One thing I tried in order to make it more stable was sending a PING in clear text every other minute to avoid timeout problems.
Anyone got more ideas?
You are not exhausting the number of sockets, the memory, or the CPU that the server process is allotted on the server, are you?
Do check with ulimit.
Also, if possible, try to trace the error in the source code - i.e., find where a send() or accept() returns an error value (that is when an RST packet gets sent) and print a debug message to the logs there. If you really fancy debugging it, simulate the server setup:
run it in debug mode on a separate machine (possibly a clone of the server)
simulate thousands of connections (or find a network test-harness program)
backtrace the call and/or sniff the connection
Where are you running the server? At home? At work? At a hosting facility? This will make a very big difference.
Can you design your app to connect to two sockets on the server and then load balance or make it active/passive (or active/active)?
You can use the SO_KEEPALIVE TCP socket option.
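A minimal sketch of enabling it on a connected socket; the TCP_KEEP* options are Linux-specific and the values shown are arbitrary (the kernel default is roughly two hours of idle time before the first probe):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    void enable_keepalive(int sock)
    {
        int on = 1, idle = 60, interval = 10, count = 5;   /* arbitrary values */

        setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        /* Linux-specific tuning: first probe after 60 s of idle time,
         * then every 10 s; declare the peer dead after 5 failed probes. */
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    }

A dead peer then surfaces as an error on the next read of the socket, rather than the connection hanging indefinitely.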

Cleanest way to stop a process on Win32?

While implementing an application server and its client-side libraries in C++, I am having trouble finding a clean and reliable way to stop client processes on server shutdown on Windows.
Assuming the server and its clients run under the same user, the requirements are:
the solution should work in the following cases:
clients may each have either a console or a GUI.
user may be unprivileged.
clients may be or become unresponsive (infinite loop, deadlock).
clients may or may not be children of the server (direct or indirect).
unless prevented by a client-side defect, clients shall be given the opportunity to exit cleanly (free their resources, sync some data to disk...) and some reasonable time to do so.
all client return codes shall be made available (if possible) to the server during the shutdown procedure.
server shall wait until all clients are gone.
As of this edit, the majority of the answers below advocate using shared memory (or another IPC mechanism) between the server and its clients to convey shutdown orders and client status. These solutions would work, but they require that clients successfully initialize the library.
What I did not say is that the server is also used to start the clients, and in some cases other programs/scripts which don't use the client library at all. A solution that does not rely on graceful communication between server and clients would be nicer (if possible).
Some time ago, I stumbled upon a C snippet (in the MSDN I believe) that did the following:
start a thread via CreateRemoteThread in the process to shut down.
have that thread directly call ExitProcess.
Unfortunately, now that I'm looking for it, I'm unable to find it, and the search results seem to imply that this trick no longer works on Vista. Any expert input on this?
If you use a thread, a simple solution is a named system event: the thread sleeps on the event, waiting for it to be signaled, and the controlling application signals the event when it wants the client applications to quit.
For a UI application, the thread can post a message to the main window (WM_CLOSE, not WM_QUIT); in a console application, it can issue a Ctrl-C, or, if the main console code loops, it can check some exit condition set by the thread.
Either way, rather than finding the client applications and telling them to quit, use the OS to signal that they should quit. The sleeping thread will have virtually no CPU footprint, provided it uses WaitForSingleObject to sleep. A sketch follows.
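A minimal sketch of both sides, assuming a hypothetical event name ("MyAppShutdown"; clients in other sessions would need the Global\ prefix and matching privileges):

    #include <windows.h>

    /* Client side: a thread that sleeps until the server signals shutdown.
     * Start it with: CreateThread(NULL, 0, ShutdownWatcher, hMainWnd, 0, NULL); */
    DWORD WINAPI ShutdownWatcher(LPVOID param)
    {
        HWND   hMainWnd = (HWND)param;
        HANDLE ev = CreateEventA(NULL, TRUE, FALSE, "MyAppShutdown");

        if (ev == NULL)
            return 1;
        WaitForSingleObject(ev, INFINITE);      /* no CPU used while waiting */
        CloseHandle(ev);
        PostMessage(hMainWnd, WM_CLOSE, 0, 0);  /* ask the GUI to exit cleanly */
        return 0;
    }

    /* Server side: signal every waiting client at shutdown. */
    void RequestClientShutdown(void)
    {
        HANDLE ev = CreateEventA(NULL, TRUE, FALSE, "MyAppShutdown");

        if (ev != NULL) {
            SetEvent(ev);    /* manual-reset: stays signaled for all waiters */
            CloseHandle(ev);
        }
    }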
You want some sort of IPC between the clients and the server. If all clients were children, I think pipes would be easiest; since they're not, I guess a server-operated shared-memory segment can be used to register clients, issue the shutdown command, and collect return codes posted there by clients that shut down successfully.
In this shared-memory area, clients put their process IDs, so that the server can forcefully kill any unresponsive client (modulo server privileges) using TerminateProcess(), as sketched below.
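A minimal sketch of that forceful fallback, given a PID read from the shared-memory area (the grace period is arbitrary):

    #include <windows.h>

    /* Give the client graceMs milliseconds to exit on its own, then kill it.
     * Running under the same user should grant PROCESS_TERMINATE access. */
    BOOL KillUnresponsiveClient(DWORD pid, DWORD graceMs)
    {
        BOOL   killed = FALSE;
        HANDLE h = OpenProcess(SYNCHRONIZE | PROCESS_TERMINATE, FALSE, pid);

        if (h == NULL)
            return FALSE;   /* already gone, or insufficient rights */

        if (WaitForSingleObject(h, graceMs) == WAIT_TIMEOUT)
            killed = TerminateProcess(h, 1);  /* client ignored the shutdown order */

        CloseHandle(h);
        return killed;
    }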
If you are willing to go the IPC route, make the normal communication between client and server bidirectional so the server can ask the clients to shut down. Failing that, have the clients poll. Or, as a last resort, instruct the clients to exit the next time they make a request to the server. You can let the library user register an exit callback, but the best way I know of is to simply call exit() in the client library when the client is told to shut down. If the client gets stuck in its shutdown code, the server needs to be able to work around it by ignoring that client's data structures and connection.
Use PostMessage or a named event.
Re: PostMessage - applications other than GUIs, as well as threads other than the GUI thread, can have message loops, and that's very useful for stuff like this. (In fact, COM uses message loops under the hood.) I've done it before with ATL but am a little rusty with it.
If you want to be robust to malicious attacks from "bad" processes, include a private key shared by client/server as one of the parameters in the message.
The named event approach is probably simpler; use CreateEvent with a name that is a secret shared by the client/server, and have the appropriate app check the status of the event (e.g. WaitForSingleObject with a timeout of 0) within its main loop to determine whether to shut down.
That's a very general question, and there are some inconsistencies.
While it is not a 100% rule, most console applications run to completion, whereas GUI applications run until the user terminates them (and services run until stopped via the SCM). Hence, it's easier to request that a GUI close: you send it the equivalent of Alt-F4. For a console program, you have to send the equivalent of Ctrl-C and hope it handles it. In both cases, you simply wait. If the process sticks around, you then shoot it down (TerminateProcess) and pray that the damage is limited. But your HDD can fill up with temporary files.
GUI applications in general do not have exit codes - where would they go? And a console process that is forcefully terminated by definition does not exit, so it has no exit code. So, in a server-shutdown scenario, don't expect exit codes.
If you've got a debugger attached, you generally can't shut down the process from another application - that would make it impossible for debuggers to debug exit code!
