I know this should be obvious, but I have found far too many DIFFERENT answers and the ones I've tried all fail (sometimes or all the time), so...
We are working on a service and some applications that run at startup on a Windows 10 computer that performs an automatic login. The service and applications require Windows sockets for TCP, UDP and multicast. Most of the time, our programs fail because they get errors about the network not being ready and such. Currently, we work around this by adding a dumb, fixed-length delay before attempting to start, but we would prefer to start as soon as the network is ready to be used.
Our most recent attempt was to wait on the LanmanWorkstation (Workstation) service, but that generally reports it is running/ready before the socket functions will succeed. I have also seen suggestions to use LanmanServer (Server) or Netman (Network Connections) or maybe even Tcpip (TCP/IP Protocol Driver), but I cannot find anything definitive. One would think this is a common requirement, so why would Microsoft make the info so difficult to find?
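For reference, our wait-on-the-Workstation-service attempt looks roughly like this (a trimmed sketch with most error handling omitted; WaitForWorkstationService is our own helper name):

    #include <windows.h>

    /* Poll the LanmanWorkstation service until it reports SERVICE_RUNNING.
       In our experience this returns before socket calls actually work. */
    static BOOL WaitForWorkstationService(DWORD timeoutMs)
    {
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
        if (!scm) return FALSE;
        SC_HANDLE svc = OpenServiceW(scm, L"LanmanWorkstation", SERVICE_QUERY_STATUS);
        if (!svc) { CloseServiceHandle(scm); return FALSE; }

        BOOL running = FALSE;
        for (DWORD waited = 0; waited < timeoutMs; waited += 500) {
            SERVICE_STATUS_PROCESS ssp;
            DWORD needed;
            if (QueryServiceStatusEx(svc, SC_STATUS_PROCESS_INFO,
                                     (LPBYTE)&ssp, sizeof(ssp), &needed) &&
                ssp.dwCurrentState == SERVICE_RUNNING) {
                running = TRUE;
                break;
            }
            Sleep(500);
        }
        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return running;
    }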
Ahem. Does anyone know a definitive method for a service or application to wait until Winsock functions will succeed before using them? Short of a spin wait on a failing Winsock function, of course!
Once in a while, my server's accept calls just stop working properly.
There is a much deeper story behind this: I'm being flooded with SYN and SYN/ACK packets, my network router goes disco, and accept keeps returning ECONNABORTED... I already tried to debug and fix this specific attack, but without success. By now I've given up and am looking for a more generic server recovery solution.
Anyway, I figured out that simply "restarting" the server socket by closing it and calling socket again helps. Theoretically very simple, but practically I'm facing a huge challenge here because (a) the server is quite complex by now and (b) I don't know exactly when I should restart the server socket.
My setup is one accept-thread that calls accept and feeds epoll, and one listener-thread that listens for epoll read/write etc. events and feeds a thread-pool queue.
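Roughly, in simplified form (listen_fd, epoll_fd and queue_push stand in for my real plumbing):

    #include <sys/epoll.h>
    #include <sys/socket.h>

    extern int listen_fd, epoll_fd;   /* created elsewhere */
    extern void queue_push(int fd);   /* feeds the thread pool's queue */

    /* accept-thread: accepts and registers new connections with epoll */
    void *accept_thread(void *arg)
    {
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                continue;             /* this is where ECONNABORTED shows up */
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
            epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev);
        }
    }

    /* listener-thread: waits for events and hands them to the workers */
    void *listener_thread(void *arg)
    {
        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(epoll_fd, events, 64, -1);
            for (int i = 0; i < n; i++)
                queue_push(events[i].data.fd);
        }
    }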
I have not found any literature that guides one through restarting the server socket.
Particularly:
When do I actually restart the server socket? I mean, I do not really know if an ECONNABORTED return value from accept means just one aborted connection or that the accept call/file descriptor itself is going bananas.
How does closing the server socket affect epoll and connected clients? Should I close the server socket immediately or rather have a buffer time such that all clients have finished first?
Or is it even best to have two alternating server sockets, so that if one goes bananas I just try the other one?
I am making some assumptions that the things you say in your question are all true and accurate, even though some of them seem like they may be misdiagnosed. Unfortunately, you didn't really explain how you reached the conclusions presented, so I really can't do much other than assume they're true.
For example, you don't explain how or why you figured out that closing and calling socket again helps. From just the information you gave, I would strongly suspect the opposite is true. But again, without knowing the evidence and rationale that led you to that conclusion, all I can do is assume it's true despite my instinct and experience saying it's wrong.
When do I actually restart the server socket? I mean, I do not really know if an ECONNABORTED return value from accept means just one aborted connection or that the accept call/file descriptor itself is going bananas.
If it really is the case that accepting connections will recover faster from a restart than without one and you really can't get any connections through, keep track of the last successful connection and the number of failures since the last successful connection. If, for example, you've gone 120 seconds or more without a successful connection and had at least four failed connections since the last successful one, then close and re-open. You may need to tune those parameters.
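A minimal sketch of that bookkeeping, with the thresholds as tunable constants (hand_off_to_epoll and restart_listen_socket are placeholders for your existing plumbing):

    #include <errno.h>
    #include <time.h>
    #include <sys/socket.h>

    #define RESTART_AFTER_SECS  120   /* tune me */
    #define RESTART_AFTER_FAILS   4   /* tune me */

    extern void hand_off_to_epoll(int fd);
    extern void restart_listen_socket(int *fd);  /* close + socket/bind/listen */

    void accept_loop(int listen_fd)
    {
        time_t last_ok = time(NULL);
        int fails_since_ok = 0;

        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0) {
                last_ok = time(NULL);
                fails_since_ok = 0;
                hand_off_to_epoll(fd);
                continue;
            }
            if (errno == EINTR)
                continue;
            fails_since_ok++;         /* ECONNABORTED lands here */
            if (fails_since_ok >= RESTART_AFTER_FAILS &&
                time(NULL) - last_ok >= RESTART_AFTER_SECS) {
                restart_listen_socket(&listen_fd);
                last_ok = time(NULL);
                fails_since_ok = 0;
            }
        }
    }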
How does closing the server socket affect epoll and connected clients?
It has no effect on them unless you're using epoll on the server socket itself. In that case, make sure to remove it from the set before closing it.
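That is, assuming the listening socket is registered, deregister it first:

    /* Connected clients keep their own descriptors and are unaffected. */
    epoll_ctl(epoll_fd, EPOLL_CTL_DEL, listen_fd, NULL);
    close(listen_fd);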
Should I close the server socket immediately or rather have a buffer time such that all clients have finished first?
I would suggest "draining" the socket by calling accept without blocking until it returns EWOULDBLOCK. Then you can close it. If you get any legitimate connections in that process, don't close it since it's obviously still working.
A client that tries to connect in the window between your close and your call to listen on the new socket might get an error. But if clients are getting errors anyway, that should be acceptable.
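A sketch of that drain, assuming you can flip the listening socket to non-blocking just for this step (hand_off_to_epoll is again a placeholder):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    extern void hand_off_to_epoll(int fd);

    /* Returns 1 if a legitimate connection arrived (don't close!), 0 if drained. */
    int drain_listen_socket(int listen_fd)
    {
        int flags = fcntl(listen_fd, F_GETFL, 0);
        fcntl(listen_fd, F_SETFL, flags | O_NONBLOCK);
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0) {
                hand_off_to_epoll(fd);   /* a real client got through */
                return 1;                /* the socket obviously still works */
            }
            if (errno == EINTR || errno == ECONNABORTED)
                continue;                /* skip junk, keep draining */
            return 0;                    /* EAGAIN/EWOULDBLOCK: drained */
        }
    }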
Or is it even best to have two alternating server sockets, so that if one goes bananas I just try the other one?
A long time ago, port DoS attacks were common because built-in defenses to things like SYN-bombs weren't as good as they are now. In those days, it was common for a server to support several different ports and for clients to try the ports in rotation. This is why IRC servers often accepted connections on ranges of ports such as 6660-6669. That meant an attacker had to do ten times as much work to make all the ports unusable. These days, it's pretty rare for an attack to take out a specific inbound port so the practice has largely gone away. But if you are facing an attack that can take out specific listening ports, it might make sense to open more listening ports.
Or you could work harder to understand the attack and figure out why you are having a problem that virtually nobody else is having.
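If you do decide to try extra listening ports, opening them is just a loop over socket/bind/listen (sketch, error handling elided; the IRC-style range is only an example):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    #define FIRST_PORT 6660
    #define NPORTS       10

    static int listeners[NPORTS];

    static void open_listeners(void)
    {
        for (int i = 0; i < NPORTS; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int one = 1;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(FIRST_PORT + i);

            bind(fd, (struct sockaddr *)&addr, sizeof(addr));
            listen(fd, SOMAXCONN);
            listeners[i] = fd;   /* register each with epoll as usual */
        }
    }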
I have a program that needs to:
Handle 20 connections. My program will act as the client in every connection, each one connecting to a different server.
Once connected my client should send a request to the server every second and wait for a response. If no request is sent within 9 seconds, the server will time out the client.
It is unacceptable for one connection to cause problems for the rest of the connections.
I do not have access to threads and I do not have access to non-blocking sockets. I have a single-threaded program with blocking sockets.
Edit: The reason I cannot use threads and non blocking sockets is that I am on a non-standard system. I have a single RTOS(Real-Time Operating System) task available.
To solve this, use of select is necessary but I am not sure if it is sufficient.
Initially I connect to all the servers. But select can only be used to see whether a read or write will block, not whether a connect will.
So say I have connected to 2 servers and they are both waiting to be served; if the 3rd connect does not work, it will block, causing the first 2 connections to time out as well.
Can this be solved?
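For completeness, the select-based serving loop I have in mind once the connections are up looks roughly like this (sketch; fds holds the 20 connected sockets):

    #include <sys/select.h>
    #include <sys/socket.h>

    #define NCONN 20
    extern int fds[NCONN];   /* the 20 connected sockets */

    void serve_once(void)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        int maxfd = -1;
        for (int i = 0; i < NCONN; i++) {
            FD_SET(fds[i], &rfds);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        struct timeval tv = { 1, 0 };   /* matches the 1-second request cadence */
        if (select(maxfd + 1, &rfds, NULL, NULL, &tv) > 0) {
            for (int i = 0; i < NCONN; i++) {
                if (FD_ISSET(fds[i], &rfds)) {
                    char buf[512];
                    recv(fds[i], buf, sizeof(buf), 0);   /* ready: won't block */
                }
            }
        }
        /* then send the next request on every socket */
    }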
I think the connection issue can be solved by setting a timeout for the connect operation, so that it will fail fast enough. Of course that will limit you if the network really is working but you have a very long (slow) path to some of the servers. That's bad design, but your requirements are pretty harsh.
See this answer for details on connection-timeouts.
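The usual trick (and what that linked answer boils down to) is to make the socket non-blocking just for the connect, then select on writability with a timeout. I realize non-blocking mode may be unavailable on this RTOS stack, so treat it purely as an illustration:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Connect with a timeout; returns 0 on success, -1 on failure/timeout. */
    int connect_timeout(int fd, const struct sockaddr *sa, socklen_t len, int secs)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);

        int rc = connect(fd, sa, len);
        if (rc < 0 && errno == EINPROGRESS) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            struct timeval tv = { secs, 0 };
            rc = -1;
            if (select(fd + 1, NULL, &wfds, NULL, &tv) == 1) {
                int err = 0;
                socklen_t elen = sizeof(err);
                getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
                if (err == 0)
                    rc = 0;          /* connected within the deadline */
            }
        }
        fcntl(fd, F_SETFL, flags);   /* restore blocking mode */
        return rc;
    }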
It seems you need to isolate the connections. Well, if you cannot use threads you can always resort to good old processes.
Spawn each client by forking your server process and use traditional IPC mechanisms if communication between them is required.
If you cannot use a multiprocess approach either, I'm afraid you'll have a hard time.
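For what it's worth, the fork-per-connection idea in its barest form (each child owns exactly one blocking connection, so a stall hurts only that child; run_one_client is a placeholder for the connect-and-poll loop):

    #include <sys/types.h>
    #include <unistd.h>

    #define NCONN 20

    extern void run_one_client(int which);   /* connect + 1 Hz request loop */

    void spawn_clients(void)
    {
        for (int i = 0; i < NCONN; i++) {
            pid_t pid = fork();
            if (pid == 0) {      /* child: serve a single connection, then exit */
                run_one_client(i);
                _exit(0);
            }
            /* parent keeps spawning; add pipes/shm here if IPC is needed */
        }
    }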
I am working on a project involving a microcontroller communicating to a PC via Modbus over TCP. My platform is an STM32F4 chip, programming in C with no RTOS. I looked around and found LwIP and Freemodbus and have had pretty good success getting them both to work. Unfortunately, I'm now running into some issues which I'm not sure how to handle.
I've noticed that if I establish connection, then lose connection (by unplugging the Ethernet cable) I will not be able to reconnect (once I've plugged back in, of course). Freemodbus only allows one client and still has the first client registered. Any new clients trying to connect are ignored. It won't drop the first client until after a specific timeout period which, as far as I can tell, is a TCP/IP standard.
My thoughts are...
1. I need a Modbus module that will handle multiple clients. A new client request after communication loss would then be accepted, and the first client would eventually be dropped due to the timeout.
How do I modify Freemodbus to handle this? Are there examples out there? I've looked into doing it myself and it appears to be a decently sized project.
Are there any good Modbus packages out there that handle multiple clients, are not too expensive, and are easy to use? I've seen several threads about various options, but I'm not sure any of them meet exactly what I need, and I've had a hard time finding any on my own. Most don't support TCP, and the ones that do only support one client. Is it generally a bad idea to support multiple clients?
2. Is something wrong with how I connect to the microcontroller from my PC?
Why is the PC changing ports every time it tries to reconnect? If it kept the same port it used before, this wouldn't be a problem.
3. Should I drop the client from Freemodbus as soon as I stop communicating?
This seems to go against standards but might work.
I'm leaning towards 1, especially since I'm going to need to support multiple connections eventually anyway. Any help would be appreciated.
Thanks.
If you have a limit on the number of Modbus clients, then dropping old connections when a new one arrives is actually suggested in the Modbus implementation guide (https://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf):
"Nevertheless a mechanism must be implemented in case of exceeding the number of authorized connection. In such a case we recommend to close the oldest unused connection."
It has its own problems but everything is a compromise.
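In plain BSD-socket terms (lwIP's socket API mirrors these calls), the drop-oldest policy for a single-client limit can be as small as this sketch; it is not Freemodbus code:

    #include <unistd.h>
    #include <sys/socket.h>

    static int client_fd = -1;   /* the one Modbus client we allow */

    void poll_for_new_client(int listen_fd)
    {
        int fd = accept(listen_fd, NULL, NULL);   /* listen_fd is non-blocking */
        if (fd < 0)
            return;
        if (client_fd >= 0)
            close(client_fd);    /* drop the oldest (stale) connection... */
        client_fd = fd;          /* ...and adopt the new one */
    }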
Regarding supporting multiple clients... if you think about a Modbus-over-serial (RS-232/485) server, it could only ever have one master at a time. Then replace the serial cable with TCP and you see why it's not uncommon to only support one client (and of course it's easier to program). It is annoying, though.
Depending on what you are doing, you won't need the whole Modbus protocol, and implementing the parts you do need is pretty easy. Of course, if you have to support absolutely everything, it's a different prospect. I haven't used Freemodbus, or any other library appropriate to your setup, so I can't help with suggestions there.
Regarding the PC using a different TCP source port each time: that is how TCP is supposed to work and is no fault on your side. If it did reuse the same source port, it wouldn't help you, because e.g. the sequence numbers would be wrong.
Regarding dropping clients: you are allowed to drop clients, though it's better not to. Some clients will send a Modbus command, notice the connection has failed, reconnect, but not reissue the command. That may be their problem, but it's still nicer not to trigger it where possible. Of course, things like battery life might change the calculation.
I have a daemon to write in C that will need to handle 20-150K TCP connections simultaneously. They are long-running connections and rarely ever tear down. They have a very small amount of data in flight at any given time (rarely even exceeding the MTU; it's a stimulus/response protocol), but response times to them are critical. I'm wondering what the current UNIX community is using to handle large numbers of sockets while minimizing response latency on them. I've seen designs revolving around multiplexing connections onto forked worker pools, threads (one per connection), and statically sized thread pools. Any suggestions?
The easiest suggestion is to use libevent; it makes it easy to write a simple non-blocking single-threaded server that would comply with your requirements.
If the processing for each response takes some time, or if it uses some blocking API (like almost anything touching a DB), then you'll need some threading.
One answer is worker threads: you spawn a set of threads, each listening on some queue for work. They can be separate processes instead of threads, if you like; the main difference is the communication mechanism used to tell the workers what to do.
A different way is to use several threads and give each of them a portion of those 150K connections. Each will have its own process loop and work mostly like the single-threaded server, except for the listening port, which will be handled by a single thread. This helps spread the load between cores, but if you use a blocking resource, it will block all the connections handled by that specific thread.
libevent lets you use the second way if you're careful; but there's also an alternative: libev. It's not as well known as libevent, but it specifically supports the multi-loop scheme.
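The multi-loop scheme in raw epoll terms, which is essentially what libev wraps (illustrative sketch; the accept thread deals new descriptors round-robin across per-thread loops):

    #include <pthread.h>
    #include <sys/epoll.h>

    #define NLOOPS 4   /* roughly one per core */

    static int loop_fds[NLOOPS];   /* each filled with epoll_create1(0) at startup */

    static void *loop_thread(void *arg)
    {
        int efd = *(int *)arg;
        struct epoll_event events[256];
        for (;;) {
            int n = epoll_wait(efd, events, 256, -1);
            for (int i = 0; i < n; i++)
                ;   /* handle_io(events[i].data.fd): your protocol logic */
        }
        return NULL;
    }

    /* called by the single accept thread for every new connection */
    static void assign(int fd)
    {
        static int next = 0;
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        epoll_ctl(loop_fds[next], EPOLL_CTL_ADD, fd, &ev);
        next = (next + 1) % NLOOPS;
    }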
If performance is critical then you'll really want to go for a multithreaded event loop solution - i.e. a pool of worker threads to handle your connections. Unfortunately, there is no abstraction library to do this that works on most Unix platforms (note that libevent is only single-threaded as are most of these event-loop libraries), so you'll have to do the dirty work yourself.
On Linux that means using edge-triggered epoll with a pool of worker threads (Windows has I/O completion ports, which also work fine in a multithreaded environment; I am not sure about other Unixes).
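The key discipline with edge-triggered epoll is that you are only notified on state changes, so each readable descriptor must be drained until EAGAIN; a sketch:

    #include <errno.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    void add_edge_triggered(int efd, int fd)
    {
        /* fd must already be non-blocking for edge-triggered use */
        struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = fd };
        epoll_ctl(efd, EPOLL_CTL_ADD, fd, &ev);
    }

    void on_readable(int fd)
    {
        char buf[4096];
        for (;;) {                 /* must read until EAGAIN or you lose events */
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0)
                continue;          /* process(buf, n) goes here */
            if (n == 0 || (errno != EAGAIN && errno != EWOULDBLOCK)) {
                close(fd);         /* peer closed, or a real error */
                return;
            }
            return;                /* EAGAIN: fully drained */
        }
    }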
BTW, I have done some work trying to abstract edge-triggered epoll on Linux and Windows I/O completion ports on http://nginetd.cmeerw.org (it is work in progress, but might provide some ideas).
If you have system configuration access, don't over-do it: set up some iptables/pf/etc. rules to load-balance connections across n daemon instances (processes), as this will work out of the box. Depending on how blocking the daemon's nature is, n should be the number of cores on the system or several times higher. This approach looks crude, but it can handle broken daemons and even restart them if necessary. Migration would also be smooth, as you could start diverting new connections to another set of processes (for example, a new release, or migrating to a new box) instead of interrupting service. On top of that, you get several features like source affinity, which can help significantly with caching and with contention from problematic sessions.
If you don't have system access (or ops can't be bothered), you can use a load-balancer daemon (there are plenty of open-source ones) instead of iptables/pf/etc., again in front of n service daemons, as above.
This approach also helps with separating port privileges: if the external service needs to listen on a low port (<1024), only the load balancer (or the kernel) needs to run privileged as admin/root.
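The fan-out above happens in iptables/pf; a different way to get a similar effect purely in code, on Linux kernels that support it, is SO_REUSEPORT, where each of the n daemon processes opens its own listening socket on the same port and the kernel spreads incoming connections across them (my substitution, not something this answer mentions):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    #ifndef SO_REUSEPORT
    #define SO_REUSEPORT 15   /* Linux value; guard for older headers */
    #endif

    /* Call this in each of the n daemon processes. */
    int open_balanced_listener(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }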
I've written several IP load balancers in the past, and it can be very error-prone in production. You don't want to support and debug that. Also, operations and management will tend to second-guess your code more than external code.
I think Javier's answer makes the most sense. If you want to test the theory out, check out the Node JavaScript project.
Node is based on Google's V8 engine, which compiles JavaScript to machine code and is as fast as C for certain tasks. It is also based on libev and is designed to be completely non-blocking, meaning you don't have to worry about context switching between threads (everything runs on a single event loop). It is very similar to Erlang in that respect.
Writing high-performance servers in JavaScript is now really, really easy with Node. You could also, with a little bit of effort, write your custom code in C and create bindings for Node to call into it to do your actual processing (look at the Node source to see how to do this; the documentation is a little sketchy at the moment). As an uglier alternative, you could build your custom C code as an application and use stdin/stdout to communicate with it.
I've tested Node myself with upwards of 150K connections with absolutely no issues (of course you will need some serious hardware if all these connections are going to be communicating at once). A TCP connection in Node.js on average uses only 2-3 KB of memory, so you could theoretically handle 350-500K connections per 1 GB of RAM.
Note - Node.js is not currently supported on Windows, but it is only at an early stage of development and I'd imagine it will be ported at some stage.
Note 2 - you will have to ensure the code you are calling into from Node does not block.
Several systems have been developed to improve on select(2) performance: kqueue, epoll, and /dev/poll. In all these systems, you can have a pool of worker threads waiting for tasks; you will not be forced to set up all the file handles over and over again when done with one of them.
Do you have to start from scratch? You could use something like Gearman.
In short: Is there any known protocol for remote process management?
I have a system that contains several applications, each with its own computer on a local network. When the applications are up and running, they communicate without any problems.
What I'm interested in is a protocol to manage the remote applications' startup, shutdown and monitoring. By monitoring I mean getting (predefined) error codes when something goes wrong. Ideally I would control the whole system from one managing application and get status on what's going on.
I once worked in a place that wrote an in-house protocol that did this. However, I wish to avoid writing it again if someone already figured this out.
Edit: some more details:
Platforms in use are Windows and Linux, both on x86.
On Windows, C/C++ and .NET are used. On Linux, C/C++.
Why bother with homegrown solutions instead of using tried and tested technology? Unless you only employ programmers who are MENSA members with 30+ years of experience, your solution will be less robust and costlier to maintain.
You failed to mention any details about the platform you're using, so I'll assume a Unix-ish system. I would go with (and have been going with for years)
SNMP for monitoring
either daemontools or cron + scripting (as a distant second choice) for supervision and restart
ssh/scp with RSA authentication for interactive intervention, remote command execution, and occasional transfers