In short: Is there any known protocol for remote process management?
I have a system that contains several applications, each with its own computer on a local network. When the applications are up and running, they communicate without any problems.
What I'm interested in is a protocol to manage the remote applications' startup, shutdown, and monitoring. By monitoring I mean getting (predefined) error codes when something goes wrong. Ideally I would control the whole system from one managing application and get status reports about what's going on.
I once worked in a place that wrote an in-house protocol that did this. However, I wish to avoid writing it again if someone already figured this out.
Edit: some more details:
Platforms in use are Windows and Linux, both on x86.
On Windows, C/C++ and .NET are used. On Linux, C/C++.
Why bother with homegrown solutions instead of using tried and tested technology? Unless you only employ programmers who are MENSA members with 30+ years of experience, your solution will be less robust and costlier to maintain.
You failed to mention any details about the platform you're using, so I'll assume a Unix-ish system. I would go with (and have been going with for years):
SNMP for monitoring
either daemontools or cron + scripting (as a distant second choice) for supervision and restart - see the sketch after this list
ssh/scp with RSA authentication for interactive intervention, remote command execution, and occasional transfers
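For the daemontools option, supervision boils down to a service directory containing an executable run script that keeps your program in the foreground; supervise re-runs it whenever it exits. A minimal sketch (the /service/myapp path and the myapp binary are illustrative names, not from the question):

    #!/bin/sh
    # /service/myapp/run - supervise restarts this whenever the process exits
    exec 2>&1
    exec /usr/local/bin/myapp --foreground   # hypothetical app; must not daemonize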
Related
I want to know if I can use a system (a runtime PC) with Windows 10 which has TwinCAT XAR installed as a remote system. In other words, can I select it as a target? Do we need any extra settings to make it work, or will it work just like any other hardware controller?
Yes, you can select a Windows 10 PC with TwinCAT XAR installed as a remote target; however, the performance may not be the same as what you would get by purchasing a known hardware configuration from Beckhoff.
As noted in the Beckhoff documentation:
For a reliable, optimized and performant realtime behavior, a completely aligned system design (hardware, BIOS, OS, drivers, realtime-runtime) is mandatory. Each single component of the control system has to be checked and optimized for this type of application - that is the one and only way for an optimal, reliable and performant realtime behavior. Beckhoff IPCs are optimized in each detail for this type of operation. There is no guarantee for proper, reliable realtime behavior on third-party PCs.
To use any Windows PC as a remote target, you need to ensure that the XAR is installed and that the Windows firewall is open to ADS. See also routing through a firewall. Specifically, you should open port 48898 to incoming TCP traffic and port 48899 to incoming UDP traffic in the Windows firewall. After this, you should be able to create a route normally using the IP address of the target PC through the ADS router on your development system.
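For example, assuming the standard Windows firewall is in use, the two rules could be added from an elevated command prompt like this (the rule names are arbitrary):

    netsh advfirewall firewall add rule name="ADS TCP" dir=in action=allow protocol=TCP localport=48898
    netsh advfirewall firewall add rule name="ADS UDP" dir=in action=allow protocol=UDP localport=48899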
You may also want to isolate a CPU core on the target system and dedicate TwinCAT tasks to it to ensure more consistent realtime behavior.
Finally, you need to purchase a license for the PLC if you intend to use it for a purpose other than development. This requires a performance level of P90 or higher and a license dongle; see this note about TwinCAT 3 licenses for non-Beckhoff IPCs.
I know this should be obvious, but I have found far too many DIFFERENT answers and the ones I've tried all fail (sometimes or all the time), so...
We are working on a service and some applications that run at startup on a Windows 10 computer that performs an automatic login. The service and applications require Windows sockets for TCP, UDP, and multicast. Most of the time, our programs fail because they get errors about the network not being ready and such. Currently, we work around this by adding a dumb, fixed-length delay before attempting to start, but we would prefer to start as soon as the network is ready to be used.
Our most recent attempt was to wait on the LanmanWorkstation (Workstation) service, but that generally reports it is running/ready before socket functions will succeed. I have also seen suggestions to use LanmanServer (Server) or Netman (Network Connections) or maybe even Tcpip (TCP/IP Protocol Driver), but I cannot find anything definitive. One would think this is a common requirement, so why would Microsoft make the info so difficult to find?
Ahem. Does anyone know a definitive method for a service or application to wait until winsock functions will succeed before using them? Short of a spin wait on a failing winsock function, of course!
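For what it's worth, one approach that avoids spin-waiting on a failing winsock call is to poll the adapter table instead: wait until at least one non-loopback interface is operational and has a unicast address. A minimal sketch, assuming that condition is an acceptable proxy for "network ready" (it is not a guarantee that every socket operation will then succeed):

    /* Sketch: wait until a non-loopback adapter is up with a unicast address.
       Link against iphlpapi.lib and ws2_32.lib. */
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iphlpapi.h>
    #include <stdlib.h>

    static int network_ready(void)
    {
        ULONG size = 16 * 1024;  /* real code should retry on ERROR_BUFFER_OVERFLOW */
        IP_ADAPTER_ADDRESSES *buf = malloc(size), *a;
        int ready = 0;
        if (!buf) return 0;
        if (GetAdaptersAddresses(AF_UNSPEC,
                GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST,
                NULL, buf, &size) == NO_ERROR) {
            for (a = buf; a != NULL; a = a->Next) {
                if (a->OperStatus == IfOperStatusUp &&
                    a->IfType != IF_TYPE_SOFTWARE_LOOPBACK &&
                    a->FirstUnicastAddress != NULL) {
                    ready = 1;
                    break;
                }
            }
        }
        free(buf);
        return ready;
    }

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
        while (!network_ready())
            Sleep(500);   /* back off instead of spinning hard */
        /* winsock calls are now likely (though not guaranteed) to succeed */
        return 0;
    }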
I'm developing a program that will need to run on Internet servers (a back-end component to be used by several cross-platform programs). I'm familiar with the security precautions to take (to prevent buffer overflows and SQL Injection attacks, for instance), but have never written a server program before, or any program that will be used on this scale.
The program needs to be able to serve hundreds or thousands of clients simultaneously. The protocols are designed for processing speed and to minimize the amount of data that must be exchanged, and the server side will be written in C. There will be both a Windows and a Linux version from the same code.
Questions:
How should the program handle communications - multiple threads, a single thread handling all the sockets in turn, or spawning a new process for every so many incoming connections (or for each one)?
Do I need to worry about things like memory fragmentation, since this program will need to run for months at a time?
What other design issues, specific to this kind of programming, might an experienced developer of cross-platform programs for desktop and mobile systems not be aware of?
Please, no suggestions to use a different language. That decision has already been made, for reasons I'm not at liberty to go into.
For the communications question, I'd use libevent or libev and non-blocking I/O. This way the operating system will take care of most of your scheduling problems. I'd also use a thread pool for processing tasks that are blocking by nature, so they don't block the main loop. And if you ever need to read or write large amounts of data to or from the disk, use mmap, again to let the OS handle as much as possible.
The basic advice is: use the OS as much as possible. If you want a good example of a program that does this, look at Varnish; it is very well written and performs fantastically.
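To make the libevent suggestion concrete, here is a minimal single-threaded echo server along those lines; the port is illustrative and error checking is trimmed for brevity:

    #include <string.h>
    #include <netinet/in.h>
    #include <event2/event.h>
    #include <event2/listener.h>
    #include <event2/bufferevent.h>
    #include <event2/buffer.h>

    static void read_cb(struct bufferevent *bev, void *ctx)
    {
        /* echo whatever arrived straight back to the client */
        evbuffer_add_buffer(bufferevent_get_output(bev),
                            bufferevent_get_input(bev));
    }

    static void event_cb(struct bufferevent *bev, short events, void *ctx)
    {
        if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
            bufferevent_free(bev);   /* closes the fd (BEV_OPT_CLOSE_ON_FREE) */
    }

    static void accept_cb(struct evconnlistener *lst, evutil_socket_t fd,
                          struct sockaddr *addr, int socklen, void *ctx)
    {
        struct event_base *base = evconnlistener_get_base(lst);
        struct bufferevent *bev =
            bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
        bufferevent_enable(bev, EV_READ | EV_WRITE);
    }

    int main(void)
    {
        struct event_base *base = event_base_new();
        struct sockaddr_in sin;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(9999);   /* illustrative port */

        evconnlistener_new_bind(base, accept_cb, NULL,
                                LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE,
                                -1, (struct sockaddr *)&sin, sizeof(sin));
        event_base_dispatch(base);    /* one thread drives all the sockets */
        return 0;
    }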
Having run multiple servers with over 3 years of uptime, and programs with a little over a year of uptime, I can still recommend building the setup so that the system gracefully recovers from a program error and from a server reboot.
Even though performance takes a hit when a program is restarted, you need to be able to handle that, since external circumstances can force the program into such a restart.
Don't try to reinvent the wheel where it isn't needed, and have a look at ZeroMQ or something like it to handle distribution of incoming communications (a minimal sketch follows). If you are allowed to, prototype the backends in a more forgiving language than C, like Python, then reimplement them in C while keeping the communications protocol.
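As a sketch of the ZeroMQ suggestion, a request/reply endpoint takes only a few calls; the endpoint string and the fixed "ack" reply are illustrative:

    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sock = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(sock, "tcp://*:5555");   /* illustrative endpoint */

        for (;;) {
            char buf[256];
            int n = zmq_recv(sock, buf, sizeof(buf) - 1, 0);  /* blocks for a request */
            if (n < 0) break;
            zmq_send(sock, "ack", 3, 0);  /* message framing is handled by zmq */
        }
        zmq_close(sock);
        zmq_ctx_destroy(ctx);
        return 0;
    }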
I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then transferred to a more portable platform, something that will include a microcontroller. There are many microcontrollers/companies out there that provide very nice and large TCP/IP libraries. This software will keep statistics on network performance.
The whole idea of cross-platform (uC - PC) development seems rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller, but I am no expert to judge anyway.
Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "Wrapper library" and "Matlab"... Any ideas?
Thx!
I do agree with you to some extent - you do want the target system and the system on which you are developing in the interim to be as close as possible (it is better if they match). Nevertheless, the idea of cross-platform development is to get you started on the firmware while the hardware is still being designed. Instead of doing it on Linux, what I would do is use an embedded OS simulator. Here are the steps:
- Step 1: Identify the OS for the embedded system; make sure that OS has a simulator that runs on a PC (Windows or Linux). Typical embedded OSes with simulators include VxWorks, μC/OS-II, QNX, and uClinux. Agreeing on the OS means that the hardware design team knows it is the right match for the hardware being designed, and there is a consensus that the hardware + OS + application being designed will meet the requirements of the system being developed.
- Step 2: Use this simulator to develop the application until the hardware that is being designed is brought up.
- Step 3: Once the first version of the hardware is ready and has been powered up, you can run your application with minimal changes - most likely no changes to the code, though changes to the linker settings and libraries are likely.
The idea of cross-platform development, done correctly, has immense advantages - it helps keep your project's development activities from being serialized.
Given that you mention it is a TCP/IP application, check for Berkeley sockets support and use it. Usually this API should not change if you are using a simulator, and in the extreme case that you have to change the OS for whatever reason, a Berkeley-sockets-based application is likely to be more portable.
Just assume you can use the standard BSD socket library (system calls are socket(), bind(), accept(), connect(), recv(), send(), with various options). Any OS with a TCP/IP stack will support this standard API.
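For illustration, a client that sticks strictly to those calls looks like this; the address and port are placeholders, and error handling is trimmed:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        char buf[128];
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7);                       /* placeholder: echo port */
        addr.sin_addr.s_addr = inet_addr("192.0.2.1");  /* placeholder address */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            send(fd, "ping", 4, 0);
            recv(fd, buf, sizeof(buf), 0);
        }
        close(fd);
        return 0;
    }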
There may be some caveats that you will run into if your embedded system uses a run-to-completion TCP/IP stack like uIP, but those should be easily solvable.
Also, only use standard C file I/O (fopen, fread, fwrite, printf, etc.). But keep in mind your target may not have a filesystem.
If using a simulator was not an option I would try to wrap the Linux functions up in interfaces that match those of the embedded system, if possible. That way any extra bulk in the system will be on the Linux development system (which is not resource constrained). Various embedded OSes and TCP/IP stacks can have vastly different architectures, so how easy this is can range from nearly impossible to no work at all.
If it turns out that writing wrapper libraries to make Linux look like the embedded system is too difficult, then I suggest at least keeping the embedded OS in mind while writing the Linux version, so that some of your functions can work on both systems (a sketch of the wrapper idea follows).
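A sketch of what such a wrapper interface might look like; the net_* names are hypothetical, not from any particular embedded SDK:

    /* net_io.h - hypothetical portability layer */
    #ifndef NET_IO_H
    #define NET_IO_H

    int  net_open(const char *host, unsigned short port); /* returns handle or -1 */
    int  net_send(int h, const void *buf, int len);       /* returns bytes sent or -1 */
    int  net_recv(int h, void *buf, int len);             /* returns bytes read or -1 */
    void net_close(int h);

    #endif

    /* net_io_linux.c implements these four calls with socket()/connect()/
       send()/recv(); net_io_target.c implements the same interface on the
       vendor's stack. Application code includes only net_io.h. */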
If it doesn't take too long, writing a Linux version of at least part of the code may help you shake out a few flaws in the overall design, at the very least. At most, it will allow you to test changes to the system more quickly, since loading code onto an embedded device often takes more time than you would like. It may also be easier to debug on your development machine.
Some embedded OSes will run on x86, and it would not surprise me if some of them have drivers that allow them to be run in virtual machines, so this may be an option as well.
Another thing to consider is the endianness and the word size of the development machine versus the embedded system. If these differ, you need to keep that in mind as you code. Getting this sort of thing right when you originally write the code is easier than going back and trying to fix it, in my opinion.
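For example, fixed-width types plus explicit byte-order conversion keep a wire format stable regardless of the endianness and word size of either machine (a hedged sketch; the counter field is illustrative):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl/ntohl; embedded stacks usually provide equivalents */

    /* Write a 32-bit counter into a packet in network (big-endian) order. */
    void put_counter(uint8_t *pkt, uint32_t count)
    {
        uint32_t wire = htonl(count);
        memcpy(pkt, &wire, sizeof(wire));   /* memcpy avoids alignment traps */
    }

    /* Read it back on any machine, whatever its native byte order. */
    uint32_t get_counter(const uint8_t *pkt)
    {
        uint32_t wire;
        memcpy(&wire, pkt, sizeof(wire));
        return ntohl(wire);
    }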
Ideally, I would connect an Ingenico/VeriFone terminal to the net via an Ethernet cable; the terminal would exclusively run a program that I wrote. This program would poll a webservice, beep when it detects some kind of info, wait for somebody's input, transmit said info back to the webservice, and print a ticket.
Is this possible with terminals from Ingenico/VeriFone/someone else?
I'm looking for the form factor/semi-ruggedness of such terminals. We don't need/want something bigger like a PC or laptop.
I have built applications on Verifone, Hypercom, and Trintech terminals. The Verifones are by far the easiest to develop for. They have simple flash and RAM file systems, apps are downloaded and run as files, and the OS (Verix) is POSIX-like with good C/C++ libraries, etc. The only downside is tool cost: VerixV terminals use ARM SDT (5K Euro per seat) and older Verix terminals (Coldfire based) use the SDS compiler. The dev kit comes with default keys to sign your apps (not the most secure, but you can password-protect download access on the terminal).
I have written lots of apps on these terminals, not just payment apps. The Verifone multi-app controller (VMAC) is a crock of shit, but it's very easy to run multiple apps yourself using pipes for inter-app comms (your apps won't run on third-party terminals that use VMAC, though).
We used Ethernet connectivity for FTP to manage app and config download as well as transaction batching. We also used WiFi on the latest terminals for the same (and 3G terminals too, but I didn't do any of the code on those). Verifone is PC-like in terms of code development, and we shared lots of library/app code between WIN32/Verix/VerixV and Linux. Verifone terminals are well built and can take a lot of abuse, but then most serious terminal manufacturers do a good job these days.