What is the difference between NRDP and NCPA in Nagios?

I am very familiar with using Nagios with NRDP, which I use for handling traps from remote servers, but I am unable to understand what NCPA is. Can anyone explain what NCPA is actually required for?
I have seen in the Nagios agent comparison linked below that NCPA is rated the best among the other agents such as NRDS, NSClient++, and NRPE.
I am unable to understand what NCPA is from the official definitions quoted below.
NRDP
Nagios Remote Data Processor (NRDP) is a flexible data transport mechanism and processor for Nagios. It is designed with a simple and powerful architecture that allows it to be easily extended and customized to fit individual users' needs. It uses standard ports and protocols (HTTP(S) and XML) and can be implemented as a replacement for NSCA.
NCPA
NCPA is a cross-platform monitoring agent that runs on Windows, Linux/Unix, and Mac OS X machines. Its features include both active and passive checks, remote management, and a local monitoring interface.

You should compare NCPA with NSClient++; they are both agents that can run on servers and actively or passively execute checks through commands over different protocols, such as NRPE, NSCA and NRDP.
Agents: NSClient++, NCPA
Protocols - Active:
NRPE => https://docs.nsclient.org/reference/windows/NSClientServer/
Protocols - Passive:
NSCA => https://docs.nsclient.org/reference/client/NSCAClient/
NRDP => https://docs.nsclient.org/reference/client/NRDPClient/
FYI, IMHO NSClient++ is much better than NCPA, as it has, among other features, integrated real-time eventlog monitoring.
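To make the active/passive distinction concrete, here is a minimal Python sketch, not a production check: the host names, tokens and service names are placeholders, it assumes NCPA's default HTTPS API on port 5693, and it assumes an NRDP endpoint that accepts token, cmd and XMLDATA POST fields (check your NRDP version's docs for the exact field names).

import requests

# Active check: the Nagios server pulls a metric from the NCPA agent's HTTPS API.
r = requests.get("https://monitored-host:5693/api/cpu/percent",
                 params={"token": "mytoken"},
                 verify=False)  # NCPA ships with a self-signed certificate by default
print(r.json())

# Passive check: the monitored host pushes a result to the Nagios server via NRDP.
xml = """<?xml version='1.0'?>
<checkresults>
  <checkresult type='service'>
    <hostname>monitored-host</hostname>
    <servicename>CPU Load</servicename>
    <state>0</state>
    <output>OK - CPU at 12%</output>
  </checkresult>
</checkresults>"""
r = requests.post("https://nagios-server/nrdp/",
                  data={"token": "mytoken", "cmd": "submitcheck", "XMLDATA": xml})
print(r.text)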


Can a simple PC (Windows 10) with TwinCAT XAR be used as a target from a host computer running TwinCAT 3 XAE?

I want to know if I can use a system (runtime PC) running Windows 10 with TwinCAT XAR installed as a remote system. In other words, can I select it as a target? Do we need any extra settings to make it work, or will it work just like any other hardware controller?
Yes, you can select a Windows 10 PC with TwinCAT XAR installed as a remote target; however, the performance may not be the same as you would get by purchasing a known hardware configuration from Beckhoff.
As noted in the Beckhoff documentation:
For a reliable, optimized and performant realtime behavior, a completely aligned system design (hardware, BIOS, OS, drivers, realtime-runtime) is mandatory. Each single component of the control system has to be checked and optimized for this type of application - that is the one and only way for an optimal, reliable and performant realtime behavior. Beckhoff IPCs are optimized in each detail for this type of operation. There is no guarantee for proper, reliable realtime behavior on third-party PCs.
To use any Windows PC as a remote target, you need to ensure that the XAR is installed and that the Windows firewall is open to ADS. See also routing through a firewall. Specifically, you should open port 48898 to incoming TCP traffic and port 48899 to incoming UDP traffic in the Windows firewall. After this, you should be able to create a route normally using the IP address of the target PC through the ADS router on your development system.
You may also want to isolate a CPU core on the target system and dedicate TwinCAT tasks to it to ensure more consistent realtime behavior.
Finally, you need to purchase a license for the PLC if you intend to use it for a purpose other than development. This requires the higher performance level (>= P90) and a license dongle; see this note about TwinCAT 3 licenses for non-Beckhoff IPCs.

Migrating a particular TCP connection using CRIU tools

Is it possible to migrate a single, particular TCP connection inside a running process from one machine to another using the CRIU tools on Linux?
What I want is to dump a particular TCP connection's information to memory and transfer this information to a peer machine. On that machine, I will use the dumped information to recreate the migrated TCP connection. Does anyone have an example or tutorial in C?
I am aware of other solutions such as SockMi, which provides a kernel module plus user-space APIs to migrate a given TCP socket. However, I want to use the CRIU tools since they are part of the Linux mainline.
Right now we only have the TCP migration functionality integrated into the CRIU tool. It sits in the sk-tcp.c file; the whole TCP-repair code is there, though it is bound to the rest of CRIU.
On the other hand, we have been asked for TCP-only migration for quite a while. It is possible to pull this code into something like libcriutcp.so, but that will require patching. You're welcome to participate in https://github.com/xemul/criu/issues/72
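For a feel for what the TCP-repair code in sk-tcp.c drives, here is a minimal sketch of the underlying kernel interface. It is written in Python only for brevity; the C code in CRIU uses the same setsockopt()/getsockopt() calls. The numeric constants come from linux/tcp.h, the whole thing requires CAP_NET_ADMIN, and a real migration also has to transfer queued data, TCP options and window state, which this sketch omits.

import socket

# Constants from linux/tcp.h (not all of them are exposed by the socket module)
TCP_REPAIR = 19
TCP_REPAIR_QUEUE = 20
TCP_QUEUE_SEQ = 21
TCP_RECV_QUEUE = 1
TCP_SEND_QUEUE = 2

def dump_sequence_numbers(sock):
    # Put the established socket into repair mode; it is frozen from the
    # network's point of view and closing it will not send FIN or RST.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR, 1)
    seqs = {}
    for name, queue in (("recv", TCP_RECV_QUEUE), ("send", TCP_SEND_QUEUE)):
        sock.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR_QUEUE, queue)
        seqs[name] = sock.getsockopt(socket.IPPROTO_TCP, TCP_QUEUE_SEQ)
    return seqs

def restore_connection(local_addr, peer_addr, seqs):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR, 1)
    for name, queue in (("recv", TCP_RECV_QUEUE), ("send", TCP_SEND_QUEUE)):
        s.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR_QUEUE, queue)
        s.setsockopt(socket.IPPROTO_TCP, TCP_QUEUE_SEQ, seqs[name])
    s.bind(local_addr)    # the original local IP/port of the connection
    s.connect(peer_addr)  # in repair mode this only sets state, no packets are sent
    s.setsockopt(socket.IPPROTO_TCP, TCP_REPAIR, 0)  # leave repair mode, connection is live
    return s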

How does Check_MK work with Nagios?

Hi, I have just installed a clean copy of Nagios and Check_MK, but I don't understand how they work together. Nagios uses NRPE to connect to clients and perform checks, which means that some Nagios plugins have to sit on the client and return results when they are called. But how does Check_MK tie into Nagios? Does it use check_mk_agent to replace all the Nagios plugins for performing its checks? Also, do the Nagios configurations have to be fully set up, with all the clients to be checked already in place, and then ported to the Check_MK interface (WATO), or can clients be added to Check_MK without being present in the Nagios configuration? This is where my confusion lies, and I can't find a concrete answer to this question anywhere. Please help.
Check_MK uses the Nagios core for these tasks:
Managing check results
Triggering alarms
Managing planned downtimes
Testing host availability
Detecting network failures
As you can see at the bottom of this page: http://mathias-kettner.com/checkmk_monitoring_system.html
Check_MK needs both: a client-side monitoring agent and a server-side monitoring system.
The server-side monitoring system calls the agent on the host and passes the check results to the monitoring core (usually Nagios, but there is also a new core built just for Check_MK).
What makes Check_MK different from mechanisms such as NRPE is that the results for all checks are sent to the monitoring system in one package. If you run the agent on a host in a shell, it will return something like this:
➜ ~ check_mk_agent
<<<df>>>
/dev/mapper/MyStorage-rootvol ext4 15350768 13206900 1341052 91% /
dev devtmpfs 4022348 0 4022348 0% /dev
plus many more lines ....
So the server-side part of Check_MK splits these packages into single checks so that the Nagios core can handle them.
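To illustrate the idea (this is only a sketch, not Check_MK's actual parser), splitting such a package into its sections is straightforward:

import re
import subprocess

# Run the agent (locally here; the Check_MK server normally fetches this output
# over SSH, or from the agent listening on TCP port 6556) and capture it.
raw = subprocess.run(["check_mk_agent"], capture_output=True, text=True).stdout

# Split the single package into one block of lines per <<<section>>> header.
sections = {}
current = None
for line in raw.splitlines():
    match = re.match(r"<<<(.+?)>>>", line)
    if match:
        current = match.group(1)
        sections[current] = []
    elif current is not None:
        sections[current].append(line)

# Each section (df, mem, cpu, ...) is then turned into individual service checks.
print(sections.get("df"))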
So Check_MK won't replace your existing checks; it doesn't care about them. It will just add more.
You don't necessarily need WATO to configure Check_MK. WATO is just an interface for the configuration; it can also be done with plain text files. You should start with WATO and take a look at the configuration it generates.

Windows Azure - Web Role and Virtual Machines Securely Communicating

I am attempting to deploy an app to Windows Azure, and I am having some trouble figuring out how to achieve my optimal configuration because of the lack of documentation and the newness of the Azure infrastructure. I need to have two virtual machines configured (one Linux box and one Windows Server with SQL Server) that communicate with one Web Role instance. The Web Role should have the only endpoint accessible from the outside world. It should be able to communicate with SQL Server and the Linux machine (these machines don't need to communicate with each other). I can achieve this if I open up endpoints on the VMs (for example port 1433 on the Windows machine and the same port in the VM's firewall), however I am concerned about the security risk of doing this and would rather have the Web Role communicate directly with my virtual machines WITHOUT opening up an endpoint (using the Azure Portal). I have read some examples that refer to deploying the items as a cloud service, but none include a Web Role AND a CUSTOMIZED virtual machine. I have seen references made to using a Virtual Network, but no examples. I have looked everywhere for a solution with no success. This seems like a common scenario, so I don't think it should be this difficult. Am I missing something?
Well, you have two options here: use Windows Azure Connect or use Virtual Networks. Since you're really trying to build a network of different machines, I would suggest using a Virtual Network (I think this is the most flexible option). Connecting your Virtual Machines to your Cloud Service is pretty easy:
Create a Virtual Network as described here: Create a Virtual Network in Windows Azure
Add your Virtual Machines to that network as described here: Add a Virtual Machine to a Virtual Network
Modify the ServiceConfiguration.cscfg of your cloud service to connect to your Virtual Network. The schema is available on MSDN or you can follow the blog post on Michael Washam's blog.
I marked the answer above as correct because it does provide the answer, especially if you are only creating a virtual network with MS products. What they fail to point out in the majority of their documentation is that VN functionality is limited for Linux machines while VMs and VNs are in their current preview. However, this does not mean you can't add a Linux VM to a VN. After searching for some time and piecing information together, I found that Linux machines can be added to an existing VN rather simply by using PowerShell cmdlets. The following generic script can be run from the PowerShell ISE with your own information in order to create a Linux VM and add it to your VN.
# Build the VM configuration, provision it as a Linux machine and place it in the subnet
$vm = New-AzureVMConfig -Name $vmname -InstanceSize ExtraSmall -ImageName $img |
    Add-AzureProvisioningConfig -Linux -LinuxUser $user -Password $pass |
    Set-AzureSubnet -SubnetNames $subnet
# Create the VM in the cloud service and attach it to the virtual network
New-AzureVM -ServiceName $cloudSvcName -AffinityGroup $affinitygroup -VNetName $vnetname -VMs $vm
Hope this helps keep someone from pulling their hair out.

Protocol for remote process management

In short: Is there any known protocol for remote process management?
I have a system that contains several applications, each of which has its own computer on a local network. When the applications are up and running, they communicate without any problems.
What I'm interested in is a protocol to manage the remote applications' startup, shutdown and monitoring. By monitoring I mean getting (predefined) error codes when something goes wrong. Ideally I would control the whole system from one managing application and get status about what's going on.
I once worked in a place that wrote an in-house protocol that did this. However, I wish to avoid writing it again if someone already figured this out.
Edit: some more details:
Platforms in use are Windows and Linux, both on x86.
On Windows, C/C++ and .NET are used. On Linux, C/C++.
Why bother with homegrown solutions instead of using tried and tested technology? Unless you only employ programmers who are MENSA members with 30+ years of experience, your solution will be less robust and costlier to maintain.
You failed to mention any details about the platform you're using, so I'll assume a Unix-ish system. I would go with (and have been going with for years):
SNMP for monitoring
either daemontools or cron + scripting (as a distant second choice) for supervision and restart
ssh/scp with RSA authentication for interactive intervention, remote command execution, and occasional transfers
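If you do end up scripting the ssh part of this yourself, the managing application can stay very small. A minimal sketch (the host list, service name and check command are placeholders, and it assumes key-based ssh authentication is already set up):

import subprocess

def remote(host, command):
    # Run a command on the remote host over ssh; the exit code doubles as the error code.
    result = subprocess.run(["ssh", host, command],
                            capture_output=True, text=True, timeout=30)
    return result.returncode, result.stdout.strip()

hosts = ["app1.example.local", "app2.example.local"]
for host in hosts:
    code, output = remote(host, "pidof myapp > /dev/null")  # placeholder health check
    if code == 0:
        print(host, "running")
    else:
        print(host, "problem, exit code", code)

For startup and shutdown the same remote() helper can invoke the start/stop scripts, while SNMP remains the better fit for continuous monitoring of the hosts themselves.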
