net-snmp agentx re-enable

I have embedded a net-snmp AgentX subagent in my C++ application code on Ubuntu Linux. I want to disable the AgentX subagent once it is working and then re-enable it again. I can successfully set up the agent, poll the MIB using snmpget from the command line, and disable the AgentX socket connection using snmp_shutdown, but I am unable to re-enable the socket connection once I have disabled it.
Appreciate any help/pointers.
I use the following code to initialise the SNMP library and the agentx socket connection.
In the beginning, initialise the AgentX subagent -
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
netsnmp_ds_set_int(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL, 120);
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_X_SOCKET, m_agentx_socket.c_str());
/* initialize the agent library */
init_agent("MyApp");
// initialise MIB module
init_snmp("MyApp");
Then I poll the MIB using snmpget and disable the connection using the calls below -
snmp_shutdown("MyApp");
SOCK_CLEANUP;
Works fine so far.
Then I re-enable the connection using the code below but this does not work.
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
netsnmp_ds_set_int(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL, 120);
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_X_SOCKET, m_agentx_socket.c_str());
/* initialize the agent library */
init_agent("MyApp");
init_snmp("MyApp");

I think you have to re-run the binary itself after it has shut down.
You have not clarified here why you want to restart AgentX.
If you are doing this to fetch some data frequently, then I guess you can try an infinite loop with a sleep of some time span in your code; that would be a better option.

I found the following information in the README.agentx file from net-snmp-5.7.2 (currently visible at http://www.net-snmp.org/docs/README.agentx.html):
Similarly, a subagent will not be able to re-register in place of a
defunct colleague until the master agent has received three requests
for the dead connection (and hence unregistered it).
It seems likely, therefore, that the master still has your subagent registered despite your attempt at a clean shutdown. Perhaps you could try making three or more requests while your subagent is disabled and then proceeding with your re-registration.
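For what it's worth, here is a minimal sketch of the disable/re-enable cycle using only the calls already shown in the question. Whether init_agent()/init_snmp() can safely be re-run after snmp_shutdown() in your net-snmp version is an assumption to verify; the sketch also assumes the master has already unregistered the dead connection as described above.

/* Hedged sketch: disable, then later re-enable, an AgentX subagent. */
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

void agentx_disable(void)
{
    snmp_shutdown("MyApp");   /* closes the AgentX session */
    SOCK_CLEANUP;             /* no-op on Linux, needed on Win32 */
}

void agentx_enable(const char *agentx_socket)
{
    SOCK_STARTUP;
    netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
                           NETSNMP_DS_AGENT_ROLE, 1);  /* subagent */
    netsnmp_ds_set_int(NETSNMP_DS_APPLICATION_ID,
                       NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL, 120);
    netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
                          NETSNMP_DS_AGENT_X_SOCKET, agentx_socket);
    init_agent("MyApp");
    /* re-register your MIB modules here, then: */
    init_snmp("MyApp");
}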

Related

VxWorks initiating connect() it fails at startup, but succeeds later

I'm using VxWorks 5.4 and attempting to connect to a server via TCP: a server I'm going to be sending logs to. For some reason, at boot the connect fails, or takes even up to 6 seconds, and it blocks the continuation of the task the connection attempt was made in, which obviously is a big no-no.
I have checked whether the problem is on the server side by writing a simple C program on Windows that connects to that server, and it takes no time at all (milliseconds).
I have "solved" the problem by making a task that attempts connectWithTimeout every 1-2 seconds, and it does work (it initiates the connection after around 2 failures, in around 20 ms), but I don't really like this approach and would rather initiate the actual connection once whatever I'm missing is up, instead of checking whether I can connect every time.
After trying to investigate what the issue could have been, it turned out the problem was in how a session is closed between my system and the server.
When a client runs in some app on Windows or whatever other system, shutting it down goes through a process that closes the session properly.
That is not the case on my system, where closing it essentially means unplugging the wire, so my system never goes through a shutdown process that properly closes the session.
After the system is up again, the connect cannot be performed because my system tries to establish the same session as the "dead" one, which the server still thinks is running.
Solving the problem was easy from the server side: just add keepalive functionality, and if the client doesn't respond for a period you decide on, close the session.

Bluez, multiple GATT servers running on the same computer

In other socket applications you can't open a port that is already in use, but bluetoothd seems to accept several listening GATT servers running in parallel; how is that possible?
I am trying to set up a GATT server using bluez 5.35 on Raspberry Pi Jessie. I have made an application that starts the GATT server much like the example btgatt-server.c, using an l2cap socket. I have a custom characteristic that a client application can connect to and use. I have also enabled advertising using hci commands (it is enabled just after the listen() call on the socket).
I have set the application to auto-start in rc.local. My problem is that after reboot I sometimes don't see my own characteristics but instead get a completely different list of services/characteristics. If I don't start my own application and only enable advertising (sudo hciconfig hci0 leadv), I see the same list, so a GATT server seems to be running by default.
What mechanism in bluez decides whether my services/characteristics or the other ones (loaded by default plugins, I guess) are visible? They are never combined and visible at the same time, and I don't see any error messages during my application's startup even when I can't see the characteristics from the client and get nothing from accept(). How can I be sure my characteristics are always visible?
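For reference, the kind of raw L2CAP listening setup the question describes (modeled on btgatt-server.c) looks roughly like the sketch below; this shows the application's own socket, not bluetoothd's internal dispatch:

/* Sketch of an LE ATT listening socket, as in btgatt-server.c. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/l2cap.h>

#define ATT_CID 4  /* fixed L2CAP channel ID for ATT */

int att_listen(void)
{
    struct sockaddr_l2 addr;
    int fd = socket(PF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.l2_family = AF_BLUETOOTH;
    addr.l2_cid = htobs(ATT_CID);
    addr.l2_bdaddr_type = BDADDR_LE_PUBLIC;
    bacpy(&addr.l2_bdaddr, BDADDR_ANY);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        close(fd);
        return -1;
    }
    return fd;  /* accept() ATT connections on this descriptor */
}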

upgrade server executable without losing user's connections

I need to develop a mechanism to upgrade a running daemon in a production environment to a new version without losing clients' (TCP) connections. Something similar to what nginx does when you upgrade it to a new version. I need this for bug fixes or minor version releases, which may happen once a day. The daemon is developed in C for the Linux platform.
The process for the upgrade would be like this:
1. The new_daemon would be run from the command line, specifying the process id of the old_daemon.
2. The new_daemon would connect via socket to the old_daemon to send/receive data and messages.
3. The new_daemon would send the old_daemon a message to stop listening on the PORT used to receive clients' connections. After confirming that listening has stopped, the new_daemon would start listening on PORT.
4. The new_daemon would send the old_daemon a message to pass the currently open file descriptors of the users' connections. Using the sendmsg() system call, the old_daemon would pass the new_daemon all resources it has allocated with the kernel: not only the connections but also all open files (see the sketch after this list).
5. The new_daemon would send the old_daemon a message to pass all global memory variables, and the old_daemon would send them over the socket connection between the two processes.
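For reference, the descriptor passing in step 4 is the standard SCM_RIGHTS mechanism. A minimal sketch, where upgrade_sock is a hypothetical Unix-domain socket already connected between old_daemon and new_daemon:

/* Pass one open fd to the peer process; the kernel duplicates it. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int upgrade_sock, int fd_to_pass)
{
    struct msghdr msg;
    struct iovec iov;
    struct cmsghdr *cmsg;
    char dummy = 'F';                  /* must carry at least 1 data byte */
    char ctrl[CMSG_SPACE(sizeof(int))];

    memset(&msg, 0, sizeof(msg));
    memset(ctrl, 0, sizeof(ctrl));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(upgrade_sock, &msg, 0) < 0 ? -1 : 0;
}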
This process is very complex, so I would like to ask if someone can suggest a better process, or maybe there is some methodology to do this more easily? The goal is to have the least downtime during the upgrade process.
TIA
Another alternative is to force the old_daemon to fork()/exec() the new_daemon and immediately stop accepting. The new_daemon would inherit the listening socket, existing connections, and open files (unless they are fcntl'd to FD_CLOEXEC) automagically.
That said, I don't think there is a clean way to hand over incomplete jobs (as I understand steps 4 and 5 try to accomplish). If possible, let the old_daemon complete them.
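A sketch of that fork()/exec() handover; the binary path and the --listen-fd flag are made-up conventions for illustration:

/* old_daemon side: clear FD_CLOEXEC and exec the new binary. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void exec_new_daemon(int listen_fd)
{
    char fd_arg[16];
    int flags = fcntl(listen_fd, F_GETFD);

    /* make sure the listening socket survives exec() */
    fcntl(listen_fd, F_SETFD, flags & ~FD_CLOEXEC);

    if (fork() == 0) {
        snprintf(fd_arg, sizeof(fd_arg), "%d", listen_fd);
        execl("/usr/local/sbin/new_daemon", "new_daemon",
              "--listen-fd", fd_arg, (char *)NULL);
        _exit(1);  /* only reached if exec failed */
    }
    /* the parent now stops accepting and drains its in-flight work */
}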
One alternative is to write most of your daemon as a shared library and use dlopen to link the new functions into the running process. This means some parts can't be changed and you might have concurrency issues, but it removes the need for IPC.
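A sketch of that approach, assuming a hypothetical libdaemon_core.so that exports handle_request(); note that dlclose() while another thread is still inside an old function is exactly the concurrency issue mentioned:

/* Reload the work function from the (re-built) shared library. */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*handler_fn)(int client_fd);

handler_fn reload_handler(void **lib)
{
    if (*lib)
        dlclose(*lib);                 /* unmap the old version */
    *lib = dlopen("./libdaemon_core.so", RTLD_NOW);
    if (*lib == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;
    }
    return (handler_fn)dlsym(*lib, "handle_request");
}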

Reconnecting with hiredis

I'm trying to reconnect to the Redis server on disconnect.
I'm using redisAsyncConnect and I've set up a callback on disconnect. In the callback I try to reconnect with the same command I use at the very start of the program to establish the connection, but it's not working; I can't seem to reconnect.
Can anyone help me out with an example?
Managing Redis (re)connections asynchronously is a bit tricky when an event loop is used.
Here is an example implementing a small zset polling daemon connecting to a list of Redis instances, which is resilient to disconnection events. The ae event loop is used (it is the one used by Redis itself).
http://gist.github.com/4149768
Check the following functions:
connectCallback
disconnectCallback
checkConnections
reconnectIfNeeded
The main daemon loop does its activity only when the connection is available. Once per second, a timer-initiated callback checks whether some connections have to be reestablished. We have found this mechanism quite reliable.
Note: error management is crude in this example for brevity's sake. Real production code should manage errors more gracefully.
One tricky point when dealing with multiple asynchronous connections is that there is no user-defined contextual data passed as a parameter to the corresponding callbacks. Cleaning up the data associated with a connection after a disconnection event can be a bit difficult.
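In the same spirit as the gist, a minimal reconnect skeleton; the address and the globals are placeholders, and attaching the context to the ae loop (redisAeAttach()) is left as a comment:

#include <hiredis/hiredis.h>
#include <hiredis/async.h>

static redisAsyncContext *ctx = NULL;
static int need_reconnect = 1;   /* start disconnected */

static void connectCallback(const redisAsyncContext *c, int status)
{
    if (status != REDIS_OK) {    /* connect attempt failed */
        need_reconnect = 1;
        ctx = NULL;
    }
}

static void disconnectCallback(const redisAsyncContext *c, int status)
{
    need_reconnect = 1;          /* hiredis frees the context itself */
    ctx = NULL;
}

/* call this from a once-per-second timer in the event loop */
static void reconnectIfNeeded(void)
{
    if (!need_reconnect)
        return;
    ctx = redisAsyncConnect("127.0.0.1", 6379);
    if (ctx == NULL)
        return;                  /* allocation failed; retry next tick */
    if (ctx->err) {
        redisAsyncFree(ctx);     /* failed immediately; retry next tick */
        ctx = NULL;
        return;
    }
    redisAsyncSetConnectCallback(ctx, connectCallback);
    redisAsyncSetDisconnectCallback(ctx, disconnectCallback);
    /* redisAeAttach(loop, ctx);   attach to the ae event loop here */
    need_reconnect = 0;
}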

How to get a more stable socket connection in Linux/C

I'm running a game website where users connect using an Adobe Flash client to a C server running on a Fedora Linux box.
Often users complain about disconnects. Usually they're "Connection reset by peer"-disconnects.
Is there any way to make the connection more stable or does it all depend on the route from the user host to my server?
One thing I tried to make it more stable is sending PING in clear text every other minute to avoid timeout problems.
Anyone got more ideas?
You are not exhausting the socket, memory, or CPU limits that the server process is given on the server, are you?
Do check with ulimit.
Also, if possible, try to trace the error in the source code (when an RST packet is sent), i.e. when a send() or accept() returns an error value. In such cases print a debug message into the logs; if you really fancy debugging it, do a simulation of the server:
run it into debug mode on a separate machine (possibly a clone of the server)
simulate thousands of connections (or find a network harnessing program)
backtrace the call and/or sniff the connection
Where are you running the server? At home? At work? At a hosting facility?
This will make a very big difference.
Can you design your app to connect to two sockets on the server and then load balance or make it active/passive (or active/active)?
You can use the SO_KEEPALIVE TCP socket option.
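For example (a sketch; the 60/10/3 values are arbitrary, and TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific):

/* Turn on TCP keepalive for one connected socket. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd)
{
    int on = 1, idle = 60, interval = 10, count = 3;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    /* probe after 60 s of silence, every 10 s, give up after 3 misses */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
    return 0;
}

Without the Linux-specific knobs, default keepalives only start probing after about two hours of idle time.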
