Net-SNMP: snmpbulkget - genError failure

I am attempting to implement Net-SNMP 5.7.3 on a 64-bit Linux server. Currently I have hit a roadblock: I cannot determine why my Linux server returns an error code whenever a getBulkRequest comes in for an OID that has MIB objects underneath it. I am able to get the info for each object when I use snmpget against its corresponding OID, but when I perform snmpbulkget, I keep getting
Error in packet.
Reason: (genError) A general failure occured
Failed object: MY-ELEMENT-MIB::myModel.0
I think my problem lies in my Agent's SNMP configuration file, but I have been unable to resolve this. Anyhow, in case I am wrong, I am posting everything I have done thus far. Hopefully my "cleaned up" code makes it easier to follow and helps someone else out in the future; I struggled to get this far.
Here is the snmpd.conf I use to start my agent (placed in /etc/snmp/). I am using v2c and leaving it open to the public community.
###############################################################################
#
# EXAMPLE.conf:
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See the 'snmpd.conf(5)' man page for details
#
# Some entries are deliberately commented out, and will need to be explicitly activated
#
###############################################################################
#
# AGENT BEHAVIOUR
#
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
agentAddress udp:161,udp6:[::1]:161
###############################################################################
#
# ACCESS CONTROL
#
####
# First, map the community name (COMMUNITY) into a security name
# (local and mynetwork, depending on where the request is coming
# from):
# sec.name source community
com2sec local localhost secret42
com2sec cust1_sec default public
####
# Second, map the security names into group names:
# sec.model sec.name
group MyRWGroup v1 local
group MyRWGroup v2c local
group cust1_grp v1 cust1_sec
group cust1_grp v2c cust1_sec
####
# Third, create a view for us to let the groups have rights to:
# incl/excl subtree mask
view all included .1
#view cust1_v excluded .1
#view cust1_v included sysUpTime.0
#view cust1_v included interfaces.ifTable.ifEntry.ifIndex.1 ff.a0
####
# Finally, grant the groups access to their views:
# context sec.model sec.level match read write notif
access MyRWGroup "" any noauth exact all all none
access cust1_grp "" any noauth exact all all none
# Full read-only access for SNMPv3
#rouser authOnlyUser
# Full write access for encrypted requests
# Remember to activate the 'createUser' lines above
#rwuser authPrivUser priv
###############################################################################
#
# SYSTEM INFORMATION
#
# Note that setting these values here, results in the corresponding MIB objects being 'read-only'
# See snmpd.conf(5) for more details
sysLocation Server Room
sysContact Me <me@example.org>
# Application + End-to-End layers
sysServices 72
###############################################################################
#
# ACTIVE MONITORING
#
# send SNMPv1 traps
#trapsink localhost:162 public
# send SNMPv2c traps
#trap2sink localhost public
# send SNMPv2c INFORMs
#informsink localhost public
# Note that you typically only want *one* of these three lines
# Uncommenting two (or all three) will result in multiple copies of each notification.
#
# Event MIB - automatically generate alerts
#
# Remember to activate the 'createUser' lines above
#iquerySecName internalUser
#rouser internalUser
# generate traps on UCD error conditions
#defaultMonitors yes
# generate traps on linkUp/Down
#linkUpDownNotifications yes
###############################################################################
#
# EXTENDING THE AGENT
#
#
# AgentX Sub-agents
#
# Run as an AgentX master agent
master agentx
# Listen for network connections (from localhost)
# rather than the default named socket /var/agentx/master
#agentXSocket tcp:localhost:705
After I start snmpd (the agent), I start a sub-agent daemon in my C++ application. Here is the daemon example I followed:
http://www.net-snmp.org/tutorial/tutorial-5/toolkit/demon/example-demon.c
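For reference, the core of that daemon boils down to the sketch below. This is only a condensed illustration of the tutorial's flow, not my exact code; the sub-agent name "myMibSubAgent" is assumed, and the tutorial's signal handling is omitted.

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

int main(void)
{
    int keep_running = 1;   // the tutorial toggles this from a signal handler

    // Become an AgentX sub-agent that connects to the running snmpd master.
    netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);

    init_agent("myMibSubAgent");     // assumed sub-agent name
    // ...call the OID registration routine here (init_myMibSubAgent() in my case)...
    init_snmp("myMibSubAgent");      // reads the config and opens the AgentX connection

    // Main loop: block and process incoming requests forwarded by the master.
    while (keep_running)
        agent_check_and_process(1);

    snmp_shutdown("myMibSubAgent");
    return 0;
}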
Here is the C++ sub-agent code I implemented. The daemon calls my init_myMibSubAgent() instead of init_nstAgentSubagentObject().
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>
#include "myMibSubAgent.h"
#include "SNMPAppData.h"
#include <string>
using namespace std;
void CMyMibSubAgent::init_myMibSubAgent(void)
{
static oid myGeneral_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2};
// myModel; DisplayString
static oid myModel_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 1}; // We don't add a 0 to the end of a watched scalar registration OID
netsnmp_register_watched_scalar(
netsnmp_create_handler_registration("myModel", NULL,
myModel_oid, OID_LENGTH(myModel_oid),
HANDLER_CAN_RWRITE),
netsnmp_create_watcher_info(&gSNMPAppData.myGeneral.data.myModel, sizeof(gSNMPAppData.myGeneral.data.myModel),
ASN_OCTET_STR, WATCHER_SIZE_STRLEN));
// mySerialCode; DisplayString
static oid mySerialCode_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 2}; // We don't add a 0 to the end of a watched scalar registration OID
netsnmp_register_watched_scalar(
netsnmp_create_handler_registration("mySerialCode", NULL,
mySerialCode_oid, OID_LENGTH(mySerialCode_oid),
HANDLER_CAN_RWRITE),
netsnmp_create_watcher_info(&gSNMPAppData.myGeneral.data.mySerialCode, sizeof(gSNMPAppData.myGeneral.data.mySerialCode),
ASN_OCTET_STR, WATCHER_SIZE_STRLEN));
// myType; INTEGER
static oid myType_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 3, 0};
netsnmp_register_int_instance("myType", myType_oid, OID_LENGTH(myType_oid), &gSNMPAppData.myGeneral.data.myType, NULL );
// mySoftwareRev; DisplayString
static oid mySoftwareRev_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 4}; // We don't add a 0 to the end of a watched scalar registration OID
netsnmp_register_watched_scalar(
netsnmp_create_handler_registration("mySoftwareRev", NULL,
mySoftwareRev_oid, OID_LENGTH(mySoftwareRev_oid),
HANDLER_CAN_RWRITE),
netsnmp_create_watcher_info(&gSNMPAppData.myGeneral.data.mySoftwareRev, sizeof(gSNMPAppData.myGeneral.data.mySoftwareRev),
ASN_OCTET_STR, WATCHER_SIZE_STRLEN));
// myState; INTEGER
static oid myState_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 5, 0};
netsnmp_register_int_instance("myState", myState_oid, OID_LENGTH(myState_oid), &gSNMPAppData.myGeneral.data.myState, NULL );
// mySeverityLevel; INTEGER
//static oid mySeverityLevel_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 6, 0};
//netsnmp_register_int_instance("mySeverityLevel", mySeverityLevel_oid, OID_LENGTH(mySeverityLevel_oid), &gSNMPAppData.myGeneral.data.mySeverityLevel, NULL );
// myAssetTag; DisplayString
static oid myAssetTag_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 2, 7}; // We don't add a 0 to the end of a watched scalar registration OID
netsnmp_register_watched_scalar(
netsnmp_create_handler_registration("myAssetTag", NULL,
myAssetTag_oid, OID_LENGTH(myAssetTag_oid),
HANDLER_CAN_RWRITE),
netsnmp_create_watcher_info(&gSNMPAppData.myGeneral.data.myAssetTag, sizeof(gSNMPAppData.myGeneral.data.myAssetTag),
ASN_OCTET_STR, WATCHER_SIZE_STRLEN));
// myTtMaxTargets; Integer32
static oid myTtMaxTargets_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 1, 0};
netsnmp_register_int_instance("myTtMaxTargets", myTtMaxTargets_oid, OID_LENGTH(myTtMaxTargets_oid), &gSNMPAppData.myTrapTargets.data.myTtMaxTargets, NULL );
// myTtCfgTableNextIndex; Integer32
static oid myTtCfgTableNextIndex_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 2, 0};
netsnmp_register_int_instance("myTtCfgTableNextIndex", myTtCfgTableNextIndex_oid, OID_LENGTH(myTtCfgTableNextIndex_oid), &gSNMPAppData.myTrapTargets.data.myTtCfgTableNextIndex, NULL );
// myTtCfgTable
for (int i=0; i < vsmNcIpCfgEntry_MAXROWS; i++)
{
// myTtCfgIndex; Integer32
//static oid myTtCfgIndex_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 3, 1, 1, i+1};
//string myTtCfgIndexStr = "myTtCfgIndex."+i;
//netsnmp_register_int_instance(myTtCfgIndexStr.c_str(), myTtCfgIndex_oid, OID_LENGTH(myTtCfgIndex_oid), &gSNMPAppData.vsmNcIpCfgEntry.data[i].myTtCfgIndex, NULL );
// myTtCfgIpAddress; IpAddress
oid myTtCfgIpAddress_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 3, 1, 2, i+1};
string myTtCfgIpAddressStr = "myTtCfgIpAddress." + std::to_string(i);
netsnmp_register_handler(
netsnmp_create_handler_registration(myTtCfgIpAddressStr.c_str(), handle_myTtCfgIpAddress,
myTtCfgIpAddress_oid, OID_LENGTH(myTtCfgIpAddress_oid), HANDLER_CAN_RWRITE));
// myTtCfgCommunity; OCTET STRING
oid myTtCfgCommunity_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 3, 1, 3, i+1};
string myTtCfgCommunityStr = "myTtCfgCommunity." + std::to_string(i);
netsnmp_register_watched_instance(
netsnmp_create_handler_registration(myTtCfgCommunityStr.c_str(), NULL,
myTtCfgCommunity_oid, OID_LENGTH(myTtCfgCommunity_oid),
HANDLER_CAN_RWRITE),
netsnmp_create_watcher_info(&gSNMPAppData.myTtCfgEntry.data[i].myTtCfgCommunity, sizeof(gSNMPAppData.myTtCfgEntry.data[i].myTtCfgCommunity),
ASN_OCTET_STR, WATCHER_SIZE_STRLEN));
// myTtCfgEntryStatus; RowStatus
oid myTtCfgEntryStatus_oid[] = { 1, 3, 6, 1, 4, 1, 1234, 2, 1, 3, 3, 1, 4, i+1};
string myTtCfgEntryStatusStr = "myTtCfgEntryStatus." + std::to_string(i);
netsnmp_register_int_instance(myTtCfgEntryStatusStr.c_str(), myTtCfgEntryStatus_oid, OID_LENGTH(myTtCfgEntryStatus_oid), &gSNMPAppData.myTtCfgEntry.data[i].myTtCfgEntryStatus, NULL );
}
}
int CMyMibSubAgent::handle_myTtCfgIpAddress(netsnmp_mib_handler *handler,
netsnmp_handler_registration *reginfo,
netsnmp_agent_request_info *reqinfo,
netsnmp_request_info *requests)
{
switch(reqinfo->mode)
{
case MODE_GET:
{
snmp_set_var_typed_value(requests->requestvb, ASN_IPADDRESS,
(u_char *)&gSNMPAppData.vsmNcIpCfgEntry.data[1].vsmIpAddress,
sizeof(gSNMPAppData.vsmNcIpCfgEntry.data[1].vsmIpAddress));
}
break;
case MODE_SET_RESERVE1:
break;
case MODE_SET_RESERVE2:
break;
case MODE_SET_FREE:
break;
case MODE_SET_ACTION:
break;
case MODE_SET_COMMIT:
break;
case MODE_SET_UNDO:
break;
default:
return SNMP_ERR_GENERR;
}
return SNMP_ERR_NOERROR;
}
Finally, here is the MIB module I placed in the /usr/share/snmp/mibs/ directory on the manager, from which I invoke snmpbulkget and snmpget.
--
-- Common Object Definitions for My Element MIB
--
MY-ELEMENT-MIB DEFINITIONS ::= BEGIN
-- Relationship to Other MIBs
--
--
-- The objects defined in this MIB are located under the
-- private.enterprises subtree as shown below:
--
-- iso(1).org(3).dod(6).internet(1)
-- |
-- private(4)
-- |
-- enterprises(1)
-- |
-- myOID(1234)
-- |
-- myRegistrations(2)
-- |
-- myElementMIB(1)
--
--
--
-- Object Synopsis
--
--
-- All objects within this MIB are prefixed with the OBJECT
-- IDENTIFIER "p", where "p" is:
--
-- iso(1).org(3).dod(6).internet(1).private(4).enterprises(1).
-- myOID(1234).myRegistrations(2).myElementMIB(1)
--
-- or, 1.3.6.1.4.1.1234.2.1
--
--
-- Object Name Object Id
-- ================================ ==============
--
-- myMIBNotifications p.0
-- myStateChange p.0.1
-- myGeneral p.2
-- myModel p.2.1.0
-- mySerialCode p.2.2.0
-- myType p.2.3.0
-- mySoftwareRev p.2.4.0
-- myState p.2.5.0
-- mySeverityLevel p.2.6.0
-- myAssetTag p.2.7.0
-- myTrapTargets p.3
-- myTtMaxTargets p.3.1.0
-- myTtCfgTableNextIndex p.3.2.0
-- myTtCfgTable p.3.3
-- myTtCfgEntry p.3.3.1
-- myTtCfgIndex p.3.3.1.1.n
-- myTtCfgIpAddress p.3.3.1.2.n
-- myTtCfgCommunity p.3.3.1.3.n
-- myTtCfgEntryStatus p.3.3.1.4.n
--
IMPORTS
MODULE-IDENTITY, NOTIFICATION-TYPE, OBJECT-TYPE,
IpAddress, TimeTicks, Integer32
FROM SNMPv2-SMI
TEXTUAL-CONVENTION,
DisplayString, RowStatus
FROM SNMPv2-TC
myRegistrations
FROM MY-REG;
myElementMIB MODULE-IDENTITY
LAST-UPDATED "200503230000Z"
ORGANIZATION "My Solutions, Inc."
CONTACT-INFO
"
My Solutions, Inc.
123 Smith Ave
Los Angeles, CA 12345,
USA.
phone: +1 (123) 123-4567
e-mail: me@example.org
http://www.website.com/support"
DESCRIPTION
"This MIB module describes the generic
characteristics of a manageable physical
element beneath the my enterprise."
REVISION "9911120000Z"
DESCRIPTION
"First draft."
REVISION "200004100000Z"
::= { myRegistrations 1 }
--
-- Element MIB Textual Conventions
--
MyFloatingPoint ::= TEXTUAL-CONVENTION
DISPLAY-HINT
"63a"
STATUS current
DESCRIPTION
"FloatingPoint provides a way of representing
non-integer numbers in SNMP. Numbers are
represented as a string of ASCII characters in
the natural way. So for example, '3', '3.142'
and '0.3142E1' are all valid numbers.
The syntax for the string is as follows. []
enclose an optional element, | is the separator
for a set of alternatives. () enclose syntax
which is to be viewed as a unit.
FloatingPoint ::= [Sign]
(Float1 | Float2 | DigitSequence)
[ExponentPart]
Float1 ::= DigitSequence '.' [DigitSequence]
Float2 ::= '.' DigitSequence
DigitSequence ::= Digit [DigitSequence]
ExponentPart ::= ('e' | 'E') [Sign] DigitSequence
Digit ::= '0'..'9'
Sign ::= '+' | '-'"
SYNTAX OCTET STRING (SIZE (1..63))
MyTimecode ::= TEXTUAL-CONVENTION
DISPLAY-HINT
"11a"
STATUS current
DESCRIPTION
"A display representation for timecode which
essentially provides a machine readable address
for video and audio. Timecodes are represented as
a string of ASCII characters as hh:mm:ss:ff where
hh is hours, mm is minutes, ss is seconds and
ff is video frames."
SYNTAX OCTET STRING (SIZE (0..11))
--
-- The Element MIB top-level groups
--
-- NOTE: { myElementMIB 1 } is reserved for internal use
myGeneral OBJECT IDENTIFIER ::= { myElementMIB 2 }
myTrapTargets OBJECT IDENTIFIER ::= { myElementMIB 3 }
--
-- The Element MIB Object Definitions
--
--
-- The General Group
-- The myGeneral group provides general
-- information for the my managed element.
--
myModel OBJECT-TYPE
SYNTAX DisplayString (SIZE(0..64))
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The element model string. The preferred value is
the customer-visible part number, which may be
printed on the physical managed element itself.
If the element being managed does not have
a model descriptor, or the model descriptor is
unknown to the agent, the value of this variable
will be a null string."
::= { myGeneral 1 }
mySerialCode OBJECT-TYPE
SYNTAX DisplayString (SIZE(0..64))
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The manufacturing serial code of the
element on which the management software
is running. The preferred value is the serial
number string actually printed on the CCU itself
(if present).
If the element being managed does not have
a serial code, the value of this variable
will be a null string."
::= { myGeneral 2 }
myType OBJECT-TYPE
SYNTAX INTEGER {
myTypeUnknown(1),
-- My Type
myType1(2),
-- My Type
myType2(3),
-- My Type
myType3(4)
}
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"Identifies the type of the element being managed."
::= { myGeneral 3 }
mySoftwareRev OBJECT-TYPE
SYNTAX DisplayString (SIZE(0..64))
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The revision stamp of the software running on
the element that supports this MIB module."
::= { myGeneral 4 }
myState OBJECT-TYPE
SYNTAX INTEGER {
myElementRunning(1),
myElementInMaintenance(2),
myElementFaulty(3),
myElementDisabled(4),
myElementIdling(5),
myElementInitializing(6),
myElementResetting(7),
myElementHalted(8),
myElementSwLicenseExpired(9),
myElementImntSwLicExpiry(10),
myElementSwLicRecovered(11)
}
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The operational state of an element."
::= { myGeneral 5 }
mySeverityLevel OBJECT-TYPE
SYNTAX INTEGER {
levelUnknown(1),
levelTrace(2),
levelInformational(3),
levelNormal(4),
levelWarning(5),
levelAlarm(6),
levelResentWarning(7),
levelResentAlarm(8)
}
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Defines the typical severity levels"
::= { myGeneral 6 }
myAssetTag OBJECT-TYPE
SYNTAX DisplayString (SIZE(0..64))
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"This object is a user-assigned asset tracking identifier for
the element"
::= { myGeneral 7 }
--
-- The Trap Targets Group
-- The myTrapTargets group provides means to
-- configure the trap target specifics using which an
-- element can dispatch traps.
--
myTtMaxTargets OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The maximum number of trap targets that this
element can support.
If the value of this variable is -1, the
element is capable of supporting a theoretically
infinite number of trap targets dynamically. In
other cases, the maximum number of trap targets
that can be supported by this element is limited
to the value of this variable."
::= { myTrapTargets 1 }
myTtCfgTableNextIndex OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"Identifies a hint for the next value of
myTtCfgIndex to be used in a row creation attempt
for the myTtCfgTable table. If no new rows can be
created, this object will have a value of 0."
::= { myTrapTargets 2 }
myTtCfgTable OBJECT-TYPE
SYNTAX SEQUENCE OF MyTtCfgEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"A list of trap target configuration entries on
this element.
Trap Target Configuration Entry Creation:
=========================================
When creating a trap target configuration entry
the manager should use a GET operation to
determine the value of myTtCfgTableNextIndex.0.
If this value is non-zero, the manager can then
use this value as the index while creating a
table row.
The process of creating and activating a row of
this table takes two forms: the one-set mode and
the multiple-set mode.
In the one-set mode, a manager must specify the
values of myTtCfgIpAddress and myTtCfgCommunity
required to activate a row in a single SET operation
along with an assignment of the myTtCfgEntryStatus
to 'createAndGo(4)'. If the values and instances
supplied are correct, an instance of the trap target
configuration is created and the value of
myTtCfgEntryStatus transitions to 'active(1)'.
for example:
============
SnmpGet(<myTtCfgTableNextIndex.0, NULL>)
returns
<myTtCfgTableNextIndex.0, 2>
SnmpSet(<myTtCfgIpAddress.2, 192.158.104.93>,
<myTtCfgCommunity.2, 'public'>,
<myTtCfgEntryStatus.2, createAndGo(4)>)
returns
<myTtCfgIpAddress.2, 192.158.104.93>,
<myTtCfgCommunity.2, 'public'>,
<myTtCfgEntryStatus.2, active(1)>
In the multiple-set mode, creating a trap target
configuration table row, filling it with values,
and activating it are carried out in discrete steps.
To create the row, the manager specifies a value
of 'createAndWait(5)' for the myTtCfgEntryStatus
status variable. This SET request could contain
values of myTtCfgIpAddress and myTtCfgCommunity
but it is not required. More often, the values for
these columnar objects are specified in additional
SET requests. After each SET operation, the
myTtCfgEntryStatus variable takes on the value
'notReady(3)' or 'notInService(2)'. To place the
entry into service, the manager requests that the
myTtCfgEntryStatus variable transition to the
'active(1)' state.
for example:
============
SnmpGet(<myTtCfgTableNextIndex.0>, NULL)
returns
<myTtCfgTableNextIndex.0, 2>
SnmpSet(<myTtCfgEntryStatus.2, createAndWait(5)>)
returns
<myTtCfgEntryStatus.2, notReady(3)>
SnmpSet(<myTtCfgIpAddress.2, 192.158.104.93>,
<myTtCfgCommunity.2, 'public'>)
returns
<myTtCfgIpAddress.2, 192.158.104.93>,
<myTtCfgCommunity.2, 'public'>
SnmpSet(<myTtCfgEntryStatus.2, active(1)>)
returns
<myTtCfgEntryStatus.2, active(1)>
Trap Target Configuration Entry Deletion:
=========================================
To delete an existing trap target configuration
entry, the manager performs a SET operation on the
myTtCfgEntryStatus variable with the value
'destroy(6)'."
::= { myTrapTargets 3 }
myTtCfgEntry OBJECT-TYPE
SYNTAX MyTtCfgEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"A trap target configuration entry."
INDEX { myTtCfgIndex }
::= { myTtCfgTable 1 }
MyTtCfgEntry ::=
SEQUENCE {
myTtCfgIndex
Integer32,
myTtCfgIpAddress
IpAddress,
myTtCfgCommunity
OCTET STRING,
myTtCfgEntryStatus
RowStatus
}
myTtCfgIndex OBJECT-TYPE
SYNTAX Integer32 (1..2147483647)
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"The index of a trap target configuration row.
Note that the value of this object will not
be visible to a manager and any GET/SET
operations on this variable will fail."
::= { myTtCfgEntry 1 }
myTtCfgIpAddress OBJECT-TYPE
SYNTAX IpAddress
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The IP address of the target/manager to which
this element is supposed to send notifications."
::= { myTtCfgEntry 2 }
myTtCfgCommunity OBJECT-TYPE
SYNTAX OCTET STRING (SIZE(0..128))
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The community name to be used when this element
sends notifications to the target identified by
the value of this entry's myTtCfgIpAddress."
::= { myTtCfgEntry 3 }
myTtCfgEntryStatus OBJECT-TYPE
SYNTAX RowStatus
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"This object controls the creation, activation
and deletion of a row in trap target configuration
table."
::= { myTtCfgEntry 4 }
--
-- The Element MIB Notifications
--
--
-- The notifications group is being assigned the OID "0" so as
-- to comply with the trap handling in the different SNMP versions
--
myMIBNotifications OBJECT IDENTIFIER ::= { myElementMIB 0 }
myStateChange NOTIFICATION-TYPE
OBJECTS {
myState
}
STATUS current
DESCRIPTION
"Notifies when a state change occurs on an element.
The myState variable will hold the new state that
the element is operating in."
::= { myMIBNotifications 1 }
END
The snmpget command I used on the manager:
snmpget -v 2c -c public 10.16.20.191 MY-ELEMENT-MIB::myModel.0
MY-ELEMENT-MIB::myModel.0 = STRING: Hello World
The snmpbulkget command I used on the manager:
snmpbulkget -v 2c -c public 10.16.20.191 MY-ELEMENT-MIB::myGeneral
Error in packet.
Reason: (genError) A general failure occured
Failed object: MY-ELEMENT-MIB::myModel.0
Additional Notes:
Using the OID directly yields the same results.
Using Wireshark, I see my Linux system respond to the getBulkRequest with the info for all of the objects found under the OID.
I used the -DALL switch with the snmpbulkget command and I see the info for all of the objects found under the OID. Additionally, I see Error Status = 5, which explains the genError response.

I found my problem. My issue was with the Netsnmp_Node_Handler function I used to register the myTtCfgIpAddress OIDs. I was missing MODE_GETNEXT and MODE_GETBULK cases in CMyMibSubAgent::handle_myTtCfgIpAddress(). Because of this, my switch statement fell through to the default case and returned SNMP_ERR_GENERR.
Now, my recommendation to anyone who comes across this problem: don't rely solely on the SNMP tool's response. The snmpbulkget -DALL command told me the problem was with my first OID object (in my case MY-ELEMENT-MIB::myModel.0). However, after using Wireshark, I found that the first few OIDs were returning valid values, but once it got to my myTtCfgTable, it started having issues. From there I used snmpgetnext and Wireshark to pinpoint the specific OID that caused the genError. After that, it was just a review of that OID's implementation (and in my case I had to use the debugger to determine that the reqinfo->mode I got was MODE_GETNEXT). Hope this explanation helps someone in the future.
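To illustrate the shape of the fix, here is a minimal sketch of how the switch in handle_myTtCfgIpAddress() can be extended so that GETNEXT/GETBULK requests no longer fall through to the default branch and return SNMP_ERR_GENERR. Treat it as an illustration rather than a drop-in replacement: a raw handler registered with netsnmp_register_handler may still need to adjust the varbind OID itself for a true GETNEXT, so the exact handling depends on your registration.

int CMyMibSubAgent::handle_myTtCfgIpAddress(netsnmp_mib_handler *handler,
                                            netsnmp_handler_registration *reginfo,
                                            netsnmp_agent_request_info *reqinfo,
                                            netsnmp_request_info *requests)
{
    switch (reqinfo->mode)
    {
    case MODE_GET:
    case MODE_GETNEXT:   // previously missing: without this, GETNEXT hit "default" and returned SNMP_ERR_GENERR
    case MODE_GETBULK:   // likewise for bulk requests that reach the handler directly
        snmp_set_var_typed_value(requests->requestvb, ASN_IPADDRESS,
                                 (u_char *)&gSNMPAppData.vsmNcIpCfgEntry.data[1].vsmIpAddress,
                                 sizeof(gSNMPAppData.vsmNcIpCfgEntry.data[1].vsmIpAddress));
        break;
    case MODE_SET_RESERVE1:
    case MODE_SET_RESERVE2:
    case MODE_SET_FREE:
    case MODE_SET_ACTION:
    case MODE_SET_COMMIT:
    case MODE_SET_UNDO:
        break;
    default:
        return SNMP_ERR_GENERR;
    }
    return SNMP_ERR_NOERROR;
}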
One more thing: I used the Wireshark filter "udp.port==161 || udp.port==162" to observe only SNMP traffic.

Related

NebulaGraph Database: when submitting the algorithm package directly to run louvain, it reports an error

Some details are as follows:
NebulaGraph version is 3.3.0
NebulaGraph Studio version is 3.5.0
Deployment mode is distributed
Data volume is as follows:
The nodes are Comment, and the edges are the ones in the red box.
application.conf:
{
# Spark relation config
spark: {
app: {
name: louvain
# spark.app.partitionNum
partitionNum:50
}
master:local
}
data: {
# data source. optional of nebula,csv,json
source: nebula
# data sink, means the algorithm result will be write into this sink. optional of nebula,csv,text
sink: nebula
# if your algorithm needs weight
hasWeight: false
}
# Nebula Graph relation config
nebula: {
# algo's data source from Nebula. If data.source is nebula, then this nebula.read config can be valid.
read: {
# Nebula metad server address, multiple addresses are split by English comma
metaAddress: "192.168.200.100:9559,192.168.200.101:9559,192.168.200.111:9559"
# Nebula space
space: ldbc
# Nebula edge types, multiple labels means that data from multiple edges will union together
labels: ["HAS_CREATOR","HAS_TAG","IS_LOCATED_IN","REPLY_OF"]
# Nebula edge property name for each edge type, this property will be as weight col for algorithm.
# Make sure the weightCols are corresponding to labels.
weightCols: [""]
}
# algo result sink into Nebula. If data.sink is nebula, then this nebula.write config can be valid.
write:{
# Nebula graphd server address, multiple addresses are split by English comma
graphAddress: "192.168.200.100:9669,192.168.200.101:9669,192.168.200.111:9669,192.168.200.112:9669,192.168.200.114:9669"
# Nebula metad server address, multiple addresses are split by English comma
metaAddress: "192.168.200.100:9559,192.168.200.101:9559,192.168.200.111:9559"
user:root
pswd:nebula
# Nebula space name
space:ldbc
# Nebula tag name, the algorithm result will be write into this tag
tag:Comment
# algorithm result is insert into new tag or update to original tag. type: insert/update
type:update
}
}
local: {
# algo's data source from Nebula. If data.source is csv or json, then this local.read can be valid.
read:{
filePath: "file:///tmp/edge_follow.csv"
# srcId column
srcId:"_c0"
# dstId column
dstId:"_c1"
# weight column
#weight: "col3"
# if csv file has header
header: false
# csv file's delimiter
delimiter:","
}
# algo result sink into local file. If data.sink is csv or text, then this local.write can be valid.
write:{
resultPath:/tmp/count
}
}
algorithm: {
# the algorithm that you are going to execute,pick one from [pagerank, louvain, connectedcomponent,
# labelpropagation, shortestpaths, degreestatic, kcore, stronglyconnectedcomponent, trianglecount,
# betweenness, graphtriangleCount, clusteringcoefficient, bfs, hanp, closeness, jaccard, node2vec]
executeAlgo: louvain
# PageRank parameter
pagerank: {
maxIter: 10
resetProb: 0.15 # default 0.15
}
# Louvain parameter
louvain: {
maxIter: 20
internalIter: 10
tol: 0.5
}
# connected component parameter.
connectedcomponent: {
maxIter: 20
}
# LabelPropagation parameter
labelpropagation: {
maxIter: 20
}
# ShortestPaths parameter
shortestpaths: {
# several vertices to compute the shortest path to all vertices.
landmarks: "1"
}
# Vertex degree statistics parameter
degreestatic: {}
# KCore parameter
kcore:{
maxIter:10
degree:1
}
# Trianglecount parameter
trianglecount:{}
# graphTriangleCount parameter
graphtrianglecount:{}
# Betweenness centrality parameter. maxIter parameter means the max times of iterations.
betweenness:{
maxIter:5
}
# Clustering Coefficient parameter. The type parameter has two choice, local or global
# local type will compute the clustering coefficient for each vertex, and print the average coefficient for graph.
# global type just compute the graph's clustering coefficient.
clusteringcoefficient:{
type: local
}
# ClosenessAlgo parameter
closeness:{}
# BFS parameter
bfs:{
maxIter:5
root:"10"
}
# HanpAlgo parameter
hanp:{
hopAttenuation:0.1
maxIter:10
preference:1.0
}
#Node2vecAlgo parameter
node2vec:{
maxIter: 10,
lr: 0.025,
dataNumPartition: 10,
modelNumPartition: 10,
dim: 10,
window: 3,
walkLength: 5,
numWalks: 3,
p: 1.0,
q: 1.0,
directed: false,
degree: 30,
embSeparate: ",",
modelPath: "hdfs://127.0.0.1:9000/model"
}
# JaccardAlgo parameter
jaccard:{
tol: 1.0
}
}
}
When I execute spark-submit --master "local" --class com.vesoft.nebula.algorithm.Main /opt/offline/nebula/nebula-algorithm-3.0.0.jar -p /opt/offline/nebula/application.conf, it reports the error in the screenshot.
How should I solve this error?
The value of the weightCols parameter in the application.conf file cannot be quoted if it is empty.
Check the application.conf: if the item weightCols is empty, use [""] without any blank.

evaluating test dataset using eval() in LightGBM

I have trained a ranking model with LightGBM with the objective 'lambdarank'.
I want to evaluate my model to get the nDCG score for my test dataset using the best iteration, but I have not been able to use either the lightgbm.Booster.eval() or the lightgbm.Booster.eval_train() function.
First, I have created 3 dataset instances, namely the train set, valid set and test set:
lgb_train = lgb.Dataset(x_train, y_train, group=query_train, free_raw_data=False)
lgb_valid = lgb.Dataset(x_valid, y_valid, reference=lgb_train, group=query_valid, free_raw_data=False)
lgb_test = lgb.Dataset(x_test, y_test, group=query_test)
I then train my model using lgb_train and lgb_valid:
gbm = lgb.train(params,
lgb_train,
num_boost_round=1500,
categorical_feature=chosen_cate_features,
valid_sets=[lgb_train, lgb_valid],
evals_result=evals_result,
early_stopping_rounds=150
)
When I call the eval() or eval_train() functions after training, they return errors:
gbm.eval(data=lgb_test,name='test')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-122-7ff5ef5136b8> in <module>()
----> 1 gbm.eval(data=lgb_test,name='test')
/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in eval(self, data,
name, feval)
1925 raise TypeError("Can only eval for Dataset instance")
1926 data_idx = -1
-> 1927 if data is self.train_set:
1928 data_idx = 0
1929 else:
AttributeError: 'Booster' object has no attribute 'train_set'
gbm.eval_train()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-123-0ce5fa3139f5> in <module>()
----> 1 gbm.eval_train()
/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in eval_train(self,
feval)
1956 List with evaluation results.
1957 """
-> 1958 return self.__inner_eval(self.__train_data_name, 0, feval)
1959
1960 def eval_valid(self, feval=None):
/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in
__inner_eval(self, data_name, data_idx, feval)
2352 """Evaluate training or validation data."""
2353 if data_idx >= self.__num_dataset:
-> 2354 raise ValueError("Data_idx should be smaller than number
of dataset")
2355 self.__get_eval_info()
2356 ret = []
ValueError: Data_idx should be smaller than number of dataset
And when I call the eval_valid() function, it returns an empty list.
Can anyone tell me how to properly evaluate a LightGBM model and get the nDCG score on the test set? Thanks.
If you add keep_training_booster=True as an argument to your lgb.train, the returned booster object would be able to execute eval and eval_train (though eval_valid would still return an empty list for some reason even when valid_sets is provided in lgb.train).
Documentation says:
keep_training_booster (bool, optional (default=False)) – Whether the returned Booster will be used to keep training. If False, the returned value will be converted into _InnerPredictor before returning.

How to display messages from a message file on a display screen using RPGLE?

I have designed a screen using SDA in AS/400 that takes an ID number as input and searches in two PFs for that ID and displays corresponding values fetched from those PFs in the respective fields on screen. Below is the DSPF code:
A*%%TS SD 20180813 084626 PATELDH REL-V7R1M0 5770-WDS
A*%%EC
A DSPSIZ(24 80 *DS3)
A R HEADER
A*%%TS SD 20180802 075026 PATELDH REL-V7R1M0 5770-WDS
A 2 2USER
A 2 30'PRODUCT INQUIRY SCREEN'
A COLOR(WHT)
A 2 63DATE
A EDTCDE(Y)
A 3 63TIME
A R FOOTER
A*%%TS SD 20180802 074433 PATELDH REL-V7R1M0 5770-WDS
A OVERLAY
A 22 4'F3=EXIT'
A R DETAIL
A*%%TS SD 20180813 073420 PATELDH REL-V7R1M0 5770-WDS
A CA03(03 'EXIT')
A CA12(12 'PREVIOUS')
A OVERLAY
A 7 16'ID:'
A 10 16'NAME:'
A 12 16'CATEGORY:'
A #ID R I 7 20REFFLD(CATEGORIES/ID AS400KT2/RCATE-
A GORY)
A #NAME R O 10 22REFFLD(PRODUCTS/NAME AS400KT2/RPROD-
A UCTS)
A #CATEGORY R O 12 26REFFLD(CATEGORIES/CATEGORY AS400KT2-
A /RCATEGORY)
A R MSGSFL SFL
A*%%TS SD 20180803 054959 PATELDH REL-V7R1M0 5770-WDS
A SFLMSGRCD(24)
A MSGKEY SFLMSGKEY
A MSGQ SFLPGMQ(10)
A R MSGCTL SFLCTL(MSGSFL)
A*%%TS SD 20180813 084626 PATELDH REL-V7R1M0 5770-WDS
A OVERLAY
A SFLDSP
A SFLDSPCTL
A SFLINZ
A 01 SFLEND
A SFLSIZ(0002)
A SFLPAG(0001)
A MSGQ SFLPGMQ(10)
I have written free-format RPGLE code that makes this screen work. Below is the RPGLE code:
FDSPPRD CF E WorkStn
FRPRODUCTS IF E K DISK
FRCATEGORY IF E K DISK
FRPRODCATEGO A E K DISK
DtempID S LIKE(ID)
DmsgID S 7A
DmsgF S 10A
D getMsg PR EXTPGM('MSGSFLCL')
D msgID 7A
D msgF 10A
/Free
DoW *In03 = *Off;
Write HEADER;
Write FOOTER;
ExFmt DETAIL;
If #ID = *Zeros;
msgID = 'MSG0001';
msgF = 'ASGNMSGF';
getMsg(msgID:msgF);
Else;
Chain #ID RPRODUCTS;
If %Found(RPRODUCTS);
#NAME = NAME;
Chain ID RCATEGORY;
If %Found(RCATEGORY);
#CATEGORY = CATEGORY;
EndIf;
EndIf;
EndIf;
EndDo;
*InLR = *On;
/End-Free
Below is the CL program called by the RPGLE program to get the message text from the message file:
PGM PARM(&MSGID &MSGF)
DCL VAR(&MSGID) TYPE(*CHAR) LEN(7)
DCL VAR(&MSGF) TYPE(*CHAR) LEN(10)
SNDPGMMSG MSGID(&MSGID) MSGF(&MSGF)
ENDPGM
Below are the two PFs from which the records are read:
RPRODUCTS-
A R PRODUCTS
A ID 2P 0
A NAME 16A
A K ID
RCATEGORY-
A R CATEGORIES
A ID 2P 0
A CATEGORY 15A
A K ID
All of the above code compiles successfully. But the problem is that the message from the message file does not appear on the screen. Everything else works; just the message from the message file is not displayed when I press the Enter key with a blank ID on screen. Can someone please suggest an information source from which I can learn the concepts behind such applications? Also, any help with this one would be appreciated.
You are not writing the MSGCTL record. If you don't write that, then the message subfile will not be displayed. You are also not providing a value for MSGQ.
When using a message subfile, I generally get the program name out of the program status data structure and put that into MSGQ at program initialization time. It should never change. I also pass that to my procedure that sends the message to the message queue. That way I know that both values will be the same; if they are not, the messages will not display.
Here is my message subfile definition:
A* ========================================================================
A* Message Subfile
A* ------------------------------------------------------------------------
A R MSGSFL SFL
A SFLMSGRCD(27)
A MSGKEY SFLMSGKEY
A PGMQ SFLPGMQ(10)
A* ------------------------------------------------------------------------
A* Message Subfile Control
A* ------------------------------------------------------------------------
A R MSGCTL SFLCTL(MSGSFL)
A SFLPAG(1)
A SFLSIZ(2)
A SFLDSP SFLDSPCTL
A SFLINZ
A 53
AON53 SFLEND
A PGMQ SFLPGMQ(10)
There are only a few differences from yours. Let's go through them.
A SFLMSGRCD(27)
This is 27 because I am using the *DS4 screen size. Not an issue.
You are using OVERLAY; I'm not, because I write that format first. But as long as you write MSGCTL after you write HEADER, you should be good there.
You are using SFLCLR. That is unnecessary, remove it.
A 53
AON53 SFLEND
This is a bit different. I do this because SFLEND requires a conditioning indicator, but I really don't care; I want SFLEND active no matter what that indicator says. (I use *In53 as my SFLEND for regular subfiles too, and I don't want to have to worry about whether it is on or off.)
I use a sub-procedure to send the message: here is my code for that:
// ----------------------------------------
// SndDspfMsg - sends an *INFO message to the
// message subfile in a display file.
//
// Parameters:
// StackEntry - The program call stack entry to which the message is sent.
// Usually the program name. This must be the same value that
// is placed in the SFLPGMQ variable in the message subfile
// control format.
// MsgId - The Message ID from message file JCMSGF to be sent to the program
// message Queue.
// MsgDta - (optional) Data to be used by the message to provide dynamic
// message content. Defaults to blank.
// MsgDtaLen - (optional) The length of the message data provided above.
// This parameter is required if MsgDta is provided. Defaults
// to zero. If this is not provided or is zero, MsgDta is ignored.
// ----------------------------------------
dcl-proc SndDspfMsg Export;
dcl-pi *n;
StkEnt Char(10) Const;
MsgId Char(7) Const;
MsgDta Char(512) Const Options(*VarSize: *NoPass);
MsgDtaLen Int(10) Const Options(*NoPass);
end-pi;
dcl-s Name_t Char(10) Template Inz('');
// Call Stack Qualifier - used by message handling APIs
dcl-ds CallStackQual_t Qualified Template Inz;
Module Like(Name_t) Inz('*NONE');
Program Like(Name_t) Inz('*NONE');
end-ds;
// Qualified Name
dcl-ds QualName_t Qualified Template Inz;
Name Like(Name_t) Inz('');
User Like(Name_t) Inz('');
end-ds;
// Standard Error Code Format
dcl-ds ErrorCdType1_t Qualified Template Inz;
BytesProv Int(10) Inz(%size(ErrorCdType1_t));
BytesAvail Int(10);
MsgId Char(7);
Data Char(1024) Pos(17);
end-ds;
dcl-ds MsgFile LikeDs(QualName_t) Inz(*LikeDs);
dcl-ds ErrorCd LikeDs(ErrorCdType1_t) Inz(*LikeDs);
dcl-s pmMsgDta Char(512) Inz('');
dcl-s pmMsgDtaLen Int(10) Inz(0);
dcl-s pmMsgTyp Char(10) Inz('*INFO');
dcl-s pmStkCnt Int(10) Inz(0);
dcl-s pmMsgKey Char(4) Inz('');
// Send Program Message
dcl-pr qmhsndpm ExtPgm('QMHSNDPM');
MessageId Char(7) Const;
MessageFile LikeDs(QualName_t) Const;
MessageDta Char(512) Const Options(*Varsize);
MessageLen Int(10) Const;
MessageType Char(10) Const;
StackEntry Char(4102) Const Options(*Varsize);
StackCounter Int(10) Const;
MessageKey Char(4);
Error LikeDs(ErrorCdType1_t);
StackEntryLen Int(10) Const Options(*NoPass);
StackEntryQual LikeDs(CallStackQual_t)
Const Options(*NoPass);
ScreenWaitTime Int(10) Const Options(*NoPass);
StackEntryType Char(10) Const Options(*NoPass);
Ccsid Int(10) Const Options(*NoPass);
end-pr;
// Handle *NoPass Parms
if %parms() >= %parmnum(MsgDtaLen);
pmMsgDtaLen = MsgDtaLen;
endif;
// if Message Data is provided,
if pmMsgDtaLen > 0;
pmMsgDtaLen = min(%size(pmMsgDta): pmMsgDtaLen);
pmMsgDta = %subst(MsgDta: 1: pmMsgDtaLen);
endif;
MsgFile.Name = 'JCMSGF';
qmhsndpm(MsgId: MsgFile: pmMsgDta: pmMsgDtaLen:
pmMsgTyp: StkEnt: pmStkCnt: pmMsgKey:
ErrorCd);
end-proc;
This should get your message subfile working. As for why the other fields are not populating, maybe your product ID and category ID are not found in the file. Note, the product ID and category ID will be the same value when this program runs because ID is mapped to the display file, the product file, and the category file. This doesn't seem to be what you want. If you have trouble dealing with that, ask a new question.

Flink Streaming: From one window, lookup state in another window

I have two streams:
Measurements
WhoMeasured (metadata about who took the measurement)
These are the case classes for them:
case class Measurement(var value: Int, var who_measured_id: Int)
case class WhoMeasured(var who_measured_id: Int, var name: String)
The Measurement stream has a lot of data. The WhoMeasured stream has little. In fact, for each who_measured_id in the WhoMeasured stream, only 1 name is relevant, so old elements can be discarded if one with the same who_measured_id arrives. This is essentially a HashTable that gets filled by the WhoMeasured stream.
In my custom window function
class WFunc extends WindowFunction[Measurement, Long, Int, TimeWindow] {
override def apply(key: Int, window: TimeWindow, input: Iterable[Measurement], out: Collector[Long]): Unit = {
// Here I need access to the WhoMeasured stream to get the name of the person who took a measurement
// The following two are equivalent since I keyed by who_measured_id
val name_who_measured = magic(key)
val name_who_measured = magic(input.head.who_measured_id)
}
}
This is my job. Now as you might see, there is something missing: The combination of the two streams.
val who_measured_stream = who_measured_source
.keyBy(w => w.who_measured_id)
.countWindow(1)
val measurement_stream = measurements_source
.keyBy(m => m.who_measured_id)
.timeWindow(Time.seconds(60), Time.seconds(5))
.apply(new WFunc)
So in essence this is sort of a lookup table that gets updated when new elements in the WhoMeasured stream arrive.
So the question is: How to achieve such a lookup from one WindowedStream into another?
Follow Up:
After implementing in the way Fabian suggested, the job always fails with some sort of serialization issue:
[info] Loading project definition from /home/jgroeger/Code/MeasurementJob/project
[info] Set current project to MeasurementJob (in build file:/home/jgroeger/Code/MeasurementJob/)
[info] Compiling 8 Scala sources to /home/jgroeger/Code/MeasurementJob/target/scala-2.11/classes...
[info] Running de.company.project.Main dev MeasurementJob
[error] Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: The implementation of the RichCoFlatMapFunction is not serializable. The object probably contains or references non serializable fields.
[error] at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:100)
[error] at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1478)
[error] at org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:161)
[error] at org.apache.flink.streaming.api.datastream.ConnectedStreams.flatMap(ConnectedStreams.java:230)
[error] at org.apache.flink.streaming.api.scala.ConnectedStreams.flatMap(ConnectedStreams.scala:127)
[error] at de.company.project.jobs.MeasurementJob.run(MeasurementJob.scala:139)
[error] at de.company.project.Main$.main(Main.scala:55)
[error] at de.company.project.Main.main(Main.scala)
[error] Caused by: java.io.NotSerializableException: de.company.project.jobs.MeasurementJob
[error] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
[error] at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
[error] at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
[error] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
[error] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
[error] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
[error] at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:301)
[error] at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:81)
[error] ... 7 more
java.lang.RuntimeException: Nonzero exit code returned from runner: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last MeasurementJob/compile:run for the full output.
[error] (MeasurementJob/compile:run) Nonzero exit code returned from runner: 1
[error] Total time: 9 s, completed Nov 15, 2016 2:28:46 PM
Process finished with exit code 1
The error message:
The implementation of the RichCoFlatMapFunction is not serializable. The object probably contains or references non serializable fields.
However, the only field my JoiningCoFlatMap has is the suggested ValueState.
The signature looks like this:
class JoiningCoFlatMap extends RichCoFlatMapFunction[Measurement, WhoMeasured, (Measurement, String)] {
I think what you want to do is a window operation followed by a join.
You can implement the join of a high-volume stream and a low-volume update-by-key stream using a stateful CoFlatMapFunction as in the example below:
val measures: DataStream[Measurement] = ???
val who: DataStream[WhoMeasured] = ???
val agg: DataStream[(Int, Long)] = measures
.keyBy(_._2) // measured_by_id
.timeWindow(Time.seconds(60), Time.seconds(5))
.apply( (id: Int, w: TimeWindow, v: Iterable[(Int, Int, String)], out: Collector[(Int, Long)]) => {
// do your aggregation
})
val joined: DataStream[(Int, Long, String)] = agg
.keyBy(_._1) // measured_by_id
.connect(who.keyBy(_.who_measured_id))
.flatMap(new JoiningCoFlatMap)
// CoFlatMapFunction
class JoiningCoFlatMap extends RichCoFlatMapFunction[(Int, Long), WhoMeasured, (Int, Long, String)] {
var names: ValueState[String] = null
override def open(conf: Configuration): Unit = {
val stateDescrptr = new ValueStateDescriptor[String](
"whoMeasuredName",
classOf[String],
"" // default value
)
names = getRuntimeContext.getState(stateDescrptr)
}
override def flatMap1(a: (Int, Long), out: Collector[(Int, Long, String)]): Unit = {
// join with state
out.collect( (a._1, a._2, names.value()) )
}
override def flatMap2(w: WhoMeasured, out: Collector[(Int, Long, String)]): Unit = {
// update state
names.update(w.name)
}
}
A note on the implementation: a CoFlatMapFunction cannot decide which input to process, i.e., the flatMap1 and flatMap2 functions are called depending on what data arrives at the operator. This cannot be controlled by the function. This is a problem when initializing the state: in the beginning, the state might not have the correct name for an arriving Measurement object and would return the default value. You can avoid that by buffering the measurements and joining them once the first update for the key from the who stream arrives. You'll need another state for that.

Python 3, extract info from file problems

Once again I'm asking for help. Before I start: there will be a lot of text, so please excuse that.
I have about ~500 IP addresses for devices of two categories in an .xlsx workbook.
I want:
Telnet to the device. Check the device type (by its authentication prompt): type 1 or type 2.
If the device is type 1 - get its firmware version for both partitions and
write to the excel file:
column 1 - IP address
column 2 - device type
column 3 - firmware version
column 4 - firmware version in the reserve partition.
If type 2 - write to the excel file:
column 1 - IP address
column 2 - device type
If the device is down, or the device type is 3 (unknown) - write to the excel file:
column 1 - IP address
column 2 - result (EOF, TIMEOUT)
What I have done so far: I'm able to telnet to the device, check the device type, and write to excel with 2 columns (column 1 is the IP address, column 2 is the device type or the EOF/TIMEOUT result).
I'm also writing full session logs to files named IP_ADDRESS.txt for future diagnosis.
What can't I figure out how to do? I can't figure out how to get the firmware version and put it in columns 3 and 4.
I can't figure out how to work with the current session log in real time, so I've decided to copy the log from the main file (IP_ADDRESS.txt) to temp.txt and work with that.
I can't figure out how to extract the information I need.
The file output example:
Trying 10.40.81.167...
Connected to 10.40.81.167.
Escape character is '^]'.
####################################
# #
# RADIUS authorization disabled #
# Enter local login/password #
# #
####################################
bt6000 login: admin
Password:
Please, fill controller information at first time (Ctrl+C to abort):
^C
Controller information filling canceled.
^Cadmin#bt6000# firmware info
Active boot partition: 1
Partition 0 (reserved):
Firmware: Energomera-2.3.1
Version: 10117
Partition 1 (active):
Firmware: Energomera-2.3.1_01.04.15c
Version: 10404M
Kernel version: 2.6.38.8 #2 Mon Mar 2 20:41:26 MSK 2015
STM32:
Version: bt6000 10083
Part Number: BT6024
Updated: 27.04.2015 16:43:50
admin#bt6000#
I need the values after the word "Energomera", like 2.3.1 for the reserved partition and 2.3.1_01.04.15c for the active partition.
I've tried working with line numbers and extracting substrings, but without any good result.
The full code of my script is below.
import pexpect
import pxssh
import sys #hz module
import re #Parser module
import os #hz module
import getopt
import glob #hz module
import xlrd #Excel read module
import xlwt #Excel write module
import telnetlib #telnet module
import shutil

#open excel book
rb = xlrd.open_workbook('/samba/allaccess/Energomera_Eltek_list.xlsx')
#select work sheet
sheet = rb.sheet_by_name('IPs')
#rows number in sheet
num_rows = sheet.nrows
#cols number in sheet
num_cols = sheet.ncols
#creating massive with IP addresses inside
ip_addr_list = [sheet.row_values(rawnum)[0] for rawnum in range(sheet.nrows)]
#create excel workbook with write permissions (xlwt module)
wb = xlwt.Workbook()
#create sheet IP LIST with cell overwrite rights
ws = wb.add_sheet('IP LIST', cell_overwrite_ok=True)
#create counter
i = 0
#authorization details
port = "23" #telnet port
user = "admin" #telnet username
password = "12345" #telnet password

#firmware ask function
def fw_info():
    print('asking for firmware')
    px.sendline('firmware info')
    px.expect('bt6000#')

#firmware update function
def fw_send():
    print('sending firmware')
    px.sendline('tftp server 172.27.2.21')
    px.expect('bt6000')
    px.sendline('firmware download tftp firmware.ext2')
    px.expect('Updating')
    px.sendline('y')
    px.send(chr(13))
    ws.write(i, 0, host)
    ws.write(i, 1, 'Energomera')

#if eltek found - skip, write result in book
def eltek_found():
    print(host, "is Eltek. Skipping")
    ws.write(i, 0, host)
    ws.write(i, 1, 'Eltek')

#if 23 port telnet conn. refused - skip, write result in book
def conn_refuse():
    print(host, "connection refused")
    ws.write(i, 0, host)
    ws.write(i, 1, 'Connection refused')

#auth function
def auth():
    print(host, "is up! Energomera found. Starting auth process")
    px.sendline(user)
    px.expect('assword')
    px.sendline(password)

#start working with ip addresses in ip_addr_list massive
for host in ip_addr_list:
    #spawn pexpect connection
    px = pexpect.spawn('telnet ' + host)
    px.timeout = 35
    #create log file with in IP.txt format (10.1.1.1.txt, for example)
    fout = open('/samba/allaccess/Energomera_Eltek/{0}.txt'.format(host), "wb")
    #push pexpect logfile_read output to log file
    px.logfile_read = fout
    try:
        index = px.expect(['bt6000', 'sername', 'refused'])
        #if device tell us bt6000 - authorize
        if index == 0:
            auth()
            index1 = px.expect(['#', 'lease'])
            #if "#" - ask fw version immediatly
            if index1 == 0:
                print('seems to controller ID already set')
                fw_info()
            #if "Please" - press 2 times Ctrl+C, then ask fw version
            elif index1 == 1:
                print('trying control C controller ID')
                px.send(chr(3))
                px.send(chr(3))
                px.expect('bt6000')
                fw_info()
            #firmware update start (temporarily off)
            # fw_send()
        #Eltek found - func start
        elif index == 1:
            eltek_found()
        #Conn refused - func start
        elif index == 2:
            conn_refuse()
        #print output to console (test purposes)
        print(px.before)
        px.send(chr(13))
        #Copy from current log file to temp.txt for editing
        shutil.copy2('/samba/allaccess/Energomera_Eltek/{0}.txt'.format(host), '/home/bark/expect/temp.txt')
    #EOF result - skip host, write result to excel
    except pexpect.EOF:
        print(host, "EOF")
        ws.write(i, 0, host)
        ws.write(i, 1, 'EOF')
        #print output to console (test purposes)
        print(px.before)
    #Timeout result - skip host, write result to excel
    except pexpect.TIMEOUT:
        print(host, "TIMEOUT")
        ws.write(i, 0, host)
        ws.write(i, 1, 'TIMEOUT')
        #print output to console (test purposes)
        print(px.before)
        #Copy from current log file to temp.txt for editing
        shutil.copy2('/samba/allaccess/Energomera_Eltek/{0}.txt'.format(host), '/home/bark/expect/temp.txt')
    #count +1 to correct output for Excel
    i += 1

#workbook save
wb.save('/samba/allaccess/Energomera_Eltek_result.xls')
Do you have any suggestions or ideas, guys, on how I can do this?
Any help is greatly appreciated.
You can use regular expressions. For example:
>>> import re
>>>
>>> str = """
... Trying 10.40.81.167...
...
... Connected to 10.40.81.167.
...
... Escape character is '^]'.
...
...
...
... ####################################
... # #
... # RADIUS authorization disabled #
... # Enter local login/password #
... # #
... ####################################
... bt6000 login: admin
... Password:
... Please, fill controller information at first time (Ctrl+C to abort):
... ^C
... Controller information filling canceled.
... ^Cadmin#bt6000# firmware info
... Active boot partition: 1
... Partition 0 (reserved):
... Firmware: Energomera-2.3.1
... Version: 10117
... Partition 1 (active):
... Firmware: Energomera-2.3.1_01.04.15c
... Version: 10404M
... Kernel version: 2.6.38.8 #2 Mon Mar 2 20:41:26 MSK 2015
... STM32:
... Version: bt6000 10083
... Part Number: BT6024
... Updated: 27.04.2015 16:43:50
... admin#bt6000#
... """
>>> re.findall(r"Firmware:.*?([0-9].*)\s", str)
['2.3.1', '2.3.1_01.04.15c']
>>> reserved_firmware = re.search(r"reserved.*\s*Firmware:.*?([0-9].*)\s", str).group(1)
>>> reserved_firmware
'2.3.1'
>>> active_firmware = re.search(r"active.*\s*Firmware:.*?([0-9].*)\s", str).group(1)
>>> active_firmware
'2.3.1_01.04.15c'
>>>
