How to send request-specific data to an SNMP agent using net-snmp?

I want the SNMP agent to respond differently depending on the source of the request, but I cannot find a way to convey some extra data that would make the requester distinguishable to the SNMP agent.
What I have tried is setting fields in the netsnmp_session and netsnmp_pdu structures, because they are the two parameters of snmp_send. The fields I tried to use are myvoid and callback_magic.
Unfortunately, on the SNMP agent the received data is always 0, not what I set on the SNMP client.

Sorry to answer my own question.
I finally found the following trick to work around the issue:
insert a well-known SNMP object (such as ifNumber) immediately after the target SNMP object to identify the specific SNMP query.
The handler function in the agent should check the variable that follows the current one to see whether it is exactly the well-known object ifNumber. If it is, the query came from your own client, which used the NET-SNMP API to build the variable list of the query.
Client code:
oid dest_OID[MAX_OID_LEN] = {0};
size_t dest_OID_len = COUNT_OF(dest_OID);
/* Resolve the well-known ifNumber OID and append it to the PDU as a marker */
get_node(g_snmp_name_ifNumber, dest_OID, &dest_OID_len);
snmp_add_null_var(pdu, dest_OID, dest_OID_len);
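For context, a minimal sketch of how the marker might fit into the rest of the client-side PDU construction; ss and target_name are illustrative assumptions (an already-opened session and the name of the object you actually want), and error handling is omitted:
/* Sketch: build a GET PDU whose second varbind is the ifNumber "marker". */
netsnmp_pdu *pdu = snmp_pdu_create(SNMP_MSG_GET);

oid target_OID[MAX_OID_LEN];
size_t target_OID_len = MAX_OID_LEN;
get_node(target_name, target_OID, &target_OID_len);
snmp_add_null_var(pdu, target_OID, target_OID_len);   /* the object you actually want */

oid marker_OID[MAX_OID_LEN];
size_t marker_OID_len = MAX_OID_LEN;
get_node(g_snmp_name_ifNumber, marker_OID, &marker_OID_len);
snmp_add_null_var(pdu, marker_OID, marker_OID_len);   /* the ifNumber marker */

netsnmp_pdu *response = NULL;
snmp_synch_response(ss, pdu, &response);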
On the agent side:
int get_status(netsnmp_mib_handler *handler,
               netsnmp_handler_registration *reginfo,
               netsnmp_agent_request_info *reqinfo,
               netsnmp_request_info *requests)
{
    switch (reqinfo->mode) {
    case MODE_GET:
    {
        bool is_sent_by_manager = false;
        if (requests->requestvb->next_variable)
        {
            /* Look at the varbind that follows the one being processed */
            netsnmp_variable_list *v = requests->requestvb->next_variable;
            oid dest_OID[MAX_OID_LEN] = {0};
            size_t dest_OID_len = COUNT_OF(dest_OID);
            get_node(g_snmp_name_ifNumber, dest_OID, &dest_OID_len);
            const int nbytes = v->name_length * sizeof(v->name[0]);
            /* Does it match the well-known ifNumber OID? */
            if (dest_OID_len >= v->name_length
                && memcmp(dest_OID, v->name, nbytes) == 0) {
                is_sent_by_manager = true;
            }
        }
        if (is_sent_by_manager) {
            ...
        }
        else {
            ...
        }
        break;
    }
    }
    return SNMP_ERR_NOERROR;
}

Related

Cannot connect to https server using mbedtls example client

EDIT: I tested with a static IP on both the board and my computer with a Python SSL server, and it works as expected, leading me to believe that DHCP is the problem. If anyone has a lead on what may be occurring, it would be greatly appreciated.
I am using the mbedTLS library on an STM32F746-NUCLEO board and I want to use it as both an SSL client and server. The server works well, so I tried to use the client example code (as is, in a separate project).
The following mbedtls_net_connect call returns -68 (MBEDTLS_ERR_NET_CONNECT_FAILED). Digging deeper reveals that it is due to a routing error (line 900 in tcp.c from LwIP), because local_ip is set to 0. The board is in DHCP mode on a home router which is connected to the internet. The destination server is up and running, and SERVER_NAME is its IP address in plain text.
mbedtls_entropy_context client_entropy;
static mbedtls_net_context server_fd;
mbedtls_x509_crt cacert;
static uint32_t flags;
static uint8_t vrfy_buf[512];
static const uint8_t* client_pers = "ssl_client";
mbedtls_ssl_config client_config;
mbedtls_ctr_drbg_context client_ctr_drbg;
mbedtls_ssl_context client_ssl;
static uint8_t client_buf[1024];

void SSL_Server(void const *argument) {
    int ret, len;
    UNUSED(argument);

    mbedtls_net_init(&server_fd);
    mbedtls_ssl_init(&client_ssl);
    mbedtls_ssl_config_init(&client_config);
    mbedtls_x509_crt_init(&cacert);
    mbedtls_ctr_drbg_init(&client_ctr_drbg);

    // Seeding the random number generator
    mbedtls_entropy_init(&client_entropy);
    len = strlen((char *) client_pers);
    if ((ret = mbedtls_ctr_drbg_seed(&client_ctr_drbg, mbedtls_entropy_func,
                                     &client_entropy, (const unsigned char *) client_pers, len)) != 0)
    {
        goto exit;
    }

    // 1. Initialize certificates
    ret = mbedtls_x509_crt_parse(&cacert, (const unsigned char *) mbedtls_test_cas_pem,
                                 mbedtls_test_cas_pem_len);
    if (ret < 0)
    {
        goto exit;
    }

    if ((ret = mbedtls_net_connect(&server_fd, SERVER_NAME, SERVER_PORT,
                                   MBEDTLS_NET_PROTO_TCP)) != 0)
    {
        mbedtls_printf(" failed\n ! mbedtls_net_connect returned %d\n\n", ret);
        goto exit;
    }
}
Here the SSL_Server function is a FreeRTOS thread started from main(). I can also confirm that the network interface has been assigned an IP address by the time the error occurs.
I expect the connection call to return 0 and connect to the server to initiate the SSL handshake.
You need to set the default netif for LwIP to be able to route to the remote address.
Simply add netif_set_default(&netif); after dhcp_start() inside the function mbedtls_net_init().
void mbedtls_net_init( mbedtls_net_context *ctx ) {
    ...
    /* add the network interface */
    netif_add(&netif, &addr, &netmask, &gw, NULL, &ethernetif_init, &ethernet_input);
    /* register the default network interface */
    netif_set_up(&netif);
#ifdef USE_DHCP
    netif.ip_addr.addr = 0;
    dhcp_start(&netif);
#endif
    netif_set_default(&netif); // <-- Here
    osDelay(500);

    start = HAL_GetTick();
    while ((netif.ip_addr.addr == 0) && (HAL_GetTick() - start < 10000))
    {
    }

    if (netif.ip_addr.addr == 0) {
        printf(" Failed to get ip address! Please check your network configuration.\n");
        Error_Handler();
    }
    ...
The documentation for MbedTLS can be kinda tricky, hope this helps.
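If you would rather not patch the library's copy of mbedtls_net_init(), the same fix can be applied from application code once the interface has been added; a minimal sketch under that assumption, where network_bring_up is an illustrative name and netif is assumed to be the handle registered with netif_add() above:
#include "lwip/netif.h"
#include "lwip/dhcp.h"

extern struct netif netif;   /* the interface registered with netif_add() */

/* Sketch: make the DHCP-managed interface the default route from
   application code instead of editing mbedtls_net_init(). Without a
   default netif, LwIP's tcp_connect() cannot pick a local address. */
void network_bring_up(void)
{
    netif_set_up(&netif);
    dhcp_start(&netif);
    netif_set_default(&netif);
}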

Building an ASN1 set using the openssl C API

I'm trying to build a set of sequences using the OpenSSL C API. As has been noted in various places, the documentation is VERY sparse on this, and code samples seem to be non-existent.
I've found various suggestions on the web, but none that seemed to work correctly.
I've gotten this far in order to create sequences:
#include <openssl/asn1t.h>

typedef struct StringStructure {
    ASN1_INTEGER *count;
    ASN1_INTEGER *asnVersion;
    ASN1_OCTET_STRING *value;
} StringSequence;

DECLARE_ASN1_FUNCTIONS(StringSequence)

ASN1_SEQUENCE(StringSequence) = {
    ASN1_SIMPLE(StringSequence, count, ASN1_INTEGER),
    ASN1_SIMPLE(StringSequence, asnVersion, ASN1_INTEGER),
    ASN1_SIMPLE(StringSequence, value, ASN1_OCTET_STRING),
} ASN1_SEQUENCE_END(StringSequence)

IMPLEMENT_ASN1_FUNCTIONS(StringSequence)

auto aSeq = StringSequence_new();
aSeq->count = ASN1_INTEGER_new();
aSeq->asnVersion = ASN1_INTEGER_new();
aSeq->value = ASN1_OCTET_STRING_new();
if (!ASN1_INTEGER_set(aSeq->count, 10) ||
    !ASN1_INTEGER_set(aSeq->asnVersion, 1) ||
    !ASN1_STRING_set(aSeq->value, "Test", -1)) {
    // -- Error
}

auto anotherSeq = StringSequence_new();
anotherSeq->count = ASN1_INTEGER_new();
anotherSeq->asnVersion = ASN1_INTEGER_new();
anotherSeq->value = ASN1_OCTET_STRING_new();
if (!ASN1_INTEGER_set(anotherSeq->count, 32) ||
    !ASN1_INTEGER_set(anotherSeq->asnVersion, 1) ||
    !ASN1_STRING_set(anotherSeq->value, "Something Else", -1)) {
    // -- Error
}
Where do I go from there in order to build a set of these?
The OpenSSL source code is your best documentation...
As an example of a construct like the one you are trying to build, check out the PKCS7_SIGNED ASN1 definition in crypto/pkcs7/pk7_asn1.c:
ASN1_NDEF_SEQUENCE(PKCS7_SIGNED) = {
    ASN1_SIMPLE(PKCS7_SIGNED, version, ASN1_INTEGER),
    ASN1_SET_OF(PKCS7_SIGNED, md_algs, X509_ALGOR),
    ASN1_SIMPLE(PKCS7_SIGNED, contents, PKCS7),
    ASN1_IMP_SEQUENCE_OF_OPT(PKCS7_SIGNED, cert, X509, 0),
    ASN1_IMP_SET_OF_OPT(PKCS7_SIGNED, crl, X509_CRL, 1),
    ASN1_SET_OF(PKCS7_SIGNED, signer_info, PKCS7_SIGNER_INFO)
} ASN1_NDEF_SEQUENCE_END(PKCS7_SIGNED)
Its second member, md_algs, is a set of X509_ALGOR, which is in itself a sequence defined in crypto/asn1/x_algor.c:
ASN1_SEQUENCE(X509_ALGOR) = {
    ASN1_SIMPLE(X509_ALGOR, algorithm, ASN1_OBJECT),
    ASN1_OPT(X509_ALGOR, parameter, ASN1_ANY)
} ASN1_SEQUENCE_END(X509_ALGOR)
So that field md_algs is a set of sequences, just like you are asking for. The equivalent C structure definitions can be found in include/openssl/pkcs7.h:
typedef struct pkcs7_signed_st {
    ASN1_INTEGER *version;               /* version 1 */
    STACK_OF(X509_ALGOR) *md_algs;       /* md used */
    STACK_OF(X509) *cert;                /* [ 0 ] */
    STACK_OF(X509_CRL) *crl;             /* [ 1 ] */
    STACK_OF(PKCS7_SIGNER_INFO) *signer_info;
    struct pkcs7_st *contents;
} PKCS7_SIGNED;
The md_algs field shows that to capture the set-construct, you need to use the STACK API, which is intended to handle collections. In your case, that would be a STACK_OF(StringSequence).
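For what it's worth, here is a minimal, untested sketch of how the same template macros might be applied to the StringSequence from the question, mirroring the PKCS7_SIGNED pattern; StringSequenceSet and build_set are illustrative names, and the stack macros assume OpenSSL 1.1.0 or later:
#include <openssl/asn1t.h>
#include <openssl/safestack.h>

/* Generate the sk_StringSequence_* helpers for the stack type. */
DEFINE_STACK_OF(StringSequence)

/* A wrapper whose single member is a SET OF StringSequence. */
typedef struct StringSequenceSet_st {
    STACK_OF(StringSequence) *items;
} StringSequenceSet;

DECLARE_ASN1_FUNCTIONS(StringSequenceSet)

ASN1_SEQUENCE(StringSequenceSet) = {
    ASN1_SET_OF(StringSequenceSet, items, StringSequence)
} ASN1_SEQUENCE_END(StringSequenceSet)

IMPLEMENT_ASN1_FUNCTIONS(StringSequenceSet)

/* Illustrative use: push the two sequences built above, then DER-encode. */
int build_set(StringSequence *aSeq, StringSequence *anotherSeq,
              unsigned char **der_out)
{
    StringSequenceSet *set = StringSequenceSet_new();  /* items starts as an empty stack */
    sk_StringSequence_push(set->items, aSeq);
    sk_StringSequence_push(set->items, anotherSeq);
    return i2d_StringSequenceSet(set, der_out);        /* generated by the macros */
}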

C NET-SNMP Get and Set specifically via MIB Name, Not OID

I have written and am testing software for a generic SNMP client module in C, as well as an implementation using this generic module. I am having trouble getting a get request to work by passing in a MIB name (e.g. sysDescr) instead of an OID (e.g. 1.3.6.1.2.1.1.1).
I am successful when I pass a character array containing the OID to snmp_parse_oid(), but not when I pass the name.
I have checked the MIB file to make sure I am using the correct name. When I run the command-line snmptranslate on the name, it gives me the OID listed above:
$ snmptranslate -m +<MIB File> -IR -On <MIB Name>
.#.#.#.#.#.#.#####.#.#.#.#.#.#
(In the above command I replaced my actual MIB file with <MIB File>, the MIB name with <MIB Name>, and the OID numbers returned from the command with # characters.)
The following is my code for my generic SNMP get function; assume the returned values are #define'd numbers, and note that I have removed some error handling for brevity:
/// @Synopsis  Function to send out a get request once the
///            SNMPOidData object has been set up
///
/// @Param oid_name  String containing the OID (or MIB name) to get
/// @Param value     Output buffer for the retrieved value
///
/// @Returns Error code
int snmpGet(SNMPAgent *this, char const * const oid_name, SNMPOidData *value)
{
    netsnmp_pdu *pdu;
    netsnmp_pdu *response;
    netsnmp_variable_list *vars;
    oid *retrieved_oid;
    oidStruct oid_to_get;
    int status = 0;
    int result = ERROR_SUCCESS;

    // Create the PDU for the data for our request
    pdu = snmp_pdu_create(SNMP_MSG_GET);
    oid_to_get.OidLen = MAX_OID_LEN;    // Set max length

    // Parse the OID (or MIB name) into numeric form
    retrieved_oid = snmp_parse_oid(oid_name, oid_to_get.Oid, &oid_to_get.OidLen);

    // Set the data
    snmp_add_null_var(pdu, oid_to_get.Oid, oid_to_get.OidLen);

    // Send the request out
    status = snmp_synch_response(this->port.snmp_session_handle, pdu, &response);
    if (STAT_SUCCESS == status)
    {
        if (SNMP_ERR_NOERROR == response->errstat)
        {
            vars = response->variables;
            value->type = vars->type;
            if (vars->next_variable != NULL)
            {
                // There are more values, set return type to null
                value->type = ASN_NULL;
            }
            else if (!(CHECK_END(vars->type)))  // Exception
            {
                result = ERROR_NOT_PRESENT;
                fprintf(stderr, "Warning: OID=%s gets snmp exception %d \n",
                        oid_name, vars->type);
            }
            else if ((vars->type == ASN_INTEGER)
                     || (vars->type == ASN_COUNTER)
                     || (vars->type == ASN_UNSIGNED))
            {
                value->integer = *(vars->val.integer);
                value->str_len = sizeof(value->integer);
            }
            else
            {
                value->str_len = vars->val_len;
                if (value->str_len >= MAX_ASN_STR_LEN)
                    value->str_len = MAX_ASN_STR_LEN;
                if (value->str_len > 0)
                    memcpy(value->string, vars->val.string, value->str_len);
                // guarantee NULL terminated string
                value->string[value->str_len] = '\0';
            }
        }
    }

    this->freePDU(response);    // Clean up: free the response
    return result;
}
The error I am getting:
oid_name: Unknown Object Identifier (Sub-id not found: (top) -> <MIB Name>)
Which comes from the following call:
retrieved_oid = snmp_parse_oid(oid_name, oid_to_get.Oid, &oid_to_get.OidLen);
I have made sure that the MIB files are on the machine in the configured place (snmptranslate wouldn't work if this weren't the case).
I have spent a good amount of time on Google results as well as directly searching here on Stack Overflow.
The following is a good tutorial but does not address my issue (they directly reference the OID they want to get the value of):
http://www.net-snmp.org/wiki/index.php/TUT:Simple_Application
Any help or insight would be much appreciated.
Some other info I can think of: this is being compiled to run on an armv5tejl target running Linux, communicating with an external device via Ethernet.
Thanks,
When I call MIB variables by their string name, I use the following net-snmp functions:
read_objid(OID, anOID, &anOID_len);
snmp_add_null_var(pdu, anOID, anOID_len);
Where:
oid anOID[MAX_OID_LEN];
size_t anOID_len = MAX_OID_LEN;
In my program I pack this all into a single function call:
void packSingleGetOID(const char *OID, struct snmp_pdu *pdu){
    // OID in / PDU out
    oid anOID[MAX_OID_LEN];
    size_t anOID_len = MAX_OID_LEN;

    read_objid(OID, anOID, &anOID_len);
    snmp_add_null_var(pdu, anOID, anOID_len);
}
I pass in the MIB OID string and the pointer to the session PDU. Remember that the OID string has the form MIB_Name::variable.
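As a usage illustration only (sendSysDescrGet is a made-up caller and the session handle is assumed to be already open), the helper might be driven like this; note that the MIB modules must have been loaded (for example by init_snmp() at start-up), otherwise read_objid() fails on names just as snmp_parse_oid() does:
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

/* Hypothetical caller: fetch sysDescr.0 by its MIB name.
   init_snmp() loads the default MIB modules; add_mibdir()/read_mib()
   can be used for MIB files outside the default search path. */
void sendSysDescrGet(netsnmp_session *session_handle)
{
    netsnmp_pdu *pdu = snmp_pdu_create(SNMP_MSG_GET);
    netsnmp_pdu *response = NULL;

    packSingleGetOID("SNMPv2-MIB::sysDescr.0", pdu);

    if (snmp_synch_response(session_handle, pdu, &response) == STAT_SUCCESS
        && response != NULL
        && response->errstat == SNMP_ERR_NOERROR)
    {
        print_variable(response->variables->name,
                       response->variables->name_length,
                       response->variables);
    }
    if (response != NULL)
        snmp_free_pdu(response);
}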

SDLNet Networking Not Working

I am working on a game written in C using SDL. Given that it already uses SDL, SDL_image, and SDL_ttf, I decided to add SDL_mixer and SDL_net to my engine. Getting SDL_mixer set up and working was very easy, but I am having a lot of trouble with SDL_net.
To test, I created a very simple application with the following rules:
Run without arguments: act as a TCP server on port 9999.
Run with an argument: try to connect to the server at the given IP address on port 9999.
Here are some of the key lines of the program (I'm not going to post my whole event-driven SDL engine because it's too long):
char *host = NULL;
if (argc > 1) host = argv[1];
and...
IPaddress ip;
TCPsocket server = NULL;
TCPsocket conn = NULL;

if (host) { /* client mode */
    if (SDLNet_ResolveHost(&ip, host, port) < 0)
        return NULL;  // this is actually inside an engine method
    if (!(conn = SDLNet_TCP_Open(&ip)))
        return NULL;
} else { /* server mode */
    if (SDLNet_ResolveHost(&ip, NULL, port) < 0)
        return NULL;
    if (!(server = SDLNet_TCP_Open(&ip)))
        return NULL;
}
and... inside the event loop
if (server) {
    if (!conn)
        conn = SDLNet_TCP_Accept(server);
}
if (conn) {
    void *buf = malloc(size);  // server, conn, size are actually members of a weird struct
    while (SDLNet_TCP_Recv(conn, buf, size))
        onReceive(buf);  // my engine uses a callback system to handle things
    free(buf);
}
The program seems to start up just fine. However, for some reason, when I run it in client mode from my laptop, trying to connect to my home computer (which is on a different IP), the call to SDLNet_TCP_Open blocks the program for a while (5-10 seconds) and then returns NULL. Can anybody see what I did wrong? Should I post more of the code? Let me know.

QT Movie Metadata Tagging with QTKit

I'm trying to do some metadata tagging on some video files using QTKit. I've got things working for tagging atoms that take a string as their value, but I'm having a hard time setting atoms that take an 8-bit integer as their value. Here is what I have right now, based on Apple's documentation and various other sources on the internet:
-(void) setMediaKind: (NSString *) value
{
    QTMetaDataRef metaDataRef;
    Movie theMovie;
    OSStatus status;

    theMovie = [movie quickTimeMovie];
    status = QTCopyMovieMetaData(theMovie, &metaDataRef);
    NSAssert(status == noErr, @"QTCopyMovieMetaData failed!");

    if (status == noErr)
    {
        int intValue = NSSwapHostIntToBig([(NSNumber *)value intValue]);
        UInt8 *dataValuePtr = (UInt8 *)(&intValue);
        ByteCount dataSize = sizeof(int);

        if (dataValuePtr)
        {
            OSType key = 'stik';
            QTMetaDataItem outItem;
            status = QTMetaDataAddItem(metaDataRef,
                                       kQTMetaDataStorageFormatiTunes,
                                       kQTMetaDataKeyFormatiTunesShortForm,
                                       (const UInt8 *)&key,
                                       sizeof(key),
                                       dataValuePtr,
                                       dataSize,
                                       kQTMetaDataTypeSignedIntegerBE,
                                       &outItem);
            NSAssert(status == noErr, @"QTMetaDataAddItem failed!");

            char langCodeStr[] = "en";
            status = QTMetaDataSetItemProperty(metaDataRef,
                                               outItem,
                                               kPropertyClass_MetaDataItem,
                                               kQTMetaDataItemPropertyID_Locale,
                                               strlen(langCodeStr) + 1,
                                               langCodeStr);
        }
    }
}
So the atom 'stik' sets the video's kind in iTunes. If I want to specify the video as a TV show, I'd need to assign it a value of 10. If I send @"10" to this method I don't get any errors, but the video file isn't properly tagged either.
I'm sure part of my problem is that I skipped learning C and went straight to Objective-C, so when I have to dive into C like this I have problems.
