SNMP Trap Truncated? - net-snmp

So we receive an SNMP trap and the text is as follows:
The following SNMP trap notification was generated by ms931.mytest.com (xx.x.xxx.xx):
DISMAN-EVENT-MIB::sysUpTimeInstance 0:0:21:08.75
SNMPv2-MIB::snmpTrapOID.0 DISMAN-EVENT-MIB::mteTriggerFired
DISMAN-EVENT-MIB::mteHotTrigger.0 44448217: No response from lo DISMAN-EVENT-MIB::mteHotTargetName.0
DISMAN-EVENT-MIB::mteHotContextName.0
DISMAN-EVENT-MIB::mteHotOID.0 SNMPv2-SMI::enterprises.7143.6.9.1.0
DISMAN-EVENT-MIB::mteHotValue.0 1224 SNMPv2-SMI::enterprises.7143.6.9.2.0 7
Essentially the line with the error code (i.e. 44448217) somehow gets truncated, or the line that follows eats it up. The snmpd service we use runs on Red Hat Enterprise Linux 5.6, and the RPM package version is net-snmp-5.3.2.2-9.el5_5.1. We supply the 44448217 error message ourselves, so why is it being truncated? This didn't happen with net-snmp-5.1.2-13.el4_7.2.
Cheers,
Matt

This question appears to have also been asked (and answered) in this Google group for Net-SNMP users.
To summarize the conversation there: the value was being truncated internally, and the problem had nothing to do with the trap itself. The text exceeded a hard length limit, which produced the posted result.
DISMAN-EVENT-MIB::mteHotTrigger, the OID whose value appears to be truncated, is an SnmpAdminString that represents a trigger name. While an SnmpAdminString can be up to 255 bytes in length, the trigger names from DISMAN-EVENT-MIB::mteTriggerName can only be up to 32 bytes in length.
The trigger name was specified in the particular configuration as:
44448217: No response from local user/portal application.
This was well over the 32-byte limit. The solution was simply to make the trigger name the error code value:
44448217
(which worked out fine for the poster).
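As a side note, this kind of mistake is easy to catch before reconfiguring the agent. Below is a minimal Python sketch (not part of net-snmp; the function name is made up) that simply checks a proposed trigger name against the 32-octet bound that DISMAN-EVENT-MIB places on mteTriggerName:
MAX_TRIGGER_NAME_OCTETS = 32  # DISMAN-EVENT-MIB limits mteTriggerName to 32 octets

def check_trigger_name(name):
    length = len(name.encode("utf-8"))
    if length > MAX_TRIGGER_NAME_OCTETS:
        print("too long (%d octets), will be truncated: %r" % (length, name))
    else:
        print("ok (%d octets): %r" % (length, name))

check_trigger_name("44448217")
check_trigger_name("44448217: No response from local user/portal application.")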
In theory, walking the entire trigger table would have shown that the value of DISMAN-EVENT-MIB::mteTriggerName was also truncated; including that information in the original post would have made the problem easier to diagnose.

Related

Encoding (?) issues fetching binary data (image type column) from SQL Server via pyodbc/Python3 [duplicate]

I'm executing this query
SELECT CMDB_ID FROM DB1.[dbo].[CDMID]
when I run this query in SSMS 18 I get the following (screenshot not included):
I'm aware these are HEX values, although I'm not an expert on the topic.
I need to execute this exact query in Python so I can process that information through a script; this script needs as input the HEX values without any manipulation (as you see in the SSMS output).
So, through the pyodbc library with a regular connection:
SQLserver_Connection("Driver={SQL Server Native Client 11.0};"
                     "Server=INSTANCE;"
                     "Database=DB1;"
                     "UID=USER;"
                     "PWD=PASS;")
I get this:
0 b'#\x12\x90\xb2\xbb\x92\xbbe\xa3\xf9:\xe2\x97#...
1 b'#"\xaf\x13\x18\xc9}\xc6\xb0\xd4\x87\xbf\x9e\...
2 b'#G\xc5rLh5\x1c\xb8h\xe0\xf0\xe4t\x08\xbb'
3 b'#\x9f\xe65\xf8tR\xda\x85S\xdcu\xd3\xf6*\xa2'
4 b'#\xa4\xcb^T\x06\xb2\xd0\x91S\x9e\xc0\xa7\xe543'
... ...
122 b'O\xa6\xe1\xd8\tA\xe9E\xa0\xf7\x96\x7f!"\xa3\...
123 b'O\xa9j,\x02\x89pF\xb9\xb4:G]y\xc4\xb6'
124 b'O\xab\xb6gy\xa2\x17\x1b\xadd\xc3\r\xa6\xee50'
125 b'O\xd7ogpWj\xee\xb0\xd8!y\xec\x08\xc7\xfa'
126 b"O\xf0u\x14\xcd\x8cT\x06\x9bm\xea\xddY\x08'\xef"
I have three questions:
How can this data be interpreted, and why am I getting it?
Is there a way to turn this data back into the original HEX values? And if not...
What can I do to receive the original HEX values?
I've been looking for a solution but haven't found anything yet. As you can see, I'm not an expert on this kind of topic, so if you can't provide a solution I would also really appreciate pointers to background material I should read so I can work out a solution myself.
I think your issue is simply due to the fact that SSMS and Python produce different hexadecimal representations of binary data. Your column is apparently a binary or varbinary column, and when you query it in SSMS you see its fairly standard hex representation of the binary values, e.g., 0x01476F726400.
When you retrieve the value using pyodbc you get a <class 'bytes'> object which is represented as b'hex_representation' with one twist: Instead of simply displaying b'\x01\x47\x6F\x72\x64\x00', Python will render any byte that corresponds to a printable ASCII character as that character, so we get b'\x01Gord\x00' instead.
That minor annoyance (IMO) aside, the good news is that you already have the correct bytes in a <class 'bytes'> object, ready to pass along to any Python function that expects to receive binary data.
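If you do want the SSMS-style string back (for logging or comparison), it is a one-liner. A minimal sketch, reusing the b'\x01Gord\x00' example above rather than your actual data:
row_value = b'\x01\x47\x6F\x72\x64\x00'   # bytes as returned by pyodbc for a (var)binary column

# .hex() gives the raw hex digits; prepend "0x" and uppercase to match SSMS
ssms_style = "0x" + row_value.hex().upper()
print(ssms_style)  # 0x01476F726400

# and back again, if you ever need the bytes from an SSMS-style hex string
assert bytes.fromhex(ssms_style[2:]) == row_value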

DBT Snowflake: 'utf-8' codec can't decode byte 0xa0 in position 1031: invalid start byte

I get an error when I add fields (even commented out) to my query in DBT.
I am using DBT Cloud, running on Snowflake.
This runs fine - it even already has, in the join at the bottom, the table from which I want the fields.
However, as soon as I put in the fields - even commented out - I get the error in the title.
Does anyone have any idea why this is happening?
So in the end, I found out that somehow a non-ASCII character (likely a non-breaking space) crept into my script while copying code back and forth between the DBT interface and SSMS. I'm still not sure how this happened. Nonetheless, DBT has a Python module that parses the text, and that module was having an issue with the character.
So in the end, I simply had to retype my code... and then it worked.
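If retyping isn't practical, the offending byte can also be located directly. A minimal Python sketch (the file name is hypothetical) that reports any non-ASCII bytes, such as the 0xA0 non-breaking space from the error message:
# scan the model file for bytes outside the ASCII range and report where they are
with open("my_model.sql", "rb") as f:
    raw = f.read()

for offset, byte in enumerate(raw):
    if byte > 0x7F:
        print("non-ASCII byte 0x%02X at offset %d" % (byte, offset))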

ISPF/Mainframe Send File to Host with variable length

I need help with something I'm trying to do and cannot find help anywhere.
I'm trying to upload a file to the host via ISPF (ISPF -> Command -> "Send File to Host"). The problem is that the file has variable-length records (it was exported from a DB2 database via a shell script), and it's not working well.
What I mean is:
In windows, the file looks like this:
This is line one
This is the second line
And this is the third
But in Host it always ends being like this:
This is line one This is
the second line and this
is the third
Or similar, depending on the "Record length" I set when allocating the data set.
I don't know if the problem is in how I'm creating the file on the host, in the send parameters, or maybe in the TXT file itself.
I tried creating the dataset with different record formats (F, FB, V, VB) and the result was the same with all of them.
I also tried modifying the send parameters (screenshot of the Send parameters dialog not included).
And checked the txt file, but it seems to be ok.
Well, thanks in advance for the help, and sorry for my poor English!
UPDATE 03/18
Hi! I'm still trying to solve this, but now I have more info!
It seems the problem is in the exported file, not in the terminal configuration.
I'm using a Linux script to export the file from a DB2 database, and I'm trying to upload it from a Windows PC (which has the E3270 terminal).
I read a lot, and noticed that the file exported from DB2 on Linux only uses the "New Line" code to mark an end of line (0A in hex), while Windows uses "Carriage Return + New Line" (0D 0A in hex).
Could the problem be there?
I tried creating a new TXT file in Windows (where each line ends with 0D 0A), and it worked great! Then I tried to modify the exported file by adding a space at the end of each line and changing that space's hex value (20) to 0D, so I'd have 0D 0A (the editor didn't let me add a new hex byte), but it didn't work. That throws off my whole theory, haha, but maybe I'm doing something wrong.
well, thanks!
From the host output, the file (dataset) is being treated as fixed-length records of 24 bytes. It needs to be specified as Variable (VB) in the send.
From here: Personal Communications 6.0.0 > Product Documentation > Books > Emulator User's Reference > Transferring Files, it appears that you can specify this as follows:
Record Format
Valid only for VM/CMS and MVS/TSO when APPEND is not specified for
file transmission. You can select any of the following:
Default
Fixed (fixed length)
Variable (variable length)
Undefined (undefined mode for MVS/TSO only)
If you select the Default value, the record format is selected
automatically by the host system.
Specifying Variable for VM file transfer enables host disk space to be
used efficiently.
Logical Record Length (LRECL)
Valid only for VM/CMS and MVS/TSO when APPEND is not specified for
file transmission.
Enter the logical record length to be used (host record byte count) in
the LRECL text box. If Variable and Undefined Mode are specified as
the record format, the logical record length is the maximum record
length within a file. The maximum value is 32767.
The record length of a file sent from a workstation to the host system
might exceed the logical record length specified here. If so, the host
file transfer program divides the file by the logical record length.
When sending a text file from a workstation to a host, if the text
file contains 2-byte workstation codes (such as kanji codes), the
record length of the file is changed because SO and SI have been
inserted.
To send a file containing long records to the host system, specify a
sufficiently long logical record length.
Because the record length of a workstation file exceeds the logical
record length, a message does not appear normally if each record is
divided. To display a message, add the following specification to the
[Transfer] item of the workstation profile:
DisplayTruncateMessage = Y
As I don't have access, I can't actually look into this further, but I do recall that the file transfer can be a little confusing to use.
I'd suggest using 32767 as the LRECL, along with Variable, and perhaps having a look at the whole page that has been linked. Something on the PC side will have to know how to convert the file (i.e. at each LF, determine the length of the record and prefix the record with that record length - if I recall correctly, 2 bytes/a word), so you might have to use Variable in conjunction with another selectable parameter.
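If the LF-only line endings from the DB2 export turn out to be part of the problem (as the update above suspects), converting them on the PC before the transfer is straightforward and avoids hex-editing the file by hand. A minimal Python sketch, with hypothetical file names:
# rewrite LF-only line endings as CR+LF so the Windows-side transfer sees normal records
with open("db2_export.txt", "rb") as src:
    data = src.read()

data = data.replace(b"\r\n", b"\n")   # normalise first, in case some lines already end in CR+LF
data = data.replace(b"\n", b"\r\n")

with open("db2_export_crlf.txt", "wb") as dst:
    dst.write(data)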
If you follow the link, you will see that Record Format is part of Defining Transfer Types; you may have to define a transfer type as follows:
Click Edit -> Preferences -> Transfer from the session window.
Click the tab for your host type or modem protocol.
The property page for the selected host or modem protocol opens. The items that appear depend on the selected host system.
Enter transfer-type names in the Transfer Type box, or select them from the drop-down list.
Select or enter the required items (see Items to Be Specified).
To add or replace a transfer type, click Save. To delete a transfer type, click Delete.
A dialog box displays, asking for confirmation. Click OK.

Parsing text in Julia- Invalid ascii sequence

I'm trying to format a data file so that my other program will properly handle it. I am trying to handle the following data and I am getting a very weird error that I can't seem to put my finger on.
https://snap.stanford.edu/data/wiki-RfA.html
I am trying to format the data as [SRC TGT VOT], so I'd like the first two lines of my output file to be
1 2 1
3 2 1
because user 1 (stored in dictionary of users first) votes for user 2 with VOT 1 and then user 3 votes for user 2 with VOT 1. My problem is that when I try to run my code below, I always end up getting a very strange "invalid ascii sequence" error- can anyone help me identify the issue or perhaps find a way around this? It'd obviously be best if I could learn what I am doing wrong. Thank you!
Note, I understand that this is a bit specific of a question and I appreciate any help- I'm sort of baffled by this error and don't know how to resolve it at the moment.
f=open("original_vote_data.txt") #this is the file linked above
arr=readlines(f)
i=edge_count=src=tgt=vot=1
dict=Dict{ASCIIString, Int64}()
edges=["" for k=1:198275]
while i<1586200
src_temp=(arr[i])[5:end-2]
if (haskey(dict, src_temp))
new_src= dict[src_temp]
else
dict[src_temp]=src
new_src=src
src=src+1
end
tgt_temp=(arr[i+1])[5:end-2]
if (haskey(dict, tgt_temp))
new_tgt= dict[tgt_temp]
else
dict[tgt_temp]=tgt
new_tgt=tgt
tgt=tgt+1
end
vot_temp=(arr[i+2])[5]
edges[edge_count]=string(new_src)* " " * string(new_tgt)* " " *string(vot_temp)
edge_count=edge_count+1
i=i+8
end
Here we go - I'll write up my comment as an answer since it seems to have solved the question.
My hunch that the error stemmed from the fourth line (dict=Dict{ASCIIString, Int64}) was based on the fact that ASCIIStrings will error if you try to store non-ASCII characters in them. Since this file is coming from an international site, it's not unlikely that there are users with unicode characters in their names (or elsewhere in the data). So the simple fix is to change all instances of ASCIIString to UTF8String.
Just to make this answer a bit more complete, I downloaded the file and tried running the program. The simplest way to debug this is to run the script at top-level in the REPL and then inspect the program state after the error. After the error is thrown, i==3017. Now just try running each line of the while loop incrementally. You'll quickly see that line 3017 contains "SRC:Guðsþegn\n" — unicode, as I suspected. When you try to create a new entry in dict with that as the key, the error should have a backtrace to setindex! in dict.jl, where you'll see that it's trying to convert the key (a UTF8String) to an ASCIIString. So changing the dictionary type to have UTF8String keys solves the problem.
As it turns out, the edges array only contains strings of three integers (or sometimes a hyphen), so the ASCIIString there is ok, but still a little dangerous. I'd probably store that information in a more dedicated array of ints instead of converting it to a space-separated string: you know the first two elements in the string are ints, but the last element is unvalidated text from the file itself… which may be unicode or a space itself (which could mess up processing down the line).

Twitter name length in DB

I'm adding a field for Twitter names to a member table for members on a site. From what I can work out, the maximum Twitter name length is 20, so it seems obvious that I should set the field size to varchar(20) (SQL Server).
Is this a good idea?
What if Twitter starts allowing multi-byte characters in the user names? Should I make this field nvarchar?
What if Twitter decides to increase the size of a username? Should I make it 50 instead and then warn a user if they enter a name longer than 20?
I'm trying to code defensively so that I can reduce the chances of having to modify the code around this input field or make DB schema changes later.
While looking for the same info, I found the following in a sort of weird place in the Twitter help section (why not in the API docs? who knows?):
"Your user name can contain up to 15 characters. Why no more? Because we append your user name to your 140 characters on outgoing SMS updates and IM messages. If your name is longer than 15 characters, your message would be too long to send in a single text message."
http://help.twitter.com/entries/14609-how-to-change-your-username
So perhaps one could even get away with varchar(16).
While new accounts have a limit of 15 characters for the username and 20 characters for the name, for old accounts this limit seems to be undefined. The documentation here states:
Earlybirds: Early users of Twitter may have a username or real name longer than user names we currently allow. This is ok until you need to save changes to your account settings. No changes will save unless your user/real name is the appropriate length; this means you have to change your real name/username to meet our most modern regulations.
So you are probably better off having a long field and saving yourself some time when you hit the border cases.
Nowadays, space is usually not a concern, so I'd use a mostly generic approach: use nvarchar(200).
When designing DB schemas you must think two steps ahead, even more so than when programming. Or get yourself a good schema update strategy; then you'll be fine with varchar(20) as well.
Personally I wouldn't worry. Use something like 200 (or a nice round number like 256) and you won't have this problem. The limit then is on their API, so you might be best to do some verification that it is a real username anyway. That verification implicitly includes the length checking.
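If you go that route, the length check falls out of the username validation anyway. A minimal Python sketch, assuming the rule quoted above of at most 15 characters and assuming the allowed characters are letters, digits, and underscores (the regex and function are illustrative, not from any Twitter library):
import re

# assumed rule: 1-15 characters from [A-Za-z0-9_], with an optional leading "@"
HANDLE_RE = re.compile(r"^@?\w{1,15}$", re.ASCII)

def is_valid_handle(name):
    return bool(HANDLE_RE.match(name))

print(is_valid_handle("@chatrbyte"))                             # True
print(is_valid_handle("this_name_is_far_too_long_for_twitter"))  # False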
Twitter allows for 140 characters to be typed in as the message payload for transmission, and includes "[username]:" at the beginning of the SMS message. With an upper limit of 140 characters for the message combined with the messaging system being based on SMS, I think they would have to decrease the allowable message size to increase the username. I think it is a pretty safe bet that 20 characters would be the max username length. I'd use nvarchar just in case someone uses 16-bit characters, and maybe pad it a little. nvarchar(24) should work; I wouldn't go any higher than nvarchar(32).
If you're going to develop an app for their service, you should probably watch the messages on Twitter's API Announcements mailing list.
[opinion only]
Twitter works over SMS, and the limit there is 160 characters, so the name has to be short to avoid eating into the message.
nvarchar would be a good idea for all Twitter text.
If the real ID of a Twitterer is a cell-phone then the longest phone number is your max - 20 should easily cover it!
Defensive programming is always good :) !
[/opinion only]
There's only so much you can code defensively, I'd suggest looking at the twitter API documentation and following anything specified there. That said, from a cursory look through nowhere seems to specify the length of the username, annoyingly :/
One thing to keep in mind here is that a field using nvarchar needs twice as much space, since it needs 2 bytes to store each potential Unicode character. So, a Twitter status would need a size of 280 using nvarchar, PLUS some more for possible retweets, as those aren't included in the 140-character limit. I discovered this just today, in fact!
For example:
RT @chatrbyte: here's some great tweet
that I'm retweeting.
The RT @chatrbyte: is not included in the 140-character limit.
So, assuming that a Twitter username has a 20-character limit, and wanting to also capture a retweet, a field to hold a full tweet would need to be an nvarchar of size 280 + 40 (for the username) + 8 (for the initial "RT @" before a retweet) + 4 (for the colon plus space after the retweeted username) = 332.
I would say go for nvarchar(350) to give yourself a little room. That's what I am trying right now. If I'm wrong I'll update here.
I'm guessing you are managing data entry for the Twitter name field somewhere in your application, not just in the database. If you open the field up to 200 characters, you only have to change the code in one place; and if you allow users to enter Twitter names longer than 20 characters, you don't have to worry about a change at all.
