Explain the hex of the OID portion of an SNMP GET REQUEST message - c

So I've been working on coding a C version of an SNMP GET request on Linux. I open a UDP socket, form the message, and send it out, only to continually get a reply saying the OID isn't found. The OID I was using was:
1.3.6.1.2.1.1.1.0
Hex:
01 03 06 01 02 01 01 01 00
After running tcpdump on the packets being sent by snmpget, I realized that even with the same OID the packet being sent actually contained the hex:
2B 06 01 02 01 01 01 00
When I tried that hex in my program it worked. So, the question is: why is it 2B instead of 01 03? I've looked everywhere but I can't wrap my head around the logic of it. It seems that every SNMP GET message is sent this way, with iso.org (1.3) translating to 2B, but I've yet to see a reason why.

The first two numbers (1.3 in your case) are encoded differently: BER packs them into a single byte. The calculation is:
1*40 + 3 = 43 (dec) = 2B (hex).
That's the reason for your 2B.
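To illustrate the rule (a minimal Python sketch, not the C from the question): the first two arcs share one byte, and any later arc of 128 or more spills into base-128 bytes with the continuation bit set.

```python
def encode_oid(oid):
    """Encode a dotted OID string into BER content octets (no tag/length)."""
    arcs = [int(a) for a in oid.split(".")]
    # BER packs the first two arcs into a single byte: first*40 + second.
    body = [arcs[0] * 40 + arcs[1]]
    for arc in arcs[2:]:
        # Remaining arcs use base-128, high bit set on all but the last byte.
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes(body)

print(encode_oid("1.3.6.1.2.1.1.1.0").hex(" ").upper())
# 2B 06 01 02 01 01 01 00
```

This reproduces exactly the bytes tcpdump showed for 1.3.6.1.2.1.1.1.0.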

Related

get wifi security type using SIOCSIWSCAN ioctl for WEP network

I'm trying to scan the list of available networks and enumerate the security type for each SSID. I'm at a point where I can issue a SIOCSIWSCAN ioctl and parse the results. However, when I try to differentiate between a WEP network and an open network, I seem to be getting the same type of IE from the AP.
For example, I configured my D-Link DIR-655 router first as open, then as WEP.
Since the network is a WEP network, I look for the element ID byte 0xDD to tell me that this is an IE describing a WPA/WEP/open network. In both cases I get a single IE starting with 0xDD, and it looks the same for both the open and WEP configurations:
DD 18 00 50 F2 02 01 01 83 00 03 A4 00 00 27 A4 00 00 42 43 5E 00 62 32 2F 00
Does that mean that the router doesn't populate information about open networks under the byte 0xDD and I should be looking somewhere else?
PS: I've been reverse engineering the source of iwlist to figure out how to read the IEs returned, but those only seem to describe WPA and WPA2 networks.
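For what it's worth, the vendor-specific IE quoted above can be decoded by hand (a quick Python sketch over the bytes shown). The OUI 00:50:F2 with type 2 is Microsoft's WMM/WME (QoS) element, not the WPA element (which is type 1), which would explain why it looks identical for open and WEP:

```python
# Decode the vendor-specific IE quoted in the question. Layout: element ID,
# length, OUI (3 bytes), OUI-specific type, then the vendor payload.
ie = bytes.fromhex(
    "DD 18 00 50 F2 02 01 01 83 00 03 A4 00 00 27 A4 00 00 42 43 5E 00 62 32 2F 00"
)
elem_id, length = ie[0], ie[1]
oui, oui_type = ie[2:5], ie[5]
print(f"id=0x{elem_id:02X} len={length} oui={oui.hex(':')} type={oui_type}")
# type 2 under OUI 00:50:f2 is WMM/WME, not WPA (which is type 1)
```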

freebcp: "Unicode data is odd byte size for column. Should be even byte size"

This file works fine (UTF-8):
$ cat ok.txt
291054 Ţawī Rifā
This file causes an error (UTF-8):
$ cat bad.txt
291054 Ţawī Rifā‘
Here's the message:
$ freebcp 'DB.dbo.table' in bad.txt ... -c
Starting copy...
Msg 20050, Level 4
Attempt to convert data stopped by syntax error in source field
Msg 4895, Level 16, State 2
Server '...', Line 1
Unicode data is odd byte size for column 2. Should be even byte size.
Msg 20018, Level 16
General SQL Server error: Check messages from the SQL Server
The only difference is the last character, which is Unicode U+2018 (left single quotation mark)
Any idea what is causing this error?
The SQL Server uses UTF-16LE (though TDS starts with UCS-2LE and switches over I believe)
The column in question is nvarchar(200)
Here's the packet sent right before the error:
packet.c:741:Sending packet
0000 07 01 00 56 00 00 01 00-81 02 00 00 00 00 00 08 |...V.... ........|
0010 00 38 09 67 00 65 00 6f-00 6e 00 61 00 6d 00 65 |.8.g.e.o .n.a.m.e|
0020 00 69 00 64 00 00 00 00-00 09 00 e7 90 01 09 04 |.i.d.... ...ç....|
0030 d0 00 34 04 6e 00 61 00-6d 00 65 00 d1 ee 70 04 |Ð.4.n.a. m.e.Ñîp.|
0040 00 13 00 62 01 61 00 77-00 2b 01 20 00 52 00 69 |...b.a.w .+. .R.i|
0050 00 66 00 01 01 18 - |.f....|
Update: This issue has apparently been fixed in FreeTDS v1.00.16, released 2016-11-04.
I can reproduce your issue using FreeTDS v1.00.15. It definitely looks like a bug in freebcp that causes it to fail when the last character of a text field has a Unicode code point of the form U+20xx. (Thanks to @srutzky for correcting my conclusion as to the cause.) As you noted, this works ...
291054 Ţawī Rifā
... and this fails ...
291054 Ţawī Rifā‘
... but I found that this also works:
291054 Ţawī Rifā‘x
So, an ugly workaround would be to run a script against your input file that would append a low-order non-space Unicode character to each text field (e.g., x which is U+0078, as in the last example above), use freebcp to upload the data, and then run an UPDATE statement against the imported rows to strip off the extra character.
Personally, I would be inclined to switch from FreeTDS to Microsoft's SQL Server ODBC Driver for Linux, which includes the bcp and sqlcmd utilities when installed using the instructions described here:
https://gallery.technet.microsoft.com/scriptcenter/SQLCMD-and-BCP-for-Ubuntu-c88a28cc
I just tested it under Xubuntu 16.04, and although I had to tweak the procedure a bit to use libssl.so.1.0.0 instead of libssl.so.0.9.8 (and the same for libcrypto), once I got it installed the bcp utility from Microsoft succeeded where freebcp failed.
If the SQL Server ODBC Driver for Linux will not work on a Mac then another alternative would be to use the Microsoft JDBC Driver 6.0 for SQL Server and a little bit of Java code, like this:
import java.sql.Connection;
import java.sql.DriverManager;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCSVFileRecord;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy;

String connectionUrl = "jdbc:sqlserver://servername:49242"
        + ";databaseName=myDb"
        + ";integratedSecurity=false";
String myUserid = "sa", myPassword = "whatever";
String dataFileSpec = "C:/Users/Gord/Desktop/bad.txt";
try (
        Connection conn = DriverManager.getConnection(connectionUrl, myUserid, myPassword);
        SQLServerBulkCSVFileRecord fileRecord =
                new SQLServerBulkCSVFileRecord(dataFileSpec, "UTF-8", "\t", false);
        SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(conn)) {
    fileRecord.addColumnMetadata(1, "col1", java.sql.Types.NVARCHAR, 50, 0);
    fileRecord.addColumnMetadata(2, "col2", java.sql.Types.NVARCHAR, 50, 0);
    bulkCopy.setDestinationTableName("dbo.freebcptest");
    bulkCopy.writeToServer(fileRecord);
} catch (Exception e) {
    e.printStackTrace(System.err);
}
This issue has nothing to do with UTF-8 given that the data being transmitted, as shown in the transmission packet (bottom of the question) is UTF-16 Little Endian (just as SQL Server would be expecting). And it is perfectly good UTF-16LE, all except for the missing final byte, just like the error message implies.
The problem is most likely a minor bug in freetds that incorrectly applies logic meant to strip trailing spaces from variable-length string fields. There are no trailing spaces, you say? Well, if it hadn't gotten chopped off then it would be a little clearer (but, if it hadn't gotten chopped off there wouldn't be this error). So, let's look at the packet to see if we can reconstruct the data.
The error in the data is probably being overlooked because the packet contains an even number of bytes. But not all fields are double-byte, so it doesn't need to be an even number. If we know what the good data is (prior to the error), then we can find a starting point in the data and move forward. It is best to start with Ţ as it will hopefully be above the 255 / FF value and hence take 2 bytes. Anything below will have a 00, and many of the characters have that on both sides. While we should be able to assume Little Endian encoding, it is best to know for certain. To that end, we need at least one character that has two non-00 bytes, and bytes that are different (one of the characters, ā, is 01 for both bytes, and so does not help determine ordering). The first character of this string field, Ţ, confirms this, as it is Code Point 0162 yet shows up as 62 01 in the packet.
Below are the characters, in the same order as the packet, their UTF-16 LE values, and a link to their full details. The first character's byte sequence of 62 01 gives us our starting point, and so we can ignore the initial 00 13 00 of line 0040 (they have been removed in the copy below for readability). Please note that the "translation" shown to the right does not interpret Unicode, so the 2-byte sequence of 62 01 is displayed as 62 by itself (i.e. lower-case Latin "b") and 01 by itself (i.e. non-printable character; displayed as ".").
0040 xx xx xx 62 01 61 00 77-00 2b 01 20 00 52 00 69 |...b.a.w .+. .R.i|
0050 00 66 00 01 01 18 ?? - |.f....|
Ţ -- 62 01 -- http://unicode-table.com/en/0162/
a -- 61 00 -- http://unicode-table.com/en/0061/
w -- 77 00 -- http://unicode-table.com/en/0077/
ī -- 2B 01 -- http://unicode-table.com/en/012B/
(space) -- 20 00 -- http://unicode-table.com/en/0020/
R -- 52 00 -- http://unicode-table.com/en/0052/
i -- 69 00 -- http://unicode-table.com/en/0069/
f -- 66 00 -- http://unicode-table.com/en/0066/
ā -- 01 01 -- http://unicode-table.com/en/0101/
‘ -- 18 20 -- http://unicode-table.com/en/2018/
As you can see, the last character is really 18 20 (i.e. a byte-swapped 20 18 due to the Little Endian encoding), not 01 18 as it might appear if reading the packet starting at the end. Somehow, the final byte -- hex 20 -- is missing, hence the Unicode data is odd byte size error.
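The effect of byte-wise trimming on UTF-16LE can be sketched in a few lines of Python (an illustration of the suspected behavior, not freetds's actual code):

```python
# Trimming trailing 0x20 *bytes* from a buffer that actually holds UTF-16LE
# text. 'Rifā‘' ends in 18 20 (U+2018, little-endian), so a byte-wise rtrim
# eats the final 0x20 and leaves an odd-length buffer.
data = "Rifā‘".encode("utf-16-le")   # ... 01 01 18 20
trimmed = data.rstrip(b"\x20")       # byte-wise, not character-wise
print(len(data), len(trimmed))       # 10 9  -> odd byte size, as in the error
```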
Now, 20 by itself, or followed by 00, is a space. This would explain why @GordThompson was able to get it working by adding an additional character to the end (the final character was no longer trimmable). This could be further proven by ending with another character that is a U+20xx Code Point. For example, if I am correct about this, then ending with ⁄ -- Fraction Slash U+2044 -- would have the same error, while ending with ⅄ -- Turned Sans-Serif Capital Y U+2144 -- even with the ‘ just before it, should work just fine (@GordThompson was kind enough to prove that ending with ⅄ did work, and that ending with ⁄ resulted in the same error).
If the input file is null (i.e. 00) terminated, then it could simply be the 20 00 ending sequence that does it, in which case ending with a newline might fix it. This can also be proven by testing a file with two lines: line 1 is the existing row from bad.txt, and line 2 is a line that should work. For example:
291054 Ţawī Rifā‘
999999 test row, yo!
If the two-line file shown directly above works, that proves that it is the combination of a U+20xx Code Point and that Code Point being the last character (of the transmission more than of the file) that exposes the bug. BUT, if this two-line file also gets the error, then it proves that having a U+20xx Code Point as the last character of a string field is the issue (and it would be reasonable to assume that this error would happen even if the string field were not the final field of the row, since the null terminator for the transmission has already been ruled out in this case).
It seems like either this is a bug with freetds / freebcp, or perhaps there is a configuration option to not have it attempt trimming trailing spaces, or maybe a way to get it to see this field as being NCHAR instead of NVARCHAR.
UPDATE
Both @GordThompson and the O.P. (@NeilMcGuigan) have tested and confirmed that this issue exists regardless of where the string field is in the file: in the middle of a row, at the end of the row, on the last row, and not on the last row. Hence it is a general issue.
And in fact, I found the source code and it makes sense that the issue would happen since there is no consideration for multi-byte character sets. I will file an Issue on the GitHub repository. The source for the rtrim function is here:
https://github.com/FreeTDS/freetds/blob/master/src/dblib/bcp.c#L2267
Regarding this statement:
The SQL Server uses UTF-16LE (though TDS starts with UCS-2LE and switches over I believe)
From an encoding stand-point, there is really no difference between UCS-2 and UTF-16. The byte sequences are identical. The only difference is in the interpretation of Surrogate Pairs (i.e. Code Points above U+FFFF / 65535). UCS-2 has the Code Points used to construct Surrogate Pairs reserved, but there was no implementation at that time of any Surrogate Pairs. UTF-16 simply added the implementation of the Surrogate Pairs in order to create Supplementary Characters. Hence, SQL Server stores and retrieves UTF-16 LE data without a problem. The only issue is that the built-in functions don't know how to interpret Surrogate Pairs unless the Collation ends with _SC (for Supplementary Characters), and those Collations were introduced in SQL Server 2012.
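A quick Python check of the point about surrogate pairs (U+1F600 is just an arbitrary supplementary character chosen for illustration):

```python
# BMP characters take one UTF-16 code unit (2 bytes); supplementary
# characters (above U+FFFF) take a surrogate pair (4 bytes).
print("Ţ".encode("utf-16-le").hex(" "))           # 62 01
print("\U0001F600".encode("utf-16-le").hex(" "))  # 3d d8 00 de  (D83D DE00, LE)
```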
This might be an encoding issue in the source file.
Since you are using non-standard characters, the source file should probably be Unicode itself. Other encodings use a varying number of bytes (one to three) to encode a single character. E.g. your U+2018 is 0xE2 0x80 0x98 in UTF-8.
Your packet ends with .R.i.f....| while it should contain your ā‘. And the error shows Server '...', Line 1.
Try to find out the encoding of your source file (check big and little endian too) and try to convert the file to a known Unicode format.
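The byte difference between the two encodings is easy to confirm (a quick Python check):

```python
# The same character, U+2018, under the two encodings in question.
print("\u2018".encode("utf-8").hex(" "))      # e2 80 98
print("\u2018".encode("utf-16-le").hex(" "))  # 18 20
```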
This might solve it:
in your /etc/freetds/freetds.conf
add:
client charset = UTF-8
I also found this about the use utf-16 flag:
use utf-16 Instead of using UCS-2 for database wide
character encoding use UTF-16. Newer Windows versions use this
encoding instead of UCS-2. This could result in some issues if clients
assume that a character is always 2 bytes.

Unable to read information from Contact VISA Card using APDU commands

I am using the Telpo TPS300 POS terminal to try and read information from a VISA bank card. The terminal comes with C libraries, so sending commands is a lot easier. However, when I run the SELECT APDU command
(00 A4 04 00) with Lc=0, it returns the following 18-byte hex data:
6F 10 84 08 A0 00 00 00 03 00 00 00 A5 04 9F 65 01 FF.
I read the ISO 7816-4 specification and the EMV specification, and from the look of things my returned data seems to be lacking one of the mandatory tags, tag 88, as specified in EMV Specification 11.3.4.
When I try to SELECT the returned DF name, i.e. the one with tag 84 (A0 00 00 00 03 00 00 00), it returns the same information. All other commands were unsuccessful as well; specifically I tried READ RECORD, VERIFY, GET PROCESSING OPTIONS, and GET CHALLENGE, and they all return the SW 6D 00 (instruction code not supported or invalid). I just want to retrieve user info from the card and perform an offline authentication of the PIN using the VERIFY command.
I have looked around the web but no one seems to have an answer. I have read the ISO 7816-4 standard and the EMV specification again and again on the command and response interactions, but no luck so far because I can't get beyond this step (the SELECT command response).
I am using the Telpo TPS300 POS terminal to try and read information from a VISA bank Card
As you said you tried with a blank card; the information coming from the card here is correct.
When you send a SELECT command like
00 A4 04 00 00
it selects the ISD (Issuer Security Domain) and returns the ISD AID, i.e. A0 00 00 00 03 00 00 00, together with tag 9F 65, which means "Maximum length of data field in command message".
Recv - 6F 10 84 08 A0 00 00 00 03 00 00 00 A5 04 9F 65 01 FF
What you receive shows the AID of the ISD and the value of tag 9F65. That seems correct.
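The response can be checked mechanically; a minimal Python BER-TLV walker (single-byte lengths and two-byte tags are enough for this FCI, so this is only a sketch) confirms there is no tag 88 in it:

```python
def parse_tlv(data):
    """Minimal BER-TLV walker: single-byte lengths and at most two-byte
    tags, which is all this FCI needs. Returns flat (tag, value) pairs."""
    i, out = 0, []
    while i < len(data):
        first = data[i]; tag = first; i += 1
        if first & 0x1F == 0x1F:              # multi-byte tag, e.g. 9F 65
            tag = (tag << 8) | data[i]; i += 1
        length = data[i]; i += 1
        value = data[i:i + length]; i += length
        out.append((tag, value))
        if first & 0x20:                       # constructed: recurse
            out.extend(parse_tlv(value))
    return out

fci = bytes.fromhex("6F10" "8408A000000003000000" "A5049F6501FF")
for tag, value in parse_tlv(fci):
    print(f"{tag:X}: {value.hex(' ').upper()}")
```

The tags present are 6F, 84, A5 and 9F65 only, matching the answer above.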
my returned data seems to be lacking one of the mandatory tag 88 as specified in EMV Specification 11.3.4
Tag 88 (SFI of the Directory Elementary File) comes from the card when you select the PSE directory using the command
00 A4 04 00 0E 315041592E5359532E4444463031 (SELECT PSE command)
It will give you tag 88 if the PSE is installed on the card.
I tried READ RECORD, VERIFY, GET PROCESSING OPTIONS, GET CHALLENGE and they all return the SW 6D 00 (Instruction code not supported or invalid).
To read an EMV card, an EMV application must be installed and personalized on the card; only then can you get information from it using the proper sequence of commands. Try: how to read emv card
It gives a basic idea of reading an EMV card with a sequence of commands.
Hope it helps.

XBee command to transmit or receive Dn status

I have two XBee chips - one of them is connected to a relay switch, the other one - to my computer via USB cable.
I can configure the locally connected XBee to send its D0 value to the remote XBee, so that when I toggle the D0 line of the local XBee the remote relay switch toggles as well.
What I want to do is be able to send a command to the local XBee over its serial connection and have the local XBee send a command to the remote XBee that would toggle the relay switch without having to physically interact with the D0 line on my local XBee.
The XBees are S1, so they don't (seem to?) support the ATIO command; at least my tests didn't show it working. I also tried using ATAP 1 with API command 83 as shown here, but that didn't work.
The hardware setup works - attaching a button to D0 transmits its status to the remote XBee, so how do I get the same to happen with software alone?
You need to send a "Remote AT Command" frame, for parameter ATD0, as described in this page on Digi's website.
Although that page is for the Series 2 radio modules, if you look at the documentation for the Series 1, you can find the frame format for a remote AT command.
And, if you're going to use C to send the command, this Open Source, portable, ANSI C XBee Host Library includes a function process_command_remote() in samples/common/_atinter.c to send a remote AT command.
Finally figured it out, thanks for steering me in the right direction @tomlogic
The problem was that Digi's website doesn't tell you to set IA to 0xFFFF (allow all source addresses to change pin state); by default it is 0xFFFF FFFF FFFF FFFF (remote pin changes disabled)
Found a better tip on this site.
Here are all the settings that differ from the defaults once I got it to work.
Transmitter:
MY=7
AP=1 (API enabled)
D0=3 (Pin 0 Input)
IC=FF (Change detect all)
Receiver:
MY=2
D0=5 (Pin 0 High Output)
IU=0 (UART IO Disable)
IA=0xFFFF (Allow all to change pins)
The commands I used:
7E 00 10 17 01 00 13 A2 00 AA BB CC DD FF FE 02 49 4F 00 8D - Send remote ATIO 0
7E 00 10 17 01 00 13 A2 00 AA BB CC DD FF FE 02 49 4F 01 8C - Send remote ATIO 1
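For anyone composing these frames by hand, the trailing checksum byte can be verified with a short Python sketch (the XBee API checksum is 0xFF minus the low byte of the sum of the frame-data bytes):

```python
def xbee_checksum(frame_data):
    """XBee API checksum: 0xFF minus the low byte of the sum of everything
    between the length field and the checksum itself."""
    return 0xFF - (sum(frame_data) & 0xFF)

# Frame data of the first command above (between length 00 10 and checksum 8D).
frame = bytes.fromhex("17 01 00 13 A2 00 AA BB CC DD FF FE 02 49 4F 00")
print(hex(xbee_checksum(frame)))  # 0x8d
```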

Cannot connect to SQL Server database using pymssql but can connect using underlying freetds tsql

I have no idea why I am getting this error and cannot find any solutions for it. I can connect to a SQL Server database using freetds tsql but I keep getting an error when connecting using pymssql.connect.
The specific error is:
pymssql.OperationalError: (18456, "Login failed for user 'xxx'.DB-Lib error message 18456, severity 14:\nGeneral SQL Server
error: Check messages from the SQL Server\nDB-Lib error message 20002,
severity 9:\nAdaptive Server connection failed\n")
I have the configuration set for freetds as:
[custom_config]
host = myhost
port = 1433
tds version = 7.0
encryption = request
dump file = /tmp/freetds.log
running:
tsql -S custom_config -U tsmv -P xxx
returns:
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
1>
which allows me to query the database.
However, running:
python
>> import pymssql
>> pymssql.connect(server='custom_config', user='user', password='xxx', database='database')
raises the above error.
I am using Linux CentOS, Python 2.6.6, and freetds 0.92 dev (I have tried other versions, compiling with tdsver=7.0).
The freetds log is:
log.c:196:Starting log file for FreeTDS 0.92
on 2012-04-12 10:39:15 with debug flags 0x4fff.
iconv.c:330:tds_iconv_open(0x1391b70, ISO-8859-1)
iconv.c:187:local name for ISO-8859-1 is ISO-8859-1
iconv.c:187:local name for UTF-8 is UTF-8
iconv.c:187:local name for UCS-2LE is UCS-2LE
iconv.c:187:local name for UCS-2BE is UCS-2BE
iconv.c:349:setting up conversions for client charset "ISO-8859-1"
iconv.c:351:preparing iconv for "ISO-8859-1" <-> "UCS-2LE" conversion
iconv.c:391:preparing iconv for "ISO-8859-1" <-> "UCS-2LE" conversion
iconv.c:394:tds_iconv_open: done
net.c:205:Connecting to xx.x.x.xxx port 1433 (TDS version 7.1)
net.c:270:tds_open_socket: connect(2) returned "Operation now in progress"
net.c:310:tds_open_socket() succeeded
util.c:156:Changed query state from DEAD to IDLE
net.c:741:Sending packet
0000 12 01 00 34 00 00 00 00-00 00 15 00 06 01 00 1b |...4.... ........|
0010 00 01 02 00 1c 00 0c 03-00 28 00 04 ff 08 00 01 |........ .(......|
0020 55 00 00 02 4d 53 53 51-4c 53 65 72 76 65 72 00 |U...MSSQ LServer.|
0030 c7 39 00 00 - |.9..|
net.c:555:Received header
0000 04 01 00 25 00 00 01 00- |...%....|
net.c:609:Received packet
0000 04 01 00 25 00 00 01 00-00 00 15 00 06 01 00 1b |...%.... ........|
0010 00 01 02 00 1c 00 01 03-00 1d 00 00 ff 0a 00 0f |........ ........|
0020 a0 00 00 02 00 - |.....|
login.c:1057:detected flag 2
login.c:782:quietly sending TDS 7+ login packet
token.c:328:tds_process_login_tokens()
net.c:555:Received header
0000 04 01 00 72 00 51 01 00- |...r.Q..|
net.c:609:Received packet
0000 04 01 00 72 00 51 01 00-aa 5e 00 18 48 00 00 01 |...r.Q.. .^..H...|
0010 0e 1d 00 4c 00 6f 00 67-00 69 00 6e 00 20 00 66 |...L.o.g .i.n. .f|
0020 00 61 00 69 00 6c 00 65-00 64 00 20 00 66 00 6f |.a.i.l.e .d. .f.o|
0030 00 72 00 20 00 75 00 73-00 65 00 72 00 20 00 27 |.r. .u.s .e.r. .'|
0040 00 74 00 73 00 6d 00 76-00 27 00 2e 00 0c 4d 00 |.t.s.m.v .'....M.|
0050 43 00 53 00 2d 00 44 00-41 00 54 00 41 00 42 00 |C.S.-.D. A.T.A.B.|
0060 41 00 53 00 45 00 00 01-00 fd 02 00 00 00 00 00 |A.S.E... ........|
0070 00 00 - |..|
token.c:337:looking for login token, got aa(ERROR)
token.c:122:tds_process_default_tokens() marker is aa(ERROR)
token.c:2588:tds_process_msg() reading message 18456 from server
token.c:2661:tds_process_msg() calling client msg handler
dbutil.c:85:_dblib_handle_info_message(0x14e2e30, 0x1391b70, 0x7fff8b047e40)
dbutil.c:86:msgno 18456: "Login failed for user 'xxx'."
token.c:2674:tds_process_msg() returning TDS_SUCCEED
token.c:337:looking for login token, got fd(DONE)
token.c:122:tds_process_default_tokens() marker is fd(DONE)
token.c:2339:tds_process_end: more_results = 0
was_cancelled = 0
error = 1
done_count_valid = 0
token.c:2355:tds_process_end() state set to TDS_IDLE
token.c:2370: rows_affected = 0
token.c:438:tds_process_login_tokens() returning TDS_FAIL
login.c:466:login packet accepted
util.c:156:Changed query state from IDLE to DEAD
util.c:331:tdserror(0x14e2e30, 0x1391b70, 20002, 0)
dblib.c:7929:dbperror(0x1383c70, 20002, 0)
dblib.c:7981:20002: "Adaptive Server connection failed"
dblib.c:8002:"Adaptive Server connection failed", client returns 2 (INT_CANCEL)
util.c:361:tdserror: client library returned TDS_INT_CANCEL(2)
util.c:384:tdserror: returning TDS_INT_CANCEL(2)
dblib.c:1443:dbclose(0x1383c70)
dblib.c:258:dblib_del_connection(0x7fa462faf540, 0x1391b70)
mem.c:615:tds_free_all_results()
dblib.c:305:dblib_release_tds_ctx(1)
dblib.c:5882:dbfreebuf(0x1383c70)
dblib.c:739:dbloginfree(0x1533a40)
I am completely lost as to why this is not working. Any help would be much appreciated.
The "Adaptive Server connection failed" seems to be a fairly generic message, but here are some things to try.
This mailing list thread (http://lists.ibiblio.org/pipermail/freetds/2010q3/026060.html) says that using the incorrect TDS protocol results in an "Adaptive Server connection failed" message. That doesn't seem to be the case in chewynougat's log, but perhaps it helps others.
This FAQ gives a lot of steps to try:
https://github.com/pymssql/pymssql/blob/87f4383ec153962b7ca7e63a05042d3f09005178/docs/faq.rst
One is attempting to test the TDS connection via tsql -H, which bypasses reading from the conf and only uses the passed-in values. Given that the conf above holds both the port and the protocol version, it might be worthwhile to check that, along with tsql -C, to see if any adjustments are needed.
Also, at the bottom of the FAQ, it states that
real "Login incorrect" messages has code=18456 and severity=14
That is the error message being sent, so perhaps try Login Auditing (http://msdn.microsoft.com/en-us/library/ms175850.aspx) to see if pymssql is passing your credentials in properly.
That same section talks about different character sets messing up mssql.connect, so perhaps also try a basic password (i.e., ASCII 65-90) to ensure that nothing is lost in translation. It looks like Aki works with Japanese, so perhaps this is a cause as well.
I faced the same problem when I installed via pip install pymssql, because this installed a pre-built OS-specific binary wheel which does not support encryption.
This happened even though I had installed a specific version of freetds that I had expected would be used. Installing instead with pip install --no-binary pymssql pymssql gave an installation that works.
If you want to encrypt your database connections you will need to first build/install freetds, and then install pymssql as described here.
In this event I highly recommend ensuring your freetds.conf file specifies 'require' as opposed to 'request', to avoid silently falling back to unencrypted traffic. Using tcpdump -A and grepping for SQL keywords can help determine whether traffic is really encrypted.
I faced the same problem. Fortunately, I found out what was wrong: I had two different versions of FreeTDS on my machine. I had installed one of them (v0.91) by:
sudo apt-get install freetds-dev
Later I found out it was not the latest version, so I downloaded the tar file of FreeTDS from freetds.org. When I ran tsql -C, it showed the correct path and I edited that freetds.conf correctly. After changing the environment variables (http://www.freetds.org/userguide/envvar.htm), I could connect to the database. However, each time I tried to connect via pymssql I got the error.
Finally, I looked at the log file and figured out that Python was using the old version (v0.91) while my latest one was version 0.95.
>>> import os
>>> os.environ['TDSDUMP'] = 'stdout'
>>>
>>> import pymssql
>>> conn = pymssql.connect(server="sqlserverhost")
So, I deleted the version 0.91 with:
sudo apt-get purge freetds-common
and pymssql connected to the right version with correct configuration.
It may help you as well.
Try following the new notes in the pymssql documentation: Azure needs special care with the user part. It's weird, but it worked. MS SQL data warehouses make working on Linux/Mac difficult..
http://pymssql.org/en/latest/azure.html
IMPORTANT: Do not use username@server.database.windows.net for the user parameter of the relevant connect() call! You must use the shorter username@server form instead!
