What does creating a connection between an application and a database mean?

When we say we have created a connection between an application and a database (one that can be stored in a connection pool), what does a "connection" really mean here?
Does it have anything to do with establishing a TCP/TLS connection?
Does each connection load the database schema?
What happens to a connection (one already held in the application's connection pool) when the database schema changes while a transaction is active?

A "connection" is essentially the details of a socket, plus extra state (username, password, etc.). Each connection has its own socket.
For example:
Connection 1:
Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]
Connection 2:
Socket[addr=localhost/127.0.0.1,port=1030,localport=51246]
I have created two connections in a single JVM process to demonstrate how the server knows which socket a reply should be sent on. A socket, in UNIX terms, is a special file used for inter-process communication:
srwxr-xr-x. 1 root root 0 Mar 3 19:30 /tmp/somesocket
When a socket is created (i.e., when this special socket file is created), the operating system creates a file descriptor that points to that file. The server distinguishes sockets by the following 5-tuple:
{SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}
PROTOCOL: I have used PostgreSQL as the example; the socket connection in the PostgreSQL driver goes through SocksSocketImpl, a SOCKS-capable (RFC 1928) implementation of a TCP socket.
Coming back to the two connections I created: if you look closely, the localport differs between the two connections, so the server knows exactly where each reply has to go.
There are limits on the number of files (or file descriptors) a process can have open, so it's recommended not to leave connections dangling (a so-called connection leak).
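The 5-tuple point can be sketched with plain sockets, no database involved. This is a minimal, driver-free sketch: a local throwaway TCP server stands in for the database, and the two clients stand in for two pooled connections. The server sees the same destination for both, but two different source ports, which is all it needs to route replies.

```python
import socket
import threading

# Local stand-in for a database server: port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(2)
host, port = server.getsockname()

peers = []

def accept_two():
    for _ in range(2):
        conn, addr = server.accept()   # addr is (client-ip, client-port)
        peers.append(addr)
        conn.close()

t = threading.Thread(target=accept_two)
t.start()

# Two "connections" from the same process, same destination (host, port):
c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect((host, port))
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2.connect((host, port))
t.join()

# Same destination for both, but the local (source) ports differ,
# so the server can tell the two connections apart.
assert peers[0][1] != peers[1][1]

c1.close()
c2.close()
server.close()
```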
Does it load the database schema with every connection?
Answer: No, it's the ResultSet that takes care of it.
What happens to a connection when the database schema changes?
Answer: The connection and the database schema are two different things. A connection only defines how to communicate with another process. The schema is a contract between the application and the database; the application might throw errors because the contract is broken, or it may simply not notice.
If you are interested in digging further, add a breakpoint on a connection object; below is what it looks like (note the FileDescriptor):
connection = {Jdbc4Connection#777}
args = {String[0]#776}
connection = {Jdbc4Connection#777}
_clientInfo = null
rsHoldability = 2
savepointId = 0
logger = {Logger#778}
creatingURL = "dbc:postgresql://localhost:1030/postgres"
value = {char[40]#795}
hash = 0
openStackTrace = null
protoConnection = {ProtocolConnectionImpl#780}
serverVersion = "10.7"
cancelPid = 19672
cancelKey = 1633313435
standardConformingStrings = true
transactionState = 0
warnings = null
closed = false
notifications = {ArrayList#796} size = 0
pgStream = {PGStream#797}
host = "localhost"
port = 1030
_int4buf = {byte[4]#802}
_int2buf = {byte[2]#803}
connection = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
created = true
bound = true
connected = true
closed = false
closeLock = {Object#811}
shutIn = false
shutOut = false
impl = {SocksSocketImpl#812} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
server = null
serverPort = 1080
external_address = null
useV4 = false
cmdsock = null
cmdIn = null
cmdOut = null
applicationSetProxy = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
exclusiveBind = true
isReuseAddress = false
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
created = true
bound = true
connected = true
closed = false
closeLock = {Object#811}
shutIn = false
shutOut = false
impl = {SocksSocketImpl#812} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
server = null
serverPort = 1080
external_address = null
useV4 = false
cmdsock = null
cmdIn = null
cmdOut = null
applicationSetProxy = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = null
socketOutputStream = null
fdUseCount = 0
fdLock = {Object#815}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#816}
stream = false
socket = null
serverSocket = null
fd = {FileDescriptor#817}
address = null
port = 0
localport = 0
oldImpl = false
closing = false
fd = {FileDescriptor#817}
fd = 1260
handle = -1
parent = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
fd = 1260
handle = -1
parent = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
exclusiveBind = true
isReuseAddress = false
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = {SocketInputStream#819}
socketOutputStream = {SocketOutputStream#820}
fdUseCount = 0
fdLock = {Object#821}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#822}
stream = true
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
serverSocket = null
fd = {FileDescriptor#817}
address = {Inet4Address#823} "localhost/127.0.0.1"
port = 1030
localport = 51099
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
path = null
channel = null
closeLock = {Object#826}
closed = false
otherParents = {ArrayList#833} size = 2
closed = false
path = null
channel = null
closeLock = {Object#826}
closed = false
otherParents = {ArrayList#833} size = 2
closed = false
path = null
channel = null
closeLock = {Object#826}
closed = false
socketOutputStream = {SocketOutputStream#820}
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = {byte[1]#843}
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
append = false
channel = null
path = null
closeLock = {Object#844}
closed = false
fdUseCount = 0
fdLock = {Object#821}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#822}
stream = true
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
serverSocket = null
fd = {FileDescriptor#817}
address = {Inet4Address#823} "localhost/127.0.0.1"
port = 1030
localport = 51099
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = null
socketOutputStream = null
fdUseCount = 0
fdLock = {Object#815}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#816}
stream = false
socket = null
serverSocket = null
fd = {FileDescriptor#817}
address = null
port = 0
localport = 0
oldImpl = false
pg_input = {VisibleBufferedInputStream#805}
pg_output = {BufferedOutputStream#806}
streamBuffer = null
encoding = {Encoding#807} "UTF-8"
encodingWriter = {OutputStreamWriter#808}
user = "postgres"
database = "postgres"
executor = {QueryExecutorImpl#800}
logger = {Logger#778}
compatible = "9.0"
dbVersionNumber = "10.7"
commitQuery = {SimpleQuery#783} "COMMIT"
rollbackQuery = {SimpleQuery#784} "ROLLBACK"
_typeCache = {TypeInfoCache#785}
prepareThreshold = 5
autoCommit = true
readOnly = false
bindStringAsVarchar = true
firstWarning = null
timestampUtils = {TimestampUtils#786}
typemap = null
fastpath = null
largeobject = null
metadata = null
copyManager = null

Here, the connection you are talking about is the channel the application opens in order to read or modify the database and its contents.
For example, take a PHP page (server-side code that handles website requests) or an HTML page where you log in, say https://example.com/login.php (PHP) or https://example.com/login.html (HTML), and the page needs to access the users database to check whether the credentials you entered (e.g. username "demoUser" and password "password*1234") exist as a row in a specific table. A database can contain many tables, each with many rows. A simple example database with a single table called Users:
username | password | date_created // Table columns
"demoUser" | "password" | "23-03-2019" // Example showed above
"user1213" | "passw0rd" | "04-02-2019" //Second user example
If the application needs to verify that a value exists in this database, the server opens the database (often simply a file, e.g. with a .db extension) and reads its rows to find the values.
To do this, the code in login.php invokes the server that runs the file; the server opens the database, takes the query (what the code asked it to check in the database), and executes it against that file. The "connection" here is the channel through which that query is sent and its result read back.

To put it simply: a "database connection" is a link between your application process and the database's server process.
Client side:
When you create a connection, your application stores information such as: what the database address is, which socket is used for the connection, which server process is responsible for processing your requests, and so on. This information depends on the connection driver implementation and differs from database to database.
Server side:
When a request from a client application arrives, the database performs authentication and authorization of the client and creates a new process or thread that is responsible for serving it. The implementation, and the data loaded by this server process, are also vendor-dependent and differ from database to database.
This process of 'preparing' the database to serve a new client takes a significant amount of time, and that's where connection pools help.
Connection pool:
A connection pool is used to reduce the need for opening new connections and wasting time on authentication, authorization, creating a server process, and so on. It allows already-established connections to be reused.
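The reuse idea can be shown with a toy sketch. This is not a real pool implementation, and `fake_connect` is a hypothetical stand-in for a driver's expensive connect call; the point is only that the expensive step runs once per pooled connection, not once per request.

```python
import queue

class ConnectionPool:
    """Toy pool: pay the connection cost up front, then reuse."""
    def __init__(self, create_conn, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(create_conn())   # expensive step, done once each

    def acquire(self):
        return self._idle.get()             # blocks if every conn is in use

    def release(self, conn):
        self._idle.put(conn)                # return to the pool, don't close

# Hypothetical factory standing in for a real driver's connect():
counter = {"opened": 0}
def fake_connect():
    counter["opened"] += 1
    return object()

pool = ConnectionPool(fake_connect, size=2)
for _ in range(10):                         # 10 requests, only 2 opens
    conn = pool.acquire()
    pool.release(conn)
assert counter["opened"] == 2
```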
What happens to a connection (one already held in the application connection pool) when the database schema changes while a transaction is active?
First of all, a database does not know about any connection pools; to the database, pooling is a client-side feature. What happens also depends on the particular database and its implementation. Databases usually have a locking mechanism to prevent objects from being modified while they are still in use, and vice versa.

Related

Not able to connect MSSQL server from Python using pyodbc (while jaydebeapi works fine)

I am trying to connect to an MSSQL server using pyodbc.
connStr = "DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={0};UID={1}/{2};PWD={3};Trusted_Connection=no".format("host,port", "mydomain_name", "myuser", "mypassword")
pyodbc.connect(connStr)
The error trace:
[ODBC][62095][1669186101.493930][__handles.c][499]
Exit:[SQL_SUCCESS]
Environment = 0x1a84840
[ODBC][62095][1669186101.494078][SQLSetEnvAttr.c][189]
Entry:
Environment = 0x1a84840
Attribute = SQL_ATTR_ODBC_VERSION
Value = 0x3
StrLen = 4
[ODBC][62095][1669186101.494173][SQLSetEnvAttr.c][381]
Exit:[SQL_SUCCESS]
[ODBC][62095][1669186101.494272][SQLAllocHandle.c][395]
Entry:
Handle Type = 2
Input Handle = 0x1a84840
UNICODE Using encoding ASCII 'UTF-8' and UNICODE 'UCS-2LE'
[ODBC][62095][1669186101.494503][SQLAllocHandle.c][531]
Exit:[SQL_SUCCESS]
Output Handle = 0x1a9a7b0
[ODBC][62095][1669186101.495647][SQLDriverConnectW.c][298]
Entry:
Connection = 0x1a9a7b0
Window Hdl = (nil)
Str In = [DRIVER={ODBC Driver 17 for SQL Server};SERVER=host,port;UID=mydomain_name/myuser;PWD=mypasswordlength = 155 (SQL_NTS)]
Str Out = (nil)
Str Out Max = 0
Str Out Ptr = (nil)
Completion = 0
[ODBC][62095][1669186101.547891][__handles.c][499]
Exit:[SQL_SUCCESS]
Environment = 0x1b28e80
[ODBC][62095][1669186101.548070][SQLGetEnvAttr.c][157]
Entry:
Environment = 0x1b28e80
Attribute = 65002
Value = 0x7ffcf258c390
Buffer Len = 128
StrLen = 0x7ffcf258c32c
[ODBC][62095][1669186101.548172][SQLGetEnvAttr.c][273]
Exit:[SQL_SUCCESS]
[ODBC][62095][1669186101.548301][SQLFreeHandle.c][220]
Entry:
Handle Type = 1
Input Handle = 0x1b28e80
[ODBC][62095][1669186101.548574][SQLDriverConnectW.c][869]
Exit:[SQL_ERROR]
[ODBC][62095][1669186101.548693][SQLDriverConnect.c][751]
Entry:
Connection = 0x1a9a7b0
Window Hdl = (nil)
Str In = [DRIVER={ODBC Driver 17 for SQL Server};SERVER=host,port;UID=mydomain_name/myuser;PWD=mypassword][length = 155 (SQL_NTS)]
Str Out = 0x7ffcf258ab20
Str Out Max = 2048
Str Out Ptr = (nil)
Completion = 0
DIAG [28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'mydomain_name/myuser'.
[ODBC][62095][1669186101.594689][SQLDriverConnect.c][1717]
Exit:[SQL_ERROR]
[ODBC][62095][1669186101.594808][SQLGetDiagRecW.c][535]
Entry:
Connection = 0x1a9a7b0
Rec Number = 1
SQLState = 0x7ffcf258f316
Native = 0x7ffcf258f304
Message Text = 0x1ad89c0
Buffer Length = 1023
Text Len Ptr = 0x7ffcf258f302
[ODBC][62095][1669186101.594923][SQLGetDiagRecW.c][596]
Exit:[SQL_SUCCESS]
SQLState = [28000]
Native = 0x7ffcf258f304 -> 18456 (32 bits)
Message Text = [[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'mydomain_name/myuser'.]
[ODBC][62095][1669186101.595099][SQLFreeHandle.c][290]
Entry:
Handle Type = 2
Input Handle = 0x1a9a7b0
[ODBC][62095][1669186101.595192][SQLFreeHandle.c][339]
Exit:[SQL_SUCCESS]
I have tried many different options:
A single / between domain and user, as well as a single \, a double //, and a double \\.
Trusted_Connection=no and Trusted_Connection=yes
Authentication=ActiveDirectoryPassword and Authentication=ActiveDirectoryIntegrated
uid=user@domain
I have also tried with Jaydebeapi.
import sys
import jaydebeapi
# jTDS Driver.
driver_name = "net.sourceforge.jtds.jdbc.Driver"
# jTDS Connection string.
connection_url = "jdbc:jtds:sqlserver://host:port;ssl=require;domain=domain_name;useNTLMv2=true;databaseName=db_name"
user=<username>
password=<pwd>
connection_properties = {"user": user,"password": password}
# Path to jTDS Jar
jar_path = "path_to_jar/jtds-1.3.1.jar"
# Establish connection.
connection = jaydebeapi.connect(driver_name, connection_url, connection_properties, jar_path)
jaydebeapi works fine, and I am able to connect to MSSQL through it and fetch data.
The difference I can see is that with jaydebeapi I pass the domain name as a separate parameter, while pyodbc has no such parameter. I have tried many different ways of passing the domain name (as mentioned above), but none works.
I get a login-failed error in all of the above cases; one such trace is shown above.
If anyone has insight into how to resolve this and make pyodbc work, please answer this question.

Import data from MS SQL Server to HBase with Flume

I'm really new to Flume. I prefer Flume over Sqoop because, in my case, data is continuously being imported into MS SQL Server, so I think Flume is the better choice since it can transfer data in real time.
I followed some online examples and then edited my own Flume config file, which describes the source, channel, and sink. However, Flume doesn't seem to work: no data is transferred to HBase.
mssql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:sqlserver://xx.xx.xx.xx:1433;DatabaseName=xxxx
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare the sql server hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.SQLServerDialect
agent1.sources.src1.hibernate.connection.driver_class = com.microsoft.sqlserver.jdbc.SQLServerDriver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = PK, name, machine, time
#agent1.sources.src1.start.from=0
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
agent1.sources.src1.batch.size = 1000
agent1.sources.src1.max.rows = 1000
agent1.sources.src1.delimiter.entry = |
# set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = yyyy
agent1.sinks.sk1.columnFamily = yyyy
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = PK, name, machine, time
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
But I used a similar config file to transfer data from MySQL to HBase, and luckily it worked.
mysql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:mysql://xxxx:3306/userdb
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare mysql hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
agent1.sources.src1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = id
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
#agent1.sources.src1.interceptors=i1
#agent1.sources.src1.interceptors.i1.type=search_replace
#agent1.sources.src1.interceptors.i1.searchPattern="
#agent1.sources.src1.interceptors.i1.replaceString=,
# Set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = user_test_2
agent1.sinks.sk1.columnFamily = user_hobby
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = id,name,age,hobby
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
Does anyone know if there is something wrong in the config file? Thanks.

Python3 TypeError: sequence item 0: expected a bytes-like object, int found

I'm trying to send an array over TCP from a server-like script to a client-like one. The array has variable size, so the data is sent in packets and then joined together at the client.
The data I'm trying to send is from the MNIST hand-written digits dataset for Deep Learning. The server-side code is:
tcp = '127.0.0.1'
port = 1234
buffer_size = 4096
(X_train, y_train), (X_test, y_test) = mnist.load_data()
test_data = (X_test, y_test)
# Client-side Deep Learning stuff
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((tcp, port))
x = pickle.dumps(test_data)
s.sendall(x)
s.close()
The client-side script loads a Neural Network that uses the test data to predict classes. The script for listening to said data is:
tcp = '127.0.0.1'
port = 1234
buffer_size = 4096
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((tcp, port))
print ('Listening...')
s.listen(1)
conn, addr = s.accept()
data_arr = []
while True:
    data_pack = conn.recv(buffer_size)
    if not data: break
    data_pack += data
my_pickle = b"".join(data_pack)
test_data = pickle.loads(my_pickle)
print ("Received: " + test_data)
conn.close()
# Irrelevant Deep Learning stuff...
The server sends the data without a hitch, but the client crashes when trying to join the received packets (my_pickle = ...) with the following error:
TypeError: sequence item 0: expected a bytes-like object, int found
How should I format the join in order to recreate the data sent and use it for the rest of the script?
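For reference, `b"".join(data_pack)` fails because `data_pack` is a single bytes object, and iterating a bytes object yields ints, not one-byte chunks. Collecting each `recv()` chunk into a list first makes the join valid; this minimal sketch simulates the chunked transfer without sockets:

```python
import pickle

payload = pickle.dumps([1, 2, 3])
buffer_size = 4

# Simulate conn.recv(buffer_size): collect chunks in a LIST of bytes objects.
chunks = []
for i in range(0, len(payload), buffer_size):
    chunks.append(payload[i:i + buffer_size])

# b"".join() wants an iterable of bytes objects. Joining a list of chunks
# works; joining one bytes object iterates its ints, hence the TypeError.
data = b"".join(chunks)
assert pickle.loads(data) == [1, 2, 3]
```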
I ended up using both pickle and ZeroMQ to handle the communication protocol. An advantage of this method is that I can send more than one data package.
On the client side:
import pickle
import zmq

ip = '127.0.0.1'
port = '1234'
# ZeroMQ context
context = zmq.Context()
# Setting up protocol (client)
sock = context.socket(zmq.REQ)
sock.bind('tcp://'+ip+':'+port)
print('Waiting for connection at tcp://'+ip+':'+port+'...')
sock.send(pickle.dumps(X_send))
X_answer = sock.recv()
sock.send(pickle.dumps(y_send))
print('Data sent. Waiting for classification...')
y_answer = sock.recv()
print('Done.')
And on the server side:
import pickle
import zmq

# ZeroMQ Context
context = zmq.Context()
# Setting up protocol (server)
sock = context.socket(zmq.REP)
ip = '127.0.0.1'
port = '1234'
sock.connect('tcp://'+ip+':'+port)
print('Listening to tcp://'+ip+':'+port+'...')
X_message = sock.recv()
X_test = pickle.loads(X_message)
sock.send(pickle.dumps(X_message))
y_message = sock.recv()
y_test = pickle.loads(y_message)
print('Data received. Starting classification...')
# Classification process
sock.send(pickle.dumps(y_message))
print('Done.')

tac_plus Active Directory config

I seem to be having an issue with the pro-bono tac_plus configuration.
My switch is giving me the following log message:
May 4 20:58:52 sv5-c1-r104-ae02 Aaa: %AAA-4-EXEC_AUTHZ_FAILED: User jdambly failed authorization to start a shell
If I look at the tac_plus logs, it looks like my group mapping is not configured correctly. Here is the log:
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: Start authorization request
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: user 'jdambly' found
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: not found: svcname=shell@world protocol=
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: not found: svcname=shell protocol=
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: svcname=shell protocol= not found, default is <unknown>
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: Writing AUTHOR/FAIL size=18
here is my config
id = tac_plus {
debug = PACKET AUTHEN AUTHOR MAVIS
access log = /var/log/tac_plus/access.log
accounting log = /var/log/tac_plus/acct.log
authorization log = /var/log/tac_plus/auth.log
mavis module = external {
setenv LDAP_SERVER_TYPE = "microsoft"
#setenv LDAP_HOSTS = "ldaps://xxxxxx:3268"
setenv LDAP_HOSTS = "xxxxxx:3268"
setenv LDAP_SCOPE = sub
setenv LDAP_BASE = "dc=nskope,dc=net"
setenv LDAP_FILTER = "(&(objectclass=user)(sAMAccountName=%s))"
setenv LDAP_USER = "xxxx@nskope.net"
setenv LDAP_PASSWD = "xxxxxxxx"
#setenv AD_GROUP_PREFIX = devops
# setenv REQUIRE_AD_GROUP_PREFIX = 1
# setenv USE_TLS = 0
exec = /usr/local/lib/mavis/mavis_tacplus_ldap.pl
}
user backend = mavis
login backend = mavis
pap backend = mavis
skip missing groups = yes
host = world {
address = 0.0.0/0
prompt = "Welcome\n"
key = cisco
}
group = devops {
default service = permit
service = shell {
default command = permit
default attribute = permit
set priv-lvl = 15
}
}
}
I'm trying to map the AD group devops to the group in the config, but I think that's failing, and I don't understand why.
So, long story short, I got this working with the following config.
#!../../../sbin/tac_plus
id = spawnd {
listen = { port = 49 }
spawn = {
instances min = 1
instances max = 10
}
background = no
}
id = tac_plus {
debug = PACKET AUTHEN AUTHOR MAVIS
access log = /var/log/tac_plus/access.log
accounting log = /var/log/tac_plus/acct.log
authorization log = /var/log/tac_plus/auth.log
mavis module = external {
setenv LDAP_SERVER_TYPE = "microsoft"
#setenv LDAP_HOSTS = "ldaps://xxxxxxxxx:3268"
setenv LDAP_HOSTS = "xxxxxxxxx:3268"
#setenv LDAP_SCOPE = sub
setenv LDAP_BASE = "cn=Users,dc=nskope,dc=net"
setenv LDAP_FILTER = "(&(objectclass=user)(sAMAccountName=%s))"
setenv LDAP_USER = "xxxxxxxx"
setenv LDAP_PASSWD = "xxxxxxxx"
#setenv FLAG_FALLTHROUGH=1
setenv UNLIMIT_AD_GROUP_MEMBERSHIP = "1"
#setenv EXPAND_AD_GROUP_MEMBERSHIP=1
#setenv FLAG_USE_MEMBEROF = 1
setenv AD_GROUP_PREFIX = ""
# setenv REQUIRE_AD_GROUP_PREFIX = 1
# setenv USE_TLS = 0
exec = /usr/local/lib/mavis/mavis_tacplus_ldap.pl
}
user backend = mavis
login backend = mavis
pap backend = mavis
skip missing groups = yes
host = world {
address = 0.0.0/0
#prompt = "Welcome\n"
key = cisco
}
group = devops {
default service = permit
service = shell {
default command = permit
default attribute = permit
set priv-lvl = 15
}
}
}
What really did the trick was adding
setenv UNLIMIT_AD_GROUP_MEMBERSHIP = "1"
setenv AD_GROUP_PREFIX = ""
With these settings it does not look for a prefix on all the AD groups. This config allows a direct mapping of the AD group to the group configured in this file; in my case the group is called devops. Also note that I had to use quotes around the 1: without those quotes it does not set UNLIMIT_AD_GROUP_MEMBERSHIP to one, so watch out for that. Hopefully this can help someone else avoid going through all the pain I did ;)

Data encryption issues with Oracle Advanced Security

I have used Oracle Advanced Security to encrypt data during transfer. I have configured SSL with the parameters below and restarted the instance. I am retrieving data from the Java class given below, but I can read the data without decrypting it; the data is not being encrypted.
Environment:
Oracle 11g database
SQLNET.AUTHENTICATION_SERVICES= (BEQ, TCPS, NTS)
SSL_VERSION = 0
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\Users\kcr\Oracle\WALLETS)
)
)
SSL_CIPHER_SUITES= (SSL_RSA_EXPORT_WITH_RC4_40_MD5)
Java class:
try {
    Properties properties = Utils.readProperties("weka/experiment/DatabaseUtils.props");
    // Security.addProvider(new oracle.security.pki.OraclePKIProvider()); // Security syntax
    String url = "jdbc:oracle:thin:@(DESCRIPTION =\n" +
        "    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))\n" +
        "    (CONNECT_DATA =\n" +
        "      (SERVER = DEDICATED)\n" +
        "      (SERVICE_NAME = sal)\n" +
        "    )\n" +
        "  )";
    java.util.Properties props = new java.util.Properties();
    props.setProperty("user", "system");
    props.setProperty("password", "weblogic");
    // props.setProperty("javax.net.ssl.trustStore", "C:\\Users\\kcr\\Oracle\\WALLETS\\ewallet.p12");
    // props.setProperty("oracle.net.ssl_cipher_suites", "SSL_RSA_EXPORT_WITH_RC4_40_MD5");
    // props.setProperty("javax.net.ssl.trustStoreType", "PKCS12");
    // props.setProperty("javax.net.ssl.trustStorePassword", "welcome2");
    DriverManager.registerDriver(new OracleDriver());
    Connection conn = DriverManager.getConnection(url, props);
    /* OracleDataSource ods = new OracleDataSource();
    ods.setUser("system");
    ods.setPassword("weblogic");
    ods.setURL(url);
    Connection conn = ods.getConnection(); */
    Statement stmt = conn.createStatement();
    ResultSet rset = stmt.executeQuery("select * from iris");
    while (rset.next()) {
        for (int i = 1; i <= 5; i++) {
            System.out.print(rset.getString(i));
        }
    }
Are you expecting that your SELECT statement would return encrypted data, and that your System.out.print calls would print encrypted output to the screen? If so, that's not the way Advanced Security works. Advanced Security encrypts the data over the wire, but the data is decrypted in the SQL*Net stack, so your SELECT statement always sees it unencrypted. You would need a SQL*Net trace or some sort of packet sniffer to see the encrypted data flowing over the wire.
You'll find the documentation in "SSL With Oracle JDBC Thin Driver".
In particular, you should probably use PROTOCOL = TCPS instead of PROTOCOL = TCP. I'd also suggest a stronger cipher suite (and avoid the anonymous ones, since with them you don't verify the identity of the remote server).
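For illustration only, the TCPS variant of the connect descriptor from the question might look like the following. Port 2484 is the conventional secure-listener port and is an assumption here, not something taken from the question; the listener must actually be configured for TCPS on that port.

```text
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = localhost)(PORT = 2484))
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = sal)
  )
)
```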