Python3 TypeError: sequence item 0: expected a bytes-like object, int found - arrays

I'm trying to send an array over TCP from a server-like script to a client-like one. The array's size varies, so the data is sent in packets and then joined back together at the client.
The data I'm trying to send comes from the MNIST hand-written digits dataset for deep learning. The server-side code is:
tcp = '127.0.0.1'
port = 1234
buffer_size = 4096
(X_train, y_train), (X_test, y_test) = mnist.load_data()
test_data = (X_test, y_test)
# Client-side Deep Learning stuff
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((tcp, port))
x = pickle.dumps(test_data)
s.sendall(x)
s.close()
The client-side script loads a Neural Network that uses the test data to predict classes. The script for listening to said data is:
tcp = '127.0.0.1'
port = 1234
buffer_size = 4096
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((tcp, port))
print ('Listening...')
s.listen(1)
conn, addr = s.accept()
data_arr = []
while True:
    data_pack = conn.recv(buffer_size)
    if not data_pack:
        break
    data_arr += data_pack
my_pickle = b"".join(data_arr)
test_data = pickle.loads(my_pickle)
print ("Received: " + test_data)
conn.close()
# Irrelevant Deep Learning stuff...
The server sends the data without a hitch, but the client crashes when trying to join the packets received by the client (my_pickle = ...) with the following error:
TypeError: sequence item 0: expected a bytes-like object, int found
How should I format the join in order to recreate the data sent and use it for the rest of the script?
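For reference, the error happens because iterating over a bytes object in Python 3 yields ints, so extending a list with a received chunk (or joining the chunk itself) hands b"".join() integers instead of bytes objects. A minimal sketch of a receive loop that keeps each chunk intact and joins the chunks once the sender closes the connection (same socket setup as above, names chosen for illustration):
chunks = []
while True:
    chunk = conn.recv(buffer_size)
    if not chunk:  # sender closed the connection
        break
    chunks.append(chunk)  # keep whole bytes chunks, not individual ints
test_data = pickle.loads(b"".join(chunks))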

I ended up using both Pickle and ZeroMQ to handle the communication protocol. An advantage of this method is that I can send more than one data package.
On the client side:
import pickle
import zmq

ip = '127.0.0.1'
port = '1234'
# ZeroMQ context
context = zmq.Context()
# Setting up protocol (client)
sock = context.socket(zmq.REQ)
sock.bind('tcp://'+ip+':'+port)
print('Waiting for connection at tcp://'+ip+':'+port+'...')
sock.send(pickle.dumps(X_send))
X_answer = sock.recv()
sock.send(pickle.dumps(y_send))
print('Data sent. Waiting for classification...')
y_answer = sock.recv()
print('Done.')
And on the server side:
import pickle
import zmq

# ZeroMQ Context
context = zmq.Context()
# Setting up protocol (server)
sock = context.socket(zmq.REP)
ip = '127.0.0.1'
port = '1234'
sock.connect('tcp://'+ip+':'+port)
print('Listening to tcp://'+ip+':'+port+'...')
X_message = sock.recv()
X_test = pickle.loads(X_message)
sock.send(pickle.dumps(X_message))
y_message = sock.recv()
y_test = pickle.loads(y_message)
print('Data received. Starting classification...')
# Classification process
sock.send(pickle.dumps(y_message))
print('Done.')
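A note on why ZeroMQ sidesteps the original join problem (my observation, not part of the original answer): ZeroMQ delivers whole messages, so each recv() returns the complete pickled payload and no manual chunk accumulation is needed. A minimal REQ/REP round trip illustrating this, with an arbitrary port and payload:
import pickle
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind('tcp://127.0.0.1:5678')  # port chosen arbitrarily for this sketch
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:5678')

req.send(pickle.dumps(list(range(100000))))  # one large message
received = pickle.loads(rep.recv())          # arrives as a single, complete message
rep.send(b'ok')
assert req.recv() == b'ok'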

Related

What does creating a connection between an application and database mean?

When we say we have created a connection between a database and an application (one that can be stored in a connection pool), what does a "connection" really mean here?
Does it have anything to do with establishing a TCP/TLS connection?
Does it load the database schema with every connection?
What happens to a connection (already loaded in the application's connection pool) when the database schema changes while there is an active transaction going on?
"A Connection" is nothing but details of a Socket, with extra details (like username, password, etc.). Each connection have a different socket connection.
For example:
Connection 1:
Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]
Connection 2:
Socket[addr=localhost/127.0.0.1,port=1030,localport=51246]
I have created two connections in a single JVM process to demonstrate how the server knows which socket the reply is to be sent on. A socket, defined in UNIX terms, is a special file that is used for inter-process communication:
srwxr-xr-x. 1 root root 0 Mar 3 19:30 /tmp/somesocket
When a socket is created (i.e., when this special socket file is created), the operating system creates a file descriptor that points to that file. The server distinguishes the socket by the following attributes:
{SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}
PROTOCOL: I have used Postgres as an example; the socket connection in the Postgres driver is done with SocksSocketImpl, which is a TCP socket implementation (RFC 1928).
Coming back to the two connections I created: if you look closely, the localport of each connection is different, so the server knows exactly where to send the reply.
There is a limit on the number of files (or file descriptors) you can have open in an operating system, so it's recommended not to leave connections dangling (a connection leak).
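To see the same thing outside of JDBC, here is a small Python sketch (my illustration, not from the original answer) that opens two TCP connections to the same server address and prints their distinct local ports:
import socket

# A throwaway local server so the sketch is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))  # port 0 lets the OS pick a free port
server.listen(2)
host, port = server.getsockname()

conn1 = socket.create_connection((host, port))
conn2 = socket.create_connection((host, port))

# Same destination (host, port), but each connection gets its own local port,
# so the server can tell the two sockets apart.
print(conn1.getsockname())  # e.g. ('127.0.0.1', 51099)
print(conn2.getsockname())  # e.g. ('127.0.0.1', 51246)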
Does it load the database schema with every connection?
Answer: No, it's the ResultSet that takes care of it.
What happens to a connection when the database schema changes
Answer: Connection and database schema are two different things. A connection just defines how to communicate with another process. The database schema is a contract between the application and the database; the application might throw errors because the contract is broken, or it may simply ignore the change.
If you are interested in digging further, add a breakpoint on a connection object; below is what it looks like (note the FileDescriptor):
connection = {Jdbc4Connection#777}
args = {String[0]#776}
connection = {Jdbc4Connection#777}
_clientInfo = null
rsHoldability = 2
savepointId = 0
logger = {Logger#778}
creatingURL = "dbc:postgresql://localhost:1030/postgres"
value = {char[40]#795}
hash = 0
openStackTrace = null
protoConnection = {ProtocolConnectionImpl#780}
serverVersion = "10.7"
cancelPid = 19672
cancelKey = 1633313435
standardConformingStrings = true
transactionState = 0
warnings = null
closed = false
notifications = {ArrayList#796} size = 0
pgStream = {PGStream#797}
host = "localhost"
port = 1030
_int4buf = {byte[4]#802}
_int2buf = {byte[2]#803}
connection = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
created = true
bound = true
connected = true
closed = false
closeLock = {Object#811}
shutIn = false
shutOut = false
impl = {SocksSocketImpl#812} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
server = null
serverPort = 1080
external_address = null
useV4 = false
cmdsock = null
cmdIn = null
cmdOut = null
applicationSetProxy = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
exclusiveBind = true
isReuseAddress = false
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
created = true
bound = true
connected = true
closed = false
closeLock = {Object#811}
shutIn = false
shutOut = false
impl = {SocksSocketImpl#812} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
server = null
serverPort = 1080
external_address = null
useV4 = false
cmdsock = null
cmdIn = null
cmdOut = null
applicationSetProxy = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = null
socketOutputStream = null
fdUseCount = 0
fdLock = {Object#815}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#816}
stream = false
socket = null
serverSocket = null
fd = {FileDescriptor#817}
address = null
port = 0
localport = 0
oldImpl = false
closing = false
fd = {FileDescriptor#817}
fd = 1260
handle = -1
parent = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
fd = 1260
handle = -1
parent = {SocketInputStream#819}
eof = false
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
exclusiveBind = true
isReuseAddress = false
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = {SocketInputStream#819}
socketOutputStream = {SocketOutputStream#820}
fdUseCount = 0
fdLock = {Object#821}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#822}
stream = true
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
serverSocket = null
fd = {FileDescriptor#817}
address = {Inet4Address#823} "localhost/127.0.0.1"
port = 1030
localport = 51099
temp = null
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
path = null
channel = null
closeLock = {Object#826}
closed = false
otherParents = {ArrayList#833} size = 2
closed = false
path = null
channel = null
closeLock = {Object#826}
closed = false
otherParents = {ArrayList#833} size = 2
closed = false
path = null
channel = null
closeLock = {Object#826}
closed = false
socketOutputStream = {SocketOutputStream#820}
impl = {DualStackPlainSocketImpl#814} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
temp = {byte[1]#843}
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
closing = false
fd = {FileDescriptor#817}
append = false
channel = null
path = null
closeLock = {Object#844}
closed = false
fdUseCount = 0
fdLock = {Object#821}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#822}
stream = true
socket = {Socket#804} "Socket[addr=localhost/127.0.0.1,port=1030,localport=51099]"
serverSocket = null
fd = {FileDescriptor#817}
address = {Inet4Address#823} "localhost/127.0.0.1"
port = 1030
localport = 51099
timeout = 0
trafficClass = 0
shut_rd = false
shut_wr = false
socketInputStream = null
socketOutputStream = null
fdUseCount = 0
fdLock = {Object#815}
closePending = false
CONNECTION_NOT_RESET = 0
CONNECTION_RESET_PENDING = 1
CONNECTION_RESET = 2
resetState = 0
resetLock = {Object#816}
stream = false
socket = null
serverSocket = null
fd = {FileDescriptor#817}
address = null
port = 0
localport = 0
oldImpl = false
pg_input = {VisibleBufferedInputStream#805}
pg_output = {BufferedOutputStream#806}
streamBuffer = null
encoding = {Encoding#807} "UTF-8"
encodingWriter = {OutputStreamWriter#808}
user = "postgres"
database = "postgres"
executor = {QueryExecutorImpl#800}
logger = {Logger#778}
compatible = "9.0"
dbVersionNumber = "10.7"
commitQuery = {SimpleQuery#783} "COMMIT"
rollbackQuery = {SimpleQuery#784} "ROLLBACK"
_typeCache = {TypeInfoCache#785}
prepareThreshold = 5
autoCommit = true
readOnly = false
bindStringAsVarchar = true
firstWarning = null
timestampUtils = {TimestampUtils#786}
typemap = null
fastpath = null
largeobject = null
metadata = null
copyManager = null
Here, the connection you are talking about is the opening function that the application invokes to open and read/modify/delete the database or its children.
For example, take a PHP file (used by the server to handle website requests, like HTML) or an HTML file, where you log in on a page such as https://example.com/login.php (PHP) or https://example.com/login.html (HTML). The page needs to access the users database to check whether the credentials you entered (e.g., username "demoUser" and password "password*1234") exist as rows in a specific table. The database can contain any number of tables and any number of rows. An example of a simple database with only one table, called Users:
username | password | date_created // Table columns
"demoUser" | "password" | "23-03-2019" // Example shown above
"user1213" | "passw0rd" | "04-02-2019" // Second user example
If the application needs to verify that a value exists in this database, the operating system of the application accesses the database with a simple file read (the file normally ends in .db) and then reads through the rows to find the values.
To do this, the code in the login.php/login.html page invokes the server that runs the file; the server opens the database, takes the query (what the code asks to check in the database), and executes it as if the database were a simple file (e.g., a .db file). The connection here is what carries that query to the database and its result back.
To put it in simple words: a "database connection" is a link between your application process and the database's serving process.
Client side:
When you create a connection, your application stores information such as: what the database address is, which socket is used for the connection, which server process is responsible for processing your requests, and so on. This information depends on the connection driver implementation and differs from database to database.
Server side:
When a request from a client application arrives, the database performs authentication and authorization of the client and creates a new process or thread which is responsible for serving it. The implementation and the data loaded by this server process are also vendor-dependent and differ from database to database.
This process of 'preparing' the database to serve a new client takes a good amount of time, and that's where connection pools come in.
Connection pool:
A connection pool is basically used to reduce the need for opening new connections and wasting time on authentication, authorization, creating a server process, and so on. It allows reusing already established connections.
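To make the reuse idea concrete, here is a minimal sketch of the pooling pattern (my illustration; sqlite3 is used only because it ships with Python, whereas a real pool would wrap network database connections whose setup is far more expensive):
import queue
import sqlite3

class SimplePool:
    def __init__(self, size, db_path):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # Connections are created once, up front.
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def acquire(self):
        return self._pool.get()  # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)     # returned for reuse, not closed

pool = SimplePool(size=2, db_path=':memory:')
conn = pool.acquire()
conn.execute('SELECT 1')
pool.release(conn)               # the next caller reuses this connection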
What happens to a connection (already in the application's connection pool) when the database schema changes and there is an active transaction going on?
First of all, the database does not know about any connection pools; to the database, pooling is a client-side feature. What happens also depends on the particular database and its implementation. Usually databases have a locking mechanism to prevent objects from being modified while they are still in use, and vice versa.
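As a tiny illustration of such locking (a sketch using sqlite3 simply because it ships with Python; other databases use their own, more granular mechanisms):
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')
writer = sqlite3.connect(path, timeout=0)
other = sqlite3.connect(path, timeout=0)
writer.execute('CREATE TABLE t (x)')
writer.commit()

writer.execute('BEGIN EXCLUSIVE')            # writer holds the database lock
try:
    other.execute('INSERT INTO t VALUES (1)')
except sqlite3.OperationalError as err:
    print(err)                               # "database is locked"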

Unable to transcode from audio/l16;rate=48000;channel=1 to one of: audio/x-float-array; rate=16000; channels=1,

I am currently working on SoftBank's robot Pepper, and I am trying to use the Watson speech-to-text service on Pepper's audio buffers, streamed remotely over the WebSocket protocol.
I used the answer to the earlier question "NAO robot remote audio problems" to find a way to access Pepper's audio buffers remotely, and the project https://github.com/ibm-dev/watson-streaming-stt to learn how to use the WebSocket protocol with Watson streaming STT.
However, after I open my WebSocket application and start sending buffers to Watson, after a few sends I receive the error: 'Unable to transcode from audio/l16;rate=48000;channel=1 to one of: audio/x-float-array; rate=16000; channels=1'
Every time I send Pepper's audio buffer to Watson, it is unable to interpret it.
I compared the data I send with the data sent in the Watson streaming STT example (which streams from a microphone with PyAudio instead of Pepper's buffers) and I don't see any difference: both times I am fairly sure I am sending a string containing raw chunks of bytes, which is what Watson asks for in its documentation.
I send chunks of 8192 bytes with a sample rate of 48 kHz, and I can easily convert Pepper's audio buffer to hex, so I don't understand why Watson can't transcode it.
Here is my code:
# -*- coding: utf-8 -*-
#!/usr/bin/env python
import argparse
import base64
import configparser
import json
import threading
import time
from optparse import OptionParser
import naoqi
import numpy as np
import sys
from threading import Thread
import ssl
import websocket
from websocket._abnf import ABNF
CHANNELS = 1
NAO_IP = "172.20.10.12"
class SoundReceiverModule(naoqi.ALModule):
"""
Use this object to get call back from the ALMemory of the naoqi world.
Your callback needs to be a method with two parameter (variable name, value).
"""
def __init__( self, strModuleName, strNaoIp):
try:
naoqi.ALModule.__init__(self, strModuleName );
self.BIND_PYTHON( self.getName(),"callback" );
self.strNaoIp = strNaoIp;
self.outfile = None;
self.aOutfile = [None]*(4-1); # ASSUME max nbr channels = 4
self.FINALS = []
self.RECORD_SECONDS = 20
self.ws_open = False
self.ws_listening = ""
# init data for websocket interfaces
self.headers = {}
self.userpass = "" #userpass and password
self.headers["Authorization"] = "Basic " + base64.b64encode(
self.userpass.encode()).decode()
self.url = ("wss://stream.watsonplatform.net//speech-to-text/api/v1/recognize"
"?model=fr-FR_BroadbandModel")
except BaseException, err:
print( "ERR: abcdk.naoqitools.SoundReceiverModule: loading error: %s" % str(err) );
# __init__ - end
def __del__( self ):
print( "INF: abcdk.SoundReceiverModule.__del__: cleaning everything" );
self.stop();
def start( self ):
audio = naoqi.ALProxy( "ALAudioDevice", self.strNaoIp, 9559 );
self.nNbrChannelFlag = 3; # ALL_Channels: 0, AL::LEFTCHANNEL: 1, AL::RIGHTCHANNEL: 2; AL::FRONTCHANNEL: 3 or AL::REARCHANNEL: 4.
self.nDeinterleave = 0;
self.nSampleRate = 48000;
audio.setClientPreferences( self.getName(), self.nSampleRate, self.nNbrChannelFlag, self.nDeinterleave ); # setting same as default generate a bug !?!
audio.subscribe( self.getName() );
#openning websocket app
self._ws = websocket.WebSocketApp(self.url,
header=self.headers,
on_open = self.on_open,
on_message=self.on_message,
on_error=self.on_error,
on_close=self.on_close)
sslopt={"cert_reqs": ssl.CERT_NONE}
threading.Thread(target=self._ws.run_forever, kwargs = {'sslopt':sslopt}).start()
print( "INF: SoundReceiver: started!" );
def stop( self ):
print( "INF: SoundReceiver: stopping..." );
audio = naoqi.ALProxy( "ALAudioDevice", self.strNaoIp, 9559 );
audio.unsubscribe( self.getName() );
print( "INF: SoundReceiver: stopped!" );
print "INF: WebSocket: closing..."
data = {"action": "stop"}
self._ws.send(json.dumps(data).encode('utf8'))
# ... which we need to wait for before we shutdown the websocket
time.sleep(1)
self._ws.close()
print "INF: WebSocket: closed"
if( self.outfile != None ):
self.outfile.close();
def processRemote( self, nbOfChannels, nbrOfSamplesByChannel, aTimeStamp, buffer ):
"""
This is THE method that receives all the sound buffers from the "ALAudioDevice" module"""
print "receiving buffer"
# self.data_to_send = self.data_to_send + buffer
# print len(self.data_to_send)
#self.data_to_send = ''.join( [ "%02X " % ord( x ) for x in buffer ] ).strip()
self.data_to_send = buffer
#print("buffer type :", type(data))
#print("buffer :", buffer)
#~ print( "process!" );
print( "processRemote: %s, %s, %s, lendata: %s, data0: %s (0x%x), data1: %s (0x%x)" % (nbOfChannels, nbrOfSamplesByChannel, aTimeStamp, len(buffer), buffer[0],ord(buffer[0]),buffer[1],ord(buffer[1])) );
if self.ws_open == True and self.ws_listening == True:
print "sending data"
self._ws.send(self.data_to_send, ABNF.OPCODE_BINARY)
print "data sent"
#print self.data_to_send
aSoundDataInterlaced = np.fromstring( str(buffer), dtype=np.int16 );
#
aSoundData = np.reshape( aSoundDataInterlaced, (nbOfChannels, nbrOfSamplesByChannel), 'F' );
# print "processRemote over"
# processRemote - end
def on_message(self, ws, msg):
print("message")
data = json.loads(msg)
print data
if "state" in data:
if data["state"] == "listening":
self.ws_listening = True
if "results" in data:
if data["results"][0]["final"]:
self.FINALS.append(data)
# This prints out the current fragment that we are working on
print(data['results'][0]['alternatives'][0]['transcript'])
def on_error(self, ws, error):
"""Print any errors."""
print(error)
def on_close(self, ws):
"""Upon close, print the complete and final transcript."""
transcript = "".join([x['results'][0]['alternatives'][0]['transcript']
for x in self.FINALS])
print("transcript :", transcript)
self.ws_open = False
def on_open(self, ws):
"""Triggered as soon a we have an active connection."""
# args = self._ws.args
print "INF: WebSocket: opening"
data = {
"action": "start",
# this means we get to send it straight raw sampling
"content-type": "audio/l16;rate=%d;channel=1" % self.nSampleRate,
"continuous": True,
"interim_results": True,
# "inactivity_timeout": 5, # in order to use this effectively
# you need other tests to handle what happens if the socket is
# closed by the server.
"word_confidence": True,
"timestamps": True,
"max_alternatives": 3
}
# Send the initial control message which sets expectations for the
# binary stream that follows:
self._ws.send(json.dumps(data).encode('utf8'))
# Spin off a dedicated thread where we are going to read and
# stream out audio.
print "INF: WebSocket: opened"
self.ws_open = True
def version( self ):
return "0.6";
def main():
"""initialisation
"""
parser = OptionParser()
parser.add_option("--pip",
help="Parent broker port. The IP address or your robot",
dest="pip")
parser.add_option("--pport",
help="Parent broker port. The port NAOqi is listening to",
dest="pport",
type="int")
parser.set_defaults(
pip=NAO_IP,
pport=9559)
(opts, args_) = parser.parse_args()
pip = opts.pip
pport = opts.pport
# We need this broker to be able to construct
# NAOqi modules and subscribe to other modules
# The broker must stay alive until the program exists
myBroker = naoqi.ALBroker("myBroker",
"0.0.0.0", # listen to anyone
0, # find a free port and use it
pip, # parent broker IP
pport) # parent broker port
"""fin initialisation
"""
global SoundReceiver
SoundReceiver = SoundReceiverModule("SoundReceiver", pip) #thread1
SoundReceiver.start()
try:
while True:
time.sleep(1)
print "hello"
except KeyboardInterrupt:
print "Interrupted by user, shutting down"
myBroker.shutdown()
SoundReceiver.stop()
sys.exit(0)
if __name__ == "__main__":
main()
I would be thankful if anyone has any idea how to get past that error, or what to try in order to get useful information. I first believed that I was sending "wrong" data to Watson, but after many attempts I have no clue how to fix the problem.
Thank you a lot,
Alex
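One direction worth checking, suggested only by the error text above (the service lists 16 kHz mono among the formats it can accept): down-sampling the 48 kHz buffer before sending it. A naive numpy sketch with no anti-aliasing filter, assuming a mono int16 buffer; this is not a confirmed fix:
import numpy as np

def downsample_48k_to_16k(raw_buffer):
    samples = np.frombuffer(raw_buffer, dtype=np.int16)
    return samples[::3].tobytes()  # keep every third sample: 48 kHz -> 16 kHz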

Send list using socket

I'm trying to send a list from server to client.
The list looks like this (it's a csv file).
201,8,0040000080
205,8,1f421d25721e
but when sending I get this error:
TypeError: must be string or buffer, not list
I tried 2 options:
I iterated through the list and sent each string to the server, but got this as a result:
201 ---> 2,0,1
I tried casting each line, e.g. str(line), and then sending it, but got this:
201,8,0040000080 ---> [,',2,0,1,',",", ,',8,',",", ,',0,0,4,0,0,0,0,0,8,0,',]
How can I solve this? I just want to send the data from the client to the server as is. For the record, the client code:
import socket
import csv
clientSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open('can_data.csv', 'rb') as csv_file:
csv_reader = csv.reader(csv_file)
for line in csv_reader:
clientSock.sendto(str(line), (self.address, self.port))
Server code:
with open('output.csv', 'wb') as new_file:
csv_writer = csv.writer(new_file)
while True:
data, addr = s.recvfrom(1024)
csv_writer.writerow(data)
Both sides need to agree on a serialization format. str(line) may work when paired with ast.literal_eval(), but repr(line) would be the better choice, as repr tries for a more precise representation than str. You could also move to a serialization protocol like pickle or json.
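For instance, the repr/literal_eval round trip looks like this (a quick illustration, not part of the original answer):
import ast

line = ['201', '8', '0040000080']
wire = repr(line)                 # "['201', '8', '0040000080']"
assert ast.literal_eval(wire) == line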
Assuming this is Python 3, I also opened the CSV in text mode and used UTF-8 encoding on the wire.
client:
import socket
import csv
address = 'localhost'
port = 5555
clientSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open('can_data.csv', 'r') as csv_file:
csv_reader = csv.reader(csv_file)
for line in csv_reader:
clientSock.sendto(repr(line).encode('utf-8'), (address, port))
server:
import socket
import csv
import ast
address = 'localhost'
port = 5555
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((address, port))
with open('output.csv', 'w') as new_file:
csv_writer = csv.writer(new_file)
while True:
data, addr = sock.recvfrom(1024)
csv_writer.writerow(ast.literal_eval(data.decode('utf-8')))
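For comparison, since json is mentioned above as an alternative, the same row could be serialized like this (a sketch; the sendto/recvfrom calls stay exactly as in the code above):
import json

row = ['201', '8', '0040000080']                # one parsed CSV line
payload = json.dumps(row).encode('utf-8')       # bytes for sendto()
restored = json.loads(payload.decode('utf-8'))  # back to a list on the server
assert restored == row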

OpenFlow - How are ICMP messages handled

I am running a Ryu controller and a Mininet instance with 2 hosts and 1 switch like below.
H1---S---H2
Code in Ryu controller
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet
from ryu.lib.packet import ether_types
class SimpleSwitch13(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
def __init__(self, *args, **kwargs):
super(SimpleSwitch13, self).__init__(*args, **kwargs)
self.mac_to_port = {}
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
datapath = ev.msg.datapath
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
Basically, the switch flow table is empty. In this case, when I run h1 ping h2 from my Mininet console and record the packet exchanges, this is what I get in Wireshark on host h1.
There is no router in the Mininet instance. How am I receiving an ICMP Destination Host Unreachable message from the same host that initiated the ping?
The app code you posted is not complete.
For the complete simple_switch_13.py, you can get it from the osrg GitHub repository.
Take a look; it is like this:
class SimpleSwitch13(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
def __init__(self, *args, **kwargs):
super(SimpleSwitch13, self).__init__(*args, **kwargs)
self.mac_to_port = {}
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
datapath = ev.msg.datapath
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
match = parser.OFPMatch()
actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
ofproto.OFPCML_NO_BUFFER)]
self.add_flow(datapath, 0, match, actions)
def add_flow(self, datapath, priority, match, actions, buffer_id=None):
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
actions)]
if buffer_id:
mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
priority=priority, match=match,
instructions=inst)
else:
mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
match=match, instructions=inst)
datapath.send_msg(mod)
@set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
def _packet_in_handler(self, ev):
# If you hit this you might want to increase
# the "miss_send_length" of your switch
if ev.msg.msg_len < ev.msg.total_len:
self.logger.debug("packet truncated: only %s of %s bytes",
ev.msg.msg_len, ev.msg.total_len)
msg = ev.msg
datapath = msg.datapath
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
in_port = msg.match['in_port']
pkt = packet.Packet(msg.data)
eth = pkt.get_protocols(ethernet.ethernet)[0]
if eth.ethertype == ether_types.ETH_TYPE_LLDP:
# ignore lldp packet
return
dst = eth.dst
src = eth.src
dpid = datapath.id
self.mac_to_port.setdefault(dpid, {})
self.logger.info("packet in %s %s %s %s", dpid, src, dst, in_port)
# learn a mac address to avoid FLOOD next time.
self.mac_to_port[dpid][src] = in_port
if dst in self.mac_to_port[dpid]:
out_port = self.mac_to_port[dpid][dst]
else:
out_port = ofproto.OFPP_FLOOD
actions = [parser.OFPActionOutput(out_port)]
# install a flow to avoid packet_in next time
if out_port != ofproto.OFPP_FLOOD:
match = parser.OFPMatch(in_port=in_port, eth_dst=dst)
# verify if we have a valid buffer_id, if yes avoid to send both
# flow_mod & packet_out
if msg.buffer_id != ofproto.OFP_NO_BUFFER:
self.add_flow(datapath, 1, match, actions, msg.buffer_id)
return
else:
self.add_flow(datapath, 1, match, actions)
data = None
if msg.buffer_id == ofproto.OFP_NO_BUFFER:
data = msg.data
out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
in_port=in_port, actions=actions, data=data)
datapath.send_msg(out)
This simple_switch_13.py app only handles layer-2 forwarding, which is your case.
As you can see, after the connection is established, switch_features_handler listens for this event and adds a "send everything to the controller" flow to the switch (the table-miss flow).
In the normal state, when the controller receives a PACKET_IN, it checks whether the destination MAC is in mac_to_port. If it is, it outputs to that port and at the same time installs a flow (whose match fields are the in-port and the destination MAC); otherwise, the action is set to FLOOD by assigning out_port = FLOOD.
That's the case in layer-2 switching.
For ICMP message handling in layer-3 switching, you need to read the rest_router.py code, which is a lot more complicated.
You get ICMP Destination Host Unreachable because the ARP request is never answered by h2.
Since h1 gets no ARP reply, the ICMP error message comes from h1's own IP stack.

Data encryption issues with Oracle Advanced Security

I have used Oracle Advanced Security to encrypt data during transfer. I have successfully configured SSL with the parameters below and restarted the instance. I am retrieving data from the Java class given below, but I can read the data without decrypting it; the data is not getting encrypted.
Environment:
Oracle 11g database
SQLNET.AUTHENTICATION_SERVICES= (BEQ, TCPS, NTS)
SSL_VERSION = 0
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\Users\kcr\Oracle\WALLETS)
)
)
SSL_CIPHER_SUITES= (SSL_RSA_EXPORT_WITH_RC4_40_MD5)
Java class:
try{
Properties properties = Utils.readProperties("weka/experiment/DatabaseUtils.props");
// Security.addProvider(new oracle.security.pki.OraclePKIProvider()); //Security syntax
String url = "jdbc:oracle:thin:@(DESCRIPTION =\n" +
" (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))\n" +
" (CONNECT_DATA =\n" +
" (SERVER = DEDICATED)\n" +
" (SERVICE_NAME = sal)\n" +
" )\n" +
" )";
java.util.Properties props = new java.util.Properties();
props.setProperty("user", "system");
props.setProperty("password", "weblogic");
// props.setProperty("javax.net.ssl.trustStore","C:\\Users\\kcr\\Oracle\\WALLETS\\ewallet.p12");
// props.setProperty("oracle.net.ssl_cipher_suites","SSL_RSA_EXPORT_WITH_RC4_40_MD5");
// props.setProperty("javax.net.ssl.trustStoreType","PKCS12");
//props.setProperty("javax.net.ssl.trustStorePassword","welcome2");
DriverManager.registerDriver(new OracleDriver());
Connection conn = DriverManager.getConnection(url, props);
/*8 OracleDataSource ods = new OracleDataSource();
ods.setUser("system");
ods.setPassword("weblogic");
ods.setURL(url);
Connection conn = ods.getConnection();*/
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery("select * from iris");
///////////////////////////
while(rset.next()) {
for (int i=1; i<=5; i++) {
System.out.print(rset.getString(i));
}
}
Are you expecting that your SELECT statement would return encrypted data and that your System.out.print calls would result in encrypted output going to the screen? If so, that's not the way Advanced Security works. Advanced Security lets you encrypt data over the wire, but the data is unencrypted in the SQLNet stack, so your SELECT statement will always see the data in an unencrypted state. You would need to do a SQLNet trace or use some sort of packet sniffer to see the encrypted data flowing over the wire.
You'll find the documentation in "SSL With Oracle JDBC Thin Driver".
In particular you should probably use PROTOCOL = TCPS instead of PROTOCOL = TCP. I'd also suggest using a stronger cipher suite (and avoid the anonymous ones, since with them you don't verify the identity of the remote server).
