xDebug [Errno 24] Too many open files when connecting to DBGp Proxy - xdebug

I'm having issues running an xDebug session after I've connected to the DBGp proxy successfully. I'm using SSH tunnels for both ports: a remote tunnel for port 9000 (xdebug) and a local tunnel for port 9001 (the xdebug DBGp client).
- The code is being debugged remotely; the xDebug server is running on an Amazon EC2 instance.
- I am using Zend Studio on my MacBook as the local debugging client.
- I am running a remote SSH tunnel for port 9000: "ssh ec2-user@X.X.X.X -R 9000/127.0.0.1/9000"
Up to this point I'm able to use xDebug successfully, but then I start running into issues running the proxy:
- I then run the dbgp proxy on the remote server:
./pydbgpproxy
INFO: dbgp.proxy: starting proxy listeners. appid: 20906
INFO: dbgp.proxy: dbgp listener on 127.0.0.1:9000
INFO: dbgp.proxy: IDE listener on 127.0.0.1:9001
- I then set up a local SSH tunnel for port 9001: "ssh ec2-user@X.X.X.X -L 9001/127.0.0.1/9001"
- From Zend Studio I'm able to connect successfully to the DBGp proxy, where "SessionName" is the name of my session:
INFO: dbgp.proxy: Server:onConnect ('127.0.0.1', 51828) [proxyinit -p 9000 -k SessionName -m 0]
- When I trigger a remote xdebug debugging session using my session name, it fails like so:
INFO: dbgp.proxy: connection from 127.0.0.1:39172 [<__main__.sessionProxy instance at 0x122e0e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39173 [<__main__.sessionProxy instance at 0x7f87980210e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39174 [<__main__.sessionProxy instance at 0x7f87980243b0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39175 [<__main__.sessionProxy instance at 0x7f879814c878>]
INFO: dbgp.proxy: connection from 127.0.0.1:39176 [<__main__.sessionProxy instance at 0x7f87800a2368>]
INFO: dbgp.proxy: connection from 127.0.0.1:39177 [<__main__.sessionProxy instance at 0x123cb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39178 [<__main__.sessionProxy instance at 0x12387e8>]
INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]
INFO: dbgp.proxy: connection from 127.0.0.1:39180 [<__main__.sessionProxy instance at 0x124fb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39181 [<__main__.sessionProxy instance at 0x7f8798047dd0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39182 [<__main__.sessionProxy instance at 0x1244d88>]
ERROR: dbgp.proxy: Unable to connect to the server listener 127.0.0.1:9000 [<__main__.sessionProxy instance at 0x7f8790025878>]
Traceback (most recent call last):
File "./pydbgpproxy", line 222, in startServer
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
error: [Errno 24] Too many open files
WARNING: dbgp.proxy: Unable to connect to server with key [SessionName], stopping request [<__main__.sessionProxy instance at 0x7f8790025878>]
WARNING: dbgp.proxy: Exception in _cmdloop [[Errno 104] Connection reset by peer]
INFO: dbgp.proxy: session stopped
The proxy actually prints lines like "INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]" roughly 50 times more than what I pasted above; I trimmed them for brevity.
It seems like I almost have it working, but it keeps erroring out. I'm currently using version 7 of the pydbgpproxy Python script from http://code.activestate.com/komodo/remotedebugging/. I tried the version 8 script, but it just errors. I also tried version 6 of pydbgpproxy, and it has the exact same issue.
IN SUMMARY: xDebug is running on the server and I can connect to it normally without the proxy. With the proxy I can also connect to it successfully, but as soon as I run a script it hits this weird error.
Does anyone know what might be causing this issue?
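For reference, [Errno 24] is the OS-level EMFILE error: the pydbgpproxy process has run out of open file descriptors, which is why new connections start failing once the burst of sessionProxy connections piles up. Below is a diagnostic sketch for checking and temporarily raising that limit on the EC2 instance (it assumes a typical default soft limit of 1024; it is not a confirmed fix):
# Check the per-process open-file limits in the shell that launches pydbgpproxy
ulimit -Sn    # soft limit (what EMFILE trips on), often 1024 by default
ulimit -Hn    # hard limit
# Count how many descriptors the running proxy is actually holding
ls /proc/$(pgrep -f pydbgpproxy)/fd | wc -l
# Raise the soft limit for this shell, then restart the proxy
# (persistent changes belong in /etc/security/limits.conf)
ulimit -n 4096
./pydbgpproxy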

Related

How to connect remotely to SQL Server Instance Running in Minikube k8s cluster from SSMS?

I have a Windows 10 bare-metal machine running an Ubuntu 20 virtual machine with VirtualBox.
The Ubuntu VM runs a minikube cluster (v1.25.2 with podman driver) on which a SQL Server Linux instance is deployed with the following resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: mcr.microsoft.com/mssql/server:2019-latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: MSSQL_SA_PASSWORD
---
apiVersion: v1
kind: Secret
metadata:
  name: mssql
type: Opaque
data:
  MSSQL_SA_PASSWORD: PFlvdXJTdHJvbmchUGFzc3cwcmQ+
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
Using minikube tunnel, I am able to expose the LoadBalancer service with an external IP inside the VM, and I can connect successfully to the SQL Server instance with sqlcmd from inside the Linux VM using that external IP.
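For reference, this is a minimal sketch of that in-VM check, assuming the service name app and the sa password from the manifests above:
# Inside the Ubuntu VM: keep the tunnel running in one terminal
minikube tunnel
# In another terminal, read the external IP assigned to the LoadBalancer
# service and connect with sqlcmd
EXTERNAL_IP=$(kubectl get svc app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
sqlcmd -S "$EXTERNAL_IP,1433" -U sa -P '<YourStrong!Passw0rd>' -Q 'SELECT @@VERSION'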
The Ubuntu VM is configured with a NAT network interface, with port 1433 on the VM mapped to port 1433 on the Windows host.
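For reference, a mapping like that corresponds to a VBoxManage NAT port-forwarding rule roughly like the one below (the VM name "ubuntu-vm" is a placeholder, not taken from the question):
# On the Windows host, with the VM powered off
# (use "VBoxManage controlvm ubuntu-vm natpf1 ..." while it is running)
VBoxManage modifyvm "ubuntu-vm" --natpf1 "mssql,tcp,127.0.0.1,1433,,1433"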
Whenever I try to connect with SSMS from the Windows host machine I get the following error:
TITLE: Connect to Server
------------------------------
Cannot connect to 127.0.0.1.
------------------------------
ADDITIONAL INFORMATION:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64)
For help, click: https://learn.microsoft.com/sql/relational-databases/errors-events/mssqlserver-64-database-engine-error
------------------------------
The specified network name is no longer available
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (.Net SqlClient Data Provider
In addition, I get the same error with sqlcmd.exe from the windows host:
SQLCMD.EXE -S 127.0.0.1 -U sa -P "<YourStrong!Passw0rd>"
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection because an error was encountered during handshakes before login. Common causes include client attempting to connect to an unsupported version of SQL Server, server too busy to accept new connections or a resource limitation (memory or maximum allowed connections) on the server..
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : TCP Provider: An existing connection was forcibly closed by the remote host.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection due to prelogin failure.
The connection does not time out; rather, it looks like it is being interrupted by something.
A lot of resources on the internet related to error 64 seem to point to firewall misconfigurations or DNS issues.
Note that I tried the following:
- I am connecting to the instance via 127.0.0.1 from the Windows host (so DNS issues are irrelevant)
- Ensured that port 1433 is free on the host machine
- Created a firewall rule (Windows Firewall) to open outbound connections to port 1433
- Port-forwarded to the pod with kubectl port-forward, but hit the same issue (see the sketch after this list)
- Tried to set the session timeout for LanManWorkstation as suggested here, without success
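One variant worth ruling out with that port-forward attempt: kubectl port-forward binds to 127.0.0.1 inside the VM by default, so the VirtualBox NAT mapping from the Windows host cannot reach it unless it listens on all interfaces. A minimal sketch, assuming the service name app from the manifests above:
# Inside the Ubuntu VM: bind the forward to all interfaces so the
# NAT mapping (Windows host 1433 -> VM 1433) can reach it
kubectl port-forward --address 0.0.0.0 service/app 1433:1433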
What am I missing?

Nagios nrpe plugin install on remote host

On CentOS 7, following the NRPE plugin install steps, when testing the connection between the Nagios server and the remote agent I got this error:
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5
CHECK_NRPE: Error - Could not connect to 192.168.50.5: Connection reset by peer
In /etc/xinetd.d/nrpe, I added the Nagios server's IP address to the only_from field.
# default: off
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
    disable         = no
    socket_type     = stream
    port            = 5666
    wait            = no
    user            = nagios
    group           = nagios
    server          = /usr/local/nagios/bin/nrpe
    server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
    only_from       = 127.0.0.1 ::1 {server_IP}
    log_on_success  =
}
I then restarted the xinetd service; however, upon checking the service status this error log message appeared...
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: bind failed (Address already in use (errno = 98)). service = nrpe
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: Service nrpe failed to start and is deactivated.
The solution was not only to include the server IP in /etc/xinetd.d/nrpe, but also to stop the nrpe service before restarting the xinetd service:
systemctl stop nrpe
systemctl restart xinetd
It seems that restarting xinetd on its own failed to load the nrpe service because its port conflicted with the already-running standalone nrpe service.
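As a quick confirmation after both commands, the same check from the Nagios server should now answer with the agent's version string instead of resetting the connection:
# A healthy agent replies with something like "NRPE v4.x.x"
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5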

Unable to connect to SQL Server from Docker container (Linux image)

In our application we are using a Linux-based container which accesses a SQL Server instance installed on a VM. Everything works fine in the local environment outside the container, but when I run the app in a local container we get the error below.
"A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught"
appsettings.json
"ConnectionStrings": {
  "DbConnection": "Server=tcp:vmname\\sqlservername,49763;Database=dbname;User ID=username_Users;Password=pwd;MultipleActiveResultSets=true;Integrated Security=False;"
}
Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
.......
Any input will be appreciated.
The issue was related to the TLS version on the SQL Server; enabling TLS 1.2 resolved it.
Please add ;TrustServerCertificate=true to your connection string.
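With that option added, the (placeholder) connection string from the question looks roughly like this; only TrustServerCertificate=true is new, everything else is unchanged:
"ConnectionStrings": {
  "DbConnection": "Server=tcp:vmname\\sqlservername,49763;Database=dbname;User ID=username_Users;Password=pwd;MultipleActiveResultSets=true;Integrated Security=False;TrustServerCertificate=true;"
}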

Using external ZooKeeper with SolrCloud

I am trying to implement SolrCloud. I followed the doc from the official resource https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud. It works fine with the embedded ZooKeeper, but it is recommended to use an external ZooKeeper. I installed ZooKeeper on my system and created a data directory named zookeeper in my home folder. I created sub-folders named 1 and 2 and created a myid file containing the text 1 and 2 respectively in each folder, as mentioned in the doc. I created config files for ZooKeeper, zoo.cfg:
clientPort=2181
initLimit=5
syncLimit=2
server.1=localhost:2879:3879
server.2=localhost:2888:3888
and zoo2.cfg:
initLimit=5
syncLimit=2
clientPort=2182
server.1=localhost:2878:3878
server.2=localhost:2888:3888
Next, I cd into the ZooKeeper directory and run:
bin/zkServer.sh start zoo.cfg
bin/zkServer.sh start zoo2.cfg
And they start successfully. Next I run:
bin/solr start -e cloud -z localhost:2181,localhost:2182
The system asks me the number of shards etc., like in Getting Started; I select port 8990 for node 1 and 8991 for node 2. It gives this error:
Waiting to see Solr listening on port 8991 [/] Still not seeing Solr listening on 8991 after 30 seconds!
WARN - 2015-10-30 09:47:04.827; [ ] org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
WARN - 2015-10-30 09:47:05.929; [ ] org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
WARN - 2015-10-30 09:47:06.030; [ ] org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
WARN - 2015-10-30 09:47:07.131; [ ] org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
WARN - 2015-10-30 09:47:07.232; [ ] org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Where am I going wrong? I have gone through many docs, but the Apache doc is not clear about the external ZooKeeper setup.
Your ZooKeeper ensemble must have an odd number of nodes: 1, 3, 5, etc.
If you want to test the ZK clustering feature, then you have to set up at least 3 ZK instances. In this case, don't forget:
To set the ZK server id correctly in the myid file, which must be created in the dataDir directory referenced by your zoo.cfg.
To use separate dataDir and dataLogDir settings for each ZK instance.
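A minimal sketch of what one instance's config could look like under that advice; the paths and ports here are illustrative, not taken from the question:
# zoo1.cfg - one of (at least) three instances; each instance needs its
# own dataDir, dataLogDir, clientPort and peer/election ports
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
dataDir=/home/user/zookeeper/1/data
dataLogDir=/home/user/zookeeper/1/datalog
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
# and the matching id file for this instance:
#   echo 1 > /home/user/zookeeper/1/data/myid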

not attempt to authenticate using SASL (unknown error)

I am trying to set up ZooKeeper on two EC2 instances, as given here and here.
When I try to run ZooKeeper, it fails with an error:
command: bin/zkCli.sh -server localhost:2181
2015-03-15 00:22:35,644 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3ff0efca
Welcome to ZooKeeper!
2015-03-15 00:22:35,671 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2015-03-15 00:22:35,677 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[zk: localhost:2181(CONNECTING) 0] 2015-03-15 00:22:36,796 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2015-03-15 00:22:36,797 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
zoo.cfg is as below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=localhost:2888:3888
server.2=<My ec2 private IPs>:2889:3889
I have also created the myid file on both EC2 instances at /var/lib/zookeeper/myid.
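For clarity, the myid on each host has to match the server.N entry that refers to that host in zoo.cfg; a minimal sketch (run on each instance separately):
# On the instance zoo.cfg lists as server.1
echo 1 | sudo tee /var/lib/zookeeper/myid
# On the instance zoo.cfg lists as server.2 (the EC2 private IP)
echo 2 | sudo tee /var/lib/zookeeper/myid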
I also tried to edit the /etc/hosts file, but I am still facing the same issue.
Also, how can I start both ZooKeeper instances with one command?
Note: the server starts successfully if I try the bin/zkCli.sh start command.
Thanks in advance!
Look at the ZooKeeper log zookeeper.out; if it shows a connection-limit error, add the following to zoo.cfg:
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
This is a temporary error; for me it went away after some time.
This is my zoo.cfg file:
dataDir=../data
clientPort=2181
tickTime=2000
initLimit=5
This error occurred when I forgot to run %ZOOKEEPER_HOME%\bin\zkServer.cmd.
Once I ran it, the problem was resolved.
Correct this property in server.properties. The default is localhost; change it to match the ZooKeeper server's startup IP and port:
zookeeper.connect=0.0.0.0:2181
