Can someone give me a tip on how to fix a Forward Historian agent that keeps going to a BAD state? It is installed on an edge device and is attempting to forward data to a VOLTTRON Central instance.
This is my forwarder agent config on the edge device; the destination server key came from the Central instance:
{
"destination-vip": "tcp://192.168.0.168:22916",
"destination-serverkey": "asdfasdfasdfasdfasdf"
}
On the edge device I ran vctl auth serverkey to get the edge device's key, and on the VOLTTRON Central instance I ran vctl auth add to add that edge device key to the server.
The full traceback is in the log file. When I start the agent it goes BAD in about 30 seconds, and I think the error is coming from the edge-to-server authentication rather than some internal agent authentication on the edge device, but I'm not sure.
2022-10-24 15:23:20,622 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core INFO: Destination serverkey not found in known hosts file, using config
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core INFO: CORE address:tcp://10.200.200.168:22916?publickey=WMdhM3co5g2keLwh5yMdFBgoxaq9g9RRFz__K6KctkI&secretkey=U6UZ9bLnbvvcDm6V5uByBbc-1tA1bUd1ZYubWgKjdG4&serverkey=GgJApMOBM8bg7HiKECP6KOwk_O8f_HslrbqPygDcoCY
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: No response to hello message after 10 seconds.
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: Type of message bus used zmq
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: A common reason for this is a conflicting VIP IDENTITY.
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: Another common reason is not having an auth entry onthe target instance.
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: Shutting down agent.
2022-10-24 15:23:20,630 (forwarderagent-5.1 2914) volttron.platform.vip.agent.core ERROR: Possible conflicting identity is: volttron1.forwarderagent-5.1_1
2022-10-24 15:23:20,829 (listeneragent-3.3 1608) __main__ INFO: Peer: pubsub, Sender: listeneragent-3.3_1:, Bus: , Topic: heartbeat/listeneragent-3.3_1, Headers: {'TimeStamp': '2022-10-24T15:23:20.826211+00:00', 'min_compatible_version': '3.0', 'max_compatible_version': ''}, Message:
'GOOD'
2022-10-24 15:23:21,219 (listeneragent-3.3 1608) __main__ INFO: Peer: pubsub, Sender: forwarderagent-5.1_1:, Bus: , Topic: heartbeat/forwarderagent-5.1_1, Headers: {'TimeStamp': '2022-10-24T15:23:21.216704+00:00', 'min_compatible_version': '3.0', 'max_compatible_version': ''}, Message:
'BAD'
2022-10-24 15:23:21,613 () volttron.platform.auth.auth_protocols.auth_zmq INFO: AUTH: After authenticate user id: 'control.connection', b'4552b404-0290-41d5-b7a8-ba4653e98e3c'
2022-10-24 15:23:21,613 () volttron.platform.auth.auth_protocols.auth_zmq INFO: authentication success: userid=b'4552b404-0290-41d5-b7a8-ba4653e98e3c' domain='vip', address='localhost:1000:1000:2943', mechanism='CURVE', credentials=['sRZnNl3PPt-_NBiyXcCtqbwG7_5dYAALmDQzXtBEFFU'], user='control.connection'
2022-10-24 15:23:21,624 (listeneragent-3.3 1608) __main__ INFO: Peer: pubsub, Sender: control.connection:, Bus: , Topic: heartbeat/control.connection, Headers: {'TimeStamp': '2022-10-24T15:23:21.623967+00:00', 'min_compatible_version': '3.0', 'max_compatible_version': ''}, Message:
'GOOD'
2022-10-24 15:23:21,631 (forwarderagent-5.1 2914) volttron.platform.vip.agent.subsystems.auth INFO: Skipping updating rpc auth capabilities for agent volttron1.forwarderagent-5.1_1 connecting to remote address: tcp://10.200.200.168:22916?publickey=WMdhM3co5g2keLwh5yMdFBgoxaq9g9RRFz__K6KctkI&secretkey=U6UZ9bLnbvvcDm6V5uByBbc-1tA1bUd1ZYubWgKjdG4&serverkey=GgJApMOBM8bg7HiKECP6KOwk_O8f_HslrbqPygDcoCY
2022-10-24 15:23:21,631 (forwarderagent-5.1 2914) volttron.platform.vip.agent.subsystems.auth WARNING: Auth entry not found for volttron1.forwarderagent-5.1_1: rpc_method_authorizations not updated. If this agent does have an auth entry, verify that the 'identity' field has been included in the auth entry. This should be set to the identity of the agent
2022-10-24 15:23:30,627 (forwarderagent-5.1 2914) __main__ ERROR: Couldn't connect to address. gevent timeout: (tcp://10.200.200.168:22916)
2022-10-24 15:23:30,628 (forwarderagent-5.1 2914) __main__ ERROR: Could not connect to targeted historian dest_vip tcp://10.200.200.168:22916 dest_address None
2022-10-24 15:23:30,755 () volttron.platform.vip.agent.core INFO: CORE address:inproc://vip
2022-10-24 15:23:30,786 () volttron.platform.vip.agent.core INFO: Connected to platform: router: 4552b404-0290-41d5-b7a8-ba4653e98e3c version: 1.0 identity
One thing to note: I am trying to rectify a botched update from 8.0 to 8.2 on the edge device. I ended up just recloning the repo and bootstrapping 8.2, so now I have this volttron1 config somehow.
#bbartling, on the destination Volttron platform, can you verify that all remote connections (e.g. Forwarder connections) have been received and approved? You can run the following commands on the destination platform to verify:
# to view all remote connections; this will return a list of user ids that can be used to approve or deny connections
vctl auth remote list
# to approve a remote connection
vctl auth remote approve <USER_ID>
Docs for the vctl auth remote commands
If your goal is to verify that your forwarder can connect to the destination platform, you can always auto-accept all remote connections (though you should only do this for testing, since it is not secure):
vctl auth add --credentials '/.*/'
I fixed this by doing the following:
1. Run vctl auth serverkey on the Central instance and put the returned key into the forwarder agent config file on the edge device (the destination-serverkey value).
2. On the edge device, run vctl auth publickey, which prints the public key of the forwarder agent installed on the edge device.
3. Add that forwarder public key to the Central instance with vctl auth add.
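Put together as a rough command sketch (this shows only the general flow; the actual key values and interactive prompts are omitted):
# on the Central instance: print its server key
vctl auth serverkey
# copy that value into the forwarder config on the edge device as "destination-serverkey"
# on the edge device: print the public keys of the installed agents, including the forwarder
vctl auth publickey
# back on the Central instance: add an auth entry, pasting the forwarder's public key when prompted for credentials
vctl auth add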
Since my Xcode got (automatically...) updated to 13 and all my simulators moved from iOS 14.5 to 15, I'm unable to launch WDA with the same setup and commands, from both the desktop app and the node (Selenium server grid/node setup).
The issue exists for all simulators.
I can still use a real device.
(The actual ports in the errors might differ: the first error is from the automation run where it fails; the second and third are from the desktop app, where it still fails with the same error. The root issue is the same in both cases.)
The error logs I get:
In short:
org.openqa.selenium.SessionNotCreatedException: Unable to create a new remote session. Please check the server log for more details. Original error: An unknown server-side error occurred while processing the command. Original error: Unable to start WebDriverAgent session because of xcodebuild failure: An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8205
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: 'Janoss-MacBook-Pro.local', ip: 'fe80:0:0:0:18e4:52:307c:8c7b%en0', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '11.5', java.version: '11.0.11'
Driver info: driver.version: IOSDriver
remote stacktrace: UnknownError: An unknown server-side error occurred while processing the command. Original error: Unable to start WebDriverAgent session because of xcodebuild failure: An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8205
at getResponseForW3CError (/usr/local/lib/node_modules/appium/node_modules/appium-base-driver/lib/protocol/errors.js:804:9)
at asyncHandler (/usr/local/lib/node_modules/appium/node_modules/appium-base-driver/lib/protocol/protocol.js:380:37)
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: 'Janoss-MacBook-Pro.local', ip: 'fe80:0:0:0:18e4:52:307c:8c7b%en0', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '11.5', java.version: '11.0.11'
Driver info: driver.version: IOSDriver
The longer error from the server:
[iOSSim] Got Simulator UI client PID: 11815
[iOSSim] Both Simulator with UDID 'F5A500AB-FAA5-41A0-A009-E5A8EDB8643A' and the UI client are currently running
[BaseDriver] Event 'simStarted' logged at 1632487969688 (14:52:49 GMT+0200 (Central European Summer Time))
[WebDriverAgent] No obsolete cached processes from previous WDA sessions listening on port 8100 have been found
[DevCon Factory] Requesting connection for device F5A500AB-FAA5-41A0-A009-E5A8EDB8643A on local port 8100
[DevCon Factory] Cached connections count: 0
[DevCon Factory] Successfully requested the connection for F5A500AB-FAA5-41A0-A009-E5A8EDB8643A:8100
[XCUITest] Starting WebDriverAgent initialization with the synchronization key 'XCUITestDriver'
[WD Proxy] Matched '/status' to command name 'getStatus'
[WD Proxy] Proxying [GET /status] to [GET http://127.0.0.1:8100/status] with no body
[WD Proxy] connect ECONNREFUSED 127.0.0.1:8100
[WebDriverAgent] WDA is not listening at 'http://127.0.0.1:8100/'
[WebDriverAgent] WDA is currently not running. There is nothing to cache
[XCUITest] Trying to start WebDriverAgent 2 times with 10000ms interval
[XCUITest] These values can be customized by changing wdaStartupRetries/wdaStartupRetryInterval capabilities
[BaseDriver] Event 'wdaStartAttempted' logged at 1632487969773 (14:52:49 GMT+0200 (Central European Summer Time))
[WebDriverAgent] Launching WebDriverAgent on the device
[WebDriverAgent] WebDriverAgent does not need a cleanup. The sources are up to date (1620631774000 >= 1620631774000)
[WebDriverAgent] Killing running processes 'xcodebuild.*F5A500AB-FAA5-41A0-A009-E5A8EDB8643A, F5A500AB-FAA5-41A0-A009-E5A8EDB8643A.*XCTRunner, xctest.*F5A500AB-FAA5-41A0-A009-E5A8EDB8643A' for the device F5A500AB-FAA5-41A0-A009-E5A8EDB8643A...
[WebDriverAgent] 'pgrep -if xcodebuild.*F5A500AB-FAA5-41A0-A009-E5A8EDB8643A' didn't detect any matching processes. Return code: 1
[WebDriverAgent] 'pgrep -if F5A500AB-FAA5-41A0-A009-E5A8EDB8643A.*XCTRunner' didn't detect any matching processes. Return code: 1
[WebDriverAgent] 'pgrep -if xctest.*F5A500AB-FAA5-41A0-A009-E5A8EDB8643A' didn't detect any matching processes. Return code: 1
[WebDriverAgent] Beginning test with command 'xcodebuild build-for-testing test-without-building -project /Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-webdriveragent/WebDriverAgent.xcodeproj -scheme WebDriverAgentRunner -destination id=F5A500AB-FAA5-41A0-A009-E5A8EDB8643A IPHONEOS_DEPLOYMENT_TARGET=15.0 GCC_TREAT_WARNINGS_AS_ERRORS=0 COMPILER_INDEX_STORE_ENABLE=NO' in directory '/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-webdriveragent'
[WebDriverAgent] Output from xcodebuild will only be logged if any errors are present there. To change this, use 'showXcodeLog' desired capability
[WebDriverAgent] Waiting up to 60000ms for WebDriverAgent to start
[WD Proxy] Matched '/status' to command name 'getStatus'
[WD Proxy] Proxying [GET /status] to [GET http://127.0.0.1:8100/status] with no body
[WD Proxy] connect ECONNREFUSED 127.0.0.1:8100
[WD Proxy] Matched '/status' to command name 'getStatus'
[WD Proxy] Proxying [GET /status] to [GET http://127.0.0.1:8100/status] with no body
[WD Proxy] connect ECONNREFUSED 127.0.0.1:8100
Then a long waiting with only the ECONNREFUSED error and finally failing:
[WD Proxy] Proxying [GET /status] to [GET http://127.0.0.1:8100/status] with no body
[WD Proxy] connect ECONNREFUSED 127.0.0.1:8100
[XCUITest] Failed to create WDA session (An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8100). Retrying...
[XCUITest] UnknownError: An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8100
[XCUITest] at JWProxy.command (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-base-driver/lib/jsonwp-proxy/proxy.js:274:13)
[XCUITest] at runMicrotasks ()
[XCUITest] at processTicksAndRejections (internal/process/task_queues.js:85:5)
[XCUITest] Unable to start WebDriverAgent session because of xcodebuild failure: An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8100
[XCUITest] Quitting and uninstalling WebDriverAgent
[WebDriverAgent] Shutting down sub-processes
[iOSSim] Building bundle path map
[iOSSim] The simulator has '1' bundles which have 'WebDriverAgentRunner-Runner' as their 'CFBundleName':
[iOSSim] 'com.jano.facebook.WebDriverAgentRunner.xctrunner'
[WebDriverAgent] Uninstalling WDAs: 'com.jano.facebook.WebDriverAgentRunner.xctrunner'
[XCUITest] {}
[DevCon Factory] Releasing connections for F5A500AB-FAA5-41A0-A009-E5A8EDB8643A device on any port number
[DevCon Factory] Found cached connections to release: ["F5A500AB-FAA5-41A0-A009-E5A8EDB8643A:8100"]
[DevCon Factory] Cached connections count: 0
[XCUITest] Not clearing log files. Use `clearSystemFiles` capability to turn on.
[IOSSimulatorLog] Stopping iOS log capture
[BaseDriver] Event 'newSessionStarted' logged at 1632488146662 (14:55:46 GMT+0200 (Central European Summer Time))
[MJSONWP] Encountered internal error running command: Error: Unable to start WebDriverAgent session because of xcodebuild failure: An unknown server-side error occurred while processing the command. Original error: Could not proxy command to the remote server. Original error: connect ECONNREFUSED 127.0.0.1:8100
[MJSONWP] at quitAndUninstall (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:544:15)
[MJSONWP] at /Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:610:11
[MJSONWP] at wrapped (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/asyncbox/lib/asyncbox.js:60:13)
[MJSONWP] at retry (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/asyncbox/lib/asyncbox.js:43:13)
[MJSONWP] at retryInterval (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/asyncbox/lib/asyncbox.js:70:10)
[MJSONWP] at /Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:559:7
[HTTP] <-- POST /wd/hub/session 500 177858 ms - 388
[HTTP]
[HTTP] --> DELETE /wd/hub/session
To solve this issue, we need to update Appium on the computer (it should be 1.22.0 or above).
To see your version:
appium -v
To install or update Appium:
npm install -g appium
This fixed the simulator part. To fix Appium Desktop, you also have to update the desktop app, but it has now been split into two separate apps:
One for the server
https://github.com/appium/appium-desktop/releases/
And one for the inspector https://github.com/appium/appium-inspector/releases
I had a similar problem with the old Appium (where the server and the inspector were combined) and with the new version, where they were split into separate apps.
I solved it by adding the following server capabilities:
serverCapabilities.setCapability("wdaStartupRetries", "4");
serverCapabilities.setCapability("iosInstallPause","8000" );
serverCapabilities.setCapability("wdaStartupRetryInterval", "20000");
After that, the WDA session became stable.
I had the same problem.
I solved it by adding Appium desired capabilities and reducing the newCommandTimeout from 120 to 60.
The final state of my Appium capabilities:
"newCommandTimeout": 60,
"wdaStartupRetries": 3,
"wdaStartupRetryInterval": 20000
Consider the following error message:
Timestamp: 9/11/2018 3:09:34 PM
Message: Class: XXX.MW.BackEnds.oTransaccoesSuspeitas Method: Microsoft.XLANGs.Core.StopConditions segment7(Microsoft.XLANGs.Core.StopConditions) : Exception: An error occurred while processing the message, refer to the details section for more information
Message ID: {139DCE33-4627-4103-9B25-1906EBAD14A8}
Instance ID: {1F943998-158C-4357-8E2A-0473B9425CD3}
Error Description: System.Net.WebException: The HTTP request is unauthorized with client authentication scheme 'Ntlm'. The authentication header received from the server was 'NTLM'.
401 UNAUTHORIZED
Stack Microsoft.XLANGs.BizTalk.Engine
Category: Critical
Priority: 4
EventId: 0
Severity: Critical
Title:XXX.MW.BackEnds, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1512f695a673d231
Machine: CORE04
Application Domain: __XDomain_3.0.1.0_0
Process Id: 54889
Process Name: C:\Program Files (x86)\Microsoft BizTalk Server 2013 R2\BTSNTSvc.exe
Win32 Thread Id: 7844
Thread Name:
Extended Properties: Exception - Microsoft.XLANGs.Core.XlangSoapException: An error occurred while processing the message, refer to the details section for more information
Message ID: {139DCE33-4627-4103-9B25-1906EBAD14A8}
Instance ID: {1F943998-158C-4357-8E2A-0473B9425CD3}
Error Description: System.Net.WebException: The HTTP request is unauthorized with client authentication scheme 'Ntlm'. The authentication header received from the server was 'NTLM'.
401 UNAUTHORIZED
at Microsoft.BizTalk.XLANGs.BTXEngine.BTXPortBase.VerifyTransport(Envelope env, Int32 operationId, Context ctx)
at Microsoft.XLANGs.Core.Subscription.Receive(Segment s, Context ctx, Envelope& env, Boolean topOnly)
at Microsoft.XLANGs.Core.PortBase.GetMessageId(Subscription subscription, Segment currentSegment, Context cxt, Envelope& env, CachedObject location)
at XXX.MW.BackEnds.oTransaccoesSuspeitas.segment2(StopConditions stopOn)
at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)
The error is being thrown by an orchestration during the processing of a message obtained from an SQL table.
I know the data was obtained successfully because I have a readingIndex that's updated by BizTalk.
The problem happens randomly, but I can fix it by restarting the BizTalk application.
What could possibly be causing the problem?
Do I reset some identification code by restarting the applications?
I am using Istio with Envoy as the sidecar proxy. I have deployed the bookinfo sample and it's working fine, but when I deploy my own application, which calls SQL Server over HTTPS as well as other external services, it throws this exception:
A connection was successfully established with the server, but then an
error occurred during the pre-login handshake. (provider: TCP
Provider, error: 35 - An internal exception was caught)
To let Istio applications communicate with external TCP services,
check this blog post https://istio.io/latest/blog/2018/egress-tcp/.
To let Istio applications communicate with external HTTP and TLS services, check https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/.
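For example, a minimal ServiceEntry sketch for an external SQL Server endpoint might look like this (assumptions: the default SQL Server port 1433 and a placeholder host name, both of which you would replace with your own values):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-mssql            # hypothetical name
spec:
  hosts:
  - mssql.example.com             # placeholder: your SQL Server's DNS name
  ports:
  - number: 1433                  # assumed default SQL Server port
    name: tcp-mssql
    protocol: TCP
  resolution: DNS
  location: MESH_EXTERNAL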
I faced the same issue connecting to SQL Server from my application, which I have deployed in an Istio-enabled namespace. I created a ServiceEntry as shown below to allow access.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: sql-replica
spec:
  hosts:
  - SQL-DNS-NAME or IP
  addresses:
  - xxx.xx.x.xxx/32
  ports:
  - number: 5432
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
Here in the config file, xxx.xx.x.xxx is the IP that we get by pinging the DNS name.
$ kubectl apply -f access-sql-server-from-mesh.yaml
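To confirm the entry was created, a quick check (the resource name matches the metadata above):
$ kubectl get serviceentry sql-replica -o yaml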
I am having a hard time creating my first example listener agent on the VOLTTRON platform. This is the error I get from tail volttron.log after creating and starting the agent:
2017-01-13 13:12:56,664 (listeneragent-3.2 16153)
volttron.platform.vip.agent.core ERROR: No response to hello message
after 10 seconds.
2017-01-13 13:12:56,664 (listeneragent-3.2 16153)
volttron.platform.vip.agent.core ERROR: A common reason for this is a
conflicting VIP IDENTITY.
2017-01-13 13:12:56,664 (listeneragent-3.2 16153)
volttron.platform.vip.agent.core ERROR: Shutting down agent.
2017-01-13 13:12:56,664 (listeneragent-3.2 16153)
volttron.platform.vip.agent.core ERROR: Possible conflicting identity
is: platform.listener
When I activate the VOLTTRON platform and just run tail volttron.log without creating any agents, I get this message in the terminal:
2017-01-13 13:22:06,276 () volttron.platform.vip.agent.core DEBUG:
Running onstart methods.
2017-01-13 13:22:06,277 () volttron.platform.vip.agent.core INFO:
Connected to platform: router: ce01039f-9fc1-4395-b294-0c008f43aa8b
version: 1.0 identity: pubsub
2017-01-13 13:22:06,277 () volttron.platform.vip.agent.core DEBUG:
Running onstart methods.
2017-01-13 13:22:06,278 () volttron.platform.vip.agent.core INFO:
Connected to platform: router: ce01039f-9fc1-4395-b294-0c008f43aa8b
version: 1.0 identity: pubsub.compat
2017-01-13 13:22:06,278 () volttron.platform.vip.agent.core DEBUG:
Running onstart methods.
2017-01-13 13:22:06,279 () volttron.platform.vip.agent.core INFO:
Connected to platform: router: ce01039f-9fc1-4395-b294-0c008f43aa8b
version: 1.0 identity: master.web
2017-01-13 13:22:06,279 () volttron.platform.vip.agent.core DEBUG:
Running onstart methods.
2017-01-13 13:22:06,279 () volttron.platform.main INFO: loading
protected-topics file /home/mint/.volttron/protected_topics.json
2017-01-13 13:22:06,279 () volttron.platform.main INFO:
protected-topics file /home/mint/.volttron/protected_topics.json
loaded
2017-01-13 13:22:06,279 () volttron.platform.web INFO: Web server
not started.
Any idea what's causing this error: "INFO: Web server not started."?
(We actually solved this in an over-the-phone conversation; I'm not psychic.)
This is what happens when you attempt to run VOLTTRON on the Mint OS Live CD and not from an installed copy of Mint.
You will also see authentication errors in the log when Agents try to start up.
In the VM, double-click the "Install Linux Mint" icon on the desktop and follow the prompts. Once Mint is installed on the VM, restart the VM and proceed with the normal VOLTTRON installation instructions.
I'm having issues running an xDebug session even though I can connect to the DBGp proxy successfully. I'm using both local and remote SSH tunnels: port 9000 for xDebug and port 9001 for the xDebug DBGp client.
- The code is being debugged remotely; the xDebug server is running on an Amazon EC2 instance.
- I am using Zend Studio as my local debugging client on my MacBook.
- I am running a remote SSH tunnel for port 9000: "ssh ec2-user@X.X.X.X -R 9000:127.0.0.1:9000"
Up to this point I'm able to use xDebug successfully, but then I start running into issues with the proxy:
- I then run the DBGp proxy on the remote server:
./pydbgpproxy
INFO: dbgp.proxy: starting proxy listeners. appid: 20906
INFO: dbgp.proxy: dbgp listener on 127.0.0.1:9000
INFO: dbgp.proxy: IDE listener on 127.0.0.1:9001
- I then set up a local SSH tunnel for port 9001: "ssh ec2-user@X.X.X.X -L 9001:127.0.0.1:9001"
- From Zend Studio I'm able to connect successfully to the DBGp proxy, where "SessionName" is the name of my session:
INFO: dbgp.proxy: Server:onConnect ('127.0.0.1', 51828) [proxyinit -p 9000 -k SessionName -m 0]
- When I trigger a remote xDebug debugging session using my session name, it fails like so:
INFO: dbgp.proxy: connection from 127.0.0.1:39172 [<__main__.sessionProxy instance at 0x122e0e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39173 [<__main__.sessionProxy instance at 0x7f87980210e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39174 [<__main__.sessionProxy instance at 0x7f87980243b0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39175 [<__main__.sessionProxy instance at 0x7f879814c878>]
INFO: dbgp.proxy: connection from 127.0.0.1:39176 [<__main__.sessionProxy instance at 0x7f87800a2368>]
INFO: dbgp.proxy: connection from 127.0.0.1:39177 [<__main__.sessionProxy instance at 0x123cb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39178 [<__main__.sessionProxy instance at 0x12387e8>]
INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]
INFO: dbgp.proxy: connection from 127.0.0.1:39180 [<__main__.sessionProxy instance at 0x124fb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39181 [<__main__.sessionProxy instance at 0x7f8798047dd0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39182 [<__main__.sessionProxy instance at 0x1244d88>]
ERROR: dbgp.proxy: Unable to connect to the server listener 127.0.0.1:9000 [<__main__.sessionProxy instance at 0x7f8790025878>]
Traceback (most recent call last):
File "./pydbgpproxy", line 222, in startServer
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
error: [Errno 24] Too many open files
WARNING: dbgp.proxy: Unable to connect to server with key [SessionName], stopping request [<__main__.sessionProxy instance at 0x7f8790025878>]
WARNING: dbgp.proxy: Exception in _cmdloop [[Errno 104] Connection reset by peer]
INFO: dbgp.proxy: session stopped
It actually shows lines like "INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]" about 50 more times than what I copied here; I trimmed them for brevity.
It seems like I almost got it to work, but it's erroring out. I'm currently using the pydbgpproxy Python script, version 7, from http://code.activestate.com/komodo/remotedebugging/. I tried the version 8 script, but it just errors. I also tried pydbgpproxy version 6, but it has the same exact issue.
IN SUMMARY: xDebug is running on the server and I can connect to it normally without the proxy. With the proxy I can also connect successfully, but then, when running a script, it hits this weird error.
Does anyone know what might be causing this issue?