Can anyone help me with this ELK/Salesforce error?

ERROR
Attempted to resurrect connection to dead ES instance,
{:url=>"http://localhost:9200/",
:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError,
:error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
Here are my files.
Logstash.conf:
input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
discovery.type: single-node
xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.ml.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
logstash.yml:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.enabled: false
Kibana.yml:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.ml.enabled: false
xpack.graph.enabled: false
xpack.reporting.enabled: false
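For reference, a couple of checks that can be run against this setup, assuming it is started with Docker Compose (the elasticsearch hostname in logstash.conf suggests it is); the service names elasticsearch and logstash below are assumptions and may differ in your compose file:
docker compose ps                                                 # confirm the elasticsearch container is actually running
curl -s http://localhost:9200                                     # from the Docker host: should print the cluster banner, not "connection refused"
docker compose exec logstash curl -s http://elasticsearch:9200    # same check from inside the Logstash container over the compose network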

Connection refused can be an indication that a firewall is blocking that port or service, so make sure your firewall settings allow the service you're trying to reach. On Linux there is a GUI for the firewall: either select System → Administration → Firewall from the panel, or type system-config-firewall at a shell prompt. On Windows, type "firewall" in the search box at the bottom left corner and click on the firewall's advanced settings. Try opening port 9200, and if Elasticsearch is running as a service, allow it through the firewall as well.
*In Linux you can try this to open ports 9200 and 9300:
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300/tcp
*Start the service:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
*Then allow the service through the firewall and reload the rules:
sudo firewall-cmd --permanent --add-service=elasticsearch
sudo firewall-cmd --reload
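After opening the port, a couple of quick checks to confirm the change took effect and that Elasticsearch is actually listening (assuming firewalld and a local install; adjust if Elasticsearch runs elsewhere):
sudo firewall-cmd --list-ports          # 9200/tcp and 9300/tcp should now be listed
sudo ss -tlnp | grep -E '9200|9300'     # confirm Elasticsearch is bound and listening on the port
curl -s http://localhost:9200           # should return the Elasticsearch banner instead of "connection refused"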

Related

Unable to make a remote connection with PostgreSQL

I have PostgreSQL running on Ubuntu Server and I want to make a remote connection to it on port 5432.
I've checked that I can ping the public IP of the Ubuntu server from my machine, and that works fine.
Next I changed two files on the Ubuntu server. First, postgresql.conf, which looks as below:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
Next, I added these two lines to pg_hba.conf:
host all all 0.0.0.0/0 trust
host all all ::/0 trust
Finally, I checked whether the firewall is running with sudo ufw status verbose, which reported inactive.
As far as I understand, I've allowed PostgreSQL to accept remote connections and the firewall is not active, so nothing should be blocking the connection. Still, I get the following error:
psycopg2.OperationalError: connection to server at "XXX.XXX.XXX.XXX", port 5432 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
How can I fix this error?
Edit
Although I can ping and SSH to the Ubuntu server using its public IP, I cannot telnet to it.
I checked whether port 5432 is open using this link, but it turned out to be closed.
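A few checks worth running here, sketched under the assumption of a standard Ubuntu/systemd install (unit names and paths may differ):
sudo systemctl restart postgresql       # listen_addresses changes require a restart, per the comment in postgresql.conf
sudo ss -tlnp | grep 5432               # should show postgres bound to 0.0.0.0:5432, not just 127.0.0.1:5432
nc -vz XXX.XXX.XXX.XXX 5432             # from the client machine: "timed out" here points at something between the client and the server, not at PostgreSQL itself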

Nagios nrpe plugin install on remote host

On CentOS 7, following the NRPE plugin install steps, I got this error when testing the connection between the Nagios server and the remote agent:
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5
CHECK_NRPE: Error - Could not connect to 192.168.50.5: Connection reset by peer
In /etc/xinetd.d/nrpe, I added the Nagios server's IP address to the only_from field.
# default: off
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
    disable         = no
    socket_type     = stream
    port            = 5666
    wait            = no
    user            = nagios
    group           = nagios
    server          = /usr/local/nagios/bin/nrpe
    server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
    only_from       = 127.0.0.1 ::1 {server_IP}
    log_on_success  =
}
I then restarted the xinetd service; however, upon checking the service status this error log message appeared...
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: bind failed (Address already in use (errno = 98)). service = nrpe
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: Service nrpe failed to start and is deactivated.
The solution was not only to include the server IP in /etc/xinetd.d/nrpe, but also to stop the standalone nrpe service before restarting the xinetd service.
systemctl stop nrpe
systemctl restart xinetd
It seems that restarting xinetd on its own failed to load the nrpe service because its port conflicted with the nrpe daemon that was already running.
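For reference, a minimal version of that fix, with a check first to see which process owns port 5666 (command names assume a systemd-based CentOS 7 host):
sudo ss -tlnp | grep 5666               # if the standalone nrpe daemon still owns the port, xinetd's bind will keep failing
sudo systemctl stop nrpe
sudo systemctl disable nrpe             # optional: stop nrpe from grabbing the port again at boot
sudo systemctl restart xinetd
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5    # re-test from the Nagios server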

UFW not blocking traffic to microk8s cluster

I successfully deployed a k8s pod with a Service of type NodePort in a microk8s cluster. Now when I enable UFW and try to deny incoming traffic to the exposed port 31001 with the command ufw deny 31001, UFW still allows traffic to port 31001.
What should I do in UFW to allow and deny traffic to port 31001?
Even if there is no entry in UFW for port 31001, I get a successful response from port 31001.
Please help.
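For what it's worth, two commands that can show what is happening, offered as a sketch rather than a confirmed fix: UFW's rules live in iptables alongside the rules microk8s/kube-proxy install, and NodePort traffic is rewritten in the nat table before UFW's filter rules ever see destination port 31001.
sudo ufw status numbered                        # confirm the deny rule for 31001 was actually added
sudo iptables -t nat -L PREROUTING -n | head    # kube-proxy's NodePort DNAT rules are applied here, ahead of UFW's filtering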

Xdebug with NetBeans on Windows 10 doesn't connect to remote host

After upgrading to Windows 10, Xdebug is unable to connect to the remote host. The log says:
I: Checking remote connect back address.
I: Remote address found, connecting to ::1:9001.
E: Could not connect to client. :-(
The following is xdebug configuration:
xdebug.remote_enable = true
xdebug.remote_handler=dbgp
xdebug.remote_connect_back = 1
xdebug.remote_host=localhost
xdebug.remote_port=9001
xdebug.idekey=netbeans-xdebug
xdebug.remote_log="D:/wamp/tmp/xdebug.log"
output_buffering=off
xdebug.profiler_enable = 0
I didn't forget to set the debugger port to 9001 in the NetBeans options.
What did I miss?
regards
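One detail worth checking, offered as an assumption rather than a confirmed fix: with xdebug.remote_connect_back = 1, Xdebug 2 ignores remote_host and connects back to the address the request came from, which the log shows resolving to the IPv6 loopback ::1. It is worth confirming whether NetBeans is listening on [::1]:9001 at all, for example from a Windows command prompt:
netstat -an | findstr "9001"
If the IDE is only listening on 127.0.0.1:9001, either trigger the debug session over IPv4 or set xdebug.remote_connect_back = 0 so remote_host is used instead.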

xDebug [Errno 24] Too many open files when connecting to DBGp Proxy

I'm having issues running an Xdebug session even though I can connect to the DBGp proxy successfully. I'm using both local and remote SSH tunnels: port 9000 for Xdebug and port 9001 for the Xdebug DBGp client.
* The code is being debugged remotely; the Xdebug server is running on an Amazon EC2 instance.
* I am using Zend Studio as my local debugging client on my MacBook.
* I am running a remote SSH tunnel for port 9000: "ssh ec2-user@X.X.X.X -R 9000:127.0.0.1:9000"
Up to this point I'm able to use Xdebug successfully, but then I start running into issues with the proxy:
* I run the DBGp proxy on the remote server:
./pydbgpproxy
INFO: dbgp.proxy: starting proxy listeners. appid: 20906
INFO: dbgp.proxy: dbgp listener on 127.0.0.1:9000
INFO: dbgp.proxy: IDE listener on 127.0.0.1:9001
* I then set up a local SSH tunnel for port 9001: "ssh ec2-user@X.X.X.X -L 9001:127.0.0.1:9001"
* From Zend Studio I'm able to connect successfully to the DBGp proxy, where "SessionName" is the name of my session:
INFO: dbgp.proxy: Server:onConnect ('127.0.0.1', 51828) [proxyinit -p 9000 -k SessionName -m 0]
* When I trigger a remote Xdebug debugging session using my session name, it fails like so:
INFO: dbgp.proxy: connection from 127.0.0.1:39172 [<__main__.sessionProxy instance at 0x122e0e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39173 [<__main__.sessionProxy instance at 0x7f87980210e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39174 [<__main__.sessionProxy instance at 0x7f87980243b0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39175 [<__main__.sessionProxy instance at 0x7f879814c878>]
INFO: dbgp.proxy: connection from 127.0.0.1:39176 [<__main__.sessionProxy instance at 0x7f87800a2368>]
INFO: dbgp.proxy: connection from 127.0.0.1:39177 [<__main__.sessionProxy instance at 0x123cb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39178 [<__main__.sessionProxy instance at 0x12387e8>]
INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]
INFO: dbgp.proxy: connection from 127.0.0.1:39180 [<__main__.sessionProxy instance at 0x124fb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39181 [<__main__.sessionProxy instance at 0x7f8798047dd0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39182 [<__main__.sessionProxy instance at 0x1244d88>]
ERROR: dbgp.proxy: Unable to connect to the server listener 127.0.0.1:9000 [<__main__.sessionProxy instance at 0x7f8790025878>]
Traceback (most recent call last):
File "./pydbgpproxy", line 222, in startServer
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
error: [Errno 24] Too many open files
WARNING: dbgp.proxy: Unable to connect to server with key [SessionName], stopping request [<__main__.sessionProxy instance at 0x7f8790025878>]
WARNING: dbgp.proxy: Exception in _cmdloop [[Errno 104] Connection reset by peer]
INFO: dbgp.proxy: session stopped
It actually shows lines like "INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]" roughly 50 times more often than what I copied and pasted here; I trimmed them for brevity.
It seems like I almost got it working, but it keeps erroring out. I'm currently using the pydbgpproxy Python script, version 7, from http://code.activestate.com/komodo/remotedebugging/. I tried the version 8 script, but it just errors. I also tried pydbgpproxy version 6, but it has the exact same issue.
IN SUMMARY: Xdebug is running on the server, and I can connect to it normally without the proxy. With the proxy I can also connect successfully, but then running a script hits this weird error.
Does anyone know what might be causing this issue?
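Given the [Errno 24] Too many open files in the traceback, one quick diagnostic on the EC2 host is the per-process file-descriptor limit of the shell that launches pydbgpproxy (a sketch of a check, not a confirmed fix):
ulimit -n                       # current soft limit on open files for this shell (often 1024)
ulimit -n 4096                  # raise it for this session, up to the hard limit
./pydbgpproxy                   # relaunch the proxy under the higher limit and retry the debug session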
