Debugging remote (DigitalOcean) Lando site (drupal8 recipe) with PHPStorm + Xdebug

I'm trying to run Lando remotely to avoid consuming local resources. Sometimes I need to work on a laptop, and Lando + Xdebug is a hungry beast.
Local
I don't have Lando running locally. I'm synchronizing my files using PHPStorm and Lando is running remotely.
Remote
I have a DigitalOcean droplet set up and running a Lando (drupal8) site. I can access the site and it's running as normal at:
http://165.xxx.xxx.xxx:ppppp
165.xxx.xxx.xxx being the IP of the droplet, and
ppppp being the port on which Lando (Docker) exposes the container
.lando.yml
name: XXXXXX
recipe: drupal8
config:
  php: 7.1
  webroot: ./docroot
  xdebug: false # overridden later
services:
  appserver:
    build:
      - composer install
  ruby:
    type: ruby:2.4
    run:
      - "cd $LANDO_MOUNT && gem install compass"
tooling:
  blt:
    service: appserver
    cmd: /app/vendor/acquia/blt/bin/blt
  gem:
    service: ruby
  compass:
    service: ruby
  fix-compass:
    service: ruby
    cmd: "gem install compass"
.lando.local.yml
Since I don't want this config to apply to my fellow developers:
config:
  xdebug: true
  config:
    php: .lando.php.ini
.lando.php.ini
xdebug.remote_enable = 1
xdebug.remote_autostart = 1
xdebug.remote_connect_back = 0
xdebug.remote_host = localhost
xdebug.remote_port = 9002
xdebug.remote_log = /xdebug.log
xdebug.remote_mode = req
xdebug.idekey = PHPSTORM
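Side note: the settings above are Xdebug 2 names. If the appserver ever picks up Xdebug 3 (newer PHP versions), those settings were renamed, and an equivalent .lando.php.ini would look roughly like this; a sketch under that assumption, not something taken from this setup:
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.client_host = localhost
xdebug.client_port = 9002
xdebug.log = /xdebug.log
xdebug.idekey = PHPSTORM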
PHPStorm Server
Host: localhost
Port: 9002
Debugger: Xdebug
Use path mappings (checked)
-- project --> /app
Steps I take to run this
Start listening for debug connections in PHPStorm
Create SSH tunnel with ssh -R 9002:localhost:9002 root@165.xxx.xxx.xxx (see the sketch after this list)
Refresh http://165.xxx.xxx.xxx:ppppp
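For what it's worth, step 2 can also be run without opening an interactive shell on the droplet; a minimal sketch, assuming the same root SSH access:
# -N keeps the session open without running a remote command; the
# reverse tunnel (droplet:9002 -> this machine:9002, where PHPStorm
# is listening) lives as long as this process does.
ssh -N -R 9002:localhost:9002 root@165.xxx.xxx.xxx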
Findings
Using lando php -i, I can see that Xdebug is running (and all of my php.ini config is set) as it should be, on port 9002.
Using nc -z localhost 9002 || echo 'no tunnel open', I can also tell that the SSH tunnel is open on 9002, as it should be.
I don't get any prompt for incoming connections in PHPStorm.
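One extra check that may be worth doing: inside the appserver container, localhost is the container itself rather than the droplet, so the same tunnel test can be repeated from in there. A sketch using Lando's ssh tooling, assuming nc exists inside the container:
# Run the reachability test inside the appserver container, where
# "localhost" resolves to the container, not to the droplet.
lando ssh -s appserver -c "nc -z localhost 9002 || echo 'no tunnel open'"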
Update:
Some progress when I forced 9002 open with:
sudo iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport 9002 -j ACCEPT
However, now I get this error:
Log opened at 2019-08-20 02:54:17
I: Connecting to configured address/port: 165.xxx.xxx.xxx:9002.
W: Creating socket for '165.xxx.xxx.xxx:9002', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2019-08-20 02:54:17
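One thing stands out in that log: Xdebug is dialing the droplet's public IP, but ssh -R binds the forwarded port to the droplet's loopback only by default, so connections to 165.xxx.xxx.xxx:9002 never reach the tunnel. Making that work requires opting in on the droplet's sshd; a sketch, assuming OpenSSH:
# On the droplet, set "GatewayPorts yes" in /etc/ssh/sshd_config and
# restart sshd, then rebuild the tunnel binding on all interfaces:
ssh -N -R 0.0.0.0:9002:localhost:9002 root@165.xxx.xxx.xxx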

So I've been experiencing this a few times this year. I originally solved it a few weeks ago by upgrading PHPStorm from 2018.1 to 2019.1. It was suggested to me (in some thread on the internet) that the XML schema for the next version of Xdebug was different. It worked immediately after updating. It seems, though, that this broke again in 2019.2.
The last time I got this to work was 2019.1.3. Best of luck!
Update: As I've continued to update both storm and php/xdebug, I've found that this has stabilized. It was all over the place for a while.

Related

ClickHouse doesn't work on port 8123. Code: 210. DB::NetException: Connection refused (localhost:9000)

Hi everyone. I have installed ClickHouse on Ubuntu, but when I try to start the server:
sudo systemctl start clickhouse-server
nothing happens. I also noticed that the DB doesn't listen on the default port 8123. For instance, the commands below give no result:
sudo netstat -tulpn | grep clickhouse
sudo netstat -tulpn | grep 8123
When I try to connect with clickhouse-client --password, I get:
ClickHouse client version 22.8.4.7 (official build).
Password for user (default):
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
Clickhouse status:
clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; ve>
     Active: activating (auto-restart) (Result: exit-code) since Wed 2022-09-07>
    Process: 10730 ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhous>
   Main PID: 10730 (code=exited, status=233/RUNTIME_DIRECTORY)
        CPU: 90ms
Clickhouse clickhouse-server.err.log:
11. _start in /usr/bin/clickhouse (version 22.8.4.7 (official build)). Will overwrite it
2022.09.07 11:42:23.694750 [ 21604 ] {} <Error> Application: DB::Exception: Caught Exception Code: 76. DB::ErrnoException: Cannot open file /var/lib/clickhouse/uuid, errno: 13, strerror: Permission denied. (CANNOT_OPEN_FILE) (version 22.8.4.7 (official build)) while writing the Server UUID file /var/lib/clickhouse/uuid
What about checking:
sudo systemctl status clickhouse-server
sudo tail -300 /var/log/clickhouse-server/clickhouse-server.err.log
sudo tail -300 /var/log/clickhouse-server/stderr.log
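Given the "Permission denied" on /var/lib/clickhouse/uuid in the err.log above, plus systemd's status=233/RUNTIME_DIRECTORY, data-directory ownership looks like the culprit. A sketch of the usual fix, assuming the stock clickhouse service user:
# Make the data directory writable by the user the service runs as,
# then try the server again.
sudo chown -R clickhouse:clickhouse /var/lib/clickhouse
sudo systemctl restart clickhouse-server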

How to connect a Docker container application to a non-containerized database? [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed 1 year ago.
First of all: I don't want to connect to a Docker container running Mongo.
I am building a Docker container that should access the Mongo database I have installed on my running Ubuntu 18.04 machine.
Docker suggests this can be done fairly easily by just adding the -p flag to the run command, so I did this:
docker run -p 27017:27017 --name mycontainer myimage
Port 27017 is the default port for Mongo (see here), and running netstat -pna | grep 27017 confirms this by returning the following:
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55880 ESTABLISHED -
tcp 0 0 127.0.0.1:55882 127.0.0.1:27017 ESTABLISHED -
tcp 0 0 127.0.0.1:55880 127.0.0.1:27017 ESTABLISHED -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55884 ESTABLISHED -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55882 ESTABLISHED -
tcp 0 0 127.0.0.1:55884 127.0.0.1:27017 ESTABLISHED -
unix 2 [ ACC ] STREAM LISTENING 77163 - /tmp/mongodb-27017.sock
But running the docker command shown above, I get an error indicating that I can't connect to the port because it is already in use (which is actually the whole point of connecting to it):
docker: Error response from daemon: driver failed programming external connectivity on endpoint mycontainer (1c69e178b48ee51ab2765479e0ecf3eea8f43e6420e34b878882db8d8d5e07dd): Error starting userland proxy: listen tcp4 0.0.0.0:27017: bind: address already in use.
ERRO[0000] error waiting for container: context canceled
How should I proceed? What did I do wrong?
This depends on how your application connects to the database.
Almost all languages need the connection parameters.
Example with Node.js & MySQL:
const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: '10.10.10.10',
    user: 'root',
    password: 'changeme',
    database: 'stackoverflow'
  },
  debug: true
});
Example with Python & Mongo:
import pymongo
conn = pymongo.MongoClient('mongodb://root:pass@10.10.10.10:27017/')
Traditionally these connection parameters are stored in a properties or configuration file. One per environment: dev, staging, prod, etc
Configuration file
If your application uses this method to get the connection parameters, you just need to follow these steps:
set the IP, port, user, and password in the configuration file, usually inside your source code: application.properties, config.yml, parameters.ini, etc.
perform a docker build ... of your app.
perform a docker run ... of your app. In this step, you don't need to pass any Mongo parameters, because they are already "inside" your app. Check this and this to understand why localhost is not used in Docker.
Disadvantage: this approach works in simple scenarios, but if you have several environments like dev, testing, staging, pre-prod, prod, etc., you will need to perform a build for each environment, because the connection parameters are baked into your app.
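As an aside on why localhost does not work from inside the container: there, 127.0.0.1 is the container itself, not the Ubuntu host. On recent Docker (20.10+) a common workaround is to map the host's gateway to a well-known name; a sketch reusing mycontainer/myimage from the question:
# Expose the host to the container under the name host.docker.internal,
# then point the app's connection string at that name instead of
# localhost. Note: mongod must also listen on the Docker bridge
# (or 0.0.0.0), since it currently binds only 127.0.0.1.
docker run --add-host=host.docker.internal:host-gateway \
  --name mycontainer myimage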
Environment variables
This is my favorite, and it is also recommended on several platforms like Heroku, OpenShift, Cloud Foundry, etc.
In this approach you need just one build. The image can be deployed to any environment simply by supplying the correct parameters at run time.
Example with Node.js & MySQL using environment variables:
const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: process.env.database_host,
    user: process.env.database_user,
    password: process.env.database_password,
    database: process.env.database_database
  },
  debug: true
});
Example with Python & Mongo using environment variables:
import pymongo
import os

database_host = os.environ['database_host']
database_user = os.environ['database_user']
database_password = os.environ['database_password']
urlConnect = "mongodb://{}:{}@{}:27017/".format(database_user, database_password, database_host)
conn = pymongo.MongoClient(urlConnect)
As you can see, the source code does not need to read a properties file to get the connection parameters, because it expects them to be available as environment variables.
Finally, the steps with this approach are:
perform a docker build ... of your app.
perform a docker run ... of your app. In this case, you need to send the variables from the host to your container:
docker run -it -p 8080:80 \
-e "database_host=10.10.10.10" \
-e "database_user=root" \
-e "database_password=pass" \
--name my_app my_container:1.0
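A variant of the same run that avoids repeating -e flags is Docker's --env-file option; a sketch, assuming a hypothetical file named db.env holding the same key=value pairs:
# db.env (hypothetical) contains lines like:
#   database_host=10.10.10.10
#   database_user=root
#   database_password=pass
docker run -it -p 8080:80 --env-file db.env --name my_app my_container:1.0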
Remote variables
If you have a distributed, scalable environment, you will want to manage your variables centrally.
Basically, you will have a web console to create, edit, delete, and export your variables, and these variables must be injected into your Docker container in an easy way.
Example of how Heroku offers you a way to manage your variables:
Check:
#4 Centralized and manageable configuration

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT on remote but works fine on localhost

I have a React with ASP.NET Core website. It worked fine on localhost, but when published on the remote IIS server the timeout error occurs.
The front-end (React client) and back-end (server) ASP.NET Core Web API work independently.
Before uploading, I changed the following in Program.cs in the Web API:
UseUrls("https://localhost:4000")
to UseUrls("https://www.virtualcollege.pk:4000")
I also changed the front-end base URL similarly.
Moreover, the connection strings in appsettings.json are correct for both databases.
I added migrations and updated the databases successfully.
The website is live but the timeout error occurs:
virtualcollege.pk
I also tried the URL with "https://myip-address:4000".
Thanks in advance for help.
If I remove the port number from the URL, publish to a local folder, and then upload to the remote server, the webapi.exe on the local machine runs as follows:
You have to open incoming requests on port 4000. Try one of the methods below.
Windows Server
Please check this link or this one
Ubuntu/Debian
sudo ufw allow 4000/tcp
sudo ufw status   # check status
CentOS
First, you should disable SELinux. Edit the file /etc/sysconfig/selinux so it looks like this:
SELINUX=disabled
SELINUXTYPE=targeted
Save file and restart system.
Then you can add the new rule to iptables:
iptables -A INPUT -m state --state NEW -p tcp --dport 4000 -j ACCEPT
and restart iptables with /etc/init.d/iptables restart
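One caveat on that last step: the iptables service reloads its rules from /etc/sysconfig/iptables, so a rule added only in memory is lost on restart unless it is saved first. A sketch for CentOS 6-style init scripts:
# Persist the in-memory rules, then restart the service.
/etc/init.d/iptables save
/etc/init.d/iptables restart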

error on start nginx service on ubuntu vps

I am a beginner VPS user. I have a ReactJS app, and I want to deploy it on my Ubuntu 18 VPS with nginx.
I have followed the steps of this tutorial: Deploying create-react-app with Nginx and Ubuntu
I have already checked all the steps, but when I run the command
sudo service nginx start
the system shows me the following error message:
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
and when I run "journalctl -xe" it shows me this:
[screenshots of the journalctl output showing the nginx error]
Please help me, friends.
Look in your log file at the messages before the error "Failed to startup nginx"; you will see the reason for the problem.
bind() to 0.0.0.0:80 failed (98: Address already in use)
It looks like port 80 on your VPS is already in use by some application. Port 80 is used for HTTP services, so most likely you already have the Apache HTTP server (or some other one) running.
Use this command to see which application is using it:
sudo netstat -tulpn | grep ":80"
If you see apache
tcp6 0 0 :::80 :::* LISTEN 349/apache2
then you can stop apache
# apache service name can also be httpd (use the right command)
# sudo service httpd stop
sudo service apache2 stop
and run nginx
sudo service nginx start
But you should be sure that you don't use apache for another website.
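And if Apache is not needed on the VPS at all, it can also be kept from coming back at the next reboot; a sketch, assuming Ubuntu 18 with systemd, where the service is named apache2:
sudo systemctl stop apache2
sudo systemctl disable apache2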

MongoDB: 'connection attempts failed' while doing remote access

I want others to access MongoDB on my OS X machine; the firewall is off and my configuration file is:
systemLog:
  destination: file
  path: /usr/local/var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /data/db
net:
  bindIp: 0.0.0.0
  port: 27017
and I start the service like:
sudo mongod --config /usr/local/etc/mongod.conf
I can always access the database using mongo and mongo 127.0.0.1,
but when I use mongo xx.xxx.xxx.xxx (my IP address)
the access fails with the report:
mongo xx.xxx.xxx.xxx
MongoDB shell version v3.4.17
connecting to: mongodb://xx.xxx.xxx.xxx:27017/test
2018-10-04T09:05:54.316+0800 W NETWORK [thread1] Failed to connect to xx.xxx.xxx.xxx:27017 after 5000ms milliseconds, giving up.
2018-10-04T09:05:54.317+0800 E QUERY [thread1] Error: couldn't connect to server xx.xxx.xxx.xxx:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:240:13
#(connect):1:6
exception: connect failed
I'm really puzzled here. Obviously it's a timeout, but my network connection is OK. I used Homebrew to install MongoDB, and I've tried every version from 3.0 to 4.0; the result is always the same. Google didn't give any help.
Here is the MongoDB version and environment info:
root#Backup:~# mongod --version
db version v3.2.22
git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
allocator: tcmalloc
modules: none
build environment:
distmod: ubuntu1604
distarch: x86_64
target_arch: x86_64
What I did was update bindIp: 0.0.0.0 in both files: /etc/mongod.conf and /etc/mongodb.conf. After restarting the service, I was able to connect to MongoDB remotely.
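To confirm the running mongod actually picked up the new bind address after the restart, a quick check; a sketch using lsof, which ships with macOS:
# Show the listener on 27017; the address column should read *:27017
# (all interfaces) rather than 127.0.0.1:27017.
sudo lsof -iTCP:27017 -sTCP:LISTEN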
