Objective: I am trying to set up a test environment with a test server instance running on a machine with a static IP. I want to be able to connect to, use, and test code against this server from all other machines on the same local network.
Problem:
I am unable to establish a connection to the app server running on the machine with the static IP. It is an App Engine development server and runs without errors (tested). A connection can be established from a browser on the local machine, but when connecting from a browser on any other machine on the same network, no connection is established.
The same setup works for a Jenkins server: the Jenkins dashboard can be accessed from all other machines on the network.
If you are running a local version of app engine, you can use the --host option.
dev_appserver.py --host=0.0.0.0 myapp
The host address to use for the server. You may need to set this to be able to access the development server from another computer on your network. An address of 0.0.0.0 allows both localhost access and hostname access. Default is localhost.
https://cloud.google.com/appengine/docs/python/tools/devserver
For Java, use the --address option instead.
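For example, a rough sketch with the classic Java SDK (the war/ directory and the port are placeholders for your own app):
dev_appserver.sh --address=0.0.0.0 --port=8080 war/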
Related
I have set up WSL2 with Ubuntu. I am now running a local SQL Server instance on port 1401 using Docker.
Container port:
0.0.0.0:1401->1433/tcp
I would like to connect to this instance from SSMS, but I am getting the following error:
Server name: localhost, 1401
Error:
Cannot connect to localhost,1401.
A network-related or instance-specific error occurred while
establishing a connection to SQL Server.
The server was not found or was not accessible. Verify that the
instance name is correct and that SQL Server is configured to allow
remote connections.
(provider: TCP Provider, error: 0 - The wait operation timed out.)
(Microsoft SQL Server, Error: 258)
[Solution]
I am able to connect via the WSL2 IP. I run the "hostname -I" command in WSL2 and use that IP in SSMS, and the connection succeeds.
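For illustration, a rough sketch of that flow (the 172.x address below is just an example; yours will differ):
hostname -I
The first address printed (e.g. 172.28.123.45) is the WSL2 address; in SSMS enter it together with the mapped port as the server name, e.g. 172.28.123.45,1401. The same can be checked from PowerShell with sqlcmd -S 172.28.123.45,1401 -U sa, if sqlcmd is installed.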
First question: is there a VPN running or connected in Windows? If so, ignore the rest of this and suspect that first. Make sure the VPN is not running, stop Docker, issue a wsl --shutdown, restart, and try again.
Assuming that's not the problem ...
Normally, WSL2 provides a feature known as "localhost forwarding" which allows services/apps on Windows to communicate with the virtualized WSL2 IP using localhost. It essentially takes any localhost traffic that isn't directed to a port bound under Windows and forwards it to the Hyper-V virtual network for WSL2.
All WSL2 instances (including the Docker instance) share the same WSL2 network interface as they are all running in the same virtual machine/kernel.
So you seem to be doing the right thing in attempting to connect to localhost from SSMS.
But ... sometimes that localhost forwarding breaks. There are two common (related) scenarios that can cause this (and perhaps others):
Hibernation of the Windows host
Having Windows Fast Startup enabled in Power Manager
First check to make sure you can access 1401 from within WSL2:
nc -zv localhost 1401
This assumes netcat is installed, which it is by default in the WSL2 Ubuntu distribution. For other distributions, install it or check connectivity via other methods.
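For example, a bash-only check of the same port (a sketch; /dev/tcp is a bash pseudo-path handled by the shell, so this works without netcat):
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/localhost/1401' && echo "port open" || echo "port closed"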
If that doesn't succeed, then I'd suspect some configuration issue in SQL Server.
If that does succeed, then run the same test from the Windows host in PowerShell:
Test-NetConnection -ComputerName "localhost" -Port 1401
If that doesn't succeed, then I'd suspect a localhost forwarding issue.
Side note: I'm assuming you are running Docker Desktop, but if you are just running Docker Engine in a WSL2 instance, that's no problem. Just ignore the Docker Desktop instructions below.
First, check whether localhostForwarding has been disabled in the Windows-side %UserProfile%\.wslconfig (that is where this setting lives, under its [wsl2] section, rather than in a distro's /etc/wsl.conf). I'm assuming it hasn't, since that is not the default, but if it has, set it back to true.
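A minimal sketch of that Windows-side file with forwarding explicitly enabled (create %UserProfile%\.wslconfig if it doesn't exist):
[wsl2]
localhostForwarding=true
A change here only takes effect after the wsl --shutdown in the next step.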
Stop all WSL2 services, instances, shells, apps, etc. (including Docker Desktop)
From PowerShell:
wsl --shutdown
Then restart Docker Desktop and/or your container and try again
If localhost doesn't work, try using [::1] in the server name (e.g. [::1],1401). In WSL2 the forwarded port may be listening over IPv6, and SSMS is sometimes unable to resolve localhost to the IPv6 loopback address [::1].
Source: https://jayfuconsulting.wordpress.com/2020/11/14/sql-server-2019-docker-wsl-2/
One last thing you could try is to modify the Windows hosts file. I tried almost all of the steps mentioned in various links, but to no avail. Then I opened the hosts file, which can be found under
C:\Windows\System32\Drivers\Etc
Open the hosts file and uncomment (remove the # sign from) the entries in the localhost name resolution section.
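On a default Windows install those entries look like this once the # is removed:
127.0.0.1       localhost
::1             localhost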
I have a React client running locally on my machine using react-scripts, and I can work with it normally on that machine. I also have a Node.js server running on a different port. I figured I should be able to connect to these from my other computer or my phone. I can connect to the server using my local IPv4 address plus the server port, but when I try to connect to the client using the same address plus the client port, I get a loading spinner in the browser tab and the connection times out after a while.
I figured it might be a firewall issue, so I added an inbound rule (and an outbound one, though I don't think that should matter) on the machine running the client and server, letting all traffic through the relevant port, but this changed nothing. When I run a network diagnostic from Chrome in the tab where I can't open the client, it tells me the webpage is online but isn't responding. I get the same behavior when connecting from my phone.
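(For reference, an inbound rule of that kind can be created from an elevated PowerShell roughly like this; port 3000 is only an assumption based on the react-scripts default, so adjust it to whatever the client actually listens on:)
New-NetFirewallRule -DisplayName "React dev server" -Direction Inbound -Protocol TCP -LocalPort 3000 -Action Allow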
One thing that might be of note: when I serve the build folder of my app through my Node.js server, the app opens just fine on the other computer using the server port, but I don't want to have to rebuild my app manually after every change.
What could this be down to?
EDIT: Forgot to mention, all devices are on the same router. The laptop running the servers and the cellphone are connected via wireless and the PC is connected via ethernet.
I have an RPi running an instance of volttron-central. I can VNC into the RPi and view the web UI from a browser pointed at localhost, so I know it is running. However, when I attempt to connect from a PC on the same LAN using the RPi's IP address, I get a "refused to connect" error.
Is this a security feature? If so, is there any way of viewing the web UI from a different machine, or does that machine need to be running its own instance of volttron-central?
Edit your config file to use an external address (i.e. not 127.0.0.1). In VOLTTRON_HOME (~/.volttron), edit the config file and change the bind-web-address setting to the machine's LAN address (what you have above as 192.168.1.4), i.e. http://192.168.1.4:8080. Then restart the platform.
Note: you should also make sure your /etc/hosts file maps 192.168.1.4 to your hostname; then you could go to http://foo:8080 rather than using the address. This works with the bind-web-address, but not with the vip-address.
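A rough sketch of the edited ~/.volttron/config (keys can vary by VOLTTRON version; the addresses and the foo hostname are placeholders carried over from the example above):
[volttron]
vip-address = tcp://127.0.0.1:22916
bind-web-address = http://192.168.1.4:8080
and the corresponding /etc/hosts line:
192.168.1.4    foo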
On the same virtual machine (remote, Ubuntu), I have:
A SQL Server instance running in a Docker container
A .NET Core 2.2 (IdentityServer) application running in a Docker container
An instance of jwilder/nginx-proxy serving as a reverse proxy for every web application on the machine
A multitude of other .NET Core apps
I am able to connect to all of my websites using both the machine IP + port and the domain name, which means the reverse proxy works as expected and the containers are well configured.
I am able to connect to the SQL Server using SSMS from my local machine, which means that the SQL Server container properly forwards TCP connections on port 1433.
The IdentityServer .NET Core web application is able to connect to the SQL Server when run on my local machine.
However, the IdentityServer application running in Docker on the remote machine can't reach the SQL Server instance, failing with the following error (shortened for clarity; stack trace removed):
System.Data.SqlClient.SqlException (0x80131904):
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: TCP Provider, error: 40 - Could not open a connection to SQL Server) at [...]
I know that the SQL server is running and reachable from the internet, and I know that the application's code is not at fault because I tested both.
So I deduced that it had to be the IdentityServer container blocking the connection. I tried:
Using the --expose 20 option on the IdentityServer container
Mapping port 20 inside the container to a port on the outside (-p 45264:20), in addition to the already exposed port 80
I originally used port 1433 on both sides of the mapping, but since that didn't work I tried a different port (20). It didn't change anything.
Here is the connection string used by the IdentityServer (sensitive data hidden):
Data Source=***.***.***.***,20;Initial Catalog=Identity;Persist Security Info=True;User ID=**;Password=******************
Why can't my IdentityServer container reach the SQL Server container, while the SQL Server itself is perfectly reachable from outside? How can I make this setup work?
When wrapping SQL Server in Docker, the first thing to look at is the way you connect. SQL Server prefers named pipes, so you have to explicitly force TCP.
If the connection is made locally, don't use localhost; change it to 127.0.0.1. Writing an explicit tcp: prefix may also help, like this: Server=tcp:x.y.z.q,1433
As I understand it, you run SQL Server and IdentityServer (which has the connection problem) in separate Docker containers.
If so, then referring to localhost (i.e. 127.0.0.1) is not correct, because in that case IdentityServer tries to connect to itself. That would work if IdentityServer ran on the host machine, since you forward the SQL Server port to the host, but in your case you should connect to the SQL Server container's IP instead.
Considering all of the above, I see three options to solve this:
Get the IP address of the SQL Server container by running docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <sql_container_name_or_id> and use that IP in the connection string.
Run the SQL container with a static IP via docker run --ip <static_ip_value> <sql_server_image> (note that --ip only takes effect on a user-defined network) and then use the static IP you specified in the connection string.
Run the SQL container with a specific hostname via docker run --hostname <sql_host_name> <sql_server_image> and then use that hostname in the connection string.
It is up to you which way to go.
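As a quick illustration of the first option (the 172.17.0.3 address is made up; use whatever docker inspect actually prints, and note that a container's IP can change when it is recreated):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <sql_container_name_or_id>
If that prints 172.17.0.3, the connection string becomes something like Data Source=172.17.0.3,1433;Initial Catalog=Identity;User ID=**;Password=******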
Use tcp, 127.0.0.1 and the host port to connect. In the IdentityServer Docker settings, declare that it depends on the SQL database server container, like this:
identityservice:
  ...
  depends_on:
    - sqldataservice
This way the database container is started first.
"ConnectionString": "Server=tcp:127.0.0.1,8433;Database=dbname;User Id=sa;Password=abc#1234;"
I ended up giving up on getting this to work on a single host, so I simply decided to run the SQL Server on a separate machine.
I'm very new to Google Cloud and to running applications in general. I currently have a Django app running in a Docker container in the Google App Engine flexible environment that connects to a Google Cloud SQL (PostgreSQL) instance in the same project. The latest version has been running for about three days now without issue.
The Problem:
Today I started receiving OperationalError: server closed the connection unexpectedly errors repeatedly from the application.
I can run the Cloud SQL Proxy and it starts up normally (Ready for new connections), but if I try to connect with psql, I receive the error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
And the proxy reports:
couldn't connect to "<instance_name>:us-central1:<instance_name>":
dial tcp <ip address>:3307: connect: connection refused
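(For context, the kind of local test being described is the standard v1 proxy invocation plus psql, roughly like this; the instance connection name and local port are placeholders:)
./cloud_sql_proxy -instances=<project>:us-central1:<instance_name>=tcp:5432
psql "host=127.0.0.1 port=5432 user=postgres dbname=<db>"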
On SSHing into my running flex app instance and running sudo docker logs <cloud proxy container>, the last lines are, similarly:
couldn't connect to "<instance_name>:us-central1:<instance_name>":
dial tcp <ip address>:3307: getsockopt: connection refused
Things I've Tried/Checked
Restarted the Cloud SQL instance. The instance itself is running fine and I can access it using Cloud Shell from the console.
Checked the DB instance name and IP address: they match.
Restarted the flex App Engine instance. No change as far as I can tell.
Upgraded my local copy of cloud_sql_proxy to 1.09.
Checked quotas: I don't seem to have hit any API or simultaneous connection limits.
I'm able to connect to the SQL instance by authorizing my local IP address.
I'm able to connect to a different (but very similar) Google Cloud SQL instance using the proxy locally, so I'm not sure the proxy itself is at fault.
Any help at all would be appreciated, at this point I'm out of ideas. Thank you!
This can also be an issue if the Cloud SQL instance is configured with only a private IP address. Per a small paragraph hidden in the documentation:
The proxy does not provide a new connectivity path; it relies on existing IP connectivity. For example, you cannot use the proxy to connect with an instance using Private IP unless the proxy is using a VPC network that has been configured for private services access.
In that case, the only solution seems to be adding a public IP address to the instance.
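If you do go that route, the public address can be assigned from the CLI, for example (a sketch; <INSTANCE> is your Cloud SQL instance name):
gcloud sql instances patch <INSTANCE> --assign-ip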
I first restarted the Cloud SQL instance. That did not help. Then I simply clicked "Stop" for the SQL instance and, once it had stopped, clicked "Start", and now it works. This is pretty random and annoying.
In my case, I had upgraded the machine type of the SQL instance earlier in the day, and it seems that on doing so Google Cloud simply "restarts" the instance, whereas what is needed is a "stop" followed by a "start". This is only a guess.
tl;dr: Stop and then Start the Cloud SQL instance. Don't Restart, as "Restart" != "Stop + Start".
Hope it helps others who face this random issue.
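For reference, the same stop/start cycle can be driven from the CLI, roughly like this (a sketch; <INSTANCE> is your Cloud SQL instance name, and stop/start is expressed via the activation policy):
gcloud sql instances patch <INSTANCE> --activation-policy=NEVER
gcloud sql instances patch <INSTANCE> --activation-policy=ALWAYS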
We ended up "fixing" the problem by rolling back to an earlier backup. Google Support noted "the issue started right around the maintenance window for your Cloud SQL instance, so it's possible that a change was made that caused the connection to break".