Zeppelin Remote endpoint 'localhost:47811' is not accessible (might be initializing) - apache-zeppelin

I want to deploy a Zeppelin notebook in my Mesos cluster, where I have already installed a Spark cluster. I manage to deploy the Spark script (I can see the driver ID), but nothing happens: it stays in running mode without executing anything.
In the Zeppelin logs I see this entry repeatedly:
Remote endpoint 'localhost:47811' is not accessible (might be initializing)
And at the end it gives an error:
Caused by: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
But the Spark driver is still running...
Any idea what is happening?
Thanks.

Well, after hours of investigation I managed to configure Zeppelin in my Mesos cluster. The fault was that I had a MASTER environment variable set in my Spark distribution. After removing it and following the Zeppelin documentation, everything worked!
Thanks.
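For anyone hitting the same thing: a quick, purely illustrative way to check whether a stray MASTER value is still reaching the interpreter process is to print it from a notebook paragraph. This sketch assumes the Python/PySpark interpreter is enabled; it is a diagnostic aid, not part of the fix itself.

import os

# If this prints anything other than None, a MASTER environment variable is
# still leaking in from the Spark distribution and can conflict with the
# master configured in Zeppelin's interpreter settings (the culprit in the
# answer above).
print(os.environ.get("MASTER"))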

Related

Postgresql abnormal database system shutdown

I am very new to both AWS EC2 instances and PostgreSQL. I was able to get a database up and working for a web application, mainly using the phpPgAdmin interface. Today I was working on my web app and was abruptly disconnected from the database. In my web app I got the error:
pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
I now can't log in to phpPgAdmin or connect to Postgres from my EC2 instance's command line. After inspecting the log file at /var/log/postgresql/postgresql-12-main.log I am seeing the error:
FATAL: could not open file "global/pg_filenode.map": Permission denied
followed by:
LOG: abnormal database system shutdown
My pg_hba.conf and postgresql.conf files are configured correctly, as I have been doing work on this database for a few days now. Any help would be appreciated.
You or something else wrecked your PostgreSQL installation by changing the permissions on, or ownership of, PostgreSQL files. On Windows, with its interesting concepts of file locking, I'd suspect an anti-virus program, but you seem to be running some kind of Unix. Little more can be said with the little information in the question.

Connectivity issues between SqlAlchemy/PyODBC and SQL Server running in Docker

I have a flask app that uses SQLAlchemy (v1.3.17)/PyODBC (v4.0.30) to connect to a SQL Server. In my development setup, the SQL server runs inside a Docker container using mcr.microsoft.com/mssql/server:2017-latest-ubuntu image with Docker Desktop for Mac (v2.3.0.3).
It has been working with this setup for a while now (~6 months), but for the last month or so I have been running into the following error all the time:
(pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x274C (10060)
It happens intermittently, at different places in the code, and when I rerun the same code again it doesn't occur. So I don't think it is anything fundamentally wrong with my code.
10060 appears to be a connection error, but since it is able to connect fine initially, I am thinking that something is failing in pyodbc when it tries to keep the connection alive.
Has anyone else run into something similar? Is there a timeout value that I need to set in my SQL Server config? Thankfully, on the production server (which isn't running in Docker) I am not running into this problem, but it happens every couple of minutes locally, so it is very frustrating when I am debugging.
Thanks Ed, Gin (feel free to post as an answer if you would like me to endorse it). http://github.com/sqlalchemy/sqlalchemy/issues/5148 describes the same issue, and the 'pool_pre_ping' engine option suggested there seems to have done the trick.
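For reference, a minimal sketch of what enabling that option looks like; the connection URL, credentials, and database name below are placeholders, assuming pyodbc with ODBC Driver 17 for SQL Server:

from sqlalchemy import create_engine, text

# pool_pre_ping tests each connection with a lightweight statement when it is
# checked out of the pool and transparently replaces connections that the
# server (or the Docker network) has silently dropped.
engine = create_engine(
    "mssql+pyodbc://sa:YourPassword@localhost:1433/mydb"
    "?driver=ODBC+Driver+17+for+SQL+Server",
    pool_pre_ping=True,
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())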

.Net Core app in docker container can't connect to SQL Server

So I know there are a load of questions on SO related to this already, but I think at this point I've read them all, tried all the suggestions, and still haven't found a resolution.
I've got a simple .NET Core MVC app with a connection to a local MSSQL database. I have been unable to get it to connect to SQL Server when running it in a container... I just get an error that a connection couldn't be established. When run in IIS Express it connects fine.
My connection string is:
Data Source=10.11.56.36,1433;Initial Catalog=TestDB;Integrated Security=false;User id=testdb;Password=######;MultipleActiveResultSets=True
My container is launched via:
docker run -it -p 8080:80 testing
Here are the things I've attempted so far:
Ensured the SQL server is configured to accept remote connections
Used "host.docker.internal" for server name
Pinged the SQL server IP to ensure it's accessible from the container
Verified port 1433 is allowed through the firewall
Tried a different port and configured SQL server to listen on that
Tried without the port in the connection string
I've tried a host of other things as well in the last few hours of beating my head against this, but I've made no progress at all. What am I missing?
Any help would be greatly appreciated.
If you are running a Linux container, please check whether this bug is the reason for your error:
https://github.com/dotnet/corefx/issues/29147
If so, the fix is to add the following to the runtime image in your Dockerfile:
RUN apk add --no-cache icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
I just had this issue. I spent 3 days on it, and the fix for me was to use a fully qualified domain name for the server:
MyServerName.MyDomain.com

Connection refused to running Google Cloud SQL instance via proxy or from App Engine

I'm very new to Google Cloud and running applications in general. I currently have a Django app running in a Docker container on Google Flexible App Engine that connects to a Google Cloud SQL (PostgreSQL) instance in the same project. The latest version has been running for about 3 days now without issue.
The Problem:
Today I started repeatedly receiving "OperationalError: server closed the connection unexpectedly" errors from the application.
I can run the Cloud SQL Proxy and it starts up normally (Ready for new connections), but if I try to connect with psql, I receive the error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
And the proxy reports:
couldn't connect to "<instance_name>:us-central1:<instance_name>":
dial tcp <ip address>:3307: connect: connection refused
On SSHing into my running flex app instance and running sudo docker logs <cloud proxy container>, the last lines are, similarly:
couldn't connect to "<instance_name>:us-central1:<instance_name>":
dial tcp <ip address>:3307: getsockopt: connection refused
Things I've Tried/Checked
Restarted the Cloud SQL instance. The instance itself is running fine and I can access it using Cloud Shell from the console.
Checked db instance name and ip address - they match.
Restarted the flex app engine instance. No change as far as I can tell.
Upgraded my local copy of cloud_sql_proxy to 1.09.
Checked quotas - I don't seem to have hit any API or simultaneous connection limits.
I'm able to connect to the SQL instance by authorizing my local IP address.
I'm able to connect to a different (but very similar) Google Cloud SQL instance using the proxy locally so I'm not sure if the proxy is at fault.
Any help at all would be appreciated, at this point I'm out of ideas. Thank you!
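For reference, the same check that psql performs can be scripted; a minimal sketch with psycopg2 is below. The database name and credentials are placeholders, and it assumes the proxy is listening locally on 127.0.0.1:5432.

import psycopg2

try:
    # The Cloud SQL Proxy listens locally and forwards traffic to the
    # instance, so the client connects to 127.0.0.1 rather than to the
    # instance's IP address.
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=5432,
        dbname="mydb",
        user="myuser",
        password="mypassword",
        connect_timeout=10,
    )
    print("connected")
    conn.close()
except psycopg2.OperationalError as exc:
    # This mirrors the "server closed the connection unexpectedly" failure
    # reported above.
    print("connection failed:", exc)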
This could also be an issue if the Cloud SQL instance is configured with only a private IP address. Per a small paragraph hidden in the documentation:
The proxy does not provide a new connectivity path; it relies on existing IP connectivity. For example, you cannot use the proxy to connect with an instance using Private IP unless the proxy is using a VPC network that has been configured for private services access.
In this case, the only solution seems to be adding a public IP address to the instance.
I first restarted the Cloud SQL instance. That did not help. Then I simply clicked "Stop" for the SQL instance and, once it had stopped, clicked "Start", and now it works. This is pretty random and annoying.
In my case, I had upgraded the machine type of the SQL instance earlier in the day, and it seems that doing so makes Google Cloud simply "restart" the instance, whereas what is needed is a "stop" followed by a "start". This is only a guess.
tl;dr Stop and then Start the Cloud SQL Instance. Don't Restart as "Restart" != "Stop + Start"
Hope it helps others who face this random issue.
We ended up "fixing" the problem by rolling back to an earlier backup. Google Support noted "the issue started right around the maintenance window for your Cloud SQL instance, so it's possible that a change was made that caused the connection to break".

No suitable driver for sqlserver

I am working on an application that accesses a Microsoft SQL Server database (MSSQL).
The application runs perfectly locally, but when I deploy it on the Tomcat server on Unix I get this error:
Unable to get driver instance for jdbcUrl=jdbc:sqlserver://...
....
Caused by: java.sql.SQLException: No suitable driver
It is really strange because I verified that the jar (sqljdbc42.jar) is located in the correct place.
Also, the URL "jdbcUrl=jdbc:sqlserver://..." is correct, since it works fine locally.
Do you have any solution?
Thank you.
