Linking a SQL Server with a Liferay instance running in a Docker container - sql-server

So as the title says, I'm trying to run Liferay inside a Docker container and, from there, connect to a database on an outside node.
I can successfully ping the server that SQL Server is running on from inside the Docker container. However, when I try to connect to the database through Liferay's configuration interface, it simply says a connection could not be established, and the logs state that the login for the user failed.
If it's not possible, I understand; I'm just trying to get a better idea of this little mess.
======================================================================
Just to note, I've been using snasello's Docker image for Liferay, except taking out the preconfigured database to force Liferay to go to the configuration page. I'm starting the container with
docker run --rm -it -p 8080:8080 {whatever the local name of the image is}
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:1851] com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#3b17c58d -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Cannot open database "lportal" requested by the login. The login failed.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:368)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2820)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2258)
at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:603)
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:345)
at net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50)
at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:146)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:195)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:211)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1086)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1073)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:44)
at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:648)
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,303 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#9][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,304 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#1][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.

You should link the MySQL container to the Liferay container using the --link Docker flag. The alias you provide for the MySQL container should be db_lep.
docker run -d --name mysqldb --env-file=.credentials mysql
docker run -d --link mysqldb:db_lep -p 8080:8080 {whatever the local name of the image is}
If you look at https://github.com/snasello/docker-liferay-6.2/blob/master/lep/portal-bd-MYSQL.properties, the host for the database is db_lep. If you provide your own properties file, then you should change the alias to whatever is in your properties. If you are using localhost, then instead of linking you should make the containers share the same network (localhost).
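Since --link is legacy, an equivalent setup with a user-defined bridge network would be (a sketch; the network name here is illustrative):
docker network create liferay-net
docker run -d --name mysqldb --network liferay-net --network-alias db_lep --env-file=.credentials mysql
docker run -d --network liferay-net -p 8080:8080 {whatever the local name of the image is}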

Rechecking the errors, it turned out there was an issue with SQL Server's authentication. Solved via this helpful post.
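For anyone hitting the same error, a quick way to test the login outside of Liferay is sqlcmd (a sketch; the user and password are placeholders, and lportal is the database named in the log above):
sqlcmd -S {sql-server-host} -U {db-user} -P {password} -d lportal -Q "SELECT 1"
If that fails with the same "Cannot open database" message, the problem is on the SQL Server side (authentication mode or database permissions), not in Docker networking.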
Thanks guys!

Related

Connecting to an external HTTP API behind a proxy from NiFi

I have an apache/nifi:latest instance running inside an Amazon Linux 2 EC2 instance. For reference, see this guide: here
I have a QuerySalesforceObject (ver. 1.18.0) processor that makes use of StandardOauth2AccessTokenProvider.
The OAuth2 provider URL is configured as https://test.salesforce.com/services/oauth2/token
I can curl this URL from the box and from inside the Docker container just fine (I don't get a timeout).
[root@ip-10-229-18-107 ~]# docker exec -it nifi_container_persistent /bin/sh
printenv | grep -i proxy
HTTPS_PROXY=http://proxy.MY_DOMAIN.com:3128
no_proxy=localhost,127.0.0.1,MY_DOMAIN.com,.amazonaws.com
NO_PROXY=localhost,127.0.0.1, MY_DOMAIN.com,.amazonaws.com
https_proxy=http://proxy.MY_DOMAIN.com:3128
http_proxy=http://proxy.MY_DOMAIN.com:3128
HTTP_PROXY=http://proxy.MY_DOMAIN.com:3128
curl https://test.salesforce.com/services/oauth2/token
{"error":"unsupported_grant_type","error_description":"grant type not supported"}#
But when I run the task, OAuth2 fails with an error:
java.io.UncheckedIOException: OAuth2 access token request failed
Caused by: java.net.SocketTimeoutException: connect timed out
This leads me to believe the proxy settings are not being honored by the class. How can I fix this?
Here’s more info on this class: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-oauth2-provider-nar/1.17.0/org.apache.nifi.oauth2.StandardOauth2AccessTokenProvider/index.html
The standard way to interface with HTTP resources behind a proxy in NiFi is via StandardProxyConfigurationService: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-proxy-configuration-nar/1.19.1/org.apache.nifi.proxy.StandardProxyConfigurationService/index.html
If a component does not have this property, it does not support a proxy.
You can try bootstrapping proxy settings into NiFi with /opt/nifi/nifi-current/conf/bootstrap.conf, but there is no standard, and proxy support is not guaranteed; the implementation (bugs and all) depends on the library. aws-java-sdk ver. 1.x, for example, has a bug where nonProxyHosts is not honoured: https://github.com/aws/aws-sdk-java/issues/2797
java.arg.18=-Dhttp.nonProxyHosts="foo|localhost|*.bar.org"
java.arg.19=-Dhttp.proxyHost=proxy.foo.com
java.arg.20=-Dhttp.proxyPort=123
java.arg.21=-Dhttp.proxyUser=foo
java.arg.22=-Dhttp.proxyPassword=bar
java.arg.23=-Dhttps.nonProxyHosts="foo|localhost|*.bar.org"
java.arg.24=-Dhttps.proxyHost=proxy.foo.com
java.arg.25=-Dhttps.proxyPort=123
java.arg.26=-Dhttps.proxyUser=foo
java.arg.27=-Dhttps.proxyPassword=bar
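If you go this route, it's worth confirming the flags actually reached the JVM. A quick check from inside the container (container name taken from the question above) might be:
docker exec -it nifi_container_persistent bash -c 'ps -ef | grep -i proxyHost'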

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT on remote but works fine on localhost

I have a React with ASP.NET Core website. It worked fine on localhost, but when published on the remote IIS server the timeout error occurs.
The front-end (React client) and back-end (ASP.NET Core Web API server) work independently.
Before uploading, I changed the following in Program.cs in the Web API:
UseUrls("https://localhost:4000")
to UseUrls("https://www.virtualcollege.pk:4000")
I also changed the front-end base URL similarly.
Moreover, the connection strings in appsettings.json are correct for both databases.
I added migrations and updated the databases successfully.
The website is live, but the timeout error occurs:
virtualcollege.pk
I also tried the URL with "https://myip-address:4000".
Thanks in advance for help.
If I remove the port number from the URL, publish to a local folder, and then upload to the remote server, the webapi.exe on the local machine runs as follows:
You have to open incoming requests on port 4000. Try one of the methods below.
Windows Server
Please check this link or this one
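For reference, a typical inbound rule on Windows Server looks something like this (a sketch only; the linked posts cover the details):
netsh advfirewall firewall add rule name="AspNetCore4000" dir=in action=allow protocol=TCP localport=4000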
Ubuntu/Debian
sudo ufw allow 4000/tcp
sudo ufw status // check status
CentOS
First, you should disable SELinux. Edit the file /etc/sysconfig/selinux so it looks like this:
SELINUX=disabled
SELINUXTYPE=targeted
Save file and restart system.
Then you can add the new rule to iptables:
iptables -A INPUT -m state --state NEW -p tcp --dport 4000 -j ACCEPT
and restart iptables with /etc/init.d/iptables restart
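Whichever OS you are on, you can then verify from a remote machine that the port is actually reachable, for example with netcat (hostname taken from the question):
nc -zv www.virtualcollege.pk 4000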

Dockerized PostgreSQL log to both /log & `docker logs`?

My PostgreSQL 11.6 is running inside a Docker container based on an existing image. Running docker logs my-postgres shows the log messages produced by the dockerized PostgreSQL instance.
Problem: I am trying to set up the system such that PostgreSQL logs the messages to a log file in /var/lib/postgresql/data/log and still be able to show the log messages when you run docker logs my-postgres.
Logging to a file works when /var/lib/postgresql/data/postgresql.conf is modified to contain the following:
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
However, running docker logs my-postgres now shows
2020-02-23 18:01:49.388 UTC [1] LOG: redirecting log output to logging collector process
2020-02-23 18:01:49.388 UTC [1] HINT: Future log output will appear in directory "log".
and new log messages no longer appear here.
Is it possible to log to the log file and also show the same log messages in docker logs my-postgres?
docker-compose.yml
version: '3.3'
services:
  my-postgres:
    container_name: my-postgres
    image: timescale/timescaledb:latest-pg11
    ports:
      - 5432:5432
By default, Docker uses the json-file logging driver. This driver saves everything from the container's stdout and stderr into /var/lib/docker/containers/<container-id>/<container-id>-json.log on your Docker host; the docker logs command just reads from that file. By default, PostgreSQL logs to stderr (#log_destination = 'stderr').
You enabled the logging collector, which catches the logs sent to stderr and saves them to the filesystem instead. This is the reason you don't see them anymore in the docker logs output. I don't see anywhere in the PostgreSQL documentation how to send logs both to a file and to stderr, though I'm no expert on PostgreSQL.
https://www.postgresql.org/docs/9.5/runtime-config-logging.html
Containers should log to stderr and stdout. Configuring the containerized process to log to a file inside the container's filesystem is considered bad practice: you will lose the filesystem when the container dies unless you attach the folder as a volume.
If you insist, your only chance is to change the log_filename setting to something static like postgresql.log and create a symbolic link pointing it to stderr (overwriting the postgresql.log file created by PostgreSQL): ln -fs /dev/stderr /var/lib/postgresql/data/log/postgresql.log
I haven't tested this solution, and I have certain doubts about the logging collector and its log rotation capabilities. I have no idea what happens to postgresql.log when the log file is rotated; maybe it's copied and the original file deleted/recreated by Postgres, and you'd lose your link. You can try disabling the log_truncate_on_rotation boolean to overcome this, but I'm not sure it would help, to be honest.
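Putting that together, an untested sketch of the whole approach (container name from the docker-compose.yml above):
# 1) in postgresql.conf, set a static name: log_filename = 'postgresql.log'
# 2) replace that file with a link to stderr, then restart the container:
docker exec my-postgres ln -fs /dev/stderr /var/lib/postgresql/data/log/postgresql.log
docker restart my-postgres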

Unable to connect to database: SequelizeConnectionRefusedError: connect ECONNREFUSED

Whenever I launch my app, part of my task is to first run a migration; however, I get the error below 95% of the time.
Command failed: /bin/sh -c node_modules/.bin/sequelize db:migrate
Unable to connect to database: SequelizeConnectionRefusedError: connect ECONNREFUSED
Details:
killed: false
code: 1
signal: null
cmd: /bin/sh -c node_modules/.bin/sequelize db:migrate
stdout:
Sequelize [Node: 0.12.7, CLI: 2.1.0, ORM: 3.14.0]
Note: I can still query and connect to the database after the failure. Also, when I check TCP with lsof -i tcp:5432, only one instance of postgres is running.
I would appreciate any assistance in solving this issue.
I had a similar issue, but in my case it turned out I hadn't started my Postgres. Make sure your Postgres is started and open, otherwise Sequelize will fail. I just solved mine. Also mind your username and password; my connection is simply the below.
var db = 'postgres://user:password@localhost:5432/my_db_name'
var sequelize = new Sequelize(db);
where user is your database user and my_db_name is your db name. See the doc here. I suggest you install a Postgres client, like the ones at http://postgresapp.com/documentation/gui-tools.html. Make sure you open it and the elephant sign is shown on top. Or better, connect with your username and password, run some queries, and then try your Sequelize again.
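A quick way to confirm Postgres is actually up and accepting connections before running the migration (pg_isready ships with the Postgres client tools):
pg_isready -h localhost -p 5432 && node_modules/.bin/sequelize db:migrate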

Possible? How to set up VNC in a Google Managed VM Environment

I'm using Java but this isn't necessarily a Java question. Google's "java-compat" image is Debian (3.16.7-ckt20-1+deb8u3~bpo70+1 (2016-01-19)).
Here is my Dockerfile:
FROM gcr.io/google_appengine/java-compat
RUN apt-get -qqy update && apt-get -qqy install curl xvfb x11vnc
RUN mkdir -p ~/.vnc
RUN x11vnc -storepasswd xxxxxxxx ~/.vnc/passwd
EXPOSE 5900
ADD . /app
And in the Admin Console I created a firewall rule to open up 5900. And lastly I am calling the vnc server itself in the "_ah/start" startup hook with this command:
x11vnc -forever -usepw -create
All seems to be setup correctly but I'm unable to connect with TightVNC. I use the public (ephemeral) IP address for the instance I find in the Admin Console followed by ::5900 (TightVNC requires two colons for some reason). I'm getting a message that the server refused the connection. And indeed when I try to telnet to port 5900 it's blocked.
Next I SSH into the container machine and when I test the port on the container with wget xxx.xxx.xxx.xxx:5900 I get a connection. So it seems to me the container is not accepting connections on port 5900. Am I getting this right? Is it possible to open up ports and route my VNC client into the docker container? Any help appreciated.
Why I can't use Compute Engine. Just to preempt some comments about using google's Compute Engine environment instead of Managed VMs. I make heavy use of the Datastore and Task Queues in my code. I don't think those can run (or run natively/efficiently) on Compute Engine. But I may pose that as a separate question.
Update: Per Paul in the comments... having learned some of the docker terminology: Can I publish a port on the container in Google's environment?
Out of curiosity - why are you trying to VNC into your instances? If it's just for management purposes, you can SSH into Managed VM instances.
That having been said - you can use the network/forwarded_ports config to route traffic from the VM to the application container:
network:
  forwarded_ports:
    - 5900
  instance_tag: vnc
Put that in your app.yaml, and re-deploy your app. You'll also need to open the port in your firewall (if you intend on accessing this from the public internet):
gcloud compute firewall-rules create default-allow-vnc \
--allow tcp:5900 \
--target-tags vnc \
--description "Allow vnc traffic on port 5900"
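You can confirm the rule exists afterwards with:
gcloud compute firewall-rules list --filter="name=default-allow-vnc"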
Hope this helps!
