Dockerized PostgreSQL: log to both /log and `docker logs`?

My PostgreSQL 11.6 instance runs inside a Docker container based on an existing image. Running docker logs my-postgres shows the log messages produced by the dockerized PostgreSQL instance.
Problem: I am trying to set up the system so that PostgreSQL writes its log messages to a file in /var/lib/postgresql/data/log while the same messages still appear when running docker logs my-postgres.
Logging to a file works after modifying /var/lib/postgresql/data/postgresql.conf to contain the following:
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
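Note that logging_collector can only be changed at server start, so the container must be restarted after editing the file; the rotation settings only need a reload. A quick sanity check that the setting took effect (assuming the image's default postgres superuser):
docker exec -it my-postgres psql -U postgres -c "SHOW logging_collector;"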
However, running docker logs my-postgres now shows
2020-02-23 18:01:49.388 UTC [1] LOG: redirecting log output to logging collector process
2020-02-23 18:01:49.388 UTC [1] HINT: Future log output will appear in directory "log".
and new log messages no longer appear here.
Is it possible to log to the log file and also show the same log messages in docker logs my-postgres?
docker-compose.yml
version: '3.3'
services:
  my-postgres:
    container_name: my-postgres
    image: timescale/timescaledb:latest-pg11
    ports:
      - 5432:5432

By default, Docker uses the json-file logging driver. This driver saves everything from the container's stdout and stderr into /var/lib/docker/containers/<container-id>/<container-id>-json.log on your Docker host; the docker logs command just reads from that file. By default, PostgreSQL logs to stderr (the commented-out default #log_destination = 'stderr' in postgresql.conf).
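You can confirm which logging driver the container uses (a quick check, using the container name from the question):
docker inspect -f '{{.HostConfig.LogConfig.Type}}' my-postgres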
You enabled the logging collector, which catches the logs sent to stderr and saves them to the filesystem instead. That is why they no longer appear in the docker logs output. I don't see anything in the PostgreSQL documentation about sending logs both to a file and to stderr, though I'm no PostgreSQL expert.
https://www.postgresql.org/docs/9.5/runtime-config-logging.html
Containers should log to stdout and stderr. Configuring the containerized process to log to a file inside the container's filesystem is considered bad practice: you will lose those logs when the container dies, unless you mount the folder as a volume.
If you insist, your only option is to change the log_filename config line to something static such as postgresql.log, and then replace the postgresql.log file created by PostgreSQL with a symbolic link pointing to stderr: ln -fs /dev/stderr /var/lib/postgresql/data/log/postgresql.log.
I haven't tested this solution, and I have some doubts about the logging collector and its log rotation capabilities. I don't know what happens to postgresql.log when the log file is rotated; perhaps it is copied and the original file is deleted and recreated by PostgreSQL, in which case you would lose your link. You could try disabling the log_truncate_on_rotation boolean to overcome this, but I'm honestly not sure it would help.
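A minimal sketch of wiring the static filename into the compose file, assuming the image's entrypoint passes extra arguments through to postgres the way the official postgres-based images do (untested, per the caveats above):
version: '3.3'
services:
  my-postgres:
    container_name: my-postgres
    image: timescale/timescaledb:latest-pg11
    command: postgres -c logging_collector=on -c log_directory=log -c log_filename=postgresql.log
    ports:
      - 5432:5432
The symbolic link itself would still have to be created inside the data directory, e.g. via docker exec once the container is up.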

Related

Error in Google App Engine - Log Service - SQLite

I am using Google App Engine on Ubuntu within the Windows Subsystem for Linux.
When I start dev_appserver.py, I receive errors ending with the following lines, which I understand to indicate a corrupted SQLite data file.
File "/../google-cloud-sdk/platform/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 181, in start_request
host, start_time, method, resource, http_version, module))
DatabaseError: database disk image is malformed
Based on this post, I understand that a log.db is referenced:
GoogleAppEngineLauncher: database disk image is malformed
However, when I run the referenced script, the resulting path does not contain a log.db, leading me to believe this is a different issue.
Any help identifying the appropriate database file to remove would be appreciated.
Per a comment, I added --clear_datastore=1 and did not notice a change:
dev_appserver.py --host 127.0.0.1 --port 8080 --admin_port 8082 --storage_path=temp/storage --skip_sdk_update_check true --clear_datastore=1 main/app.yaml main/sync.yaml
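The dev server keeps its SQLite-backed stub data under the directory given via --storage_path. A sketch for enumerating candidate database files to inspect or remove, assuming the relative temp/storage path from the command above:
find temp/storage -type f \( -name '*.db' -o -name '*.sqlite*' \)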

Neo4j - Error when having the data folder on external hard drive

I can start Neo4j without issues using default settings.
Now I try to have my Neo4j data in a folder of an external hard drive because I don't have enough space on my local machine for an import.
So in my neo4j.conf I have set: dbms.directories.data=/Volumes/INTENSO/richRich/neo4j-data. Then on neo4j start, a databases folder gets created, but the database shuts down immediately after starting. This is what I perceived as useful from the debug.log stack trace:
...
2018-07-22 11:25:55.745+0000 INFO [o.n.k.i.DiagnosticsManager] --- INITIALIZED diagnostics END ---
2018-07-22 11:25:55.912+0000 INFO [o.n.b.BoltKernelExtension] Bolt Server extension loaded.
2018-07-22 11:25:56.069+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected format 'RecordFormat:StandardV3_4[v0.A.9]' for the new store
2018-07-22 11:25:56.118+0000 WARN [o.n.k.NeoStoreDataSource] Exception occurred while setting up store modules. Attempting to close things down. Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
org.neo4j.kernel.impl.store.UnderlyingStorageException: Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
Caused by: org.neo4j.io.pagecache.impl.FileLockException: Already locked: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
I checked /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels and the file is there. I also ran chmod -R 777 on everything to rule out permission issues.
OS: latest macOS
Neo4j: 3.4.4 Community Edition
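One possibility worth ruling out (an assumption, not a confirmed diagnosis): external drives are often formatted FAT32 or exFAT, which do not provide the POSIX file locking Neo4j relies on, and that can surface as a FileLockException even when only one process is running. Checking the volume's filesystem on macOS:
diskutil info /Volumes/INTENSO | grep -i 'file system'
Also make sure no other Neo4j process is still running and holding the lock, e.g. with ps aux | grep neo4j.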

IBM Cloud Private-Community Edition - Waiting for cloudant database initialization

I tried the command below:
docker run --rm -t -e LICENSE=accept --net=host -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install
and the response is:
Waiting for cloudant initialization
I entered the command and received the logs shown in the image; no error is shown. Please suggest a solution.
From the error message, the cloudant database initialization issue may be caused by the cloudant Docker image still being pulled from Docker Hub during the ICP installation. The cloudant Docker image is big; you can run the command below to check whether the image is ready in your environment.
$ docker images | grep icp-datastore
If the cloudant Docker image is ready in your environment and the ICP installation still hits the cloudant database initialization issue, you can try installing the latest ICP 2.1.0.3 Community Edition; as of 2.1.0.3, ICP no longer uses the cloudant database. The ICP 2.1.0.3 installation documentation:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/installing/install_containers_CE.html
If you still want to debug the cloudant database initialization issue in an ICP 2.1.0.1 environment, you can:
Ensure your ICP nodes meet the system and hardware requirements first:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/system_reqs.html
Share your ICP installation configuration; check the contents of the config.yaml and hosts files.
Check the system logs (in the /var/log/messages or /var/log/syslog file) to find the relevant errors, as in the sketch after this list.
Run the docker logs <container_name> command to check the logs for errors.
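A sketch for the system-log check (the log file path varies by distro, hence both candidates):
sudo grep -iE 'cloudant|icp-datastore' /var/log/syslog /var/log/messages 2>/dev/null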

Mesosphere installation PermissionError:/genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
The output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (running as root), and saw:
07:53:57:: Root page requested. 07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any ideas, please let me know. Thank you very much!
Disable SELinux!
Set SELINUX=disabled in the /etc/selinux/config file and then reboot!
Ensure SELinux is disabled with the getenforce command:
$ getenforce
Disabled
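A one-liner for the config change (a sketch; assumes the stock key=value layout of /etc/selinux/config):
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Reboot afterwards for the change to take effect.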
Correctly installing the Enterprise Edition depends on meeting the system prerequisites. I assume you are still on the bootstrap node, so here is a path to succeed in your current task.
Run the script as root, or as a user via sudo: sudo bash dcos_generate_config.ee.sh
The script will generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put it inside before running the script, as in the sketch below. You should change the values inside <> to your specific configuration. If you need more help for your specific case, send me an email at infofs2 at gmail.com.
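A minimal sketch of that setup, run from the directory containing the installer (my-config.yaml is a placeholder for your own filled-in file):
mkdir -p genconf
cp my-config.yaml genconf/config.yaml
sudo bash dcos_generate_config.ee.sh --web -v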

Linking a SQL Server with a liferay instance running in a docker container

So as the title says, I'm trying to run Liferay inside a Docker container and, from there, connect to a database on an outside node.
I can successfully ping the server that SQL Server runs on from inside the Docker container. However, when I try to connect to the database through Liferay's configuration interface, it simply says a connection could not be established, and the logs state that the login for the user failed.
If it's not possible, I understand; I'm just trying to get a better idea of this little mess.
======================================================================
Just to note, I've been using snasello's Docker image for Liferay, except taking out the preconfigured database to force Liferay to go to the configuration page. I'm starting the container with
docker run --rm -it -p 8080:8080 {whatever the local name of the image is}
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:1851] com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#3b17c58d -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Cannot open database "lportal" requested by the login. The login failed.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:368)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2820)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2258)
at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:603)
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.(ConnectionJDBC2.java:345)
at net.sourceforge.jtds.jdbc.ConnectionJDBC3.(ConnectionJDBC3.java:50)
at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:146)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:195)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:211)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1086)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1073)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:44)
at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:648)
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,303 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#9][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,304 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#1][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool#80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
You should link the MySQL container to the Liferay container using Docker's --link flag. The alias you give the MySQL container should be db_lep.
docker run -d --name mysqldb --env-file=.credentials mysql
docker run -d --link mysqldb:db_lep -p 8080:8080 {whatever the local name of the image is}
If you look at https://github.com/snasello/docker-liferay-6.2/blob/master/lep/portal-bd-MYSQL.properties, the host for the database is db_lep. If you provide your own properties file, change the alias to whatever is in your properties. If you are using localhost, then instead of linking you should make the containers share the same network, as sketched below.
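A sketch of the shared-network alternative to --link (a user-defined bridge network lets containers resolve each other by name; the network and container names here are illustrative):
docker network create liferay-net
docker run -d --name mysqldb --network liferay-net --env-file=.credentials mysql
docker run -d --name liferay --network liferay-net -p 8080:8080 {whatever the local name of the image is}
With this setup, the database host in the properties file would be mysqldb instead of db_lep.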
Rechecking the errors, it turned out there was an issue with SQL Server's authentication. Solved via this helpful post.
Thanks guys!
