Neo4j - Error when having the data folder on an external hard drive

I can start Neo4j without issues using default settings.
Now I'm trying to keep my Neo4j data in a folder on an external hard drive, because I don't have enough space on my local machine for an import.
So in my neo4j.conf I have set: dbms.directories.data=/Volumes/INTENSO/richRich/neo4j-data. On neo4j start a databases folder gets created, but the database shuts down immediately after starting. Here is the relevant part of the debug.log stack trace:
...
2018-07-22 11:25:55.745+0000 INFO [o.n.k.i.DiagnosticsManager] --- INITIALIZED diagnostics END ---
2018-07-22 11:25:55.912+0000 INFO [o.n.b.BoltKernelExtension] Bolt Server extension loaded.
2018-07-22 11:25:56.069+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected format 'RecordFormat:StandardV3_4[v0.A.9]' for the new store
2018-07-22 11:25:56.118+0000 WARN [o.n.k.NeoStoreDataSource] Exception occurred while setting up store modules. Attempting to close things down. Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
org.neo4j.kernel.impl.store.UnderlyingStorageException: Unable to open store file: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
Caused by: org.neo4j.io.pagecache.impl.FileLockException: Already locked: /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
...
I checked /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels and the file is there. I also ran chmod -R 777 on everything to make sure there are no permission issues.
OS: latest macOS
Neo4j: 3.4.4 Community Edition
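Two quick checks that may narrow this down (a diagnostic sketch, not a confirmed fix; the filesystem suspicion is only an assumption based on this being an external drive):
# is another process (e.g. a second Neo4j instance) still holding the lock?
lsof /Volumes/INTENSO/richRich/neo4j-data/databases/graph.db/neostore.nodestore.db.labels
# how is the external volume formatted? FAT32/exFAT volumes are known to be
# problematic for the file locking Neo4j's page cache relies on
diskutil info /Volumes/INTENSO | grep -i 'file system'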

Related

Dockerized PostgreSQL log to both /log & `docker logs`?

My PostgreSQL 11.6 is running inside a Docker container based on an existing image. Running docker logs my-postgres shows the log messages produced by the dockerized PostgreSQL instance.
Problem: I am trying to set things up so that PostgreSQL logs its messages to a file in /var/lib/postgresql/data/log while docker logs my-postgres still shows them.
Logging to a file works after modifying /var/lib/postgresql/data/postgresql.conf to contain the following:
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
However, running docker logs my-postgres now shows
2020-02-23 18:01:49.388 UTC [1] LOG: redirecting log output to logging collector process
2020-02-23 18:01:49.388 UTC [1] HINT: Future log output will appear in directory "log".
and new log messages no longer appear here.
Is it possible to log to the log file and also show the same log messages in docker logs my-postgres?
docker-compose.yml
version: '3.3'
services:
  my-postgres:
    container_name: my-postgres
    image: timescale/timescaledb:latest-pg11
    ports:
      - 5432:5432
By default Docker uses the json-file logging driver. This driver saves everything from the container's stdout and stderr to /var/lib/docker/containers/<container-id>/<container-id>-json.log on the Docker host; the docker logs command just reads from that file. By default PostgreSQL logs to stderr (#log_destination = 'stderr').
You enabled the logging collector, which catches the logs sent to stderr and saves them to the filesystem instead. That is why you no longer see them in the docker logs output. I don't see anywhere in the PostgreSQL documentation how to send logs both to a file and to stderr, though I'm no PostgreSQL expert:
https://www.postgresql.org/docs/9.5/runtime-config-logging.html
Containers should log to stdout and stderr. Configuring the containerized process to log to a file inside the container's filesystem is considered bad practice: you will lose the logs when the container dies, unless you mount the folder as a volume.
If you insist, your only chance is to change the log_filename setting to something static like postgresql.log and create a symbolic link (overwriting the postgresql.log file created by PostgreSQL) that points it to stderr: ln -fs /dev/stderr /var/lib/postgresql/data/log/postgresql.log.
I haven't tested this solution, and I have some doubts about the logging collector and its log rotation capabilities. I don't know what happens to postgresql.log when the log file is rotated; the original file may be deleted and recreated by Postgres, in which case you'd lose your link. You can try disabling the log_truncate_on_rotation boolean to work around this, but I'm not sure it would help.
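A minimal, untested sketch of that symlink idea against the container above (it assumes log_filename has been changed to the static name postgresql.log):
# create the link inside the running container; the collector then "writes to
# the file", which actually resolves to the container's stderr
docker exec my-postgres sh -c 'ln -fs /dev/stderr /var/lib/postgresql/data/log/postgresql.log'
# check that new messages appear again
docker logs --tail 20 my-postgres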

Error in Google App Engine - Log Service - SQLite

I am using Google App Engine on Ubuntu within the Linux Subsystem for Windows.
When I start dev_appserver.py I receive errors, with the following line resulting in what I understand to be a corrupted SQLite data file:
File "/../google-cloud-sdk/platform/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 181, in start_request
host, start_time, method, resource, http_version, module))
DatabaseError: database disk image is malformed
Based on this post, I understand there is a log.db being referenced:
GoogleAppEngineLauncher: database disk image is malformed
However, when I run the script referenced there, the resulting path does not contain a log.db, leading me to believe this is a different issue.
Any help in identifying the appropriate database, for the purposes of removing it, would be appreciated.
Per a comment, I added --clear_datastore=1 and did not notice a change:
dev_appserver.py --host 127.0.0.1 --port 8080 --admin_port 8082 --storage_path=temp/storage --skip_sdk_update_check true --clear_datastore=1 main/app.yaml main/sync.yaml
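One way to hunt for the database (a sketch; the log service file name can vary between SDK versions, and temp/storage is the --storage_path from the command above):
# list SQLite files under the dev server's storage path and confirm their type
find temp/storage -name '*.db' -exec file {} \;
# once the corrupted database is identified, remove it so the dev server
# recreates it on the next start, e.g. (hypothetical file name):
# rm temp/storage/log.db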

KNIME Command Line Execution - ClassNotFoundException

I'd like to schedule a KNIME workflow. The workflow does its job very well as long as I start it from the KNIME GUI application. When I execute the same workflow via the command line, Java complains that com.microsoft.sqlserver.jdbc.SQLServerDriver could not be found (ClassNotFoundException).
I invoke it via:
"D:\Progamme\KNIME\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"
Since the error message points to missing content on the Java CLASSPATH, I also tried adding the parameters
-vmargs -classpath .;"absolutepathto/sqljdbc42.jar"
But I still get slapped by Java with the same error...
I also tried running the command from within knime.exe's directory, and I tried adding the JAR file to Preferences -> Java -> Build Path -> Classpath Variable / User Libraries (referenced via the -preferences argument). Neither had any effect.
Did anybody face the same problems? Maybe with other third party JARs?
It is all about a Database Connector node that is configured like this (screenshot in the original post):
Does the integrated security maybe force a misleading error?
System spec: KNIME 3.2.2 on Windows Server 2008 R2
Update - extract from preferences file
/configuration/org.eclipse.core.net/org.eclipse.core.net.hasMigrated=true
/configuration/org.eclipse.ui.ide/MAX_RECENT_WORKSPACES=10
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES=<list of some workspaces>
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES_PROTOCOL=3
/configuration/org.eclipse.ui.ide/SHOW_RECENT_WORKSPACES=false
/configuration/org.eclipse.ui.ide/SHOW_WORKSPACE_SELECTION_DIALOG=true
Could there be a problem due to the fact that this KNIME instance is shared among several users, and the command line execution does not know which workspace to choose? Is the workspace needed at all, and why?
Partial Solution:
I finally managed it, but I don't know exactly why it works now. What I did was load a fresh portable version of KNIME and run the same commands, only changing the executable path to the new portable version. Before that I started the portable version once to set the workspace directory and register the database driver in the preferences dialog and the .ini file; nothing else, same configuration as the shared KNIME instance. What I am really wondering about is that from now on the commands also work with the shared KNIME instance. I really don't know what caused the change that let KNIME find the driver class.
Info
Because I encountered a few more problems with the shared environment in KNIME command line mode that led to nondeterministic execution results, I wrote a little .NET library. It gives me more flexibility and control over workflow execution (which return codes and error messages occurred, and so on). You can find it here if you're interested: KnimeNet
I took a very minimal approach:
cd "C:\Program Files\KNIME"
.\knime -nosplash -noexit -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf"
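Since the original goal was scheduling, it may help to check the batch application's exit status in a wrapper script. A PowerShell sketch, assuming the usual convention that the KNIME batch application exits with 0 on success:
cd "C:\Program Files\KNIME"
# -noexit is dropped so the process terminates when the workflow finishes;
# piping through Out-Null makes PowerShell wait for the GUI-subsystem launcher
.\knime -nosplash -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf" | Out-Null
if ($LASTEXITCODE -ne 0) {
    Write-Error "KNIME workflow failed with exit code $LASTEXITCODE"
}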

How can I uninstall google-cloud-sdk?

I tried deleting the Google Cloud AppEngine SDK from my MacBook, but I'm getting this
Last login: Thu Aug 11 14:12:18 on ttys002
-bash: /Users/Squirrel/Desktop/google-cloud-sdk/path.bash.inc: No such file or directory
-bash: /Users/Squirrel/Desktop/google-cloud-sdk/completion.bash.inc: No such file or directory
whenever I open a new terminal window. Is there a way I can stop that from happening?
Converting the comment to an answer...
Some SDK installations modify system or user profiles to include the proper setup of the SDK environment.
The first thing to check, if you haven't logged out of the system since the uninstall, is whether the errors are just side effects of the current user session, which had already picked up the SDK environment. Logging out and back in takes care of such cases.
If the error messages persist after logging out and in, the removal did not clean up the system or user profiles modified at SDK installation, and manual cleanup is required.
In your case the errors come from bash, so the files to check are either /etc/bash* or ~/.bash* - look for /Users/Squirrel/Desktop/google-cloud-sdk - the now-removed SDK installation path mentioned in the messages.
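For the manual cleanup, something along these lines should locate the leftovers (a sketch; which profile file was modified depends on how the SDK installer was run):
# find the lines that still source the removed SDK
grep -n 'google-cloud-sdk' ~/.bash_profile ~/.bashrc ~/.profile /etc/bashrc 2>/dev/null
# delete the matching lines in an editor; they typically look like:
#   source '/Users/Squirrel/Desktop/google-cloud-sdk/path.bash.inc'
#   source '/Users/Squirrel/Desktop/google-cloud-sdk/completion.bash.inc'
# then open a new terminal window to confirm the errors are gone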

Mesosphere installation PermissionError:/genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
then output:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I start Firefox through VNC (the VNC session runs as root). Then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any ideas, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled by running the getenforce command:
$ getenforce
Disabled
Correctly installing the enterprise edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a user via sudo bash dcos_generate_config.ee.sh.
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put the file inside before running the script, as sketched below. You should replace the values inside <> with your specific configuration. If you need more help with your specific case, send me an email at infofs2 at gmail.com.
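A short sketch of that layout and invocation (the /home/mesos-ee path comes from the BUILD_DIR line in the log above; my-config.yaml is a hypothetical name for your own file):
# run from the directory that contains the installer script
cd /home/mesos-ee
mkdir -p genconf
cp my-config.yaml genconf/config.yaml   # your config, with the <> values filled in
sudo bash dcos_generate_config.ee.sh --web -v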
