When we take a Solr backup without specifying a location, it works, and a backup snapshot folder is created in the data directory.
However, when specifying a folder, such as: http://localhost:8983/solr/core_name/replication?command=backup&location=/backup_directory it always fails.
Looking at the Solr logs, I see this error:
SnapShooter
Failed to delete file:///backup_directory/snapshot.20200404134436807 after snapshot creation failed due to: java.nio.file.NoSuchFileException: /backup_directory/snapshot.20200404134436807
SnapShooter
Exception while creating snapshot
I've searched for hours for a solution. It looks like others have had this issue before too with various Solr versions.
Usually these errors are caused by Solr not having write access to the directory where the backup is supposed to go. This happens because Solr in most cases runs as a different user than the one that owns the backup directory (which might be root).
You can compare the user that Solr runs under - usually shown if you issue ps aux | grep solr or similar under Linux - with the user who owns the directory - by using ls -al in the parent directory. Use chown to change ownership of the directory to the Solr user (unless it's being shared with other processes - in that case it'll depend on what you want to achieve).
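For example, on a typical Linux install the check and fix might look like this (the solr user name and the /backup_directory path are assumptions based on the question):

```shell
# See which user the Solr process runs as
ps aux | grep '[s]olr'

# Check who currently owns the backup directory
ls -ld /backup_directory

# Hand ownership to the Solr user (assumed here to be 'solr')
sudo chown -R solr:solr /backup_directory
```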
This error might also be caused by a missing trailing slash in the backup location. In that case you need to use the command:
http://localhost:8983/solr/core_name/replication?command=backup&location=/backup_directory/
Related
I'm having a lot of problems running my Solr server. When I have problems committing my CSV files (it's a 500 MB CSV), it throws an error that I'm never able to fix, which is why I try to clean out the entire index using
http://10.96.94.98:8983/solr/gettingstarted/update?stream.body=<delete><query>*:*</query></delete>&commit=true
But sometimes it just doesn't delete. In those cases, I use
bin/solr stop -all
Then I try again, but it still gives me errors when updating. So I decided to re-extract the install tarball, deleting all my previous Solr files, and that works!
I was wondering if there is a shorter way to go about it. I'm sure the index files aren't the only files that are generated. Is there any "revert to fresh installation" option?
If you are calling the update command against the right collection and you are doing commit, you should see the content deleted/reset. If that is not happening, I would check that the server/collection you are querying is actually the same one you are executing your delete command against (here gettingstarted). If that does not work, you may have found a bug. But it is unlikely.
If you really want to delete the collection, you can unload it in the Admin UI's Core page and then delete from the disk. To see where the collection is, look at the core's Overview page on the right hand side. You will see Instance variable with path to your core's directory. It could be for example: .../solr-6.1.0/example/techproducts/solr/techproducts So, deleting that directory after unloading the core will get rid of everything there.
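If you prefer to script the unload instead of using the Admin UI, the CoreAdmin API exposes the same operation (the host and core name follow the examples above; the instance directory variable is a placeholder for the path shown on the Overview page):

```shell
# Unload the core; Solr stops serving it but leaves the files on disk
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=techproducts"

# Then remove the instance directory shown on the core's Overview page
INSTANCE_DIR=/path/from/overview/page   # placeholder, not a real path
rm -rf "$INSTANCE_DIR"
```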
I am trying to create a database in linux where:
It's not in the user's home directory.
It doesn't require the client to specify the full server path to the db file.
It needs to be different from the bin directory to prevent core dump failures.
The documentation says that you can use a url like this:
jdbc:h2:file:data/sample
but this simple URL doesn't work, and I get the following error:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: A file path
that is implicitly relative to the current working directory is not
allowed in the database URL
"jdbc:h2:file:db/datadb;TRACE_LEVEL_FILE=3". Use an absolute path,
~/name, ./name, or the baseDir setting instead. [90011-187]
Observation: I know you can use ".", but what will be the url from the client in that case?
The documentation is wrong. I will update it.
jdbc:h2:file:data/sample
It should be:
jdbc:h2:file:./data/sample
Many users ran into problems because they used something like jdbc:h2:test and then either didn't find the database file, or created a second database when running the application in a different directory. That's why in version 1.4.x, relative paths only work when using ., as in jdbc:h2:./test.
By the way, you have asked this question in the H2 Google Group as well.
I have a TFS build process that drops outputs on sandbox which is another server in the same network. In other words, the build agent and sandbox are separate machines. After the outputs are created, a batch script defined within the build template does the following:
Rename existing deployment folder to some prefix + timestamp (IIS can now no longer find the app when users attempt to access it)
Move newly-created outputs to deployment location
The reason why I wanted to rename and move files instead of copy/delete/overwrite is the latter takes a lot of time because we have so many files (over 5500). I'm trying to find a way to complete builds in the shortest amount of time possible to increase developer productivity. I hope to create a windows service to delete dump folders and drop folder artifacts periodically so sandbox doesn't fill up.
The problem I'm facing is IIS maintains a handle to the original deployment folder so the batch script cannot rename it. I used Process Explorer to see what process is using the folder. It's w3wp.exe which is a worker process for the application pool my app sits in. I tried killing all w3wp.exe instances before renaming the folder, but this did not work. I then decided to stop the application pool, rename the folder, and start it again. This did not work either.
In either case, Process Explorer showed that there were still uncollected handles to my outputs except this time the owner name wasn't w3wp.exe, but it was something along the lines of unidentified process. At one point, I saw that the owner was System, but killing System's process tree shuts down the server.
Is there any way to properly remove all handles to my deployment folder so the batch script can safely rename it?
Use the Windows Sysinternals tool called Handle v4.0:
https://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
There are tools, like Process Explorer, that can find and forcibly close file handles; however, the state and behaviour of the application (both yours and, in this case, IIS) after doing this is undefined. Some won't care, some will error, and others will crash hard.
The correct solution is to allow IIS to cleanly release locks and clean up after itself to preserve server stability. If this is not possible, you can either create another site on the same box, or set up a new box with the new content, and move the domain name/IP across to "promote" the new content to production.
I have two load balanced servers each running exactly the same copy of various PHP based websites.
An admin user (or multiple admin users) hitting their site might therefore hit one or other of the servers when wanting to change content e.g. upload an image, delete a file from a media library, etc.
These operations mean one or other or both the servers go out of sync with each other and need to be brought back into line.
Currently I'm looking at rsync for this with the --delete option but am unsure how it reacts to files being deleted vs. new files being created between servers.
i.e. if I delete a file on Server A and rsync with Server B, the file should also be deleted from Server B (as it no longer exists on A). But if I separately upload a file to Server B as well as deleting a file from Server A before running the sync, will the file that got uploaded to Server B also get removed because it doesn't exist on Server A?
A number of tutorials on the web deal with a Master-Slave type scenario where Server B is a Mirror of Server A and this process just works, but in my situation both servers are effectively Masters mirroring each other.
I think rsync keeps a local history of files it's dealing with and as such may be able to deal with this problem gracefully but am not sure if this is really the case, or if it's dangerous to rely on this alone?
Is there a better way of dealing with this issue?
I wasn't happy with my previous answer. Sounds too much like somebody must have invented a way to do this already.
It seems that there is! Check out Unison. There's a GUI for it and everything.
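A minimal two-way sync with Unison might look like this (the paths are assumptions; the server name follows the question's examples, and -batch runs without prompting):

```shell
# Sync mydir on this host with mydir on serverB over SSH.
# Unison keeps its own archive of the last-synced state, so a file
# deleted on one side is deleted on the other instead of being copied back.
unison /var/www/mydir ssh://serverB//var/www/mydir -batch
```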
First, if you're doing a bidirectional rsync (i.e. running it first one way, and then the other) then you need to be using --update, and you need to have the clocks on both servers precisely aligned. If both servers write to the same file, the last write wins, and the earlier write is lost.
Second, I don't think you can use delete. Not directly anyway. The only state that rsync keeps is the state of the filesystem itself, and if that's a moving target then it'll get confused.
I suggest that when you delete a file you write its name to a file. Then, instead of using rsync --delete, do it manually with, for example, cat deleted-files | ssh serverB xargs rm -v.
So, your process would look something like this:
ServerA:
rsync -a --update mydir serverB:mydir
cat deleted-files | ssh serverB xargs rm -v
ServerB:
rsync -a --update mydir serverA:mydir
cat deleted-files | ssh serverA xargs rm -v
Obviously, the two syncs can't run at the same time, and I've left off other important rsync options: you probably want to consider --delay-updates, --partial-dir and others.
Is it possible to create/delete different databases in the graph database Neo4j like in MySQL? Or, at least, how to delete all nodes and relationships of an existing graph to get a clean setup for tests, e.g., using shell commands similar to rmrel or rm?
You can just remove the entire graph directory with rm -rf, because Neo4j is not storing anything outside that:
rm -rf data/*
Also, you can of course iterate through all nodes and delete their relationships and the nodes themselves, but that might be too costly just for testing ...
An even simpler command to delete all nodes and relationships:
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
DELETE n,r
From Neo4j 2.3,
We can delete all nodes with relationships,
MATCH (n)
DETACH DELETE n
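If you want to run that statement from a script rather than the browser, newer Neo4j versions bundle a command-line shell (the credentials here are placeholders):

```shell
# cypher-shell ships with Neo4j 3.x and later
cypher-shell -u neo4j -p password "MATCH (n) DETACH DELETE n"
```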
Currently there is no option to create multiple databases in Neo4j. You need to make multiple stores of Neo4j data. See the reference.
Creating new Database in Neo4j
Before starting Neo4j Community, click the browse option and choose a different directory, then click the start button. A new database is created in that directory.
A quick and dirty way that works fine:
bin/neo4j stop
rm -rf data/
mkdir data
bin/neo4j start
For anyone else who needs a clean graph to run a test suite - https://github.com/jexp/neo4j-clean-remote-db-addon is a great extension to allow clearing the db through a REST call. Obviously, though, don't use it in production!
Run your test code on a different neo4j instance.
Copy your neo4j directory into a new location. Use this for testing. cd into the new directory.
Change the port so that you can run your tests and use it normally simultaneously. To change the port open conf/neo4j-server.properties and set org.neo4j.server.webserver.port to an unused one.
Start the test server on setup. Do ./neo4j stop and rm -rf data/graph.db on teardown.
For more details see neo4j: How to Switch Database? and the docs.
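The steps above can be sketched as follows (the install path and port number are assumptions; the sed syntax is GNU/Linux):

```shell
# 1. Copy the live instance to a scratch location for testing
cp -r /opt/neo4j /opt/neo4j-test
cd /opt/neo4j-test

# 2. Point the test server at an unused port
sed -i 's/^org.neo4j.server.webserver.port=.*/org.neo4j.server.webserver.port=7475/' \
    conf/neo4j-server.properties

# 3. Start for the test run; on teardown:
#    bin/neo4j stop && rm -rf data/graph.db
bin/neo4j start
```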
In Neo4j 2.0.0 the ? is no longer supported. Use OPTIONAL MATCH instead:
START n=node(*)
OPTIONAL MATCH (n)-[r]-()
delete n,r;
Easiest answer is: NO
The best way to "start over" is to
move to another empty data folder
or
close Neo4j completely
empty the old data folder
restart Neo4j and set the empty folder as the data folder
There is a way to delete all nodes and relationships (as described here)
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
DELETE n,r
As of version 3 I believe it is now possible to create separate database instances and thus their location is slightly different.
Referring to: https://neo4j.com/developer/guide-import-csv/
The --into retail.db is obviously the target database, which must not contain an existing database.
On my Ubuntu box the location is in:
/var/lib/neo4j/data/databases where I currently see only graph.db which I believe must be the default.
In 2.0.0-M6 you can execute the following Cypher script to delete all nodes and relationships:
start n=node(*)
match (n)-[r?]-()
delete n,r
If you have a very large database,
`MATCH (n) DETACH DELETE n`
would take a lot of time, and the database may also get stuck (I tried to use it, but it does not work for a very large database). So here is how I deleted a larger Neo4j database on a Linux server.
First stop the running Neo4j database.
sudo neo4j stop
Second, delete the databases and transactions folders inside the data folder of the Neo4j installation. Where is the Neo4j folder? You can find the Neo4j executable path by running which neo4j; look for the data folder along that path (it is located inside the Neo4j folder). Go inside the data folder and you will see the databases and transactions folders.
rm -rf databases/
rm -rf transactions/
Restart the Neo4j server
sudo neo4j start
You can delete your data files, but if you want to go this way, I would recommend deleting just graph.db, for example. Otherwise you are going to mess up your authentication info.