How to revert an update on a single MongoDB document - database

I just updated a single document in MongoDB, my transaction went wrong, and I lost the previous data. Is there any way to get the data back as it was before the update?

If you have written the document you are trying to restore recently, and you are using a replica set, you should be able to extract the previous version of the document from the oplog. Start here.
Atlas provides a point-in-time restore feature.
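For the oplog route, here is a minimal sketch in Python of scanning recent oplog entries on a replica set member; the connection string and the "mydb.mycollection" namespace are placeholders, not values from the question.

```python
from pymongo import MongoClient

# Hypothetical connection string; point this at a member of your replica set.
client = MongoClient("mongodb://localhost:27017")
oplog = client.local["oplog.rs"]

# Scan the most recent oplog entries for operations on the collection in question.
# "mydb.mycollection" is a placeholder namespace; adjust it to your own.
for entry in oplog.find({"ns": "mydb.mycollection"}).sort("$natural", -1).limit(50):
    # "op" is i/u/d for insert/update/delete; "o" holds the operation document,
    # and for updates "o2" holds the query that selected the document.
    print(entry["ts"], entry["op"], entry.get("o2"), entry.get("o"))
```

If an earlier insert or full-document update of the same _id is still within the oplog window, its "o" field should contain the older values you are trying to recover.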

Related

MongoDB data deleted from database

I have a problem using MongoDB. Yesterday I was using the 'test' database given by default by MongoDB. I stored some data and did a few queries with db.collectionName.find(), and the information was there. Today I woke up to continue working and the information in my collection is no longer there. Does anyone know what could have happened? Is it because I used 'test', one of the databases given by default? I don't get it; I've been working with the database for 3 days and did not have this problem before.

Tools to compare data before writing into DB

I am writing a project which needs to scrape data from a website. I am using pyspider, and it runs automatically every 24 hours (scraping the data every 24 hours). The problem is, before writing a new data entry into the DB, I want to compare the new data with the existing data in the DB.
Is there a tool/lib I can use?
I am running my project on AWS; what's the best tool I can use to work with AWS?
My idea is to set up some rules for updating/inserting the data into the DB, but when the new data somehow conflicts with a rule, I want to be able to view the data/scrape log (where the tool will label it as pending) and wait for an admin to do further operations.
Thanks in advance.
[List of data compare, synchronization and migration tools](https://dbmstools.com/categories/data-compare-tools)
Visit there; it might be helpful.
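As a rough, hypothetical sketch of the rule-based flow the question describes (the function name, field names, and the price rule are made up for illustration), the compare step before writing could look something like this in Python:

```python
# Hypothetical sketch: compare a freshly scraped record against the stored one
# and mark it "pending" when it violates a simple rule, instead of writing it.

def reconcile(new_record, existing_record, max_price_change=0.5):
    """Return (action, record) where action is "insert", "update", "pending" or "skip"."""
    if existing_record is None:
        return "insert", new_record

    old_price = existing_record.get("price")
    new_price = new_record.get("price")

    # Example rule: flag suspiciously large price jumps for manual review.
    if old_price and new_price and abs(new_price - old_price) / old_price > max_price_change:
        return "pending", new_record

    if new_record != existing_record:
        return "update", new_record
    return "skip", existing_record
```

Records returned as "pending" would then go to a review queue or log for an admin, rather than straight into the main table.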

Posting large directory of files to SOLR using post tool, how to commit after every file

I am using the Java post tool for Solr to upload and index a directory of documents. There are several thousand documents. Solr only does a commit at the very end of the process, and sometimes things stop before it completes, so I lose all the work.
Does anyone have a technique to fetch the name of each doc and call post on that, so you get a commit for each document, rather than the large commit of all the docs at the end?
From the help page for the post tool:
Other options:
..
-params "<key>=<value>[&<key>=<value>...]" (values must be URL-encoded; these pass through to Solr update request)
This should allow you to use -params "commitWithin=1000" to make sure each document shows up within one second of being added to the index.
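For comparison, the same commitWithin behaviour can be requested directly against Solr's JSON update endpoint. This is a minimal sketch, assuming a local Solr instance and a collection called "mycollection" (both placeholders):

```python
import requests

# Hypothetical host and collection name; commitWithin asks Solr to commit
# each batch within 1000 ms of it being received.
url = "http://localhost:8983/solr/mycollection/update"
docs = [{"id": "doc1", "title": "First document"}]

resp = requests.post(url, json=docs, params={"commitWithin": 1000})
resp.raise_for_status()
```

Posting file by file this way means a crash only loses the batch that was in flight, not the whole run.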
Committing after each document is overkill for performance; in any case it's quite strange that you have to resubmit everything from the start if something goes wrong. I suggest seriously rethinking the indexing strategy you're using instead of investigating a different way to commit.
That said, if you have no option other than changing the commit configuration, I suggest configuring autoCommit in your Solr collection/index or using the commitWithin parameter, as suggested by #MatsLindh. Just check whether the tool you're using lets you add this parameter.
autoCommit
These settings control how often pending updates will be automatically pushed to the index. An alternative to autoCommit is to use commitWithin, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
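If you go the autoCommit route, one way to adjust it without editing solrconfig.xml by hand is Solr's Config API. A minimal sketch, where the collection name and the 15-second window are assumptions:

```python
import requests

# Hypothetical example: set autoCommit maxTime via Solr's Config API so that
# pending updates are committed at most 15 seconds after they arrive.
url = "http://localhost:8983/solr/mycollection/config"
payload = {"set-property": {"updateHandler.autoCommit.maxTime": 15000}}

requests.post(url, json=payload).raise_for_status()
```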

CouchDB - document removed by PUT _deleted attribute is still available

I remove documents in CouchDB by setting the _deleted attribute to true (PUT method). The last revision of the document is deleted, but the previous revision is still available.
And when I pull documents of a specific type from the database, this document still shows up.
How should I delete a document so that it is no longer available?
I use synchronization between CouchDB on the server and PouchDB instances on mobile applications (Ionic).
You need to compact your database. Compaction is a process of removing unused and old data from database or view index files, not unlike VACUUM in an RDBMS. It can be triggered by calling the _compact end-point of a database, e.g. curl -X POST http://192.168.99.100:5984/koi/_compact -H'Content-Type: application/json'. After that, attempts to access the previous revisions of a deleted document should return error 404 with a reason of missing.
Note that the document itself is not going to completely disappear; something called a "tombstone" will be left behind. The reason is that CouchDB needs to track deleted documents during replication to prevent accidental document recovery.
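A minimal sketch of the same flow with Python's requests, reusing the database URL from the curl example above (the document id and revision below are placeholders):

```python
import requests

base = "http://192.168.99.100:5984/koi"  # database URL from the curl example

# Trigger compaction (asynchronous; CouchDB answers 202 immediately).
resp = requests.post(f"{base}/_compact", headers={"Content-Type": "application/json"})
print(resp.status_code, resp.json())

# Once compaction finishes, fetching an old revision of a deleted document
# should return 404 with reason "missing" (doc id and rev are placeholders).
old = requests.get(f"{base}/some_doc_id", params={"rev": "1-abc123"})
print(old.status_code, old.json())
```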

Solr search engine: updating documents

I'm using the Solr search engine and I'm new to this. I want Solr to update automatically every time my database gets updated or new data is created in the tables. I tried delta import and full import; with these methods I have to run them manually whenever I need to update.
Which way is best for updating Solr documents?
How can I make it automatic?
Thanks for your help.
There isn't a built in way to do this using Solr. I wouldn't recommend running a full or delta import when just updating one row in a table. What most Solr deployments do with a database is update the corresponding document when updating a row. This will be application specific, but this is the most efficient and standard way of dealing with this issue.
Using full or delta imports would be something that would run nightly or every few hours typically.
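As a hedged sketch of that pattern (the collection name, field names, and the wrapper function are assumptions, not part of the question), the application code that writes a row would also push the matching document to Solr in the same code path:

```python
import requests

def update_row_and_index(row):
    """Hypothetical helper: persist a row, then index the matching Solr document."""
    # ... write `row` to the relational database here ...

    doc = {"id": str(row["id"]), "name": row["name"], "price": row["price"]}
    requests.post(
        "http://localhost:8983/solr/mycollection/update",
        json=[doc],
        params={"commitWithin": 5000},  # let Solr commit within 5 seconds
    ).raise_for_status()
```

This keeps the index in step with the table without ever running a full or delta import for a single-row change.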
So, basically you want to process the document before adding it to Solr.
This can be achieved by adding a new update processor to the update processor chain; you can go through: Solr split joined dates to multivalue field.
There they split the data in a field and saved it as a multi-valued field.
