MongoDB data deleted from database

I have a problem with MongoDB. Yesterday I was using the 'test' database that MongoDB provides by default. I stored some data and ran a few queries with db.collectionName.find(), and the information was there. Today I woke up to continue working and the information in my collection is no longer there. Does anyone know what could have happened? Is it because I used 'test', one of the default databases? I don't get it. I had been working with the database for three days and never had this problem.
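One thing worth ruling out first is that the shell or driver is pointed at a different deployment or database than you think (for example, a mongod started with a different dbpath). A quick check, sketched here with pymongo; the connection URI and collection name are placeholders for your own:

```python
from pymongo import MongoClient

# Placeholder URI: use whatever you normally connect with.
client = MongoClient("mongodb://localhost:27017")

print(client.list_database_names())              # is 'test' even listed here?
db = client["test"]
print(db.list_collection_names())                # is your collection still there?
print(db["collectionName"].count_documents({}))  # how many documents survived?
```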

Related

How to revert an update on a single MongoDB document

I just updated a single document in MongoDB, my transaction went wrong, and I lost the previous data. Is there any way to get the data back as it was before the update?
If the document you are trying to restore was written recently and you are using a replica set, you should be able to recover the previous version of the document from the oplog. Start here.
Atlas provides a point-in-time restore feature.
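If you are on a replica set, a rough sketch of digging out the relevant oplog entries with pymongo; the namespace and _id are placeholders, and the oplog only reaches back as far as its configured size:

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # must be a replica set member
oplog = client["local"]["oplog.rs"]

doc_id = ObjectId("5f1d7f0c2c8e4b3a9c000001")  # placeholder: the _id you updated

# Insert entries carry the full document in 'o'; update entries record the
# change in 'o' and identify the target document in 'o2'.
for entry in oplog.find({
    "ns": "mydb.mycollection",
    "$or": [{"o._id": doc_id}, {"o2._id": doc_id}],
}).sort("ts", 1):
    print(entry["op"], entry.get("o"), entry.get("o2"))
```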

Tools to compare data before writing it into the DB

I am writing a project that needs to scrape data from a website. I am using pyspider, and it runs automatically every 24 hours (scraping the data every 24 hours). The problem is that, before writing a new data entry into the DB, I want to compare the new data with the existing data in the DB.
Is there a tool/library I can use?
I am running my project on AWS; what is the best tool to use with AWS?
My idea is to set up rules for updating/inserting data into the DB, but when new data conflicts with a rule, I want to be able to view the data/scrape log (where the tool labels it as pending) and wait for an admin to take further action.
Thanks in advance.
[List of data compare, synchronization and migration tools](https://dbmstools.com/categories/data-compare-tools)
Have a look there; it might be helpful.
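Beyond dedicated tools, the rule-based flow described in the question can also live in the scraper itself. A rough sketch in Python, assuming a table keyed by a unique url column and a made-up rule that a price may not drop by more than 50% in one run; sqlite3 stands in for whatever database is actually used on AWS:

```python
import sqlite3

conn = sqlite3.connect("scrape.db")
conn.execute("""CREATE TABLE IF NOT EXISTS items
                (url TEXT PRIMARY KEY, price REAL, status TEXT)""")

def save(item):
    """Insert new rows, update rows that pass the rule, flag the rest as pending."""
    row = conn.execute("SELECT price FROM items WHERE url = ?",
                       (item["url"],)).fetchone()
    if row is None:
        conn.execute("INSERT INTO items VALUES (?, ?, 'ok')",
                     (item["url"], item["price"]))
    elif item["price"] >= row[0] * 0.5:          # rule passes: accept the update
        conn.execute("UPDATE items SET price = ?, status = 'ok' WHERE url = ?",
                     (item["price"], item["url"]))
    else:                                        # rule fails: park it for an admin
        conn.execute("UPDATE items SET status = 'pending' WHERE url = ?",
                     (item["url"],))
    conn.commit()

save({"url": "https://example.com/a", "price": 9.99})
```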

Laravel - Migrate database from other frameworks

I was working on a project that is already well advanced, especially the database, which has been filled by someone on my team, even though part of it has not been used yet. Some tables can be emptied and refilled since they hold sample data, but most of them contain data that will actually be used, or is already being used in the parts we're working on now.
The project started in CodeIgniter, but we've realized that Laravel can save us hours of work, so we're planning to migrate it. The point is that we didn't use CodeIgniter's migration system, and we've seen in the Laravel documentation that only the table structure gets migrated, plus we would have to create every migration by hand.
The question is whether there's a way to both create the migration files automatically and keep the relevant data that will be used in the application, so we don't need to refill the database again (yes, there are some fairly big tables). We thought about seeders, but from what we've seen they only store sample data...
You can use the Laravel migrations generator. It will generate migrations from your existing database; the readme explains how to use it. Good luck.
https://github.com/Xethron/migrations-generator
Hope it helps. Cheers.

Database table is getting emptied in phpMyAdmin automatically/randomly

My database table in phpMyAdmin is being emptied automatically and periodically. Sometimes all the data is deleted after 2-3 days; sometimes it takes weeks. I haven't found any solution to this problem on Google. I am using MySQL 5.
If there's any more information you need to solve this problem, let me know in the comments. Can anyone kindly tell me how I can solve this?

SOLR 3.1 Indexing Issue

We are facing some issues with SOLR search.
We are using Solr 3.1 with Jetty. We have set up the schema according to our requirements, and we have configured data-config.xml to import records into the collection (core) from our database (SQL Server 2005).
There are 320,000 records in the database that we need to import.
After the import finished, when I try to search all the records through the Solr admin at
http://localhost:8983/solr/Collection_201/admin/
it shows a total of 290,000 found. So 30,000 records are missing.
Now the following questions are on my mind:
How can I know which records were not indexed properly, or which records are missing? To find out, I tried a trick: I thought I could add a field to the database to mark which records had been imported into the Solr collection and which had not. The big question was how to update that field during the import from data-config.xml, because the entity tag only accepts select queries, in other words something that returns rows.
So I had another idea for updating the field anyway: I created a stored procedure containing an UPDATE statement to set the field, followed by a SELECT that simply returns one record to satisfy that requirement. But when I ran DIH with it, it returned an "Index Failed. Rollback all the changes" error message and nothing was imported. When I commented out the UPDATE in the stored procedure, it worked. So it would not let me run an UPDATE even from a stored procedure. I tried hard to find a way to update the database from DIH, but found nothing, so I gave up on that idea.
I cleared the index and started importing the data again. This time I tried running the import manually from the Solr admin import page, 5,000 records per run. At the end, records were somehow still missing.
Is it possible that it is not committing properly? I read in the documentation that the import page (http://localhost:8983/solr/Collection_201/dataimport?command=full-import&clean=false) automatically commits the imported data, but I have personally noticed that sometimes it does and sometimes it doesn't. It is really driving me crazy.
Now I am completely frustrated and starting to wonder whether the way I am using Solr is right at all. If it is right, then is it reliable? If I am wrong, please point out my mistake.
Please guide me on how we can easily keep the collection in sync with our database and make sure it is 100% synced.
What field are you using for your IDs in Solr and the database? The id field needs to be unique, so if you have 30,000 records that have the same ID as some 30,000 other records then the data will overwrite those records.
Also, when you run the data import handler, you can query it for status (?command=status), and that should tell you the total number of records imported in the last run.
The first thing I would do is check for non-unique IDs in your database with respect to the Solr id field.
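A minimal way to run that duplicate-ID check from Python, assuming pyodbc and a hypothetical products table whose id column feeds the Solr uniqueKey field (both names are placeholders for your own schema):

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Any id returned here appears more than once, so later rows silently
# overwrite earlier ones in the Solr index.
cursor.execute("""
    SELECT id, COUNT(*) AS cnt
    FROM products
    GROUP BY id
    HAVING COUNT(*) > 1
""")
for row in cursor.fetchall():
    print(f"duplicate id {row.id}: {row.cnt} rows")
```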
Also be aware that when one record in the batch is wrong, the whole batch gets rolled back. So if that happened 3 times, and you are indexing 10K docs per batch, that would explain it.
At the time, I solved it with this: https://github.com/romanchyla/montysolr/blob/master/contrib/invenio/src/java/org/apache/solr/handler/dataimport/NoRollbackDataImporter.java
but there should be a better/more elegant solution than that. I don't know how to get the missing records in your case, but if you have indexed the IDs, you can compare the indexed IDs with the external source and find the gaps.
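A sketch of that comparison, assuming Python with requests and pyodbc; the collection name, id field, table and connection string are placeholders, and the index is paged with plain start/rows:

```python
import requests
import pyodbc

SOLR_URL = "http://localhost:8983/solr/Collection_201/select"

def solr_ids(batch=10000):
    """Page through the index and collect every indexed id."""
    ids, start = set(), 0
    while True:
        resp = requests.get(SOLR_URL, params={
            "q": "*:*", "fl": "id", "wt": "json",
            "start": start, "rows": batch,
        }).json()
        docs = resp["response"]["docs"]
        if not docs:
            return ids
        ids.update(doc["id"] for doc in docs)
        start += batch

def db_ids():
    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;DATABASE=mydb;"
                          "Trusted_Connection=yes;")
    cur = conn.cursor()
    cur.execute("SELECT id FROM products")
    return {str(row.id) for row in cur.fetchall()}

missing = db_ids() - solr_ids()
print(f"{len(missing)} records never made it into the index")
```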
