Qt QSQLITE db file update / remove - database

In my program I use a QSQLITE database. I changed the data in the database and then iterated through it again, but it had no effect: I could see only the old data. Nothing helped until I manually deleted the db file so that a new one was created, containing the desired new data.
I didn't use any exec() to put my new data in the database. I just rewrote it manually with new values.
I wonder what I have to do in order to see my new data without manually deleting the db file. Can I update the binary db file? Or could I remove it from within the program after the connection is closed?
I consider this a basic question, yet I can't find the answer.

Related

Want to replace old Liquibase migration script files with new ones

I am using Liquibase scripts with a CorDapp. Previously the first-version databaseChangeLog file had all table creations in one single changeset, and at a later point in time we split it into separate databaseChangeLog files, one per changeset.
Now the problem is that some production testing environments already have data in them created with the older script, but we want to use the new scripts.
The change done is like →
Old: abc-master.xml contained abc-init.xml (the usual way)
Now: abc-master.xml contains abc-v1.xml and
abc-v1.xml contains table-v1.xml files for each table creation
The solution we were thinking of is:
create new tables with a slight name change → then copy the data from the old tables into them → then drop the old tables. That way we can remove the old tables and the old scripts (I assume).
Still, the DATABASECHANGELOG will probably keep the old scripts' entries; would that be a problem?
Or is there a far better way to do this?
Many thanks.
I answered this also on the Liquibase forums, but I'll copy it here for other people.
The filename is part of the unique key on the databasechangelog table (ID/Author/Filename). So when you change the filename of a changeset that has already executed, it is now in fact a new changeset according to Liquibase.
I normally recommend that my customers never manually update the databasechangelog table, but in this case I think it might be the best course of action for you. That way your new file structure is properly reflected in the databasechangelog table.
I would run an update-sql command on the new file structure against one of your databases where you have already executed the changesets. This will show you which changesets are pending, and also the filename values that you need to update.
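For illustration only, the manual adjustment would then be an UPDATE of the FILENAME column for each already-executed changeset; the IDs, authors and paths below are made-up placeholders, not taken from the question:

-- Placeholders only: IDs, authors and file paths are invented for illustration.
-- See what Liquibase has recorded against the old file.
SELECT ID, AUTHOR, FILENAME, EXECTYPE
FROM DATABASECHANGELOG
WHERE FILENAME LIKE '%abc-init.xml';

-- Point an already-executed changeset at its new file location,
-- so Liquibase no longer treats it as a brand-new changeset.
UPDATE DATABASECHANGELOG
SET FILENAME = 'tables/table-v1.xml'
WHERE ID = 'create-my-table'
  AND AUTHOR = 'someone'
  AND FILENAME = 'abc-init.xml';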
We are planning to go with
<preConditions onFail="MARK_RAN">
    <not>
        <tableExists tableName="MY_NEW_TABLE"/>
    </not>
</preConditions>
for all the table-creation changesets in the new, split structure. So our assumptions are:
We can keep only this new structure in the code and remove the old init file.
For environments that already have data, even though the changesets in the new structure will be considered new changesets to run, the preconditions will prevent them from running.
For fresh DB deployments, it will work as expected by creating all the required tables.
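One quick way to check that assumption after running the new structure against an environment that already has data is to look at how Liquibase recorded the new changesets: rows skipped by a precondition get EXECTYPE = 'MARK_RAN', while rows that actually ran get 'EXECUTED'. The column names are the standard Liquibase ones; the filename filter is just an example:

-- Changesets skipped by the precondition show up as MARK_RAN.
-- (The filename pattern below is only an example.)
SELECT ID, AUTHOR, FILENAME, EXECTYPE, DATEEXECUTED
FROM DATABASECHANGELOG
WHERE FILENAME LIKE '%table-v1.xml'
ORDER BY DATEEXECUTED;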

What is the best way to update multiple records in a database (800000 rows) and persist the new data from a CSV file using Spring Batch?

I have a CSV file that contains a large amount of data. Every time the user uploads a new file, the old data is updated or deleted (it depends on the file) and the new data is saved.
I am using Spring Batch for this task.
I am creating a job that contains two steps:
first step: a tasklet for updating the old data
second step: a chunk-oriented step with a reader, processor and writer to persist the new data
The problem is that the time to save and update is very long: 12 minutes for a file that contains 80000 rows.
Can I optimize the time for this job?
Importing and exporting big data, updating, deleting, updating with joins to other tables, searching: all these operations are much faster inside the database than in a programming language. I recommend the following:
Use the SQL Server BULK INSERT command to import the data from the CSV into the database. For example, for 10 million records this process can execute in about 12 seconds.
After importing the data you can update, delete or insert new data in the database by joining to the imported table.
This is the best way, I think.
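A rough T-SQL sketch of that approach; the table names, column names, file path and CSV options are assumptions for illustration, not taken from the question:

-- Table names, columns and the file path are placeholders.
-- 1) Load the CSV into a staging table whose columns match the file layout.
BULK INSERT dbo.StagingRecords
FROM 'C:\data\upload.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- 2) Update existing rows by joining the target table to the staging table.
UPDATE t
SET t.Amount = s.Amount,
    t.Status = s.Status
FROM dbo.Records AS t
JOIN dbo.StagingRecords AS s ON s.RecordId = t.RecordId;

-- 3) Insert rows that exist only in the staging table.
INSERT INTO dbo.Records (RecordId, Amount, Status)
SELECT s.RecordId, s.Amount, s.Status
FROM dbo.StagingRecords AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Records AS t WHERE t.RecordId = s.RecordId);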

Filling an .mdb file with data through VB.NET

I'm using OleDb and DataSets in VB.NET to fill an Access database (.mdb).
It works in the following way:
I have an existing .mdb file with data in it
I create an OleDb data adapter for the existing .mdb file
I fill a DataSet/DataTable with the data from the file (adapter.Fill())
I add a new row to the DataSet
I fill the row with data
I write the DataSet/DataTable back to the .mdb file through the data adapter
This works, but the problem is: I'm doing this process a few thousand times, with a few thousand DataSets, and over time it takes longer and longer. I think it's because the data adapter has to go through the whole database every time, and because I keep pulling the whole DataSet out of the database and updating it back again.
So my question:
Is there a way to do this differently, without pulling all the data out of the database and writing it back, and without going through the whole database? Maybe with a SQL connection and then just appending a row to the end of the table?
Thanks for your help!
If you are only adding rows, why not use an OleDbCommand with an INSERT statement? It has the .ExecuteNonQuery() method.

MySQL query to copy new data from one DB to another

I have a situation. While we upgraded our server drives, we took a backup of our DB and then imported it again, and the site went live. A week later we realized the import had not copied the entire backup from the dump file: when we checked, it was only 13 GB when it was supposed to be 60 GB. The interesting thing is that one big table was copied in a strange way: only a few initial records, say 2000-5000, and the last records, say 400000-500000, were copied, with no records in between. Isn't that weird? While importing, we checked the first and last entries and thought everything had been copied.
Then we created a new DB and imported the dump again, and now it seems to be OK, but the live DB (the 13 GB one) already has new entries in it, so we have to copy those into the newly imported DB. What would be the ideal way to copy those new records into the recovered DB? I mean some query that searches the new DB and adds only the new records it finds. Do I have to take a dump of our 13 GB DB and then import it? Or is there any way to copy from the live DB directly?
You can use mysqldump with the --insert-ignore option, which writes INSERT IGNORE statements rather than INSERT statements (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_insert-ignore).
With the IGNORE keyword, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table is not inserted, and no duplicate-key error is issued.
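As a side note, if the live database and the recovered one happen to be on the same MySQL server, the same IGNORE behaviour also allows a direct copy without going through a dump; a sketch with invented schema and table names:

-- Schema and table names are invented placeholders.
-- Copy only the rows whose primary key does not already exist in the recovered table.
INSERT IGNORE INTO recovered_db.big_table
SELECT * FROM live_db.big_table;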

Populate SQL database from textfile on a background thread constantly

Currently, I would like to provide this as an option to the user when storing data to the database:
Save the data to a file and use a background thread to read the data from the textfile into SQL Server.
Flow of my program:
- A stream of data comes in from a server constantly (100 per second).
- As another user option, I want to store the data in a textfile and use a background thread to constantly copy the data from the textfile into the SQL database.
Has this been done before?
Cheers.
Your question is indeed a bit confusing.
I'm guessing you mean that:
100 rows per second come from a certain source or server (e.g. log entries)
One option for the user is textfile caching: the rows are stored in a textfile and periodically an incremental copy of the contents of the textfile into (an) SQL Server table(s) is performed.
Another option for the user is direct insert: the data is stored directly in the database as it comes in, with no textfile in between.
Am I right?
If yes, then you should do something along the lines of:
Create a trigger on an INSERT action to the table
In that trigger, check which user is inserting. If the user has textfile caching disabled, then the insert can go on. Otherwise, the data is redirected to a textfile (or a caching table)
Create a stored procedure that checks the caching table or text file for new data, copies the new data into the real table, and deletes the cached data.
Create an SQL Server Agent job that runs above stored procedure every minute, hour, day...
Since the interface from T-SQL to textfiles is not very flexible, I would recommend using a caching table instead. Why a textfile?
And for that matter, why cache the data before inserting it into the table? Perhaps we can suggest a better solution, if you explain the context of your question.
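For what it's worth, a minimal T-SQL sketch of the caching-table variant described above. All table and column names are invented placeholders, dbo.RealTable is assumed to already exist, and the OUTPUT ... INTO shortcut requires that the target table has no triggers or foreign keys:

-- All names here are placeholders; dbo.RealTable is assumed to exist already.
-- Caching table that the trigger (or the application) fills instead of the real table.
CREATE TABLE dbo.IncomingCache (
    CacheId    INT IDENTITY(1,1) PRIMARY KEY,
    Payload    NVARCHAR(400) NOT NULL,
    ReceivedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- Procedure to be run by a SQL Server Agent job every minute, hour, day...
-- Moves cached rows into the real table in a single atomic statement.
CREATE PROCEDURE dbo.FlushIncomingCache
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.IncomingCache
    OUTPUT deleted.Payload, deleted.ReceivedAt
    INTO dbo.RealTable (Payload, ReceivedAt);
END;
GO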
