How to avoid duplicate entries in a MariaDB database

Using Node.js and MariaDB in the project.
Used a stored procedure to insert the data.
Every 5 minutes an API call triggers and inserts the data with a timestamp. It was working fine most of the time, but sometimes there are a lot of duplicate entries, and I don't know exactly what the problem is.
I need to prevent the duplicate entries.
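One standard way to make such an insert idempotent in MariaDB, assuming each reading should be unique for a given source and timestamp (table and column names below are hypothetical), is to declare a UNIQUE key and use INSERT ... ON DUPLICATE KEY UPDATE inside the stored procedure:

-- Hypothetical schema: enforce uniqueness on the natural identity of a reading.
ALTER TABLE readings
  ADD CONSTRAINT uq_readings_source_ts UNIQUE (source_id, recorded_at);

-- Inside the stored procedure: a retried or overlapping API call
-- now rewrites the existing row instead of creating a duplicate.
INSERT INTO readings (source_id, recorded_at, reading_value)
VALUES (p_source_id, p_recorded_at, p_reading_value)
ON DUPLICATE KEY UPDATE reading_value = VALUES(reading_value);

With the constraint in place, overlapping timer runs or HTTP retries can no longer produce duplicate rows; at worst they rewrite the same row.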

Related

INSERT trigger inserting from a table on a linked server is slow

I have three tables:
(table_a) has an AFTER INSERT trigger that inserts rows from another table (table_b) from a linked server into a local table (table_c).
Whenever a lot number is inserted into (table_a), the trigger inserts from (table_b) into (table_c) rows that contain the same column value as the lot number.
This seems to slow down, and sometimes freeze, operations on my server. I found that an insert from a table on the local server runs fine, so I suspect the problem is caused by inserting from the linked server.
How can I improve insert speed?
Triggers should always be written to be as fast as possible to avoid exactly this problem. Intensive operations of any kind should ideally not be carried out within a trigger. That applies doubly when the operation involves contacting other servers, as that can easily end up taking real time.
Instead, queue the action and process your queue outside the trigger using a service or agent.
A queue might look like a record in a table which gets flagged once processed. The record needs to contain enough information for the service to carry out the related actions, which could be contained within a stored procedure.
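As a sketch of that pattern (table, column, and trigger names here are illustrative, not from the question):

-- Minimal queue table: one row per pending action.
CREATE TABLE dbo.lot_queue (
    queue_id   INT IDENTITY(1,1) PRIMARY KEY,
    lot_number VARCHAR(50) NOT NULL,
    processed  BIT NOT NULL DEFAULT 0
);
GO

-- The trigger only enqueues; no linked-server work happens here.
CREATE TRIGGER trg_table_a_enqueue ON dbo.table_a
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.lot_queue (lot_number)
    SELECT lot_number FROM inserted;
END;

A scheduled job (a SQL Server Agent job, say) then reads the unprocessed rows, performs the slow insert from table_b on the linked server into table_c, and flags each queue row as processed.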

How to update a table while using it at the same time?

I have a Local DB (I'm using SQL Server Express) named PCNAME\SQLEXPRESS. I need to load data from the main database at MAINDB\HYEAH, so I linked the mainDB and I was able to insert data from the main DB into local DB using a stored procedure.
The problem I have is that I can't figure out the correct way to do the following:
I'm constantly using the data imported from the mainDB, that data is in the table Credits, I'm always consulting, inserting or updating a record from that table.
But every 10 minutes I have to reload the data from the mainDB into Credits again, and I can't stop using the data. I need to find a way to keep using and manipulating this data while it is being reloaded from the mainDB.
I'm not an expert in DB or SQL transactions so I thought about this solution:
The first time I load the data from the mainDB I'll do it directly into the table Credits. The other times I'll load the data into a temporary table, and when the stored procedure finishes, I'll replace Credits with the data from the temporary table. But I think this is dumb, because if I delete all the data from Credits to replace it with the temporary table, I will not be able to continue using the data, so I'm stuck.
Is there a way to properly achieve this?
Thank you!
One option would be to use synonyms.
BEGIN TRY
    DROP SYNONYM working_table;   -- drop the old pointer if it exists
END TRY
BEGIN CATCH
    -- the synonym did not exist yet; nothing to do
END CATCH

CREATE SYNONYM working_table FOR import_table_a;
You can now do your selects and updates against working_table and they will go to import_table_a. When you need to reload the data (into import_table_b) you just drop the synonym and re-create it pointing at the new version of the table.
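The swap itself might look like this (the four-part linked-server path and database name are hypothetical; substitute your own):

-- Reload the offline copy while readers keep using working_table.
TRUNCATE TABLE import_table_b;
INSERT INTO import_table_b
SELECT * FROM [MAINDB\HYEAH].MainDb.dbo.Credits;

-- Repoint the synonym; new statements now resolve to the fresh copy.
DROP SYNONYM working_table;
CREATE SYNONYM working_table FOR import_table_b;

On the next reload you do the mirror image: load import_table_a and point the synonym back at it.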
But do take on board the other comments that imply that you might be fixing the wrong problem :)

Auto updating access database (can't be linked)

I've got a CSV file that refreshes every 60 seconds with live data from the internet. I want to automatically update my Access database (on a 60 second or so interval) with the new rows that get downloaded; however, I can't simply link the DB to the CSV.
The CSV comes with exactly 365 days of data, so when another day ticks over, a day of data drops off. If I were to link to the CSV, my DB would only ever have those 365 days of data, whereas I want to append the new data to the existing database.
Any help with this would be appreciated.
Thanks.
As per the comments the first step is to link your CSV to the database. Not as your main table but as a secondary table that will be used to update your main table.
Once you do that you have two problems to solve:
1. Identify the new records.
I assume there is a way to do so by timestamp or ID, so all you have to do is hold on to the last ID or timestamp imported (that will require an additional mini-table to hold the value persistently); an append-query sketch follows this list.
2. Make it happen every 60 seconds. To get that update on a regular interval you have two options:
A form's OnTimer event is the easy way, but it requires very specific conditions: you have to make sure the form that triggers the event is only open once. This is possible even in a multi-user environment with some smart tracking.
If having an Access form open to do the updating is not workable, then you have to work with Windows scheduled tasks. You can set up an Access macro to run as a Windows scheduled task.
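The append itself can be a single query. A sketch, with hypothetical names (LinkedCSV is the linked text file, MainData the main table, and ImportState the mini-table holding the last imported timestamp):

INSERT INTO MainData
SELECT c.*
FROM LinkedCSV AS c
WHERE c.ReadingTime > (SELECT MAX(LastImported) FROM ImportState);

After the append, update ImportState with the newest ReadingTime (for example via DMax in the VBA that runs the query) so the next pass only picks up rows added since.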

Is it wise to use triggers as part of an import routine

Hi all, I have a requirement to create a web-based application using SQL Server 2005. The data is coming from a third-party source in a text format. This is my idea so far:
1. A file system watcher looks for a file in a directory.
2. I loop through the file found, find the columns, and insert the data one by one into a table.
3. Once all the data has been inserted, I run a stored procedure against the table to do some more cleaning and create the totals used within the web app.
As you can see, there are mainly 2 steps involved in the import after the file has been found: storing the data in SQL Server, and then cleaning up values and doing some other work within my database. My question is: since I am looping through the values anyway, can I have a trigger (and yes, I do know that a trigger fires per statement, not for every row) do the cleaning for me as I insert the records into my table?
For example, I loop through the rows one by one, figure out the columns, and then insert them into the table. As that happens, a trigger fires and runs some script (possibly stored procedures) to do some other work on other tables. That way, all my file system watcher needs to do is get the data and insert it into the table; the trigger will do all the other work. Is this advisable, and what will happen if a trigger is already running a script and it is called again by another insert to the table?
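For concreteness, the kind of trigger described would look something like this in SQL Server 2005 (table, column, and cleanup logic are invented for the sketch):

-- Fires once per INSERT statement; 'inserted' holds all new rows,
-- so the body must be set-based rather than assume a single row.
CREATE TRIGGER trg_import_raw_clean ON dbo.import_raw
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.import_clean (source_id, amount)
    SELECT i.source_id,
           CAST(LTRIM(RTRIM(i.amount_text)) AS decimal(18, 2))
    FROM inserted AS i;
END;

Note that with row-by-row inserts this trigger fires once per row, which multiplies its overhead; the usual advice (as in the linked-server question above) is to keep trigger bodies minimal and defer heavy cleanup to the stored procedure that already runs after the load.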
Sorry for the long question
Thanks

Copy data from one column to another in an Oracle table

My current project for a client requires me to work with Oracle databases (11g). Most of my previous database experience is with MSSQL Server, Access, and MySQL. I've recently run into an issue that seems incredibly strange to me and I was hoping someone could provide some clarity.
I was looking to do a statement like the following:
update MYTABLE set COLUMN_A = COLUMN_B;
MYTABLE has about 13 million rows.
The source column is indexed (COLUMN_B), but the destination column is not (COLUMN_A)
The primary key field is a GUID.
This ran for 4 hours but never completed.
I spoke with a former developer who was more familiar with Oracle than I am, and they told me you would normally create a procedure that breaks this down into chunks of data to be committed (roughly 1000 records or so). This procedure would iterate over the 13 million records, committing each chunk of 1000 before moving on to the next, normally breaking the data up based on the primary key.
This sounds somewhat silly to me coming from my experience with other database systems. I'm not joining another table or linking to another database; I'm simply copying data from one column to another. I don't consider 13 million records to be large, considering there are systems out there with billions of records. I can't imagine it taking a computer hours and hours (only to fail) to copy a simple column of data in a table that as a whole takes up less than 1 GB of storage.
In experimenting with alternative ways of accomplishing what I want, I tried the following:
create table MYTABLE_2 as (SELECT COLUMN_B, COLUMN_B as COLUMN_A from MYTABLE);
This took less than 2 minutes to accomplish the exact same end result (minus dropping the first table and renaming the new table).
Why does the UPDATE run for 4 hours and fail (which simply copies one column into another column), but the create table which copies the entire table takes less than 2 minutes?
And are there any best practices or common approaches used to do this sort of change? Thanks for your help!
It does seem strange to me. However, this comes to mind:
When you are updating the table, undo and redo (transaction log) records must be written for every row in case a rollback is needed. When you create a table, that isn't necessary.
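For reference, the chunked-commit procedure the former developer described might be sketched like this in PL/SQL (it assumes COLUMN_A is NULL until copied, which is an assumption, not something stated in the question):

BEGIN
  LOOP
    UPDATE MYTABLE
       SET COLUMN_A = COLUMN_B
     WHERE COLUMN_A IS NULL        -- assumption: unprocessed rows are NULL
       AND COLUMN_B IS NOT NULL    -- avoid re-matching rows that stay NULL
       AND ROWNUM <= 1000;         -- one chunk per iteration
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;                        -- release undo and locks after each chunk
  END LOOP;
  COMMIT;
END;
/

That keeps each transaction small, but as the timings above show, for a one-off rewrite of every row the CTAS approach (create, drop, rename) is usually far faster, because it avoids generating undo for 13 million individual row updates.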
