How to prevent updating duplicate rows in SQLite Database?

I'm inserting new rows into a SQLite table, but I don't want to insert duplicate rows.
I also don't want to specify every column in the database if possible.
I don't even know if this is possible.
I should be able to take my values and create a new row with them, but if they duplicate another row they should either overwrite the existing row or do nothing.

This is one of the very first steps in database design and normalization. You have to be able to explicitly define what you mean by a duplicate row, and then place a primary key constraint (or a unique constraint) on the columns in your table that represent that definition.
Before you can define what duplicate means, you have to define (or decide) exactly what the table is to contain, i.e., what real-world business domain entity or abstraction each row in the table represents, or will hold data for.
Once you have done this, the PK or unique constraint will stop you from inserting duplicate rows. The same PK will help you find the duplicate row when it does exist, and update it with the values of the non-duplicate-defining (non-PK) columns that differ from the values in the existing row. Only after all this has been done can an INSERT OR REPLACE (as defined by SQLite) help. This command checks whether a duplicate row (as defined by your PK constraint) exists, and if it does, instead of inserting a new row, it updates the non-PK columns of that row with the values supplied by your query.
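As a sketch of how the pieces fit together in SQLite (the table and column names here are invented for illustration): the primary key defines what "duplicate" means, and INSERT OR REPLACE or INSERT OR IGNORE gives you the overwrite-or-do-nothing behaviour asked for.

CREATE TABLE contacts (
    email TEXT PRIMARY KEY,  -- our definition of "duplicate": same email
    name  TEXT,
    phone TEXT
);

INSERT INTO contacts (email, name, phone)
VALUES ('alice@example.com', 'Alice', '555-0100');

-- Same email again: instead of failing, SQLite replaces the existing row
INSERT OR REPLACE INTO contacts (email, name, phone)
VALUES ('alice@example.com', 'Alice', '555-0199');

-- Or, to silently do nothing when the row would be a duplicate:
INSERT OR IGNORE INTO contacts (email, name, phone)
VALUES ('alice@example.com', 'Alison', '555-0123');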

Your desires appear mutually contradictory. While Andrey's INSERT OR REPLACE answer will get you close to what you say you want, you should probably clarify for yourself what you really want.
If you don't want to specify every column, and you want a (presumably) partial row to update rather than insert, you should probably look at the unique constraint, and know that the SQL-92 committee wrestled with the same ambiguity that is in your requirements.

http://www.sqlite.org/lang_insert.html
INSERT OR REPLACE might interest you.

Related

Using 2 same values in 1 row -> SQL 2008 table

I've searched for this a lot, and I can't find a solution since I'm a beginner in SQL itself.
I used to edit game databases. Now I need to create a new table with one column called "CodeName128", and it should be able to contain the same value many times.
When I enter something like
CODE_NAME1
CODE_NAME1
it tells me something like "No rows were updated", which means this table already has this code.
How can I get around this and allow duplicates in the table?
You must have a primary key or unique key defined on that column, which is not allowing you to enter duplicate values. The keys were defined for a reason, so it's not advisable to remove them; still, if you think that duplicate values are required for that column, you have to alter the table structure and remove those constraints from that column.
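As a sketch in SQL Server, assuming the table is dbo.Codes and the offending constraint turns out to be a unique constraint named UQ_Codes_CodeName128 (both names are hypothetical; check your actual schema first):

-- List the constraints on the table to find the one blocking duplicates
EXEC sp_helpconstraint 'dbo.Codes';

-- Drop the unique constraint so the column can hold repeated values
ALTER TABLE dbo.Codes DROP CONSTRAINT UQ_Codes_CodeName128;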

SQL Server 2008 - Database Design Query

I have to load the data shown in the below image into my database.
For a particular row, either field PartID will be NULL or field GroupID will be NULL, and the other available columns refer to the non-NULL entity. I have the following three options:
To use one database table, which will have one unified column, say ID, which will hold both PartID and GroupID data. But in this case I won't be able to apply a foreign key constraint, as this column will contain both entities' data.
To use one database table, which will have columns for both PartID and GroupID, which will contain the respective data. For each row, one of them will be NULL, but in this case I will be able to apply foreign key constraints.
To use two database tables, which will have similar structures; the only difference will be the column PartID or GroupID. In this case I will be able to apply foreign key constraints.
One thing to note here is that the table(s) will be used in import processes to import about 30000 rows in one go, and will also be heavily used in data retrieval operations. Also, the other columns will be used as pivot columns.
Can someone please suggest what the best approach would be?
I would use option 2 and add a check constraint that exactly one of the two columns is non-null (sketched below, just to be safe). I would not use option 1 because of the lack of a FK, and the possibility of linking to the wrong table when the type identifier is not obeyed in the join.
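A sketch of that check constraint in T-SQL; the table name dbo.Items is invented, PartID and GroupID are from the question:

ALTER TABLE dbo.Items ADD CONSTRAINT CK_Items_PartXorGroup CHECK (
    (PartID IS NOT NULL AND GroupID IS NULL)
 OR (PartID IS NULL AND GroupID IS NOT NULL)
);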
There is a 4th option, which is to normalize them as "items" with another (surrogate) key and two link tables which link items to either parts or groups. This eliminates NULLs. There are further problems with that approach (items might be in both again or neither without any simple constraint), so unless that is necessary for other reasons, I wouldn't generally go down that path.
Option 3 could be fine - it really depends if these rows are a relation - i.e. data associated with a primary key. That's one huge problem I see with the data presented, the lack of a candidate key - I think you need to address that first.
IMO option 2 is the best - it's not perfectly normalized but will be the easiest to work with. 30K rows is not a lot of rows to import.
I would modify the table so it has one ID column and then add an IDType that is either "G" for Group or "P" for Part.

How can I have a unique column in many tables

I have ten or more (I don't know exactly) tables that each have a column named foo with the same datatype.
How can I tell SQL that the values in all the tables should be unique?
I mean, if I have the value "1" in table1, I should NOT be able to have the value "1" in table2.
Have a common IDs table which these ten tables reference. That will work well in that it will ensure unique IDs, but it doesn't mean you couldn't duplicate the IDs across the tables if someone really wants to.
What I mean is that a common IDs table ensures that you don't get duplicates on insert (by also inserting an ID into this common table), but the only way to guarantee that it never happens is by building the business rules into the system, or placing check constraints that cross-reference the other tables (which would ensure uniqueness, but degrade performance).
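A minimal sketch of the common IDs table, with hypothetical names; note it guards inserts but, as said above, nothing here stops two tables claiming the same registered value without further rules:

CREATE TABLE AllFoo (
    foo INT NOT NULL PRIMARY KEY  -- every foo value, from every table, is registered here once
);

CREATE TABLE table1 (
    foo INT NOT NULL PRIMARY KEY REFERENCES AllFoo (foo)
    -- other columns ...
);

CREATE TABLE table2 (
    foo INT NOT NULL PRIMARY KEY REFERENCES AllFoo (foo)
    -- other columns ...
);

-- Inserting means claiming the value in AllFoo first:
INSERT INTO AllFoo (foo) VALUES (1);  -- fails here if 1 is already used anywhere
INSERT INTO table1 (foo) VALUES (1);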
The question is phrased vaguely; if you need to generate a column that's unique among several tables, use row GUIDs or a common ID generator table; if you need to enforce uniqueness (and the field values are already there), use triggers.
Generally, if you generate the values, you don't need to enforce anything. The generation logic, if done right, will take care of that. If you are inserting, say, user input, then you can and should enforce uniqueness during insertion. As a validation rule or something.
You can define the field as a GUID (or a UNIQUEIDENTIFIER in SQL Server). Then it will always be unique no matter what.
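For example, in SQL Server (table name hypothetical):

CREATE TABLE table1 (
    foo UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY
    -- other columns ...
);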
How about setting a check constraint on each table, such that ID % 10 = N (where N is the table number, from 0-9). And use IDENTITY(N,10) each time.
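Sketched in T-SQL for the first two of the ten tables (names invented):

-- table0 generates 0, 10, 20, ...; table1 generates 1, 11, 21, ...
CREATE TABLE table0 (
    foo INT IDENTITY(0, 10) NOT NULL PRIMARY KEY,
    CHECK (foo % 10 = 0)
);

CREATE TABLE table1 (
    foo INT IDENTITY(1, 10) NOT NULL PRIMARY KEY,
    CHECK (foo % 10 = 1)
);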
I would suggest that possibly your design is flawed. Why are these separate tables? It would be better to put them in one table with one id field, and another field to identify whatever is making these separate tables (customer id, for instance). Then you can read about partitioning tables if you want them to be split by customer for performance reasons.

How do I manage identities with ETL?

I need help figuring out a workflow and I'm not sure how to go about it... Let's say I'm transforming (ETL?) data from Table A to Table B. Table A has a composite primary key A.a+A.b+A.c, while Table B has just an automatically populated identity column. How can I map the composite keys from A back to the identities created when inserting into B?
Preferably I would like to not have any columns in table B related to A's composite key because there are many other tables that need to undergo the same operation but don't have the same composite key structure.
If I understand you correctly, you can't relate records from table B back to the records of table A after the transformation unless you somehow capture a mapping between A's composite key and B's identifier during the transformation.
You could add a column to A and pre-compute the identifiers to be used when inserting into B. Then you would have a mapping. This could also be done using a separate mapping table, if you don't want to add a column to A.
If you don't want to override the default assignment of identifiers, then you will have to capture them during the load. Oracle provides the returning clause for insert in PL/SQL for this purpose. I'm not sure about SQL Server. It may also be possible to accomplish this by using a trigger on B to insert into a separate mapping table or update a column in A. Though that's likely to slow down your load considerably.
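A sketch of that RETURNING approach in PL/SQL; table_a, table_b, and the key_map mapping table are hypothetical names:

DECLARE
  new_id table_b.id%TYPE;
BEGIN
  FOR rec IN (SELECT a, b, c, payload FROM table_a) LOOP
    -- Insert into B and capture the identifier it generates
    INSERT INTO table_b (payload) VALUES (rec.payload)
    RETURNING id INTO new_id;

    -- Record the mapping from A's composite key to B's new identity
    INSERT INTO key_map (a, b, c, b_id)
    VALUES (rec.a, rec.b, rec.c, new_id);
  END LOOP;
  COMMIT;
END;
/

Row-by-row inserts like this are slower than a set-based load, but they capture the mapping in a single pass.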
If nothing else, you could create additional columns in B to hold the keys of A during the load, query out the mappings into a separate table afterwards, and then drop the extra columns.
I hope that helps.
Ask yourself exactly what you need the original keys for. The answer may vary depending on the source system. This may lead you to maintain a "source system" column and a "original source keys" column. The latter may need to be a comma-delimited list of the original keys.
Or, you may find that you never actually need to map back, so don't need to keep anything.

Can DB constraints ignore existing records and apply only to new data?

I want to learn the answer for different DB engines, but in our case:
we have some records that are not unique for a column, and now we want to make that column unique, which forces us to remove the duplicate values.
We use Oracle 10g. Is this reasonable? Or is this something like a goto statement :) ? Should we really delete? What if we had millions of records?
To answer the question as posted: no, it can't be done on any RDBMS that I'm aware of.
However, like most things, you can work around it by doing the following.
Create a composite key from a new column and the existing column:
You can make the data unique without deleting anything by adding a new column; call it PartialKey.
For existing rows, set PartialKey to a value that is unique within each group of duplicates (starting at zero, so every existing value of the column has a row with PartialKey 0).
Create a unique constraint on the existing column and PartialKey (you can do this because each of these pairs will now be unique).
For new rows, only ever use the default value of zero for PartialKey. Because zero has already been used by the existing rows, this forces new values in the existing column to be unique.
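A sketch in Oracle syntax (since the question mentions 10g); the table t and column col are placeholders:

ALTER TABLE t ADD partial_key NUMBER DEFAULT 0 NOT NULL;

-- Number existing rows within each group of duplicates, starting at zero
MERGE INTO t
USING (SELECT rowid AS rid,
              ROW_NUMBER() OVER (PARTITION BY col ORDER BY rowid) - 1 AS rn
       FROM t) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN UPDATE SET t.partial_key = s.rn;

-- Every (col, partial_key) pair is now unique
ALTER TABLE t ADD CONSTRAINT t_col_uq UNIQUE (col, partial_key);

-- New rows take the default of 0, so they collide with any col value
-- that already has a PartialKey-0 row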
IMPORTANT EDIT
This is weak: if you delete a row with PartialKey 0, another row can then be added with a value that already exists in the column, because the now-unused 0 in PartialKey makes the pair unique even though the column value is a duplicate.
You would need to ensure that either:
You never delete the row with PartialKey 0, or
You always have a dummy row with PartialKey 0, and you never delete it (or you immediately reinsert it automatically).
Edit: Bite the bullet and clean the data
If, as you said, you've just realised that the column should be unique, then you should (if possible) clean up the data. The above approach is a hack, and you'll find yourself writing more hacks when accessing the table (you may find you have two sets of logic for dealing with queries against that table: one for where the column IS unique, and one for where it's NOT). I'd clean this up now, or it'll come back and bite you in the arse a thousand times over.
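The classic Oracle clean-up, as a sketch (again with the placeholder names t and col; decide for yourself which row of each duplicate group you actually want to keep):

-- Keep one row per value of col (here, the one with the lowest rowid)
DELETE FROM t
WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY col);

-- Now the constraint can be added normally
ALTER TABLE t ADD CONSTRAINT t_col_uq UNIQUE (col);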
This can be done in SQL Server.
When you create a check constraint, you can set an option to apply it either to new data only or to existing data as well. The option of applying the constraint to new data only is useful when you know that the existing data already meets the new check constraint, or when a business rule requires the constraint to be enforced only from this point forward.
For example:
ALTER TABLE myTable
WITH NOCHECK
ADD CONSTRAINT myConstraint CHECK ( column > 100 )
You can do this using the ENABLE NOVALIDATE constraint state, but deleting the duplicates is much the preferred way.
You have to set your records straight before adding the constraints.
In Oracle you can put a constraint in an enable novalidate state. When a constraint is in the enable novalidate state, all subsequent statements are checked for conformity to the constraint; however, any existing data in the table is not checked. A table with enable novalidate constraints can contain invalid data, but it is not possible to add new invalid data to it. Enabling constraints in the novalidate state is most useful in data warehouse configurations that are uploading valid OLTP data.
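Applied to the unique-column case from the question, a sketch with placeholder names; note that a unique constraint enabled NOVALIDATE has to be backed by a non-unique index:

-- Existing duplicates are tolerated, new duplicates are rejected
CREATE INDEX t_col_ix ON t (col);

ALTER TABLE t ADD CONSTRAINT t_col_uq UNIQUE (col)
  USING INDEX t_col_ix
  ENABLE NOVALIDATE;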
