Use TABLOCKX for inserting non-duplicable data outside a primary key - SQL Server

In SQL Server I have a table with the following data: first name, last name, birthplace, etc.
The table has an identity ID column (the primary key of the table). In the interface of my application, the user can modify this data until they hit a button named "Close Record". When this happens, I have to generate a Closed_Record_ID with the syntax year + consecutive_number. The order in which records entered the database (given by the identity ID) may not be the same order in which they were closed, so I have to generate a new consecutive number.
How should I use the TABLOCKX hint, or what else should I do, to avoid duplicate consecutive numbers in the Closed_Record_ID column?
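One common pattern (a minimal sketch, not an authoritative answer; the table name MyTable, the 6-digit padding, and the @RecordID variable are assumptions for illustration) is to take an exclusive table lock while computing the next number, inside a single transaction, so two concurrent "Close Record" clicks cannot read the same maximum:

BEGIN TRANSACTION;

DECLARE @RecordID int = 123;  -- hypothetical: the identity ID of the record being closed
DECLARE @year char(4) = CAST(YEAR(GETDATE()) AS char(4));
DECLARE @next int;

-- TABLOCKX takes an exclusive table lock; HOLDLOCK keeps it until COMMIT,
-- so no other session can compute the same MAX in between.
SELECT @next = ISNULL(MAX(CAST(RIGHT(Closed_Record_ID, 6) AS int)), 0) + 1
FROM MyTable WITH (TABLOCKX, HOLDLOCK)
WHERE LEFT(Closed_Record_ID, 4) = @year;

UPDATE MyTable
SET Closed_Record_ID = @year + RIGHT('000000' + CAST(@next AS varchar(6)), 6)
WHERE ID = @RecordID;

COMMIT;

Either way, a UNIQUE constraint on Closed_Record_ID is worth adding as a safety net against duplicates.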

Related

Trying to add a new column as a foreign key in an existing table with data, and handling the existing data

A very simple example. I have a web API with a table in the database:
Employees
---------
Id
Name
and for example, I have 50 records.
Now I have to implement a feature to add extra info about the department. Because I have a one-to-many relationship, the new database schema adds a DepartmentId column:
Employees          Department
------------       ----------
Id                 Id
Name               Name
DepartmentId
For this, I ran the following query (I use SQL Server):
alter table Employees add constraint fk_employees_departmentid
foreign key (DepartmentId) references Department(Id);
But now I have some issues to handle
1) Now I have the 50 existing records without a DepartmentId. Must I add this value manually? What is the best practice? For 50 records it is possible, but what about 2000 records or more?
2) When I added the DepartmentId column I set it to allow null values (is that correct?), but as a foreign key I don't want to allow null values. Can I change it, or how can I handle it?
1) Now I have the 50 existing records without a DepartmentId. Must I add this value manually? What is the best practice? For 50 records it is possible, but what about 2000 records or more?
It depends. You could set up a new department for "unassigned" and assign them all to that, or you could send out a spreadsheet to HR saying "the following employees don't have an assigned department; what department are they in? PS: don't remove the EmployeeID column from the sheet before you send it back; I need it to update the DB". It's very much a business-context question, not a technical one. A few thousand records is easy to handle; it'll just take a bit of time to work through if you (or someone else) is doing it manually. This information is likely to be available somewhere else; you could perhaps send a list out to all department heads saying "are any of these people yours? Please remove all the names you don't have in your team from this spreadsheet and send it back to me", then update the DB based on what you get back.
As this is a one-time operation you don't need anything particularly whizzy for it - you can just get your Excel sheet back and, in an empty column, put:
="UPDATE emp SET departmentID = 5 WHERE id = " & A1
And fill it down to generate a bunch of UPDATE statements, copy the text into your query tool and hit go; there's no need to get fancy loading the sheet into a table, doing update joins, etc. - just sling together something hacky in Excel that will write the SQL for you, then copy/paste/run. If HR have sent back the sheet with a list of department names, put the department names and ids somewhere else on the sheet and use VLOOKUP or XLOOKUP to turn each name into a department number, then compose your SQL based on that.
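The filled-down column would produce something like the following (the employee and department ids here are made up for illustration; note the formula uses emp as a table alias, so substitute your actual table name, Employees), which you can paste straight into your query tool:

UPDATE Employees SET DepartmentId = 5 WHERE Id = 1;
UPDATE Employees SET DepartmentId = 5 WHERE Id = 2;
UPDATE Employees SET DepartmentId = 3 WHERE Id = 7;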
2) When I added the DepartmentId column I set it to allow null values (is that correct?), but as a foreign key I don't want to allow null values. Can I change it, or how can I handle it?
Foreign-keyed columns are allowed to have NULL values - it isn't the FK that imposes a "no nulls" restriction, it's the nullability of the column (alter the column to DepartmentId INT NOT NULL) that imposes that. A FK references a primary key, and the primary key may not be null (or in some DBs, at most one record can have a [partly] null PK), but you could just leave those departments null. If you do alter the column to be NOT NULL, you'll need to correct the NULL values first or the change will fail.
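As a sketch of that last step (assuming a placeholder "Unassigned" department with Id = 0 already exists in Department - a made-up value for illustration):

-- backfill the remaining NULLs with the placeholder department
UPDATE Employees SET DepartmentId = 0 WHERE DepartmentId IS NULL;

-- now the column can be made non-nullable
ALTER TABLE Employees ALTER COLUMN DepartmentId INT NOT NULL;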

The row limitation of a compound primary key in SQL Server 2014

I am going to insert 2.3 billion rows (2,300,000,000) from table_a into table_b. The schemas of table_a and table_b are identical; the only difference is that table_a doesn't have a primary key, while table_b has a 4-column compound primary key and 0 rows of data. I encountered this error message after 24 hours:
Msg 666, Level 16, State 2, Line 1
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID 422223771074560. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.
This is my compound PK in table_b, along with the sample query; any help will be appreciated.
column1: varchar(10), not null
column2: nvarchar(50), not null
column3: nvarchar(100), not null
column4: int, not null
Sample code
insert into table_b
select *
from table_a
where date < '2017-01-01' -- some filters here
According to the SQL Server documentation, part of creating a primary key includes creating a unique index on that same table:
When you create a PRIMARY KEY constraint, a unique index on the
column, or columns, is automatically created. By default, this index
is clustered; however, you can specify a nonclustered index when you
create the constraint.
When there is no unique index on the table, each row gets what the docs call a "uniqueifier", which is 4 bytes in length (i.e. ~2.14 billion combinations):
If the clustered index is not created with the UNIQUE property, the
Database Engine automatically adds a 4-byte uniqueifier column to the
table. When it is required, the Database Engine automatically adds a
uniqueifier value to a row to make each key unique. This column and
its values are used internally and cannot be seen or accessed by
users.
From this information and your error message we can tell two things:
There is a clustered index on the table
There is not a primary key on the table
Given the volume of data you're dealing with, I'm betting you have a Clustered Columnstore Index on the table, which in SQL Server 2014 cannot coexist with a primary key.
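You can verify both deductions with a quick catalog query (a sketch using the standard sys.indexes view):

SELECT name, type_desc, is_unique, is_primary_key
FROM sys.indexes
WHERE object_id = OBJECT_ID('table_b');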
One possible solution is to partition table_b on a particular column (one that has fewer than 15,000 unique values, per the limitations specified in the documentation). As a side note, the same partitioning effort could significantly reduce the run time of any queries using table_b, depending on which column is used in the partition function.
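A minimal sketch of what that might look like (the function, scheme, and index names plus the boundary values are made up; this assumes column4 has a small, known set of values):

CREATE PARTITION FUNCTION pf_table_b (int)
    AS RANGE RIGHT FOR VALUES (1, 2, 3);   -- hypothetical boundary values

CREATE PARTITION SCHEME ps_table_b
    AS PARTITION pf_table_b ALL TO ([PRIMARY]);

-- build the clustered index on the partition scheme, so each partition
-- gets its own pool of uniqueifier values; if table_b already has a
-- clustered index, rebuild it onto the scheme with DROP_EXISTING = ON instead
CREATE CLUSTERED INDEX cix_table_b
    ON table_b (column1, column2, column3, column4)
    ON ps_table_b (column4);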
You know that:
If the clustered index is not created with the UNIQUE property, the
Database Engine automatically adds a 4-byte uniqueifier column to the
table. When it is required, the Database Engine automatically adds a
uniqueifier value to a row to make each key unique. This column and
its values are used internally and cannot be seen or accessed by
users.
While it's unlikely that you will face an issue related to uniqueifiers, we have seen rare cases where a customer reaches the uniqueifier limit of 2,147,483,648, generating error 666.
And from this topic about the issue we have:
As of February 2018, the design goal for the storage engine is to not
reset uniqueifiers during REBUILDs. As such, a rebuild of the index
ideally would not reset uniqueifiers, and the issue would continue to occur
while inserting new data with a key value for which the uniqueifiers
were exhausted. But current engine behavior is different for one
specific case: if you use the statement ALTER INDEX ALL ON <table>
REBUILD WITH (ONLINE = ON), it will reset the uniqueifiers (across all
versions from SQL Server 2005 to SQL Server 2017).
So, if this is the cause of your issue, you can add an additional integer column and build the index over it.
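For example (a sketch; the column and index names are made up, and bigint is used because 2.3 billion rows would overflow an int identity):

ALTER TABLE table_b ADD row_seq bigint IDENTITY(1, 1) NOT NULL;

-- a UNIQUE clustered index never needs uniqueifiers; if a clustered
-- index already exists, recreate it with DROP_EXISTING = ON
CREATE UNIQUE CLUSTERED INDEX cix_table_b_unique
    ON table_b (column1, column2, column3, column4, row_seq);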

Can one alter a PostgreSQL table to have an auto-generated key after the table has values?

Is it possible to alter a table to make an existing column a serial auto-generated key, without adding a new column? Sorry if this question is a bit newbie-ish for PostgreSQL; I'm more of a SQL Server person but am moving to PostgreSQL.
In a nutshell, the program will copy an existing SQL Server database into PostgreSQL. The goal is a mirrored DB in PostgreSQL matching the SQL Server source, with the only caveat that one may selectively include/exclude any table or column as desired, or copy everything.
Given that the process copies all values, I thought one should be able to create the keys after the copy has finished, just as one may do in SQL Server. I thought PostgreSQL would have a method comparable to SQL Server's SET IDENTITY_INSERT [ON|OFF] so one may override the auto-generated key with a desired value, but I'm not seeing an equivalent in PostgreSQL. So my fallback is to create the mirrored records in Postgres without any keys and then alter the tables. But it seems that to fix up the table as desired one has to create a new column, and doing this breaks, or causes a headache fixing up, the RI for PK/FK relationships.
Any suggestions? Thanks in advance.
In PostgreSQL, the auto-generated key is always overridden if you insert an explicit value for it. If you don't specify a value (omit the column), or specify the keyword DEFAULT, a generated key is used.
Given table
CREATE TABLE t1 (id serial primary key, dat text);
then both these will get a generated key from sequence t1_id_seq:
INSERT INTO t1 (dat) VALUES ('fred');
INSERT INTO t1 (id, dat) VALUES (DEFAULT, 'bob');
This will instead provide its own value:
INSERT INTO t1 (id, dat) VALUES (42, 'joe');
You are responsible for ensuring that the provided value doesn't conflict with existing data, or with future values the identity sequence will generate. PostgreSQL will not notice that you manually inserted a row with id 42, and will not skip that value when its own sequence counter reaches it.
Usually what you do is load with provided values, then reset the sequence to the max of all keys already in the table, so it keeps counting from there for new local inserts.
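For the t1 example above, that reset might look like this (setval and pg_get_serial_sequence are standard PostgreSQL functions):

-- point the sequence at the highest key already loaded
SELECT setval(pg_get_serial_sequence('t1', 'id'),
              (SELECT max(id) FROM t1));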

Identity column not incremented by 1?

I've created 4 tables:
Patient (Id, Name, ..)
Donor (Id, Name, ..)
BloodBank (Id, Name, ..)
BloodBankDonors (DonorId, BloodBankId, ..)
I set the Id columns to Identity with increment 1 and seed 1, and made relationships between (Donor, BloodBank) and BloodBankDonors.
The problem is that when I entered some data into the BloodBank and Patient tables, the auto-generated Id values were 1, 3, 4 and 1, 4, 5, 8 respectively?!
So many things can cause gaps in an IDENTITY column - for example, rollbacks not resetting the IDENTITY value, deletes, etc.
So, why do you care about gaps? You shouldn't. If you need a contiguous sequence of numbers, stop using IDENTITY.
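If all you need is a gapless number for display, one option (a sketch, not part of the original answer) is to derive it at query time instead of storing it:

SELECT Id, Name,
       ROW_NUMBER() OVER (ORDER BY Id) AS DisplayNumber
FROM Patient;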
You might be deleting (with the DELETE command) some records from the BloodBank and Patient tables. Deleting records does not reset the identity counter, so the consumed Id values are skipped. You can use the code snippet below after a DELETE command:
DBCC CHECKIDENT('databasename.dbo.tablename', RESEED, number)
If number = 0, then on the next insert the auto-increment field will contain the value 1; if number = 101, then on the next insert it will contain the value 102.
For a clearer answer, please share the SQL script you are using to create the tables and insert the records.
Deleting data from a table does not reset the identity counter, so the consumed Id values are not reused.
Try truncating the table and re-entering the data (TRUNCATE resets the identity seed):
truncate table Patient
Hope this helps.

Prevent a non-primary-key column from having duplicates

I read on this site that it is recommended to use an auto-number ID rather than the username for the primary key because it will not change. However, how do I make the database enforce unique usernames? I am using Access.
In Access, open the table in Design View and click on the username field. In the "Field Properties" pane at the bottom, select Yes (No Duplicates) for the Indexed property. That will prevent duplicate username values from being entered.
Set unique constraint on username column (some main table for user).
You always can validate before inserting (for prompting user) or on trigger before insert.
I take it you've already made the table, so run this query:
ALTER TABLE users
ADD UNIQUE(username)
Change table name and column name in the query to match your table and column name, obviously.
Here's the reference: http://www.w3schools.com/sql/sql_unique.asp
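Equivalently, you can name the constraint, which makes it easier to drop or modify later (the constraint name here is made up):

ALTER TABLE users
ADD CONSTRAINT uq_users_username UNIQUE (username);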
