Issue updating sequences - database

I want to update the sequences for a table in RDS Postgres 11. Tried the following commands but I don't see the changes committed to the db. I even used commit.
What am I missing?
1. SELECT setval(pg_get_serial_sequence('table1', 'id'), coalesce(max(id),0) + 1, false) FROM table1;
2. SELECT setval('table1_id_seq', (SELECT COALESCE(max(id), 0) + 1 FROM table1));
ALTER TABLE table1 ALTER COLUMN id SET DEFAULT nextval('table1_id_seq');
CREATE TABLE public.table1 (
id serial4 NOT NULL, --default nextval('table1_id_seq'::regclass)
account_id int4 NOT NULL,
CONSTRAINT table1_pkey PRIMARY KEY (id) );
select currval('table1_id_seq') returns 6.

If you ...
don't see the changes committed to the db
... even though you positively committed the transaction (and you are connected to the right database), then I see only two possible explanations.
1. Barking up the wrong tree
In your question, public.table1 is schema-qualified in the CREATE TABLE statement, but not in either of:
SELECT setval(pg_get_serial_sequence('table1', 'id'), ...
SELECT setval('table1_id_seq', ...
If you have another table1 in another schema that comes before 'public' in the search_path, you end up modifying the respective sequence of that table.
Since the factory default in Postgres is search_path = "$user",public, the obvious suspect is a table of the same name in the "home" schema of the current role. See:
How does the search_path influence identifier resolution and the "current schema"
Solution:
Fix the search path or schema-qualify table and sequence names.
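A minimal sketch of the schema-qualified variant (assuming the table really lives in public):
-- check what the session resolves first
SHOW search_path;
-- schema-qualify both the table and the sequence lookup
SELECT setval(pg_get_serial_sequence('public.table1', 'id'), COALESCE(max(id), 0) + 1, false)
FROM public.table1;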
2. Missing privilege
It should be safe to assume your database role has SELECT (or even all) privileges on table1. But you need a separate, additional privilege on the underlying SEQUENCE to run setval() on it: UPDATE (plain USAGE only covers currval() and nextval()).
You should see an error message for missing privileges, though!
See:
Explicitly granting permissions to update the sequence for a serial column necessary?
Solution:
GRANT UPDATE ON SEQUENCE table1_id_seq TO the_role; -- your role here
Or work with an IDENTITY column instead of a serial (Postgres 10+), which inherits privileges from the table implicitly. See:
Auto increment table column
How to reset Postgres' primary key sequence when it falls out of sync?
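As a minimal sketch of the identity alternative (assuming the table can be defined that way; column names taken from the question):
CREATE TABLE public.table1 (
  id         int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  account_id int NOT NULL
);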

Related

Change dependent records on delete in SQL

I'm adding a new job category to a database. There are something like 20 tables that use jobCategoryID as a foreign key. Is there a way to create a function that would go through those tables and set the jobCategoryID to NULL if the category is ever deleted in the parent table? Inserting the line isn't the issue. It's just for a backout script if the product owners decide at a later date that they don't want to keep the new job category on.
You need to take some action. First of all, update the dirty records to NULL. For each referencing (child) table use:
UPDATE Child_Table
SET jobCategoryID = NULL
WHERE jobCategoryID NOT IN (SELECT jobCategoryID FROM Referenced_Table)
Then set the delete rule of the foreign keys to SET NULL.
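A sketch of that change for one child table, with hypothetical constraint and table names (the existing FK has to be dropped and recreated):
ALTER TABLE Child_Table DROP CONSTRAINT FK_Child_JobCategory;
ALTER TABLE Child_Table
  ADD CONSTRAINT FK_Child_JobCategory
  FOREIGN KEY (jobCategoryID) REFERENCES Parent_Table (jobCategoryID)
  ON DELETE SET NULL;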
If you care about performance, follow the instructions below as well.
When you have a foreign key but dirty records, those constraints are not trusted, which means the SQL optimizer cannot use them to build the best plans. Run this query to see which of them the optimizer considers untrusted:
SELECT * FROM sys.foreign_keys WHERE is_not_trusted = 1
For each constraint returned by the query above, adapt the statement below (with the table and constraint names) to re-validate it:
ALTER TABLE Table_Name WITH CHECK CHECK CONSTRAINT FK_Name

Can I use Inheritance of PostgreSQL for getting a particular set of columns in many tables in one database?

In PostgreSQL 10, I want to have the same set of columns for audit purposes in all transactional tables of a particular database, with the same foreign key constraints.
I am thinking of creating a master table with the set of 4 columns:
createdBy createdOn updatedBy updatedOn
Then inherit all transactional tables from this master table.
Is this the right approach, and is inheritance suited for this? When it comes to storage of data, how does it work behind the scenes when I insert records into the derived/child tables? What happens when data is deleted from child tables? Can I lock my master table so that no one accidentally deletes any records from it?
I see no problem with that approach but it works differently from your description.
I will use the following tables for illustration:
CREATE TABLE MasterAudit (
createdBy TEXT DEFAULT current_user,
createdOn TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp,
updatedBy TEXT DEFAULT current_user,
updatedOn TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp
);
CREATE TABLE SlaveAudit (
Val Text
) INHERITS(MasterAudit);
This definition allows you to skip the audit columns when inserting, or to use the DEFAULT keyword in inserts and updates.
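For example (both inserts below leave the audit columns at their defaults):
INSERT INTO SlaveAudit (Val) VALUES ('some value');
INSERT INTO SlaveAudit (createdBy, createdOn, updatedBy, updatedOn, Val)
VALUES (DEFAULT, DEFAULT, DEFAULT, DEFAULT, 'some other value');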
What does SELECT do (visible when using EXPLAIN)?
Behind the scenes, data inserted into SlaveAudit is stored in SlaveAudit; selecting from MasterAudit works as a UNION of the tables, including MasterAudit itself (it is valid to insert data into the parent table, although it would not make much sense in this case).
SELECT * FROM SlaveAudit reads data from SlaveAudit. The additional column Val from SlaveAudit is returned.
SELECT * FROM MasterAudit reads data from MasterAudit UNION SlaveAudit. The additional column Val is not returned.
SELECT * FROM ONLY MasterAudit reads data from MasterAudit only.
Illustration aside, the correct way to select from MasterAudit is by using the pseudo-column tableoid in order to determine where each record comes from.
Be careful though: it can take very long if all your tables inherit from MasterAudit.
SELECT relname, MasterAudit.*
FROM MasterAudit
JOIN pg_class ON MasterAudit.tableoid = pg_class.oid
Let's insert stuff.
INSERT INTO SlaveAudit(Val) VALUES ('Some value');
What query will result in deleting it?
DELETE FROM SlaveAudit will remove that record (obviously).
DELETE FROM MasterAudit will remove the record too. Oops! That is not what we want.
TRUNCATE TABLE SlaveAudit and TRUNCATE TABLE MasterAudit will have the same respective results as the two DELETEs.
Time to manage access.
IMHO, no commands apart from SELECT should ever be granted on MasterAudit.
Creating a table that inherits from MasterAudit can only be done by its owner. You may want to change the table's owner.
ALTER TABLE MasterAudit OWNER TO ...
Almost all the privileges must be revoked. This includes the table owner (but please note that superusers are not affected). SELECT on MasterAudit may be granted to everyone if you want.
REVOKE ALL ON MasterAudit FROM public, ...
GRANT SELECT ON MasterAudit TO public
Check the access by ensuring the following queries fail:
INSERT INTO MasterAudit VALUES(default, default, default, default)
DELETE FROM MasterAudit

Resetting the primary key to 1

I have a script for a Microsoft SQL Server database which has hundreds of tables, and the tables contain data as well. This is the database of a web application. What I want to do is delete the previous records and reset the primary key to 1 or 0.
I have tried
DBCC CHECKIDENT ('dbo.tbl', RESEED, 0);
but it does not work for me, as in most of the tables the primary key is not an identity.
I cannot truncate the tables, as their primary keys are used as FKs in many other tables.
I have also tried adding the identity specification to the primary key of a table, running the CHECKIDENT query, and then changing it back to a non-identity spec, but after adding a record again it starts from where it left off.
Making changes in the code is not an option for me.
Please help.
Based on your question I am not sure about the main objective. Why? If you need to truncate a lot of tables and change their structures to have an identity property, why can't you disable the FKs? In the past I have used a standard process to rebuild a table and migrate all the information. It involves a group of steps; I will try to help you, but you should follow the steps below.
Steps:
1) Disable the FKs so you can alter the structure of your tables. You can find a solution for this task in the following link (and see the sketch after these steps):
Temporarily disable all foreign key constraints
2) Alter the table to add the identity property; this is a classic ALTER TABLE operation.
3) Execute the statement you posted previously:
DBCC CHECKIDENT ('dbo.tbl',RESEED,0);
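As a rough sketch for step 1, assuming the undocumented sp_MSforeachtable procedure is available (test it on a non-production copy first):
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'; -- disable every FK
-- ... ALTER TABLE / DBCC CHECKIDENT work goes here ...
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'; -- re-enable and re-validate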
Try to follow this path, and if you have any problems just ask.
You cannot truncate a table that has relationships. You should remove the relationships first.
My understanding of this question:
You have a database with tables that you want to empty and next have them use primary key values starting at 0 or 1.
Some of these tables use an identity value and you already have a solution for those (you know you can find out which columns have an identity by using the sys.columns view? Look for the is_identity column; a query sketch follows this list).
Some tables do not use an identity but get their pk values from an unknown source, which we can't modify.
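A quick sketch of that lookup against the catalog views (no assumptions beyond a SQL Server database):
SELECT OBJECT_NAME(c.object_id) AS table_name, c.name AS column_name
FROM sys.columns AS c
WHERE c.is_identity = 1;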
The only solution I see is creating (or modifying) an AFTER INSERT trigger on those tables that subtracts from the new pk value.
E.g.: your "hidden generator" will generate a next value of 5254, but you want the next pk value to become one:
CREATE TRIGGER trg_sometable_ai
ON sometable
AFTER INSERT
AS
BEGIN
UPDATE st
SET st.pk_col = st.pk_col - 5253
FROM sometable AS st
INNER JOIN INSERTED AS i
ON i.pk_col = st.pk_col
END
You'll have to determine the next value and thus the "subtract value" for each table.
If the code also inserts child records into tables with a foreign key to this table, and uses the previously generated value, you have to modify those triggers as well...
This is a "last resort" solution and something I would recommend against in any scenario that has other options. Manipulating primary key values is generally not a good idea.

How do you add a NOT NULL Column to a large table in SQL Server?

To add a NOT NULL Column to a table with many records, a DEFAULT constraint needs to be applied. This constraint causes the entire ALTER TABLE command to take a long time to run if the table is very large. This is because:
Assumptions:
The DEFAULT constraint modifies existing records. This means that the db needs to increase the size of each record, which causes it to shift records on full data-pages to other data-pages and that takes time.
The DEFAULT update executes as an atomic transaction. This means that the transaction log will need to be grown so that a roll-back can be executed if necessary.
The transaction log keeps track of the entire record. Therefore, even though only a single field is modified, the space needed by the log will be based on the size of the entire record multiplied by the # of existing records. This means that adding a column to a table with small records will be faster than adding a column to a table with large records even if the total # of records are the same for both tables.
Possible solutions:
Suck it up and wait for the process to complete. Just make sure to set the timeout period to be very long. The problem with this is that it may take hours or days to do depending on the # of records.
Add the column but allow NULL. Afterward, run an UPDATE query to set the DEFAULT value for existing rows. Do not do UPDATE *. Update batches of records at a time or you'll end up with the same problem as solution #1. The problem with this approach is that you end up with a column that allows NULL when you know that this is an unnecessary option. I believe that there are some best practice documents out there that says that you should not have columns that allow NULL unless it's necessary.
Create a new table with the same schema. Add the column to that schema. Transfer the data over from the original table. Drop the original table and rename the new table. I'm not certain how this is any better than #1.
Questions:
Are my assumptions correct?
Are these my only solutions? If so, which one is the best? If not, what else could I do?
I ran into this problem at work as well, and my solution is along the lines of #2.
Here are my steps (I am using SQL Server 2005):
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn varchar(40) DEFAULT('')
2) Add a check constraint enforcing NOT NULL, with the NOCHECK option. NOCHECK means it is not checked against existing values:
ALTER TABLE MyTable WITH NOCHECK
ADD CONSTRAINT MyColumn_NOTNULL CHECK (MyColumn IS NOT NULL)
3) Update the values incrementally in the table:
GO
UPDATE TOP(3000) MyTable SET MyColumn = '' WHERE MyColumn IS NULL
GO 1000
The update statement will only update a maximum of 3000 records at a time. This allows saving a chunk of data at a time. I have to use "MyColumn IS NULL" because my table does not have a sequential primary key.
GO 1000 will execute the previous statement 1000 times. This will update up to 3 million records; if you need more, just increase this number. Once there is nothing left to update, the remaining executions simply affect 0 rows.
Here's what I would try:
Do a full backup of the database.
Add the new column, allowing nulls - don't set a default.
Set SIMPLE recovery, which truncates the tran log as soon as each batch is committed.
The SQL is: ALTER DATABASE XXX SET RECOVERY SIMPLE
Run the update in batches as you discussed above, committing after each one.
Reset the new column to no longer allow nulls.
Go back to the normal FULL recovery.
The SQL is: ALTER DATABASE XXX SET RECOVERY FULL
Backup the database again.
The use of the SIMPLE recovery model doesn't stop logging, but it significantly reduces its impact, because the server no longer has to keep committed transactions around for point-in-time recovery.
You could:
Start a transaction.
Grab a write lock on your original table so no one writes to it.
Create a shadow table with the new schema.
Transfer all the data from the original table.
Execute sp_rename to rename the old table out.
Execute sp_rename to rename the new table in.
Finally, you commit the transaction.
The advantage of this approach is that your readers will be able to access the table during the long process and that you can perform any kind of schema change in the background.
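A minimal sketch of that sequence with hypothetical table and column names (the ISNULL wrapper makes the copied column NOT NULL in the shadow table; indexes and constraints still have to be recreated):
BEGIN TRANSACTION;
-- hold a shared table lock so no one can write while we copy; readers are still allowed
SELECT t.*, ISNULL(CAST('N' AS char(1)), 'N') AS NewColumn
INTO dbo.MyTable_New
FROM dbo.MyTable AS t WITH (TABLOCK, HOLDLOCK);
-- swap the tables
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';
COMMIT TRANSACTION;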
Just to update this with the latest information.
In SQL Server 2012 this can now be carried out as an online operation in the following circumstances:
Enterprise Edition only
The default must be a runtime constant
For the second requirement examples might be a literal constant or a function such as GETDATE() that evaluates to the same value for all rows. A default of NEWID() would not qualify and would still end up updating all rows there and then.
For defaults that qualify SQL Server evaluates them and stores the result as the default value in the column metadata so this is independent of the default constraint which is created (which can even be dropped if no longer required). This is viewable in sys.system_internals_partition_columns. The value doesn't get written out to the rows until next time they happen to get updated.
More details about this here: online non-null with values column add in sql server 2012
Admittedly, this is an old question. My colleague recently told me that he was able to do it in one single ALTER TABLE statement on a table with 13.6M rows. It finished within a second in SQL Server 2012. I was able to confirm the same on a table with 8M rows. Did something change in later versions of SQL Server?
Alter table mytable add mycolumn char(1) not null default('N');
I think this depends on the SQL flavor you are using, but what if you took option 2, and at the very end altered the column to NOT NULL with the default value?
Would it be fast, since it sees all the values are not null?
If you want the column in the same table, you'll just have to do it. Now, option 3 is potentially the best for this because you can still have the database "live" while this operation is going on. If you use option 1, the table is locked while the operation happens and then you're really stuck.
If you don't really care if the column is in the table, then I suppose a segmented approach is the next best. Though, I really try to avoid that (to the point that I don't do it) because then like Charles Bretana says, you'll have to make sure and find all the places that update/insert that table and modify those. Ugh!
I had a similar problem, and went for your option #2.
It takes 20 minutes this way, as opposed to 32 hours the other way!!! Huge difference, thanks for the tip.
I wrote a full blog entry about it, but here's the important sql:
Alter table MyTable
Add MyNewColumn char(10) null default '?';
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 0 and 1000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 1000000 and 2000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 2000000 and 3000000
go
..etc..
Alter table MyTable
Alter column MyNewColumn char(10) not null;
And the blog entry if you're interested:
http://splinter.com.au/adding-a-column-to-a-massive-sql-server-table
I had a similar problem and I went with a modified #3 approach. In my case the database was in SIMPLE recovery mode and the table to which the column was to be added was not referenced by any FK constraints.
Instead of creating a new table with the same schema and copying the contents of the original table, I used the SELECT ... INTO syntax.
According to Microsoft (http://technet.microsoft.com/en-us/library/ms188029(v=sql.105).aspx)
The amount of logging for SELECT...INTO depends on the recovery model
in effect for the database. Under the simple recovery model or
bulk-logged recovery model, bulk operations are minimally logged. With
minimal logging, using the SELECT… INTO statement can be more
efficient than creating a table and then populating the table with an
INSERT statement. For more information, see Operations That Can Be
Minimally Logged.
The sequence of steps:
1. Move data from the old table to the new one while adding the new column with a default:
SELECT table.*, CAST('default' AS nvarchar(256)) AS new_column
INTO table_copy
FROM table
2. Drop the old table:
DROP TABLE table
3. Rename the newly created table:
EXEC sp_rename 'table_copy', 'table'
4. Create the necessary constraints and indexes on the new table.
In my case the table had more than 100 million rows and this approach completed faster than approach #2 and log space growth was minimal.
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn int default 0
2) Update the values incrementally in the table (same effect as accepted answer). Adjust the number of records being updated to your environment, to avoid blocking other users/processes.
declare @rowcount int = 1
while (@rowcount > 0)
begin
    UPDATE TOP(10000) MyTable SET MyColumn = 0 WHERE MyColumn IS NULL
    set @rowcount = @@ROWCOUNT
end
3) Alter the column definition to require not null. Run the following at a moment when the table is not in use (or schedule a few minutes of downtime). I have successfully used this for tables with millions of records.
ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL
I would use a CURSOR instead of a single UPDATE. The cursor updates the matching records in a batch, record by record; it takes time, but it does not lock the table.
If you want to avoid locks, use WAITFOR DELAY between the updates.
Also, I am not sure that a DEFAULT constraint changes existing rows.
Probably the NOT NULL constraint used together with DEFAULT causes the case described by the author.
If it does, add it at the end.
So pseudocode will look like:
-- without the NOT NULL constraint -- we will add it at the end
ALTER TABLE table ADD new_column INT DEFAULT 0

DECLARE fillNullColumn CURSOR LOCAL FAST_FORWARD FOR
SELECT
    key
FROM
    table WITH (NOLOCK)
WHERE
    new_column IS NULL

OPEN fillNullColumn

DECLARE
    @key INT

FETCH NEXT FROM fillNullColumn INTO @key

WHILE @@FETCH_STATUS = 0 BEGIN
    UPDATE
        table WITH (ROWLOCK)
    SET
        new_column = 0 -- default value
    WHERE
        key = @key

    WAITFOR DELAY '00:00:05' -- wait 5 seconds; keep in mind this limits it to about 12 rows per minute

    FETCH NEXT FROM fillNullColumn INTO @key
END

CLOSE fillNullColumn
DEALLOCATE fillNullColumn

-- finally add the NOT NULL constraint
ALTER TABLE table ALTER COLUMN new_column INT NOT NULL
I am sure there are still some syntax errors, but I hope this helps to solve your problem.
Good luck!
Vertically segment the table. This means you will have two tables, with the same primary key and exactly the same number of records... One will be the one you already have; the other will have just the key and the new non-null column (with the default value).
Modify all insert, update, and delete code so they keep the two tables in sync... If you want, you can create a view that "joins" the two tables together to create a single logical combination of the two that appears like a single table for client SELECT statements...
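A sketch with hypothetical names; the view presents the two vertical segments as one logical table (keeping the two tables in sync is still up to the insert/update/delete code):
CREATE TABLE dbo.MyTable_Extra (
    ID        int     NOT NULL PRIMARY KEY,   -- same key values as dbo.MyTable
    NewColumn char(1) NOT NULL DEFAULT 'N'
);
GO
CREATE VIEW dbo.MyTable_Full
AS
SELECT t.*, x.NewColumn
FROM dbo.MyTable AS t
JOIN dbo.MyTable_Extra AS x ON x.ID = t.ID;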

How to increment (or reserve) IDENTITY value in SQL Server without inserting into table

Is there a way to reserve or skip or increment value of identity column?
I have two tables joined in a one-to-one relationship. The first one has an IDENTITY PK column, and the second one an int PK (not IDENTITY). I used to insert into the first, get the ID, and insert into the second. And it works OK.
Now I need to insert values in second table without inserting into first.
Now, how do I increment the IDENTITY seed so I can insert into the second table, but leave a "hole" in the IDs of the first table?
EDIT: More info
This works:
-- I need a new seed number, but not a table row,
-- so I will insert a foo row, get its id, and delete it
DECLARE @NewID int;
INSERT INTO TABLE1 (SomeRequiredField) VALUES ('foo');
SET @NewID = SCOPE_IDENTITY();
DELETE FROM TABLE1 WHERE ID = @NewID;
-- Then I can insert in TABLE2
INSERT INTO TABLE2 (ID, Field1, Field2) VALUES (@NewID, 'Value', 'Value');
Once again - this works.
Question is can I get ID without inserting into table?
DBCC needs owner rights; is there a clean user callable SQL to do that?
This situation will make your overall data structure very hard to understand. If there is not a relationship between the values, then break the relationship.
There are ways to get around this to do what you are looking for, but typically it is in a distributed environment and not done because of what appears to be a data model change.
Then it's no longer a one-to-one relationship.
Just break the PK constraint.
Use a DBCC CHECKIDENT statement.
This article from SQL Server Books Online discusses the use of the DBCC CHECKIDENT method to update the identity seed of a table.
From that article:
This example forces the current identity value in the jobs table to a value of 30.
USE pubs
GO
DBCC CHECKIDENT (jobs, RESEED, 30)
GO
I would look into the OUTPUT INTO feature if you are using SQL Server 2005 or greater. This would allow you to insert into your primary table, and take the IDs assigned at that time to create rows in the secondary table.
I am assuming that there is a foreign key constraint enforced - because that would be the only reason you would need to do this in the first place.
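A minimal sketch of that pattern, reusing the table and column names from the question (the second table's other columns are hypothetical):
DECLARE @NewIds TABLE (ID int);

INSERT INTO TABLE1 (SomeRequiredField)
OUTPUT inserted.ID INTO @NewIds (ID)
VALUES ('foo');

INSERT INTO TABLE2 (ID, Field1, Field2)
SELECT ID, 'Value', 'Value' FROM @NewIds;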
How do you plan on matching them up later? I would not put records into the second table without a record in the first; that is why it is set up in a foreign key relationship, to stop that sort of action. Just why do you not want to insert records into the first table anyway? If we knew more about the type of application and why this is necessary, we might be able to guide you to a solution.
this might help
SET IDENTITY_INSERT [ database_name . [ schema_name ] . ] table { ON | OFF }
http://msdn.microsoft.com/en-us/library/aa259221(SQL.80).aspx
It allows explicit values to be inserted into the identity column of a table.
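For example (hypothetical ID value; note that it still performs an insert into the first table, and the current identity value moves forward if the explicit value is higher than the current seed):
SET IDENTITY_INSERT TABLE1 ON;
INSERT INTO TABLE1 (ID, SomeRequiredField) VALUES (100, 'reserved');
SET IDENTITY_INSERT TABLE1 OFF;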
