I have gone through a bunch of documentation for PostgreSQL 10 partitioning, but I am still not clear on whether existing tables can be partitioned. Most of the posts discuss partitioning existing tables using PostgreSQL 9.
Also, the official PostgreSQL documentation (https://www.postgresql.org/docs/current/static/ddl-partitioning.html) says: 'It is not possible to turn a regular table into a partitioned table or vice versa'.
So, my question is can existing tables be partitioned in PostgreSQL 10?
If the answer is YES, my plan is :
Create the partitions.
Alter the existing table to include the range so new data goes into the new partition. Once that is done, write a script that loops over the master table and moves the data into the right partitions.
Then, truncate the master table and enforce that nothing can be inserted into it.
If the answer is NO, my plan is to make the existing table the first partition:
Create a new parent table and children (partitions).
Perform a quick transaction that renames the existing table to a partition table name and the new parent to the actual table name (a sketch of this approach follows).
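For illustration, a minimal sketch of that second approach in PostgreSQL 10, assuming a hypothetical events table partitioned by range on a created_at column (all table and column names here are made up, not taken from the question):

BEGIN;

-- Rename the existing table so it can become the first partition.
ALTER TABLE events RENAME TO events_2017;

-- Create a new partitioned parent under the original name.
CREATE TABLE events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

-- Attach the old table as the partition holding the historical data.
-- ATTACH PARTITION scans the table to check that all rows fall within the range.
ALTER TABLE events ATTACH PARTITION events_2017
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

-- Create a partition for new data.
CREATE TABLE events_2018 PARTITION OF events
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

COMMIT;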
Are there better ways to partition existing tables in PostgreSQL 10/9?
I have a flat file with the following columns:
Device Name
Device Type
Device Location
Device Zone
I need to insert these into a SQL Server table called Devices.
The Devices table has the following structure:
DeviceName
DeviceTypeId (foreign key from DeviceType table)
DeviceLocationId (foreign key from DeviceLocation table)
DeviceZoneId (foreign key from DeviceZone table)
The DeviceType, DeviceLocation and DeviceZone tables are already prepopulated.
Now I need to write an ETL process that reads the flat file and, for each row, gets the DeviceTypeId, DeviceLocationId and DeviceZoneId from the corresponding tables and inserts a row into the Devices table.
I am sure this is not a new problem, but it has been a while since I worked on such SSIS packages, and help would be appreciated.
Load the flat file content into a staging table and write a stored procedure to handle the inserts and updates in T-SQL.
Having FK relationships between the destination tables can cause a lot of trouble with a single data flow and a multicast.
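A minimal T-SQL sketch of that approach, assuming a staging table named DevicesStaging and name columns on the lookup tables (every object name below is an assumption, not taken from the question):

-- Resolve the lookup IDs with joins and insert into the destination table.
-- LEFT JOINs plus a WHERE ... IS NULL check could instead be used to catch rows with no match.
INSERT INTO dbo.Devices (DeviceName, DeviceTypeId, DeviceLocationId, DeviceZoneId)
SELECT s.DeviceName,
       dt.DeviceTypeId,
       dl.DeviceLocationId,
       dz.DeviceZoneId
FROM dbo.DevicesStaging AS s
JOIN dbo.DeviceType     AS dt ON dt.DeviceTypeName     = s.DeviceType
JOIN dbo.DeviceLocation AS dl ON dl.DeviceLocationName = s.DeviceLocation
JOIN dbo.DeviceZone     AS dz ON dz.DeviceZoneName     = s.DeviceZone;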
The problem is that you have no control over the order of the inserts so the child record could be inserted before the parent.
Also, for identity columns on the tables, you cannot retrieve the identity value from one stream and use it in another without using subsequent merge joins.
The simplest way to do that is by using a Lookup Transformation to get the ID for each value. Be aware that duplicates may cause problems: you have to make sure each value is not found multiple times in the foreign tables.
Also, make sure to redirect rows that have no match into a staging table to check them later.
You can refer to the following article for a step by step guide to Lookup Transformation:
An Overview of the LOOKUP TRANSFORMATION in SSIS
My company has an application with a bunch of database tables that used to use a sequence table to determine the next value to use. Recently, we switched this to using an identity property. The problem is that in order to upgrade a client to the latest version of the software, we have to change about 150 tables to identity. To do this manually, you can right click on a table, choose design, change (Is Identity) to "Yes" and then save the table. From what I understand, in the background, SQL Server exports this to a temporary table, drops the table and then copies everything back into the new table. Clients may have their own unique indexes and possibly other things specific to the client, so making a generic script isn't really an option.
It would be really awesome if there was a stored procedure for scripting this task rather than doing it in the GUI (which takes FOREVER). We made a macro that can go through and do this, but even then, it takes a long time to run and is error prone. Something like: exec sp_change_to_identity 'table_name', 'column name'
Does something like this exist? If not, how would you handle this situation?
Update: This is SQL Server 2008 R2.
This is what SSMS seems to do:
1) Obtain and drop all the foreign keys pointing to the original table.
2) Obtain the indexes, triggers, foreign keys and statistics of the original table.
3) Create a temp_table with the same schema as the original table, with the identity field.
4) Insert into temp_table all the rows from the original table (IDENTITY_INSERT ON).
5) Drop the original table (this will drop its indexes, triggers, foreign keys and statistics).
6) Rename temp_table to the original table name.
7) Recreate the foreign keys obtained in (1).
8) Recreate the objects obtained in (2).
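A rough T-SQL sketch of the core rebuild (steps 3 to 6), assuming a hypothetical table dbo.MyTable with an int key column Id and one other column; constraint, index and trigger handling is omitted and all names are illustrative:

-- 3) Create a copy of the table with Id defined as an identity column.
CREATE TABLE dbo.MyTable_tmp (
    Id   int IDENTITY(1,1) NOT NULL,
    Name nvarchar(100) NULL
);

-- 4) Copy the rows, preserving the existing key values.
SET IDENTITY_INSERT dbo.MyTable_tmp ON;
INSERT INTO dbo.MyTable_tmp (Id, Name)
SELECT Id, Name FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable_tmp OFF;

-- 5) Drop the original table, then 6) rename the copy to take its place.
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_tmp', 'MyTable';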
Is it possible to add a cluster to an existing table? For example...
I have a table:
CREATE TABLE table_name(
t_id number PRIMARY KEY,
t_name varchar2(50));
Cluster:
CREATE CLUSTER my_cluster
(c_id NUMBER) SIZE 100;
Is there a command like ALTER TABLE table_name ADD CLUSTER my_cluster(t_id); or something like that?
Because I want the table to look something like this:
CREATE TABLE table_name(
t_id number PRIMARY KEY,
t_name varchar2(50))
CLUSTER my_cluster(t_id);
And dropping all connected tables isn't really what I want to do.
Thanks
You really need to understand what a cluster is. From the docs:
"Clusters are groups of one or more tables physically stored
together because they share common columns and are often used
together. Because related rows are physically stored together, disk
access time improves." (emphasis mine)
The point being, the tables in a cluster are co-located. This is a physical arrangement. So, for the database to cluster existing tables, we must drop and re-create them.
It is possible to minimise the downtime by building the clustered table under a different name. You will need to keep the data in sync with the live table until you are ready to swap. You will need to restrict access to the database while you do this, to prevent data loss. Then you rename the old table, rename the clustered table to the proper name, run the necessary grants, and recompile invalid procedures, synonyms, etc.
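A minimal Oracle sketch of that rebuild, reusing the cluster and table from the question; the interim table name, the cluster index and the final sync/swap steps are assumptions:

-- An indexed cluster needs a cluster index before rows can be inserted into it.
CREATE INDEX my_cluster_idx ON CLUSTER my_cluster;

-- Build a clustered copy of the table under a temporary name.
CREATE TABLE table_name_new (
    t_id   NUMBER PRIMARY KEY,
    t_name VARCHAR2(50)
) CLUSTER my_cluster (t_id);

INSERT INTO table_name_new SELECT t_id, t_name FROM table_name;
COMMIT;

-- After restricting access and re-syncing any late changes, swap the names.
ALTER TABLE table_name RENAME TO table_name_old;
ALTER TABLE table_name_new RENAME TO table_name;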
Normally I would do a DELETE FROM XXX, but on this table that's very slow; it normally has about 500k to 1m rows in it (one column is a varbinary(MAX), if that matters).
Basically I'm wondering if there is a quick way to empty the table of all content; it's actually quicker to drop and recreate it than to delete the content via the DELETE SQL statement.
The reason I don't want to recreate the table is that it's heavily used, and drop/recreate, I assume, will destroy indexes and stats gathered by SQL Server.
I'm also hoping there is a way to do this because there is a "clever" way to get the row count via sys.sysindexes, so I'm hoping there is an equally clever way to delete the content.
TRUNCATE TABLE is faster than DELETE FROM XXX. DELETE is slow because it works one row at a time. There are a few situations where TRUNCATE doesn't work, which you can read about on MSDN.
As others have said, TRUNCATE TABLE is far quicker, but it does have some restrictions (taken from here):
You cannot use TRUNCATE TABLE on tables that:
- Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that references itself.)
- Participate in an indexed view.
- Are published by using transactional replication or merge replication.
For tables with one or more of these characteristics, use the DELETE statement instead.
The biggest drawback is that if the table you are trying to empty has foreign keys pointing to it, then the truncate call will fail.
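If you control the referencing table, one hedged workaround is to drop the incoming foreign key, truncate, and then recreate the key; the table and constraint names below are purely illustrative:

-- Drop the foreign key that points at the table to be emptied.
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_BigTable;

-- The fast, minimally logged truncate is now allowed.
TRUNCATE TABLE dbo.BigTable;

-- Recreate the foreign key afterwards (this assumes the referencing rows were
-- also removed; otherwise the constraint would have to be added WITH NOCHECK).
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_BigTable
    FOREIGN KEY (BigTableId) REFERENCES dbo.BigTable (Id);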
You can rename the table in question, create a table with an identical schema, and then drop the original table at your leisure.
See the MySQL 5.1 Reference Manual for the RENAME TABLE and CREATE TABLE commands.
RENAME TABLE tbl TO tbl_old;
CREATE TABLE tbl LIKE tbl_old;
DROP TABLE tbl_old; -- at your leisure
This approach can help minimize application downtime.
I would suggest using TRUNCATE TABLE; it's quicker and uses fewer resources than DELETE FROM xxx.
Here are the related documentation articles:
TRUNCATE TABLE in MS SQL Server
TRUNCATE TABLE in MySQL
I have a quick question for you SQL gurus. I have existing tables without a primary key column, and Identity is not set. Now I am trying to modify those tables by making an existing integer column the primary key and adding identity values for that column. My question is: should I first copy all the records from the table to a temp table before making those changes? Do I lose all the previous records if I run the T-SQL command to make the primary key and add the identity column on those tables? What approach should I take? For example:
1) Create temp table to copy all the records from the table to be modified
2) Load all the records to the temptable
3) Make changes on the table schema
4) Finally load the records from the temp table to the original table.
Or
are there better ways than this? I really appreciate your help.
Thanks
Tools>Options>Designers>Table and Database Designers
Uncheck "Prevent saving changes that require table re-creation"
[Edit] I've tried this with populated tables and I didn't lose data, but I don't really know much about this.
Hopefully you don't have too many records in the table. What happens if you use Management Studio to change an existing field to identity is that it creates another table with the identity field set, turns identity insert on, inserts the records from the original table, and then turns identity insert off. Then it drops the old table and renames the table it just created. This can be quite a lengthy process if you have many records. If so, I would script this out and then do it in a job that runs during off hours, because the table will be completely locked while you do this.
Just do all of your changes in Management Studio and copy/paste the generated script into a file. DON'T SAVE CHANGES at this point. Look over and edit that script as necessary; it will probably do almost exactly what you are thinking (it will drop the original table and rename the temp one to the original's name), but it will handle all constraints and FKs as well.
If your existing integer column is unique and suitable, there should be no problem converting it to a PK.
Another alternative, if you don't want to use the existing column: you can add a new PK column to the main table, populate it and seed it, then run update statements to update all other tables with the new PK.
Whatever way you do it, make sure you do a back-up first!!
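If the existing integer column is already unique, a minimal sketch of promoting it to the primary key, with assumed table and column names, looks like this; note that this does not make the column an IDENTITY column, since SQL Server cannot add the IDENTITY property to an existing column without rebuilding the table:

-- The column must be NOT NULL before it can be the primary key.
ALTER TABLE dbo.MyTable ALTER COLUMN ExistingId int NOT NULL;

-- Promote the existing column to the primary key; no rows are copied or lost.
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (ExistingId);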
You can always add the IDENTITY column after you have finished copying your data around. You can also then reset the IDENTITY seed to the max integer + 1. That should solve your problems.
DBCC CHECKIDENT ('MyTable', RESEED, n)
Where n is the number you want the identity to start at.
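For example, a hedged way to compute the reseed value from the current data, assuming a table dbo.MyTable with an identity column Id:

-- Reseed to the current maximum so subsequent inserts continue after it (names are assumptions).
DECLARE @max int = (SELECT ISNULL(MAX(Id), 0) FROM dbo.MyTable);
DBCC CHECKIDENT ('dbo.MyTable', RESEED, @max);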