SQL Server - best way to validate SQL schema and seed data - sql-server

I am working on a web-based app with a SQL Server backend. This is a somewhat legacy app that I began working on a few months ago, and the database versioning situation is a bit of a mess.
There is one set of scripts for a new install and another set for upgrading from version to version. Some of the scripts update the schema, others insert seed data. Other people, not developers, do the deployment and running of these scripts. Because of the versioning situation, there are sometimes issues with the scripts.
I'm revising the scripts to be hardier, less likely to fail, and to log better when they do fail.
In the meantime, what I want to do is create a validation script that we can run after a deployment. The script would run and check that all the necessary tables are there with the expected schema, that the seed data scripts ran, and that everything is as it should be. Other than writing a ton of 'if not exists' (write to log) type statements, is there a better way to do this?
I can sometimes use Visual Studio schema compare to compare the newly updated database to an existing one, but data compare is not feasible in our environment.

I have the same situation on some of my projects and I use the following approach:
I have written a script that collects all the significant data about the DB (descriptions of tables, columns, their order and types, indexes, etc., and even stored procedures) as XML and then calculates a hash of it. The script then checks the calculated hash value against the expected one (calculated in the development environment).
As a result I have a quite simple way to check the consistency of the DB: before development, to ensure that I have the actual DB state, and during test and production deployments, to ensure that all expected changes were included in the release.
I suppose you can use a similar approach, but include some additional information in the list of significant data to cover the seed data too. Of course, calculating a hash of all the data in the DB isn't a good idea, but if you know your DB you can find simple signals to check it (row count, max Id, last modified date, etc.).
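A minimal sketch of the schema-hash part in T-SQL might look like this (it assumes SQL Server 2016 or later, where HASHBYTES accepts NVARCHAR(MAX) input, and it only covers column metadata; extend the SELECT with whatever else you consider significant):
-- Collect schema metadata as XML, hash it, and compare the hash
-- against the value calculated in the development environment.
DECLARE @SchemaXml NVARCHAR(MAX) =
(
    SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, DATA_TYPE,
           CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_NAME, ORDINAL_POSITION
    FOR XML PATH('column'), ROOT('schema')
);
SELECT HASHBYTES('SHA2_256', @SchemaXml) AS SchemaHash;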

Using INFORMATION_SCHEMA in the versioned database would be helpful for this.
First, create a persisted table named DBSCHEMA to store all versions of the database, seeded with the initial version whose changes (versions) you want to track:
SELECT ID=IDENTITY(int,1,1), TABLE_CATALOG, TABLE_NAME, COLUMN_NAME,
DATA_TYPE, ORDINAL_POSITION, '1.0.1' AS VERSION, GETDATE() AS VersionDate
INTO DBSCHEMA FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION
Note: you can pull additional columns from INFORMATION_SCHEMA for precision, text length, etc.; I did not include those here.
With each subsequent change to the database schema, you add a new version, e.g. '1.0.2', and insert the new schema into the same table, incrementing the version each time.
INSERT DBSCHEMA (TABLE_CATALOG, TABLE_NAME, COLUMN_NAME, DATA_TYPE, ORDINAL_POSITION,
VERSION, VersionDate)
SELECT TABLE_CATALOG, TABLE_NAME, COLUMN_NAME, DATA_TYPE, ORDINAL_POSITION, '1.0.2',
GETDATE()
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION
Now after you have multiple versions, you can check for changes between the versions with a query similar to the following to see what tables have changed in the database. I would change this up depending on if I am looking for tables or columns that changed.
SELECT T1.TABLE_NAME, T1.COLUMN_NAME FROM DBSCHEMA T1 INNER JOIN DBSCHEMA T2 ON
T1.TABLE_NAME = T2.TABLE_NAME
WHERE T1.VERSION = '1.0.1'
AND (T1.COLUMN_NAME NOT IN (SELECT COLUMN_NAME FROM DBSCHEMA WHERE VERSION = '1.0.2')
OR T1.TABLE_NAME NOT IN (SELECT TABLE_NAME FROM DBSCHEMA WHERE VERSION = '1.0.2'))
The results of this query will give you changes that have occurred for tables and columns, but you can get more granular and check for data type, precision, etc. if you need more information. (There is probably a CTE query that would work better for this last query, but the one above gives you an idea of how to find the changes between the versions.)
Regarding the seed data, I would create another table for this that uses the same version as the one you inserted into DBSCHEMA and stores the table name and count(*) of each table containing seed data. This will allow you to see whether the seed data is there, and/or whether the count of seed data changed between versions.
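For example, a minimal sketch of such a companion table (all names here are placeholders; dbo.StatusCodes stands in for one of your seed tables):
CREATE TABLE DBSEEDDATA (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    TABLE_NAME SYSNAME NOT NULL,
    ROW_COUNT INT NOT NULL,
    VERSION VARCHAR(20) NOT NULL,
    VersionDate DATETIME NOT NULL DEFAULT GETDATE()
);
-- repeat per seed table, or generate these inserts dynamically
INSERT INTO DBSEEDDATA (TABLE_NAME, ROW_COUNT, VERSION)
SELECT 'dbo.StatusCodes', COUNT(*), '1.0.2' FROM dbo.StatusCodes;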

Related

Find Data Types and Column Names For Stored Procedure Result in SQL Server

I am using Dapper as an ORM tool to consume a SQL Server database in C# code. To do this, I am creating strongly-typed classes that mirror the database structure so that it is easy to get objects back and forth between C# and SQL Server.
To quickly model C# classes based on SQL Server table definitions (and to write unit tests that ensure that the data layer and database are in sync), I use queries like this:
SELECT
SchemaName = c.table_schema, TableName = c.table_name,
ColumnName = c.column_name, DataType = data_type,
MaxLength = ISNULL(c.CHARACTER_MAXIMUM_LENGTH, -1)
FROM
information_schema.columns c
INNER JOIN
information_schema.tables t ON c.table_name = t.table_name
AND c.table_schema = t.table_schema
AND t.table_type = 'BASE TABLE'
ORDER BY
SchemaName, TableName, ordinal_position;
This returns each schema and table name with each of its columns and data types, which makes things easy for most CRUD-style operations.
The problem I have now is creating model classes to consume certain stored procedures, which join many tables and use aliases for a lot of the columns (eg, join to the employee table twice, alias the first set of employee details as ManagerName, ManagerEmail and the second set as WorkerName, WorkerEmail, etc.) Some of these procedures return hundreds of columns, and I am having to wade through table definitions to see which columns need to be modeled as Int16/Int32/Int64, etc., plus the procedures are not set in stone at this point, and I don't want to have to manually audit these things to make sure they are in sync if I can help it.
So, my question is: does SQL Server provide a way to see "metadata" about a stored procedure's result set (ie, the column names and data types that a procedure will return)? If not, is there a setting in SSMS that will display a data type alongside the column name, or some other creative little hack that will make this task easier?
Depending on how your procedure is written you may be able to get what you need from sp_describe_first_result_set.
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql
You would need to parse the result set, but that's easier than manually hunting down all of the columns.
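For example (the procedure name and parameter below are made up):
EXEC sp_describe_first_result_set
    @tsql = N'EXEC dbo.GetEmployeeReport @DepartmentId = 1';
-- returns one row per output column: name, system_type_name, is_nullable, etc.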

How can I compare tables in two different databases using SQL?

I'm trying to compare the schemas of two tables that exist in different databases. So far, I have this query
SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('table1')
The only thing is that I don't know how to use sys.columns to reference a database other than the one the query is connected to. I tried this
SELECT * FROM db.sys.columns WHERE object_id = OBJECT_ID('table1')
but it didn't find anything.
I'm using SQL Server 2005
Any suggestions? thanks!
Take a look at redgate's SQL Compare.
To answer your specific question, you need to fully qualify the table reference.
SELECT * FROM db.sys.columns WHERE object_id = OBJECT_ID('db.SchemaName.table1')
Late answer, but hopefully useful.
Even though chama asked for SQL solutions, I'd still recommend using third-party tools such as ApexSQL Diff or the Red Gate tools Joe already mentioned (I've used both and they worked great).
The reason is that a query comparing two tables via the information schema has to be quite complex in order to catch all differences.
Note that all of the examples mentioned here only cover columns; none of the queries shown will catch a difference between nvarchar(20) and nvarchar(50), or differences in foreign keys, indexes, and so on.
The short answer is yes – this is possible using the information schema views, but it can be rather complex if you want to compare every detail of the two tables.
All you need is to specify the DB name and schema when calling the OBJECT_ID function, like:
SELECT *
FROM DB_NAME.sys.columns
WHERE object_id = OBJECT_ID('DB_NAME.SCHEMA_NAME.table1')
Try the information_schema. eg:
select *
from
db1.information_schema.columns col1
join db2.information_schema.columns col2
on col1.table_schema = col2.table_schema
and col1.table_name = col2.table_name
and col1.column_name = col2.column_name
...
The information_schema simplifies sticking together the information from sys.columns, sys.objects, etc. It exists automatically in your DB. I think it's actually an ISO standard thing, so it should work on various DB systems.
More information about the information_schema can be found here
Comparing whether the objects or columns exist in both schemas is only a tiny part of the solution. What if they exist in both databases but are different?
For my bsn ModuleStore project, I implemented a scripting routine which actually scripts most DB objects including table and view columns, indexes, namespaces etc. as XML using T-SQL code only. This may be a good place to start. You can find it on Google code, and the file in question (which generates the SQL query for dumping the object schema to XML) is here.
Code -
if object_id('tempdb..#a') is not null drop table #a
if object_id('tempdb..#b') is not null drop table #b
select *
into #a
from [databasename1].information_schema.columns a
--where table_name = 'aaa'
select *
into #b
from [databasename2].information_schema.columns b -- add linked server name and db as needed
--where table_name = 'bbb'
select distinct(a.table_name), b.TABLE_SCHEMA + '.' + (b.table_name) TableName, b.TABLE_CATALOG DatabaseName
from #a a
right join #b b
on a.TABLE_NAME = b.TABLE_NAME and a.TABLE_SCHEMA = b.TABLE_SCHEMA
where a.table_name is null -- and a.table_name not like '%sync%'
Just in case you are using MS VS 2015 (Community is a free download): the SQL Server tools include a Schema Comparison tool. "SQL Server Data Tools (SSDT) includes a Schema Compare utility that you can use to compare two database definitions".
This is a GPL Java program I wrote for comparing data in any two tables, with a common key and common columns, across any two heterogeneous databases using JDBC: https://sourceforge.net/projects/metaqa/
It intelligently forgives (numeric, string and date) data type differences by reducing them to a common format. The output is a sparse tab delimited file with .xls extension for use in a spreadsheet.
The program ingests SQL that is used to produce a source table that can be compared with the target table. The target table SQL can be generated automatically. The target table is read one row at a time and therefore should be indexed on the common key.
It detects missing rows on either side and common keyed rows with other optional column differences. Obviously the meta data can be accessed by SQL so whether your concern is with the data, or with the meta-data, it will still work.
This is very powerful in a data migration or System migration project, and also for auditing interfaces. You will be astounded at the number of errors it detects. Minimal false positives still do occur.
Informix, Oracle and SQL-Server were the first JDBC targets and you can extend that list if desired.

error when insert into linked server

I want to insert some data on the local server into a remote server, and used the following sql:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports 4 part names.
You would need to create the table first, then use INSERT..SELECT to populate it.
(See note in Arguments section on MSDN: reference)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
SELECT INTO is minimally logged.
SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first, to prevent the implicit distributed transaction, then move the data into your target table from the temp table.
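A rough sketch of that temp-table variant (the bracketed names are the same placeholders used in the example below):
SELECT *
INTO #staging
FROM [server_a].[database].[schema].[table];  -- pull across the link, minimally logged
INSERT INTO [database].[schema].[table]
SELECT * FROM #staging;  -- purely local insert, no distributed transaction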
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
I've experienced the same issue and I've performed the following workaround:
If you are able to log on (with the SQL Server client tools or sqlcmd) to the remote server where you want to insert the data, you can rebuild your query the other way around:
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
#2Toad: For sure INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating the table (identity options and so on) before you carry out your insert operation.
I may be late to the party, but this was the first post I saw when I searched for the 4 part table name insert issue to a linked server. After reading this and a few more posts, I was able to accomplish this by using EXEC with the "AT" argument (for SQL2008+) so that the query is run from the linked server. For example, I had to insert 4M records to a pseudo-temp table on another server, and doing an INSERT-SELECT FROM statement took 10+ minutes. But changing it to the following SELECT-INTO statement, which allows the 4 part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query is run on DestinationServer, changes to the right database, ensures the table does not already exist, and selects from the SourceServer. Minimally logged, and no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.

How can I create a table in SQL Server 2005? This is totally new for me.

I have one problem: I simply want to know how I can create a table that can easily be used as a back end for my solution, which is in VB 2010.
I also want to know which data source type to choose in VB.NET for a SQL Server database, since there are two or three with slightly different names.
I'm struggling to understand your question, but in an effort to help:
I'm guessing that you want to programmatically create a table to be used by other parts of your VB application, but that you need to ensure the table name is unique...? If I'm right in that assumption, then see below.
You can use this query:
SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'
To get a list of table names currently in your database. You can compare the TABLE_NAME column values with your desired table name. If it already exists, change the name by adding a differentiator, e.g. MyTable, MyTable1, MyTable2, etc.
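A small sketch of that check (MyTable and its columns are just example names):
IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
           WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME = 'MyTable')
    PRINT 'Name already taken - try MyTable1, MyTable2, ...';
ELSE
    CREATE TABLE MyTable (ID INT IDENTITY(1,1) PRIMARY KEY, Name NVARCHAR(100));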
Alternatively, SQL Server accepts Guids as table names.
Disclaimer: IMHO, if your table is not going to be temporary, deciding table names in this manner is a pretty ugly solution and lacks supportability.

Hidden Features of PostgreSQL [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm surprised this hasn't been posted yet. Any interesting tricks that you know about in Postgres? Obscure config options and scaling/perf tricks are particularly welcome.
I'm sure we can beat the 9 comments on the corresponding MySQL thread :)
Since postgres is a lot more sane than MySQL, there are not that many "tricks" to report on ;-)
The manual has some nice performance tips.
A few other performance related things to keep in mind:
Make sure autovacuum is turned on
Make sure you've gone through your postgresql.conf (effective_cache_size, shared_buffers, work_mem ... lots of options there to tune).
Use pgpool or pgbouncer to keep your "real" database connections to a minimum
Learn how EXPLAIN and EXPLAIN ANALYZE works. Learn to read the output.
CLUSTER sorts data on disk according to an index. Can dramatically improve performance of large (mostly) read-only tables. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered.
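For example (table and index names are illustrative):
CLUSTER orders USING orders_customer_id_idx;
-- later writes are not kept in clustered order; re-run CLUSTER (or just CLUSTER orders;) periodically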
Here's a few things I've found useful that aren't config or performance related per se.
To see what's currently happening:
select * from pg_stat_activity;
Search misc functions:
select * from pg_proc WHERE proname ~* '^pg_.*'
Find size of database:
select pg_database_size('postgres');
select pg_size_pretty(pg_database_size('postgres'));
Find size of all databases:
select datname, pg_size_pretty(pg_database_size(datname)) as size
from pg_database;
Find size of tables and indexes:
select pg_size_pretty(pg_relation_size('public.customer'));
Or, to list all tables and indexes (probably easier to make a view of this):
select schemaname, relname,
pg_size_pretty(pg_relation_size(schemaname || '.' || relname)) as size
from (select schemaname, relname, 'table' as type
from pg_stat_user_tables
union all
select schemaname, relname, 'index' as type
from pg_stat_user_indexes) x;
Oh, and you can nest transactions, rollback partial transactions++
test=# begin;
BEGIN
test=# select count(*) from customer where name='test';
count
-------
0
(1 row)
test=# insert into customer (name) values ('test');
INSERT 0 1
test=# savepoint foo;
SAVEPOINT
test=# update customer set name='john';
UPDATE 3
test=# rollback to savepoint foo;
ROLLBACK
test=# commit;
COMMIT
test=# select count(*) from customer where name='test';
count
-------
1
(1 row)
The easiest trick to let postgresql perform a lot better (apart from setting and using proper indexes of course) is just to give it more RAM to work with (if you have not done so already). On most default installations the value for shared_buffers is way too low (in my opinion). You can set
shared_buffers
in postgresql.conf. Divide this number by 128 to get an approximation of the amount of memory (in MB) postgres can claim. If you up it enough this will make postgresql fly. Don't forget to restart postgresql.
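That division works because shared_buffers was traditionally specified in 8 kB pages (128 pages per MB). For example:
shared_buffers = 32768        # 32768 / 128 = 256 MB
On recent versions you can also write the size directly, e.g. shared_buffers = 256MB.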
On Linux systems, when postgresql won't start again you will probably have hit the kernel.shmmax limit. Set it higher with
sysctl -w kernel.shmmax=xxxx
To make this persist between boots, add a kernel.shmmax entry to /etc/sysctl.conf.
A whole bunch of Postgresql tricks can be found here:
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks
Postgres has a very powerful datetime handling facility thanks to its INTERVAL support.
For example:
select NOW(), NOW() + '1 hour';
now | ?column?
-------------------------------+-------------------------------
2009-04-18 01:37:49.116614+00 | 2009-04-18 02:37:49.116614+00
(1 row)
select current_date ,(current_date + interval '1 year')::date;
date | date
---------------------+----------------
2014-10-17 | 2015-10-17
(1 row)
You can cast many strings to an INTERVAL type.
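For example:
select '90 minutes'::interval, interval '2 weeks' + interval '3 days';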
COPY
I'll start. Whenever I switch to Postgres from SQLite, I usually have some really big datasets. The key is to load your tables with COPY FROM rather than doing INSERTS. See documentation:
http://www.postgresql.org/docs/8.1/static/sql-copy.html
The following example copies a table to the client using the vertical bar (|) as the field delimiter:
COPY country TO STDOUT WITH DELIMITER '|';
To copy data from a file into the country table:
COPY country FROM '/usr1/proj/bray/sql/country_data';
See also here:
Faster bulk inserts in sqlite3?
My by far favorite is generate_series: at last a clean way to generate dummy rowsets.
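For instance, fabricating a thousand rows with one date per day is a single call:
select n, date '2024-01-01' + n as day
from generate_series(0, 999) as s(n);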
Ability to use a correlated value in a LIMIT clause of a subquery:
SELECT (
SELECT exp_word
FROM mytable
OFFSET id
LIMIT 1
)
FROM othertable
Ability to use multiple parameters in custom aggregates (not covered by the documentation): see the article in my blog for an example.
One of the things I really like about Postgres is some of the data types supported in columns. For example, there are column types made for storing Network Addresses and Arrays. The corresponding functions (Network Addresses / Arrays) for these column types let you do a lot of complex operations inside queries that you'd have to do by processing results through code in MySQL or other database engines.
Arrays are really cool once you get to know 'em.
Let's say you would like to store some hyperlinks between pages. You might start by thinking about creating a table kinda like this:
CREATE TABLE hyper.links (
tail INT4,
head INT4
);
If you needed to index the tail column, and you had, say 200,000,000 links-rows (like wikipedia would give you), you would find yourself with a huge Table and a huge Index.
However, with PostgreSQL, you could use this Table format instead:
CREATE TABLE hyper.links (
tail INT4,
head INT4[],
PRIMARY KEY(tail)
);
To get all heads for a link you could send a command like this (unnest() is standard since 8.4):
SELECT unnest(head) FROM hyper.links WHERE tail = $1;
This query is surprisingly fast when it is compared with the first option (unnest() is fast and the Index is way way smaller). Furthermore, your Table and Index will take up much less RAM-memory and HD-space, especially when your Arrays are so long that they are compressed to a Toast Table. Arrays are really powerful.
Note: while unnest() will generate rows out of an Array, array_agg() will aggregate rows into an Array.
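For example, rows stored in the first (one-row-per-link) layout could be collapsed into the array layout with something like this (old_links is a placeholder for the original table):
insert into hyper.links (tail, head)
select tail, array_agg(head)
from old_links
group by tail;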
Materialized Views are pretty easy to setup:
CREATE VIEW my_view AS SELECT id, AVG(my_col) FROM my_table GROUP BY id;
CREATE TABLE my_matview AS SELECT * FROM my_view;
That creates a new table, my_matview, with the columns and values of my_view. Triggers or a cron script can then be set up to keep the data up to date, or if you're lazy:
TRUNCATE my_matview;
INSERT INTO my_matview SELECT * FROM my_view;
Inheritance... in fact multiple inheritance (as in parent-child "inheritance", not the 1-to-1 relation "inheritance" that many web frameworks implement when working with Postgres).
PostGIS (spatial extension), a wonderful add-on that offers comprehensive set of geometry functions and coordinates storage out of the box. Widely used in many open-source geo libs (e.g. OpenLayers,MapServer,Mapnik etc) and definitely way better than MySQL's spatial extensions.
Writing procedures in different languages, e.g. C, Python, Perl, etc. (makes coding easier if you're a developer and not a DB admin).
Also all procedures can be stored externally (as modules) and can be called or imported at runtime by specified arguments. That way you can source control the code and debug the code easily.
A huge and comprehensive catalogue of all objects implemented in your database (i.e. tables, constraints, indexes, etc.).
I always find it immensely helpful to run a few queries and get all the meta info, e.g. constraint names and the fields they are implemented on, index names, etc.
For me it all becomes extremely handy when I have to load new data or do massive updates in big tables (I would automatically disable triggers and drop indexes) and then recreate them easily after processing has finished. Someone did an excellent job of writing a handful of these queries.
http://www.alberton.info/postgresql_meta_info.html
Multiple schemas under one database. You can use this if your database has a large number of tables; you can think of schemas as categories. All tables (regardless of their schema) have access to all other tables and functions present in the parent DB.
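For example (schema and table names are illustrative):
create schema billing;
create table billing.invoices (id serial primary key, amount numeric(12,2));
select * from billing.invoices;  -- qualify with the schema name, or add it to search_path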
You don't need to learn how to decipher "explain analyze" output, there is a tool: http://explain.depesz.com
select pg_size_pretty(200 * 1024)
pgcrypto: more cryptographic functions than many programming languages' crypto modules provide, all accessible direct from the database. It makes cryptographic stuff incredibly easy to Just Get Right.
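For example, salted password hashing with its crypt() and gen_salt() functions:
create extension if not exists pgcrypto;
-- store the result as the password hash
select crypt('my secret', gen_salt('bf'));
-- to verify later, compare crypt('attempt', stored_hash) with the stored hash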
A database can be copied with:
createdb -T old_db new_db
The documentation says:
this is not (yet) intended as a general-purpose "COPY DATABASE" facility
but it works well for me and is much faster than
createdb new_db
pg_dump old_db | psql new_db
Memory storage for throw-away data/global variables
You can create a tablespace that lives in the RAM, and tables (possibly unlogged, in 9.1) in that tablespace to store throw-away data/global variables that you'd like to share across sessions.
http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/
Advisory locks
These are documented in an obscure area of the manual:
http://www.postgresql.org/docs/9.0/interactive/functions-admin.html
It's occasionally faster than acquiring multitudes of row-level locks, and they can be used to work around cases where FOR UPDATE isn't implemented (such as recursive CTE queries).
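For example, serialising an application-level job on an arbitrary key:
select pg_advisory_lock(42);    -- blocks until the lock on key 42 is free
-- ... do the protected work ...
select pg_advisory_unlock(42);
-- pg_try_advisory_lock(42) returns immediately with true/false instead of blocking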
This is my favorites list of lesser-known features.
Transactional DDL
Nearly every SQL statement is transactional in Postgres. If you turn off autocommit the following is possible:
drop table customer_orders;
rollback;
select *
from customer_orders;
Range types and exclusion constraint
To my knowledge Postgres is the only RDBMS that lets you create a constraint that checks if two ranges overlap. An example is a table that contains product prices with a "valid from" and "valid until" date:
create table product_price
(
price_id serial not null primary key,
product_id integer not null references products,
price numeric(16,4) not null,
valid_during daterange not null
);
insert into product_price
(product_id, price, valid_during)
values
(1, 100.0, '[2013-01-01,2014-01-01)'),
(1, 90.0, '[2014-01-01,)');
-- querying is simple and can use an index on the valid_during column
select price
from product_price
where product_id = 42
and valid_during @> date '2014-10-17';
The execution plan for the above on a table with 700,000 rows:
Index Scan using check_price_range on public.product_price (cost=0.29..3.29 rows=1 width=6) (actual time=0.605..0.728 rows=1 loops=1)
Output: price
Index Cond: ((product_price.valid_during @> '2014-10-17'::date) AND (product_price.product_id = 42))
Buffers: shared hit=17
Total runtime: 0.772 ms
To avoid inserting rows with overlapping validity ranges, a simple (and efficient) unique constraint can be defined:
alter table product_price
add constraint check_price_range
exclude using gist (product_id with =, valid_during with &&)
NoSQL features
The hstore extension offers a flexible and very fast key/value store that can be used when parts of the database need to be "schema-less". JSON is another option to store data in a schema-less fashion.
Infinity
Instead of requiring a "real" date far in the future Postgres can compare dates to infinity. E.g. when not using a date range you can do the following
insert into product_price
(product_id, price, valid_from, valid_until)
values
(1, 90.0, date '2014-01-01', date 'infinity');
Writeable common table expressions
You can delete, insert and select in a single statement:
with old_orders as (
delete from orders
where order_date < current_date - interval '10' year
returning *
), archived_rows as (
insert into archived_orders
select *
from old_orders
returning *
)
select *
from archived_rows;
The above will delete all orders older than 10 years, move them to the archived_orders table and then display the rows that were moved.
1.) When you need to attach a note to a query, you can use a comment; it will show up in the PostgreSQL log:
SELECT /* my comments, that I would to see in PostgreSQL log */
a, b, c
FROM mytab;
2.) Remove trailing spaces from all the text and varchar fields in a database.
do $$
declare
selectrow record;
begin
for selectrow in
select
'UPDATE '||c.table_name||' SET '||c.COLUMN_NAME||'=TRIM('||c.COLUMN_NAME||') WHERE '||c.COLUMN_NAME||' ILIKE ''% '' ' as script
from (
select
table_name,COLUMN_NAME
from
INFORMATION_SCHEMA.COLUMNS
where
table_name LIKE 'tbl%' and (data_type='text' or data_type='character varying' )
) c
loop
execute selectrow.script;
end loop;
end;
$$;
3.) We can use a window function for very effective removal of duplicate rows:
DELETE FROM tab
WHERE id IN (SELECT id
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), id
FROM tab) x
WHERE x.row_number > 1);
Some PostgreSQL's optimized version (with ctid):
DELETE FROM tab
WHERE ctid = ANY(ARRAY(SELECT ctid
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), ctid
FROM tab) x
WHERE x.row_number > 1));
4.) When we need to identify the server's state, we can use this function:
SELECT pg_is_in_recovery();
5.) Get a function's DDL command.
select pg_get_functiondef((select oid from pg_proc where proname = 'f1'));
6.) Safely changing column data type in PostgreSQL
create table test(id varchar );
insert into test values('1');
insert into test values('11');
insert into test values('12');
select * from test
--Result--
id
character varying
--------------------------
1
11
12
You can see from the above table that I used the data type ‘character varying’ for the ‘id’ column. But that was a mistake, because I am always storing integers as the id, so using varchar here is bad practice. Let's try to change the column type to integer.
ALTER TABLE test ALTER COLUMN id TYPE integer;
But it returns:
ERROR: column “id” cannot be cast automatically to type integer
SQL state: 42804
Hint: Specify a USING expression to perform the conversion
That means we can't simply change the data type because data is already there in the column. Since the existing data is of type ‘character varying’, Postgres cannot assume it is integer even though we entered integers only. So now, as Postgres suggests, we can use the ‘USING’ expression to cast our data to integers.
ALTER TABLE test ALTER COLUMN id TYPE integer USING (id ::integer);
It Works.
7.) Know who is connected to the Database
This is more or less a monitoring command. To know which user is connected to which database, including their IP and port, use the following SQL:
SELECT datname,usename,client_addr,client_port FROM pg_stat_activity ;
8.) Reloading PostgreSQL Configuration files without Restarting Server
PostgreSQL configuration parameters are located in special files like postgresql.conf and pg_hba.conf. Often, you may need to change these parameters, and for some of them to take effect we need to reload the configuration file. Of course, restarting the server will do it, but in a production environment it is not preferable to restart the database, which is being used by thousands, just to set some parameters. In such situations, we can reload the configuration files without restarting the server by using the following function:
select pg_reload_conf();
Remember, this won't work for all parameters; some parameter changes need a full restart of the server to take effect.
9.) Getting the data directory path of the current Database cluster
It is possible that multiple instances (clusters) of PostgreSQL are set up on one system, generally on different ports. In such cases, finding which directory (physical storage directory) is used by which instance is a hectic task. We can use the following command in any database of the cluster of interest to get the directory path:
SHOW data_directory;
The data directory can also be changed (by setting data_directory in postgresql.conf), but doing so requires a full server restart.
10.) Find whether a CHAR is a DATE or not
create or replace function is_date(s varchar) returns boolean as $$
begin
perform s::date;
return true;
exception when others then
return false;
end;
$$ language plpgsql;
Usage: the following will return True
select is_date('12-12-2014')
select is_date('12/12/2014')
select is_date('20141212')
select is_date('2014.12.12')
select is_date('2014,12,12')
11.) Change the owner in PostgreSQL
REASSIGN OWNED BY sa TO postgres;
12.) PGADMIN PLPGSQL DEBUGGER
Well explained here
It's convenient that you can rename an old database, which MySQL can't do. Just use the following command:
ALTER DATABASE name RENAME TO new_name
