PostgreSQL Query to change columns to uppercase - database

I am working on a MySQL to PostgreSQL database migration using pgloader. One of the issues I am facing is that my application expects tables beginning with "ao_" to be named "AO_", which I was able to solve by making the table names uppercase; however, the corresponding columns also need to be uppercase.
Is there a good way to make JUST the "AO_" table columns all uppercase? It does not seem very efficient to do this by hand for 400 tables with approximately 10 columns per table:
ALTER TABLE "AO_54307E_QUEUE" RENAME project_id TO "PROJECT_ID";
Is there maybe some kind of wildcard we could use to just grab the "AO_" tables and then have all the columns be uppercase?

I would recommend against doing it, and I am quoting from the documentation:
Quoting an identifier also makes it case-sensitive, whereas unquoted
names are always folded to lower case.
If you want to write portable applications you are
advised to always quote a particular name or never quote it.
So making JUST the "AO_" table columns uppercase seems like a bad idea.
If you still wish to proceed, you may use a loop through information_schema.columns and run dynamic ALTER statements.
DO $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN ( SELECT column_name, table_name, table_schema
                 FROM information_schema.columns
                 WHERE table_name LIKE 'AO\_%'           -- escape _ so it is not a single-character wildcard
                   AND column_name <> upper(column_name) -- skip columns that are already uppercase
               )
    LOOP
        EXECUTE format( 'ALTER TABLE %I.%I RENAME %I TO %I',
                        rec.table_schema, rec.table_name, rec.column_name,
                        upper(rec.column_name) );
        RAISE NOTICE 'COLUMN % in Table %.% RENAMED',
            rec.column_name, rec.table_schema, rec.table_name;
    END LOOP;
END$$;
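If you want to see what the loop would do before committing to it, the same catalog query can emit the ALTER statements as plain text (a dry-run sketch; the filters are assumptions matching the loop above):
SELECT format('ALTER TABLE %I.%I RENAME %I TO %I;',
              table_schema, table_name, column_name, upper(column_name))
FROM information_schema.columns
WHERE table_name LIKE 'AO\_%'
  AND column_name <> upper(column_name);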

Related

Finding SQL reserved words used as column names

I'm just getting started with (transitioning to) SQL Server; unfortunately the company has developers that have been doing all the table scripting, etc. On a cursory glance I found several tables that have column names that are also SQL reserved words. In DB2 for IBM i, I could just query the system catalogs for reserved words, and am wondering if anyone has a script (or stored procedure) that will do that for SQL Server (currently on version 2012). My goal is to clean this up and have the tables rebuilt with proper column names, and a script would be orders of magnitude faster than looking at each table's column listing.
Stripped and modified from a validation routine
For SQL Server's Reserved Words
Declare @Reserved table (Word varchar(100))
Insert Into @Reserved values
('ADD'),('ALL'),('ALTER'),('AND'),('ANY'),('AS'),('ASC'),('AUTHORIZATION'),('BACKUP'),('BEGIN'),('BETWEEN'),('BREAK'),('BROWSE'),('BULK'),('BY'),
('CASCADE'),('CASE'),('CHECK'),('CHECKPOINT'),('CLOSE'),('CLUSTERED'),('COALESCE'),('COLLATE'),('COLUMN'),('COMMIT'),('COMPUTE'),('CONSTRAINT'),
('CONTAINS'),('CONTAINSTABLE'),('CONTINUE'),('CONVERT'),('CREATE'),('CROSS'),('CURRENT'),('CURRENT_DATE'),('CURRENT_TIME'),('CURRENT_TIMESTAMP'),
('CURRENT_USER'),('CURSOR'),('DATABASE'),('DBCC'),('DEALLOCATE'),('DECLARE'),('DEFAULT'),('DELETE'),('DENY'),('DESC'),('DISK'),('DISTINCT'),
('DISTRIBUTED'),('DOUBLE'),('DROP'),('DUMP'),('ELSE'),('END'),('ERRLVL'),('ESCAPE'),('EXCEPT'),('EXEC'),('EXECUTE'),('EXISTS'),('EXIT'),('EXTERNAL'),
('FETCH'),('FILE'),('FILLFACTOR'),('FOR'),('FOREIGN'),('FREETEXT'),('FREETEXTTABLE'),('FROM'),('FULL'),('FUNCTION'),('GOTO'),('GRANT'),('GROUP'),
('HAVING'),('HOLDLOCK'),('IDENTITY'),('IDENTITY_INSERT'),('IDENTITYCOL'),('IF'),('IN'),('INDEX'),('INNER'),('INSERT'),('INTERSECT'),('INTO'),('IS'),
('JOIN'),('KEY'),('KILL'),('LEFT'),('LIKE'),('LINENO'),('LOAD'),('MERGE'),('NATIONAL'),('NOCHECK'),('NONCLUSTERED'),('NOT'),('NULL'),('NULLIF'),
('OF'),('OFF'),('OFFSETS'),('ON'),('OPEN'),('OPENDATASOURCE'),('OPENQUERY'),('OPENROWSET'),('OPENXML'),('OPTION'),('OR'),('ORDER'),('OUTER'),('OVER'),
('PERCENT'),('PIVOT'),('PLAN'),('PRECISION'),('PRIMARY'),('PRINT'),('PROC'),('PROCEDURE'),('PUBLIC'),('RAISERROR'),('READ'),('READTEXT'),('RECONFIGURE'),
('REFERENCES'),('REPLICATION'),('RESTORE'),('RESTRICT'),('RETURN'),('REVERT'),('REVOKE'),('RIGHT'),('ROLLBACK'),('ROWCOUNT'),('ROWGUIDCOL'),('RULE'),
('SAVE'),('SCHEMA'),('SECURITYAUDIT'),('SELECT'),('SEMANTICKEYPHRASETABLE'),('SEMANTICSIMILARITYDETAILSTABLE'),('SEMANTICSIMILARITYTABLE'),('SESSION_USER'),
('SET'),('SETUSER'),('SHUTDOWN'),('SOME'),('STATISTICS'),('SYSTEM_USER'),('TABLE'),('TABLESAMPLE'),('TEXTSIZE'),('THEN'),('TO'),('TOP'),('TRAN'),('TRANSACTION'),
('TRIGGER'),('TRUNCATE'),('TRY_CONVERT'),('TSEQUAL'),('UNION'),('UNIQUE'),('UNPIVOT'),('UPDATE'),('UPDATETEXT'),('USE'),('USER'),('VALUES'),('VARYING'),
('VIEW'),('WAITFOR'),('WHEN'),('WHERE'),('WHILE'),('WITH'),('WITHIN GROUP'),('WRITETEXT')
Select A.*
From INFORMATION_SCHEMA.COLUMNS A
Join @Reserved
  on Column_Name = Word
Here is the list of reserved words https://msdn.microsoft.com/en-us/library/ms189822.aspx
You could copy the list from that page into a text editor, clean it up a little, then load it into a table and use that table as a reference.
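If the eventual goal is renaming the offending columns, the same join can generate sp_rename calls instead of just listing them (a hedged sketch: the '_col' suffix is my placeholder naming convention, and every generated statement should be reviewed before running, since renames break dependent code):
Select 'EXEC sp_rename ''' + A.TABLE_NAME + '.' + A.COLUMN_NAME + ''', '''
       + A.COLUMN_NAME + '_col'', ''COLUMN'';'
From INFORMATION_SCHEMA.COLUMNS A
Join @Reserved
  on A.Column_Name = Word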

How to execute a long dynamic query (greater than 4000 characters) - again

Note: I'm running under SQL Server 2008 R2...
I've taken the time to read dozens of posts on this site and other sites on how to execute dynamic SQL where the query is more than 4000 characters. I've tried more than a dozen solutions proposed. The consensus seems to be to split the query into 4000-character variables and then do:
EXEC (@SQLQuery1 + @SQLQuery2)
This doesn't work for me - the query is truncated at the end of @SQLQuery1.
Now, I've seen samples where people "force" a long query by using REPLICATE to pad a bunch of spaces, etc., but this is a real query - and it gets a little more sophisticated than that.
I have SQL View with a name of "Company_A_ItemView".
I have 10 companies that I want to create the same exact view, with different names, e.g.
"Company_B_ItemView"
"Company_C_ItemView"
..etc.
If you offer help, please don't ask why there are multiple views - just accept that I need to do it this way, OK?
Each company has its own set of tables, and the CREATE VIEW statement references several tables by name. Here's a BRIEF sample, but remember, the total length of the query is around 6000 characters:
CREATE view [dbo].[Company_A_ItemView] as
select
WE.[Item No_],
WE.[Location Code],
LOC.[Bin Number],
[..more fields, etc.]
from
[Company_A_Warehouse_Entry] WE
left join
[Company_A_Location] LOC
...you get the idea
So, what I am currently doing is:
a. Pulling the contents of the CREATE VIEW statement into 2 declared variables, e.g.
Set @SQLQuery1 = (select text
from syscomments
where ID = 1382894081 and colid = 1)
Set @SQLQuery2 = (select text
from syscomments
where ID = 1382894081 and colid = 2)
Note that this is how SQL stores long definitions - when you create the view, it stores the text into multiple syscomments records. In my case, the view is split into a text chunk of 3591 characters into the first syscomment record and the rest of the text is in the second record. I have no idea why SQL doesn't use all 4000 characters in the syscomment field. And the statement is broken in the middle of a word.
Please note in all my examples, all @SQLQueryxxx variables are declared as varchar(max). I've also tried declaring them as nvarchar(max), varchar(8000), and nvarchar(8000) with the same results.
b. I then do a "Search and Replace" for "Company_A" and replace it with "Company_B". In the code below, the variable "@CompanyID" is first set to "Company_B":
SET @SQLQueryNew1 = @SQLQuery1
SET @SQLQueryNew1 = REPLACE(@SQLQueryNew1, 'Company_A', @CompanyID)
SET @SQLQueryNew2 = @SQLQuery2
SET @SQLQueryNew2 = REPLACE(@SQLQueryNew2, 'Company_A', @CompanyID)
c. I then try:
EXEC (@SQLQueryNew1 + @SQLQueryNew2)
The message returned indicates that it's trying to execute the statement truncated at the end of @SQLQueryNew1, e.g. 80% (approx) of the query's text.
I've tried CAST'ing the final result into a new varchar(max) and nvarchar(max) - no luck
I've tried CAST'ing the original query a new varchar(max) and nvarchar(max)- no luck
I've looked at the result of retrieving the original CREATE VIEW statement, and it's fine.
I've tried various other ways of retrieving the original CREATE VIEW statement, such as:
Set @SQLQuery1 = (select VIEW_DEFINITION
FROM [MY_DATABASE].[INFORMATION_SCHEMA].[VIEWS]
where TABLE_NAME = 'Company_A_ItemView')
This one returns only the first 4000 characters of the CREATE VIEW.
Set @SQLQuery1 = (SELECT OBJECT_DEFINITION(@ObjectID))
If I do a
SELECT LEN(OBJECT_DEFINITION(@ObjectID))
it returns the correct length of the query (e.g. 5191), but if I look at @SQLQuery1, or try to
EXEC(@SQLQuery1), the statement is still truncated.
d. There are some references that state that since I'm manipulating the text of the query after retrieving it, the resulting variables are then truncated to 4000 characters. I've tried CAST'ing the result as I do the REPLACE, e.g.
SET @SQLQueryNew1 = CAST(REPLACE(@SQLQueryNew1,
'Company_A',
@CompanyID) AS varchar(max))
Same result.
I know there are other methods, such as creating stored procedures for creating the views. But the views are being developed and are somewhat "in flux", so placing the text of the CREATE VIEW inside a stored proc is cumbersome. My goal is to be able to take Company_A's view and replicate it exactly - multiple times, except reference Company_B's view name and table names, Company_C's view name and table names, etc.
I'm wondering if there is anyone out there who has done this type of manipulation of a long SQL "CREATE VIEW" statement and tried to execute it.
Just use VARCHAR(MAX) or NVARCHAR(MAX). They work fine for EXEC(string).
FYI,
Note that this is how SQL stores long definitions - when you create
the view, it stores the text into multiple syscomments records.
This is not correct. This is how it used to be done on SQL Server 2000. Since SQL Server 2005, definitions are saved as NVARCHAR(MAX) in a single entry in sys.sql_modules.
syscomments is still around, but it is retained read-only solely for compatibility.
So all you should need to do is change your @SQLQuery1,2,etc. variables to a single NVARCHAR(MAX) variable, and pull your view code from the [definition] column of the sys.sql_modules catalog view instead.
Note that you should be careful with your string manipulations as there are certain functions that will revert to (N)VARCHAR(4000) output if all of their input arguments are not (N)VARCHAR(MAX). (Sorry, I do not know which ones, but REPLACE() may be one). In fact, this may be what has been causing so much confusion in your tests.
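Putting that together, a minimal sketch of the sys.sql_modules approach (view and company names are placeholders from the question):
DECLARE @SQLQuery NVARCHAR(MAX);
SELECT @SQLQuery = [definition]
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.Company_A_ItemView');
-- keep every argument NVARCHAR(MAX) so REPLACE cannot fall back to 4000 characters
SET @SQLQuery = REPLACE(@SQLQuery, N'Company_A', N'Company_B');
EXEC (@SQLQuery);  -- creates dbo.Company_B_ItemView (fails if it already exists)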
declare your sql variables (@SQLQuery1...) as nvarchar(4000)
be sure each sql part didn't exceed 4000 bytes (copy each part to a text file and test the file size in bytes)

How can I create a table in SQL Server 2005? That is totally new for me...

I have one problem: I simply want to know how I can create a table that can easily be used as a back end for my solution, which is in VB 2010.
I also want to know, when we choose a data source in VB.NET for SQL Server, which one we should choose, because there are 2 or 3 with slightly different names.
I'm struggling to understand your question, but in an effort to help:
I'm guessing that you want to programmatically create a table to be used by other parts of your VB application, but that you need to ensure the table name is unique...? If I'm right in that assumption, then see below.
You can use this query:
SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'
To get a list of table names currently in your database. You can compare the TABLE_NAME column values with your desired table name. If it exists already, change the name by adding a differentiator, e.g.: MyTable, MyTable1, MyTable2, etc., as in the sketch below.
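For example, a minimal sketch of that existence check (all names are placeholders):
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
               WHERE table_type = 'BASE TABLE' AND table_name = 'MyTable')
BEGIN
    CREATE TABLE MyTable (Id INT IDENTITY(1,1) PRIMARY KEY, Payload NVARCHAR(100))
END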
Alternatively, SQL Server accepts GUIDs as table names.
Disclaimer: IMHO, if your table is not going to be temporary, deciding table names in this manner is a pretty ugly solution and lacks supportability.

Hidden Features of PostgreSQL [closed]

I'm surprised this hasn't been posted yet. Any interesting tricks that you know about in Postgres? Obscure config options and scaling/perf tricks are particularly welcome.
I'm sure we can beat the 9 comments on the corresponding MySQL thread :)
Since postgres is a lot more sane than MySQL, there are not that many "tricks" to report on ;-)
The manual has some nice performance tips.
A few other performance related things to keep in mind:
Make sure autovacuum is turned on
Make sure you've gone through your postgresql.conf (effective cache size, shared buffers, work mem ... lots of options there to tune).
Use pgpool or pgbouncer to keep your "real" database connections to a minimum
Learn how EXPLAIN and EXPLAIN ANALYZE work. Learn to read the output.
CLUSTER sorts data on disk according to an index. Can dramatically improve performance of large (mostly) read-only tables. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered.
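For example (table and index names are placeholders):
-- one-time physical sort of the table by the given index
CLUSTER my_table USING my_table_created_at_idx;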
Here are a few things I've found useful that aren't config or performance related per se.
To see what's currently happening:
select * from pg_stat_activity;
Search misc functions:
select * from pg_proc WHERE proname ~* '^pg_.*'
Find size of database:
select pg_database_size('postgres');
select pg_size_pretty(pg_database_size('postgres'));
Find size of all databases:
select datname, pg_size_pretty(pg_database_size(datname)) as size
from pg_database;
Find size of tables and indexes:
select pg_size_pretty(pg_relation_size('public.customer'));
Or, to list all tables and indexes (probably easier to make a view of this):
select schemaname, relname,
pg_size_pretty(pg_relation_size(schemaname || '.' || relname)) as size
from (select schemaname, relname, 'table' as type
from pg_stat_user_tables
union all
select schemaname, relname, 'index' as type
from pg_stat_user_indexes) x;
Oh, and you can nest transactions and roll back partial transactions:
test=# begin;
BEGIN
test=# select count(*) from customer where name='test';
count
-------
0
(1 row)
test=# insert into customer (name) values ('test');
INSERT 0 1
test=# savepoint foo;
SAVEPOINT
test=# update customer set name='john';
UPDATE 3
test=# rollback to savepoint foo;
ROLLBACK
test=# commit;
COMMIT
test=# select count(*) from customer where name='test';
count
-------
1
(1 row)
The easiest trick to let PostgreSQL perform a lot better (apart from setting and using proper indexes, of course) is just to give it more RAM to work with (if you have not done so already). On most default installations the value for shared_buffers is way too low (in my opinion). You can set
shared_buffers
in postgresql.conf. The bare number is counted in 8 kB pages, so divide it by 128 to get an approximation of the amount of memory (in MB) Postgres can claim. If you up it enough, this will make PostgreSQL fly. Don't forget to restart PostgreSQL.
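For example, assuming the value is given as a bare number of 8 kB pages (as in older versions; newer ones also accept explicit units like '256MB'):
# postgresql.conf: 32768 pages * 8 kB = 32768 / 128 = 256 MB of shared buffers
shared_buffers = 32768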
On Linux systems, when postgresql won't start again you will probably have hit the kernel.shmmax limit. Set it higher with
sysctl -w kernel.shmmax=xxxx
To make this persist between boots, add a kernel.shmmax entry to /etc/sysctl.conf.
A whole bunch of Postgresql tricks can be found here:
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks
Postgres has a very powerful datetime handling facility thanks to its INTERVAL support.
For example:
select NOW(), NOW() + '1 hour';
now | ?column?
-------------------------------+-------------------------------
2009-04-18 01:37:49.116614+00 | 2009-04-18 02:37:49.116614+00
(1 row)
select current_date ,(current_date + interval '1 year')::date;
    date    |    date
------------+------------
 2014-10-17 | 2015-10-17
(1 row)
You can cast many strings to an INTERVAL type.
COPY
I'll start. Whenever I switch to Postgres from SQLite, I usually have some really big datasets. The key is to load your tables with COPY FROM rather than doing INSERTS. See documentation:
http://www.postgresql.org/docs/8.1/static/sql-copy.html
The following example copies a table to the client using the vertical bar (|) as the field delimiter:
COPY country TO STDOUT WITH DELIMITER '|';
To copy data from a file into the country table:
COPY country FROM '/usr1/proj/bray/sql/country_data';
See also here:
Faster bulk inserts in sqlite3?
My by far favorite is generate_series: at last a clean way to generate dummy rowsets.
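For example, ten rows of dummy data, or one row per day of a month:
select i, md5(random()::text) from generate_series(1, 10) as i;
select generate_series(date '2014-10-01', date '2014-10-31', interval '1 day');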
Ability to use a correlated value in a LIMIT clause of a subquery:
SELECT (
SELECT exp_word
FROM mytable
OFFSET id
LIMIT 1
)
FROM othertable
Ability to use multiple parameters in custom aggregates (not covered by the documentation): see the article in my blog for an example.
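As an illustration, here is a minimal two-argument aggregate computing a weighted average (all names are mine, not taken from the linked article):
-- state is {weighted sum, total weight}
CREATE FUNCTION wavg_acc(state numeric[], val numeric, weight numeric)
RETURNS numeric[] AS $$
    SELECT ARRAY[state[1] + val * weight, state[2] + weight];
$$ LANGUAGE sql;

CREATE FUNCTION wavg_final(state numeric[]) RETURNS numeric AS $$
    SELECT CASE WHEN state[2] = 0 THEN NULL ELSE state[1] / state[2] END;
$$ LANGUAGE sql;

CREATE AGGREGATE wavg(numeric, numeric) (
    SFUNC = wavg_acc,
    STYPE = numeric[],
    FINALFUNC = wavg_final,
    INITCOND = '{0,0}'
);
-- usage: SELECT wavg(price, quantity) FROM order_lines;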
One of the things I really like about Postgres is some of the data types supported in columns. For example, there are column types made for storing Network Addresses and Arrays. The corresponding functions (Network Addresses / Arrays) for these column types let you do a lot of complex operations inside queries that you'd have to do by processing results through code in MySQL or other database engines.
Arrays are really cool once you get to know 'em.
Let's say you would like to store some hyperlinks between pages. You might start by thinking about creating a table kinda like this:
CREATE TABLE hyper.links (
tail INT4,
head INT4
);
If you needed to index the tail column, and you had, say 200,000,000 links-rows (like wikipedia would give you), you would find yourself with a huge Table and a huge Index.
However, with PostgreSQL, you could use this Table format instead:
CREATE TABLE hyper.links (
tail INT4,
head INT4[],
PRIMARY KEY(tail)
);
To get all heads for a link you could send a command like this (unnest() is standard since 8.4):
SELECT unnest(head) FROM hyper.links WHERE tail = $1;
This query is surprisingly fast compared with the first option (unnest() is fast and the index is way, way smaller). Furthermore, your table and index will take up much less RAM and disk space, especially when your arrays are so long that they are compressed into a TOAST table. Arrays are really powerful.
Note: while unnest() will generate rows out of an Array, array_agg() will aggregate rows into an Array.
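For example, rows from the first (flat) layout can be folded into the second with array_agg (flat_links is a placeholder for a table with one row per tail/head pair):
INSERT INTO hyper.links (tail, head)
SELECT tail, array_agg(head) FROM flat_links GROUP BY tail;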
Materialized views are pretty easy to set up:
CREATE VIEW my_view AS SELECT id, AVG(my_col) FROM my_table GROUP BY id;
CREATE TABLE my_matview AS SELECT * FROM my_view;
That creates a new table, my_matview, with the columns and values of my_view. Triggers or a cron script can then be set up to keep the data up to date; or if you're lazy:
TRUNCATE my_matview;
INSERT INTO my_matview SELECT * FROM my_view;
Inheritance, in fact multiple inheritance (as in parent-child "inheritance", not the 1-to-1 relation "inheritance" which many web frameworks implement when working with Postgres). See the sketch below.
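A minimal sketch (the classic cities/capitals example from the manual):
CREATE TABLE cities (
    name text,
    population int
);
-- capitals gets every column of cities plus its own
CREATE TABLE capitals (
    country text
) INHERITS (cities);
-- querying cities also returns rows from capitals; use ONLY to exclude children
SELECT name FROM cities;
SELECT name FROM ONLY cities;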
PostGIS (spatial extension), a wonderful add-on that offers a comprehensive set of geometry functions and coordinate storage out of the box. Widely used in many open-source geo libs (e.g. OpenLayers, MapServer, Mapnik, etc.) and definitely way better than MySQL's spatial extensions.
Writing procedures in different languages, e.g. C, Python, Perl, etc. (makes your life easier to code if you're a developer and not a db-admin).
Also all procedures can be stored externally (as modules) and can be called or imported at runtime by specified arguments. That way you can source control the code and debug it easily.
A huge and comprehensive catalogue of all objects implemented in your database (i.e. tables, constraints, indexes, etc.).
I always find it immensely helpful to run a few queries and get all the meta info, e.g. constraint names and the fields on which they have been implemented, index names, etc.
For me it all becomes extremely handy when I have to load new data or do massive updates in big tables (I would automatically disable triggers and drop indexes, then recreate them easily after processing has finished). Someone did an excellent job of writing a handful of these queries.
http://www.alberton.info/postgresql_meta_info.html
Multiple schemas under one database. You can use this if your database has a large number of tables: you can think of schemas as categories. All tables (regardless of their schema) have access to all other tables and functions present in the parent db. A minimal example follows.
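For example (schema and table names are placeholders):
CREATE SCHEMA billing;
CREATE TABLE billing.invoices (id serial PRIMARY KEY, total numeric);
-- control which schemas unqualified names resolve against
SET search_path TO billing, public;
SELECT * FROM invoices;  -- finds billing.invoices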
You don't need to learn how to decipher EXPLAIN ANALYZE output by hand; there is a tool: http://explain.depesz.com
pg_size_pretty() also works on plain byte counts:
select pg_size_pretty(200 * 1024)
pgcrypto: more cryptographic functions than many programming languages' crypto modules provide, all accessible directly from the database. It makes cryptographic stuff incredibly easy to Just Get Right.
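For example, salted password hashing (a sketch assuming the pgcrypto extension is installed and an accounts table with a stored_hash column exists):
-- hash a password with a random blowfish salt
SELECT crypt('secret', gen_salt('bf'));
-- verify a login attempt by re-hashing with the stored hash as the salt
SELECT crypt('secret', stored_hash) = stored_hash AS password_ok FROM accounts;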
A database can be copied with:
createdb -T old_db new_db
The documentation says:
this is not (yet) intended as a general-purpose "COPY DATABASE" facility
but it works well for me and is much faster than
createdb new_db
pg_dump old_db | psql new_db
Memory storage for throw-away data/global variables
You can create a tablespace that lives in RAM, and tables (possibly unlogged, in 9.1) in that tablespace to store throw-away data/global variables that you'd like to share across sessions.
http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/
Advisory locks
These are documented in an obscure area of the manual:
http://www.postgresql.org/docs/9.0/interactive/functions-admin.html
They are occasionally faster than acquiring multitudes of row-level locks, and they can be used to work around cases where FOR UPDATE isn't implemented (such as recursive CTE queries).
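For example (the key 42 is arbitrary; what it protects is purely an application-level convention):
SELECT pg_advisory_lock(42);      -- blocks until the lock is free
-- ... critical section shared across sessions ...
SELECT pg_advisory_unlock(42);
SELECT pg_try_advisory_lock(42);  -- non-blocking variant, returns false instead of waiting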
This is my favorites list of lesser-known features.
Transactional DDL
Nearly every SQL statement is transactional in Postgres. If you turn off autocommit the following is possible:
drop table customer_orders;
rollback;
select *
from customer_orders;
Range types and exclusion constraint
To my knowledge Postgres is the only RDBMS that lets you create a constraint that checks if two ranges overlap. An example is a table that contains product prices with a "valid from" and "valid until" date:
create table product_price
(
price_id serial not null primary key,
product_id integer not null references products,
price numeric(16,4) not null,
valid_during daterange not null
);
NoSQL features
The hstore extension offers a flexible and very fast key/value store that can be used when parts of the database need to be "schema-less". JSON is another option for storing data in a schema-less fashion.
Continuing the range type example above:
insert into product_price
(product_id, price, valid_during)
values
(1, 100.0, '[2013-01-01,2014-01-01)'),
(1, 90.0, '[2014-01-01,)');
-- querying is simple and can use an index on the valid_during column
select price
from product_price
where product_id = 42
and valid_during #> date '2014-10-17';
The execution plan for the above on a table with 700,000 rows:
Index Scan using check_price_range on public.product_price (cost=0.29..3.29 rows=1 width=6) (actual time=0.605..0.728 rows=1 loops=1)
Output: price
Index Cond: ((product_price.valid_during #> '2014-10-17'::date) AND (product_price.product_id = 42))
Buffers: shared hit=17
Total runtime: 0.772 ms
To avoid inserting rows with overlapping validity ranges a simple (and efficient) unique constraint can be defined:
alter table product_price
add constraint check_price_range
exclude using gist (product_id with =, valid_during with &&)
Infinity
Instead of requiring a "real" date far in the future, Postgres can compare dates to infinity. E.g. when not using a date range (but separate valid_from/valid_until columns) you can do the following:
insert into product_price
(product_id, price, valid_from, valid_until)
values
(1, 90.0, date '2014-01-01', date 'infinity');
Writeable common table expressions
You can delete, insert and select in a single statement:
with old_orders as (
delete from orders
where order_date < current_date - interval '10' year
returning *
), archived_rows as (
insert into archived_orders
select *
from old_orders
returning *
)
select *
from archived_rows;
The above will delete all orders older than 10 years, move them to the archived_orders table and then display the rows that were moved.
1.) When you need to attach a note to a query, you can use a nested comment:
SELECT /* my comments, that I would to see in PostgreSQL log */
a, b, c
FROM mytab;
2.) Remove trailing spaces from all the text and varchar fields in a database.
do $$
declare
selectrow record;
begin
for selectrow in
select
'UPDATE '||c.table_name||' SET '||c.COLUMN_NAME||'=TRIM('||c.COLUMN_NAME||') WHERE '||c.COLUMN_NAME||' ILIKE ''% '' ' as script
from (
select
table_name,COLUMN_NAME
from
INFORMATION_SCHEMA.COLUMNS
where
table_name LIKE 'tbl%' and (data_type='text' or data_type='character varying' )
) c
loop
execute selectrow.script;
end loop;
end;
$$;
3.) We can use a window function for very effective removal of duplicate rows:
DELETE FROM tab
WHERE id IN (SELECT id
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), id
FROM tab) x
WHERE x.row_number > 1);
A PostgreSQL-optimized version (using ctid):
DELETE FROM tab
WHERE ctid = ANY(ARRAY(SELECT ctid
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), ctid
FROM tab) x
WHERE x.row_number > 1));
4.) When we need to identify the server's state, we can use the function:
SELECT pg_is_in_recovery();
5.) Get a function's DDL:
select pg_get_functiondef((select oid from pg_proc where proname = 'f1'));
6.) Safely changing a column's data type in PostgreSQL
create table test(id varchar);
insert into test values('1');
insert into test values('11');
insert into test values('12');
select * from test;
--Result--
id
character varying
--------------------------
1
11
12
You can see from the above that I used the data type 'character varying' for the 'id' column. But it was a mistake, because I am always storing integers as id, so using varchar here is bad practice. Let's try to change the column type to integer.
ALTER TABLE test ALTER COLUMN id TYPE integer;
But it returns:
ERROR: column "id" cannot be cast automatically to type integer
SQL state: 42804
Hint: Specify a USING expression to perform the conversion
That means we can't simply change the data type because data is already there in the column. Since the data is of type 'character varying', Postgres can't assume it is an integer even though we entered integers only. So now, as Postgres suggested, we can use the USING expression to cast our data into integers:
ALTER TABLE test ALTER COLUMN id TYPE integer USING (id::integer);
It Works.
7.) Know who is connected to the database
This is more or less a monitoring command. To know which user is connected to which database,
including their IP and port, use the following SQL:
SELECT datname, usename, client_addr, client_port FROM pg_stat_activity;
8.) Reloading PostgreSQL Configuration files without Restarting Server
PostgreSQL configuration parameters are located in special files like postgresql.conf and pg_hba.conf. Often, you may need to change these parameters, and for some of them to take effect the configuration file needs to be reloaded. Of course, restarting the server will do it, but in a production environment it is not preferred to restart a database which is being used by thousands, just to set some parameters. In such situations, we can reload the configuration files without restarting the server by using the following function:
select pg_reload_conf();
Remember, this won't work for all parameters; some parameter changes need a full server restart to take effect.
9.) Getting the data directory path of the current Database cluster
It is possible that multiple instances (clusters) of PostgreSQL are set up on one system, generally on different ports. In such cases, finding which directory (physical storage directory) is used by which instance is a hectic task. We can use the following command in any database in the cluster of interest to get the directory path:
SHOW data_directory;
Note that data_directory can only be set in postgresql.conf, and changing it (to move the cluster) requires a server restart.
10.) Find whether a CHAR is a DATE or not
create or replace function is_date(s varchar) returns boolean as $$
begin
perform s::date;
return true;
exception when others then
return false;
end;
$$ language plpgsql;
Usage: the following will return True
select is_date('12-12-2014')
select is_date('12/12/2014')
select is_date('20141212')
select is_date('2014.12.12')
select is_date('2014,12,12')
11.) Change the owner in PostgreSQL
REASSIGN OWNED BY sa TO postgres;
12.) PGADMIN PLPGSQL DEBUGGER
Well explained here
It's convenient that you can simply rename a database, which MySQL can't do. Just use the following command:
ALTER DATABASE name RENAME TO new_name;

SQL 2005 copy single column between databases

I'm still fairly new to T-SQL and SQL 2005. I need to import a column of integers from a table in database1 to an identical table (only missing the column I need) in database2. Both are SQL 2005 databases. I've tried the built-in import command in Server Management Studio, but it forces me to copy the entire table, which causes errors due to constraints and 'read-only' columns (whatever 'read-only' means in SQL 2005). I just want to grab a single column and copy it to a table.
There must be a simple way of doing this. Something like:
INSERT INTO database1.myTable columnINeed
SELECT columnINeed from database2.myTable
Inserting won't do it, since it will attempt to insert new rows at the end of the table. What it sounds like you're trying to do is populate a column in existing rows.
I'm not sure if the syntax is exactly right, but if I understood you, then this will do what you're after.
Create the column allowing nulls in database2.
Perform an update:
UPDATE database2.dbo.tablename
SET database2.dbo.tablename.colname = database1.dbo.tablename.colname
FROM database2.dbo.tablename INNER JOIN database1.dbo.tablename ON database2.dbo.tablename.keycol = database1.dbo.tablename.keycol
There is a simple way very much like this as long as both databases are on the same server. The fully qualified name is dbname.owner.table - normally the owner is dbo and there is a shortcut for ".dbo." which is "..", so...
INSERT INTO Database1..MyTable
(ColumnList)
SELECT FieldsIWant
FROM Database2..MyTable
first create the column if it doesn't exist:
ALTER TABLE database2..targetTable
ADD targetColumn int null -- or whatever column definition is needed
and since you're using Sql Server 2005 you can use the new MERGE statement.
The MERGE statement has the advantage of being able to treat all situations in one statement like missing rows from source (can do inserts), missing rows from destination (can do deletes), matching rows (can do updates), and everything is done atomically in a single transaction. Example:
MERGE database2..targetTable AS t
USING (SELECT sourceColumn FROM sourceDatabase1..sourceTable) as s
ON t.PrimaryKeyCol = s.PrimaryKeyCol -- or whatever the match should be based on
WHEN MATCHED THEN
UPDATE SET t.targetColumn = s.sourceColumn
WHEN NOT MATCHED THEN
INSERT (targetColumn, [other columns ...]) VALUES (s.sourceColumn, [other values ..])
The MERGE statement was introduced to solve cases like yours and I recommend using it, it's much more powerful than solutions using multiple sql batch statements that basically accomplish the same thing MERGE does in one statement without the added complexity.
You could also use a cursor. Assuming you want to iterate all the records in the first table and populate the second table with new rows then something like this would be the way to go:
DECLARE @FirstField nvarchar(100)
DECLARE ACursor CURSOR FOR
SELECT FirstField FROM FirstTable
OPEN ACursor
FETCH NEXT FROM ACursor INTO @FirstField
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO SecondTable ( SecondField ) VALUES ( @FirstField )
FETCH NEXT FROM ACursor INTO @FirstField
END
CLOSE ACursor
DEALLOCATE ACursor
MERGE is only available in SQL 2008, NOT SQL 2005.
insert into Test2.dbo.MyTable (MyValue) select MyValue from Test1.dbo.MyTable
This is assuming a great deal. First, that the destination table is empty. Second, that the other columns are nullable. You may need an update instead; to do that you will need a common key.
