Declaring temporary variables in PostgreSQL

I'm migrating from SQL Server to PostgreSQL. I've seen from How to declare a variable in a PostgreSQL query that there is no such thing as temporary variables in native SQL queries.
Well, I pretty badly need a few... How would I go about mixing in PL/pgSQL? Must I create a function and then delete the function in order to get access to a language? That just seems error-prone to me, and I'm afraid I'm missing something.
EDIT:
cmd.CommandText = "insert......" +
"declare @app int; declare @gid int;" +
"set @app=SCOPE_IDENTITY();" + // select scope_identity will give us our RID that we just inserted
"select @gid=MAX(GROUPID) from HOUSEHOLD; set @gid=@gid+1; " +
"insert into HOUSEHOLD (APPLICANT_RID,GROUPID,ISHOH) values " +
"(@app,@gid,1);" +
"select @app";
rid = cmd.ExecuteScalar();
A direct rip from the application in which it's used. Note we are in the process of converting from SQL Server to PostgreSQL. (Also, I've figured out the SCOPE_IDENTITY() bit, I think.)

What is the schema of the table being inserted into? I'll answer based on this assumed schema:
CREATE TABLE HOUSEHOLD (
    APPLICANT_RID SERIAL, -- PostgreSQL auto-increment
    GROUPID INTEGER,
    ISHOH INTEGER
);
If I'm understanding your intent correctly, in PostgreSQL >= 8.2, the query would then be:
INSERT INTO HOUSEHOLD (GROUPID, ISHOH)
VALUES ((SELECT COALESCE(MAX(GROUPID)+1,1) FROM HOUSEHOLD), 1)
RETURNING APPLICANT_RID;
-- Added call to the COALESCE function to cover the case where HOUSEHOLD
-- is empty and MAX(GROUPID) returns NULL
In PostgreSQL >= 8.2, any INSERT/DELETE/UPDATE query may have a RETURNING clause that acts like a simple SELECT performed on the result set of the change query.
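For illustration, a sketch of the same clause on the other change statements, against the assumed schema above:
UPDATE HOUSEHOLD SET ISHOH = 0 WHERE GROUPID = 7 RETURNING APPLICANT_RID, ISHOH;
DELETE FROM HOUSEHOLD WHERE GROUPID = 7 RETURNING APPLICANT_RID;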

If you're using a language binding, you can hold the variables there.
For example with SQLAlchemy (python):
my_var = 'Reynardine'
session.query(User.name).filter(User.fullname==my_var)
If you're in psql, you have variables:
\set a 5
SELECT :a;
And if your logic is in PL/pgSQL:
tax := subtotal * 0.06;
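For context, a minimal sketch of how such an assignment sits inside a full PL/pgSQL function (names invented):
CREATE OR REPLACE FUNCTION order_tax(subtotal numeric)
RETURNS numeric AS $$
DECLARE
    tax numeric;  -- a real variable, scoped to this function
BEGIN
    tax := subtotal * 0.06;
    RETURN tax;
END;
$$ LANGUAGE plpgsql;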

Must I create a function and then delete the function in order to get access to a language?
Yes, but this shortcoming is going to be removed in PostgreSQL 8.5 with the addition of the DO command. 8.5 is scheduled for release in 2010.
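For reference, a sketch of what the anonymous-block syntax looks like (the 8.5 development cycle eventually shipped as 9.0), reusing the question's table:
DO $$
DECLARE
    gid integer;
BEGIN
    SELECT COALESCE(MAX(GROUPID), 0) + 1 INTO gid FROM HOUSEHOLD;
    RAISE NOTICE 'next group id: %', gid;
END;
$$;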

You can also declare session variables using plperl - http://www.postgresql.org/docs/8.4/static/plperl-global.html
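A sketch along the lines of that doc page: plperl keeps a per-session %_SHARED hash, so a pair of small functions can emulate session variables (function names are invented):
CREATE OR REPLACE FUNCTION set_var(name text, val text) RETURNS void AS $$
    $_SHARED{$_[0]} = $_[1];   # stored for the lifetime of the session
$$ LANGUAGE plperl;

CREATE OR REPLACE FUNCTION get_var(name text) RETURNS text AS $$
    return $_SHARED{$_[0]};
$$ LANGUAGE plperl;

SELECT set_var('app', '42');
SELECT get_var('app');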

You install a language that you want to use with the CREATE LANGUAGE command for known languages (although you can use other languages, too).
Language installation docs
CREATE LANGUAGE usage doc
You will have to create a function to use it. If you do not want to make a permanent function in the DB, the other choice would be a script, in Python or something similar, that uses a PostgreSQL driver to connect to the DB and run queries. You can then manipulate or look through the data in the script. For instance, in Python you would install the PyGreSQL library and in your script import pgdb, which you can use to connect to the DB.
PyGreSQL Info

I think that PostgreSQL's row-type variable would be the closest thing:
A variable of a composite type is called a row variable (or row-type variable). Such a variable can hold a whole row of a SELECT or FOR query result, so long as that query's column set matches the declared type of the variable.
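A sketch of what that looks like, borrowing the HOUSEHOLD table from the question (function name invented):
CREATE OR REPLACE FUNCTION show_group(rid integer) RETURNS void AS $$
DECLARE
    hh HOUSEHOLD%ROWTYPE;   -- holds one whole row of HOUSEHOLD
BEGIN
    SELECT * INTO hh FROM HOUSEHOLD WHERE APPLICANT_RID = rid;
    RAISE NOTICE 'group id: %', hh.GROUPID;
END;
$$ LANGUAGE plpgsql;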

You mentioned the post (How to declare a variable in a PostgreSQL query).
I believe there is a suitable answer farther down the chain of solutions if using psql and the \set command:
my_db=> \set myvar 5
my_db=> SELECT :myvar + 1 AS my_var_plus_1;

Related

SQL Server - How do I get multiple rows of data into a returned variable

First question here, so hoping that someone can help!
I'm doing a lot of conversions of Access back-ends onto SQL Server, keeping the front end in Access.
I have come across something that I need a little help with.
In Access, I have a query that is using a user-defined function in order to amalgamate some data from rows in a table into one variable. (By opening a recordset and enumerating through it, adding to a variable each time.)
For example:
The query has a field that calls the function like this:
ProductNames: Product(ContractID)
And the VBA function "Product()" searches a table based on the ContractID, cycles through each row it finds, and concatenates the results of one field into one variable, which is ultimately returned to the query.
Obviously, moving this query to SQL Server as a view means that that function will not be found, as it's in Access.
Can I use a function or stored procedure to do the same thing? (I have never used them before.)
I must stress that I cannot create, alter or drop tables at run-time due to very strict production environment security.
If someone could give me an example I'd be really grateful.
So I need to be able to call it from the view as shown above.
Let's say the table I'm looking at for the data is called tbl_Products and it has 2 columns:
| ContractID | Product |
How would that be done? Any help massively appreciated!
Andy
Yes, you can most certainly do the same thing and adopt the same approach in SQL Server as you did in the past with VBA + SQL.
The easy solution would be to link to the view and then build a local query that adds the additional column. However, for reasons of performance, and simply to ease converting SQL from Access to T-SQL, I often "duplicate" those VBA functions as T-SQL functions.
The beauty of this approach is that once you make this function, it goes a long way towards easily converting some of your Access SQL to T-SQL and views.
I had a GST calculation function in VBA that you would pass an amount and a date (because the GST rate changes on known dates, past or future).
So I used this function all over the place in my Access SQL.
When I had to convert to SQL Server, I was able to use views and pass-through queries from Access, use very similar SQL, and include that SQL function in the SQL just like I did in Access.
You need to create what is called a SQL function. This kind of function is often called a scalar function. It works just like a function in VBA.
So you can use it in a T-SQL stored procedure, or even as an expression in your SQL, just like in Access!
In your example, let's assume that you have some contract table, and you want to grab the "status" column (we assume text).
And there could be one, several, or none!
So we will concatenate each of the child records' "status" codes based on contract id.
You can thus fire up SSMS and simply expand your database in the tree view. Now expand "Programmability", then "Functions". You see "Scalar-valued Functions". These functions are just like VBA functions. Once created, you can use the function in code (T-SQL) or in views etc.
At this point, you can now write T-SQL code in place of VBA code.
And really, you don't have to expand the tree above, but it will allow you to find, see and change the functions you create. Once created, ANY SQL or code for that database can use the function as an expression, just like you did in Access.
This code should do the trick:
CREATE FUNCTION [dbo].[ContractStatus]
(@ContractID int)
RETURNS varchar(255)
AS
BEGIN
    -- Declare a cursor (recordset)
    DECLARE @tmpStatus varchar(25)
    DECLARE @MyResult varchar(255)
    SET @MyResult = ''
    DECLARE rst CURSOR
    FOR SELECT Status FROM tblContracts WHERE ID = @ContractID
    OPEN rst
    FETCH NEXT FROM rst INTO @tmpStatus
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @MyResult <> ''
            SET @MyResult = @MyResult + ','
        SET @MyResult = @MyResult + @tmpStatus
        FETCH NEXT FROM rst INTO @tmpStatus
    END
    -- Release the cursor before returning
    CLOSE rst
    DEALLOCATE rst
    -- Return the result of the function
    RETURN @MyResult
END
Now, in SQL, you can go:
SELECT ProjectName, ID, dbo.ContractStatus([ID]) AS MyStatus FROM tblProjects
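As an aside, the same result is possible without a cursor using the common FOR XML PATH concatenation idiom; a sketch under the same assumed schema (function name invented):
CREATE FUNCTION [dbo].[ContractStatusSetBased]
(@ContractID int)
RETURNS varchar(255)
AS
BEGIN
    DECLARE @MyResult varchar(255)
    -- Build ',a,b,c' in one set-based query, then strip the leading comma
    SELECT @MyResult = STUFF(
        (SELECT ',' + Status
           FROM tblContracts
          WHERE ID = @ContractID
            FOR XML PATH('')), 1, 1, '')
    RETURN ISNULL(@MyResult, '')
END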

Create SQL user-defined function in ColdFusion with MS SQL Server

I'm doing queries in which I want to extract the left-most n characters from a string that has been stripped of all leading and following spaces. An example is:
Select SUBSTRING(LTRIM(RTRIM(somefield)), 0, #n) AS mydata
FROM sometable
It's the only way I can figure to do it on a SQL Server.
I've never written a UDF before, but I think if I was just working on a SQL Server, I could create a user-defined function such as:
CREATE FUNCTION udfLeftTrimmed
(
    @inputString nvarchar(50),
    @n int
)
RETURNS nvarchar(50) -- note: the return length can't be a parameter in T-SQL
AS
BEGIN
    -- SUBSTRING is 1-based in SQL Server
    RETURN SUBSTRING(LTRIM(RTRIM(@inputString)), 1, @n);
END
I could then do something like:
Select udfLeftTrimmed(somefield,6) AS mydata
FROM sometable
which is at least a little easier to read and understand.
The question is, how do I create the UDF in ColdFusion? All my searches for SQL user-defined function in ColdFusion just gave me how to create ColdFusion functions.
Since there is nothing special or "dynamic" about your UDF, you really don't need to create it in CF. You should just create it using MSSQL Manager. UDFs in SQL are like stored procedures: once created, they are part of the DB/schema, so create once, use as many times as you like (as @leigh has mentioned).
Keep in mind that using a SQL UDF usually requires the user to prepend the schema, as in:
<cfquery...>
Select dbo.udfLeftTrimmed(somefield,6) AS mydata
FROM sometable
</cfquery>
Note the "dbo.udf..." prefix; that dbo is important and may be why your subsequent try is failing (besides getting a duplicate-UDF error by now :)
NOTE:
To follow up on your comments and Leigh's: you can create your UDF in a DB accessible to your user and then access it as dbname.dbo.function, as in the following code:
<cfquery...>
Select myspecialDatabase.dbo.udfLeftTrimmed(somefield,6) AS mydata
FROM sometable
</cfquery>
Then you need only create it one time.

Mongodb ObjectId generator as SQL Server proc

I have a hybrid application where part of data (mostly legacy) is stored in SQL Server and another part in Mongodb. I just converted all primary key types in SQL Server to use ObjectId which I generate in the application when inserting new records into SQL Server.
Now, I found that I need to clone some template records (about 10-20 records at a time), and in order to do that I need to be able to generate ObjectId values via a SQL Server function or stored proc.
Is it possible and is there code available?
This question is old, but I was trying to do the same thing. This is what I came up with on SQL Server 2012.
Create Function NewObjectId(@counter binary(3))
returns binary(12)
begin
    declare @epoch datetime2, @seconds binary(4), @process binary(2), @hostname binary(3)
    set @epoch = '1/1/1970'
    select @seconds = cast(Datediff(ss, @epoch, getutcdate()) as binary(4))
    select @hostname = cast(HashBytes('MD5', HOST_NAME()) as binary(3))
    select @process = cast(@@SPID as binary(2))
    declare @objectId binary(12)
    select @objectId = (@seconds + @hostname + @process + @counter)
    return @objectId
end
This can be called like this:
select NewObjectId(CRYPT_GEN_RANDOM(3))
The reason CRYPT_GEN_RANDOM(3) is passed in is that calling that function is apparently side-effecting, so it can't be used inside another function. I would have preferred an incrementing sequence for the counter portion, but a random number works as well.
Also, I noticed that you said you are using a char(24) to store the value. This function returns a binary(12), since that's what MongoDB ObjectIds are; binary(12) also requires half the space to store the value.
I'm sure this isn't helpful now, but it was a fun problem to solve.
I'm going to try taking my ObjectID C# code and see if I can load it as a CLR function into SQL Server. That might give better results and performance.
I think you can use the NEWID function, which generates a 16-byte uniqueidentifier.
But in MongoDB the BSON ObjectId datatype is a 12-byte binary value.
Try this:
SELECT LEFT(REPLACE(CAST(NEWID() as varchar(36)),'-',''),24)
Hope this helps.
EDITED
The article Object IDs describes the BSON ObjectID specification. The format includes:
TimeStamp. A Unix-style timestamp: a signed int representing the number of seconds before or after January 1st 1970 (UTC).
Machine. The first three bytes of the (MD5) hash of the machine host name, or of the MAC/network address, or the virtual machine id.
Pid. 2 bytes of the process id (or thread id) of the process generating the object id.
Increment. An ever-incrementing value, or a random number if a counter can't be used in the language/runtime.
The server itself and almost all drivers use the format above.
So, it is impossible to generate a real MongoDB ObjectID in SQL Server.
The only way to solve this problem is to change the logic of the application.

Hidden Features of PostgreSQL [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm surprised this hasn't been posted yet. Any interesting tricks that you know about in Postgres? Obscure config options and scaling/perf tricks are particularly welcome.
I'm sure we can beat the 9 comments on the corresponding MySQL thread :)
Since postgres is a lot more sane than MySQL, there are not that many "tricks" to report on ;-)
The manual has some nice performance tips.
A few other performance related things to keep in mind:
Make sure autovacuum is turned on
Make sure you've gone through your postgresql.conf (effective_cache_size, shared_buffers, work_mem ... lots of options there to tune).
Use pgpool or pgbouncer to keep your "real" database connections to a minimum.
Learn how EXPLAIN and EXPLAIN ANALYZE work. Learn to read the output.
CLUSTER sorts data on disk according to an index. Can dramatically improve performance of large (mostly) read-only tables. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered.
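For example (table and index names are hypothetical; the USING form is the newer spelling of the command):
CLUSTER customer USING customer_pkey;  -- rewrite the table in index order
CLUSTER customer;                      -- later: re-cluster on the same index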
Here are a few things I've found useful that aren't config- or performance-related per se.
To see what's currently happening:
select * from pg_stat_activity;
Search misc functions:
select * from pg_proc WHERE proname ~* '^pg_.*'
Find size of database:
select pg_database_size('postgres');
select pg_size_pretty(pg_database_size('postgres'));
Find size of all databases:
select datname, pg_size_pretty(pg_database_size(datname)) as size
from pg_database;
Find size of tables and indexes:
select pg_size_pretty(pg_relation_size('public.customer'));
Or, to list all tables and indexes (probably easier to make a view of this):
select schemaname, relname,
pg_size_pretty(pg_relation_size(schemaname || '.' || relname)) as size
from (select schemaname, relname, 'table' as type
from pg_stat_user_tables
union all
select schemaname, relname, 'index' as type
from pg_stat_user_indexes) x;
Oh, and you can nest transactions and roll back partial transactions:
test=# begin;
BEGIN
test=# select count(*) from customer where name='test';
count
-------
0
(1 row)
test=# insert into customer (name) values ('test');
INSERT 0 1
test=# savepoint foo;
SAVEPOINT
test=# update customer set name='john';
UPDATE 3
test=# rollback to savepoint foo;
ROLLBACK
test=# commit;
COMMIT
test=# select count(*) from customer where name='test';
count
-------
1
(1 row)
The easiest trick to make PostgreSQL perform a lot better (apart from setting and using proper indexes, of course) is just to give it more RAM to work with (if you have not done so already). On most default installations the value for shared_buffers is way too low (in my opinion). You can set
shared_buffers
in postgresql.conf. Divide this number by 128 to get an approximation of the amount of memory (in MB) Postgres can claim. If you raise it enough, this will make PostgreSQL fly. Don't forget to restart PostgreSQL.
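For instance (purely illustrative numbers; the plain-integer form counts 8 kB pages):
# postgresql.conf
shared_buffers = 131072        # 131072 / 128 = 1024 MB of buffer cache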
On Linux systems, when postgresql won't start again you will probably have hit the kernel.shmmax limit. Set it higher with
sysctl -w kernel.shmmax=xxxx
To make this persist between boots, add a kernel.shmmax entry to /etc/sysctl.conf.
A whole bunch of Postgresql tricks can be found here:
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks
Postgres has a very powerful datetime handling facility thanks to its INTERVAL support.
For example:
select NOW(), NOW() + '1 hour';
now | ?column?
-------------------------------+-------------------------------
2009-04-18 01:37:49.116614+00 | 2009-04-18 02:37:49.116614+00
(1 row)
select current_date ,(current_date + interval '1 year')::date;
    date    |    date
------------+------------
 2014-10-17 | 2015-10-17
(1 row)
You can cast many strings to an INTERVAL type.
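For example:
select interval '2 hours 30 minutes';
select '1 week 3 days'::interval;
select now() - interval '90 days';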
COPY
I'll start. Whenever I switch to Postgres from SQLite, I usually have some really big datasets. The key is to load your tables with COPY FROM rather than doing INSERTS. See documentation:
http://www.postgresql.org/docs/8.1/static/sql-copy.html
The following example copies a table to the client using the vertical bar (|) as the field delimiter:
COPY country TO STDOUT WITH DELIMITER '|';
To copy data from a file into the country table:
COPY country FROM '/usr1/proj/bray/sql/country_data';
See also here:
Faster bulk inserts in sqlite3?
My favorite by far is generate_series: at last, a clean way to generate dummy rowsets.
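For example, dummy rows or a ready-made calendar (the timestamp arguments work in 8.4+):
select i, md5(i::text) from generate_series(1, 10000) as s(i);
select d::date from generate_series(timestamp '2009-01-01', timestamp '2009-12-31', interval '1 day') as s(d);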
Ability to use a correlated value in a LIMIT clause of a subquery:
SELECT (
SELECT exp_word
FROM mytable
OFFSET id
LIMIT 1
)
FROM othertable
Ability to use multiple parameters in custom aggregates (not covered by the documentation): see the article in my blog for an example.
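The gist of it, as a sketch (a two-argument weighted sum; all names invented):
CREATE FUNCTION wsum_step(acc numeric, val numeric, weight numeric)
RETURNS numeric AS $$
    SELECT acc + val * weight;
$$ LANGUAGE sql;

CREATE AGGREGATE weighted_sum(numeric, numeric) (
    sfunc = wsum_step,
    stype = numeric,
    initcond = '0'
);

-- usage, assuming an order_lines(price, qty) table:
-- SELECT weighted_sum(price, qty) FROM order_lines;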
One of the things I really like about Postgres is some of the data types supported in columns. For example, there are column types made for storing Network Addresses and Arrays. The corresponding functions (Network Addresses / Arrays) for these column types let you do a lot of complex operations inside queries that you'd have to do by processing results through code in MySQL or other database engines.
Arrays are really cool once you get to know 'em.
Let's say you would like to store some hyperlinks between pages. You might start by thinking about creating a table kinda like this:
CREATE TABLE hyper.links (
tail INT4,
head INT4
);
If you needed to index the tail column, and you had, say, 200,000,000 link rows (like Wikipedia would give you), you would find yourself with a huge table and a huge index.
However, with PostgreSQL, you could use this Table format instead:
CREATE TABLE hyper.links (
tail INT4,
head INT4[],
PRIMARY KEY(tail)
);
To get all heads for a link you could send a command like this (unnest() is standard since 8.4):
SELECT unnest(head) FROM hyper.links WHERE tail = $1;
This query is surprisingly fast compared with the first option (unnest() is fast and the index is way, way smaller). Furthermore, your table and index will take up much less RAM and disk space, especially when your arrays are so long that they are compressed into a TOAST table. Arrays are really powerful.
Note: while unnest() will generate rows out of an Array, array_agg() will aggregate rows into an Array.
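For example, to build the links table above from a hypothetical flat (tail, head) pair table:
INSERT INTO hyper.links (tail, head)
SELECT tail, array_agg(head)
FROM flat_links
GROUP BY tail;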
Materialized views are pretty easy to set up:
CREATE VIEW my_view AS SELECT id, AVG(my_col) FROM my_table GROUP BY id;
CREATE TABLE my_matview AS SELECT * FROM my_view;
That creates a new table, my_matview, with the columns and values of my_view. Triggers or a cron script can then be set up to keep the data up to date, or if you're lazy:
TRUNCATE my_matview;
INSERT INTO my_matview SELECT * FROM my_view;
Inheritance... in fact multiple inheritance (as in parent-child "inheritance", not 1-to-1 relational inheritance, which many web frameworks implement when working with Postgres).
PostGIS (spatial extension), a wonderful add-on that offers a comprehensive set of geometry functions and coordinate storage out of the box. Widely used in many open-source geo libraries (e.g. OpenLayers, MapServer, Mapnik, etc.) and definitely way better than MySQL's spatial extensions.
Writing procedures in different languages, e.g. C, Python, Perl, etc. (makes your life easier to code if you're a developer and not a DBA).
Also, all procedures can be stored externally (as modules) and can be called or imported at runtime with specified arguments. That way you can source-control and debug the code easily.
A huge and comprehensive catalogue of all objects implemented in your database (i.e. tables, constraints, indexes, etc.).
I always find it immensely helpful to run a few queries and get all the meta info, e.g. constraint names and the fields on which they are implemented, index names, etc.
For me it all becomes extremely handy when I have to load new data or do massive updates in big tables (I would automatically disable triggers and drop indexes), and then recreate them easily after processing has finished. Someone did an excellent job of writing a handful of these queries:
http://www.alberton.info/postgresql_meta_info.html
Multiple schemas under one database: you can use them if your database has a large number of tables; you can think of schemas as categories. All tables (regardless of their schema) have access to all other tables and functions present in the parent DB.
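For example:
CREATE SCHEMA billing;
CREATE TABLE billing.invoices (id serial PRIMARY KEY, total numeric);
SELECT * FROM billing.invoices;          -- fully qualified
SET search_path TO billing, public;      -- or resolve unqualified names via the search path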
You don't need to learn how to decipher EXPLAIN ANALYZE output; there is a tool: http://explain.depesz.com
select pg_size_pretty(200 * 1024)
pgcrypto: more cryptographic functions than many programming languages' crypto modules provide, all accessible direct from the database. It makes cryptographic stuff incredibly easy to Just Get Right.
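For example, salted password hashing (a sketch; on pre-9.1 servers pgcrypto is installed from contrib instead of CREATE EXTENSION):
CREATE EXTENSION pgcrypto;
SELECT crypt('my password', gen_salt('bf'));   -- store the result
-- verify an attempt against a stored hash:
SELECT crypt('guess', h) = h AS match
FROM (SELECT crypt('my password', gen_salt('bf')) AS h) x;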
A database can be copied with:
createdb -T old_db new_db
The documentation says:
this is not (yet) intended as a general-purpose "COPY DATABASE" facility
but it works well for me and is much faster than
createdb new_db
pg_dump old_db | psql new_db
Memory storage for throw-away data/global variables
You can create a tablespace that lives in the RAM, and tables (possibly unlogged, in 9.1) in that tablespace to store throw-away data/global variables that you'd like to share across sessions.
http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/
Advisory locks
These are documented in an obscure area of the manual:
http://www.postgresql.org/docs/9.0/interactive/functions-admin.html
It's occasionally faster than acquiring multitudes of row-level locks, and they can be used to work around cases where FOR UPDATE isn't implemented (such as recursive CTE queries).
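For example:
SELECT pg_advisory_lock(23);      -- blocks until the lock is granted
-- ... work that must be serialized across sessions ...
SELECT pg_advisory_unlock(23);
SELECT pg_try_advisory_lock(23);  -- non-blocking variant, returns boolean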
This is my list of favorite lesser-known features.
Transactional DDL
Nearly every SQL statement is transactional in Postgres. If you turn off autocommit the following is possible:
drop table customer_orders;
rollback;
select *
from customer_orders;
Range types and exclusion constraints
To my knowledge Postgres is the only RDBMS that lets you create a constraint that checks whether two ranges overlap. An example is a table that contains product prices with a "valid from" and "valid until" date:
create table product_price
(
price_id serial not null primary key,
product_id integer not null references products,
price numeric(16,4) not null,
valid_during daterange not null
);
NoSQL features
The hstore extension offers a flexible and very fast key/value store that can be used when parts of the database need to be "schema-less". JSON is another option for storing data in a schema-less fashion.
Back to the range type above; some sample data:
insert into product_price
(product_id, price, valid_during)
values
(1, 100.0, '[2013-01-01,2014-01-01)'),
(1, 90.0, '[2014-01-01,)');
-- querying is simple and can use an index on the valid_during column
select price
from product_price
where product_id = 42
and valid_during #> date '2014-10-17';
The execution plan for the above on a table with 700,000 rows:
Index Scan using check_price_range on public.product_price (cost=0.29..3.29 rows=1 width=6) (actual time=0.605..0.728 rows=1 loops=1)
Output: price
Index Cond: ((product_price.valid_during #> '2014-10-17'::date) AND (product_price.product_id = 42))
Buffers: shared hit=17
Total runtime: 0.772 ms
To avoid inserting rows with overlapping validity ranges a simple (and efficient) unique constraint can be defined:
alter table product_price
add constraint check_price_range
exclude using gist (product_id with =, valid_during with &&)
Infinity
Instead of requiring a "real" date far in the future, Postgres can compare dates to infinity. E.g., when not using a date range, you can do the following (this insert assumes a variant of the table with valid_from/valid_until columns instead of valid_during):
insert into product_price
(product_id, price, valid_from, valid_until)
values
(1, 90.0, date '2014-01-01', date 'infinity');
Writeable common table expressions
You can delete, insert and select in a single statement:
with old_orders as (
delete from orders
where order_date < current_date - interval '10' year
returning *
), archived_rows as (
insert into archived_orders
select *
from old_orders
returning *
)
select *
from archived_rows;
The above will delete all orders older than 10 years, move them to the archived_orders table and then display the rows that were moved.
1.) When you need to attach a note to a query, you can use an embedded comment:
SELECT /* my comments, that I would to see in PostgreSQL log */
a, b, c
FROM mytab;
2.) Remove trailing spaces from all text and varchar fields in a database:
do $$
declare
    selectrow record;
begin
    for selectrow in
        select
            'UPDATE '||c.table_name||' SET '||c.COLUMN_NAME||'=TRIM('||c.COLUMN_NAME||') WHERE '||c.COLUMN_NAME||' ILIKE ''% '' ' as script
        from (
            select table_name, COLUMN_NAME
            from INFORMATION_SCHEMA.COLUMNS
            where table_name LIKE 'tbl%' and (data_type='text' or data_type='character varying')
        ) c
    loop
        execute selectrow.script;
    end loop;
end;
$$;
3.) We can use a window function to remove duplicate rows very effectively:
DELETE FROM tab
WHERE id IN (SELECT id
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), id
FROM tab) x
WHERE x.row_number > 1);
A PostgreSQL-optimized version (with ctid):
DELETE FROM tab
WHERE ctid = ANY(ARRAY(SELECT ctid
FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), ctid
FROM tab) x
WHERE x.row_number > 1));
4.) When we need to identify the server's state, we can use the function:
SELECT pg_is_in_recovery();
5.) Get a function's DDL:
select pg_get_functiondef((select oid from pg_proc where proname = 'f1'));
6.) Safely changing a column's data type in PostgreSQL:
create table test(id varchar );
insert into test values('1');
insert into test values('11');
insert into test values('12');
select * from test;
--Result--
id (character varying)
-----------------------
1
11
12
You can see from the above table that I have used the data type 'character varying' for the 'id' column. But it was a mistake, because I always insert integers as the id, so using varchar here is bad practice. Let's try to change the column type to integer.
ALTER TABLE test ALTER COLUMN id TYPE integer;
But it returns:
ERROR: column "id" cannot be cast automatically to type integer
SQL state: 42804
Hint: Specify a USING expression to perform the conversion
That means we can't simply change the data type, because the data is already in the column. Since the data is of type 'character varying', Postgres can't treat it as integer even though we entered integers only. So, as Postgres suggested, we can use a USING expression to cast our data to integers:
ALTER TABLE test ALTER COLUMN id TYPE integer USING (id::integer);
It works.
7.) Know who is connected to the database
This is more or less a monitoring command. To see which user is connected to which database, including their IP and port, use the following SQL:
SELECT datname, usename, client_addr, client_port FROM pg_stat_activity;
8.) Reloading PostgreSQL configuration files without restarting the server
PostgreSQL configuration parameters are located in special files like postgresql.conf and pg_hba.conf. Often you may need to change these parameters, and for some of them to take effect the configuration file must be reloaded. Of course, restarting the server will do it, but in a production environment it is not desirable to restart a database that is being used by thousands, just to set some parameters. In such situations, we can reload the configuration files without restarting the server by using the following function:
select pg_reload_conf();
Remember, this won't work for all parameters; some parameter changes need a full server restart to take effect.
9.) Getting the data directory path of the current database cluster
It is possible that multiple instances (clusters) of PostgreSQL are set up on one system, generally on different ports. In such cases, finding which directory (physical storage directory) is used by which instance is a hectic task. In such cases, we can use the following command in any database in the cluster of interest to get the directory path:
SHOW data_directory;
Note that data_directory cannot be changed at runtime with SET; to move a cluster, change it in postgresql.conf (or the -D start option), which requires a server restart.
10.) Check whether a CHAR value is a valid DATE:
create or replace function is_date(s varchar) returns boolean as $$
begin
perform s::date;
return true;
exception when others then
return false;
end;
$$ language plpgsql;
Usage: each of the following will return true:
select is_date('12-12-2014')
select is_date('12/12/2014')
select is_date('20141212')
select is_date('2014.12.12')
select is_date('2014,12,12')
11.) Change the owner in PostgreSQL
REASSIGN OWNED BY sa TO postgres;
12.) PGADMIN PLPGSQL DEBUGGER
Well explained here
It's also convenient that renaming a database is easy, unlike in MySQL. Just use the following command:
ALTER DATABASE name RENAME TO new_name;

Using SQL Server 2008 Geography types with nHibernate's CreateSQLQuery

I am trying to issue a SQL update statement with nHibernate (2.0.1GA) like this:
sqlstring = string.Format("set nocount on;update myusers set geo=geography::Point({0}, {1}, 4326) where userid={2};", mlat, mlong, userid);
_session.CreateSQLQuery(sqlstring).ExecuteUpdate();
However I receive the following error: 'geography@p0' is not a recognized built-in function name.
I thought CreateSQLQuery would just pass the SQL I gave it and execute it...guess not. Any ideas on how I can do that within the context of nHibernate?
I'm pretty sure I can tell you what is happening, but I don't know if there is a fix for it.
I think the problem is that the ':' character is used by NHibernate to create a named parameter. Your expression is getting changed to:
set nocount on;update myusers set geo=geography@p0({0}, {1}, 4326) where userid={2};
And @p0 is going to be a SQL variable. Unfortunately I can't find any documentation for escaping colons so they are not treated as a named parameter.
If an escape character exists (my quick skim of the NHibernate source didn't find one; Named parameters are handled in NHibernate.Engine.Query.ParameterParser if you want to spend a little more time searching), then you could use that.
Other solutions:
Add an escape character to the source. You can then use a modified version of NHibernate. If you do this, you should submit your patch to the team so it can be included in the real thing and you don't have to maintain a modified version of the source (no fun).
Create a user defined function in your DB that returns a geography::Point, then call your function instead of the standard SQL function. This seems like the quickest/easiest way to get up and running, but also feels a bit like a band-aid.
See if there is something in NHibernate Spatial that will let you programmatically add the geography::Point() [or edit the code for that project to add one and submit the patch to that team].
"{whatever} is not a recognized built-in function name" is a SQL Server error message, not sure what Hibernate is doing there but SQL Server is the one complaining about it.
There is an implicit conversion from varchar to the geography type.
Use NHibernate to set the geography parameters to their string representation.
Define a SQL query template with a named parameter loc:
const string Query = @"SELECT {location.*}
FROM {location}
WHERE {location}.STDistance(:loc) is not null
ORDER BY {location}.STDistance(:loc)";
Set the parameter to a string representation of Point:
return session
.CreateSQLQuery(Query)
.AddEntity("location", typeof (Location))
.SetString("loc", "Point (53.39006999999999 -3.0084007)")
.SetMaxResults(1)
.UniqueResult<Location>();
This is for a select, but I see no reason why it wouldn't work for an insert or update.
Following on from @Chris's answer, here is a copy-and-paste solution:
CREATE FUNCTION GetPoint
(
    @lat float,
    @lng float,
    @srid int
)
RETURNS geography
AS
BEGIN
    declare @point geography = geography::Point(@lat, @lng, @srid);
    RETURN @point
END
GO
Then you do
dbo.GetPoint(@Latitude, @Longitude, 4326)
instead of
geography::Point(@Latitude, @Longitude, 4326);
And NH is happy
