How to find the number of tables in a database and the size of a database in Pervasive

I am using Pervasive Control Center v11. How do I find the size of an entire database and the number of tables?
The following statement results in the error 'error in expression':
SELECT pg_size_pretty( pg_database_size('RECORDS'))

As far as I know, there's no way to get the size of the database for a Pervasive PSQL database through a SQL Statement. You can look at the actual files on disk to see the size.
You can get a count of the tables by executing the following statement:
select count(*) from x$file
This count will include the system tables (X$XXXX).
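If you want a count of only the user tables, you can filter out the system tables by name. A minimal sketch, assuming the Xf$Name column of X$File holds the table names:
select count(*) from x$file where Xf$Name not like 'X$%'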
The pg_ functions aren't Pervasive PSQL functions but are PostgreSQL functions and won't work with a Pervasive / Actian database.

Related

Migrate PostgreSQL database into MS SQL Server

I have a PostgreSQL database that I want to move to SQL Server -- both schema and data. I am poor so I don't want to pay any money. I am also lazy, so I don't want to do very much work. Currently I'm doing this table by table, and there are about 100 tables to do. This is extremely tedious.
Is there some sort of trick that does what I want?
You should be able to find some useful information in the accepted answer in this Serverfault page: https://serverfault.com/questions/65407/best-tool-to-migrate-a-postgresql-database-to-ms-sql-2005.
If you can get the schema converted without the data, you may be able to shorten the steps for the data by using this command:
pg_dump --data-only --column-inserts your_db_name > data_load_script.sql
This load will be quite slow, but the --column-inserts option generates the most generic INSERT statements possible for each row of data and should be compatible.
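For illustration, the --column-inserts output contains one self-contained INSERT per row, along these lines (hypothetical table and columns):
INSERT INTO customers (id, name, created_at) VALUES (1, 'Alice', '2015-03-01 12:00:00');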
EDIT: Suggestions on converting the schema follow:
I would start by dumping the schema, but removing anything that has to do with ownership or permissions. This should be enough:
pg_dump --schema-only --no-owner --no-privileges your_db_name > schema_create_script.sql
Edit this file to add the line BEGIN TRANSACTION; to the beginning and ROLLBACK TRANSACTION; to the end. Now you can load it and run it in a query window in SQL Server. If you get any errors, make sure you go to the bottom of the file, highlight the ROLLBACK statement and run it (by hitting F5 while the statement is highlighted).
Basically, you have to resolve each error until the script runs through cleanly. Then you can change the ROLLBACK TRANSACTION to COMMIT TRANSACTION and run one final time.
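In other words, the edited script ends up with this rough shape (the dumped DDL elided):
BEGIN TRANSACTION;
-- ... contents of schema_create_script.sql ...
ROLLBACK TRANSACTION; -- change to COMMIT TRANSACTION once the script runs cleanly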
Unfortunately, I cannot help with which errors you may see as I have never gone from PostgreSQL to SQL Server, only the other way around. Some things that I would expect to be an issue, however (obviously, NOT an exhaustive list):
PostgreSQL implements auto-increment fields by linking a NOT NULL INTEGER field to a SEQUENCE using a DEFAULT. In SQL Server, the closest equivalent is an IDENTITY column, but they're not exactly the same thing. If your original schema is full of "id" fields, you may be in for some trouble; older versions of SQL Server have no CREATE SEQUENCE, so you may have to remove those statements and switch the columns to IDENTITY.
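As a hypothetical example of the rewrite this usually means, here is a dumped PostgreSQL id column next to a rough SQL Server equivalent:
-- PostgreSQL (as it appears in the dump):
CREATE SEQUENCE mytable_id_seq;
CREATE TABLE mytable (id integer NOT NULL DEFAULT nextval('mytable_id_seq'::regclass));
-- SQL Server rewrite (drop the sequence, use IDENTITY instead):
CREATE TABLE mytable (id INT IDENTITY(1,1) NOT NULL);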
Database functions / Stored Procedures do not translate between RDBMS platforms. You'll need to remove any CREATE FUNCTION statements and translate the algorithms manually.
Be careful about encoding of the data file. I'm a Linux person, so I have no idea how to verify encoding in Windows, but you need to make sure that what SQL Server expects is the same as the file you are importing from PostgreSQL. pg_dump has an option --encoding= that will let you set a specific encoding. I seem to recall that Windows tends to use two-byte, UTF-16 encoding for Unicode where PostgreSQL uses UTF-8. I had some issue going from SQL Server to PostgreSQL due to UTF-16 output so it would be worth researching.
The PostgreSQL datatype TEXT is simply a VARCHAR without a maximum length. In SQL Server, TEXT is... complicated (and deprecated). Each field in your original schema that is declared as TEXT will need to be reviewed for an appropriate SQL Server data type.
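For example (hypothetical column), the usual mapping is VARCHAR(MAX) or NVARCHAR(MAX) rather than SQL Server's TEXT:
-- PostgreSQL: notes TEXT
-- SQL Server: notes NVARCHAR(MAX)
CREATE TABLE notes_example (notes NVARCHAR(MAX) NULL);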
SQL Server has extra data types for UNICODE data. I'm not familiar enough with it to make suggestions. I'm just pointing out that it may be an issue.
I have found a faster and easier way to accomplish this.
First copy your table (or query) to a tab delimited file like so:
COPY (SELECT siteid, searchdist, listtype, list, sitename, county, street,
city, state, zip, georesult, elevation, lat, lng, wkt, unlocated_bool,
id, status, standard_status, date_opened_or_reported, date_closed,
notes, list_type_description FROM mlocal) TO 'c:\SQLAzureImportFiles\data_script_mlocal.tsv' NULL E''
Next you need to create your table in SQL Server; this approach will not handle any schema for you, so the schema must match your exported tsv file in column order and data types.
Finally, run SQL Server's bcp utility to bring in the tsv file like so:
bcp MyDb.dbo.mlocal in "\\NEWDBSERVER\SQLAzureImportFiles\data_script_mlocal.tsv" -S tcp:YourDBServer.database.windows.net -U YourUserName -P YourPassword -c
A couple of things of note that I encountered: Postgres and SQL Server handle boolean fields differently. Your SQL Server schema needs to have the boolean fields set to varchar(1), and the resulting data will be 'f', 't' or null. You will then have to convert each such field to a bit by doing something like:
ALTER TABLE mlocal ADD unlocated bit;
UPDATE mlocal SET unlocated=1 WHERE unlocated_bool='t';
UPDATE mlocal SET unlocated=0 WHERE unlocated_bool='f';
ALTER TABLE mlocal DROP COLUMN unlocated_bool;
Another thing is the geography/geometry fields are very different between the two platforms. Export the geometry fields as WKT using ST_AsText(geo) and convert appropriately on the SQL Server end.
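On the SQL Server side that conversion can look something like the following, assuming the WKT landed in a varchar column named wkt and the data uses SRID 4326:
ALTER TABLE mlocal ADD geo geography;
UPDATE mlocal SET geo = geography::STGeomFromText(wkt, 4326);
ALTER TABLE mlocal DROP COLUMN wkt;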
There may be more incompatibilities needing tweaks like this.
EDIT: While this technique does technically work, I have been trying to transfer several million records from 100+ tables to SQL Azure, and it turns out that bcp against SQL Azure is pretty flaky. I keep getting intermittent 'Unable to open BCP host data-file' errors, the server intermittently times out, and for some reason some records simply do not get transferred, with no indication of any error or problem. So this technique is not stable for transferring large amounts of data to Azure SQL.
You can use Navicat, a powerful GUI tool for working with various databases including Postgres and SQL Server.
You can transfer both schema and data easily as follows:
Create two connections, one for the source database and one for the target database
Go to Tools -> Data Transfer
Select the source database and the target database with its IP, database name and schema
As you can see in the options, if a target table does not exist it will be created
Tada, it took about 10 minutes to transfer all 63 of my tables and their data from Postgres to SQL Server.
Enjoy it!

Read Oracle DB FROM Microsoft SQL Server Manager 2012 to find Unknown Table

I am working on a project that requires me to read/write to an OLE DB source. I do not know which table holds the information I'm looking for, and there are about 150+ tables. Is there a way to search each table for a certain column header without querying each table one by one? So far I've just been doing
SELECT * FROM [name]..[name].[name]
GO
And reading the headers in data grid view on a vb.net program I made.
Try this if you have a user with dba privileges on the Oracle database.
SELECT OWNER, TABLE_NAME,COLUMN_NAME
FROM sys.dba_tab_columns
WHERE column_name LIKE UPPER('%Your column name%');
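If you don't have DBA privileges, the same search can be run against ALL_TAB_COLUMNS, which lists the columns of every table you are allowed to access:
SELECT OWNER, TABLE_NAME, COLUMN_NAME
FROM all_tab_columns
WHERE column_name LIKE UPPER('%Your column name%');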

Quick way to perform a fulltext-search on MS SQL Server

First of all: I don't need a full-text-search engine, and I don't need full-text search in my code. I have a database with ~2000 tables, and I need to find the table and column in which certain information is stored, for development purposes. Is there any quick way (maybe an SQL Server Management Studio trick that I should know of) to do this? I think phpMyAdmin provides such a feature for MySQL databases. At the moment I'm seriously thinking of dumping the database to an .sql file and using a text editor to search for the phrases I'm looking for.
Check the INFORMATION_SCHEMA views. You can select from them; there is a view containing all the column names, and you can then run your search against that.
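For example, to list every table and column whose column name contains a given word ('phrase' is a placeholder):
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%phrase%';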
I don't see a way to do it without dynamic SQL: get the list of all tables and their columns from sys.tables and sys.columns (don't forget to include the schema if you use schemas), construct a query per column that checks for the values you're trying to find and stores the table and column name in a temporary table, then cursor/loop over the generated queries and execute them all; see the sketch below.
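A minimal sketch of that approach, searching every character column for a placeholder value 'foo' and collecting the matches in a temp table:
IF OBJECT_ID('tempdb..#hits') IS NOT NULL DROP TABLE #hits;
CREATE TABLE #hits (SchemaName sysname, TableName sysname, ColumnName sysname);
DECLARE @schema sysname, @table sysname, @column sysname, @sql nvarchar(max);
DECLARE col_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, t.name, c.name
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    JOIN sys.columns c ON c.object_id = t.object_id
    JOIN sys.types ty ON ty.user_type_id = c.user_type_id
    WHERE ty.name IN ('char', 'varchar', 'nchar', 'nvarchar');
OPEN col_cursor;
FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- one generated query per column; QUOTENAME guards against odd identifiers
    SET @sql = N'IF EXISTS (SELECT 1 FROM ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
             + N' WHERE ' + QUOTENAME(@column) + N' LIKE ''%foo%'') '
             + N'INSERT INTO #hits VALUES (' + QUOTENAME(@schema, '''') + N', '
             + QUOTENAME(@table, '''') + N', ' + QUOTENAME(@column, '''') + N')';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
END
CLOSE col_cursor;
DEALLOCATE col_cursor;
SELECT * FROM #hits;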
PS: your idea of dumping everything into *.sql files should work as well; it depends on the volume of data.

What is the best way to move data between PostgreSQL and SQL Server databases

If we have the same database schema in a database on PostgreSQL and SQL Server (tables, primary keys, indexes and triggers are the same), what would be the best way to move data from one database to the other? Currently we have one in-house .NET program that does the following through two ODBC connections:
read a row from source database table 1
construct an insert statement
write a row into destination database table 1
Go to 1 if there are more rows in the table
Move to next table in database and go to 1
Needless to say: this is a very slow process and I would be interested if there was a better/faster solution to this?
If it's a "one off" migration, there's a tool you get with SQL Server which allows you to move data around between databases (I'm not on a Windows machine right now, so can't tell you what it's called - something like import/export tool).
If it's an ongoing synchronisation, you can look at the MS Sync framework, which plays nice with SQL Server and Postgres.
The answer is bulk export and bulk loading. You can go much faster by using the COPY command in PostgreSQL (https://www.postgresql.org/docs/current/static/sql-copy.html) to dump data from the tables in CSV format, and then using BULK INSERT to import the CSV files into SQL Server. A rule of thumb is to harness parallelism: check whether you can load the CSV data into SQL Server in parallel, and if you have many tables you can also parallelize across separate tables. By the way, loading or migrating data row by row is one of the slowest possible approaches.
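A minimal sketch of that pipeline for one (hypothetical) table, assuming a file location both servers can reach:
-- PostgreSQL side: export the table to CSV
COPY accounts TO '/shared/export/accounts.csv' WITH (FORMAT csv);
-- SQL Server side: bulk load the same file into a matching, pre-created table
BULK INSERT dbo.accounts
FROM '\\fileserver\export\accounts.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);
-- note: FIELDTERMINATOR does not understand CSV quoting; on SQL Server 2017+ you can add FORMAT = 'CSV' instead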
