PostgreSQL table backup restoration

I have a database containing 50 tables (5 schemas, 5 tablespaces) and tried to take a backup of a few tables (each in a different tablespace) using the following command:
$pg_dump -U my_db_user my_db_name -t my_table_1 -t my_table_2 -t my_table_3 > ttables.sql
The command above works fine for taking the .sql backup, but some table columns contain NULL values. While restoring the dump with the following command, I get errors caused by the NULL (\N) values in the backup file (ttables.sql):
$cat ttables.sql | psql -d new_db -U new_db_user
Is there any way to avoid the \N characters in the dump file? Or is there anything wrong with the backup/restore commands I have used?
(Postgres version 9.1)
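A note on the \N characters: in a plain-format dump, table data is written as COPY blocks, and \N is simply how COPY represents NULL, so psql should restore it unchanged; errors on those lines usually mean the target tables are missing or their columns differ. If you nevertheless want a dump without \N, pg_dump can emit INSERT statements instead, which spell out NULL literally. A minimal sketch, assuming the same tables and credentials as above:
$pg_dump -U my_db_user my_db_name -t my_table_1 -t my_table_2 -t my_table_3 --column-inserts > ttables.sql
Note that a dump made with --column-inserts restores considerably more slowly than a COPY-based one.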

How to restore database when files are claimed?

I have to restore a database and am following the official documentation, which involves two steps:
- List the files in the backup.
- Run the RESTORE command against those files.
However, I am facing an "already claimed" error.
I tried using different names, but that is not possible since the backup contains specific files. I also tried answers from other sites, but they all rely on a GUI.
The first command that I ran was:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost \
-U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE FILELISTONLY FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak"' \
| tr -s ' ' | cut -d ' ' -f 1-2
I got the following output:
LogicalName                  PhysicalName
---------------------------  ------------
us_national_statistics       C:\Program
us_national_statistics_log   C:\Program
Then, as per the documentation, I ran this command:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE DATABASE US_NATIONAL FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak" WITH MOVE "us_national_statistics" TO "C:\Program", MOVE "us_national_statistics_log" TO "C:\Program"'
Here, I get the following error:
Msg 3176, Level 16, State 1, Server 0a6a6aac7476, Line 1
File 'C:\Program\New' is claimed by 'us_national_statistics_log'(2) and 'us_national_statistics'(1). The WITH MOVE clause can be used to relocate one or more files.
Msg 3013, Level 16, State 1, Server 0a6a6aac7476, Line 1
RESTORE DATABASE is terminating abnormally.
I expect the database to be restored.
You can't restore to C:\Program for multiple reasons. That's not a full path (you seem to have lost the string after the first space in Program Files); the data and log can't both be put in the same file; you don't typically have write access to the root of any drive; and C:\ is not valid in Docker or Linux.
You need the LogicalName, but you should not use the PhysicalName directly: not when restoring into Docker or Linux, not when restoring alongside an existing copy you want to keep, and not when restoring to a different instance (which will more than likely have a different data folder structure).
Try:
RESTORE DATABASE US_NATIONAL_COPY
FROM DISK = N'/var/opt/mssql/backup/us_national_statistics.bak'
WITH REPLACE, RECOVERY,
MOVE N'us_national_statistics' TO N'/var/opt/mssql/data/usns_copy.mdf',
MOVE N'us_national_statistics_log' TO N'/var/opt/mssql/data/usns_copy.ldf';

Backup MSSQL DB Schema

I'm trying to back up my production database to my local dev machine with the following constraints:
It will be a regular backup, so using the UI should (ideally?) be avoided.
Some tables are marked to be deleted, so these should not be included in the backup.
I would like to be able to pass the solution (file/package/etc) to other members of the team and they should only have to change a couple of variables in one file and then they can execute and get their own backup.
The DB is over 100 GB and contains data I won't need. I have identified the largest tables and would like to take only, say, 5k rows from each; this should give me enough data for my purposes and limit the space used on my local drives.
I have tried beginning with backing up the schema only using the following methods:
Using the UI to back up the schema only (Tasks -> Generate Scripts)
Get the following error:
Microsoft.SqlServer.Management.Smo.FailedOperationException: Discover dependencies failed. ---> System.ArgumentException: Item has already been added. Key in dictionary: 'Server[#Name='PRODSQL']/Database[#Name='Database1']/UnresolvedEntity[#Name='SomeObjectName' and #Schema='Some.Schema.SomeObjectName']' Key being added: 'Server[#Name='PRODSQL']/Database[#Name='Database1']/UnresolvedEntity[#Name='SomeObjectName' and #Schema='Some.Schema.SomeObjectName']'
at System.Collections.SortedList.Add(Object key, Object value)
at Microsoft.SqlServer.Management.Smo.DependencyTree..ctor(Urn[] urns, DependencyChainCollection dependencies, Boolean fParents, Server server)
at Microsoft.SqlServer.Management.Smo.DependencyWalker.DiscoverDependencies(Urn[] urns, Boolean parents)
--- End of inner exception stack trace ---
at Microsoft.SqlServer.Management.SqlScriptPublish.GeneratePublishPage.worker_DoWork(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
So I moved on.
Tasks -> Copy Database
I get a message saying there is not enough space on disk
Extract Data-tier Application
I get the same error as in 1. above.
Powershell script, and a batch file calling sqlcmd on the generated .sql files after the PS script was run.
I was sure this method would work; it has taken me two days to get this far, and I am still working through multiple errors.
Basically I am doing the following:
Create db objects from the source DB (Schemas, SPs, Tables, Views, UDFs, Triggers, Indexes) and output them to .sql files. I roughly followed http://cfmumbojumbo.com/index.cfm/coding/using-powershell-to-backup-your-stored-procedures-and-triggers/ with some more work added.
If the database already exists on my server, kill, drop, then recreate it (DropCreate.sql):
IF (DB_ID(@DatabaseName) IS NOT NULL)
BEGIN
DECLARE @SQL VARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL, '') + 'KILL ' + CONVERT(VARCHAR, SPId) + ';'
FROM master..sysprocesses
WHERE DBId = DB_ID(@DatabaseName) AND SPId <> @@SPID
EXEC(@SQL);
END
DROP DATABASE MYDATABASE
CREATE DATABASE MYDATABASE ON PRIMARY (...)
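As an aside, a common alternative to killing sessions one by one is to force the database into single-user mode first, which rolls back other connections in a single statement (a sketch, using the same literal database name as above):
ALTER DATABASE MYDATABASE SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE MYDATABASE;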
The .bat file is essentially doing this:
sqlcmd -S %Server% -U %UserName% -P %Password% -i C:\Database\DatabaseScripts\TestDatabase\DropCreateDB.sql
#loop through and execute multiple .sql files in the directory
for /f %f in (`dir /b C:\Database\DatabaseScripts\TestDatabase\2018-04-18-15-13-13\StoredProcedures\`) do sqlcmd -S %Server% -d %Database% -U %UserName% -P %Password% -i %f
#Just one sql file in this directory, execute it
sqlcmd -S %Server% -d %Database% -U %UserName% -P %Password% -i C:\Database\DatabaseScripts\TestDatabase\2018-04-18-15-13-13\Schemas\AllSchemas.sql
sqlcmd -S %Server% -d %Database% -U %UserName% -P %Password% -i C:\Database\DatabaseScripts\TestDatabase\2018-04-18-15-13-13\Tables\AllTables.sql
.............
The latest error I'm experiencing is:
Changed database context to 'master'.
Msg 6107, Level 14, State 1, Server MYSERVER, Line 1
Only user processes can be killed.
do was unexpected at this time.
Everywhere I turn I hit new errors; I have spent over two days on this and haven't even gotten to extracting the data yet.
TL;DR: Is there an easier way to back up an MSSQL database schema and the top n rows of data from certain tables?
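One concrete fix for the last error above: "do was unexpected at this time" is the classic symptom of using %f instead of %%f inside a .bat file (the single-% form only works when typed interactively), and dir /b returns bare file names, so the directory has to be prefixed back on. A sketch of the corrected loop, assuming it runs from inside the batch file:
set SCRIPTDIR=C:\Database\DatabaseScripts\TestDatabase\2018-04-18-15-13-13\StoredProcedures
for /f "delims=" %%f in ('dir /b "%SCRIPTDIR%"') do sqlcmd -S %Server% -d %Database% -U %UserName% -P %Password% -i "%SCRIPTDIR%\%%f"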

Sybase IQ - BCP error

Trying to copy data from a Sybase IQ table to a .txt file using bcp.
Command:
C:\Windows\system32>bcp xxTable out \xx\xx\xxTable.txt -S HSPREP_15 -U xxUser -A 16384 -c -t\t$$ -P xxpwd -F -L 1>\xx\xx\xxTable_rpt.txt
The source table has about 132 million records.
bcp copies about 60 million records to the .txt file and then fails. There is no issue with disk space or user credentials.
Any clues? Where should I be looking?
bcp out does not create a file bigger than 2 GB, which might be the case here; please check.
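If the 2 GB limit is what you are hitting, one workaround is to export in row ranges into separate files, using the -F (first row) and -L (last row) options that the command above already references. A sketch, assuming those options behave as first/last row for an out copy on this bcp version, with hypothetical chunk boundaries:
bcp xxTable out \xx\xx\xxTable_1.txt -S HSPREP_15 -U xxUser -P xxpwd -A 16384 -c -t\t -F 1 -L 50000000
bcp xxTable out \xx\xx\xxTable_2.txt -S HSPREP_15 -U xxUser -P xxpwd -A 16384 -c -t\t -F 50000001 -L 100000000
bcp xxTable out \xx\xx\xxTable_3.txt -S HSPREP_15 -U xxUser -P xxpwd -A 16384 -c -t\t -F 100000001 -L 132000000
Each chunk still has to stay under 2 GB on disk, so the ranges may need to be smaller depending on row width.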

Postgres pg_dump issue

Good Day,
I've been trying to restore a dump file using the psql client and I'm getting this error:
psql.bin:/home/user/Desktop/dump/dumpfile.sql:907:
ERROR: more than one function named "pg_catalog.avg"
CONTEXT: COPY pg_aggregate, line 1, column aggfnoid: "pg_catalog.avg"
I created the dump file from a different Postgres DB (version: 9.4.5) using the command:
pg_dump --username=pgroot ${tables} --no-owner --no-acl --no-security-labels
--no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql
Where ${tables} is a variable of the form:
-T table1 -T table2 -T table3 ...
This is because I'm dumping specific tables, listed in a newline-delimited file; it's not the entire database but specific tables I want to dump.
I tried loading the dump file into another Postgres DB (9.6) using the following command:
psql -d dbname -U superuser -v "ON_ERROR_STOP=1" -f
${DUMP_DIR}dumpfile.sql -1 -a > ${LOG_ERR_DIR}dumpfile.log
2>${LOG_ERR_DIR}dumpfile.err
This gave the error mentioned above. It seems the error occurs because the dump file tries to add the function "pg_catalog.avg" to the database, which fails because it already exists.
The SQL file generated by pg_dump does not create the pg_catalog.avg function anywhere, so I don't know why this is occurring.
So I tried dropping the database and recreating it from template0, and I still got the error. It seems to me that it's a bug, based on the following post:
Re: BUG #6176: pg_dump dumps pg_catalog tables
I'm stuck trying to resolve this issue. If anyone can help me resolve it, please respond.
Thank you in advance,
j3rg
I found out what was causing this issue. There was an extra newline in the file containing the table listing, which caused an extra table argument with no table specified; as a result, pg_dump exported the system tables into the file. The file I was searching in for the avg function was also the wrong file.
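A defensive way to build the table list is to skip blank lines when reading the listing file, so a trailing newline can never yield an empty argument. A bash sketch, with tables.txt as a hypothetical name for the listing file:
tables=""
while IFS= read -r t; do
  [ -n "$t" ] && tables="$tables -T $t"   # skip blank lines, including a trailing newline
done < tables.txt
pg_dump --username=pgroot $tables --no-owner --no-acl --no-security-labels \
  --no-tablespaces --no-unlogged-table-data --data-only dbname > dumpfile.sql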

How to migrate a MySQL4 to MySQL5 database

I have a database (db4) that was created by MySQL4, and a database (db5) that was created by MySQL5. db4 contains several tables with the charset latin1 and several indices, but no data that was encrypted using the MySQL "PASSWORD" function. db5 is empty.
I want to migrate all tables and indices from db4 to db5 (which are actually on the same server). Ideally this should be done without any loss of information and within a short period of time.
Which terminal commands do I need to download the complete database from MySQL4 and insert the data afterwards to db5? Do I have to re-create the indices?
You can make a dump of the database in MySQL4 using mysqldump, and then load it into MySQL5 using the mysql command.
mysqldump dbname > file
mysql dbname < file
All the indexes will be recreated automatically.
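Since both instances live on the same machine, each command has to point at the right server; a sketch, assuming the MySQL4 instance listens on the default socket and the MySQL5 instance on /tmp/mysql5.sock, as in the one-step command further down:
mysqldump -u dbo4 -p db4 > db4.sql
mysql -S /tmp/mysql5.sock -u dbo5 -p db5 < db4.sql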
In case anyone else needs to move a database from MySQL4 to MySQL5, here's what I did.
dump the database from the mysql4 server
mysqldump -uuser -ppass db4 > db4.sql
fix some syntax problems (source)
# change comment style from -- to #
sed -r -i -e 's/^--(.*)$/#\1/' db4.sql
# change type declaration keyword from "TYPE" to "ENGINE"
sed -i -e 's/) TYPE=/) ENGINE=/' db4.sql
# adapt timestamp field definition
sed -i -e 's/timestamp(14) NOT NULL,$/timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,/' db4.sql
on the mysql5 server you can now import the modified SQL dump
mysql -uuser -ppass db5 < db4.sql
Sven, I think doing a backup of your data in db4 and restoring it in db5 will work for you.
Backup
mysqldump database_name > file_name.sql
Restore
mysql database_name < file_name.sql
It can be also done within a single step, using the following command:
mysqldump -u dbo4 --password="..." --default-character-set="latin1" db4 | mysql -S /tmp/mysql5.sock -u dbo5 --password="..." --default-character-set="latin1" db5
Unfortunately, default values ("standard-values") containing special characters are not imported correctly, and there seems to be no way to avoid that: How to maintain character-set of standard-values when uploading MySQL-dump.
