First, please forgive my English; it is a little rough. Second, forgive my ignorance: I am new to Postgres.
I'm having trouble restoring a backup of one database into another database. I need to dump the database just to get one table, but all I have are the files that were in /var/lib/pgsql/data/base/
Here is what I tried:
I created a database named "test" with OID 227763 and copied the files of the old database (which had a different OID) into this new database's directory. I fixed the folder and file permissions, but when I log into "test" and run select * from pg_tables; the tables do not appear. And when I try to create the table in phpPgAdmin, I get
ERROR: relation already exists
I'm trying to do this because I need to know which of these files is the table I want. I plan to log into the database and run SELECT oid, * FROM pg_class; to get the OID.
I found the old database OID in /var/lib/pgsql/data/global/pg_database
If anyone can help me, I thank you.
There are many ways to back up and restore an entire database or a single table. It sounds like you should be using pg_dump instead of working with individual files. A file-level copy is likely to corrupt your database unless the server is in backup mode and you copy the entire data directory plus the archive logs.
If you MUST copy it file by file, make sure the database server is shut down first for maximum safety.
If I had one table to back up, I'd use pg_dump:
pg_dump -U {user-name} {source_db} -f {dumpfilename.sql}
You can use the -t flag to dump a single table if you like.
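For example, a hypothetical run that dumps a single table and loads it into another database (the user, database, and table names here are placeholders):
pg_dump -U postgres -t mytable -f mytable.sql source_db
psql -U postgres -d target_db -f mytable.sql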
Basically, when creating a new database, PostgreSQL makes a directory in "data/base" named after the database's OID.
Now I have such a directory from my previous database. Can I import this directory as a real database in pgAdmin?
I have a folder from my old database called 16384. I pasted it into my data/base folder, but pgAdmin does not recognize it as a database.
I want to import it into pgAdmin.
Is there any way to do this?
Thanks a lot.
No, you can't do it with pgAdmin.
Try starting Postgres against that data directory: pg_ctl -D DIR_WITH_BASE_CATALOG start. If you are lucky enough to get it started, try taking a full backup. If that succeeds, the directory is usable; if not, I would recommend asking experts to help you extract as much as possible. If you decide to use zero_damaged_pages or other advanced features, you can irreversibly destroy data (if there is anything left to destroy).
In any case, before you start, I'd recommend copying the whole data directory somewhere else and trying to start the cluster from the copy, not the original.
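A minimal sketch of that workflow, assuming a stock layout under /var/lib/pgsql and a postgres system user (the paths and spare port are assumptions; adjust for your installation):
# work on a copy so the original files stay untouched
cp -a /var/lib/pgsql/data /var/lib/pgsql/data_copy
chown -R postgres:postgres /var/lib/pgsql/data_copy
# try to start a cluster from the copy, on a spare port to avoid clashing with a running server
sudo -u postgres pg_ctl -D /var/lib/pgsql/data_copy -o "-p 5433" start
# if it comes up, immediately take a full logical backup
sudo -u postgres pg_dumpall -p 5433 -f /tmp/full_backup.sql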
Here are the details:
The database has to be archived such that records older than 6 months can be copied to a new database and deleted from the main (production) database. The complexity here is copying all rows across all tables that reference each other. After that, the copied rows in some of the tables (which are really huge and whose data is no longer needed) will be deleted.
The postgres database is an Amazon RDS instance.
What is the best way to achieve this?
I was thinking of either a Spring Boot application,
OR
having postgresql.conf invoke a shell script which in turn runs a SQL batch.
For the second approach, I am not sure how to edit an Amazon RDS postgresql.conf file or where to specify the shell script. And where would the SQL batch live? This is a little new to me; I appreciate any pointers.
It will be much faster if you do everything server-side instead of using a Spring Boot application. The problem is not the dump/restore, which you could easily do with the pg_dump utility, or with psql:
psql -d dbname -t -A -F";" -c "SELECT * FROM yourdata WHERE cutdate <= current_timestamp - interval '6 months'" > output.csv
But you have to guarantee that everything that is exported is loaded into the second database and that you do not delete anything that has not been exported.
I would first SELECT a subset of primary keys into a temporary table, then use the server-side COPY command to export the rows for the preselected keys (and all their dependencies).
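For example, the temporary table of keys could be built like this (a sketch; yourdata, pk, and cutdate are the hypothetical names used throughout this answer):
CREATE TEMPORARY TABLE temporal AS
SELECT pk FROM yourdata WHERE cutdate <= current_timestamp - interval '6 months';
With that table in place, the export becomes: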
COPY (SELECT d.* FROM yourdata d INNER JOIN temporal t ON d.pk = t.pk) TO '/tmp/yourdata.csv' WITH CSV DELIMITER ',';
After all the export files have been generated:
DELETE FROM yourdata WHERE pk IN (SELECT pk FROM temporal)
Then, on the backup database, do:
COPY yourdata(column1,column2,column3) FROM '/tmp/yourdata.csv' DELIMITER ',' CSV
You can write a script that invokes all of those commands server-side using the psql command-line tool, and that finally moves the exported files to a permanent location, just in case something goes wrong and you need to process them again.
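A rough driver script for the whole sequence might look like the following. This is an untested sketch: the database names, column list, and paths are assumptions, and it uses psql's client-side \copy instead of server-side COPY, because Amazon RDS gives you no access to the server's filesystem.
#!/bin/bash
set -e  # abort on the first error so nothing is deleted without having been exported
psql -v ON_ERROR_STOP=1 -d proddb <<'SQL'
BEGIN;
CREATE TEMPORARY TABLE temporal AS
    SELECT pk FROM yourdata
    WHERE cutdate <= current_timestamp - interval '6 months';
-- \copy must stay on one line; the file lands on the client machine, not the RDS host
\copy (SELECT d.* FROM yourdata d INNER JOIN temporal t ON d.pk = t.pk) TO '/tmp/yourdata.csv' WITH CSV DELIMITER ','
DELETE FROM yourdata WHERE pk IN (SELECT pk FROM temporal);
COMMIT;
SQL
psql -v ON_ERROR_STOP=1 -d archivedb -c "\copy yourdata(column1,column2,column3) FROM '/tmp/yourdata.csv' WITH CSV DELIMITER ','"
mv /tmp/yourdata.csv /var/archive/  # keep the file in case it has to be processed again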
See "Save PL/pgSQL output from PostgreSQL to a CSV file" and "How to import CSV file data into a PostgreSQL table?"
Someone has sent me a database dump as a .sql file, dumped using the phpMyAdmin interface. I am trying to restore the dump using the mysql command prompt, but I keep getting empty tables. The .sql file creates a database before creating tables and populating them. When the empty-tables message first showed up, I thought it was because the database had to be created before running the script, so I created the DB and ran the script again; however, the tables still show up as an empty set.
I tried these steps:
logged in as root.
create database x (this is the name of the db in the create db command in the .sql file)
mysql x -u root -p < my_x_db.sql
logged in as root
show databases
use x
show tables -- empty set
What should I do different and how can I troubleshoot this?
Thanks
There was a CREATE DATABASE statement in the .sql file. I solved this by simply commenting out that statement; I had already created a database with the same name externally.
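For example, if the top of the dump looked something like this (x standing in for whatever name the dump really uses), prefixing the statements with -- disables them:
-- CREATE DATABASE `x`;
-- USE `x`;
This matters because a plain CREATE DATABASE fails when the database already exists, and mysql then aborts the rest of the script, which is the likely reason the tables never appeared.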
I'm struggling to find a suitable solution to this. I have a fairly large SQL Server 2008 Express database containing 60+ tables (many with key constraints) and a whole bunch of data.
I need to essentially copy all of these tables, the data, and the constraints exactly from one database to another. I'm basically duplicating website A to produce an exact copy (website B) on a different domain, so we end up with two completely identical websites running in parallel, each starting with its own identical database.
Database A is up and running on website A. Database B is set up and has its own user. I just need to get the tables and the data intact from A to B. I can then modify my web.config connection string to use the login credentials for database B, and it should work.
I've tried backing up database A and restoring to database B via Management Studio Express, but it tells me:
System.Data.SqlClient.SqlError: The backup set holds a backup of a database other than the existing 'database-B' database.
(Microsoft.SqlServer.Smo)
I've also tried right-clicking database A in Management Studio Express and going to Tasks > Generate Scripts. But when I run the resulting SQL scripts on database B, I get a whole load of errors to do with foreign keys etc. as it imports the content. It seems like it's doing the right thing, but it can't handle the different keys/relationships.
So does anyone know of a simple, sure-fire way of getting my data 100% exact and intact from database A to database B?
I think I used SQL Server Database Publishing Wizard to do something like this about 5 years ago, but that product seems to be defunct now - I tried to install it and it wanted me to regress my version of SQL Server to 2005, so I'm not going there!
Don't use the UI for this. If you're not familiar with the various aspects of BACKUP/RESTORE the UI is just going to lead you down the wrong path for a lot of options. The simplest backup command would be:
BACKUP DATABASE dbname TO DISK = 'C:\some folder\dbname.bak' WITH INIT;
Now to restore this as a different database, you need to know the file names because it will try to put the same files in the same place. So if you run the following:
EXEC dbname.dbo.sp_helpfile;
You should see output that contains the names and paths of the data and log files. When you construct your restore, you'll use these logical names, but point the physical paths at files named for the new database, e.g.:
RESTORE DATABASE newname FROM DISK = 'C:\some folder\dbname.bak'
WITH MOVE 'dbname' TO 'C:\path_from_sp_helpfile_output\newname_data.mdf',
MOVE 'dbname_log' TO 'C:\path_from_sp_helpfile_output\newname_log.ldf';
You'll have to replace dbname and newname with your actual database names, and also some folder and C:\path_from_sp_helpfile_output\ with your actual paths. I can't get more specific in my answer unless I know what those are.
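As a side note, if the source database is no longer attached anywhere, you can also read the logical file names straight out of the backup file itself with a standard command (shown here against the example path from above):
RESTORE FILELISTONLY FROM DISK = 'C:\some folder\dbname.bak';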
** EDIT **
Here is a full repro, which works completely fine for me:
CREATE DATABASE [DB-A];
GO
EXEC [DB-A].dbo.sp_helpfile;
Partial results:
name fileid filename
-------- ------ ---------------------------------
DB-A 1 C:\Program Files\...\DB-A.mdf
DB-A_log 2 C:\Program Files\...\DB-A_log.ldf
Now I run the backup:
BACKUP DATABASE [DB-A] TO DISK = 'C:\dev\DB-A.bak' WITH INIT;
Of course if the clone target (in this case DB-B) already exists, you'll want to drop it:
USE [master];
GO
IF DB_ID('DB-B') IS NOT NULL
BEGIN
ALTER DATABASE [DB-B] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [DB-B];
END
GO
Now this restore will run successfully:
RESTORE DATABASE [DB-B] FROM DISK = 'C:\dev\DB-A.bak'
WITH MOVE 'DB-A' TO 'C:\Program Files\...\DB-B.mdf',
MOVE 'DB-A_log' TO 'C:\Program Files\...\DB-B_log.ldf';
If you are getting errors about the contents of the BAK file, then I suggest you validate that you really are generating a new file and that you are pointing to the right file in your RESTORE command. Please try the above and let me know if it works, and try to pinpoint any part of the process that you're doing differently.
I realize this is an old question, but I was facing the same problem and I found that the UI was easier and faster than creating scripts to do this.
I believe Dan's problem was that he created the new database first and then tried to restore another database into it. I tried this as well and got the same error. The trick is not to create the database first, but to name the new database during the "Restore Database" process.
The following article is somewhat useful in guiding you through the process:
http://msdn.microsoft.com/en-us/library/ms186390(v=sql.105).aspx
I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than the filesystem by default; I didn't think anything of this until now.
I have to move servers, so I made a backup, but restoring the backup is proving a horrible task. I need to restore one table called wp_shopp_assets, which is 18 MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to the command line. phpMyAdmin gets tricky when lots of data is involved, because there are all sorts of timeouts in PHP that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
Log in to server1 (your current server) and dump the table to a file using mysqldump:
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -hSQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
Do a secure copy of that file from server1 to server2 using scp:
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
Log out of server 1
Log into server 2 (your new server) and import that file into the new DB using mysql:
mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
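For instance, with made-up values filled in (user wpuser, password secret, a local database named wordpress, and the table from the question):
mysqldump --add-drop-table -uwpuser -psecret -hlocalhost wordpress wp_shopp_assets > wp_shopp_assets.sql
scp wp_shopp_assets.sql wpuser@newserver.example.com:/home/wpuser/
mysql -uwpuser -psecret wordpress < wp_shopp_assets.sql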
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
Try HeidiSQL: http://www.heidisql.com/
Connect to your server and choose the database.
Go to the menu "Import > Load SQL file", or simply paste the contents of the SQL file into the SQL tab.
Execute the SQL (F9).
HeidiSQL is an easy-to-use interface and a "working-horse" for web developers using the popular MySQL database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify, this is a desktop application; you connect to your database server remotely. You won't be limited by PHP's max script runtime or upload size limit.
Use BigDump.
Create a folder on your server with a name that is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the importer file from http://www.ozerov.de/bigdump.php and add it to that directory after reading the instructions and filling out your config information.
FTP the .sql file to that folder alongside the BigDump script, then navigate to that folder in your browser.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or, if that is an issue, I recommend the other answer about SSH and the mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images once you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before moving servers.
You can use this free plug-in to help you. Always back up your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.