How to restore data to a PostgreSQL database after adding a column?

I have a PostgreSQL database for an application (Flask, Ember) that is under development. I did a pg_dump to back up the existing data. Then I added a column in the code, so I have to create the database again for the new column to exist. When I try to restore the data with psql -d dbname -f dumpfile, I get many errors such as 'relation "xx" already exists', 'violates foreign key constraint', etc.
I'm new to this. Is there a way to restore the old data to a new, empty database that already has all the relationships set up? Or do I have to add a column "by hand" to the database whenever I add a column in the code, in order to keep the data?

The correct way to proceed is to use ALTER TABLE to add a column to the table.
When you upgrade code, you can simply replace the old code with the new one. Not so with a database, because it holds state. You will have to provide SQL statements that modify the existing database so that it changes to the desired new state.
To keep this manageable, use specialized software like Flyway or Liquibase.
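For example, a minimal sketch of such a migration statement - the table and column names and the type here are made up, not taken from your schema:
ALTER TABLE my_table ADD COLUMN my_new_column text;
Run against the existing database, this adds the column in place and leaves the existing data intact, so no dump and restore is needed.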

When you did the pg_dump, you only dumped the data and table structure, but you did not drop any tables. Now you are trying to restore the dump, and that will attempt to re-create tables that already exist.
You have a couple of options (the first is what I'd recommend):
Add --clean to your pg_dump command -- this adds DROP statements to the dump, so the existing tables are dropped when you restore the dump file.
You can also add --data-only to your pg_dump command -- this will dump only the existing data and will not attempt to re-create the tables. However, you will then have to truncate your tables (or delete the data out of them) before restoring, so as not to run into FK errors or PK collisions. Example commands for both options are shown after this list.
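A quick sketch of both variants (dbname and the file names are placeholders):
pg_dump --clean dbname > dump_with_drops.sql
pg_dump --data-only dbname > data_only.sql
psql -d dbname -f dump_with_drops.sql
With the --data-only dump you would first empty the tables (for example TRUNCATE my_table CASCADE;) and then run psql -d dbname -f data_only.sql.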

Related

Change the TABLESPACE of the destination when restoring a database in Oracle 11g (Windows Server 2012 R2)

I was importing a database (schema) from a previous version (Oracle 10g Express Edition) into a more recent one (Oracle 11g Express Edition) from a .dmp file. I did not do the export; I was only responsible for making the import into the new environment. I looked up how to do it in a previous forum thread and managed to import using imp; I could not use impdp because the export had not been done with expdp.
Well, once the restoration was done, what I really needed was to have all the objects in another tablespace. For that I had previously created a user, a tablespace associated with that user, and naturally a datafile associated with that tablespace. But all the objects were restored into the default tablespaces (USERS and SYSTEM), since that is where they live in the source database.
The instruction that I used and thought would help me was the following:
imp my_user/password@XE FILE=C:\oraclexe\app\oracle\admin\XE\dpdump\my_file.dmp FROMUSER=my_user TOUSER=my_user
However, even though I tried to change the user (FROMUSER and TOUSER), the data was still restored into the USERS tablespace and also into SYSTEM.
I guess the only way to solve this is to export again. I have three options: exp, expdp and RMAN, although I am not sure that any of them alone lets me change the destination tablespace.
Any reference would be very helpful.
To clarify: you have already completed an import and the data has been loaded into the database; however, the imported objects were loaded into the USERS and SYSTEM tablespaces rather than the tablespace you created for them, correct?
There are a number of ways to do this, such as using the DBMS_REDEFINITION package or issuing commands like ALTER TABLE [SCHEMA].[TABLE] MOVE TABLESPACE [NEW TABLESPACE]; however, that can be extremely tedious and may present problems if the database is in use. You would have to rebuild the indexes, and you would also have to move the LOB segments.
I would recommend using data pump (EXPDP) to create a new export of the schema, then remap the tablespaces while re-importing it into the database. Your steps would follow this general outline:
Export the schema using a command similar to this: expdp [user]/[pass] SCHEMAS=[SCHEMA] DIRECTORY=DATA_PUMP_DIR DUMPFILE=Export.dmp LOGFILE=export.log, where [SCHEMA] is the name of the schema that you want to remap. You can use any directory, dumpfile, and logfile name you want - this is just an example.
You'll want to drop the schema before re-importing it. Make sure to use the cascade option so that all of the objects are dropped: DROP USER [SCHEMA] CASCADE;
Finally, you can re-import the schema and use the REMAP_TABLESPACE clause to remap the objects to your desired tablespace: impdp [user]/[pass] SCHEMAS=[SCHEMA] REMAP_TABLESPACE=SYSTEM:[TABLESPACE] REMAP_TABLESPACE=USERS:[TABLESPACE] DIRECTORY=DATA_PUMP_DIR DUMPFILE=Export.dmp LOGFILE=import.log, where [TABLESPACE] is the tablespace you created for the schema.
Assuming all goes well, the schema will be re-imported into the database, and the objects of that schema that were originally mapped to the USERS and SYSTEM tablespaces will be remapped to your [TABLESPACE].
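Once the import finishes, you can check where the segments actually landed with a quick data dictionary query (MY_SCHEMA is a placeholder; use user_segments instead if you are connected as that schema):
select segment_name, segment_type, tablespace_name
from dba_segments
where owner = 'MY_SCHEMA'
order by tablespace_name, segment_name;
Anything still listed under USERS or SYSTEM was not remapped.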
As you've already imported everything, you could move all the tables and indexes to your preferred tablespace with alter table and alter index commands, e.g.:
alter table my_table move tablespace my_tablespace;
alter index my_index rebuild tablespace my_tablespace;
You can use the data dictionary to generate those statements, with something like:
select 'alter table "' || object_name || '" move tablespace my_tablespace;'
from user_objects
where object_type = 'TABLE';
select 'alter index "' || object_name || '" rebuild tablespace my_tablespace;'
from user_objects
where object_type = 'INDEX';
with the output written to a script you then run; or you could use similar queries and dynamic SQL to do it all in one go.
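If you go the all-in-one route, a rough PL/SQL sketch built on those same two queries might look like this (my_tablespace is a placeholder; moving a table marks its indexes unusable, which the second loop then rebuilds - try it on a copy first):
begin
  for t in (select object_name from user_objects where object_type = 'TABLE') loop
    execute immediate 'alter table "' || t.object_name || '" move tablespace my_tablespace';
  end loop;
  for i in (select object_name from user_objects where object_type = 'INDEX') loop
    execute immediate 'alter index "' || i.object_name || '" rebuild tablespace my_tablespace';
  end loop;
end;
/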
There may be edge cases you need to deal with - maybe additional steps for partitions, and I'm not sure whether IOTs (index-organized tables) will cause problems - but that might get you started, and at least reduce the amount of re-export/re-import you need to do.
Of course, you need enough space in both the old and new tablespaces to move the objects, as each object exists in both while it is being moved - only one at a time, but that may still be an issue.

Why is there unrelated SQL in my SSMS-generated script?

I am using SSMS and cloning tables with the same structure by using "Script Table as -> CREATE -> New Query Window".
My database has around 100 tables, and my main task is to perform data archiving by creating a clone table (same constraints, indexes, triggers, and stats as the old table) and importing the data I want from the old table into the new table.
My issue is that inside the generated script - say I want to clone table A - there are SQL statements like {create table for table B}, {create table for table K}, etc., along with their index and constraint scripts. This makes the whole script very tedious and long.
I just want to focus on the table A script so I can clone it and insert the relevant data into it. I know it has something to do with my scripting options, but I am unsure which options I should set to True if I just want to clone a table with the same constraints, columns, indexes, triggers and stats. Does anyone know why the unrelated scripts are there and how to fix it?

Bucardo add sync to replicate data

I am using Bucardo to replicate data in a database. I have one database, called mydb, and another called mydb2. They both contain identical tables, called "data" in both cases. Following the steps on this website, I have installed Bucardo and added the two databases:
bucardo_ctl add database mydb
bucardo_ctl add database mydb2
and added the tables:
bucardo_ctl add all tables
Now when I try to add a sync using the following command:
bucardo_ctl add sync testfc source=mydb targetdb=mydb2 type=pushdelta tables=data
I get the following error:
DBD::Pg::st execute failed: ERROR: error from Perl function "herdcheck": Cannot have goats from different databases in the same herd (1) at line 17. at /usr/bin/bucardo_ctl line 3346.
Anyone have any suggestions? Any would be appreciated.
So, in the source option you should put the name of the herd (which, as far as I know, is the list of tables).
Then, instead of:
bucardo_ctl add all tables
use
bucardo_ctl add all tables --herd=foobar
And instead of using
bucardo_ctl add sync testfc source=mydb targetdb=mydb2 type=pushdelta tables=data
use
bucardo_ctl add sync testfc source=foobar targetdb=mydb2 type=pushdelta tables=data
The thing is that the source option is not where you put the source database, but rather the "herd" of tables.
Remember that pushdelta syncs are for tables with primary keys, while fullcopy syncs work whether or not the tables have a PK.
Hope that helps.

Combining several sqlite databases (one table per file) into one big sqlite database

How do you combine several SQLite databases (one table per file) into one big SQLite database containing all the tables? E.g. you have database files db1.dat, db2.dat, db3.dat... and you want to create one file, dbNew.dat, which contains the tables from all of db1, db2, ...
Several similar questions have been asked on various forums. I posted this question (with an answer) for a particular reason: when you are dealing with several tables and have indexed many fields in them, it causes unnecessary confusion to re-create the indexes correctly in the destination database tables. You may miss one or two indexes, and that is just annoying. The method given here also copes with large amounts of data, i.e. when you really have GBs of tables. The steps are as follows (a scripted SQL equivalent of the same idea is sketched after the list):
1. Download SQLite Expert: http://www.sqliteexpert.com/download.html
2. Create a new database dbNew: File -> New Database
3. Load the first SQLite database db1 (containing a single table): File -> Open Database
4. Click on the 'DDL' option. It gives you the list of commands needed to create the particular SQLite table CONTENT.
5. Copy these commands and select the 'SQL' option. Paste the commands there. Change the name of the destination table DEST (from the default name CONTENT) to whatever you want.
6. Click on 'Execute SQL'. This should give you a copy of the table CONTENT from db1 under the name DEST. The main benefit of doing it this way is that all the indexes of the CONTENT table are also created on the DEST table.
7. Now just click and drag the DEST table from the database db1 to the database dbNew.
8. Now just delete the database db1.
9. Go back to step 3 and repeat with another database, db2, etc.
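If you would rather script it than use the GUI, roughly the same idea can be expressed with SQLite's ATTACH; this is only a sketch, using the example names from the steps above (db1.dat, CONTENT, DEST), and you still paste the CREATE TABLE / CREATE INDEX statements from the DDL tab yourself, as in steps 4 and 5:
-- connected to dbNew.dat
ATTACH DATABASE 'db1.dat' AS src;
-- paste the DDL here, with CONTENT renamed to DEST, then copy the rows:
INSERT INTO DEST SELECT * FROM src.CONTENT;
DETACH DATABASE src;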

MySQL dump single table from huge file

How do I restore a single table's data from a huge dump file into the database?
If I understand your question correctly, you already have a dump file of many tables and you only need to restore one table (right?).
I think the only way to do that is to actually restore the whole file into a new DB, then copy the data from the new DB to the existing one, OR dump only the table you just restored from the new DB using:
mysqldump -u username -p db_name table_name > dump.sql
And restore it again wherever you need it.
To make things a little quicker and save some disk, you can kill the first restore operation once the desired table has been completely restored, so I hope the table name begins with one of the first letters of the alphabet :)
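For reference, the whole round trip might look roughly like this; temp_restore, huge_dump.sql, table_name, table_only.sql and target_db are all placeholder names:
mysql -u username -p -e "CREATE DATABASE temp_restore"
mysql -u username -p temp_restore < huge_dump.sql
mysqldump -u username -p temp_restore table_name > table_only.sql
mysql -u username -p target_db < table_only.sql
mysql -u username -p -e "DROP DATABASE temp_restore"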
There are some suggestions on how you might do this in the following articles:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
http://blog.tsheets.com/2008/tips-tricks/extract-a-single-table-from-a-mysqldump-file.html
I found these by searching for "load single table from mysql database dump" on Google: http://www.google.com/search?q=load+single+table+from+mysql+database+dump
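Going by their titles, those articles extract just the one table's section from the dump file with a text tool before loading it. A rough sketch of that kind of approach (my_table and the file names are placeholders, and the pattern relies on the '-- Table structure for table' comment lines that mysqldump normally writes):
sed -n '/^-- Table structure for table `my_table`/,/^-- Table structure for table `/p' huge_dump.sql > my_table.sql
mysql -u username -p target_db < my_table.sql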
