I have a large (multi-GB) data file exported from an Oracle table. I want to import this data into another Oracle instance, but I want the table name to be different from the original table. Is this possible? How?
Both importing and exporting systems are Oracle 11g. The table includes a BLOB column, if this makes any difference.
Thanks!
UPDATES:
The idea here was to update a table while keeping the downtime on the system that's using it to a minimum. The solution (based on Vincent Malgrat's answer and APC's update) is:
Assuming our table name is A:
1. Make a temp schema TEMP_SCHEMA
2. Import our data into TEMP_SCHEMA.A
3. CREATE TABLE REAL_SCHEMA.B AS SELECT * FROM TEMP_SCHEMA.A
4. Rename REAL_SCHEMA.A to REAL_SCHEMA.A_OLD (rather than dropping it outright)
5. Rename REAL_SCHEMA.B to REAL_SCHEMA.A
6. Drop REAL_SCHEMA.A_OLD
This way, the downtime is limited to steps 4 and 5, both of which should be independent of data size. I'll post an update here if this does not work :-)
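For reference, a minimal SQL sketch of steps 3 to 6 (using the names above; the renames are shown as run while connected as REAL_SCHEMA):

CREATE TABLE REAL_SCHEMA.B AS SELECT * FROM TEMP_SCHEMA.A;
-- then, connected as REAL_SCHEMA:
ALTER TABLE A RENAME TO A_OLD;  -- step 4
ALTER TABLE B RENAME TO A;      -- step 5
DROP TABLE A_OLD;               -- step 6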
If you are using the old EXP and IMP utilities you cannot do this. The only option is to import into a table of the same name (although you could change the schema which owns the table).
However, you say you are on 11g. Why not use the Data Pump utility introduced in 10g, which replaces Import and Export? In 11g that utility offers the REMAP_TABLE option, which does exactly what you want.
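As a rough sketch (assuming the dump was produced with expdp; the directory, dump file, and table names here are placeholders):

impdp real_schema DIRECTORY=dump_dir DUMPFILE=table_a.dmp REMAP_TABLE=a:b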
edit
Having read the comments the OP added to another response while I was writing this, I don't think the REMAP_TABLE option will work in their case. It only renames new objects. If a table with the original name exists in the target schema the import fails with ORA-39151. Sorry.
edit bis
Given the solution the OP finally chose (drop existing table, replace with new table) there is a solution with Data Pump, which is to use the TABLE_EXISTS_ACTION={TRUNCATE | REPLACE} clause. Choosing REPLACE drops the table whereas TRUNCATE merely, er, truncates it. In either case we have to worry about referential integrity constraints, but that is also an issue with the chosen solution.
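A sketch of that variant, again with placeholder directory, dump file, and table names:

impdp real_schema DIRECTORY=dump_dir DUMPFILE=table_a.dmp TABLES=a TABLE_EXISTS_ACTION=REPLACE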
I post this addendum not for the OP but for the benefit of other seekers who find this page some time in the future.
I suppose you want to import the table into a schema in which the name is already being used. I don't think you can change the table name during the import. However, you can change the schema with the FROMUSER and TOUSER options. This will let you import the table into another (temporary) schema.
When it is done, copy the table to the target schema with a CREATE TABLE AS SELECT. The time it takes to copy the table will be negligible compared to the import, so this won't waste too much time. You will need twice the disk space during the operation, though.
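A minimal sketch of this approach, with placeholder names for the dump file and schemas:

imp system/password FILE=export.dmp FROMUSER=orig_schema TOUSER=temp_schema TABLES=a

and then, from SQL*Plus:

CREATE TABLE real_schema.b AS SELECT * FROM temp_schema.a;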
Update
As suggested by Gary, a cleverer method would be to create a view or synonym in the temporary schema that references the new table in the target schema. You won't need to copy the data after the import, as it will go directly into the target table.
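One way this might look (an untested sketch, using the names from the steps above), run before the import into the temporary schema:

-- REAL_SCHEMA.B must already exist
CREATE SYNONYM temp_schema.a FOR real_schema.b;
-- then run imp with IGNORE=Y; the rows should land directly in REAL_SCHEMA.B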
Use the option REMAP_TABLE=EXISTING_TABLE_NAME:NEW_TABLE_NAME in impdp. This works in 11gR2.
Just import it into a table with the same name, then rename the table.
Create a view defined as SELECT * FROM the table you want to import into, with the view named to match the table in the export. Ignore errors on import.
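A sketch of that idea, assuming the export contains a table called A and the rows should end up in B:

CREATE VIEW a AS SELECT * FROM b;
-- run imp with IGNORE=Y; the "already exists" errors can be ignored and the
-- rows should be inserted through the view into B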
Related
I have an existing database in MS SQL Server and want to rename some tables and columns because the names currently used aren't accurate to what they represent.
I have multiple web and desktop applications that access the database, using Entity Framework (code first). There are too many to update in one go, and I cannot afford for all the apps to stop working.
I was thinking it would be nice if SQL Server allowed a 'permanent' alias for tables and columns, but I don't think this feature exists.
Or I was wondering if there was a way in EF to have two names for the same property?
For the tables, you could rename them and then create a synonym with the old name pointing to the new name.
For the columns, changing their name will break your application. You could create computed columns with the old name as well, that simply display the value of the newly named column (but this seems a little silly).
Note, however, that a computed column cannot reference another computed column, so you would have to duplicate the column in its entirety. That could lead to problems down the line if you don't update the definition of both columns.
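A rough sketch of both ideas, using hypothetical names (OldTableName/NewTableName and OldColumnName/NewColumnName):

-- table: rename it, then keep the old name alive as a synonym
EXEC sp_rename 'dbo.OldTableName', 'NewTableName';
CREATE SYNONYM dbo.OldTableName FOR dbo.NewTableName;

-- column: rename it, then expose the old name as a computed column
EXEC sp_rename 'dbo.NewTableName.OldColumnName', 'NewColumnName', 'COLUMN';
ALTER TABLE dbo.NewTableName ADD OldColumnName AS (NewColumnName);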
A view containing a simple select statement acts exactly like a table. You really need to fix this properly across the database and applications. However, if you want to go the view route, I suggest you do this:
Say you have a table called MyTable that you rename to TheTable, with a column called MyColumn that you want to rename to TheColumn.
Create a schema, say, new
Move the original table into it with ALTER SCHEMA new TRANSFER dbo.MyTable
Rename the table and column.
Now you have a table called new.TheTable with a column called TheColumn. Everything is broken.
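A sketch of those first steps, with the same names as in the example:

CREATE SCHEMA new;
GO
ALTER SCHEMA new TRANSFER dbo.MyTable;
GO
EXEC sp_rename 'new.MyTable', 'TheTable';
EXEC sp_rename 'new.TheTable.MyColumn', 'TheColumn', 'COLUMN';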
Lastly, create a view that looks just like the old table
CREATE VIEW dbo.MyTable
AS
SELECT Column1, Column2, Column3, TheColumn As MyColumn
FROM new.TheTable;
Now everything works again.
All your fixed 'new' tables are in the new schema
However, now everything is extra complicated.
This is basically an illustration that you should just fix it properly across the database and applications, one at a time, with careful change management. Definitely don't complicate it with triggers.
Since you are using code first with multiple web and desktop applications, you are likely managing database changes from one place through migrations and ignoring changes in the other places.
You can create an empty migration and add code that will change the table name and column names to what you want. The migration should then create a view that will select from that table with the original table and column names. When you apply this migration, everything should still be working as normal from all applications. There are no model changes since you didn’t touch the model classes. Inserts, updates, and deletes will still happen through the view. There is no need for potentially buggy triggers or synonyms on the table in this option.
Now that you have the table changed, you can focus on the application code. If it helps, you can add annotations over the column and table names and start refactoring the code. You need to make sure you don’t make model changes that will break the other apps. If apps ignore model changes, you can get away with adding annotations over the columns and classes on all the apps before refactoring. You can get rid of the view sooner this way.
I'm not sure if something like this is possible, but I have a db file with multiple tables in it. In some of those tables there are guids that are used as references across the tables. Is there any way I can basically do a search and replace of a value in all the tables? (One thing that may make this easier is that the column has the same name in every table.) Thanks for any help.
You can open the .sqlite or .db file in any of the SQLite editors and do a find and replace.
You can import the DB file into a database, add ON UPDATE CASCADE to the foreign keys, and then perform the update on the parent table.
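A minimal SQLite sketch of that idea, with hypothetical parent/child tables (the foreign key has to be declared when the referencing table is created, and enforcement has to be switched on per connection):

PRAGMA foreign_keys = ON;

CREATE TABLE parent (
  guid TEXT PRIMARY KEY
);

CREATE TABLE child (
  guid TEXT REFERENCES parent(guid) ON UPDATE CASCADE
);

-- updating the guid on the parent row now propagates to the referencing rows
UPDATE parent SET guid = 'new-guid' WHERE guid = 'old-guid';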
You can write a script to update that particular element.
I'm looking to switch to DACPACs for our database changes, but I'm a bit at a loss about what to do when it comes to more complex database updates. To illustrate what I mean, let me use a simple example that has the same problem.
Say I have a Customer table that is currently live and I want to add a new CustomerType table with a foreign key from Customer to CustomerType. The new column in Customer should be required (not nullable), but should not have a default value.
I want to use some arbitrary formula to setup the initial type for the existing customers upon upgrading. How would I accomplish this using a DACPAC?
The DACPAC will only know there's a new column and will try to add it to the Customer table, which will of course fail because it is required. Setting a default value is undesirable, as is allowing null values.
Since the DACPAC should be usable to upgrade from every state to the latest, I don't see what kind of configuration or pre/post scripts I should setup to make this work.
Various searches have produced a disappointing lack of useful results :(
I hope there's someone here that can help out. Thanks in advance.
The answer will vary a bit depending on how you're planning to deploy the dacpac(s). One common case is having the dacpac replace a collection of T-SQL update scripts that are executed in sequence to update a database schema from one version to the next. In this case you might choose to have one dacpac file for each schema-version of your database, and to update a database you would publish the dacpacs in sequence until it reaches the latest version.
In that case, it's possible to use a post-deploy script to fix up the schema as appropriate. For your example scenario, you can model the database in the database project with the new column specified as NULL and without the FK relationship to the new table. Then, in a post-deploy script, you can author the T-SQL necessary to execute an UPDATE statement to fill the new table and the new column, an ALTER statement to change the column from NULL to NOT NULL, and finally to add the foreign key relationship.
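A post-deploy sketch along those lines, with hypothetical CustomerType/CustomerTypeId names and a trivial stand-in for the "arbitrary formula":

-- fill the new lookup table (stand-in for the real initial data)
INSERT INTO dbo.CustomerType (CustomerTypeId, Name)
VALUES (1, 'Default');

-- apply the "arbitrary formula" to existing customers (trivial here)
UPDATE dbo.Customer
SET CustomerTypeId = 1
WHERE CustomerTypeId IS NULL;

-- tighten the column and add the relationship
ALTER TABLE dbo.Customer ALTER COLUMN CustomerTypeId INT NOT NULL;
ALTER TABLE dbo.Customer
  ADD CONSTRAINT FK_Customer_CustomerType
  FOREIGN KEY (CustomerTypeId) REFERENCES dbo.CustomerType (CustomerTypeId);

Keep in mind that post-deployment scripts run on every publish, so a real script would need to be written so it can run repeatedly.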
Then moving forward you can remove the post-deploy script and model the new column and table with the proper column type and FK relationship.
So, I have a relatively common task, and hope to get some suggestions here.
The idea is that I have a small database in mind; the database will have at least 2 types of tables:
dictionary-table - it will have just the id and few columns of text
aggregation-table - it should combine different dictionary entries into some aggregation, so it will basically be mapping the ids of different dictionary entries together.
So, what I hoped to do is to have some software that will help me fill the database easily. I will add data to dictionary-tables, and will say that 'this particular column of my aggregation table can have values only from this dictionary-table', so I would type words, and it would just add the ids from the dictionary-table instead. You know, like relationships in a database.
Except that in the end I want it to be a plain sqlite database, and sqlite doesn't support relationships.
So what I want is some cool high-level GUI tool that will simplify the way I input data to database, and will help me to maintain the data when DB grows in future, but also be able to export to a simple SQLite.
I tried: SQLiteBrowser, SqliteAdmin, and LibreOffice Base + SQLite ODBC. None of them supports what I want.
Anything else worth checking out?
How about PhpLiteAdmin? - https://code.google.com/p/phpliteadmin/
It allows you to directly add/modify the structure and data of an sqlite database but also allows import and export of tables, structure, indexes, and data (SQL, CSV). If you're dealing with thousands of entries then this may be the important feature for whatever tool you use.
There's no installation and it's open-source.
You said
...be a plain sqlite database, and sqlite doesn't support relationships.
But SQLite does support relationships. By default foreign key enforcement is disabled; you can enable it with
sqlite> PRAGMA foreign_keys = ON;
Now you can implement your requirement by having proper foreign keys.
You said
I will add data to dictionary-tables,
Instead of multiple dictionary tables, just have one dictionary table and add one more column to it called dictionary_name.
Your aggregation table can then simply have foreign keys referring to the dictionary table.
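A sketch of that design (the table and column names are made up):

PRAGMA foreign_keys = ON;

CREATE TABLE dictionary (
  id INTEGER PRIMARY KEY,
  dictionary_name TEXT NOT NULL,
  value TEXT NOT NULL
);

CREATE TABLE aggregation (
  id INTEGER PRIMARY KEY,
  first_entry_id INTEGER NOT NULL REFERENCES dictionary(id),
  second_entry_id INTEGER NOT NULL REFERENCES dictionary(id)
);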
We're implementing a new system using Java/Spring/Hibernate on PostgreSQL. This system needs to make a copy of every record as soon as a modification/deletion is done on the record(s) in the table(s). Later, the audit table(s) will be queried by reports to display the data to the users.
I was planning to implement this auditing/versioning feature by having a trigger on the table(s) which would copy the modified (or deleted) row to a table called ENTITY_VERSIONS, which would have about 20 columns called col1, col2, col3, col4, etc. to store the columns from the above table(s). However, the problem is: if there is more than one table to be versioned and only one target table (ENTITY_VERSIONS) to store all the tables' versions, how do I design the target table?
Or is it better to have a copy of the version table for each table that needs versioning?
It would be a bonus if some pointers to PostgreSQL trigger (and associated stored procedure) code for implementing the auditing/versioning could be shared.
P.S.: I looked at "Suggestions for implementing audit tables in SQL Server?" and kinda like the answer, except I would not know what type OldValue and NewValue should be.
P.P.S.: If the tables use SOFT deletes (phantom deletes) instead of HARD deletes, does any of your advice change?
I would have a copy of each table to hold the versions of that table you wish to keep. It sounds like a bit of a nightmare to maintain and use a global versioning table.
This link in the Postgres documentation shows some audit trigger examples in Postgres.
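As a rough illustration (hypothetical customer table; the linked examples are more complete), a per-table version table and trigger could look like this:

CREATE TABLE customer_versions (LIKE customer);
ALTER TABLE customer_versions
  ADD COLUMN version_op CHAR(1),
  ADD COLUMN version_ts TIMESTAMPTZ DEFAULT now();

CREATE OR REPLACE FUNCTION customer_audit() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO customer_versions SELECT OLD.*, 'D', now();
    RETURN OLD;
  ELSE
    INSERT INTO customer_versions SELECT OLD.*, 'U', now();
    RETURN NEW;
  END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_audit_trg
AFTER UPDATE OR DELETE ON customer
FOR EACH ROW EXECUTE PROCEDURE customer_audit();

If you use soft deletes, only the UPDATE path would ever fire.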
In a global table, all the columns can be stored in a single column of hstore type. I just tried this kind of audit and it works great; I recommend it. The audit table example at this link tracks all changes in a single table: you simply add a trigger onto the tables you want to keep audit history on, and all the changes are stored as hstore. It works for v9.1+.