Replace in all tables in db file - database

I'm not sure if something like this is possible, but I have a db file with multiple tables in it. In some of those tables there are GUIDs that are used as references across the tables. Is there any way I can do a search and replace of a value in all the tables? (One thing that may make this easier is that the column has the same name in every table.) Thanks for any help.

You can open the .sqlite or .db file in a text editor and do a find and replace.
You can import the db file into a database, add ON UPDATE CASCADE to the foreign key relationships, and then perform the update on the parent table only.
You can write a script that updates that particular value in every table.
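The scripting route can be sketched with Python's built-in sqlite3 module. A minimal sketch, assuming the shared column is named ref_guid (a hypothetical name; substitute your own):

```python
import sqlite3

def replace_guid_everywhere(db_path, column, old_value, new_value):
    """Replace old_value with new_value in `column` of every table that has it."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    # Enumerate all tables in the file.
    cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    tables = [row[0] for row in cur.fetchall()]
    changed = 0
    for table in tables:
        # PRAGMA table_info lists the columns; only touch tables that have ours.
        cols = [r[1] for r in cur.execute('PRAGMA table_info("%s")' % table)]
        if column in cols:
            cur.execute(
                'UPDATE "%s" SET "%s" = ? WHERE "%s" = ?' % (table, column, column),
                (new_value, old_value),
            )
            changed += cur.rowcount
    conn.commit()
    conn.close()
    return changed
```

Because the column name is the same everywhere, the script needs no per-table configuration; it discovers the tables to touch at run time.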

Related

Initial load creates my tables in the sym_x tables schema

After setting up master-master replication on top of PostgreSQL, I tried to perform an initial load using:
./symadmin -engine octopusdb reload-node 2
My setup is:
1. I created all sym_x tables in a separate schema (replication).
2. I created all my application tables in other schemas of their own.
3. I inserted into sym_trigger.source_schema_name the suitable schema name for each application table.
Still, the initial load seems to create the application tables under the 'replication' schema instead of in their own schemas.
Is there some parameter I am missing for the properties file, or the initial load command?
So apparently, for a multi-schema configuration, you need to create a separate record for each schema in sym_router (with a separate router_id and the appropriate target_schema_name), and for each table put a record in sym_trigger_router and sym_trigger with the appropriate router_id and schema name.
Also, after a failed load, I needed to remove everything from the tmp directory under the SymmetricDS installation so that the updates to the sym_x tables would be recognized.
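To illustrate the configuration described above, a sketch of the three inserts; the 'sales' schema, node group ids, and trigger/router ids are made up, and exact column lists vary by SymmetricDS version:

```sql
-- Hypothetical names throughout: schema 'sales', node groups 'corp'/'store'.
INSERT INTO replication.sym_router
    (router_id, source_node_group_id, target_node_group_id,
     target_schema_name, router_type, create_time, last_update_time)
VALUES ('router_sales', 'corp', 'store', 'sales', 'default',
        current_timestamp, current_timestamp);

INSERT INTO replication.sym_trigger
    (trigger_id, source_schema_name, source_table_name, channel_id,
     create_time, last_update_time)
VALUES ('trig_orders', 'sales', 'orders', 'default',
        current_timestamp, current_timestamp);

INSERT INTO replication.sym_trigger_router
    (trigger_id, router_id, initial_load_order, create_time, last_update_time)
VALUES ('trig_orders', 'router_sales', 1, current_timestamp, current_timestamp);
```

One router per target schema, then one trigger plus trigger-router link per table, repeated for each application schema.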

Rename table or column in SQL server without breaking existing apps

I have an existing database in MS SQL server and want to rename some tables and columns because the names currently used aren't accurate to what it represents.
I have multiple web and desktop applications that access the database using Entity Framework (code first). There are too many to update in one go, and I cannot afford for all the apps to stop working.
It would be nice if SQL Server allowed a 'permanent' alias for tables and columns, but I don't think this feature exists.
Or I was wondering if there was a way in EF to have two names for the same property?
For the tables, you could rename them and then create a synonym with the old name pointing to the new name.
For the columns, changing their name will break your application. You could create computed columns with the old name as well, that simply display the value of the newly named column (but this seems a little silly).
Note, however, that a computed column cannot reference another computed column, so you would have to duplicate the column in its entirety. That could lead to problems down the line if you don't update the definition of both columns.
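A sketch of both ideas, using hypothetical names (a table renamed from Customer to Client, a column renamed from Surname to LastName):

```sql
-- The old table name keeps resolving via a synonym after the rename.
EXEC sp_rename 'dbo.Customer', 'Client';
CREATE SYNONYM dbo.Customer FOR dbo.Client;

-- The old column name is preserved as a computed column mirroring the new one.
EXEC sp_rename 'dbo.Client.Surname', 'LastName', 'COLUMN';
ALTER TABLE dbo.Client ADD Surname AS LastName;
```

Note that the computed Surname column is read-only, so any application that writes to the old column name will still break.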
A view containing a simple select statement acts exactly like a table. You really need to fix this properly across the database and applications. However if you want to go the view route, I suggest you do this:
Say you have a table called MyTable that you want to rename to TheTable, with a column called MyColumn that you want to rename to TheColumn.
Create a schema, say, new
Move the original table into it with this ALTER SCHEMA new TRANSFER MyTable
Rename the table and column.
Now you have a table called new.TheTable with a column called TheColumn. Everything is broken
Lastly, create a view that looks just like the old table
CREATE VIEW dbo.MyTable
AS
SELECT Column1, Column2, Column3, TheColumn As MyColumn
FROM new.TheTable;
Now everything works again.
All your fixed 'new' tables are in the new schema
However, now everything is extra complicated.
This is basically an illustration that you should just fix it properly across the whole app, one piece at a time, with careful change management. Definitely don't complicate it with triggers.
Since you are using code first with multiple web and desktop applications, you are likely managing database changes from one place through migrations and ignoring changes other places.
You can create an empty migration and add code that will change the table name and column names to what you want. The migration should then create a view that will select from that table with the original table and column names. When you apply this migration, everything should still be working as normal from all applications. There are no model changes since you didn’t touch the model classes. Inserts, updates, and deletes will still happen through the view. There is no need for potentially buggy triggers or synonyms on the table in this option.
Now that you have the table changed, you can focus on the application code. If it helps, you can add annotations over the column and table names and start refactoring the code. You need to make sure you don’t make model changes that will break the other apps. If apps ignore model changes, you can get away with adding annotations over the columns and classes on all the apps before refactoring. You can get rid of the view sooner this way.
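The SQL such a migration might execute can be sketched like this, reusing the MyTable/TheTable and MyColumn/TheColumn example names from the earlier answer:

```sql
EXEC sp_rename 'dbo.MyTable', 'TheTable';
EXEC sp_rename 'dbo.TheTable.MyColumn', 'TheColumn', 'COLUMN';
GO
-- The view re-exposes the old names; a simple single-table view like this is
-- updatable in SQL Server, so inserts, updates, and deletes keep working.
CREATE VIEW dbo.MyTable
AS
SELECT Column1, Column2, Column3, TheColumn AS MyColumn
FROM dbo.TheTable;
```

Once every application has been refactored to the new names, dropping the view is the only cleanup left.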

Creating Access Database Copies with different uniqueID

I have multiple Access databases with approximately 30 tables each. Each database corresponds to an airplane and its allied tables. Most of the data in these tables is the same. Hence, I would just like to change the UniqueID of the first (perfect/tested) database so as to have the same structure (along with the data) for the rest of the databases, and have multiple databases ready.
I tried the following:
1. Importing data: this creates copies of the data tables in the new database, which then have to be renamed, and the UniqueID problem persists.
2. Broke all relationships of the main table, changed the primary key, and added the relationships again. This is not a good solution, as it complicates the work.
3. Copied data by modifying tables in Excel and then pasting the data into Access, keeping a lookout for the IDs in each table and modifying them accordingly. This is also a tedious process.
I am looking for a good solution and suggestions. Thanks in advance!

How do I automatically populate a new table in MS-Access with info from an existing table?

I am very new to MS Access and I am struggling with some things that seem like they should be the most basic. I have imported a table of data from Excel and have defined the data types for the fields. I have no problem there, but now I want to make a new table that has as a primary key one of the fields from the imported table. It looks like I can manually create this table, set the relationship, and then go back and type in each record associated with the new primary key, but this seems completely ridiculous. Surely there must be a way to automatically create one record for each unique instance in the matching field from the original table. Yet, I've scrolled through hundreds of pages of Access tutorials and Googled the question and found no satisfactory guidance.
Do I completely misunderstand what Access is all about? How do I create a new table with entries from a field on an existing table? What am I missing?
You don't specify which version of Access you are using; the suggestions listed below apply to 2010 but should be similar in other versions.
You can create new tables from existing tables using either a 'Make Table' option after selecting 'Create' -> 'Query Design', or you can manually create your table first, then use an 'Append' query.
Without knowing the design of your table it's hard to get more descriptive.
Are you populating your new table's primary key ahead of time, or relying on Auto Number to do it (preferred method)?
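For the specific case described (one record per unique value of the matching field), a make-table query does it in one step. A sketch in Access SQL, where ImportedTable and KeyField are hypothetical names for your imported table and its matching field:

```sql
SELECT DISTINCT KeyField
INTO NewTable
FROM ImportedTable;
```

After the table is created, open it in design view to mark KeyField as the primary key, then define the relationship.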

Oracle -- Import data into a table with a different name?

I have a large (multi-GB) data file exported from an Oracle table. I want to import this data into another Oracle instance, but I want the table name to be different from the original table. Is this possible? How?
Both importing and exporting systems are Oracle 11g. The table includes a BLOB column, if this makes any difference.
Thanks!
UPDATES:
The idea here was to update a table while keeping the downtime on the system that's using it to a minimum. The solution (based on Vincent Malgrat's answer and APC's update) is:
Assuming our table name is A:
1. Make a temp schema TEMP_SCHEMA
2. Import our data into TEMP_SCHEMA.A
3. CREATE TABLE REAL_SCHEMA.B AS SELECT * FROM TEMP_SCHEMA.A
4. Rename REAL_SCHEMA.A to REAL_SCHEMA.A_OLD
5. Rename REAL_SCHEMA.B to REAL_SCHEMA.A
6. DROP TABLE REAL_SCHEMA.A_OLD
This way, the downtime is only during steps 4 and 5, both of which should be independent of data size. I'll post an update here if this does not work :-)
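The copy-and-swap above can be written in SQL like this (run as a user with privileges on both schemas):

```sql
CREATE TABLE real_schema.b AS SELECT * FROM temp_schema.a;
ALTER TABLE real_schema.a RENAME TO a_old;
ALTER TABLE real_schema.b RENAME TO a;
DROP TABLE real_schema.a_old;
```

The two renames are metadata-only operations, which is why the downtime window does not depend on the size of the data.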
If you are using the old EXP and IMP utilities you cannot do this. The only option is to import into a table of the same name (although you could change the schema which owns the table).
However, you say you are on 11g. Why not use the Data Pump utility, introduced in 10g, which replaces Import and Export? In 11g that utility offers the REMAP_TABLE option, which does exactly what you want.
edit
Having read the comments the OP added to another response while I was writing this, I don't think the REMAP_TABLE option will work in their case. It only renames new objects. If a table with the original name exists in the target schema the import fails with ORA-39151. Sorry.
edit bis
Given the solution the OP finally chose (drop existing table, replace with new table) there is a solution with Data Pump, which is to use the TABLE_EXISTS_ACTION={TRUNCATE | REPLACE} clause. Choosing REPLACE drops the table whereas TRUNCATE merely, er, truncates it. In either case we have to worry about referential integrity constraints, but that is also an issue with the chosen solution.
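An impdp invocation using that clause might look like the following; the connect string, directory object, and dump file name are all hypothetical:

```shell
impdp system/password@orcl \
  directory=DATA_PUMP_DIR dumpfile=a_export.dmp \
  tables=REAL_SCHEMA.A \
  table_exists_action=replace
```

With table_exists_action=replace, Data Pump drops and recreates REAL_SCHEMA.A from the dump file, so no temporary schema or manual swap is needed.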
I post this addendum not for the OP but for the benefit of other seekers who find this page some time in the future.
I suppose you want to import the table in a schema in which the name is already being used. I don't think you can change the table name during the import. However, you can change the schema with the FROMUSER and TOUSER option. This will let you import the table in another (temporary) schema.
When it is done copy the table to the target schema with a CREATE TABLE AS SELECT. The time it will take to copy the table will be negligible compared to the import so this won't waste too much time. You will need two times the disk space though during the operation.
Update
As suggested by Gary a cleverer method would be to create a view or synonym in the temporary schema that references the new table in the target schema. You won't need to copy the data after the import as it will go through directly to the target table.
Use the option REMAP_TABLE=EXISTING_TABLE_NAME:NEW_TABLE_NAME in impdp. This works in 11gR2.
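For example (schema name, dump file, and connection details are made up):

```shell
impdp scott/tiger@orcl \
  directory=DATA_PUMP_DIR dumpfile=export.dmp \
  remap_table=SCOTT.OLD_TABLE:NEW_TABLE
```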
Just import it into a table with the same name, then rename the table.
Create a view, named to match the table in the export, defined as SELECT * FROM the table you want to import into. Ignore errors on import.
