Copy tables containing BLOB columns between Oracle Databases [closed]

On an ad hoc basis, we want to copy the contents of 4 of our Oracle production tables to QA/UAT environments.
This is not a direct copy; we need to copy data based on some input criteria for filtering.
Earlier we were using a Sybase database, where the BCP utility worked like a charm. However, we have recently migrated to Oracle and have a similar data copy requirement.
Based on my analysis so far, I have considered the options below:
RMAN (Recovery Manager) - Cannot use this, as it does not allow us to copy selected tables or filter the data.
SQL*Loader (SQLLDR) - Cannot use this, as we have BLOB columns and are not sure how to create a CSV file for these BLOBs. Any suggestions?
Oracle Data Pump (expdp/impdp) - Cannot use this as-is: even though it allows copying selected tables, it does not allow us to filter data using a query with joins (I know it supports a QUERY clause, but that works only on a single table). A workaround is to create temp tables with the desired dataset and dump them using expdp/impdp. Any suggestions if I have missed anything in this approach?
Database Link - This seems to be the most promising approach for this use case, but we need to check whether the DBA will allow us to create links to/from the production database.
SQL*Plus COPY - Cannot use this, as it does not work with BLOB fields.
Can someone please advise which would be the best approach with respect to performance?

I would probably use a DATAPUMP-format external table, so it would be something like:
create table my_ext_tab
organization external
(
type oracle_datapump
default directory UNLOAD
location( 'my_ext_tab.dmp' )
)
as
<my query>
You can then copy the file across to your other database, create the external table there, and load your new table with an insert, something like:
insert /*+ APPEND */ into my_tab
select * from my_ext_tab
You can also use parallelism to read and write the files.
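On the target side you would recreate the external table against the copied dump file before running that insert. A minimal sketch, where the column list, types, and the UNLOAD directory object are placeholders to be matched to your actual query:
create table my_ext_tab
(
  id        number,
  doc_name  varchar2(200),
  doc_body  blob
)
organization external
(
  type oracle_datapump
  default directory UNLOAD
  location ('my_ext_tab.dmp')
);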

Taking all your constraints into account, it looks like a database link is the best option. You can create views for your queries with joins and filters on the PROD environment and select from those views through the db link. That way, the filtering is done before the transfer over the network, not afterwards on the target side.
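A rough sketch of that approach, with placeholder names for the link, view, and tables (create the view on PROD, run the rest on the QA/UAT side):
-- on PROD: a view that applies the joins and filters
create or replace view qa_extract_v as
select t1.id, t1.doc_blob, t2.status
from   big_table t1
join   other_table t2 on t2.id = t1.id
where  t2.status = 'READY';

-- on QA/UAT: a database link to PROD, then pull the filtered rows across it
create database link prd_link
connect to app_reader identified by "********"
using 'PRDDB';

insert /*+ APPEND */ into qa_table
select * from qa_extract_v@prd_link;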

Related

For Snowflake prod->dev scheduled cloning, what is a good way to handle custom rules? [closed]

We are trying to set up dev and qa environments using data from a prod environment.
We are not using CREATE DATABASE dev CLONE prod because we are trying to avoid cloning database-specific objects like stages and pipes, since we are using per-environment Terraform to manage pipe-related objects and want to avoid out-of-band changes to those objects.
On top of that, there are some tables that should not be cloned from prod->dev. I'm trying to design a cleaner solution than the cobbled mess that we have.
We have a scheduled script that does the following:
Connect to prod and dev databases and fetch the right src and dst schemas
Run SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '<>' AND TABLE_TYPE = 'BASE TABLE' to get tables to clone
Cloning tables across databases results in dangling references to constraints and sequences, so those have to be cloned manually (https://docs.snowflake.com/en/user-guide/database-replication-considerations.html?#references-to-objects-in-another-database)
For each table (a consolidated sketch of these per-table statements follows the list):
If it shouldn't be cloned, skip it
Run CREATE OR REPLACE TABLE <dev> CLONE <prod> COPY GRANTS;
Run GET_DDL(<dev>) to see if the table has sequences/constraints to update
Run CREATE OR REPLACE SEQUENCE <dev> CLONE <prod> to update the nextval of the sequence, since our table was cloned and references the sequence from the source database (and it also has the wrong value anyway)
Run ALTER TABLE <dev> ALTER COLUMN <> SET DEFAULT <new seq>.nextval
Check if there are constraints
Run ALTER TABLE <dev> DROP CONSTRAINT <> since the cloned tables reference the source database
Run ALTER TABLE <dev> ADD CONSTRAINT <> to rebuild them to reference the destination database
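For illustration, here is roughly what the per-table portion boils down to, with placeholder object, sequence, and constraint names:
CREATE OR REPLACE TABLE dev_db.app.orders CLONE prod_db.app.orders COPY GRANTS;
SELECT GET_DDL('TABLE', 'dev_db.app.orders');   -- inspect for sequence/constraint references
CREATE OR REPLACE SEQUENCE dev_db.app.orders_seq CLONE prod_db.app.orders_seq;
ALTER TABLE dev_db.app.orders ALTER COLUMN id SET DEFAULT dev_db.app.orders_seq.NEXTVAL;
ALTER TABLE dev_db.app.orders DROP CONSTRAINT orders_customer_fk;
ALTER TABLE dev_db.app.orders ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES dev_db.app.customer (id);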
So... it works, but it's hacky, fragile, and constantly needs updating because of custom rules. We currently have this running on an AWS Lambda, but a first step would be to migrate this to pure Snowflake.
Does anyone have any suggestions to improve this process? Or at least recommendations on Snowflake tools that could help?
I realise this is not really an answer to your question but I would absolutely not do what you are proposing to do - it's not the way to manage your SDLC (in my opinion) and, especially if your data contains any PII information, copying data from a Prod to a non-Prod database runs the risk of all sorts of regulatory and audit issues.
I would do the following:
As a one-off exercise, create the scripts necessary to build the objects for your "standard" environment - presumably basing this off your current Prod environment
Manage these scripts in a version-controlled repository e.g. Git
You can then use these scripts to build any environment and you would change them by going through the standard Dev, Test, Prod SDLC.
As far as populating these environments with data goes, if you really need Production-like data (and production volumes of data) then you should build routines for copying the data from Prod to the chosen target environment that, where necessary, anonymise the data. These scripts should be managed in your code repository and, as part of your SDLC, there should be a requirement to build/update the script for any new/changed table.
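A minimal sketch of such a copy routine in Snowflake SQL; the database, table, and column names are made up, and hashing is just one possible anonymisation approach:
-- refresh a dev table from prod, masking personal data on the way in
CREATE OR REPLACE TABLE dev_db.app.customer AS
SELECT
    customer_id,
    SHA2(email)     AS email,      -- anonymised
    SHA2(full_name) AS full_name,  -- anonymised
    signup_date,
    country
FROM prod_db.app.customer;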

Updating a table in SQL Server Management Studio [duplicate]

This question already has answers here: How to auto daily execute queries SQL Server? (3 answers)
I am using SQL Server Management Studio v17.4
I have a view v_fetch_rates. I have created a table using the command
SELECT *
INTO RATES
FROM v_fetch_rates
My question is: how do I update the table RATES on a daily basis automatically? Is there a way to do it with the existing view, or do I need to write a stored procedure for this?
I did some googling but it confused me even more.
I have never created a job before so any help/resources to refer would help a lot.
If the issue is that the view is slow (because of its definition or the amount of data it returns) and you want to materialize the data in order to improve performance, you can simply create an indexed view.
The idea is simple - creating an index on the view forces the engine to materialize it. Of course, there are various limitations and requirements for indexed views; you can find more information in the documentation.
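For reference, a minimal indexed-view sketch. The view and column names here are placeholders, and whether your actual v_fetch_rates qualifies depends on its definition; an indexed view requires SCHEMABINDING, two-part base table names, and a unique clustered index:
CREATE VIEW dbo.v_fetch_rates_indexed
WITH SCHEMABINDING
AS
SELECT RateId, RateValue, RateDate
FROM dbo.RatesSource;
GO

CREATE UNIQUE CLUSTERED INDEX IX_v_fetch_rates_indexed
ON dbo.v_fetch_rates_indexed (RateId);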
If you just want to have the data in a table, repopulated on a daily basis, you can:
create a simple stored procedure that truncates the current table and repopulates it from the view (see the sketch after this list)
create a more complex routine that modifies (inserts/updates/deletes) data only where needed
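A minimal sketch of the first option; the procedure name is made up:
CREATE PROCEDURE dbo.usp_refresh_rates
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.RATES;      -- clear yesterday's snapshot

    INSERT INTO dbo.RATES
    SELECT *
    FROM dbo.v_fetch_rates;        -- repopulate from the view
END;
A SQL Server Agent job with a daily schedule can then run EXEC dbo.usp_refresh_rates; to keep RATES up to date.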

Using SqlBulkCopy with a partitioned view in SQL Server

I want to use SqlBulkCopy to get data from my .Net app into SQL Server, to improve performance.
But the DBA has made all the really big tables (the ones where SqlBulkCopy would really shine) into partitioned views.
There are no articles on SO about this, and there are questions on the web but none of them are answered.
I'm looking for a workaround to make this work.
Note:
I'm going to edit my question tomorrow with the exact error message and whatever other details I can bring. None of the questions on the internet include the error that SQL Server returns.
Given that bulk copy has no support for partitioned views - partitioned tables are something different - it is likely the view is read only for this purpose and you must write to the correct underlying table. Simple as that.
Possibly there is also an INSTEAD OF trigger on the view that is not fired by bulk copy. That said, it is pretty bad to SqlBulkCopy straight into the final table (SqlBulkCopy is written for scenarios that do not scale well), so the best practice is to SqlBulkCopy into a temporary table and then insert into the final table from there (avoiding the bad locking behaviour in SqlBulkCopy). In that case the trigger also fires.
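A T-SQL sketch of that staging pattern; the table and column names are made up, and the staging table is what SqlBulkCopy would write into:
-- staging table targeted by SqlBulkCopy
CREATE TABLE dbo.Orders_Staging
(
    OrderId    INT            NOT NULL,
    CustomerId INT            NOT NULL,
    OrderDate  DATETIME2      NOT NULL,
    Amount     DECIMAL(18, 2) NOT NULL
);

-- after the bulk copy completes, move the rows into the correct underlying
-- table of the partitioned view (or through the view, if it is updatable)
INSERT INTO dbo.Orders_2024 WITH (TABLOCK)
SELECT OrderId, CustomerId, OrderDate, Amount
FROM dbo.Orders_Staging;

TRUNCATE TABLE dbo.Orders_Staging;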

Access database design in SQL [closed]

We have 52 MS Access databases, and each database has 4 tables. The total data in our databases is around 5 million. Now we are planning to move to SQL Server. We have designed our new database, which will be a SQL Server database with approximately 60 tables.
My question is: how will we integrate the 52 Access databases into one SQL Server database?
Is that possible, or would we have to create 52 databases in SQL Server too in order to migrate our data? These 52 Access databases are interrelated and all have the same structure.
If I were you (and I'm not, but if I were...) I would load all of that data into 4 tables: Doctor, Project, Contract, Institution. Just append the data from each Access database into the corresponding table. However, while appending each database, I would add a new field to each table: Country. Then, when you append the data for England, you also populate the Country field of those rows with "England", and so on for all your countries.
Now, when it comes time to access the data, you can force certain users to only be able to see the data for England, and certain other people to only see the data for Spain, etc... This way, those 4 tables can house all of your data, and you can still filter by any country you like.
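A sketch of one way to do that append from the SQL Server side, assuming the Access files are reachable from the server, the ACE OLE DB provider is installed, and 'Ad Hoc Distributed Queries' is enabled; the path, table, and column names are placeholders:
-- one-off: add the discriminator column
ALTER TABLE dbo.Doctor ADD Country NVARCHAR(50) NULL;

-- append one country's data, tagging every row
INSERT INTO dbo.Doctor (DoctorId, DoctorName, Country)
SELECT DoctorId, DoctorName, 'England'
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\data\England.accdb'; 'Admin'; '',
                'SELECT DoctorId, DoctorName FROM Doctor');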
From a technical point of view, there's no problem in creating only one SQL Server database, containing all 52 * 4 tables from the MS Access databases. SQL Server provides various options for logically separating your objects, for example by using Schemas. But then again, if you decide to create separate databases, you still have the ability to write queries across databases, even if the databases are not hosted on the same SQL Server instance (although there might be a performance penalty when writing queries across linked servers).
It's difficult to give a more precise answer with the limited detail in your question, but in most cases, a single database with multiple schemas (perhaps one schema for each MS Access database) would probably be the best solution.
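For illustration, a sketch of that schema-per-source-database layout; the schema, table, and column names are placeholders:
CREATE SCHEMA England;
GO

CREATE TABLE England.Doctor
(
    DoctorId   INT           NOT NULL PRIMARY KEY,
    DoctorName NVARCHAR(200) NOT NULL
);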
Now, for migrating the data from MS Access to SQL Server, you have various options. If you just need to perform a one-time migration, you could simply use the Import/Export Wizard that comes with SQL Server. The wizard automatically creates the tables in the destination database for you, and it also lets you save SSIS packages that you can use to migrate the data again.

How to insert data from one table to another table in SQL Server [closed]

I have two SQL Server databases, both with the same database and table structure. Now I want to insert data from one specific table into the corresponding table in the other database. Both tables have the same structure, but the user for each database is different.
I tried this query but it does not work:
insert into Database1.dbo.Audit_Assessment1
select * from Database2.dbo.Audit_Assessment1
Please help me
SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes and such. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
Pulled from https://stackoverflow.com/a/187852/435559
1. You can use a linked server: set it up from the View menu (top left) by selecting Registered Servers. Then you can open a new query window and write your query (see the sketch below).
2. You can use replication:
Snapshot replication if it is just a one-time or occasional copy.
Transactional replication if the insert happens repeatedly.
Read more about replication:
http://msdn.microsoft.com/en-us/library/gg163302.aspx
Read more about linked servers:
http://msdn.microsoft.com/en-us/library/aa560998.aspx
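A sketch of the linked-server route, assuming the second database lives on another SQL Server instance; the server name is a placeholder:
-- register the other instance as a linked server (SQL Server to SQL Server)
EXEC sp_addlinkedserver @server = N'OTHERHOST\SQL2017';

-- then use four-part names to copy across it
INSERT INTO Database1.dbo.Audit_Assessment1
SELECT * FROM [OTHERHOST\SQL2017].Database2.dbo.Audit_Assessment1;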
Try approaching this differently. Why not script out the table you need and manipulate it that way?
From the scripted-out insert statements you should easily be able to modify them to target your new database.
Sounds like your login doesn't have the permissions it needs: INSERT on Database1.dbo.Audit_Assessment1 and SELECT on Database2.dbo.Audit_Assessment1. The error about an invalid object name is probably because your login has no permission on the object at all, so it can't even see its definition.
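A sketch of the grants a DBA could apply, assuming the same login is mapped to a user in both databases; the user name is a placeholder:
USE Database1;
GO
GRANT INSERT ON dbo.Audit_Assessment1 TO app_user;
GO

USE Database2;
GO
GRANT SELECT ON dbo.Audit_Assessment1 TO app_user;
GO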
