I am new to Oracle, and I am using the SSMA for Oracle tool to do the migration work.
I am having some issues migrating data from an Oracle database to a blank SQL Server database. I have followed the guides I have found online, but still no luck.
I cannot seem to get the data (I only need the tables) from the Oracle database into the blank SQL Server database.
I have the tables within a schema named System. I have expanded the schema and selected the tables checkbox on my Oracle connection, but when I select Migrate Data and enter my connection details, I always get:
Data migration was not performed because no objects were selected.
I have also tried selecting the destination database under the Server Explorer at the bottom, right-clicking, and selecting Synchronize with Database, but I get:
Nothing to process by this operation, because all objects are equal.
Can someone please shed some light on what I have failed to do?
Could it be related to the System Schema?
Thanks,
Managed to work around this: you cannot migrate from the SYSTEM schema, as per https://support.microsoft.com/en-us/kb/2020714.
However, what I did was use Redgate Oracle Compare (free trial) to migrate the objects from the SYSTEM schema to my target schema. I could then right-click this schema in SSMA, convert the schema, and proceed with the migration.
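If you would rather avoid a third-party tool, a plain CREATE TABLE ... AS SELECT per table can also copy the data out of SYSTEM into a dedicated migration schema that SSMA can then convert. A minimal sketch, assuming the target schema already exists (MIGUSER and MY_TABLE are hypothetical names):

-- Copy one table's structure and data out of SYSTEM
-- into the migration schema (run as a privileged user)
CREATE TABLE miguser.my_table AS
SELECT * FROM system.my_table;

Note that CTAS copies the columns and rows only; indexes, constraints, and triggers would need to be recreated on the new table.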
We have an on-prem SQL Server database (SQL Server 2017, compatibility level 140) that is about 1.2 TB. We need to do a repeatable migration of just the data to Azure SQL Database (PaaS). The on-prem database has procedures and functions that do cross-database queries, which rules out the Data Migration Assistant. Many of the tables we need to migrate are system-versioned (temporal) tables, just to make this more fun. Ideally we would like to move the data into a different schema of a different database so we can avoid the use of external tables (we are worried about performance).
Moving the data is just the first step as we also need to do an ETL job on the data to massage it into the new table structure.
We are looking at using ADF, but it has trouble with system-versioned tables unless we turn versioning off first.
What other options can we look at and try, to be able to do this quickly and repeatedly? Do we need to change to IaaS or use a third-party tool? Did we miss options in ADF to handle this?
If I summarize your requirements, you are not just migrating a database to the cloud but a complete architecture of your SQL Server, which includes:
1.2 TB of data,
Continuous data migration afterwards,
Procedures and functions for cross DB queries,
Versioned tables
Points 1, 3, and 4 can be handled by creating a .bacpac file with SQL Server Management Studio (SSMS), exporting it from on-premises to Azure Blob Storage, and then importing that file into Azure SQL Database. The .bacpac file that we create in SSMS allows us to include all system-versioned tables, which we can then import into the destination database.
Follow this third-party tutorial by sqlshack to migrate data to Azure SQL Database.
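Once the import finishes, one quick way to confirm that the system-versioned tables arrived intact is to query the catalog on the destination (works on SQL Server 2016+ and Azure SQL Database):

-- 2 = system-versioned temporal table, 1 = history table
SELECT name, temporal_type_desc
FROM sys.tables
WHERE temporal_type IN (1, 2);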
The stored procedures can also be moved using SQL Scripts. Follow the below steps:
Go to the server in Management Studio.
Select the database, right-click on it, and go to Tasks.
Select the Generate Scripts option under Tasks.
Once it starts, select the desired stored procedures you want to copy and create a file of them, then run the script from that file against the Azure SQL database, which you can log in to from SSMS.
The repeatable migration of data is the challenging part. You can try Change Data Capture (CDC), but I'm not sure whether that is exactly what your requirement is. You can enable CDC at the database level using the command below:
USE <databasename>;
EXEC sys.sp_cdc_enable_db;
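Enabling CDC on the database is only the first half; each table you want tracked needs its own capture instance. A minimal sketch, with dbo and MyTable as hypothetical placeholder names:

-- Enable CDC for one table (schema/table names are placeholders)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;  -- NULL = no gating role for reading change data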
Refer here to know more: https://www.qlik.com/us/change-data-capture/cdc-change-data-capture
I have to move data from an existing Oracle database to which I don't have direct access. The data is about 11 tables, 5 GB each. The database admin can export the tables to CSV or XML. The problem with CSV is that some of the data is textual, with lots of special characters. The problem with XML is that the markup is overhead that will significantly increase the size of the files. The DBA is not competent enough to provide a working and neat solution; he uses Toad as his database tool. Can you provide some ideas on how to perform such a migration in the best possible way?
Please refer to the below steps to migrate the data from Oracle to SQL Server.
Recommended Migration Process
To successfully migrate objects and data from Oracle databases to SQL Server, Azure SQL DB, or Azure SQL Data Warehouse, use the following process:
1. Create a new SSMA project.
2. After you create the project, you can set project conversion, migration, and type mapping options. For information about project settings, see Setting Project Options (OracleToSQL). For information about how to customize data type mappings, see Mapping Oracle and SQL Server Data Types (OracleToSQL).
3. Connect to the Oracle database server.
4. Connect to an instance of SQL Server.
5. Map Oracle database schemas to SQL Server database schemas.
6. Optionally, create assessment reports to assess database objects for conversion and estimate the conversion time.
7. Convert Oracle database schemas into SQL Server schemas.
8. Load the converted database objects into SQL Server.
You can do this in one of the following ways:
* Save a script and run it in SQL Server.
* Synchronize the database objects.
9. Migrate data to SQL Server.
10. If necessary, update database applications.
For more details:
https://learn.microsoft.com/en-us/sql/ssma/oracle/migrating-oracle-databases-to-sql-server-oracletosql?view=sql-server-2017
After the admin exports the data into CSV, try to convert it into a character set that will recognize all the special characters (UTF-8, for example).
Then try to follow the steps from this link: link. It might work.
If there are still broken special characters after the import, try to convert them manually.
Get the DBA to export the tables using the ASCII delimiters which were designed for this purpose:
Row delimiter: Decimal 30 / 0x1E
Column delimiter: Decimal 31 / 0x1F
Then you can use BCP (or any other similar product) to upload the data to SQL Server.
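On the SQL Server side, BULK INSERT can consume those delimiters directly using hexadecimal terminator notation. A minimal sketch, assuming the export file lands at C:\export\table1.dat (a hypothetical path) and the target table already exists:

-- Load the ASCII-delimited export (file/table names are placeholders)
BULK INSERT dbo.Table1
FROM 'C:\export\table1.dat'
WITH (
    FIELDTERMINATOR = '0x1F',  -- ASCII unit separator (decimal 31)
    ROWTERMINATOR   = '0x1E',  -- ASCII record separator (decimal 30)
    CODEPAGE        = '65001'  -- UTF-8; requires SQL Server 2016 or later
);

Since these delimiters never occur in normal text, the special characters in the data cannot break the parsing.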
I'm new to SQL Server and am trying to automatically update tables in SQL Server from tables in MS Access.
I have an Access database of metadata that must be kept updated for sending records to other groups. I also have a database in SQL Server which also has these same metadata tables. Currently these tables in the SQL Server database get updated manually by exporting the Access tables as Excel files, and then importing them into the SQL Server tables.
It's not the most efficient process and could lead to errors in the SQL Server database if someone forgets to check that they are using the most recent data from Access. So I would like to integrate some of the tables from Access to my database in SQL Server. Ideally I would like for the tables in my SQL Server database to be updated whenever Access is updated or at least update the tables automatically in the SQL Server database when I open it.
Would replicating the Access tables be the best approach? I am using SQL Server 2014 Developer, so I think I have this capability. From my understanding, mirroring is for an entire database, not just pieces of it. However, I do not want to be able to alter the metadata from SQL Server and have it reflected in Access. I cannot tell if replicating the tables would do this...?
I also looked at this post about writing multiple insert statements but was confused (What is the best way to auto-generate INSERT statements for a SQL Server table?). Someone else suggested importing all the data into SQL Server and then using an ODBC driver to connect the two, but I'm also not sure how this would update the database in SQL Server anytime Access is updated.
If you have any suggestions and a link to an easy-to-follow tutorial, I would really appreciate it!
Thanks
In Access, go to 'External Data', ODBC Database, and connect to the SQL Server database directly - make sure you select 'Link to the data source by creating a linked table' on the first page of the wizard. Now, this linked table is available in Access, but is actually the SQL Server table.
Get rid of the local Access tables, using the new linked tables in their place in whatever queries, forms, reports, etc. that you have in Access.
Now, any changes to the tables you see in this Access db ARE changes to the SQL Server database.
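If you do need a copy to land in SQL Server on a schedule instead (the direction you originally described), one option is to pull the Access file from the SQL Server side with OPENROWSET and the Microsoft ACE OLE DB provider. A hedged sketch: the file path and table names are hypothetical, the ACE provider must be installed on the server, and ad hoc distributed queries must be enabled:

-- One-time setup (requires sysadmin)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

-- Pull the Access table into a staging table (names are placeholders)
SELECT *
INTO dbo.MetadataStaging
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\data\metadata.accdb'; 'Admin'; '',
                'SELECT * FROM tblMetadata');

A SQL Server Agent job running this (or a MERGE against the real table) would give you the automatic refresh you asked about.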
We have a production SQL Server and my desktop has SQL Server 2008 R2 Management Studio software installed. I have recently been given a task to perform data mining on our server DBs.
We have around 100 or more tables there, and it is getting very difficult for me to see how the tables are related or how they were created.
For a particular scenario I have narrowed it down to 3 tables among the hundreds that we have, but I cannot work out how these tables are related to each other. I mean, only if I knew that one table's column is a PK/FK of another could I execute something like the below to extract data:
SELECT *
FROM tablea
INNER JOIN tableb
    ON tableb.id = tablea.id
and do data mining on the resulting data set.
Please let me know how I can get all the tables and their relationship details. What tool can I use so that information like the above can be extracted, or the database design can be understood?
I tried to create a DB diagram, but it showed me the below error:
Do I need to install any other tool?
Below are my SSMS version details:
I think your solution is to use a database diagram (https://msdn.microsoft.com/en-us/library/ms189078.aspx).
Just drag all the tables onto the screen and it will show you the relations; this of course only works when the primary keys/foreign keys are there.
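If the diagram designer is not available to you (see the error below), you can also read the relationships straight from the catalog views. A minimal sketch that lists every foreign-key column pair in the current database:

-- List all FK relationships: which column references which
SELECT fk.name AS constraint_name,
       tp.name AS parent_table,
       cp.name AS parent_column,
       tr.name AS referenced_table,
       cr.name AS referenced_column
FROM sys.foreign_key_columns AS fkc
JOIN sys.foreign_keys AS fk ON fk.object_id  = fkc.constraint_object_id
JOIN sys.tables  AS tp ON tp.object_id = fkc.parent_object_id
JOIN sys.columns AS cp ON cp.object_id = fkc.parent_object_id
                      AND cp.column_id = fkc.parent_column_id
JOIN sys.tables  AS tr ON tr.object_id = fkc.referenced_object_id
JOIN sys.columns AS cr ON cr.object_id = fkc.referenced_object_id
                      AND cr.column_id = fkc.referenced_column_id
ORDER BY tp.name, fk.name;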
For the error you are getting, if I Google it I get:
The backend version is not supported to design database diagrams or tables
The answer marked as the solution is:
This is commonly reported as an error due to using the wrong version of SSMS. Use the version designed for your database version. You can use SELECT @@VERSION to check which version of SQL Server you are actually using.
I'm running into a problem when accessing a SQL Server table from an Oracle setup via ODBC.
I can access 90% of the tables absolutely fine, but there are a few tables with names longer than 30 characters. Whenever I try to interact with such a table (describes, selects, etc.), Oracle throws an "identifier too long" error and gives up.
Is there a way to coax Oracle into playing nice with the SQL Server tables?
Assuming that we are talking about an Oracle database that has a database link created to a SQL Server database via Heterogeneous Services, you would need to write code using the DBMS_HS_PASSTHROUGH package to interact with the tables in question. You'd also need to use this package if you have tables with column names that are not valid Oracle identifiers.
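For illustration, a minimal PL/SQL sketch of the pass-through pattern; mssql_link is an assumed database link name, and the table and column names are hypothetical:

-- Fetch one column from a SQL Server table whose name exceeds
-- Oracle's 30-character identifier limit (names are placeholders)
DECLARE
  c   BINARY_INTEGER;
  nr  BINARY_INTEGER;
  val VARCHAR2(4000);
BEGIN
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@mssql_link;
  DBMS_HS_PASSTHROUGH.PARSE@mssql_link(
    c, 'SELECT some_column FROM a_table_name_longer_than_thirty_chars');
  LOOP
    nr := DBMS_HS_PASSTHROUGH.FETCH_ROW@mssql_link(c);
    EXIT WHEN nr = 0;
    DBMS_HS_PASSTHROUGH.GET_VALUE@mssql_link(c, 1, val);
    DBMS_OUTPUT.PUT_LINE(val);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@mssql_link(c);
END;
/

The statement passed to PARSE is sent to SQL Server verbatim, so Oracle never parses the long identifier itself.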