What kind of database does the SQL Server Migration Assistant use as an internal data repository?

What kind of database does the SQL Server Migration Assistant use as an internal data repository, which it stores in the source-metabase.mb file?
I am guessing it is one of the standard formats, and that I could use a standard tool to open it and edit some entries (I need to automatically add some custom scripts for the migration of tables with BLOB data).
You could also just suggest a way to check the file against the most popular database formats: SQL Server Compact, MySQL, Access...

I could use a standard tool to open it and edit some entries
I would not count on it :) It is a proprietary metadata format that has nothing to do with the DB products SSMA supports. It can store metadata representing Oracle as well as SQL Server objects, among others; obviously the format has no connection to the file structures the actual databases use. The SSMA format has no public documentation, and even if you reverse engineer it, SSMA may fail to synchronize your changes after manual intervention (it was designed purely as a migration tool targeting SQL Server and was mostly meant to create new objects there based on their source-database counterparts).
Could you just write some stored procedures or triggers in your database instead? For most DBs, metadata is exposed through special tables or views anyway. You probably only need to do this for SQL Server, since it is your target DB after the migration, right? Trying to directly parse or manipulate the files managed by a "big" DB (like SQL Server or Oracle) does not seem like a good idea for most scenarios (digital forensics excepted, for example).
SQL Server's metadata-related views are here and the metadata functions are here. You can profile your SQL Server instance while SSMA is connected to it, just to get a feel for what it does to extract metadata (object names, table columns, stored procedure source, etc.).
Data manipulation is likewise straightforward from the DB side, if you need that too.
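
For example, a quick way to see the kind of metadata SSMA reads is to query the catalog views directly; a minimal T-SQL sketch (the procedure name is a made-up placeholder):

    -- Tables, their columns, and column data types, from the catalog views.
    SELECT t.name  AS table_name,
           c.name  AS column_name,
           ty.name AS data_type
    FROM sys.tables  AS t
    JOIN sys.columns AS c  ON c.object_id = t.object_id
    JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
    ORDER BY t.name, c.column_id;

    -- The source text of a stored procedure, another item SSMA extracts.
    SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.MyProcedure')) AS proc_source;

Profiling an SSMA session should show queries in this spirit, though against a wider set of views.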

SQLite schema based on SQL Server DB schema

I maintain a Windows-based application backed by a SQL Server DB, so there is a set of SQL entities, like tables and views. Over time I add new features and fix bugs, so the schema of the tables and views changes. When I need to deploy a new version of the application, I deploy the DB part by relying on DacPac/DacFx, which automatically generates the difference between the already-deployed DB and the supplied DacPac, so the deployed DB is altered to match the DacPac's content. This way I don't have to write code that compares 2 schemas and then generates a difference: DacFx does that for me.
That works well, but now I need to expand the application so it also supports a SQLite DB. I will certainly have to create a new application layer working with SQLite, which is doable, but the one place I need help with is creating and maintaining the SQLite DB schema the same way I do for SQL Server with DacPac/DacFx, so that a difference in schemas is computed and applied. While doing that, I would ideally like to write the SQL schema once so it can be applied to SQL Server as well as SQLite. Ideally, I need to generate the SQLite schema based on the SQL Server-specific schema.
I looked into sqldiff, which is capable of generating the difference between 2 SQLite DBs, and thought I could:
use a technique from here to migrate the SQL Server schema to SQLite
generate a temporary SQLite DB based on the schema generated above
compare that temporary DB to an existing SQLite DB using sqldiff, and finally apply the difference to the target SQLite DB
but sqldiff, as stated in its Limitations section:
The sqldiff utility is not designed to support schema migrations
In addition, it has limitations around views:
The sqldiff.exe utility does not (currently) display differences in TRIGGERs or VIEWs.
So I read that as: the tool could probably be used for some migration cases, but it is not really recommended.
How do you suggest generating and applying the schema differences?
I'm also interested in how others solve the task of incrementally updating the schema of their SQLite DB, even if I take SQL Server completely out of the equation and instead maintain the SQLite schema in source code only. Does everyone create their own schema-comparison tools, instead of using something similar to DacFx in the SQL Server world?
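
For what it's worth, here is a rough sketch of how some of this can be done by hand in SQLite; the file, table, and column names are placeholders, and the sqlite_master comparison is deliberately crude (it is sensitive to whitespace in the stored SQL):

    -- Compare two SQLite schemas at the SQL level (a crude stand-in for sqldiff).
    ATTACH DATABASE 'deployed.db' AS deployed;

    -- Objects that are new or defined differently in the current schema
    -- relative to the deployed copy.
    SELECT type, name, sql FROM main.sqlite_master
    EXCEPT
    SELECT type, name, sql FROM deployed.sqlite_master;

    -- A common pattern for incremental upgrades: stamp the schema with
    -- PRAGMA user_version and have the application apply pending steps.
    PRAGMA deployed.user_version;    -- 0 on a freshly created database

    BEGIN;
    ALTER TABLE deployed.orders ADD COLUMN shipped_at TEXT;  -- hypothetical step
    PRAGMA deployed.user_version = 2;
    COMMIT;

Many teams do exactly the user_version dance above instead of computing diffs: each migration script is written once, numbered, and applied in order.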

Best way to transfer tables from SQL Server to Azure SQL?

I recently moved a SQL Server 2012 database from an old web server to Azure SQL. I also keep a copy of the database on my personal machine, which is now running SQL Server 2019. During development, I frequently make changes to tables on my local machine and then need to transfer those tables to the server.

I used to do this using a Visual Studio SSIS package. It was very easy: I used the "Transfer SQL Server Objects" task to select one or more tables, specify whether the existing tables should be dropped first, and replace the tables on the server. However, the "Transfer SQL Server Objects" task does not work when transferring objects to Azure SQL, because it uses the "USE" statement.

There must be an easy way to transfer tables to an Azure SQL database. I've used the "Microsoft Data Migration Assistant", and it works great for the initial migration, but it does not allow you to replace tables. I feel like I am missing something very obvious, because transferring tables is a routine task and there must be an easy way to do this with Azure SQL.
Manually managing and synchronizing different database versions can be time-consuming. The Schema Compare extension facilitates database comparison and gives you complete control when syncing databases: you can filter particular differences and categories of differences before applying any modifications. It is a reliable tool that will save you time and code.

Hence, the Schema Compare extension provides an easy-to-use experience to compare two database definitions and apply the differences from the source to the target. Microsoft documentation that may be useful: Schema Compare extension (here) and How to: Use Schema Compare to Compare Different Database Definitions (here).

Keep two different databases synchronized

I'm modeling a new microservice architecture, migrating parts of a monolithic application to microservices.
I'm adding a new PostgreSQL database, and the idea is to use that database in the future; for now, though, I still need to keep the old SQL Server database updated and also synchronize the PostgreSQL database whenever something new appears in the old database.
I've searched for ETL tools, but they are meant to move data into a data warehouse (that's not what I need). I can't simply replicate the information, because the DB model is not the same.
Basically, I need a way to detect new rows inserted in the SQL Server database, transform that information, and insert it into my PostgreSQL database.
Any suggestions?
PostgreSQL's foreign data wrappers might be useful. My approach would be to change the frontend to use PostgreSQL and let PostgreSQL handle the split via its various features (triggers, rules, ...).
Take a look at StreamSets Data Collector. It can detect changes in SQL Server and insert/update/delete to any DB that has a JDBC driver, including Postgres. It is open source, but you can buy support. You can also change, add, remove, or rename fields in the data stream so that they match the target table.
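
If you end up rolling the change detection by hand on the SQL Server side, one minimal pattern is an AFTER INSERT trigger that copies new rows into an outbox table, which a transfer job then polls, transforms, and pushes to PostgreSQL. A T-SQL sketch, with all table and column names hypothetical:

    -- Outbox table holding captured rows until the transfer job processes them.
    CREATE TABLE dbo.CustomerOutbox (
        OutboxId   INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT           NOT NULL,
        Name       NVARCHAR(200) NOT NULL,
        CapturedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        Processed  BIT           NOT NULL DEFAULT 0
    );
    GO

    -- Copy every newly inserted customer row into the outbox.
    CREATE TRIGGER dbo.trCustomerInsert
    ON dbo.Customer
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.CustomerOutbox (CustomerId, Name)
        SELECT i.CustomerId, i.Name
        FROM inserted AS i;   -- "inserted" is the pseudo-table of new rows
    END;

For higher volumes, SQL Server's built-in Change Tracking or Change Data Capture features do the detection without custom triggers.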

How to connect the Data Modeler to SQL Server 2008?

I want to document a SQL Server 2008 database. I have been asked for the diagram and the specifications of the tables, fields, data types, etc. (a data dictionary).
The problem is that I cannot find a program that suits my requirements. Erwin Data Modeler has a reverse engineering tool, but it is not useful to me because it does not allow me to specify only the tables I want to diagram; it demands entire schemas (and yet Erwin is the program in which I am being asked to do the diagramming). SQL Server Management Studio is not an option, because it is the same tool that manages the databases (so using it implies modifying the DB in some way) and it is not very flexible about the choice of tables.
So I resorted to Oracle SQL Developer Data Modeler, which works perfectly with an Oracle database: you can create diagrams, generate scripts (the latter helped me move diagrams into Erwin), and even generate documentation of objects, etc. But I have not been able to connect it to SQL Server 2008 to do the same thing I did with Oracle. I downloaded jtds-1.2.jar to make the connection, but I do not know exactly how to do it.
In summary, I need a program that allows me to choose the tables I want to diagram (which I can achieve with Data Modeler) and then open them in Erwin (via the script the former generates), since Erwin is the target program. I also need the field documentation, although that is secondary to the question.
It would be helpful if you know of any other method, program or procedure.
Download jTDS v1.3 or 1.3.1 from SourceForge.
Get the JAR out of the ZIP.
Add it to your sqldev folder.
Open Preferences in the tool and go to Third Party JDBC Drivers (under the Database page, I think); then, when you get to the connection dialog, there will be a SQL Server and Sybase connection type.
I talk about this in more detail here.
You can connect to a SQL Server 2008 DB and reverse engineer the databases into one or more data models, and then generate data dictionary reports and DDL scripts. And a lot more... get v4.2 if you want to generate HTML reports that include the diagrams themselves.
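
If the data dictionary part is all that is blocking you, SQL Server 2008's standard metadata views already contain the table/field/type information; a minimal query sketch (the table names in the filter are hypothetical):

    -- Basic data dictionary for a chosen set of tables.
    SELECT c.TABLE_SCHEMA,
           c.TABLE_NAME,
           c.COLUMN_NAME,
           c.DATA_TYPE,
           c.CHARACTER_MAXIMUM_LENGTH,
           c.IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS AS c
    WHERE c.TABLE_NAME IN (N'Customers', N'Orders')
    ORDER BY c.TABLE_NAME, c.ORDINAL_POSITION;

Paste the result into your documentation, or feed it to whatever generates the data dictionary report.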

How do I configure my Microsoft Access database to pull source data directly from SAP BW?

I use several Microsoft Access databases on a regular basis to create reports. To get the source data, I currently have to log in to SAP BW (via SAP NetWeaver), run the source data report, export the results as a .csv file (actually saving it as a .txt file), and then import that file into Microsoft Access. Is there a way to have Access pull the data from SAP BW directly?
Any help is appreciated!
All of the databases used by SAP are industry-standard databases, so the data is going to be stored in a system that supports ODBC.
As far as I know, SAP in general uses Sybase, which is also what SQL Server was originally based on.
So SAP is running on an industry-standard SQL server (Sybase or SQL Server). If it is running on IBM, then the data is in DB2 (often on the AS/400 system).
You thus simply need to contact your IT department and obtain the required ODBC connection strings for the database. You "might" also need to install the latest Sybase drivers if you are not running SAP on SQL Server, but again, such information is available from your SAP support folks.
So you simply set up linked tables in Access to the SAP database, and thus no exporting, downloading, or importing of data is required; you will be reporting on live data at all times. The "challenge" is of course to grasp the table structures in SAP: a LARGE challenge, since in most cases a report you have been using for exports is the result of MANY related tables joined together into an "easy" view for exporting. So be prepared for some complex queries to get the data the way you want.
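
To give a sense of what "complex queries" means here: a single export report is often several linked tables stitched together. A hypothetical Access-SQL sketch (none of these table or column names are real SAP ones; Access requires the parentheses around chained joins):

    -- Rebuild an "easy" export view from its underlying linked tables.
    SELECT c.CustomerName,
           o.OrderNumber,
           i.Material,
           i.Quantity
    FROM (Customers AS c
          INNER JOIN Orders AS o ON o.CustomerId = c.CustomerId)
          INNER JOIN OrderItems AS i ON i.OrderId = o.OrderId;

Saving a join like this as a query in Access gives you the same shape of data you used to export, but against the live linked tables.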
