Convert SQL.Bak to BacPac - sql-server

I'm using SQL Server 2012. I have a database backup file (.bak) and am trying to export it to a .bacpac file so I can import it into Azure. The problem occurs during the conversion process (validating the schema model), where I get the following errors:
"Error SQL71501: View: [dbo].[AC_Section] has an unresolved reference to object [dbo].[sueres].".
"Error SQL71562: Procedure: [dbo].[milp] has an unresolved reference to object [tempdb].[dbo].[sysob].[xtyp]."
and similar errors are reported for many other tables and objects.
How do I solve this, or is there another way to convert the database to a .bacpac?

Try to resolve invalid objects before migrating the database to SQL Azure. Invalid objects are objects (stored procedures, views, etc.) that reference objects that no longer exist, including objects in tempdb.
SELECT
QuoteName(OBJECT_SCHEMA_NAME(referencing_id)) + '.'
+ QuoteName(OBJECT_NAME(referencing_id)) AS ProblemObject,
o.type_desc,
ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name) AS MissingReferencedObject
FROM
sys.sql_expression_dependencies sed
LEFT JOIN sys.objects o
ON sed.referencing_id=o.object_id
WHERE
(is_ambiguous = 0)
AND (OBJECT_ID(ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name)) IS NULL)
ORDER BY
ProblemObject,
MissingReferencedObject
I would also like to recommend using the Data Migration Assistant before migrating the database to SQL Azure, although note that this tool does not detect invalid objects at this time.
Microsoft Data Migration Assistant v3.1
Hope this helps.
Regards,
Alberto Morillo

So I came to the conclusion that some data, functions, tables, and other objects in my .bak file are not supported by Azure, or you could say they are outdated. The way to deploy the .bak file to Azure is to avoid, delete, or rewrite these objects before deploying the file, which is going to change the data flow.
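As an illustration only, here is a hedged sketch of cleaning up one of the reported references, assuming the missing object [dbo].[sueres] really is obsolete and the referencing view can be dropped (or rewritten) rather than fixed:
IF OBJECT_ID(N'dbo.sueres') IS NULL
    DROP VIEW dbo.AC_Section;   -- or ALTER the view to remove the dead reference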

Related

SQL Server trigger reports 'inserted' table is missing

I am running a script against a SQL Server 2016 database that creates various tables, views and triggers. This same script has been working against dozens of other servers but it is getting an error against this one particular server.
All of the triggers seem to be created but when I check for invalid objects it reports all of them as invalid. The really strange part is, it says the problem is the "inserted" table (or "deleted" table, depending on the trigger) is missing.
I am checking for invalid objects using this query:
SELECT
QuoteName(OBJECT_SCHEMA_NAME(referencing_id)) + '.'
+ QuoteName(OBJECT_NAME(referencing_id)) AS ProblemObject,
o.type_desc,
ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name) AS MissingReferencedObject
FROM
sys.sql_expression_dependencies sed
LEFT JOIN
sys.objects o ON sed.referencing_id = o.object_id
WHERE
(is_ambiguous = 0)
AND (OBJECT_ID(ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name)) IS NULL)
ORDER BY
ProblemObject,
MissingReferencedObject
which I got from here
Find broken objects in SQL Server
The triggers are "instead of" triggers against the views that then modify the underlying tables. There is really nothing special about any of them.
Is there something wrong with my query and the objects aren't really invalid, or is there something wrong with the database? I am running the script as the database owner, so I don't think it is a permissions issue.
Thanks
Run this query against the problem SQL Server, and against one where no problems are shown:
SELECT *
FROM
sys.sql_expression_dependencies sed
LEFT JOIN
sys.objects o ON sed.referencing_id = o.object_id
WHERE o.type = 'TR'
ORDER BY
type_desc,
referenced_entity_name
If the inserted / deleted tables show up in one list but not the other, then there is some difference causing this. I'd check identifier quoting rules to start.
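Not from the original answer, but a small sketch of one such check: compare the SET options each trigger was created with on both servers, since a mismatched QUOTED_IDENTIFIER setting is a common culprit.
SELECT o.name,
OBJECTPROPERTY(o.object_id, 'ExecIsQuotedIdentOn') AS quoted_identifier_on,
OBJECTPROPERTY(o.object_id, 'ExecIsAnsiNullsOn') AS ansi_nulls_on
FROM sys.objects AS o
WHERE o.type = 'TR'
ORDER BY o.name;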

SQL - WHERE statement with text added to value from table

I need to copy data from an old database to a newer one.
Both of these databases have a user setup table with the primary key of "USER ID".
The problem is, in the old database the users didn't have the domain in the name, but in the new one they have.
Example:
Primary Key old DB: USER1
Primary Key new DB: DOMAIN\USER1
This prevents a standard WHERE clause from updating the correct user, because it can't find it due to the added domain.
My code:
'FROM [' + @src_DB + '].dbo.[' + @src_table + '] as src '
'WHERE [' + @dest_DB + '].dbo.[' + @dest_table + '].[User ID] = ' + @domain_name + 'src.[User ID]'
printing the result:
WHERE [Destination_DB].dbo.[Destination_Table].[User ID] = DOMAIN\src.[User ID]
The problem is it doesn't add the DOMAIN to the value but rather to the statement...
How can I add the Domain to the actual value of src.[User ID]?
I think there's a dot missing, and you should use QUOTENAME
'WHERE ' + QUOTENAME(destination_table) + '.[User ID] = ' + QUOTENAME(@domain) + '.' + QUOTENAME(source_table) + '.[User ID]'
Whenever you create a SQL statement dynamically it's a good idea to print it out, copy it into a new query window and check for syntax errors...
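A tiny sketch of that habit (hypothetical variable name):
DECLARE @sql nvarchar(max) = N'SELECT 1 AS sanity_check';
PRINT @sql;                     -- inspect / copy into a new query window first
-- EXEC sys.sp_executesql @sql; -- run only once the printed text looks right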
UPDATE You said: Yes, both databases are on the same server.
An object can be fully qualified as
ServerName.DatabaseName.SchemaName.ObjectName
A table's column adds one more part: .ColumnName
When both objects live on the same server, you can leave the first part out.
Objects in the same database can leave the database part out as well.
Objects in the default schema can be referenced by the ObjectName alone.
But if you state a DatabaseName, you must also state a SchemaName!
Use QUOTENAME() to add the brackets and add just the dots via string concatenation (or use the CONCAT() function).
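A minimal sketch of that rule, using hypothetical names:
DECLARE @db sysname = N'Destination_DB', @tbl sysname = N'Destination_Table';
SELECT CONCAT(QUOTENAME(@db), N'.', QUOTENAME(N'dbo'), N'.', QUOTENAME(@tbl), N'.', QUOTENAME(N'User ID')) AS FullName;
-- => [Destination_DB].[dbo].[Destination_Table].[User ID]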
UPDATE 2 Did I get this completely wrong?
After your comment I think I understand it now: you want to compare the values of both [User ID] columns, but the new one is DOMAIN\MyUserId while the old one was just MyUserId.
You have two approaches:
Add the Domain\ as a string to the value of [User ID]
Use SUBSTRING([User ID], CHARINDEX('\', [User ID]) + 1, 1000) to cut the newer value down to the bare value of [User ID]
For the first, something like this:
'WHERE [' + @dest_DB + '].dbo.[' + @dest_table + '].[User ID] = ''' + @domain_name + ''' + src.[User ID]'
The second is quite clumsy with dynamically created SQL; a static sketch follows below.
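For completeness, a static (non-dynamic) sketch of that second approach, assuming both databases are on the same server and using hypothetical table and column names:
UPDATE dest
SET dest.SomeColumn = src.SomeColumn          -- hypothetical column to copy
FROM Destination_DB.dbo.UserSetup AS dest
JOIN Source_DB.dbo.UserSetup AS src
ON SUBSTRING(dest.[User ID], CHARINDEX('\', dest.[User ID]) + 1, 1000) = src.[User ID];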

rename all mapped domain logins for instance

I am looking for a way to rename 100+ mapped domain groups in SQL server.
something like this:
old:
DOMAIN\Group01
DOMAIN\Group02
DOMAIN\Group03
DOMAIN\Group04
DOMAIN\Group05
new:
DOMAIN\Group01_OLD
DOMAIN\Group02_OLD
DOMAIN\Group03_OLD
DOMAIN\Group04_OLD
DOMAIN\Group05_OLD
Is there a fast way to bulk rename the logins in SQL?
You may try the following.
In SSMS run the script:
select
'alter login ' + quotename(name) + ' with name = ' + quotename(name + '_OLD')
from
sys.syslogins
where
name like 'DOMAIN\Group%'
Copy the results to the clipboard and paste them into a new SSMS query tab. Check the generated commands and then run them.
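If copy/paste is too slow for 100+ logins, a hedged alternative is to build and execute the statements in one batch (a sketch, assuming the same DOMAIN\Group% filter):
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + N'ALTER LOGIN ' + QUOTENAME(name)
+ N' WITH NAME = ' + QUOTENAME(name + '_OLD') + N';' + CHAR(13)
FROM sys.server_principals
WHERE type = 'G'                      -- 'G' = Windows group
AND name LIKE 'DOMAIN\Group%';
EXEC sys.sp_executesql @sql;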

Copy table schema from DB2 to SQL Server

I'm looking at creating staging tables in SQL Server for one of our SSIS packages to reduce the number of calls to DB2 since calls to DB2 may experience timeouts when DB2 recycles inactive connections. Is there an automated method for copying table schema from DB2 to SQL Server? There would need to be a 1 to 1 mapping of data types between DB2 and SQL Server for this to work. If there isn't a tool that exists, I may write one myself since some of our DB2 tables have 20+ columns and it would be a pain to manually recreate in SQL Server.
I have a partially working script you're welcome to use. We don't care about primary keys and such from DB2 on our SQL Server side of things; our only concern is to get the data over. Also, the data I've had to deal with was only string or date based, so the place where I build the data_type might be incorrect for decimals.
The core concept is that I inspect the sysibm.syscolumns to derive a list of all the tables and columns and then try to provide a translation between the DB2 data types and SQL Server.
Anyways, give it a shot. Feel free to edit or make a comment about what's broken and I'll see if I can fix it.
This is built using a mix of the SQL Server 2012 CONCAT function and the classic string concatenation operator +. It also assumes a linked server exists for the OPENQUERY call to work.
WITH SRC AS
(
SELECT
OQ.NAME AS column_name
, OQ.TBNAME AS table_name
--, RTRIM(OQ.COLTYPE) AS data_type
, CASE RTRIM(OQ.COLTYPE)
WHEN 'INTEGER' THEN 'int'
WHEN 'SMALLINT' THEN 'smallint'
WHEN 'FLOAT' THEN 'float'
WHEN 'CHAR' THEN CONCAT('char', '(', OQ.LENGTH, ')')
WHEN 'VARCHAR' THEN CONCAT('varchar', '(', OQ.LENGTH, ')')
WHEN 'LONGVAR' THEN CONCAT('varchar', '(', OQ.LENGTH, ')')
WHEN 'DECIMAL' THEN CONCAT('decimal', '(', OQ.LENGTH, ',', OQ.SCALE, ')') -- LENGTH holds the precision in sysibm.syscolumns
WHEN 'DATE' THEN 'date'
WHEN 'TIME' THEN 'time'
WHEN 'TIMESTMP' THEN ''
WHEN 'TIMESTZ' THEN ''
WHEN 'BLOB' THEN ''
WHEN 'CLOB' THEN ''
WHEN 'DBCLOB' THEN ''
WHEN 'ROWID' THEN ''
WHEN 'DISTINCT' THEN ''
WHEN 'XML' THEN ''
WHEN 'BIGINT' THEN ''
WHEN 'BINARY' THEN ''
WHEN 'VARBIN' THEN ''
WHEN 'DECFLOAT' THEN ''
ELSE ''
END AS data_type
, OQ.LENGTH
, OQ.SCALE
, CONCAT(CASE OQ.NULLS WHEN 'N' THEN 'NOT' ELSE '' END, ' NULL') AS allows_nulls -- NULLS = 'Y' means the column is nullable
, OQ.UPDATES AS updateable
FROM
OPENQUERY(LINKED, 'SELECT * FROM abcde01.sysibm.syscolumns T WHERE T.TBCREATOR = ''ABCD'' ' ) AS OQ
)
, S2 AS
(
SELECT
CONCAT(QUOTENAME(S.column_name), ' ', S.data_type, ' ', S.allows_nulls) AS ColumnDeclaration
, S.table_name
FROM
SRC AS S
)
, MakeItPretty AS
(
SELECT DISTINCT
QUOTENAME(S.TABLE_NAME) AS TABLE_NAME
, STUFF
(
(
SELECT ',' + ColumnDeclaration
FROM S2 AS SI
WHERE
SI.TABLE_NAME = S.TABLE_NAME
FOR XML PATH('')),1,1,''
) AS column_list
FROM
S2 AS S
)
SELECT
CONCAT('CREATE TABLE ', MP.TABLE_NAME, char(13), '(', MP.column_list, ')') AS TableScript
FROM
MakeItPretty AS MP;
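Once a TableScript value is generated, one hedged way to use it on the SQL Server side is to copy a script into a variable and execute it (hypothetical table and columns shown):
DECLARE @script nvarchar(max) = N'CREATE TABLE [MYTABLE]
([COL1] varchar(10) NULL,[COL2] int NOT NULL)';
EXEC sys.sp_executesql @script;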

Compare structures of two databases?

I wanted to ask whether it is possible to compare the complete database structure of two huge databases.
We have two databases, the one is a development database, the other a production database.
I've sometimes forgotten to apply changes to the production database before we released some parts of our code, which means the production database doesn't have the same structure, so when we release something we get errors.
Is there a way to compare the two, or synchronize?
For a MySQL database you can compare views and tables (column names and column types) using this query:
SET @firstDatabaseName = '[first database name]';
SET @secondDatabaseName = '[second database name]';
SELECT * FROM
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @firstDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t1
LEFT JOIN
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @secondDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t2 ON t1.tableRowType = t2.tableRowType
WHERE
t2.tableRowType IS NULL
UNION
SELECT * FROM
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @firstDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t1
RIGHT JOIN
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @secondDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t2 ON t1.tableRowType = t2.tableRowType
WHERE
t1.tableRowType IS NULL;
If you prefer using a tool with a UI, you can also use this script:
https://github.com/dlevsha/compalex
which can compare tables, views, keys etc.
Compalex is a lightweight script to compare two database schemas. It
supports MySQL, MS SQL Server and PostgreSQL.
Screenshot (compare tables)
You can use the command line:
mysqldump --skip-comments --skip-extended-insert -d --no-data -u root -p dbName1>file1.sql
mysqldump --skip-comments --skip-extended-insert -d --no-data -u root -p dbName2>file2.sql
diff file1.sql file2.sql
You can just dump them with --no-data and compare the files.
Remember to use the --lock-tables=0 option on your production database to avoid the big nasty global lock.
If you use the same mysqldump version (your dev and production systems should have the same software, right?) then you should get more-or-less identical files out. The tables will be in alphabetical order, so a simple diff will show up discrepancies easily.
To answer this kind of question, I've made a script that uses information_schema content to compare columns, data types, and tables.
SET @database_current = '<production>';
SET @database_dev = '<development>';
-- column and datatype comparison
SELECT a.TABLE_NAME, a.COLUMN_NAME, a.DATA_TYPE, a.CHARACTER_MAXIMUM_LENGTH,
b.COLUMN_NAME, b.DATA_TYPE, b.CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS a
LEFT JOIN information_schema.COLUMNS b ON b.COLUMN_NAME = a.COLUMN_NAME
AND b.TABLE_NAME = a.TABLE_NAME
AND b.TABLE_SCHEMA = @database_current
WHERE a.TABLE_SCHEMA = @database_dev
AND (
b.COLUMN_NAME IS NULL
OR b.COLUMN_NAME != a.COLUMN_NAME
OR b.DATA_TYPE != a.DATA_TYPE
OR b.CHARACTER_MAXIMUM_LENGTH != a.CHARACTER_MAXIMUM_LENGTH
);
-- table comparison
SELECT a.TABLE_SCHEMA, a.TABLE_NAME, b.TABLE_NAME
FROM information_schema.TABLES a
LEFT JOIN information_schema.TABLES b ON b.TABLE_NAME = a.TABLE_NAME
AND b.TABLE_SCHEMA = @database_current
WHERE a.TABLE_SCHEMA = @database_dev
AND (
b.TABLE_NAME IS NULL
OR b.TABLE_NAME != a.TABLE_NAME
);
Hope this script can also help people who are looking for a script-based rather than application-based solution. Cheers.
I tried mysqldiff without success, so I would like to help future readers by drawing attention to MySQL Workbench's compare function: http://dev.mysql.com/doc/workbench/en/wb-database-diff-report.html#c13030
If you open a model tab and select the Database menu, you get a Compare Schemas option, which you can use to compare two different schemas on two different servers, two schemas on the same server, a schema and a model, or a lot of other options I haven't tried yet.
For MySQL on Linux, it is possible via phpMyAdmin to export the databases with structure only and no data.
Scrolling through the export options for the entire database, just deselect 'data' and set the output to text. Export both databases you wish to compare.
Then compare the two text outputs in your preferred file-compare program or site. Synchronization is still manual with this solution, but it is effective for comparing and finding the structural differences.
Depending on your database, the tools available vary.
I use Embarcadero's ER/Studio for this. It has a Compare and Merge feature.
There are plenty of others, such as Toad for MySQL, that also have compare features. I also agree with the Red Gate suggestion, but I have never used it for MySQL.
Check out Gemini Delta - SQL Difference Manager for .NET. A free beta version is available for download, but the full version is only a few days away from public release.
It doesn't compare row-level data differences, but it compares tables, functions, sprocs, etc... and it is lightning fast. (The new version, 1.4, loads and compares 1k Sprocs in under 4 seconds, compared with other tools I've tested which took over 10 seconds.)
Everyone is right though, RedGate does make great tools.
