I'm using the dbatools PowerShell module to generate the schema of a SQL Server database.
The generated schema seems to be in alphabetical order, rather than in dependency order, so if there's a table A that has a constraint referencing table B then the generated schema cannot be executed. Similarly if there is a view VA referencing view VB.
I have a workaround for generating tables involving two export operations: one for the table definition, and one for the table constraints, using different scripting options. Kludgy, but it works. However I don't have a similar workaround for handling dependencies in views.
The dbatools PowerShell module delegates to SQL Server Management Objects (SMO), as does another tool, mssql-scripter (implemented in Python). I've experimented with mssql-scripter and it will generate the schema in dependency order.
So I am assuming it must be theoretically possible for dbatools to generate schema in dependency order as well. Is this actually possible, and if so, how?
We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post-Deployment Script.
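For illustration, a minimal sketch of what such a script could contain. Post-deployment scripts run after every deployment, so they should be written to be re-runnable; the table and column names below come from the question's example, and the MERGE shows a common pattern for reference data (the StatusCodes table is hypothetical):

-- Idempotent data change, safe to re-run on every deployment
UPDATE dbo.myTable
SET    myColumn = 5
WHERE  someColumn = 'condition'
  AND  myColumn <> 5;

-- Typical pattern for seeding/maintaining reference data (hypothetical table)
MERGE dbo.StatusCodes AS target
USING (VALUES (1, N'Active'), (2, N'Inactive')) AS source (Id, Name)
   ON target.Id = source.Id
WHEN MATCHED AND target.Name <> source.Name THEN
    UPDATE SET Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name) VALUES (source.Id, source.Name);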
Do not. Seriously. I found DACPAC always to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the database tracking which scripts have executed (possibly with a checksum so you do not need to change the name to update them).
You can easily generate them partially by schema compare (and then generate the change script), but such scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
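As an illustration only, a minimal sketch of such a tracking table (all names are made up):

-- One row per deployment script that has been applied to this database
CREATE TABLE dbo.SchemaChangeLog (
    ScriptName   nvarchar(260) NOT NULL PRIMARY KEY,
    ScriptHash   varbinary(32) NULL,  -- e.g. HASHBYTES('SHA2_256', <script body>) to detect edits
    AppliedAtUtc datetime2(0)  NOT NULL DEFAULT SYSUTCDATETIME()
);

-- The deployment runner checks this table before executing each script, e.g.:
-- IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ScriptName = N'0001_add_customer_table.sql')
--     ...run the script and insert a row...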
Is it possible to clone schemas selectively in Snowflake?
For example:
Original:
DB_OG
--schema1
--schema2
--schema3
Clone:
DB_Clone
--schema1
--schema3
The CREATE <object> … CLONE statement does not support applying a filter or pattern, nor cloning multiple named objects at once, and its behaviour is to recursively clone every object underneath:
For databases and schemas, cloning is recursive:
Cloning a database clones all the schemas and other objects in the database.
There are a few explicit ways to filter the clone:
Clone the whole database, then follow up with DROP SCHEMA commands to remove the unnecessary schemas
Create an empty database and selectively clone only the schemas required from the source database into it
Both of the above can also be automated by logic embedded within a stored procedure that takes a pattern or a list of names as its input and runs the appropriate SQL commands.
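For illustration, roughly how both options look in SQL, using the database and schema names from the example above:

-- Option 1: clone everything, then drop what is not needed
CREATE DATABASE DB_Clone CLONE DB_OG;
DROP SCHEMA DB_Clone.schema2;

-- Option 2: create an empty database and clone only the required schemas
CREATE DATABASE DB_Clone;
CREATE SCHEMA DB_Clone.schema1 CLONE DB_OG.schema1;
CREATE SCHEMA DB_Clone.schema3 CLONE DB_OG.schema3;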
Currently, excluding certain schemas while cloning all the other schemas of a database is not supported.
If the schemas that are not required happen to be the most recently created ones, you could use the AT | BEFORE clause to exclude them (clone as of a particular timestamp, which will exclude any schemas created after that timestamp).
Ref: https://docs.snowflake.com/en/sql-reference/sql/create-clone.html#notes-for-cloning-with-time-travel-databases-schemas-tables-and-streams-only
Other options include dropping the unwanted schemas after the cloning operation, or cloning only the required schemas.
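A rough sketch of the time-travel variant, assuming the unwanted schemas were created after the chosen point in time and that it falls within the Time Travel retention period (the timestamp is a placeholder):

-- Clone the database as it existed at a point in time; schemas created
-- after this timestamp will not be part of the clone.
CREATE DATABASE DB_Clone CLONE DB_OG
  AT (TIMESTAMP => '2023-01-01 00:00:00'::timestamp_ltz);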
I have created an SSIS package for fuzzy lookup.
I just want to know how to make each of the following properties dynamic, passed in at execution time, so the package can run against any database table's column.
OLEDB_Source - Server, Database, Table and Column name.
FL_Large_Data - Server, Database, Table and Column name.
FL_Large_Data - Similarity threshold.
OLE DB Destination - Server, Database and Table name.
Since you are aiming to use this package for different tables and columns, this cannot be achieved using expressions alone (it might be possible if you had a fixed, unified table structure). You must automate the package creation in order to do that, and you have several choices:
Use BIML (Business Intelligence Markup Language)
Use SQL Server DTS libraries
Use wrapper libraries such as EzApi
For each of the choices above there are many tutorials online; you can refer to them in order to create the package.
We have a multi-tenant system where each tenant has their own database. Tenants also have the option to create their own data structures, each of which becomes its own table in the database.
This causes an issue: when we run the Visual Studio schema compare, it always flags these tables as differences and we have to unselect them. This becomes a big problem, as the schema compare has major performance issues when unselecting multiple differences.
These user-defined tables all follow a certain naming pattern, e.g. UserTable1, UserTable2, so what we really need is a way to perform the schema comparison while ignoring tables that contain a substring (in this example, UserTable). Is this possible, or is there a suitable alternative to using the Visual Studio comparison tool?
For those coming here from Google looking for a solution to this.
All you have to do is right-click on the section and, ta-da, you can Include or Exclude all objects depending on the existing state of the objects.
In this case, section means the Delete, Change, and Add parent folders in the schema compare window.
Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what an SP can easily do. Is there any sort of "best practice" for handling this sort of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command-line interface called SQLCMD that these scripts can be run through; see the MSDN tutorial for it.
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. These would require that a baseline database exist - no customer data. Then you use either of those methods to capture the database as-is. Backup/restore is preferable because detach/attach requires the database to be taken offline. But users need to be synced (re-mapped to server logins) before they can access the database.
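A minimal sketch of the backup/restore route, assuming a baseline backup has already been taken; every name and path below is a placeholder:

-- Restore the baseline (schema only, no customer data) as a new customer-specific database
RESTORE DATABASE CustomerA_DB
FROM DISK = N'C:\Backups\Baseline.bak'
WITH MOVE N'Baseline_Data' TO N'C:\Data\CustomerA_DB.mdf',
     MOVE N'Baseline_Log'  TO N'C:\Data\CustomerA_DB_log.ldf',
     RECOVERY;

-- Re-map any orphaned database users to server logins afterwards, e.g.:
-- ALTER USER AppUser WITH LOGIN = AppUser;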
If you have the script to create the database, it is easy for them to run it from within their program. If you have any specific prerequisites for creating the database and setting permissions accordingly, you can wrap all the scripts up into one script file to execute.
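For instance, the wrapper could be a stored procedure they call with just a database name, built with dynamic SQL (the procedure and parameter names are made up; CREATE DATABASE cannot take the name as a parameter directly, so it is quoted and concatenated):

CREATE PROCEDURE dbo.usp_CreateCustomerDatabase
    @DatabaseName sysname
AS
BEGIN
    DECLARE @sql nvarchar(max) = N'CREATE DATABASE ' + QUOTENAME(@DatabaseName) + N';';
    EXEC sys.sp_executesql @sql;
    -- Follow up by running the schema/permissions script against the new database,
    -- either here via further dynamic SQL or externally via SQLCMD.
END;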