Are Flyway schemas merely informational or do they affect function?

I have a SQL Server database that uses schemas to logically group objects; there are ten schemas in the database.
If I baseline my database and create the schema history table in the “foo” schema, will Flyway apply a migration from the migration folder that operates on an object in the “bar” schema?
Do I need one folder of migration scripts for each schema in the database? The documentation explains how to specify schemas on the command line, but it isn't clear why I must.

The list of schemas on the command line has two effects:

- the first named schema is where the schema history table goes;
- the named schemas are the ones that are cleaned by flyway clean.

(Note: Flyway 6.1 introduced the defaultSchema command-line parameter to separate these two usages.)
Migrations can refer to any schema in the database that you have access to; indeed, some objects may exist in one schema but depend on objects in another. If you're happy for the history table to go in dbo, and want to control the whole database with Flyway, just don't set these parameters. A folder of scripts per schema may help you maintain them, but it is not necessary.
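For instance, a single versioned migration is free to touch several schemas at once; here is a minimal sketch (the file, schema, table, and column names are illustrative, not from the question):

```sql
-- V2__link_bar_to_foo.sql  (hypothetical Flyway migration file name)
-- One migration may reference any schema the connecting user can access;
-- the location of the schema history table does not constrain it.
-- Assumes foo.Product (ProductId) already exists.
CREATE TABLE bar.OrderLine (
    OrderLineId int NOT NULL PRIMARY KEY,
    ProductId   int NOT NULL
);

ALTER TABLE bar.OrderLine
    ADD CONSTRAINT FK_OrderLine_Product
    FOREIGN KEY (ProductId) REFERENCES foo.Product (ProductId);
```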

Related

Clone Schemas in Snowflake

Is it possible to clone schemas selectively in Snowflake?
For example:
Original:
DB_OG
--schema1
--schema2
--schema3
Clone:
DB_Clone
--schema1
--schema3
The CREATE <object> … CLONE statement does not support filters, patterns, or multiple objects; its behaviour is to recursively clone every object underneath. As the documentation puts it:
For databases and schemas, cloning is recursive: cloning a database clones all the schemas and other objects in the database.
There are a few explicit ways to filter the clone:
- Clone the whole database, then follow up with DROP SCHEMA commands to remove the unnecessary schemas.
- Create an empty database and selectively clone only the required schemas from the source database into it (sketched below).
Both of the above can also be automated by logic embedded within a stored procedure that takes a pattern or a list of names as its input and runs the appropriate SQL commands.
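A minimal sketch of the second option, using the database and schema names from the question (Snowflake supports CLONE at the schema level):

```sql
-- Create an empty target database, then clone only the wanted schemas into it.
CREATE DATABASE DB_Clone;
CREATE SCHEMA DB_Clone.schema1 CLONE DB_OG.schema1;
CREATE SCHEMA DB_Clone.schema3 CLONE DB_OG.schema3;
```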
Currently, excluding certain schemas while cloning all the other schemas of a database is not supported.
If the schemas you don't need are the most recently created ones, you can use the AT | BEFORE clause to clone the database as of a particular timestamp; any schemas created after that timestamp are excluded from the clone (see the sketch below).
Ref: https://docs.snowflake.com/en/sql-reference/sql/create-clone.html#notes-for-cloning-with-time-travel-databases-schemas-tables-and-streams-only
Other options include dropping the unneeded schemas after the cloning operation, or cloning only the required schemas.
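For example, a sketch of the AT | BEFORE approach (the timestamp is a placeholder; this assumes the unwanted schemas were created after that point and that Time Travel retention still covers it):

```sql
-- Clone the database as it existed at a given point in time; schemas
-- created after this timestamp will not be part of the clone.
CREATE DATABASE DB_Clone CLONE DB_OG
  AT (TIMESTAMP => '2021-06-01 00:00:00'::TIMESTAMP_LTZ);
```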

SSDT SQL Server Data Tools Customer specific requirements

We are using SQL Server Data Tools (SSDT) to manage our customer databases.
In theory all databases are identical, but in practice we have a few stored procedures (and one trigger) that would change from one customer to another.
We created a main SSDT project for everything common, and then one SSDT project per customer containing only the customer-specific stored procedures (no tables).
In the customer-specific projects we get warnings because SSDT can't find the tables referred to in the stored procedures, but we can live with that (obviously SSDT won't be able to validate the tables' fields, since it can't find the tables). For the trigger, we get an error (table can't be found), so the database project doesn't compile.
How should we manage that? I guess we should not be alone in this situation.
Is there a way for a database project to reference objects (tables) from another database project?
Daniel N pointed in the right direction; I'll just explain. Say you have a database project named DatabaseA that contains only the objects that are 100% identical for every customer. You then create another database project, DatabaseB, and add DatabaseA as a database reference with the location set to "same instance, same database". In DatabaseB you can add the customer-specific objects. You can then create a project for each other customer in the same way.
In SSDT you can add another database project or dacpac as a reference.
In the properties for the referenced project you can set where the referenced database resides: same server, same database; same server, different database; and so on.
https://msdn.microsoft.com/en-us/library/jj684584%28v=vs.103%29.aspx?f=255&MSPPError=-2147217396

Create and compare snapshots of a schema definition

I want to take a snapshot of all table, view and procedure definitions, and diff this snapshot against another version of the same schema. (By snapshot I mean the schema definition stored in some text file.)
I am not interested in procedure bodies, only in what is relevant to my DAOs. (Maybe you could call that a schema interface...?)
Is there a one-command way of creating such snapshot for an Oracle schema?
You can use Oracle SQL Developer's "Database Diff" to do this. You select the two Oracle schemas to be compared, and it produces all the differences.
The good thing about this tool is that it lets you select what to consider when computing the differences; you then have the option of comparing only package specs.
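If you want something scriptable rather than the GUI, a minimal sketch using DBMS_METADATA can produce a diffable text snapshot (this assumes SQL*Plus or SQLcl for SPOOL, and dumps table/view DDL plus package specs only, leaving out bodies; the output file name is made up):

```sql
-- Spool a text snapshot of the schema definition for later diffing.
SET LONG 1000000 PAGESIZE 0 TRIMSPOOL ON
SPOOL schema_snapshot.sql
SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
FROM   user_objects
WHERE  object_type IN ('TABLE', 'VIEW')
ORDER  BY object_type, object_name;
-- Package specs only, not bodies: closest to the "schema interface" idea above.
SELECT DBMS_METADATA.GET_DDL('PACKAGE_SPEC', object_name)
FROM   user_objects
WHERE  object_type = 'PACKAGE'
ORDER  BY object_name;
SPOOL OFF
```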

Case sensitivity and database projects

I created a schema in our SQL Server 2012 database called [Auth]. Then tables and triggers were created as well. Later I was informed that the schema naming standard is lowercase, so it should be [auth]. I renamed the schema in the database project, and all related references. However, the Schema Compare feature doesn't detect the difference, and isn't renaming the schema.
This affects our Entity Framework objects, as they should be 'auth'.
Is there a way to make the database project see a case change as a change, and update the database?
There is an option in the project settings called "Validate Casing on Identifiers" which, according to the documentation, should detect differences in case. However, it only seems to take effect if you select a case-sensitive collation in the Database Settings of your project.
Once I selected SQL_Latin1_General_CP1_CS_AS, Schema Compare detected the change in the schema name and scripted the DROP/CREATE of the schema as expected.
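For reference, the script Schema Compare generates in that case looks roughly like the following sketch (the table name is hypothetical, and the exact script SSDT emits may differ; it presupposes a case-sensitive database collation, where [Auth] and [auth] are distinct names):

```sql
-- Create the new lowercase schema, move each object across, drop the old one.
CREATE SCHEMA [auth];
GO
ALTER SCHEMA [auth] TRANSFER [Auth].[SomeTable];
GO
DROP SCHEMA [Auth];
GO
```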

Grouping ETL Staging Tables With User Schemas?

I was thinking of putting staging tables and the stored procedures that update those tables into their own schema. That way, when importing data from SomeTable into the data warehouse, I would run an Initial.StageSomeTable procedure which would insert the data into the Initial.SomeTable table. All the procedures and tables dealing with the Initial staging step are thus grouped together. Then I'd have a Validation schema for that stage of the ETL, and so on.
This seems cleaner than trying to uniquely name all these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemata to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
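As a concrete illustration of that kind of grouping (the table and column names are made up, loosely following the Project REAL convention above):

```sql
-- Group ETL plumbing and auditing into their own schemas.
CREATE SCHEMA etl AUTHORIZATION dbo;
GO
CREATE SCHEMA audit AUTHORIZATION dbo;
GO
-- A staging copy of a source table lives in etl...
CREATE TABLE etl.Customer (
    CustomerId int           NOT NULL,
    Name       nvarchar(100) NOT NULL
);
-- ...while load-run bookkeeping lives in audit.
CREATE TABLE audit.LoadRun (
    LoadRunId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    StartedAt datetime2         NOT NULL
);
```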
I think it's appropriate enough; it doesn't really matter much, and you could use a separate database instead if you liked, which is actually what we do.
I'm not sure why you would want a Validation schema, though. What are you going to do there?
Both the reasons you list (purpose/intent, security) are valid reasons to use schemas. Once you start using them, you should always specify schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have a same-named table in each of several schemas, combined with table partitioning (available in SQL Server 2005 and up). We load the data into the table in the first schema; once it's validated, we "swap" it into dbo, after first swapping the dbo data into a "dumpster" schema copy of the table. Net production downtime is measured in seconds, and it's all carefully wrapped in an explicit transaction, as sketched below.
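A minimal sketch of that swap (the names are illustrative; ALTER TABLE ... SWITCH requires the tables to have identical structure and the target to be empty, and for partitioned tables you would name the specific partition to switch):

```sql
-- All-or-nothing swap: production downtime is the duration of this transaction.
-- Assumes dumpster.FactSales is empty (SWITCH requires an empty target).
BEGIN TRANSACTION;
    -- Park the current production data in the dumpster copy...
    ALTER TABLE dbo.FactSales     SWITCH TO dumpster.FactSales;
    -- ...then promote the validated, freshly loaded data into production.
    ALTER TABLE staging.FactSales SWITCH TO dbo.FactSales;
COMMIT TRANSACTION;
```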
