I'm looking for ways to generate a DDL ALTER script between different versions of the same model using Python.
Let's say I have two DDL scripts for different versions of the model (newer and older); I need to compare the two and then generate a DDL ALTER script for the changes, in order to update the database to the newer version.
I'm looking for an API for Python or a command-line tool.
I've tried using the Erwin API for this, but it seems to be able to generate only DDL CREATE scripts. Also, I'm not looking at any GUI-based programs.
We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema of a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post-Deployment Script.
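For example (a minimal sketch, reusing the hypothetical myTable update from the question), the post-deployment script could carry the data change itself. Post-deployment scripts run on every publish, so each statement should be written to be safe to re-run:

-- Post-deployment script: executed after every publish, so every
-- statement must be idempotent (safe to run repeatedly).
UPDATE myTable
SET myColumn = 5
WHERE someColumn = 'condition'
  AND myColumn <> 5;  -- no-op on re-runs once the value is already set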
Do not. Seriously. I have always found DACPAC to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the db tracking which scripts have executed (possibly with a checksum, so you do not need to change a script's name to update it).
You can easily generate them partially by schema compare (and then generate the change script), but such scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
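As a rough sketch of that tracking table (names here are illustrative, not taken from any particular framework):

-- One row per deployment script that has executed against this database.
CREATE TABLE dbo.SchemaChangeLog (
    ScriptName nvarchar(260) NOT NULL PRIMARY KEY,
    Checksum   varbinary(32) NULL,  -- detect edited scripts without renaming them
    ExecutedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- The deployment runner executes a script only if it is not logged yet
-- (or its checksum changed), then records it; @scriptText is supplied by the runner.
INSERT INTO dbo.SchemaChangeLog (ScriptName, Checksum)
VALUES (N'0042_scrub_customer_emails.sql', HASHBYTES('SHA2_256', @scriptText));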
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
I'm seriously confused about how Flyway generally works to maintain the db as code. Suppose I have the following V0 script:
Create table student(
Name varchar(25)
)
That would be my initial db. Now, suppose I want to add a new column: why am I being forced to write a V1 script like this one?
Alter table student add column surname varchar(25)
What I'd like to do would be to simply update the V0 script like this:
Create table student(
Name varchar(25),
Surname varchar(25)
)
Then the tool, by comparing against the actual db, should be able to understand that a new column should be created!
This is how tools for other code (Java, JavaScript, ...) work, and it's how I would like db-as-code tools to work as well.
So my question is: is there a way to achieve this behavior without dropping/recreating the db?
I tagged this question with flyway and liquibase, but feel free to suggest other tools that would fit my needs.
Whatever way you develop the database, there is no way to achieve this behavior without dropping and recreating the db, because the CREATE TABLE statement assumes that the table you specify isn't already there. You can't use a CREATE OR ALTER statement either, because that isn't supported for tables even where your RDBMS supports the syntax for other object types.
In the early stages of a database project, you can work much more quickly with a build script that you use to create a database with tables, views and so on. You can then insert some data, try things out, maybe run a few tests, and then tear it all down. Flyway Community supports this: you just have a single migration script starting from an empty database that you repeatedly 'clean' and 'migrate' until you reach your first version. Flyway takes care of the tear-down process and gives you a fresh start by wiping your configured schemas completely clean.
Flyway Teams supports a special type of migration, the 'repeatable', which allows you to use SQL files that you can alter as migrations. However, you would need to add logic that drops the table if it already exists before your CREATE TABLE statement executes. This avoids having to run 'flyway clean', but it is a lot of extra work, and it also means that you lose the whole advantage of a version representing an exact state of a database.
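As a sketch of what such a repeatable migration would have to contain (SQL Server syntax, using the student table from the question):

-- Repeatable migration: Flyway re-runs this file whenever its checksum changes,
-- so it must drop the existing table (losing its data!) before recreating it.
IF OBJECT_ID(N'dbo.student', N'U') IS NOT NULL
    DROP TABLE dbo.student;

CREATE TABLE dbo.student (
    name varchar(25),
    surname varchar(25)
);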
At some point you are going to use migrations anyway, because you're likely to have copies of the database to keep up to date. Whatever tool you use to update a development or production database, you are going to need a migration because of the existing data in the tables.
Flyway Enterprise supports the automatic generation of a migration if you are using Oracle or SQL Server. SQL Compare is provided to compare two versions of a database and produce a migration script from one to the next. This allows you to use a build script as you suggest, compare it with the current version of the database, and generate a migration script to get from one to the other.
I have a SQL Server database working with an ASP.NET MVC 5 application (Visual Studio 2015). My database code is source-controlled in an SSDT project. I am using SqlPackage.exe to deploy the database to the staging environment using the .dacpac file created by the SSDT project build; this is done in a PowerShell task of a VSTS build.
This way I can make db schema changes in a source-controlled way. Now the problem concerns the master data insertion for the database.
I use a SQL script file containing the data insertion statements, which is executed as a post-deployment script. This file is also source-controlled.
The problem is that initially we prepared the insertion script to target a sprint (taking sprint n as a base), which works well for the first release. But if we update some master data in the next sprint, how should the master data insert be updated?
1. Add new update/insert queries at the end of the script file? In this case the post-deployment script will be executed by CI, and it will try to insert the data again and again in subsequent builds, which will eventually fail if we have made schema changes to the master tables of this database.
2. Update the existing insert queries in the data insertion script? In this case we also have trouble, because at the post-build event the whole data set will be re-inserted.
3. Maintain separate data insertion scripts for each sprint and update the script reference to the new file for the SSDT post-build event? This approach takes manual effort and is error-prone, because the developer has to remember the process. The other problem with this approach is that if we need to set up one more database server in the distributed server farm, the multiple data insertion scripts will throw errors: SSDT has the latest schema and will create a database with it, but the older data scripts insert data for a previous schema (the sprint-wise schema that was changed in later sprints).
So can anyone suggest the best approach, one with less manual effort that can cover all the above cases?
Thanks
Rupendra
Make sure your pre- and post-deployment scripts are always idempotent. However you want to implement that is up to you. The scripts should be able to be run any number of times and always produce correct results.
So if your schema changes in a way that affects the deployment scripts, then updating the scripts is a dependency of that change and accompanies it in source control.
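For instance (hypothetical table and column names), a guarded insert stays idempotent because it only fires the first time:

-- Safe to run on every deployment: inserts the row only if it is missing.
IF NOT EXISTS (SELECT 1 FROM dbo.Settings WHERE SettingName = N'FeatureX')
    INSERT INTO dbo.Settings (SettingName, SettingValue)
    VALUES (N'FeatureX', N'on');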
Versioning of your database is already a built-in feature of SSDT. In the project file itself, there is a node for the version, and there is a whole slew of versioning build tasks in VSTS you can use for free to version it as well. When SqlPackage.exe publishes your project with the database version set, a record is updated in msdb.dbo.sysdac_instances. That is much easier than trying to manage and update your own home-grown versioning solution, and you're not cluttering up your application's database with tables and other objects not related to the application itself.
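Assuming SQL Server, you can inspect what SqlPackage recorded with a simple query against that catalog view:

-- Lists the registered DAC version for each deployed database on the instance.
SELECT instance_name, database_name, type_version, date_created
FROM msdb.dbo.sysdac_instances;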
I agree with keeping sprint information out of the mix.
In our projects, I label source on successful builds with the build number, which of course creates a point in time marker in source that is linked to a specific build.
I would suggest using MERGE statements instead of inserts. This way you are protected against duplicate inserts within a sprint's scope.
The next thing is how to distinguish the inserts for different sprints. I would suggest implementing version numbering to sync the database with the sprints: create a table DbVersion(version int).
Then in the post-deployment script do something like this:
DECLARE @version int = (SELECT ISNULL(MAX(version), 0) FROM DbVersion);

IF @version < 1
BEGIN
    -- inserts/merges for sprint 1
END

IF @version < 2
BEGIN
    -- inserts/merges for sprint 2
END

-- ...

DECLARE @currentVersion int = 2;  -- the sprint this deployment brings the db up to
INSERT INTO DbVersion (version) VALUES (@currentVersion);
What I have done on most projects is to create MERGE scripts, one per table, that populate "master" or "static" data. There are tools such as https://github.com/readyroll/generate-sql-merge that can help generate these scripts.
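As an illustrative sketch of what one of those generated scripts looks like (the table and values here are made up), MERGE makes the table converge on exactly the rows listed in source control:

-- dbo.Colour.data.sql: after this runs, the table matches the VALUES list exactly.
MERGE INTO dbo.Colour AS target
USING (VALUES
    (1, N'Red'),
    (2, N'Green'),
    (3, N'Blue')
) AS source (ColourId, ColourName)
ON target.ColourId = source.ColourId
WHEN MATCHED AND target.ColourName <> source.ColourName THEN
    UPDATE SET ColourName = source.ColourName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ColourId, ColourName) VALUES (source.ColourId, source.ColourName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;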
These get called from a post-deployment script, rather than in a post-build action. I normally create a single post-deployment script for the project (you're only allowed one anyway!) and then include all the individual static data scripts using the :r syntax. A post-deploy script is just a .sql file with a build action of "PostDeploy"; it can be created manually or by using the "Add New Object" dialog in SSDT and selecting Script -> Post-Deployment Script.
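For example, the single post-deployment script can consist of nothing but includes (the paths here are hypothetical):

-- Script.PostDeployment.sql
:r .\StaticData\dbo.Colour.data.sql
:r .\StaticData\dbo.Country.data.sql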
These files (including the post-deploy script) can then be versioned along with the rest of your source files; if you make a change to the table definition that requires a change in the merge statement that populates the data, then these changes can be committed together.
When you build the dacpac, all the master data will be included, and since you are using merge rather than insert, you are guaranteed that at the end of the deployment the contents of the tables will match the contents of your source control, just as SSDT/sqlpackage guarantees that the structure of your tables matches the structure of their definitions in source control.
I'm not clear on how the notion of a "sprint" comes into this, unless a "sprint" means a "release"; in this case the dacpac that is built and released at the end of the sprint will contain all the changes, both structural and "master data" added during the sprint. I think it's probably wise to keep the notion of a "sprint" well away from your source control!
I am wondering if I can modify a generated script to copy data from one database to another. I used the "Generate Scripts" tool; can I then take the latest run each time and create a stored procedure to take the latest tables and insert their data into the new database?
Source tables have a date suffix, so the table called ABC_05162016 would on the next run be ABC_06162016. Is there a way to take the original generated script and then update that last part once, instead of selecting all the tables every time this needs to be done?
Thanks for the help. Yes, I have looked around for an answer to this, but everything I found deals with importing/exporting data or with the Generate Scripts feature itself.
If there is a better way than those 3 ways, I would appreciate knowing that as well.
Using SQL Server 2008.
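One way to avoid re-running Generate Scripts each time is to wrap the copy in dynamic SQL, so only the computed suffix changes (a sketch using the naming pattern from the question; the database names are hypothetical, and the date functions are SQL Server 2008-compatible):

-- Build the date-suffixed source table name once, then copy its rows.
DECLARE @suffix char(8) =
    REPLACE(CONVERT(char(10), GETDATE(), 101), '/', '');  -- e.g. '06162016'
DECLARE @sql nvarchar(max) =
    N'INSERT INTO TargetDb.dbo.ABC SELECT * FROM SourceDb.dbo.ABC_' + @suffix + N';';
EXEC sys.sp_executesql @sql;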
I would like to publish some data from a SQL Server 2000 database to MS Access databases.
I'd like to do that given a table supplying data transformation info, for example:
tablenameOnServer | pathToPub
------------------------------------------------------
Clients | D:\Data\Pub1\ClientData1.mdb
Orders | D:\OtherData\Pub\Sales.mdb
The given .mdb file should be created (or an empty one copied, of course) and the table should be created each time the script runs.
I of course don't need a full-blown solution, but some pointers on where to start are very welcome. I thought I'd use SSIS for this, but I am new to it and would like to know the best place to start in order to avoid too much loss of time:
Do I use SSIS with BIDS (VS2008)? Can I read data in a package and create tables on the fly?
Do I use C# and manipulate and create packages in code?
Or what is my best option? Is SSIS the obvious solution anyway?
In any case: some pointers to get me started would be very welcome...
UPDATE: This question is about publishing data so it can be shipped on CD, for example. It's not about linking to a SQL Server.
The simplest solution is to simply link the SQL Server table in Access. Then you can see the data in real time.
You can create an Access database to do this fairly easily.
Here is a basic algorithm:
1. Within Access, create a linked table pointing to your table.
2. Create an Access query, called NewQuery, that filters the data according to your criteria.
3. Use VBA to create a new database, NewDB.
4. Export your NewQuery, with structure and columns, to NewDB.
If you want to keep the SSIS packages, I would make a template SSIS package and then create the packages from C# using the SSIS object model. I've done this to produce starter packages which are later customized.
Alternatively, if you aren't going to keep the packages after they are generated and aren't going to use a lot of SSIS features (logging etc.), I would consider doing the whole thing from C# (or another language), because SSIS isn't a great tool when connection managers (source or destination) change.
From C#, you can simply read the schema, create the table in Access with matching types and columns and populate it.