My issue is that my new SQL objects are not being released to my on-prem database. If I alter existing objects, it works fine: I see the update pass through the release and show up in the on-prem SQL database. But when I create a new table or procedure, it doesn't release. It builds, and the build is linked to the release just fine. When I run the manual release I get no errors, but in the logs I do not see the new proc or table in the "deploy using: dacpac" log.
I do see my commits in the build and in the master branch.
Is there a setting that I'm missing for new objects to be released to the on-prem database?
The account that I'm using has full admin rights to this database.
Please let me know if you need any screenshots or samples.
I have a problem with Amazon RDS SQL Server (2014) when trying to use it with Visual Studio SQL Server Data Tools, and there's not much in the way of help on the AWS support site.
I have spun up an RDS instance and accessed it with SQL Server Management Studio with no problems. I create a database, then run a schema compare from SSDT and hit Update.
The first thing the update process does is amend the db_owner authorisation, which completely locks the master user out of the database on RDS. The change is identified when you hit Compare in SSDT, but there is no way of turning it off that I can see.
Can anyone tell me a way around the problem?
If you want it to stop deploying permissions, I wrote this for an environment where I wasn't dbo and the same thing kept happening:
http://agilesqlclub.codeplex.com/
IgnoreSecurity will stop you hurting yourself.
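For reference, this is roughly how a deployment contributor like that gets wired into a dacpac publish. The contributor name and the IgnoreSecurity filter argument below are assumptions based on the project's documentation, so verify them against the download before relying on this; it is a sketch, not a definitive command:

    REM Hypothetical sketch: publish a dacpac with an additional deployment
    REM contributor that filters out security/permission changes. The contributor
    REM name and filter argument are assumptions; check the project's docs.
    SqlPackage.exe /Action:Publish ^
        /SourceFile:"MyDb.dacpac" ^
        /TargetConnectionString:"Server=myserver;Database=MyDb;Integrated Security=true" ^
        /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor ^
        /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSecurity"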
Ed
On the schema compare project page, click on the settings icon and go to the Object Types tab. Within the Application-scoped list there is an option for Role Memberships.
Uncheck this and re-run the compare - the line item forcing the change of authorisation disappears. This keeps rdsa as the db_owner and everything syncs correctly.
We are having issues with the TFS cube. I don't think it has been built since TFS was installed. The warehouse seems to be working and has new data; it just seems to be the cube that doesn't work.
We tried rebuilding it using the TFS Administration Console, but that made things worse: the data that was in there was erased and replaced by what looks like a blank database.
I tried deleting the database so that I could check whether the cube was actually being built, but now when I run the rebuild it says it's looking for an existing database, so it won't even try.
Now that I have deleted it, how can I rebuild the cube from scratch?
Even if I could retrieve the database, it was empty, and I wanted to try building it from scratch anyway to see if that fixes the issue.
The Tfs_Warehouse database extracts data from the Tfs_CollectionA, Tfs_CollectionB, etc. databases, and Tfs_Analysis gets its data from the Tfs_Warehouse database. So there is no need to worry when Tfs_Warehouse and Tfs_Analysis are deleted; it is fine to create new ones.
Here are the steps to get it working:
Delete Tfs_Warehouse and Tfs_Analysis from the SQL instance in SQL Server Management Studio.
Open the TFS Administration Console, go to the Reporting node, and click Edit.
On the Reporting dialog, select the Use Reporting checkbox, and fill in the Tfs_Warehouse, Tfs_Analysis and Reporting information separately on the Warehouse, Analysis Services and Reports tabs.
Please check this blog for the detailed steps: http://social.technet.microsoft.com/wiki/contents/articles/20113.rebuild-tfs-warehouse-and-analysis-databases-from-scratch.aspx
In addition, by default the Tfs_Warehouse and Tfs_Analysis databases refresh every 2 hours.
You can manually refresh the cube to get the latest data: https://msdn.microsoft.com/en-us/library/ff400237.aspx
I have a web application that I usually deploy using Web Deploy directly from Visual Studio (from whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but from a different branch. To make things simpler, I will be configuring both web apps on Azure to integrate directly with GitHub and associating each with a specific branch.
I also added two additional web.config transform files, Web.Primary.config and Web.Secondary.config, and configured the app settings in the Azure portal of each web app by adding an additional value, SCM_BUILD_ARGS, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which, as I understand it, will transform the correct config file with each app's specific external service configuration (DB connection, mail server, etc.).
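For illustration, a transform file like Web.Primary.config would typically override just the environment-specific settings using XDT syntax; the connection string name and values below are hypothetical:

    <?xml version="1.0"?>
    <!-- Hypothetical Web.Primary.config: override settings for the primary app. -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="DefaultConnection"
             connectionString="Server=primary-db.example.com;Database=PrimaryDb;User Id=app;Password=placeholder"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>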
Now, the additional step that I would like to include in continuous deployment is running a set of SQL scripts from my repo that I used to run manually to upgrade the database during Web Deploy from VS. The individual scripts each perform a specific database upgrade step:
back up current tables - this creates a set of Backup_OriginalTableName tables that are copied from the existing ones and populated with the existing data (see the sketch after this list)
drop the whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to the new tables from the backup tables - this step may occasionally break if we introduce new non-nullable columns that don't have defaults defined on them in the new model; I will somehow have to mitigate this by adding an additional script that adds the missing columns to the backup tables and gives them defaults, but that's a completely different issue.
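For context, a minimal sketch of the first (backup) step for a single table, using the Backup_OriginalTableName convention above; the table name is hypothetical:

    -- Hypothetical backup step for one table, following the
    -- Backup_OriginalTableName convention described above.
    IF OBJECT_ID('dbo.Backup_Customer', 'U') IS NOT NULL
        DROP TABLE dbo.Backup_Customer;

    -- Copy the structure and data of the existing table into the backup table.
    SELECT *
    INTO dbo.Backup_Customer
    FROM dbo.Customer;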
I also used to have a set of batch files (BAT) in my VS solution that simply ran sqlcmd against a specific database instance and executed these scripts in a predefined order (as above; a minimal sketch follows the list below). Hence I had these batches:
Recreate Local.bat - this one used additional SQL scripts so that, instead of restoring from backup, it recreates an empty DB with only the lookup tables populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore the database from the backup tables, discarding any invalid data I may have created while debugging/testing since the last DB recreate/upgrade/restore
Upgrade Local.bat - upgrades the local development DB by executing the scripts mentioned above
Upgrade Production.bat - upgrades the production DB on Azure by executing the scripts mentioned above
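A rough sketch of what one of these batch files might look like, assuming sqlcmd is on the PATH; the server, database and script file names are hypothetical, and -E (integrated security) would be swapped for -U/-P where needed:

    @echo off
    REM Hypothetical Upgrade Local.bat: run the upgrade scripts in the order above.
    set SERVER=localhost
    set DB=MyAppDb

    sqlcmd -S %SERVER% -d %DB% -E -b -i "01 Backup tables.sql"      || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "02 Drop model.sql"         || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "03 Create model.sql"       || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "04 Create types.sql"       || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "05 Create functions.sql"   || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "06 Create procedures.sql"  || goto :error
    sqlcmd -S %SERVER% -d %DB% -E -b -i "07 Restore data.sql"       || goto :error
    echo Upgrade completed.
    goto :eof

    :error
    echo Upgrade failed with error %errorlevel%.
    exit /b %errorlevel%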
So, to support the whole deployment process I have been doing manually in VS, I would now also like to execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after the code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I've been looking for resources on the web, but I can't seem to find any relevant information on how to add extra deployment steps that execute these scripts. I would think this is an everyday scenario, as it's hard to imagine web apps that don't require databases these days.
Maybe it's just my DB upgrade/deployment process that is wrong, so let me also know if there is another, more standard way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate it.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco, and all my DB logic is built in SPs that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it that way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.).
It sounds to me like your DB project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your DB as a dacpac, and then you can use SqlPackage.exe with the Publish action to deploy it to Azure.
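A minimal sketch of such a publish command, assuming a hypothetical MyDB.dacpac and target database (swap in your own server, database and credentials):

    REM Hypothetical example: publish the built dacpac to an Azure SQL database.
    SqlPackage.exe /Action:Publish ^
        /SourceFile:"bin\Release\MyDB.dacpac" ^
        /TargetServerName:"myserver.database.windows.net" ^
        /TargetDatabaseName:"MyDB" ^
        /TargetUser:"deployuser" /TargetPassword:"placeholder" ^
        /p:BlockOnPossibleDataLoss=True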
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (though not to an Azure SQL DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our projects this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?
I am assigned the task of setting up continuous deployment from a development server to a production server.
On my development server, all the database objects are created under the dbo schema. But on the production server there will be different schemas, based on each tenant's company list.
For example, on my development server tables are created like
dbo.ABC
dbo.XYZ
And when I create a tenant (Omkar --- db) (Sarkur, Mathur --- schemas), the database objects will be like
Sarkur.ABC, Sarkur.XYZ
Mathur.ABC, Mathur.XYZ
Now I have to compare these two databases to check whether there are any changes to the structure of the database objects, or any addition/deletion of database objects. If so, those changes have to be synchronized to the production database.
If anyone knows how to compare the objects in these two different schemas, please let me know.
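As a starting point, here is a rough sketch of how missing objects in a tenant schema could be detected with plain T-SQL. The DevDb/ProdDb database names are hypothetical and assume both databases are reachable from the same connection; a dedicated schema compare tool or SSDT would handle this more thoroughly:

    -- Hypothetical sketch: list objects that exist under dbo in the development
    -- database but are missing from the Sarkur schema in the production database.
    SELECT d.name, d.type_desc
    FROM DevDb.sys.objects AS d
    JOIN DevDb.sys.schemas AS ds
        ON ds.schema_id = d.schema_id AND ds.name = 'dbo'
    WHERE d.is_ms_shipped = 0
      AND NOT EXISTS (
            SELECT 1
            FROM ProdDb.sys.objects AS p
            JOIN ProdDb.sys.schemas AS ps
                ON ps.schema_id = p.schema_id AND ps.name = 'Sarkur'
            WHERE p.name = d.name
              AND p.type = d.type
          );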
One option that I know is suitable:
Flyway:
It is easy to set up and simple to master. Flyway lets you regain control of your database migrations with pleasure and plain SQL.
It solves only one problem and solves it well: Flyway migrates your database, so you don't have to worry about it anymore.
It is made for continuous delivery. Let Flyway migrate your database on application startup. Releases have never been this easy.
Big plus: it's an open-source framework!
http://flywaydb.org/
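To give a flavour of the workflow: migrations are plain SQL files named by Flyway's versioning convention, which it applies in order and records in its metadata table. The file names and table below are hypothetical:

    -- V1__Create_customer_table.sql (hypothetical first migration)
    CREATE TABLE customer (
        id   INT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    );

    -- V2__Add_email_to_customer.sql (hypothetical follow-up migration)
    ALTER TABLE customer ADD email VARCHAR(255) NULL;

They are then applied with the flyway migrate command, or automatically at application startup if you use the API or a framework integration.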
I have continuous delivery from TFS to Azure running for the C# project, and this is fine.
I now want continuous delivery to work with my SQL database as well.
Currently I have a SQL Server 2008 R2 instance which holds the database.
What is the best option to ensure that continuous delivery from TFS also includes the database changes?
The important factor is that it needs to be automated upon check-in to TFS.
I've just run into this problem and found guidance hard to come by, but I have managed it in my application. I'm very new to MVC and Azure, so excuse me if this isn't too detailed.
Here's what I did:
In my Global.asax.cs Application_Start() I added:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
where ApplicationDbContext is the name of your context. Remember to add references to your Models and Migrations classes.
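For context, the Configuration type referenced above is the migrations configuration class that Code First Migrations scaffolds; a minimal sketch of what it typically looks like (the seed contents are placeholders):

    // Minimal sketch of Migrations\Configuration.cs as scaffolded by
    // Enable-Migrations; this is the Configuration type used in the initializer above.
    using System.Data.Entity.Migrations;

    internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
    {
        public Configuration()
        {
            // Only explicitly added migrations are applied.
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(ApplicationDbContext context)
        {
            // Optional seed data, applied after migrating to the latest version.
        }
    }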
When you do this, I think migrations are run on your database when the context is first used, and it does not drop and recreate your database.
The one problem I had was that I was getting an error saying that objects already existed in the database. This was because none of the migration data was in the __MigrationHistory table, so it tried to create the database from scratch. Luckily my application wasn't live yet, so I could delete the tables and commit to TFS, and it recreated everything, including all of the __MigrationHistory data. Now any new code-first migrations are run on the first hit and the database is updated. I wouldn't know what to do if this database were live!
Hope that helps