I have a database project in my VS2010 solution. I recently changed a view and changed a number of functions to use this view instead of going directly against a table.
But now when I deploy, I get errors on most of these functions because the column they ask for does not exist in the view yet.
The update of the view happens later than the update of the UDFs. Is there any way to change this behaviour?
Wouldn't the best thing be if the deploy script updated in this order: tables, views, SPs and UDFs? It seems like tables are updated first, but the views are just thrown in somewhere in the middle of the deploy script.
Since UDFs may be used in views, and views may be used in UDFs, it would have to analyse all of them to determine a coherent deployment order - it would have to parse all of the SQL. And who knows what it's supposed to do if you have dependencies on other databases.
Edit
There's no documented/supported way to force a deployment order, so far as I can see. With some minimal testing, it appears to me that UDFs (at least table-valued ones) are always deployed before views.
Edit 2
Even stranger, it turns out it does do the dependency analysis. Looking at the .dbschema output file for a db project, I can see it produces a <Relationship Name="BodyDependencies"> element for the function, which does list the view/columns it depends on. But the deployment SQL script still puts the view later on. Curious.
Edit 3
Probably final edit. I think, in general, that the problem is unsolvable. (Admittedly, the following only errors because I've specified schemabinding). If the old function definition relies on the old view definition, and the new function definition relies on the new view definition, there's no right way to alter the database using ALTERs:
create table dbo.T1 (
    ID int not null,
    C1 varchar(10) not null,
    C2 varchar(10) not null,
    C3 varchar(10) not null
)
go

create view dbo.V1
with schemabinding
as
    select ID, C1, C2
    from dbo.T1
go

create function dbo.F1()
returns table
with schemabinding
as
    return select ID, C1, C2 from dbo.V1 where ID = 1
go

alter view dbo.V1
with schemabinding
as
    select ID, C1, C3
    from dbo.T1
go

alter function dbo.F1()
returns table
with schemabinding
as
    return select ID, C1, C3 from dbo.V1 where ID = 1
go
result:
Msg 3729, Level 16, State 3, Procedure V1, Line 4
Cannot ALTER 'dbo.V1' because it is being referenced by object 'F1'.
Msg 207, Level 16, State 1, Procedure F1, Line 5
Invalid column name 'C3'.
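The only way out, I believe, is to stop relying on ALTER at the point of conflict: drop the schemabound function first, alter the view, then recreate the function. A sketch of that order, using the objects above:

drop function dbo.F1 -- dropping F1 first releases its schema binding on V1
go
alter view dbo.V1
with schemabinding
as
    select ID, C1, C3
    from dbo.T1
go
create function dbo.F1()
returns table
with schemabinding
as
    return select ID, C1, C3 from dbo.V1 where ID = 1
go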
Related
I have a table that is the result of an INNER JOIN, and to that table I have to apply a bunch of queries, so in the end the whole SQL script is really big, and in the future (and that's the main problem) I will have trouble understanding what I did.
So for this reason I am considering the possibility of creating views, so at each step I have a view created and I know what each one of the queries does.
So with that goal I started to use IF ELSE statements to create views if they don't exist, etc., but I'm stumbling over a lot of errors and problems.
First of all, this is the way I'm creating a view with the IF statement:
-- This portion of code is common in all the views
IF NOT EXISTS
(
    SELECT 1
    FROM sys.views
    WHERE Name = 'NAME_OF_THE_VIEW'
)
BEGIN
    EXEC('CREATE VIEW NAME_OF_THE_VIEW AS SELECT 1 as Val')
END
GO

ALTER VIEW NAME_OF_THE_VIEW
AS
-- Here I put the query of the view
SELECT *
FROM table_1
When I execute this code it works, but SQL Server Management Studio underlines "NAME_OF_THE_VIEW" in the ALTER VIEW statement, and when I hover the mouse over it, it says: Invalid object name 'NAME_OF_THE_VIEW'. I don't understand why it still works if there's supposedly an error.
The other problem is that when I add more code like the above to create other views in the same script, the whole ALTER VIEW statement is underlined, and when I hover, this message appears: Incorrect syntax: 'ALTER VIEW' must be the only statement in the batch.
So the question is: how can I put everything in the same script, where I can create views to avoid doing a lot of subqueries, without getting these errors? The SQL Server version is 15.
So the question is: how can I put everything in the same script, where I can create views to avoid doing a lot of subqueries, without getting these errors?
There's no need to check for the existence of the view. Just use CREATE OR ALTER VIEW:
CREATE OR ALTER VIEW NAME_OF_THE_VIEW
AS
-- Here I put the query of the view
SELECT *
FROM table_1
GO
CREATE OR ALTER VIEW NAME_OF_THE_OTHER_VIEW
AS
-- Here I put the query of the view
SELECT *
FROM table_1
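Note that CREATE OR ALTER is available from SQL Server 2016 SP1 onwards, so version 15 (SQL Server 2019) supports it. Each view still has to be the only statement in its batch, which is what the GO separator takes care of.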
I'm creating some views with a lot of references to tables in another database.
At some point the other database needs to change.
I want to make it easy for the next developer to change the scripts to use another database.
This obviously works like it should:
CREATE VIEW ViewName
AS
SELECT *
FROM AnotherDatabase.SchemaName.TableName;
But when I do:
DECLARE @DB CHAR(100)
SET @DB = 'AnotherDatabase'
GO
CREATE VIEW ViewName
AS
SELECT *
FROM @DB.SchemaName.TableName;
I get the error:
Msg 137, Level 15, State 2, Procedure ViewName, Line 3
Must declare the scalar variable "@DB".
I could do something like:
DECLARE @SQL ...
SET @SQL = ' ... FROM ' + @DB + ' ... '
EXEC (@SQL)
But that goes against the purpose of making it easier for the next developer, because the dynamic SQL approach loses the formatting in SSMS.
So my question is: how do I make it easy for the next developer to maintain T-SQL code where he needs to swap out the database reference?
Notes:
I'm using SQL Server 2008 R2
The other database is on the same server.
Consider using SQLCMD variables. This will allow you to specify the actual database name at deployment time. SQL Server tools (SSMS, SQLCMD, SSDT) will replace the SQLCMD variable names with the assigned string values when the script is run. SQLCMD mode can be turned on for the current query window from the menu option Query --> SQLCMD Mode.
:SETVAR OtherDatabaseName "AnotherDatabaseName"
CREATE VIEW ViewName AS
SELECT *
FROM $(OtherDatabaseName).SchemaName.TableName;
GO
This approach works best when SQL objects are kept under source control.
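The value can also be supplied from outside the script at deployment time; for example, by passing it with the sqlcmd utility's -v switch instead of the :SETVAR line (the file name here is just illustrative):

sqlcmd -S ServerName -i CreateViews.sql -v OtherDatabaseName="AnotherDatabaseName"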
When you declare variables, they only live for the duration of the batch. You cannot have a variable as part of your DDL. You could create a bunch of synonyms, but I consider that overdoing it a bit.
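For completeness, a sketch of what that synonym approach could look like (the names here are illustrative; note there is no ALTER SYNONYM, so repointing means dropping and recreating it):

-- the view references a synonym instead of a hard-coded database name
create synonym dbo.RemoteTable
    for AnotherDatabase.SchemaName.TableName;
go
create view ViewName
as
select *
from dbo.RemoteTable;
go
-- when the target database changes, only the synonym is recreated
drop synonym dbo.RemoteTable;
go
create synonym dbo.RemoteTable
    for YetAnotherDatabase.SchemaName.TableName;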
The idea that your database names are going to change over time seems a bit out of the ordinary, and conceivably a one-time event. However, if you do still require the ability to quickly change over to point to a new database, you could consider creating a light utility directly in SQL to automatically generate the views to point to the new database.
An implementation may look something like this.
Assumptions
Assuming we have the below databases.
Assuming that you prefer to have the utility in SQL instead of building an application to manage it.
Code:
create database This;
create database That;
go
Configuration
Here I'm setting up some configuration tables. They will do two simple things:
Allow you to indicate the target database name for a particular configuration.
Allow you to define the DDL of the view. The idea is similar to Dan Guzman's idea, where the DDL is dynamically resolved using variables. However, this approach does not use the native SQLCMD mode and instead relies on dynamic SQL.
Here are the configuration tables.
use This;
create table dbo.SomeToolConfig (
    ConfigId int identity(1, 1) primary key clustered,
    TargetDatabaseName varchar(128) not null);

create table dbo.SomeToolConfigView (
    ConfigId int not null
        references SomeToolConfig(ConfigId),
    ViewName varchar(128) not null,
    Sql varchar(max) not null,
    unique(ConfigId, ViewName));
Setting the Configuration
Next you set the configuration. In this case I'm setting the TargetDatabaseName to be That. The SQL that is being inserted into SomeToolConfigView is the DDL for the view. I'm using two variables, {{ViewName}} and {{TargetDatabaseName}}, which are replaced with the configuration values.
insert SomeToolConfig (TargetDatabaseName)
values ('That');
insert SomeToolConfigView (ConfigId, ViewName, Sql)
values
(scope_identity(), 'dbo.my_objects', '
create view {{ViewName}}
as
select *
from {{TargetDatabaseName}}.sys.objects;'),
(scope_identity(), 'dbo.my_columns', '
create view {{ViewName}}
as
select *
from {{TargetDatabaseName}}.sys.columns;');
go
The tool
The tool is a stored procedure that takes a configuration identifier. Then, based on that identifier, it drops and recreates the views in the configuration.
The signature for the stored procedure may look something like this:
exec SomeTool @ConfigId;
Sorry -- I left out the implementation because I have to scoot, but figured I would respond sooner rather than later.
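For what it's worth, a minimal sketch of how that procedure might look, resolving the {{ViewName}} and {{TargetDatabaseName}} placeholders with REPLACE and executing the result as dynamic SQL (illustrative only; error handling omitted):

create procedure dbo.SomeTool
    @ConfigId int
as
begin
    set nocount on;

    declare @TargetDatabaseName varchar(128),
            @ViewName varchar(128),
            @Sql varchar(max);

    select @TargetDatabaseName = TargetDatabaseName
    from dbo.SomeToolConfig
    where ConfigId = @ConfigId;

    declare view_cursor cursor local fast_forward for
        select ViewName, Sql
        from dbo.SomeToolConfigView
        where ConfigId = @ConfigId;

    open view_cursor;
    fetch next from view_cursor into @ViewName, @Sql;

    while @@fetch_status = 0
    begin
        -- drop the old view, then recreate it from the template
        if object_id(@ViewName, 'V') is not null
            exec ('drop view ' + @ViewName + ';');

        set @Sql = replace(@Sql, '{{ViewName}}', @ViewName);
        set @Sql = replace(@Sql, '{{TargetDatabaseName}}', @TargetDatabaseName);
        exec (@Sql);

        fetch next from view_cursor into @ViewName, @Sql;
    end

    close view_cursor;
    deallocate view_cursor;
end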
Hope this helps.
I'm trying to create and execute a procedure that creates some tables. It won't recognize my database.
USE [db1]
go

create procedure version_1 as
    update db1
    set ver=1
    where ver=0;

    create table Staff_Titles(
        title nvarchar(100) not null,
        title_description nvarchar(200) null,
        [..]
go
It compiles even though the db1 in update db1 is underlined, and so are ver=1 and ver=0. After I try to execute it, it says
invalid object name
at USE [DB1] again, even though it's inside the stored procedure...
I tried refreshing the database, I tried looking for Edit -> IntelliSense but I can't find it, and I tried Ctrl+Shift+R; nothing worked.
IntelliSense is telling you that it can't find a table called db1 in the db1 database. Make sure the table exists, or if it's in a different schema, make sure to include the schema, like this:
update db1.schemaname.db1
set ver=1
where ver=0;
If you are trying to store version data, you have to add a table and a field to store this information. You cannot update fields on a database, as there are no fields directly at the database level. You can create a table called "Versions" with a field called "Ver".
CREATE TABLE Versions
(
    [Ver] [int] NOT NULL,
    CONSTRAINT [PK_Versions] PRIMARY KEY CLUSTERED (Ver ASC)
)
Then when you run the procedure it could insert a record to indicate that the tables have been updated with that version. Something like this:
insert into Versions
values (1)
Then if you ever need to query for the latest version you could use:
select max(Ver) Ver from Versions
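Putting it together, the procedure from the question could then look something like this (just a sketch; the Staff_Titles columns are abbreviated from the question):

create procedure version_1 as
begin
    -- record that this version's schema changes have been applied
    insert into Versions (Ver) values (1);

    create table Staff_Titles(
        title nvarchar(100) not null,
        title_description nvarchar(200) null);
end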
You might want to try clearing the IntelliSense cache if you "just" created that database. Use the keyboard shortcut Ctrl+Shift+R.
The following T-SQL query is puzzling me:
select 1 as FIELD into #TEMP
drop table #TEMP
select 1 as FIELD into #TEMP
When I run it from a SQL Server Management Studio query window (pressing F5 to execute the whole script as a group), I get the following error:
Msg 2714, Level 16, State 1, Line 3
There is already an object named '#TEMP' in the database.
Note that table #TEMP doesn't exist before the query is executed.
I thought that the code shouldn't produce any errors as line 2 is dropping the temporary table. But it is as if the drop isn't taking effect when line 3 is executed.
My questions:
Why does the error happen?
How do I fix the query so it executes as intended?
PS. The query above is a simplification of a real world query of mine that is showing the same symptoms.
PS2. Regardless of whether this is a sound programming practice or not (as Sean hinted in his comments), this unexpected behavior prompted me to look for information on how these queries are parsed in the hopes that the knowledge will be helpful to me in the future.
As I found, the checks for an existing table behave differently. Given these commands:
select 1 as FIELD into #TEMP
drop table #TEMP
When you use an INTO statement after those commands:
select 1 as FIELD into #TEMP
Error is:
There is already an object named '#TEMP' in the database.
And when you use a SELECT on #TEMP after those commands:
select * from #TEMP
Error is:
Invalid object name '#TEMP'.
So in the first case THERE IS an object named #TEMP, and in the other case THERE IS NOT an object named #TEMP!
An important note from technet.microsoft.com is:
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected error may occur.
In notes of dropping tables by SQL Server Database Engine:
The SQL Server Database Engine defers the actual page deallocations, and their associated locks, until after a transaction commits.
So the second error, on the SELECT statement, may be related to the deferred page deallocations, and the first error, on the INTO statement, may be related to the locks held until the transaction commits.
Here, try this (GO splits the script into separate batches, so the second SELECT ... INTO isn't compiled until the batch that drops #TEMP has already run):
select 1 as FIELD into #TEMP
drop table #TEMP
GO
select 1 as FIELD into #TEMP
In SQL Server Data Tools you have the deployment option "Block incremental deployment if data loss might occur", which I'd wager is a best practice to keep checked.
Let's say we have a table foo, and a column bar which is now redundant - it has no dependencies, foreign keys, etc., and we have already removed references to this column in our data layer and stored procedures, as it's simply not used. In other words, we are satisfied that dropping this column will have no adverse effects.
There are a couple of flies in the ointment:
The column has data in it
The database is published to hundreds of distributed clients, and it could take months for the change to ripple out to all clients
As the column is populated, publishing will fail unless we change the "Block incremental deployment if data loss might occur" option. However, this option is at the database level, not the table level, and so due to the distributed nature of the clients, we'd have to turn off the "data loss" option for months before all databases were updated, and turn it back on once all clients had updated (our databases have version numbers set by our build).
You may think we could solve this with a pre-deployment script such as
if exists (select * from information_schema.columns where table_name = 'foo' and column_name = 'bar')
BEGIN
    alter table foo drop constraint DF_foo_bar
    alter table foo drop column bar
END
But again this fails unless we turn the "data loss could occur" option off.
I'm simply interested as to what others have done in this scenario as I'd like to have granularity which doesn't currently seem possible.
So I've been accomplishing this task via the following steps:
1) Since we are going to make a table #Foo, make sure to drop it if it already exists before moving forward.
2) In a pre-deployment script: If the column exists, create a temporary table #Foo and select all rows from Foo into #Foo.
3) Remove the column from #Foo
4) Delete all rows in Foo (now there will be no data loss since no data exists)
5) In a post-deployment script: If #Foo exists, select all rows from #Foo into Foo
6) Drop table #Foo
And code:
pre-deployment script
if(Object_ID('TempDB..#Foo') is not null)
begin
    drop table #Foo
end

if exists (
    select *
    from sys.columns
    where Name = 'Bar'
    and Object_ID = Object_ID('Foo')
)
begin
    select * into #Foo
    from Foo

    alter table #Foo drop column Bar

    -- Now that we've made a complete backup of Foo, we can delete all its data
    delete Foo
end
post-deployment script
if(Object_ID('TempDB..#Foo') is not null)
begin
    insert into Foo
    select * from #Foo

    drop table #Foo
end
Caveat: depending on your environment, it might be wiser to base your conditionals on versions rather than on column and temp-table existence.
The PreDeployment script doesn't work the way you are hoping to use it because of the order of operations for SSDT:
1) Schema comparison
2) Script generation for the schema differences
3) Execute PreDeployment
4) Execute generated script
5) Execute PostDeployment
So of course, the schema difference is identified as part of step 2, and the appropriate SQL is generated to drop the column (including the check to block on data loss) before your manual pre-deployment script can 'get rid of it'.
If you take a look at the script generated behind the scenes to detect (and therefore block) on possible data loss, it checks to see if there are any rows by running something along the lines of this:
IF EXISTS (select top 1 1 from [dbo].[Table]) RAISERROR ('Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
This means the simple existence of rows will stop the column being dropped. We haven't found any way around this other than manually dealing with the problem outside (and before) the SSDT deployment, using conditional deployment steps based on version numbers.
You mention distributed clients, which implies you have some sort of automated publication/update mechanism. You also mention version numbers as part of the database - could you include in your deployment (before the sqlpackage.exe command I assume you are running) a manual SQL script? This is akin to what we do (ours is in PowerShell, but you get the gist):
IF VersionNumber < 2.8
BEGIN
    ALTER TABLE X DROP COLUMN Y
END
Disclaimer: in no way is that valid SQL; it's simply pseudo-code to convey an idea!
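A valid T-SQL shape of that idea might look something like the following sketch (assuming a dbo.Versions table that stores the build's version number as a decimal; the names and the version value are illustrative):

declare @Version decimal(4,1);
select @Version = max(Ver) from dbo.Versions; -- however your build records its version

if @Version < 2.8
begin
    if exists (select * from information_schema.columns
               where table_name = 'foo' and column_name = 'bar')
    begin
        alter table foo drop constraint DF_foo_bar; -- default constraint first, as above
        alter table foo drop column bar;
    end
end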