I am working with Visual Studio 2017 Database Project (Dacpac) and I have some SQLCMD variables in Publish file (in xml file) like below-
<SqlCmdVariable Include="ClientDBName">
<Value>Client_1</Value>
</SqlCmdVariable>
My problem is that we have multiple clients and we deploy the database changes via dacpac for multiple clients in one go. So if I assign a static value to my SQLCMD variable "ClientDBName" as in the example above, it will take the same value (the same db name, "Client_1") for all the clients.
To fix that I am using a PreDeployment script, in which I am trying to assign a dynamic value (the db name) to the SQLCMD variable "ClientDBName", like below:
DECLARE @dbName varchar(50)
SET @dbName = 'xyz'
:setvar ClientDBName @dbName
But this is not working. I explored it and found that this approach would not work. Another way I am trying is to assign the dbname value by calling a script, like below:
:setvar ClientDBName "C:\GetDatabaseName.sql"
But this is also not working.
So can anyone help me out here: how can we assign dynamic values to a SQLCMD variable?
The sqlpackage example command below specifies SQLCMD values with the /Variables: argument. These values are used instead of those in the publish profile.
SqlPackage.exe /Action:Publish /SourceFile:"YourDatabase.dacpac" /TargetDatabaseName:YourDatabaseName /TargetServerName:"." /Variables:"ClientDBName=YourValue"
If your actual need is the published database name, you could use the built-in DatabaseName SQLCMD variable instead of a user-defined SQLCMD variable, which will be the /TargetDatabaseName value.
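For example, a pre- or post-deployment script can reference the built-in variable directly. A minimal sketch (the `DeploymentLog` table is hypothetical, purely for illustration):

```sql
-- $(DatabaseName) is resolved by SqlPackage at publish time to the
-- /TargetDatabaseName value; dbo.DeploymentLog is a hypothetical table.
PRINT 'Deploying to: $(DatabaseName)';

INSERT INTO [dbo].[DeploymentLog] ([DbName], [DeployedAt])
VALUES (N'$(DatabaseName)', SYSUTCDATETIME());
```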
I need to implement a Distributed transaction for a third party product. I have two SQL Servers and two SQLCLR triggers. I want to access the local temp table value from the second trigger context, which is on another instance. Is it possible?
//Server 1
[Microsoft.SqlServer.Server.SqlTrigger(Name="SqlTrigger1", Target="Table1", Event="FOR INSERT")]
public static void SqlTrigger1()
{
    using (SqlConnection conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        // Create #Temp table
        // Insert some data
        // Fire trigger on Server 2 via DB link
    }
}
//Server 2
[Microsoft.SqlServer.Server.SqlTrigger(Name="SqlTrigger2", Target="Table1", Event="FOR INSERT")]
public static void SqlTrigger2()
{
    using (SqlConnection conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        // Read #Temp table ???
    }
}
The immediate answer has nothing to do with SQLCLR. It is not even conceptually possible to access a local temporary table (or stored procedure) across instances because like any other object, they are local to the instance that they are created on. And when using a Linked Server, there is no way to access the calling session, so a reference back to the local temporary table on Server 1 will never be accessible by code running on Server 2.
Also, while it is at least possible to access a global temporary table between instances (because those are visible to all sessions), that would still require an additional Linked Server to be created on Server 2 that points back to Server 1 because that is where the global temporary table would exist. That's a bit messy, and offers no advantages over creating a real table (unless you create the global temporary table to include a newly created GUID value as part of its name, but then you still need to transfer that value over to Server 2 in order to build the correct reference back to Server 1, which will need to happen in Dynamic SQL).
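To illustrate that last point, here is a rough sketch of the GUID-named global temporary table idea. The table and linked-server names are hypothetical, and dynamic SQL is required because the table name is not known until runtime:

```sql
-- On Server 1: create a uniquely named global temp table to stage the data.
DECLARE @Guid UNIQUEIDENTIFIER = NEWID(),
        @TableName SYSNAME,
        @sql NVARCHAR(MAX);
SET @TableName = N'##Transfer_' + CONVERT(NVARCHAR(36), @Guid);

SET @sql = N'CREATE TABLE [' + @TableName + N'] (AccountName NVARCHAR(100));';
EXEC (@sql);

-- ...populate the table, then pass @TableName over to Server 2, which would
-- read it back through a Linked Server pointing at Server 1, e.g.:
--   SELECT AccountName FROM [Server1Link].[tempdb]..[##Transfer_<guid>]
-- (also built via Dynamic SQL, since the name is only known at runtime)
```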
Clarification from the O.P.:
When a user calls the query insert into dbo.Account (Name) values('something'), I intercept this with the CLR trigger and execute the same query on Server 2: insert into Server2.dbo.Account (Name) values('something'). And I need a shared context in this transaction, for example a GUID variable.
There is no such thing as a "shared context" between instances. Whatever data and/or values are needed in both places need to be passed to the remote instance. In this case, you can package up the data as XML in an NVARCHAR(MAX) variable, execute a stored procedure on Server 2 passing in that NVARCHAR(MAX) value, convert it back to XML in the stored procedure, and unpack it using .nodes(). You can additionally pass individual scalar values as other parameters to the remote stored procedure. For example:
DECLARE @DataToTransfer NVARCHAR(MAX),
        @SomeGuid UNIQUEIDENTIFIER;

SET @DataToTransfer = (
    SELECT *
    FROM inserted
    FOR XML RAW('row')
);

EXEC [LinkedServerName].[DatabaseName].[SchemaName].[StoredProcedureName]
    @Param1 = @DataToTransfer,
    @Param2 = @SomeGuid;
The approach shown above works quite well. I have used it to transfer millions of rows per day from 18 production servers to a single archive server. Calling a remote stored procedure has fewer locking issues than attempting the straight DML / INSERT statement over the Linked Server. Also, this approach allows for sending both the table of data (packaged as XML) and individual variable values (e.g. the GUID you mentioned).
The remote stored procedure -- referenced in the EXEC in the example code above -- will be executed locally on Server 2, so it can create a local temporary table that the Trigger on the remote table will have access to, or use either SET CONTEXT_INFO or, if using SQL Server 2016 (or newer), use sp_set_session_context.
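A sketch of what that remote procedure on Server 2 might look like. The procedure name, the `dbo.Account` schema, and the `Name` column are assumptions based on the question, not the asker's actual objects:

```sql
-- Hypothetical receiving procedure on Server 2.
CREATE PROCEDURE [dbo].[ReceiveAccountRows]
    @Param1 NVARCHAR(MAX),      -- rows packaged as XML on Server 1
    @Param2 UNIQUEIDENTIFIER    -- the shared GUID value
AS
BEGIN
    -- Make the GUID visible to the trigger on dbo.Account (SQL Server 2016+)
    EXEC sp_set_session_context @key = N'SharedGuid', @value = @Param2;

    -- Unpack the XML back into rows using .nodes(); FOR XML RAW('row')
    -- emits each column as an attribute of a <row> element.
    DECLARE @Data XML = CONVERT(XML, @Param1);

    INSERT INTO [dbo].[Account] ([Name])
    SELECT row.value('@Name', 'NVARCHAR(100)')
    FROM @Data.nodes('/row') AS t(row);
END;
```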
Also, as you may have noticed, none of this has anything to do with SQLCLR. I see no reason to introduce the additional complexity of having this in SQLCLR when you will be using none of the benefits of SQLCLR triggers / objects.
With a local temp table, no; but with a global temp table (two pound signs: ##globalTemp instead of #temp) you should be able to. That being said, it's likely not a good idea, because you never know whether the ##globalTemp table will exist or not. Who should be creating it?
There are two types of temporary tables: local and global. Local temporary tables are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced. Local temporary tables are deleted after the user disconnects from the instance of SQL Server. Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
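A minimal illustration of that visibility difference, run from two separate connections to the same instance:

```sql
-- Session 1: create and populate a global temp table.
CREATE TABLE ##globalTemp (id INT);
INSERT INTO ##globalTemp (id) VALUES (1);

-- Session 2 (a different connection to the SAME instance) can read it:
SELECT id FROM ##globalTemp;

-- A local #temp table created in Session 1 would not be visible here,
-- and neither table is visible from a different instance without a
-- Linked Server pointing back at this one.
```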
I have an Execute SQL Task (Task 1) which runs SQL to return a column called Note from Table A and stores it in a String-type SSIS variable. In Table A, Note is defined as varchar(2000).
I then have an Execute SQL Task (Task 2) to run a stored procedure. Its input parameter is Note varchar(max).
I run these two tasks in SSIS and get the following error:
DECLARE #..." failed with the following error: "The text, ntext, and image data types are invalid for local variables.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have tried several solutions with no success. How can you get around this error and get SSIS to store the variable and feed it to the SP?
This is SQL Server 2012 SSIS hitting an old 2008 database where the SP resides.
You are somehow mapping a text-type column in your T-SQL code. Before you return from your T-SQL proc, convert the text column to a varchar column with select cast(textval as varchar(max)), and make sure your output variables are defined as varchars.
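For example, the Task 1 query could cast the column explicitly (column and table names here are assumptions based on the question):

```sql
-- Cast away the text/ntext type so SSIS can map the result
-- to a String variable without the "invalid for local variables" error.
SELECT CAST([Note] AS VARCHAR(MAX)) AS [Note]
FROM [dbo].[TableA];
```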
You need to make sure that Task 1 returns only one row.
Otherwise you need to use an Object variable, and use a Foreach Loop container to loop through that object.
I have a large set of pre-existing sql select statements.
From a stored procedure on [Server_A], I would like to execute each of these statements on multiple different SQL Servers and databases (the list is stored in a local table on [Server_A]), and return the results into a table on [Server_A].
However, I do not want to have to use fully qualified table names in my sql statements. I want to execute "select * from users", not "select * from ServerName.DatabaseName.SchemaName.Users"
I've investigated using Openrowset, but I am unable to find any examples where both the Server name and DatabaseName can be specified as an attribute of the connection, rather than physically embedded within the actual SQL statement.
Is Openrowset capable of this? Is there an alternate way of doing this (from within a stored procedure, as opposed to resorting to Powershell or some other very different approach?)
The inevitable "Why do I want to do this?"
You can do it (specify the server and database in the connection attributes and then use entirely generic SQL across all databases) in virtually every other language that accesses SQL Server.
Changing all my pre-existing complex SQL to be fully qualified is a huge PITA (besides, you simply shouldn't have to do this).
This can be done quite easily via SQLCLR. If the result set is to be dynamic then it needs to be a Stored Procedure instead of a TVF.
Assuming you are doing a Stored Procedure, you would just:
Pass in @ServerName, @DatabaseName, @SQL
Create a SqlConnection with a Connection String of: String.Concat("Server=", ServerName.Value, "; Database=", DatabaseName.Value, "; Trusted_Connection=yes; Enlist=false;") or use SqlConnectionStringBuilder
Create a SqlCommand for that SqlConnection, using SQL.Value as the command text.
Enable Impersonation via SqlContext.WindowsIdentity.Impersonate();
_Connection.Open();
undo Impersonation -- was only needed to establish the connection
_Reader = Command.ExecuteReader();
SqlContext.Pipe.Send(_Reader);
Dispose of Reader, Command, Connection, and ImpersonationContext in finally clause
This approach is less of a security issue than enabling Ad Hoc Distributed Query access as it is more insulated and controllable. It also does not allow for a SQL Server login to get elevated permissions since a SQL Server login will get an error when the code executes the Impersonate() method.
Also, this approach allows for multiple result sets to be returned, something that OPENROWSET doesn't allow for:
Although the query might return multiple result sets, OPENROWSET returns only the first one.
UPDATE
Modified pseudo-code based on comments on this answer:
Pass in @QueryID
Create a SqlConnection (_MetaDataConnection) with a Connection String of: Context Connection = true;
Query _MetaDataConnection to get ServerName, DatabaseName, and Query based on QueryID.Value via SqlDataReader
Create another SqlConnection (_QueryConnection) with a Connection String of: String.Concat("Server=", _Reader["ServerName"].Value, "; Database=", _Reader["DatabaseName"].Value, "; Trusted_Connection=yes; Enlist=false;") or use SqlConnectionStringBuilder
Create a SqlCommand (_QueryCommand) for _QueryConnection using _Reader["SQL"].Value.
Using _MetaDataConnection, query to get parameter names and values based on QueryID.Value
Cycle through SqlDataReader to create SqlParameters and add to _QueryCommand
_MetaDataConnection.Close();
Enable Impersonation via SqlContext.WindowsIdentity.Impersonate();
_QueryConnection.Open();
undo Impersonation -- was only needed to establish the connection
_Reader = _QueryCommand.ExecuteReader();
SqlContext.Pipe.Send(_Reader);
Dispose of Readers, Commands, Connections, and ImpersonationContext in finally clause
If you want to execute a SQL statement on every database in an instance, you can use the (unsupported, unofficial, but widely used) sp_MSforeachdb like this:
EXEC sp_Msforeachdb 'use [?]; select * from users'
This will be the equivalent of going through every database with:
use db
go
select * from users
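If you also need to capture the output on [Server_A], one common pattern (a sketch; it assumes dbo.users exists in at least some databases) is to insert into a temp table inside the iterated batch:

```sql
-- Collect matching rows from every database into one temp table.
CREATE TABLE #AllUsers (DbName SYSNAME, UserName SYSNAME);

EXEC sp_MSforeachdb N'USE [?];
IF OBJECT_ID(N''dbo.users'') IS NOT NULL
    INSERT INTO #AllUsers (DbName, UserName)
    SELECT DB_NAME(), name FROM dbo.users;';

SELECT DbName, UserName FROM #AllUsers;
```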
This is an interesting problem because I googled for many, many hours, and found several people trying to do exactly the same thing as asked in the question.
Most common responses:
Why would you want to do that?
You can not do that, you must fully qualify your objects names
Luckily, I stumbled upon the answer, and it is brutally simple. I think part of the problem is that there are so many variations of it with different providers and connection strings, and so many things that can go wrong, and when something does, the error message is often not terribly enlightening.
Regardless, here's how you do it:
If you are using static SQL:
select * from OPENROWSET('SQLNCLI','Server=ServerName[\InstanceName];Database=AdventureWorks2012;Trusted_Connection=yes','select top 10 * from HumanResources.Department')
If you are using Dynamic SQL - since OPENROWSET does not accept variables as arguments, you can use an approach like this (just as a contrived example):
declare @sql nvarchar(4000) = N'select * from OPENROWSET(''SQLNCLI'',''Server=ServerName[\InstanceName];Database=AdventureWorks2012;Trusted_Connection=yes'',''@zzz'')'
set @sql = replace(@sql,'@zzz','select top 10 * from HumanResources.Department')
EXEC sp_executesql @sql
Noteworthy: in case you think it would be nice to wrap this syntax up in a nice table-valued function that accepts @ServerName, @DatabaseName, @SQL: you cannot, as a TVF's result-set columns must be determinate at compile time.
Relevant reading:
http://blogs.technet.com/b/wardpond/archive/2005/08/01/the-openrowset-trick-accessing-stored-procedure-output-in-a-select-statement.aspx
http://blogs.technet.com/b/wardpond/archive/2009/03/20/database-programming-the-openrowset-trick-revisited.aspx
Conclusion:
OPENROWSET is the only way that you can 100% avoid at least some full-qualification of object names; even with EXEC AT you still have to prefix objects with the database name.
Extra tip: The prevalent opinion seems to be that OPENROWSET shouldn't be used "because it is a security risk" (without any details on the risk). My understanding is that the risk is only if you are using SQL Server Authentication, further details here:
https://technet.microsoft.com/en-us/library/ms187873%28v=sql.90%29.aspx?f=255&MSPPError=-2147217396
When connecting to another data source, SQL Server impersonates the login appropriately for Windows authenticated logins; however, SQL Server cannot impersonate SQL Server authenticated logins. Therefore, for SQL Server authenticated logins, SQL Server can access another data source, such as files, nonrelational data sources like Active Directory, by using the security context of the Windows account under which the SQL Server service is running. Doing this can potentially give such logins access to another data source for which they do not have permissions, but the account under which the SQL Server service is running does have permissions. This possibility should be considered when you are using SQL Server authenticated logins.
I'm migrating from SQL Server to PostgreSQL. I've seen from How to declare a variable in a PostgreSQL query that there are no temporary variables in native SQL queries.
Well, I pretty badly need a few... How would I go about mixing in PL/pgSQL? Must I create a function and then delete the function in order to get access to a language? That just seems error-prone to me, and I'm afraid I'm missing something.
EDIT:
cmd.CommandText = "insert......" +
    "declare @app int; declare @gid int;" +
    "set @app=SCOPE_IDENTITY();" + // select scope_identity will give us our RID that we just inserted
    "select @gid=MAX(GROUPID) from HOUSEHOLD; set @gid=@gid+1; " +
    "insert into HOUSEHOLD (APPLICANT_RID,GROUPID,ISHOH) values " +
    "(@app,@gid,1);" +
    "select @app";
rid = cmd.ExecuteScalar();
A direct rip from the application in which it's used. Note we are in the process of converting from SQL Server to PostgreSQL. (Also, I've figured out the scope_identity() bit, I think.)
What is your schema for the table being inserted? I'll try and answer based on this assumption of the schema:
CREATE TABLE HOUSEHOLD (
APPLICANT_RID SERIAL, -- PostgreSQL auto-increment
GROUPID INTEGER,
ISHOH INTEGER
);
If I'm understanding your intent correctly, in PostgreSQL >= 8.2, the query would then be:
INSERT INTO HOUSEHOLD (GROUPID, ISHOH)
VALUES ((SELECT COALESCE(MAX(GROUPID)+1,1) FROM HOUSEHOLD), 1)
RETURNING APPLICANT_RID;
-- Added call to the COALESCE function to cover the case where HOUSEHOLD
-- is empty and MAX(GROUPID) returns NULL
In PostgreSQL >= 8.2, any INSERT/DELETE/UPDATE query may have a RETURNING clause that acts like a simple SELECT performed on the result set of the change query.
If you're using a language binding, you can hold the variables there.
For example with SQLAlchemy (python):
my_var = 'Reynardine'
session.query(User.name).filter(User.fullname==my_var)
If you're in psql, you have variables:
\set a 5
SELECT :a;
And if your logic is in PL/pgSQL:
tax := subtotal * 0.06;
Must I create a function and then
delete the function in order to get
access to a language?
Yes, but this shortcoming is going to be removed in PostgreSQL 8.5, with the addition of the DO command. 8.5 is going to be released in 2010.
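(For later readers: the 8.5 development branch was ultimately released as PostgreSQL 9.0.) With DO, an anonymous block needs no persisted function. A minimal sketch, reusing the HOUSEHOLD table from the question:

```sql
-- Anonymous PL/pgSQL block: variables without creating a function.
DO $$
DECLARE
    gid integer;
BEGIN
    SELECT COALESCE(MAX(GROUPID), 0) + 1 INTO gid FROM HOUSEHOLD;
    RAISE NOTICE 'next group id: %', gid;
END
$$;
```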
You can also declare session variables using plperl - http://www.postgresql.org/docs/8.4/static/plperl-global.html
You install a language that you want to use with the CREATE LANGUAGE command; this works for the known languages, though you can use other languages as well.
Language installation docs
CREATE LANGUAGE usage doc
You will have to create a function to use it. If you do not want to make a permanent function in the db, the other choice would be to use a script in Python or something else that uses a PostgreSQL driver to connect to the db and run queries. You can then manipulate or look through the data in the script. For instance, in Python you would install the PyGreSQL library and in your script import pgdb, which you can use to connect to the db.
PyGreSQL Info
I think that PostgreSQL's row-type variable would be the closest thing:
A variable of a composite type is called a row variable (or row-type variable). Such a variable can hold a whole row of a SELECT or FOR query result, so long as that query's column set matches the declared type of the variable.
You mentioned the post (How to declare a variable in a PostgreSQL query).
I believe there is a suitable answer farther down the chain of solutions if using psql and the \set command:
my_db=> \set myvar 5
my_db=> SELECT :myvar + 1 AS my_var_plus_1;