How do I write variable values and query results to a text file using T-SQL? I want to keep appending to the file.
You can use SQL CLR to write whatever you want out to a file. You can either code this yourself or use the SQL# library (I am the author; this particular function is not free, but not expensive either) and call its File_WriteFile function to write out the text. It allows for appending data. The library is free for most functions, but not for the File System functions. It can be found at: http://www.SQLsharp.com/
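A rough idea of what a call could look like (the parameter names and exact signature below are my guesses, not the documented API; check the SQL# documentation for the real signature):
-- hypothetical call; parameter names and order are assumptions
DECLARE @Result INT;
EXEC @Result = SQL#.File_WriteFile
     @FilePath = N'C:\Logs\output.txt',
     @FileData = N'text to append',
     @AppendData = 1;   -- append instead of overwrite (assumed flag)
SELECT @Result;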
If you want to try coding this yourself, I found this example which is the basic concept I think you are asking for: http://www.mssqltips.com/tip.asp?tip=1662
This works but it's a little arcane. Adjust to your query. Essentially you'd be creating command shell echo lines for execution through xp_cmdshell and looping through them.
declare @cmd varchar(500), @minid int, @maxid int;

declare @output table(
    id int identity(1,1) primary key,
    echo varchar(500) not null
);

set @cmd = 'select top 10 ''ECHO ''+name+'' >> C:\test.txt'' from master.dbo.sysobjects';

insert into @output(echo)
exec(@cmd);

select @minid = MIN(id), @maxid = MAX(id) from @output;

while @minid <= @maxid
begin
    select @cmd = echo from @output where id = @minid;
    exec xp_cmdshell @cmd;
    set @minid = @minid + 1;
end
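Note that xp_cmdshell is disabled by default; on an instance where it has not been enabled, the calls above will simply fail. Enabling it (if you really must) looks like this:
-- enable xp_cmdshell at the instance level (requires sysadmin)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;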
Enabling and using xp_cmdshell exposes your instance to additional security risk. Using CLR is safer but also not advised for this purpose. These two options only make a poor design decision worse.
Why are you trying to write to a file from T-SQL? Resist the urge to turn SQL Server into a scripting platform. Use an application language like SSIS, PowerShell or C# to interact with SQL Server and write output to a log file from there as needed.
I had a problem with this myself, so I want to share what I did.
I declared a cursor over the SELECT results and then wrote each row to the file. For example:
DECLARE @cmd NVARCHAR(400);
DECLARE @var NVARCHAR(50);
DECLARE @Count INT = (SELECT COUNT(*) FROM Students);

DECLARE Select_Cursor CURSOR FOR
SELECT name FROM Students;

OPEN Select_Cursor;
FETCH NEXT FROM Select_Cursor INTO @var;
WHILE (@Count > 0)
BEGIN
    SET @cmd = 'echo ' + @var + ' >> c:\output.txt';
    EXEC master..xp_cmdshell @cmd;
    SET @Count = @Count - 1;
    FETCH NEXT FROM Select_Cursor INTO @var;
END
CLOSE Select_Cursor;
DEALLOCATE Select_Cursor;
I need to find a T-SQL way to remove the GO command from scripts which I read from .sql files. I'm using an approach similar to Execute SQL scripts from file, which suits my needs very well; however, some of the files contain the GO command and break execution, as sp_executesql doesn't handle non-T-SQL commands.
How can I create a REPLACE that removes GO, which usually sits alone on its own line? Or is there any other method I could apply here? Please keep in mind that the script can contain other occurrences of GO which are not actually batch separators.
DECLARE @sql NVARCHAR(1000) =
'DECLARE @table AS TABLE(
[Id] INT,
[Info] NVARCHAR(100)
);
INSERT INTO @table
([Id],[Info])
VALUES
(1,''Info''),
(2,''Show must go on''),
(3,''GO'');
SELECT * FROM @table;
GO';
PRINT @sql;
EXEC sp_executesql @sql;
Using xp_cmdshell to execute scripts is not an option due to server security restrictions. SQLCMD is not an option this time either.
Well, I would NOT claim that this is the way it should be done, but it was fun to tinker it out:
Disclaimer: this is a demonstration of why T-SQL is the wrong tool for this 😁
I added some more GO-separated statements and put quotes inside them to make it even trickier:
DECLARE @sql NVARCHAR(1000) =
'DECLARE @table AS TABLE(
[Id] INT,
[Info] NVARCHAR(100)
);
INSERT INTO @table
([Id],[Info])
VALUES
(1,''Info''),
(2,''Show must go on''),
(3,''This includes a GO and some "quoted" text''),
(4,''GO'');
SELECT * FROM @table;
GO
SELECT TOP 10 * FROM sys.objects
GO
PRINT ''Each GO will be used to have separate patches''';

--let's start by normalising the various kinds of line breaks
SET @sql = REPLACE(REPLACE(STRING_ESCAPE(@sql,'json'),'\r','\n'),'\n\n','\n');

--Using a CS_AS collation avoids catching a lower-case "go", as we (hopefully!) can rely on an upper-case "GO":
DECLARE @json NVARCHAR(MAX) = CONCAT('["',REPLACE(REPLACE(@sql COLLATE Latin1_General_CS_AS,'GO' COLLATE Latin1_General_CS_AS,'","GO'),'\n','","'),'"]');

--Above I used each upper-case "GO" and each line break to separate the string.
--Doing so we transform your string into a JSON array.
--Now we can fill this into a table using OPENJSON to read the JSON array (omitting empty lines)
DECLARE @tbl TABLE(RowIndex INT IDENTITY,fragment NVARCHAR(MAX));
INSERT INTO @tbl(fragment)
SELECT STRING_ESCAPE(A.[value],'json')
FROM OPENJSON(@json) A
WHERE LEN(TRIM(A.[value]))>0 AND TRIM(A.[value])!=NCHAR(9);

--We need this variable for the cursor
DECLARE @patch NVARCHAR(MAX);

--Now I open a cursor.
--We do this by running down a recursive CTE, once again building up a JSON array.
--This time we separate the strings where an upper-case "GO" is sitting alone on its line.
DECLARE cur CURSOR FOR
WITH cte AS
(
SELECT RowIndex, CAST(CONCAT('["',fragment) AS NVARCHAR(MAX)) growingString
FROM @tbl WHERE RowIndex=1
UNION ALL
SELECT n.RowIndex
,CONCAT(cte.growingString,CASE WHEN TRIM(n.fragment) COLLATE Latin1_General_CS_AS=N'GO' THEN N'","' ELSE n.fragment END)
FROM @tbl n
INNER JOIN cte ON n.RowIndex=cte.RowIndex+1
)
,thePatches AS
(
SELECT TOP 1 CONCAT(growingString,'"]') AS jsonArray
FROM cte ORDER BY RowIndex DESC
)
SELECT A.[value] AS patch
FROM thePatches p
CROSS APPLY OPENJSON(p.jsonArray) A;

--we can - finally - walk down the patches and execute them one by one
OPEN cur;
FETCH NEXT FROM cur INTO @patch;
WHILE @@FETCH_STATUS=0
BEGIN
    PRINT @patch; --PRINT out for visual control before execution!
    --EXEC(@patch);
    FETCH NEXT FROM cur INTO @patch;
END
CLOSE cur;
DEALLOCATE cur;
There are millions of things (e.g. line-breaks within content, commented sections, max recursion) which can destroy this approach. So clearly DO NOT follow this suggestion :-)
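For comparison, a more pedestrian sketch of the same idea, not taken from the answer above: split the script on lines that consist of nothing but an upper-case GO. The variable names are mine, CRLF line endings are assumed, and the same caveats (comments, line breaks inside literals, and so on) apply:
-- minimal sketch: split @script into batches on standalone upper-case GO lines
DECLARE @script NVARCHAR(MAX) = N'PRINT ''batch 1'';
GO
PRINT ''Show must go on: GO stays inside the literal'';
GO
PRINT ''batch 3'';';

DECLARE @batches TABLE (BatchNo INT IDENTITY, BatchText NVARCHAR(MAX));
DECLARE @pos INT, @line NVARCHAR(MAX), @current NVARCHAR(MAX) = N'';

WHILE LEN(@script) > 0
BEGIN
    SET @pos = CHARINDEX(CHAR(13) + CHAR(10), @script);
    IF @pos = 0
    BEGIN
        SET @line = @script;                              -- last line, no trailing CRLF
        SET @script = N'';
    END
    ELSE
    BEGIN
        SET @line = LEFT(@script, @pos - 1);
        SET @script = STUFF(@script, 1, @pos + 1, N'');   -- cut the line plus its CRLF
    END

    IF TRIM(@line) COLLATE Latin1_General_CS_AS = N'GO'   -- GO alone on its line closes a batch
    BEGIN
        INSERT INTO @batches(BatchText) VALUES (@current);
        SET @current = N'';
    END
    ELSE
        SET @current = @current + @line + CHAR(13) + CHAR(10);
END
IF LEN(@current) > 0 INSERT INTO @batches(BatchText) VALUES (@current);

-- each BatchText can now be handed to sp_executesql separately
SELECT BatchNo, BatchText FROM @batches;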
We have a very old ERP system which is badly supported.
Now the warehouse wants to buy a new "store system" for our goods. It's a fully automatic storage system which needs data from our ERP system. The support of our ERP system can't help us, so we have to build a solution of our own.
The idea was to "move" the items for the new storage system to a special storage place called SHUT1 and output the "part number" and "quantity" to a file (XML) which can be read by the new software.
We can't change anything in the software of our ERP system, so we have to do it on the SQL Server itself.
(I know, a trigger is not the "best" thing to do, but I have no other choice.)
CREATE TRIGGER tr_LagerShut ON dbo.Lagerverwaltung
AFTER INSERT
AS
BEGIN
    IF (SELECT [Lagerort] FROM inserted) = 'SHUT1'
    BEGIN
        DECLARE @Cmd VARCHAR(2000);
        DECLARE @FormatDate4File VARCHAR(200);
        SET @FormatDate4File = (SELECT(SYSUTCDATETIME()));
        SET @FormatDate4File = (SELECT REPLACE(@FormatDate4File,' ','-'));
        SET @FormatDate4File = (SELECT REPLACE(@FormatDate4File,':','-'));
        SET @FormatDate4File = (SELECT REPLACE(@FormatDate4File,'.','-'));
        SET @Cmd = (SELECT [Artikelnummer],[Menge] FROM inserted FOR XML PATH(''));
        SET @Cmd = 'Echo "' + @Cmd + '" >>"C:\Temp\' + @FormatDate4File + '.xml"';
        EXEC xp_cmdshell @Cmd;
    END;
END;
The trigger "installs" fine, but if I change a storage place to a new one, the ERP system stalls with "ERROR" (there is no error description :(
If I drop the trigger the system is just running fine again. So I think there is a error in the trigger, but I can´t find it.
Can anybody help please?
Aleks.
I don't know what "ERP system stalls with ERROR" looks like... A frozen GUI? A timeout? Just no file created?
My crystal ball tells me the following: you are inserting more than one row at once. If so, this statement will break, because a comparison like this is only valid against a scalar value. If there is more than one row in inserted, this will not work:
IF (SELECT [Lagerort] FROM inserted) = 'SHUT1'
Your trigger can be simplified, but I doubt that you will like the result. Check this with special characters (like üöä) and check for enclosing "-characters...
CREATE TRIGGER tr_LagerShut ON dbo.Lagerverwaltung
AFTER INSERT
AS
BEGIN
    IF EXISTS(SELECT 1 FROM inserted WHERE [Lagerort]='SHUT1')
    BEGIN
        DECLARE @FileName VARCHAR(255) = REPLACE(REPLACE(REPLACE(SYSUTCDATETIME(),' ','-'),':','-'),'.','-');
        DECLARE @Content XML =
        (
            SELECT [Artikelnummer],[Menge] FROM inserted WHERE [Lagerort]='SHUT1' FOR XML AUTO,ELEMENTS
        );
        DECLARE @Cmd VARCHAR(4000) = 'Echo "' + CAST(@Content AS VARCHAR(MAX)) + '" >>"c:\temp\' + @FileName + '.xml"';
        PRINT @Cmd;
        EXEC xp_cmdshell @Cmd;
    END
END;
This might better be done with BCP.
UPDATE (regarding your comment):
First you should check whether this works at all:
DECLARE @FileName VARCHAR(255) = REPLACE(REPLACE(REPLACE(SYSUTCDATETIME(),' ','-'),':','-'),'.','-');
DECLARE @Content XML =
(
    SELECT TOP 5 * FROM sys.objects FOR XML AUTO,ELEMENTS
);
DECLARE @Cmd VARCHAR(4000) = 'Echo "' + CAST(@Content AS VARCHAR(MAX)) + '" >>"c:\temp\' + @FileName + '.xml"';
PRINT @Cmd;
EXEC xp_cmdshell @Cmd;
If you find no file in c:\temp\: are you aware that SQL Server will always write in its own context? It might be that you are expecting a file on your local C: drive, but the file is actually written on the server's machine.
If this works in isolation, it should work within a trigger too. You might wrap the call in BEGIN TRY ... END TRY and add an appropriate CATCH block.
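A minimal sketch of what that wrapping could look like inside the trigger (same variable name as above):
BEGIN TRY
    EXEC xp_cmdshell @Cmd;
END TRY
BEGIN CATCH
    -- swallow or log the problem; ERROR_MESSAGE() holds the details
    PRINT ERROR_MESSAGE();
END CATCH;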
So okay, the "simple" trigger could really be a problem. Now I have this idea:
(one more piece of info: no timestamp is inserted into the table "Lagerverwaltung" when a new row is inserted)
Pseudo code:
Trigger on table "Lagerverwaltung"
Check if the storage place(s) is "SHUT1"
If "yes":
DECLARE @FileName VARCHAR(255) = REPLACE(REPLACE(REPLACE(SYSUTCDATETIME(),' ','-'),':','-'),'.','-');
Create a new table named 'dbo.' + @FileName
Insert all the data from inserted, plus a SYSUTCDATETIME() column AS [TimeStamp], WHERE [Lagerort] = 'SHUT1', into the new table named 'dbo.' + @FileName
DECLARE @Cmd VARCHAR(4000) = 'bcp "SELECT [Artikelnummer],[Menge],[TimeStamp] FROM [wwsbautest].[dbo].[' + @FileName + '] WHERE [Lagerort]=''SHUT1'' AND [Menge] > ''0'' FOR XML AUTO, ELEMENTS" queryout "C:\temp\' + @FileName + '.xml" -c -T';
EXEC xp_cmdshell @Cmd;
DROP TABLE 'dbo.' + @FileName;
Could something like that work?
Is it natural that SQL Server does not detect object dependencies in stored procedures when the reference only appears in dynamic SQL?
CREATE PROCEDURE testSp (@filter nvarchar(max)) AS
exec ('select * from testTable where 1=1 AND ' + @filter)
Here SQL Server will not detect the dependency between testTable and testSp.
What kind of "hint" could I give the DBMS? My idea is to add a very "cheap query":
CREATE PROCEDURE testSp (@filter nvarchar(max)) AS
-- cheap query like 'select top 1 @id=id from testTable'
exec ('select * from testTable where 1=1 AND ' + @filter)
So the question is: which queries would be good candidates for that purpose?
P.S. Of course I expect that all of them will have their downsides.
When using dynamic SQL, the query parts that are text (between quotes) are not detected as code by the IDE or the engine until the moment they are executed. So that answers your first question: yes, it is natural.
The only way around this that I can think of is to create a view using the generated output of the dynamic SQL and check whether the view definition is still valid at any point you want to verify that the procedure is valid.
Usually, when you need to do something like this, there was an earlier departure from standard methods which, if handled, removes the need for such silly tricks.
Example:
USE demo
GO
DECLARE @sql NVARCHAR(MAX) = '
SELECT firstname, lastname FROM dbo.employees'
DECLARE @view NVARCHAR(MAX) = '
CREATE VIEW dbo.test_view
AS ' + @sql
EXEC sp_executesql @view
BEGIN TRY
    DECLARE @validation int = (SELECT TOP 1 COUNT(*) FROM demo..test_view)
    EXEC sp_executesql @sql
END TRY
BEGIN CATCH
    PRINT 'Dynamic SQL out of date'
END CATCH
SET NOEXEC ON
select * from testTable
SET NOEXEC OFF
does the job: the code is not really executed, but the dependency is registered.
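To verify that the reference is now picked up, you can check the catalog view (object names as used above):
-- testTable should now show up as a dependency of testSp
SELECT referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID(N'dbo.testSp');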
I was wondering if I can make a stored procedure that inserts values into a dynamic table.
I tried
create procedure asd
(@table varchar(10), @id int)
as
begin
insert into @table values (@id)
end
I also tried defining @table as a table variable.
Thanks for your help!
This might work for you.
CREATE PROCEDURE asd
(@table nvarchar(10), @id int)
AS
BEGIN
    DECLARE @sql nvarchar(max)
    SET @sql = 'INSERT INTO ' + @table + ' (id) VALUES (' + CAST(@id AS nvarchar(max)) + ')'
    EXEC sp_executesql @sql
END
See more here: http://msdn.microsoft.com/de-de/library/ms188001.aspx
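A quick, made-up usage example, assuming a table with an int column named id (keep in mind that concatenating @table straight into the string is exactly the injection risk discussed further down):
CREATE TABLE dbo.MyTable (id int);
EXEC asd @table = N'MyTable', @id = 5;
SELECT * FROM dbo.MyTable;   -- returns the row with id = 5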
Yes, to implement this directly, you need dynamic SQL, as others have suggested. However, I would also agree with the comment by @Tomalak that attempts at universality of this kind might result in less secure or less efficient (or both) code.
If you feel that you must have this level of dynamicity, you could try the following approach, which, although requiring more effort than plain dynamic SQL, is almost the same as the latter but without the just mentioned drawbacks.
The idea is first to create all the necessary insert procedures, one for every table in which you want to insert this many values of this kind (i.e., as per your example, exactly one int value). It is crucial to name those procedures uniformly, for instance using this template: TablenameInsert where Tablename is the target table's name.
Next, create this universal insert procedure of yours as follows:
CREATE PROCEDURE InsertIntValue (
    @TableName sysname,
    @Value int
)
AS
BEGIN
    DECLARE @SPName sysname;
    SET @SPName = @TableName + 'Insert';
    EXECUTE @SPName @Value;
END;
As can be seen from the manual, when invoking a module with the EXECUTE command, you can specify a variable instead of the actual module name. The variable in this case should be of a string type and is supposed to contain the name of the module to execute. This is not dynamic SQL, because the syntax is not the same. (For this to be dynamic SQL, the variable would need to be enclosed in parentheses.) Instead, this is essentially parametrisation of the module name, probably the only kind of natively supported name parametrisation in (Transact-)SQL.
Like I said, this requires more effort than dynamic SQL, because you still have to create all the many stored procedures that this universal SP should be able to invoke. Nevertheless, as a result, you get code that is both secure (the @SPName variable is viewed by the server only as a name, not as an arbitrary snippet of SQL) and efficient (the actual stored procedure being invoked already exists, i.e. it is already compiled and has a query plan).
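To illustrate the convention (the table and procedure names below are made up for the example):
-- one per-table procedure, named <TableName> + 'Insert'
CREATE TABLE dbo.Students (id int);
GO
CREATE PROCEDURE StudentsInsert (@Value int)
AS
BEGIN
    INSERT INTO dbo.Students (id) VALUES (@Value);
END;
GO
-- the universal procedure resolves 'StudentsInsert' at run time
EXEC InsertIntValue @TableName = N'Students', @Value = 42;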
You'll need to use Dynamic SQL.
To create Dynamic SQL, you need to build up the query as a string. Using IF statements and other logic to add your variables, etc.
Declare a text variable and use this to concatenate together your desired SQL.
You can then execute this code using the EXEC command
Example:
DECLARE @SQL VARCHAR(100)
DECLARE @TableOne VARCHAR(20) = 'TableOne'
DECLARE @TableTwo VARCHAR(20) = 'TableTwo'
DECLARE @SomeInt INT
SET @SQL = 'INSERT INTO '
IF (@SomeInt = 1)
    SET @SQL = @SQL + @TableOne
IF (@SomeInt = 2)
    SET @SQL = @SQL + @TableTwo
SET @SQL = @SQL + ' VALUES....etc'
EXEC (@SQL)
However, something you should really watch out for when using this method is a security problem called SQL Injection.
You can read up on that here.
One way to guard against SQL injection is to validate the input in your code before passing the variables to SQL Server.
An alternative way (probably best used in conjunction) is, instead of using the EXEC command, to use the built-in stored procedure sp_executesql.
Details can be found here and usage description is here.
You'll have to build your SQL slightly differently and pass your parameters to the stored procedure as arguments, as well as the @SQL string itself.
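A sketch of that safer pattern (the procedure name asd_safe is made up; QUOTENAME protects the table name and sp_executesql parameterises the value):
CREATE PROCEDURE asd_safe
    (@table sysname, @id int)
AS
BEGIN
    DECLARE @sql nvarchar(max) =
        N'INSERT INTO ' + QUOTENAME(@table) + N' (id) VALUES (@id);';
    -- the value travels as a real parameter, not as concatenated text
    EXEC sp_executesql @sql, N'@id int', @id = @id;
END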
I manage a server with around 400+ databases which all have the same database schema. I wish to deploy a custom CLR/.NET user-defined function to them all. Is there an easy way to do this, or must it be done individually in each database?
Best Regards,
Wayne
I think if you create it in master (in MSSQL) it can be referenced from any other db in that instance. Certainly seems to work for Stored Procs anyway.
I should add that this only works if the databases are all on the same server instance...
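For a scalar function, that means calling it with the three-part name (the function and database names here are just illustrative):
-- dbo.Foo created once in master, then callable from any database on the instance
USE SomeOtherDb;
SELECT master.dbo.Foo('x');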
You could write a small app to deploy the udf to the master of each SQL server instance if all 400 reside on multiple servers.
I think I just found a way. Some kind of Inception ;)
A USE database statement only takes effect within the EXECUTE context it runs in, so...
DECLARE @sql NVARCHAR(max)
DECLARE @innersql NVARCHAR(max)
DECLARE @name nvarchar(1000)

DECLARE c CURSOR READ_ONLY
FOR
SELECT name FROM sys.databases

OPEN c

SET @innersql = 'CREATE FUNCTION Foo(@x varchar(1)) RETURNS varchar(100) AS ' +
' BEGIN RETURN(@x + ''''some text'''') END;'
-- ^^^^ ^^^^
-- every (') must be converted to a QUAD ('''') instead of a DOUBLE (''),
-- because the string passes through two levels of string literals before it is executed!

FETCH NEXT FROM c INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- CREATE FUNCTION must be the first statement in a query batch...
    -- and it will be, inside the inner EXEC
    SET @sql = 'USE [' + @name + ']; EXEC (''' + @innersql + ''');'
    --PRINT @sql
    EXEC (@sql)
    FETCH NEXT FROM c INTO @name
END
CLOSE c
DEALLOCATE c
Oops, I missed the "CLR/.NET" part when reading the question. Sorry.
I'd just create a dynamic script to create it on each database, but after that I'd also put it in the model database so that all new databases are created with it.
You could also use the same script to push out changes if the function ever gets modified.
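A rough sketch of generating such a per-database script (the function body is just a placeholder here; run the generated output afterwards):
-- builds one CREATE FUNCTION batch per user database
SELECT 'USE ' + QUOTENAME(name) + ';' + CHAR(13) + CHAR(10)
     + 'GO' + CHAR(13) + CHAR(10)
     + 'CREATE FUNCTION dbo.Foo(@x varchar(10)) RETURNS varchar(100) AS BEGIN RETURN(@x) END;' + CHAR(13) + CHAR(10)
     + 'GO'
FROM sys.databases
WHERE database_id > 4;   -- skip master, tempdb, model, msdb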