Execute PowerShell script stored in SQL Server table

Is there any way we can run a PowerShell script stored inside a SQL Server table from a SQL Server stored procedure?

There's the built-in extended stored procedure xp_cmdshell that can be used to do some...extremely hacky things. It can not only be used to issue command-line statements from within T-SQL, but also be used in conjunction with bcp to save the results of a SELECT query to a file, as described in this article.
So yes, what you're asking is theoretically possible. You can construct a T-SQL statement to save the powershell script to a file, then call xp_cmdshell a second time to execute the file you've just saved. (Caveat: I've successfully done each of these individually for a couple of projects, but never combined the two. Your mileage may vary.)
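A minimal sketch of that two-step approach, assuming a hypothetical dbo.Scripts table holding the script text, a local temp path, and that xp_cmdshell is already enabled (all object names, paths, and IDs here are illustrations, not anything from the question):

DECLARE @cmd varchar(4000);

-- Step 1: shell out to bcp to write the stored script text to a .ps1 file.
SET @cmd = 'bcp "SELECT ScriptText FROM MyDb.dbo.Scripts WHERE ScriptId = 1" '
         + 'queryout "C:\temp\script.ps1" -c -T -S ' + @@SERVERNAME;
EXEC master.dbo.xp_cmdshell @cmd;

-- Step 2: call xp_cmdshell again to run the file we just saved.
EXEC master.dbo.xp_cmdshell
    'powershell.exe -ExecutionPolicy Bypass -File "C:\temp\script.ps1"';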
Whether you should actually do this, though, is another matter. There are two things to consider:
Most developers will consider the use of this (admittedly rather convoluted) logic within a stored procedure a nasty hack, as it is difficult to debug and maintain. How does the caller determine whether the script completed? How does the caller verify that the command was issued correctly at all? How does the next developer maintain this when things are breaking?
There are security considerations as well. Obviously the file will only be saved, and its script will only execute, if the xp_cmdshell calls are issued by a user with sufficient rights to do so. Opening those rights up can create security holes, and resolving issues when rights are inadequate can be equally challenging.
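For context on the rights involved: xp_cmdshell is disabled by default, and enabling it requires sysadmin-level configuration, which is exactly the surface area this caveat is about. A sketch of what enabling it looks like (standard sp_configure options, shown for illustration only):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Enabling this widens the attack surface; many DBAs leave it off.
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;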

Related

Performance impact of running stored procedures in jobs

At work, I have come across several SQL Server stored procedures that are only used by a single job. In that case, wouldn't it just make more sense to run the code in a job step? Is there some benefit from running statements in stored procedures?
These specific stored procedures do not require input variables, nor are they commonly used calculations; they are mostly just complex SELECT statements. I'm looking for advice on best practice and the performance impact.
There should be no material performance difference.
Code in a stored procedure is stored in the user database, is present in backups, is owned by the database owner, and can be invoked and debugged from anywhere.
Code in a job step is stored in the msdb system database, is owned by the job owner, and can only be run through SQL Server Agent.
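One common compromise, sketched below with hypothetical job and procedure names: keep the logic as a procedure in the user database, and have the job step do nothing but call it, so the code stays in your backups while Agent handles the scheduling.

-- Assumes the job itself was already created with sp_add_job.
EXEC msdb.dbo.sp_add_jobstep
    @job_name      = N'NightlyReport',        -- hypothetical job
    @step_name     = N'Run report procedure',
    @subsystem     = N'TSQL',
    @database_name = N'MyUserDb',             -- hypothetical database
    @command       = N'EXEC dbo.usp_NightlyReport;';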

Should you use master.dbo when accessing sp_ procedures?

I'm 100% convinced this is a duplicate but after more than an hour of searching I just can't seem to find the answer.
When using special procedures (i.e. the sp_ ones like sp_executesql), is it wise to use the full three-part identifier master.dbo (or master..) or just use them as is? I'm looking for the most performance-optimized version of this:
1. sp_executesql
2. master..sp_executesql
3. master.dbo.sp_executesql
Are 2 and 3 identical in terms of performance, specifically regarding the above (i.e. referencing master), and is it safe to use master.., or should you not risk it even on master, since someone could still create another schema there at some point?
Much appreciated.
TL;DR:
Shouldn't be any noticeable performance difference.
The long story:
Whenever you execute a stored procedure whose name starts with the sp_ prefix, SQL Server will first search for it in master.dbo, so all three options should have the same performance.
From an article posted by Eric Cobb in 2015 entitled Why you should not prefix your stored procedures with “sp_”:
Whenever SQL Server sees “sp_” at the beginning of a stored procedure, it first tries to find the procedure in the master database. As stated in the Microsoft documentation above, “This prefix is used by SQL Server to designate system procedures“, so when SQL Server sees “sp_” it starts looking for system procedures. Only after it has searched through all of the procedures in the master database and determined that your procedure is not there will it then come back to your database to try to locate the stored procedure.
It also quotes another piece of official documentation (the link is to the 2008 version; I'm working on finding the current one):
A user-defined stored procedure that has the same name as a system stored procedure and is either nonqualified or is in the dbo schema will never be executed; the system stored procedure will always execute instead.
Even though I couldn't find that quote in the current version of the documentation, I can easily prove it.
Consider the following script:
USE <YourDatabaseNameHere> -- change to the actual name of the db, of course
GO
CREATE PROCEDURE dbo.sp_who
AS
SELECT 'Zohar peled' as myName
GO
-- change to the actual name of the db, of course
EXEC <YourDatabaseNameHere>.dbo.sp_who
EXEC dbo.sp_who
EXEC sp_who
GO
DROP PROCEDURE dbo.sp_who -- cleanup
When tested on SQL Server 2016 (the version I had available for testing), all three EXEC statements executed the system procedure. I couldn't find any way to execute my own procedure.
Now, I can't fiddle around with the master database on my server, so I can only show that this is true for existing system procedures, but I'm pretty sure it will be the same for any procedure that starts with the sp_ prefix, even if you wrote it yourself to both the master database and your own, as Aaron Bertrand illustrated in his article under the heading Another side effect: Ambiguity.
However, even if that wasn't the case, unless you have many procedures in the current schema and are running the stored procedure in a tight loop, I doubt you'll see any noticeable performance difference.
Later on in the same article:
As alluded to in the previous point, procedures named with “sp_” are going to perform slower. It may not always be noticeable, but it is there. Connecting to DatabaseA, jumping over to the master database and scanning every stored procedure there, then coming back to DatabaseA and executing the procedure is always going to take more time than just connecting to DatabaseA and executing the procedure.
Note that this paragraph is talking about the performance cost of executing a user-defined stored procedure that has the sp_ prefix - so let's reverse the process for a moment:
Suppose SQL Server had to scan all the stored procedures in the current schema first, and only then, if the procedure wasn't found, go to master.dbo and start looking there.
It's easy to see that the more procedures you have in the schema, the longer that would take. However - have you ever noticed how long it takes SQL Server to find the procedure it needs to run?
I've been working with SQL Server since its 2000 version, and I've had my share of databases containing hundreds of procedures crammed into the same schema - but that was never a performance issue.
In fact, in over 15 years of experience with SQL Server, I've never encountered a performance issue caused by the time it takes SQL Server to find the stored procedure it needs to run.

Execute .sql file from file system through SQL Server Agent

Is there a way to execute .sql scripts that are on my hard drive through SQL Server Agent without having to use xp_cmdshell or sqlcmd? I'm using SQL Server 2008.
I am looking for a simple solution like:
include 'c:\mysql\test.sql'
Thanks!
Maybe you should consider using a Windows scheduled task instead of SQL Server Agent. No matter how you read files off disk, it's going to come with the same type of restrictions preventing you from using xp_cmdshell or sqlcmd (assuming the reason isn't just fear).
Within modern versions of SSMS you can pick a .sql file and import it directly into the job step (effectively creating a copy that will be run instead).
As Aaron says, better to keep the code safely in the server & backed up.
You can always keep a script which creates the job & steps in a file, so that there is a one-shot creation (or in case you need to deploy across multiple servers).
If editing steps in the Agent dialog is a frustration, then perhaps this is a different alternative. Unless you make efforts to preserve it, though, you will lose job history if you simply recreate the job each time.
If it's just that the script is frequently amended with changes that need to be tested, then you could look at putting the xp_cmdshell/sqlcmd call into the job, although that makes it much more fragile with respect to file locations and access rights, and potentially makes your error handling a bit more work.
You will need to check that filesystem access is enabled - many sysadmins prefer it disabled, due to the risk of something uncontrolled being run.
So don't just assume it will work on servers that aren't yours!
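If you do go the sqlcmd route, here's a minimal sketch of what the job step could look like, using the CmdExec subsystem (the job, server, database, and file path names are hypothetical, and CmdExec steps typically require sysadmin rights or a proxy account):

-- Assumes the job itself was already created with sp_add_job.
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'RunExternalScript',        -- hypothetical job
    @step_name = N'Run test.sql with sqlcmd',
    @subsystem = N'CmdExec',
    @command   = N'sqlcmd -S MyServer -d MyDb -E -i "C:\mysql\test.sql"';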

Direct Sql or combine it in a procedure? which is more efficient

In my current project I have to run some queries through dynamic SQL, but I'm curious about
the efficiency of the different approaches:
1) combine the SQL statements on my server and then send them to the database to run the query
2) send my variables to the database, combine them in a stored procedure there, and finally run the query
Hope someone can help.
BTW, I use .NET and SQL Server.
Firstly, one of the main things you should do is to parameterise your SQL - whether that be by wrapping it up as a stored procedure in the DB, or by creating the SQL statement in your application code and then firing the whole thing into the DB. This will mean:
prevention against SQL injection attacks by not directly concatenating user-entered values into a SQL statement
execution plan reuse (subsequent executions of that query, regardless of parameter values, will be able to reuse the original execution plan - see the sketch below) (NB: even if you don't parameterise the query yourself, this can be achieved via Forced Parameterisation)
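A minimal sketch of a parameterised ad-hoc statement using sp_executesql (the table and column names are hypothetical):

-- The plan is cached against the parameterised text, so subsequent calls
-- with different @CustomerId values reuse it.
EXEC sp_executesql
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = 42;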
Stored procedures do offer some extra advantages:
security: you only need to grant EXECUTE permission on the stored procedures; you don't need to grant the user direct access to the underlying tables
maintainability: a change to a query does not involve an application code change; you can just change the sproc in the DB
network traffic: not necessarily a major point, but you're sending less over the wire, especially if the query is pretty large/complex
Personally, I use stored procedures most of the time. Though the times I need to build up SQL dynamically in application code, it is always parameterised.
Best is to use a stored procedure and pass parameters from your application, as the stored procedure's execution plan is cached and ready for reuse, which saves a lot of time.
You can refer to this URL, which has more details: http://mukund.wordpress.com/2005/10/14/advantages-and-disadvantages-of-stored-procedure/
Happy coding!!

Am I immune to SQL injections if I use stored procedures?

Let's say on a MySQL database (if it matters).
No, you will not be completely safe. As others have mentioned, parameterized queries are always the way to go -- no matter how you're accessing the database.
It's a bit of an urban legend that with procs you're safe. I think the reason people are under this delusion is that most people assume you'll call the procs with parameterized queries from your code. But if you don't - if, for example, you do something like the below - you're wide open:
SqlCommand cmd = new SqlCommand("exec myProc " + paramValue, con);
cmd.ExecuteNonQuery();
Because you're using unfiltered content from the end user. Once again, all they have to do is terminate the statement (";"), add their dangerous commands, and boom - you're hosed.
(As an aside, if you're on the web, don't take unfiltered junk from the query string of the browser -- that makes it absurdly easy to do extremely bad things to your data.)
If you parameterize the queries, you're in much better shape. However, as others here have mentioned, if your proc is still generating dynamic SQL and executing that, there may still be issues.
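As an illustration of that last point, here's a hedged sketch of how a procedure that must build dynamic SQL can still stay safe, by passing the input through sp_executesql rather than concatenating it (all names are hypothetical):

CREATE PROCEDURE dbo.SearchOrders      -- hypothetical procedure
    @CustomerName nvarchar(100)
AS
BEGIN
    -- The input is bound as a parameter, never concatenated into @sql,
    -- so a payload like '; DROP TABLE ... stays an ordinary string value.
    DECLARE @sql nvarchar(max) =
        N'SELECT OrderId FROM dbo.Orders WHERE CustomerName = @name';
    EXEC sp_executesql @sql, N'@name nvarchar(100)', @name = @CustomerName;
END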
I should note that I'm not anti-proc. Procs can be very helpful for solving certain problems with data access. But procs are not a "silver bullet" solution to SQL injection.
You are only immune to SQL injection if you consistently use parameterized queries. You are nearly immune to SQL injection if you use proper escaping everywhere (but there can be, and have been, bugs in the escaping routines, so it's not as foolproof as parameters).
If you call a stored procedure, adding the arguments by concatenation, I can still add a random query at the end of one of the input fields - for example, if you have CALL CheckLogin @username='$username', @password='$password', with the $-variables directly concatenated in, nothing stops me from changing the $password variable to read "'; DROP DATABASE; --".
Obviously, if you clean up the input beforehand, this also contributes to preventing SQL injection, but this can potentially filter out data that shouldn't have been cleaned.
It depends what your stored procs do. If they dynamically generate SQL based on their parameters, and then execute that SQL, then you're still vulnerable. Otherwise, you're far more likely to be fine - but I hesitate to sound 100% confident!
Nope. If you're constructing SQL that invokes a stored procedure, you're still a target.
You should be creating parameterized queries on the client side.
No, as you could still use dynamic SQL in your stored procedures... and validating and restricting your input is a good idea in any case.
Stored Procedures are not a guarantee, because what is actually vulnerable is any dynamic code, and that includes code inside stored procedures and dynamically generated calls to stored procedures.
Parameterized queries and stored procs called with parameters are both invulnerable to injection as long as they don't use arbitrary inputs to generate code. Note that there is plenty of dynamic code which is also not vulnerable to injection (for instance integer parameters in dynamic code).
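For instance, a hedged sketch of dynamic code that is not injectable, because the only outside input is an integer (the table name is hypothetical):

DECLARE @topN int = 10;  -- an int can't carry injected SQL text
DECLARE @sql nvarchar(200) =
    N'SELECT TOP (' + CAST(@topN AS nvarchar(10)) + N') * FROM dbo.Orders';
EXEC (@sql);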
The benefit of a largely (I'm not sure 100% is really possible) stored-proc-based architecture, however, is that injection can even be somewhat defended against (though not perfectly) for dynamic code at the client side, because:
Only EXEC permissions are granted to whatever user context the app connects under, so any SELECT, INSERT, UPDATE, or DELETE queries will simply fail. Of course, DROP etc. should not be allowed anyway. So any injection would have to be in the form of EXEC, meaning only operations you have defined in your SP layer (not arbitrary SQL) will even be available to inject against.
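A sketch of that permission model, using a hypothetical role name (with a common owner, ownership chaining lets the procs read the tables even though the role itself cannot):

CREATE ROLE AppRole;                          -- hypothetical app role
GRANT EXECUTE ON SCHEMA::dbo TO AppRole;      -- procs/functions only
DENY SELECT, INSERT, UPDATE, DELETE
    ON SCHEMA::dbo TO AppRole;                -- no direct table access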
Amongst the many other benefits of defining your database services as a set of stored procedures (like any abstraction layer in software) are the ability to refactor your database underneath without affecting apps, the ability to better understand and monitor the usage patterns in your database with a profiler, and the ability to selectively optimize within the database without having to deploy new clients.
Additionally, consider using fine-grained database access (also generally called Role-Based Access Control). The main user of your database should have exactly the permissions needed to do its job and nothing else. Don't need to create new tables after install? REVOKE that permission. Don't have a legitimate need to run as sysdba? Then don't! A sneaky injection instructing the user to DROP DATABASE will be stymied if the user has not been GRANTed that permission. Then all you need to worry about is data-leaking SELECT statements.