Let's say it's a MySQL database (if it matters).
No, you will not be completely safe. As others have mentioned, parameterized queries are always the way to go -- no matter how you're accessing the database.
It's a bit of an urban legend that with procs you're safe. I think the reason people are under this delusion is that most people assume you'll call the procs with parameterized queries from your code. But if you don't -- if, for example, you do something like the following -- you're wide open:
SqlCommand cmd = new SqlCommand("exec myProc " + paramValue, con); // vulnerable: user input is concatenated straight into the command text
cmd.ExecuteNonQuery();
Because you're using unfiltered content from the end user. Once again, all they have to do is terminate the statement (";"), add their dangerous commands, and boom -- you're hosed.
(As an aside, if you're on the web, don't take unfiltered junk from the query string of the browser -- that makes it absurdly easy to do extremely bad things to your data.)
If you parameterize the queries, you're in much better shape. However, as others here have mentioned, if your proc is still generating dynamic SQL and executing that, there may still be issues.
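To make that last point concrete, here is a minimal sketch (all object names are invented) contrasting a proc that concatenates its parameter into dynamic SQL with one that passes it through sp_executesql as a typed parameter:

    -- Still injectable: the parameter becomes part of the statement text.
    CREATE PROCEDURE dbo.GetUser_Unsafe @name nvarchar(50) AS
        EXEC('SELECT * FROM dbo.Users WHERE name = ''' + @name + '''');
    GO
    -- Safe: the dynamic statement is parameterized, so @name stays data.
    CREATE PROCEDURE dbo.GetUser_Safe @name nvarchar(50) AS
        EXEC sp_executesql N'SELECT * FROM dbo.Users WHERE name = @name',
                           N'@name nvarchar(50)', @name = @name;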
I should note that I'm not anti-proc. Procs can be very helpful for solving certain problems with data access. But procs are not a "silver bullet" solution to SQL injection.
You are only immune to SQL injection if you consistently use parameterized queries. You are nearly immune to SQL injection if you use proper escaping everywhere (but there can be, and have been, bugs in the escaping routines, so it's not as foolproof as parameters).
If you call a stored procedure, adding the arguments by concatenation, I can still add a random query at the end of one of the input fields - for example, if you have CALL CheckLogin @username='$username', @password='$password', with the $-things representing directly concatenated variables, nothing stops me from changing the $password variable to read "'; DROP DATABASE; --".
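Spelling out what the server actually receives in that case (using the invented names and syntax from the example above):

    -- The early quote closes the string, a second statement is injected,
    -- and the comment marker swallows the trailing quote:
    CALL CheckLogin @username='alice', @password=''; DROP DATABASE; --'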
Obviously, sanitizing the input beforehand also helps prevent SQL injection, but it can end up filtering out data that shouldn't have been removed.
It depends what your stored procs do. If they dynamically generate SQL based on their parameters, and then execute that SQL, then you're still vulnerable. Otherwise, you're far more likely to be fine - but I hesitate to sound 100% confident!
Nope. If you're constructing SQL that invokes a stored procedure, you're still a target.
You should be creating parameterized queries on the client side.
No, as you could still use dynamic SQL in your stored procedures... and validating and restricting your input is a good idea in any case.
Stored Procedures are not a guarantee, because what is actually vulnerable is any dynamic code, and that includes code inside stored procedures and dynamically generated calls to stored procedures.
Parameterized queries and stored procs called with parameters are both invulnerable to injection as long as they don't use arbitrary inputs to generate code. Note that there is plenty of dynamic code which is also not vulnerable to injection (for instance integer parameters in dynamic code).
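For instance (a sketch, table name invented), dynamic code built only from a typed integer has no injection surface, because an int variable cannot carry rogue SQL text:

    DECLARE @top int = 10;  -- typed: can only ever hold a number
    DECLARE @sql nvarchar(200) =
        N'SELECT TOP (' + CAST(@top AS nvarchar(10)) + N') * FROM dbo.Orders';
    EXEC (@sql);  -- safe despite the concatenation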
The benefit of a largely (I'm not sure 100% is really possible) stored-procs-based architecture, however, is that injection can even be somewhat defended against (though not perfectly) for dynamic code at the client side, because:
Only EXEC permissions are granted to whatever user context the app connects under, so any ad hoc SELECT, INSERT, UPDATE, or DELETE queries will simply fail. Of course, DROP etc. should not be allowed anyway. So any injection would have to come in the form of EXEC; ultimately, only the operations you have defined in your SP layer (not arbitrary SQL) will even be available to inject against.
Amongst the many other benefits of defining your database services as a set of stored procedures (like any abstraction layer in software) are the ability to refactor your database underneath without affecting apps, the ability to better understand and monitor the usage patterns in your database with a profiler, and the ability to selectively optimize within the database without having to deploy new clients.
Additionally, consider using fine-grained database access (also generally called Role-Based Access Control). The main user of your database should have exactly the permissions needed to do its job and nothing else. Don't need to create new tables after install? REVOKE that permission. Don't have a legitimate need to run as sysdba? Then don't! A sneaky injection instructing the user to "DROP DATABASE" will be stymied if the user has not been GRANTed that permission. Then all you need to worry about is data-leaking SELECT statements.
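A sketch of what that locking-down can look like in T-SQL (role and schema names are placeholders):

    CREATE ROLE app_role;
    GRANT EXECUTE ON SCHEMA::dbo TO app_role;
    -- Nothing else is granted, so ad hoc SELECT / INSERT / UPDATE / DELETE
    -- against the tables fails; only the defined proc layer is callable.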
Is there any way we can run a PowerShell script stored inside a SQL Server table from a SQL Server stored procedure?
There's the built-in sproc xp_cmdshell that can be used to do some...extremely hacky things. It can not only be used to issue command-line statements from within T-SQL, but also be used in conjunction with bcp to save the results of a select query to a file, as described in this article.
So yes, what you're asking is theoretically possible. You can construct a T-SQL statement to save the powershell script to a file, then call xp_cmdshell a second time to execute the file you've just saved. (Caveat: I've successfully done each of these individually for a couple of projects, but never combined the two. Your mileage may vary.)
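For what it's worth, the combined version might look something like the sketch below -- untested as a whole, and every table, column, and path name is an assumption:

    -- (xp_cmdshell must first be enabled via sp_configure)
    DECLARE @cmd varchar(4000);
    SET @cmd = 'bcp "SELECT ScriptText FROM MyDb.dbo.Scripts WHERE Id = 1" '
             + 'queryout "C:\Temp\script.ps1" -c -T';
    EXEC master..xp_cmdshell @cmd;  -- step 1: save the script to disk
    EXEC master..xp_cmdshell
        'powershell.exe -ExecutionPolicy Bypass -File "C:\Temp\script.ps1"';  -- step 2: run it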
Whether you should actually do this, though, is another matter. There are two things to consider:
Most developers will consider the use of this (admittedly rather convoluted) logic within a stored procedure a nasty hack, as it is difficult to debug/maintain. How does the caller determine if the script completed or not? How does the caller track that the command was correctly issued at all? How does the next developer maintain this when things are breaking?
There are security considerations as well. Obviously the file will only be saved, and its script will only execute, if the xp_cmdshell calls are issued by a user with enough rights to do so. Opening these rights up could potentially open some security holes... and resolving issues with rights not being adequate could be equally challenging.
An answer to this question states that "Ideally, though, you would not allow ad hoc DML against your tables, and control all DML through stored procedures."
Why is this ideal? What problem does this solve versus GRANTing SELECT, UPDATE, INSERT, DELETE to the tables which a user needs to manipulate?
Views and stored procedures are like an API. They allow you to hide the implementation, version changes, provide security and prevent unfortunate client operations like "get all fields from all invoices since the company started".
Views and stored procedures allow the DBA to modify the schema to meet performance requirements without breaking applications. The data may need to be partitioned horizontally or vertically, split across files, fields may have to be added or removed etc. Without stored procedures, these changes would require extensive application changes.
By controlling the stored procedures and views each application uses you can provide versioning of schema changes - each application will see the database API it expects.
Permissions can also be assigned on specific stored procedures, allowing you to restrict what users can do without giving them full access to a table. E.g., you could allow regular employees to change only their own contact details in the employee table, but allow HR to make more drastic changes.
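A hedged sketch of that pattern (all names invented): two procedures over the same table, granted to different roles, with no direct table access for either:

    GRANT EXECUTE ON dbo.UpdateMyContactDetails TO employee_role;
    GRANT EXECUTE ON dbo.UpdateEmployeeRecord TO hr_role;
    -- Note: neither role is granted UPDATE on dbo.Employee itself.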
Additionally, you can encapsulate complex data-intensive operations in a single testable procedure, instead of managing raw SQL statement strings inside a client's source code.
Stored procedure execution can also be tracked a lot easier with SQL Server Profiler or with dynamic management views. This allows a DBA to find the hotspots and culprits for possible performance degradation.
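For example, on SQL Server 2008 or later, a quick look at the hottest procedures via the built-in DMV:

    SELECT TOP (10)
           OBJECT_NAME(object_id, database_id) AS proc_name,
           execution_count,
           total_worker_time
    FROM sys.dm_exec_procedure_stats
    ORDER BY total_worker_time DESC;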
I would believe this follows the general idea that actions should have defined allowable inputs and return the expected output. The idea of ad hoc changes to the data or database structure poses risks on several levels, though it may sometimes be necessary.
zerkms noted in the comments: there are no absolutes, only best practices. As a best practice, if you can correctly scope the intended outcomes and restrict users and processes to only the necessary actions and permissions, you should do so for the safety and integrity of the system.
There are a few solid reasons why you should use stored procedures:
Stored procedures are compiled once and stored in executable form, so procedure calls are quick and efficient. Executable code is automatically cached and shared among users.
Stored procedures help guard against SQL injection (when called with parameters, and when they don't themselves build dynamic SQL from raw input).
A set of queries in a stored procedure is executed with a single call. This minimizes the use of slow networks, reduces network traffic, and improves round-trip response time (for example, in the case of bulk inserts).
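As a sketch of that round-trip point (type, proc, and table names invented; requires SQL Server 2008+), a table-valued parameter lets a single call carry an arbitrary number of rows:

    CREATE TYPE dbo.OrderList AS TABLE (OrderId int, Amount money);
    GO
    CREATE PROCEDURE dbo.InsertOrders @Orders dbo.OrderList READONLY AS
        INSERT INTO dbo.Orders (OrderId, Amount)
        SELECT OrderId, Amount
        FROM @Orders;  -- one network round trip for the whole batch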
I often use stored procedures for data access, but I don't know which is better: a view or an SP.
Stored procedures and views are both compiled, and their execution plans are saved in the database. So please tell me which one is better for data access, and list the reasons why.
I searched Google for which one is best but found no satisfactory answer.
I disagree with Jared Harding when it comes to stored procedures causing application updates to be more difficult. It's much easier to update a stored procedure than to update application code, which might need to be recompiled and require you to kick users out of the system to make an update. If you write all your SQL in stored procedures, you'll probably only have to update the application half as often as you otherwise would. And a stored procedure can be updated with no disruption to the users.
I strongly recommend use of stored procedures over views or SQL written within application code. With parameterized stored procedures that use dynamic SQL built as a string and called with the secure sp_executesql function, you can write a single stored procedure for selecting data that can be used across multiple applications and reports. There is virtually no need to use a view unless you really need to block permissions on underlying tables. It's much better to create your basic SELECT query inside a stored procedure and then use parameter options to change how the results are filtered. For example, each parameter can add a different line to your WHERE clause. By passing no parameters, you get back the full unfiltered recordset. As soon as you need to filter your results by a different field, you only need to add one parameter and one line of code in that procedure, and then have any application that needs to make use of it pass in the parameter. By defaulting your stored procedure parameters to null, you don't have to change any applications which are calling that stored procedure for other purposes.
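A minimal sketch of that pattern (object names invented; the += syntax needs SQL Server 2008+): every parameter defaults to NULL and, when supplied, appends one parameterized condition to the WHERE clause:

    CREATE PROCEDURE dbo.SearchOrders
        @CustomerId int = NULL,
        @Status varchar(20) = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.Orders WHERE 1 = 1';
        IF @CustomerId IS NOT NULL SET @sql += N' AND CustomerId = @CustomerId';
        IF @Status IS NOT NULL SET @sql += N' AND Status = @Status';
        -- sp_executesql keeps the inputs as data, never as SQL text
        EXEC sp_executesql @sql,
             N'@CustomerId int, @Status varchar(20)',
             @CustomerId = @CustomerId, @Status = @Status;
    END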
Views and stored procedures serve entirely different purposes. Views are a convenient way to refer to a complex relational set (such as one that joins across many tables) as a flat table, without actually forcing the data to be materialized. You use a view to clean up SQL code. Your stored procedures can call views. Views are often used for permission control: you can grant a database user access to a view without granting them access to the underlying tables. This gives the user column-level permissions on the columns in the view, which is a far more granular method of permission control than granting access to whole tables.
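A sketch of that permission use (names invented): the view exposes only some columns, and only the view is granted:

    CREATE VIEW dbo.EmployeeDirectory AS
        SELECT Name, Department, Extension  -- Salary deliberately omitted
        FROM dbo.Employees;
    GO
    GRANT SELECT ON dbo.EmployeeDirectory TO staff_role;
    -- No SELECT is granted on dbo.Employees itself.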
Stored procedures are used to keep often-used functionality together as a unit. To be honest, SPs are falling out of favor among many programmers. While you are correct that SPs have their execution plans cached, dynamic SQL has had execution plan caching since SQL Server 2000 (I believe that's the correct version). The only speed gain you're going to get by going with SPs comes from sending less data over the network, and that's going to be extremely minimal. SPs tend to make code more brittle and force changes to the DB when the application changes don't really warrant it. For example, if you just want to change the conditions for which rows you're selecting, with SPs you're going to have to roll changes out to both the application and the database code. If you're using dynamic SQL or an ORM tool, you only need to make changes to the application, which simplifies deployment. There is absolutely a time and place for SPs, but they don't need to be your only method of interacting with the database.
Also, if you're worried about performance, you can materialize views (indexed views, in SQL Server terms), which reduces the need to repeatedly query the underlying tables. This can greatly enhance performance, provided you can accept the extra overhead on inserts/updates that materializing a view induces.
To speed up a query, you need properly defined indexes on the table. Within a stored procedure you can use parameters and implement your own logic; within a view you cannot.
Also: once a procedure is compiled, its execution plan is created and reused for every subsequent call, even as new data is inserted into the related tables, until we change the procedure code (or the plan is otherwise recompiled).
A view, by contrast, is re-evaluated against the current data every time you call it.
You can also do full transaction handling etc. within an SP.
In my recent project, I have to do some queries through dynamic SQL, but I'm curious about
the efficiency of the different approaches:
1) Build the SQL statements in my application server and then send them to the database to run the query.
2) Send my variables to the database, combine them in a stored procedure, and finally run the query.
Hope someone can help.
BTW, I use .NET and SQL Server.
Firstly, one of the main things you should do is to parameterise your SQL - whether that be by wrapping it up as a stored procedure in the DB, or by creating the SQL statement in your application code and then firing the whole thing into the DB. This will mean:
prevention against SQL injection attacks by not directly concatenating user-entered values into a SQL statement
execution plan reuse (subsequent executions of that query, regardless of parameter values, will be able to reuse the original execution plan) (NB: even if you haven't parameterised the statement yourself, this can be achieved via Forced Parameterisation)
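If you can't parameterise every statement yourself, that database-level fallback looks like this (the database name is a placeholder):

    ALTER DATABASE MyDb SET PARAMETERIZATION FORCED;
    -- SQL Server will then replace literals in ad hoc statements with
    -- parameters itself, so their plans can be reused.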
Stored procedures do offer some extra advantages:
security: you only need to grant EXECUTE permissions on the stored procedures; you don't need to grant the user direct access to the underlying db tables
maintainability: a change to a query does not involve an application code change; you can just change the sproc in the DB
network traffic: not necessarily a major point, but you're sending less over the wire, especially if the query is pretty large/complex
Personally, I use stored procedures most of the time. When I do need to build up SQL dynamically in application code, it is always parameterised.
Best is to use a stored procedure and pass parameters from your application, as stored procedures are precompiled queries and have an execution plan ready, which saves a lot of time.
You can refer to this URL, which has the details: http://mukund.wordpress.com/2005/10/14/advantages-and-disadvantages-of-stored-procedure/
Happy coding!!
I am now facing all of those disadvantages of stored procedures (database portability etc.). We are going to migrate our VB.Net/ASP/SQL Server application to something like Mono/PostgreSQL and fully internationalize.
One of the many issues we face is that we have 800-900 stored procedures. What we are thinking of doing is moving the logic in these SPs into the application code. I have searched around for how people have done this, or how it can be done, but I have found very little. Most of the info on stored procedures is about what should be in them, etc. So I have come to the conclusion that the best approach is to leave the plain retrieve/update-type queries as stored procedures and move the ones with business logic into the application.
So my question is what is the best approach to take in doing something like this (there may not be a best approach but some suggestions of where to start would be good)?
Thanks
Liam
Since your stored procedures provide an interface to your database, you now need to ensure that all the calls to the stored procedure are going to the same client code. If you currently call the SP from several places in the application, or even multiple applications (possibly multi-modal), they will all need to go through common code. I would start by ensuring that each SP is called from only one place in your client-side library. Then all that logic from the stored proc is going to be encapsulated by that method call. If your stored procs use transactions to ensure integrity for complex operations, those transactions will now need to be initiated and committed in the application-side library.
That refactoring is ultimately necessary, and there are benefits to that even if you aren't able to port all the SPs.
I also encountered the same situation when migrating an ASP.NET + SQL Server application to a Postgres database.
Instead of migrating the SPs to your application code, it would be a lot easier to convert them to equivalent PL/pgSQL functions.
All you will need to do is change the ADO.NET provider, and you won't need any changes in the application code. There are providers for Postgres (e.g. Npgsql) which treat SPs and Postgres functions as equivalent.