I used an SQLite database and ran an EXPLAIN statement before executing the actual query to verify whether there was any attempt to write to the database.
Now we have migrated to SQL Server, and I need to know whether a query tries to write to the database or is just a simple SELECT statement. Basically, I am trying to avoid any malicious statement.
You can see the estimated query plan of any query in SSMS by clicking the Display Estimated Execution Plan button.
See MSDN.
However, if the user shouldn't be writing to the database, it shouldn't have the permissions to do so. Ensure it belongs to a role that has restricted permissions.
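For example, a minimal sketch of that kind of lockdown (the user and login names are placeholders; db_denydatawriter explicitly denies INSERT/UPDATE/DELETE even if some other role would allow them):
-- run in the target database
CREATE USER report_user FOR LOGIN report_login;
ALTER ROLE db_datareader ADD MEMBER report_user;       -- read-only access to all tables
ALTER ROLE db_denydatawriter ADD MEMBER report_user;   -- deny writes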
If you do decide to go the query-plan route instead, you could do the following:
set showplan_xml on   -- return the plan as XML instead of running the statements
go
set noexec on         -- compile statements without executing them
go
select * from sysobjects
go
set noexec off
go
set showplan_xml off
go
This will return three result sets, each containing a single column of XML. The second result set is the query plan for the actual query (in this case, select * from sysobjects).
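If you go down this path, here is a rough sketch of how the captured plan XML could then be checked for write operators (it assumes your client code has loaded that second result set into an XML variable; the LogicalOp values are the standard showplan names for data-modification operators):
-- assumption: @plan holds the showplan XML captured by the client
declare @plan xml;
-- ... load the second result set from the batch above into @plan ...
select has_write = @plan.exist('
    declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    //RelOp[@LogicalOp="Insert" or @LogicalOp="Update" or @LogicalOp="Delete"]');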
But as noted in my comment, you'd be better off preventing the user having permissions to make any changes.
It's also possible to craft statements that are "only" selects but that are also pretty malicious. I could easily write a select that exclusively locks every table in the database and takes an hour to run.
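For illustration, here is a sketch of the kind of "read-only" statement that warning is about (the table name and delay are made up):
begin transaction
-- tablockx + holdlock takes an exclusive table lock and holds it until the transaction ends
select count(*) from dbo.Orders with (tablockx, holdlock)
-- sit on the lock for an hour
waitfor delay '01:00:00'
commit transaction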
I am trying to find out the differences in the way you can define which database to use in SSMS.
Is there any functional difference between using the 'Available Databases' drop down list
(screenshot: the Available Databases dropdown with AdventureWorks selected),
the database being defined in the query
SELECT * FROM AdventureWorks2008.dbo.Customers
and
stating the database at the start?
USE AdventureWorks2008
GO
SELECT * FROM dbo.Customers
I'm interested to know if there is a difference in terms of performance or something that happens behind the scenes for each case.
Thank you for your help
Yes, there is. A very small overhead is added when you use USE AdventureWorks2008, because it is executed against the database every time you run the query. It also prints "Command(s) completed successfully." However, the overhead is so small that, if you are OK with the extra message, you can simply ignore it.
Yes, there can be a difference.
When you execute a statement like SELECT * FROM AdventureWorks2008.dbo.Customers in the context of another database (not AdventureWorks2008), that other database's settings are applied.
First of all, every database has its Compatibility Level, which can differ and can limit the use of some code; for example, you cannot use the APPLY operator in the context of a database with compatibility level 80, but you can in a database with compatibility level >= 90.
Second, every database has its own set of options, such as AUTO_UPDATE_STATISTICS_ASYNC and Forced Parameterization, that can affect your query plan.
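A quick way to compare those settings across databases is sys.databases, e.g.:
select name,
       compatibility_level,
       is_parameterization_forced,
       is_auto_update_stats_async_on
from sys.databases;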
I have encountered some cases where the database context influenced the plan:
One case was when I created a filtered index for a table. It was used in the plan when I executed my query in the context of a database with Simple parameterization, but it was not used for the same query executed in the context of a database with Forced parameterization. When I used a hint to force that index, I got an error that the query plan could not be produced due to the query hint, so I had to investigate. It turned out my query was being parameterized: instead of my condition fld = 0 there was fld = @p, and the optimizer could not use my filtered index with its fld = 0 condition.
The second case was regarding table cardinality estimation: we use staging tables to load the data in our ETL procedures and then switch them to the actual tables like this:
insert into stg with (tablock)
...
truncate table actual;
alter table stg switch to actual;
All the staging tables are empty when the procedure compiles, but within the proc they are filled with data, so by the time we join them they are not empty anymore. Passing from 0 rows to non-0 rows triggers a statement recompilation that should take the actual number of rows into consideration, but that did not happen on the production server, so all the estimations were completely wrong (1 row for every table) and I had to investigate. The cause was AUTO_UPDATE_STATISTICS_ASYNC set to ON in the production database.
Now imagine you have two databases, db1 and db2, with this option set to ON and OFF respectively. In db1 this code will get wrong estimations, while if you execute it in db2 (referencing db1.dbo.stg) it will get correct estimations. The execution time can be very different in these two databases.
I am writing code that supports different versions of Sybase ASE. I am using union queries, and the problem is that different versions of Sybase ASE support different numbers of tables in a union query. The union query is dynamic and is formed depending on the number of databases present on the server.
Is there any way to find the maximum number of tables supported in a union query by a particular Sybase ASE version? The only solution I know right now is to fetch the version with a query, pick the version number out of the result, and set the limit accordingly in the code. But this is not a very good solution. I tried checking whether there are any tables that store this value, but nothing came up. Can anyone suggest a solution for this?
Since that's my SAP response you've re-posted here, I'll add some more notes ...
that was a proof of concept that answered the basic question of how to get the info via T-SQL; it was assumed anyone actually looking to implement the solution would (eventually) get around to addressing the various issues re: overhead/maintenance, eg ...
setting a tracefile is going to require permissions to do it; which permissions depends on whether or not you've got granular permissions enabled (see the notes for the 'set tracefile' command in the Reference manual); you'll need to decide if/how you want to grant the permissions to other users
while it's true you cannot re-use the tracefile, you can create a proxy table for the directory where the tracefile exists, then 'delete' the tracefile from the directory, eg:
create proxy_table tracedir external directory at '/tmp'
go
delete tracedir where filename = 'my_serverlimits'
go
if you could have multiple copies of the proxy table solution running at the same time then you'll obviously (?) need to make sure you generate a unique tracefile name for each session; while you could do this by appending @@spid to the file name, you could also add the login name (suser_name()), the kpid (select KPID from master..monProcess where SPID = @@spid), etc; you'll also want to make sure such a file doesn't exist before trying to create it (eg, delete tracedir where filename = '.....'; set tracefile ...)
your error (when selecting from the proxy table) appears to be related to your client application running in transaction isolation level 0 (which, by default, requires a unique index on the table ... not something you're going to accomplish against a proxy table pointing to an OS file); try setting your isolation level to 1, or use a client application that doesn't default to isolation level 0 (eg, that example runs fine with the basic isql command line tool)
if this solution were to be productionalized then you'll probably want to get a separate filesystem allocated so that any 'run away' tracing sessions don't fill up an important filesystem (eg, /var, /tmp, $SYBASE, etc)
also from a production/security perspective, I'd probably want to investigate the possibility of encapsulating a lot of the details in a DBA/system proc (created to execute under the permissions of the creator) so as to ensure developers can't create tracefiles in the 'wrong' directories ... and on and on and on re: control/security ...
Then again ...
If you're going to be doing this a LOT ... and you're only interested in the max number of tables in a (union) query, then it'd probably be much easier to just build a static if/then/else (or case) expression that matches your ASE version with the few possible numbers (see RobV's post).
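A minimal sketch of that hard-coded mapping (the @@version patterns are illustrative only and should be adjusted to the strings your servers actually return; the limits are the ones RobV quotes below):
declare @max_union_tables int

select @max_union_tables =
       case
           when @@version like '%16.0 SP01%' then 1023  -- 16.0 SP01 (add patterns for later SPs as needed)
           when @@version like '%16.0%'      then 512   -- 16.0 GA
           else                                   256   -- 15.7 and earlier
       end

select @max_union_tables as max_union_tables
go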
Let's face it, how often are you really, Really, REALLY going to be building a query with more than, say, 100 tables, let alone 500, 1000, or more? [You really don't want to deal with trying to tune such a monster!! YIKES] Realistically speaking, I can't see any reason why you'd want to productionalize the proxy table solution just to access a single row from dbcc serverlimits when you could just implement a hard limit (eg, max of 100 tables).
And the more I think about it, as a DBA I'm going to do whatever I can to make sure your application can't create some monster, multi-hundred table query that ends up bogging down my dataserver simply because the developer couldn't come up with a more efficient solution. [And heaven forbid this type of application gets rolled out to the general user community, ie, I'd have to deal with dozens/hundreds of copies of this monster running in my dataserver?!?!?!]
You can get such limits by running 'dbcc serverlimits' (enable traceflag 3604 first).
Up until version 15.7, the maximum was 256.
In 16.0, this was raised to 512.
In 16.0 SP01, this was raised again to 1023.
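To check the value on a particular server interactively (traceflag 3604 redirects dbcc output to your session instead of the errorlog):
dbcc traceon(3604)
go
dbcc serverlimits
go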
I suggest you open a case/ticket with SAP support to ask whether there is any system table that stores this information. If there is none, I would implement the tedious solution you mentioned and monitor for the following error in the ASE 15.7 logs:
CR 805525 -- If you exceed the number of tables in a UNION query you can get a signal 11 in ord_getrowbounds instead of an error message.
This is the answer that I got from the SAP community:
-- enable trace file for your spid
set tracefile '/tmp/my_serverlimits' for @@spid
go
-- dump dbcc serverlimits output to your tracefile
dbcc serverlimits
go
-- turn off tracing
set tracefile off for @@spid
go
-- enable external file access:
sp_configure 'enable file access',1
go
-- create proxy table pointing at the trace file
create proxy_table dbcc_serverlimits external file at '/tmp/my_serverlimits'
go
-- find our column name ('record' of type varchar(255) in this case)
sp_help dbcc_serverlimits
go
-- extract the desired row; store the 'record' value in a #variable
-- and parse for the desired info ...
select * from dbcc_serverlimits where lower(record) like '%union%'
go
record
------------------------------------------------------------------------
Max number of user tables overall in a statement using UNIONs : 512
There are some problems with this approach though. The first issue is setting the trace file. I am going to use this code mostly daily, and in Sybase I think we can't delete or overwrite a trace file. The second is regarding the proxy table. The proxy table will have to be dropped, but this can be taken care of with the following code:
if exists (select 1 from sysobjects
           where type = 'U' and name = 'dbcc_serverlimits')
begin
    drop table dbcc_serverlimits
end
go
The final problem comes when a select query is issued against the dbcc_serverlimits table. It throws the following error:
Could not execute statement. The optimizer could not find a unique
index which it could use to scan table 'dbo.dbcc_serverlimits' for
cursor 'jconnect_implicit_26'. SQLCODE=311 Server=************,
Severity Level=16, State=2, Transaction State=1, Line=1 Line 24
select * from dbcc_serverlimits
All these commands will have to be wrapped in a procedure (that is what I am thinking). Is there a more elegant solution?
I have a query that returns a huge number of rows, and I am using SELECT INTO (instead of INSERT INTO) to avoid having problems with the transaction log.
The problem is: while this query is running, I can read objects but cannot browse them in Object Explorer. When I try to expand the Tables node, for example, I receive the message below:
Is there a way to avoid this problem?
As M.Ali explained, SELECT INTO has a table lock on your new table, which is also locking the schema objects that SSMS is trying to query in order to build the tree browser.
I would suggest tuning the query so that the statement can run faster. Since this is inserting into a Heap with no indexes and has the tablock, it will be minimally logged as you stated. So it is likely the SELECT part of the statement that is causing things to be slow. See if that query can be optimized or broken into smaller pieces so that the statement does not run so long.
Alternatively, perform the insert in smaller batches using INSERT INTO (and not specifying the tablock hint).
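A rough sketch of that batched approach (table names, the Id key column and the batch size are placeholders; it assumes the source has a unique Id column to drive the batches):
-- create an empty copy up front so the schema-modification work is short-lived
SELECT TOP (0) *
INTO dbo.TargetTable
FROM dbo.SourceTable;

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- copy the next slice of rows that have not been moved yet
    INSERT INTO dbo.TargetTable
    SELECT TOP (100000) s.*
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
END;
In practice you would probably key each batch on an Id range instead of NOT EXISTS, but the idea is the same: shorter transactions and less aggressive locking.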
Now here is a test for you which will give an answer to your question...
Open a query window in SSMS. Write any query which will return any number of rows, it could be only one row or maybe ten, and do as follows.
Query window 1
BEGIN TRANSACTION;
SELECT *
INTO NEW_Test_TABLE
FROM TABLE_NAME
Query Window 2
Now open another window and write a SELECT statement against this NEW_Test_TABLE.
SELECT * FROM NEW_Test_TABLE
Your query will never finish executing and no results will be returned (at this time NEW_Test_TABLE only exists in the buffer cache) unless you go back to your first query window and commit the transaction. And if you go to query window 1 and ROLLBACK TRANSACTION, NEW_Test_TABLE will have existed once in the buffer cache and will no longer exist anywhere.
Similarly, while your SELECT INTO statement is being executed nothing is committed to disk, so SSMS cannot see the new table, nor can it show you any information about it via Object Explorer.
So the answer is: while the query is being executed, be patient and let SQL Server commit the SELECT INTO transaction to disk; then you will be able to access the table by querying it or via Object Explorer.
About 5 times a year one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update, we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a t-sql update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper WHERE clause. Unless your SQL Server is terribly broken, the only way data gets updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a WHERE clause is missing, but at a minimum you should be able to figure out where the data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic SQL statement executed, e.g., assign a 3-character code to each program and then number each possible call 1..nn within the program, so you can tell which call blew up your data at "abc123" as well as the exact SQL that was defective.
ADDED COMMENT
Thought of this later. You might be able to add or modify the UPDATE trigger on the table to look at the number of rows updated and prevent the update if it exceeds a threshold that makes sense for you. So I did a little searching and found that someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int
    SET @Count = @@ROWCOUNT;

    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE OBJECT_ID = OBJECT_ID('Purchasing.VendorContact')
                    AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows',16,1)
        ROLLBACK TRANSACTION
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show who executed the command, when, and exactly what the command looked like.
Which log explorer do you use? If you are using ApexSQL Log you need to enable connection monitor feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
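If you do go the auditing route, a minimal sketch looks something like this (all names and the file path are placeholders; the server audit is created in master and the specification in the affected database):
USE master;
GO
CREATE SERVER AUDIT CriticalTableAudit
TO FILE (FILEPATH = 'C:\Audits\');
GO
ALTER SERVER AUDIT CriticalTableAudit WITH (STATE = ON);
GO
USE YourDatabase;
GO
-- capture every UPDATE against the critical table, whoever issues it
CREATE DATABASE AUDIT SPECIFICATION CriticalTableUpdates
FOR SERVER AUDIT CriticalTableAudit
ADD (UPDATE ON OBJECT::dbo.YourCriticalTable BY public)
WITH (STATE = ON);
GO
-- later: read back the captured events
SELECT event_time, server_principal_name, database_principal_name, statement
FROM sys.fn_get_audit_file('C:\Audits\*.sqlaudit', DEFAULT, DEFAULT);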
Can they (malicious users) describe tables and get vital information? What about if I lock the user down to specific tables? I'm not saying I want SQL injection, but I wonder about old code we have that is susceptible while the db user is locked down. Thank you.
EDIT: I understand what you are saying, but if I have no Response.Write for the other data, how can they see it? The bringing-the-database-to-a-crawl and DoS points make sense, as do the others, but how would they actually see the data?
Someone could inject SQL to cause an authorization check to return the equivalent of true instead of false to get access to things that should be off-limits.
Or they could inject a join of a catalog table to itself 20 or 30 times to bring database performance to a crawl.
Or they could call a stored procedure that runs as a different database user that does modify data.
'); SELECT * FROM Users
Yes, you should lock them down to only the data (tables/views) they should actually be able to see, especially if it's publicly facing.
Only if you don't mind arbitrary users reading the entire database. For example, here's a simple, injectable login sequence:
select * from UserTable where userID = 'txtUserName.Text' and password = 'txtPassword.Text'
if(RowCount > 0) {
// Logged in
}
I just have to log in with any username and a password of ' or '1'='1 to log in as that user.
Be very careful. I am assuming that you have removed drop table, alter table, create table, and truncate table, right?
Basically, with good SQL Injection, you should be able to change anything that is dependent on the database. This could be authorization, permissions, access to external systems, ...
Do you ever write data to disk that was retrieved from the database? In that case, they could upload an executable like perl and a perl file and then execute them to gain better access to your box.
You can also determine what the data is by leveraging a situation where a specific return value is expected, i.e. if the SQL returns true, execution continues; if not, execution stops. Then you can use a binary search in your SQL: select count(*) from user_table where user_password > 'H'; if the count is > 0, execution continues. Now you can find the exact plain-text password without it ever being printed on the screen.
Also, if your application is not hardened against SQL errors, there might be a case where they can inject an error into the SQL, or into the SQL built from the result, and have the result displayed on the screen by the error handler. The first SQL statement collects a nice list of usernames and passwords; the second statement tries to use them in a SQL condition for which they are not appropriate. If the SQL statement is displayed in this error condition, ...
Jacob
I read this question and its answers because I was in the process of creating a SQL tutorial website with a read-only user that would allow end users to run any SQL.
Obviously this is risky and I made several mistakes. Here is what I learnt in the first 24 hours (yes, most of this is covered by other answers, but this information is more actionable).
Do not allow access to your user table or system tables:
Postgres:
REVOKE ALL ON SCHEMA PG_CATALOG, PUBLIC, INFORMATION_SCHEMA FROM PUBLIC
Ensure your readonly user only has access to the tables you need in the schema you want:
Postgres:
GRANT USAGE ON SCHEMA X TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA X TO READ_ONLY_USER
Configure your database to drop long running queries
Postgres:
Set statement_timeout in the PG config file
/etc/postgresql/(version)/main/postgresql.conf
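As an alternative to the server-wide config file setting, the timeout can also be applied just to the sandbox role (role name as used above):
ALTER ROLE READ_ONLY_USER SET statement_timeout = '5s';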
Consider putting the sensitive information inside its own Schema
Postgres:
GRANT USAGE ON SCHEMA MY_SCHEMA TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA MY_SCHEMA TO READ_ONLY_USER;
ALTER USER READ_ONLY_USER SET SEARCH_PATH TO MY_SCHEMA;
Take care to lock down any stored procedures and ensure they cannot be run by the read-only user
Edit: Note that by completely removing access to the system catalogs you no longer allow the user to make calls like cast(). So you may want to run this again to allow access:
GRANT USAGE ON SCHEMA PG_CATALOG to READ_ONLY_USER;
Yes, continue to worry about SQL injection. Malicious SQL statements are not just about writes.
Imagine as well if there were linked servers, or the query was written to access cross-database resources, e.g.
SELECT * from someServer.somePayrollDB.dbo.EmployeeSalary;
There was an Oracle bug that allowed you to crash the instance by calling a public (but undocumented) method with bad parameters.