On our production database, which runs on Oracle, I want to see the number of queries that get fired.
The reasoning behind this is that we want to see the number of network calls we make and the impact a firewall could have if we move to a cloud system.
select sum(EXECUTIONS)
from v$sql
where last_active_time >= trunc(sysdate)-2
and (parsing_schema_name like '%\_RW%' escape '\' or parsing_schema_name = 'TEMP_USER')
and module not in ('DBMS_SCHEDULER')
and sql_text not like '%v$sql%';
The above query doesn't seem very reliable, because v$sql only returns what is still in memory, and SQL statements get pushed out of the shared pool over time.
Is there any way to get the number of calls we make on our Oracle DB from the database itself? Logging from all the applications is not a feasible option at the moment.
Thanks!
"we want to see the number of network calls we make and the impact firewall could make if we move it to a cloud system"
The number of SQL statements executed is only tangentially related to the amount of network traffic. Compare the impact of select * from dual with select * from humongous_table.
A better approach might be to talk with your network admin and see what they can tell you about the traffic your applications generate. Alternatively, download Wireshark and see for yourself (provided your security team is OK with that).
Just to add some information on V$SQL views:
V$SQLAREA has the lowest retention, and shows the current SQL in memory, parsed, and ready for execution.
V$SQL has better retention, and is updated every 5 seconds after query execution.
V$SQLSTATS has the best retention, and retains SQL even after the cursor has been aged out of the shared pool.
You don't want to run these queries too often on busy production databases, as they can add to shared pool fragmentation.
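If you only need a rough count that survives cursors being aged out a little longer, here is a minimal sketch against V$SQLSTATS. Note that V$SQLSTATS does not expose PARSING_SCHEMA_NAME or MODULE, so those filters would have to come from a join back to V$SQL:
select sum(executions)
from v$sqlstats
where last_active_time >= trunc(sysdate) - 2
and sql_text not like '%v$sql%';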
Related
I've been using Access to rapid-prototype a DB. Now I'd like to do a small group online test so I split the DB and placed the back-end on Azure SQL Server, then re-linked. It's incredibly slow and I've been researching solutions for days without positive results. My local environment is Win10, Office2016 64bit and internet connection is fast and stable.
I have tried different ODBC drivers, including the SQL Native Client v11.
I've disabled auto-tuning level on the NIC.
I've recreated all queries from access on the server.
I've made sure that Tracing in ODBC is off.
But I enabled tracing temporarily to see what was happening. If I open the front-end, log in (small user table), do something on the first form (add 1 record with 3 sub-records... really nothing fancy or heavy at all, and this alone takes 1 minute) and then close the DB, I see that the tracing log file is 1.5 MB.
So I created an empty Access file and an ODBC link to only the User table (12 columns, 20 records), and then monitored the tracing log file again. Opening Access doesn't add anything to the log file, but opening this one linked table made the log file grow to 255 KB. Opening this table in Access took 5 seconds.
Access sent about 800 requests to the server just to open this one small table. If I paste all the User table data into a text file, it's only 2 KB. So why is it so slow?
Any ideas on this, and specifically other suggestions to get this working faster?
Kind regards,
Well, the reason using Azure is slower than running Access against a local instance of SQL Server is that, well, slow is slow!
I mean, if you're going to travel 30 miles, you have a choice: walk, or take a car.
So here is the question you need to ask:
Why is walking slower than driving a car?
Answer: Because you are travelling at a slower speed!
So why is using Azure slower than using an instance of SQL Server running on your local computer or local network?
Answer:
Because the connection speed to Azure is about 100 times slower!
Not taking into account the DIFFERENCE in connection speed is the issue here. It is a disservice to the reading public, who may conclude that such a setup (an Access front end on a PC talking to an Azure instance of SQL Server) is not viable.
So the first issue here is to make a note of your connection speed to the back end database.
A typical office local area network has a speed of 100mbits, or today most are 1gig – even the el-cheapo routers you purchase at Best Buy are now rated at 1gig (1000 mbits).
However, your typical high-speed internet connection is rated at about 5 or 10 mbits. So that is 100 times slower (actually 1000/5 = 200 times slower!).
That means if something takes 3 seconds NOW on your office network with Access and SQL Server, then over a WAN (the internet) you need to multiply that time by the change in your connection speed (this is so simple, yet it seems to escape everyone!). So, if you're lucky, you might have a 5 mbit rating for your internet. That means you get
1000 / 5 = 200
You now take that 200 and multiply your existing delay of, say, 3 seconds, and you get 600 seconds (that is 10 minutes, if you are wondering!). So you go from 3 seconds to 10 minutes!
This kind of comparison in speed would be like walking into a sports shop to purchase a rubber boat to cross the Atlantic. So not taking into account the change in internet speed and wondering why things are slow is the issue here.
You can most certainly use Access to Azure, but you have to realize two simple concepts.
A test over a connection that is 50-200 times slower than your LAN is a test that is going to run 50 to 200 times slower! The failure to mention and take into consideration the MASSIVE DIFFERENCE between the speed of your LAN and that of a WAN is the simple issue here.
Opening a form bound to a large table of data is going to cause performance issues.
I was sitting at the bus stop talking to a 90 year old granny lady. I asked her the following:
Have you ever used an instant teller?
She said, why yes, I use them all the time.
I then asked her: don't you think it would be bad to have the teller machine download all the people's accounts while you wait, and THEN ask you for your account number?
The old lady stated, of course, that would be silly. I type in my account pin and the machine ONLY downloads my account information – this is practical and obvious.
In other words, that old lady realised that downloading a bunch of data BEFORE the user even types in or does anything is a waste of bandwidth.
So you never want to launch a form bound to a table and THEN ask the user what record to work on. Why have Access download large numbers of records into a form and THEN ask the user or allow the user to navigate to the required record?
Even Google does not download the whole internet into your web browser page so that you can then hit Ctrl+F to search the contents of that page.
The same concepts should be applied to Access applications. A design that asks for what to work on and then launches a form bound to a table with a "where" clause will thus fix this issue.
So if you have a form (and even a sub form) that displays a customer invoice, you would FIRST ASK FOR the invoice number, and then simply launch that form using a where clause that restricts the form to the ONE invoice!
Keep in mind that you can STILL use that invoice form bound to a table of 1 million rows, and ONLY THE ONE record will be pulled down the network connection, if you use the where clause.
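To make that concrete, here is roughly what crosses the network in each case (tblInvoice and InvoiceNumber are hypothetical names):
-- form bound straight to the table: potentially every row comes down the wire
SELECT * FROM tblInvoice;
-- form opened with a where clause: only the one invoice is pulled down
SELECT * FROM tblInvoice WHERE InvoiceNumber = 12345;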
So a typical internet connection has adequate speed to run a browser, and also has MORE than adequate bandwidth speed to pull down a few records. Access often gets a bad rap for poor performance, but that is ONLY DUE to Access developers IGNORING the obvious advice that downloading tons of things that you don’t yet need into a form will run slow.
So web based applications, or even desktop applications written in vb.net perform well with SQL Azure running in the cloud over that MUCH slower internet connection because those applications don’t launch forms bound to large datasets WITHOUT FIRST simply allowing the user to request what they need to see and view.
As for Access with SharePoint? That setup can be VERY fast, and in fact MUCH faster than SQL Azure, MySQL or any traditional database system, because when you use SharePoint tables with Access, Access automatically syncs a copy of the data locally. This means your application will continue to run WITHOUT ANY internet connection. The instant the connection is restored, the data sync resumes.
This means that if you have a table with 15,000 rows and run a report on that data, the report can launch in an instant with a SharePoint back end, since a local copy of the data exists in the front end at ALL TIMES! So this setup is VERY well suited to offline use, or to cases where you have a poor and slow internet connection, since as noted you always have a local copy of the data – only when a record is changed does a sync occur, and that sync can occur independently of Access. So you change one record, and it starts syncing with SharePoint.
However, for larger data sets that have to be updated, SQL Server is far better, since you can execute a SQL update on 10,000 rows (a pass-through query) and ZERO network transfer of data need occur to update those rows. With SharePoint, the 10,000 rows WILL transfer over the network, since the local copy requires the rows to be updated. So that massive advantage of using SharePoint for the database back end does not exist for applications that have to update lots of rows or do lots of row-update style data processing.
So the key concepts and take away here:
The high speed internet connection you have is often 10-200 times slower than your typical cheap office (local) network. So that means a 2 second operation will now take 10-200 times longer.
The Access application needs to be optimized to avoid things like loading too many records into a form. Building search forms etc. that FIRST ASK the user what they need to work on is a basic and simple requirement for all good developers, and that includes Access developers.
Access and SharePoint can be the BEST option, and such a setup allows the application to run EVEN WHEN there is no internet connection at all. If table sizes are below, say, 10,000 rows, this setup can often be ideal. However, for applications that have to update lots of rows and for data-processing-heavy applications this setup is poor, since updates to any rows will cause data syncing to occur over the network. This setup is also the cheapest, since a single Office 365 account with SharePoint support for Access can be had for $6 per month, and that $6 account allows up to 500 free users, and those 500 users can even use their Gmail or non-Microsoft accounts for this setup. And Access applications that do fit within the bounds of SharePoint tables tend to need far fewer changes and less optimizing than when using SQL Server over the internet.
With SQL Server, the use of views, pass-through queries and in some cases stored procedures allows updates and code to run WITHOUT using ANY bandwidth to move the data. So one can send a single update query to the server that updates 10,000 rows of data – the only network cost will be the "tiny" amount of bandwidth needed to send that SQL statement.
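As a sketch of that idea (the table and column names here are made up), a single pass-through query like this updates every matching row entirely on the server; the only traffic is the statement text itself:
UPDATE dbo.tblOrders
SET Archived = 1
WHERE OrderDate < '2015-01-01';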
So while bound forms can be used with SQL Azure running in the cloud, one needs to build the software the way web or vb.net developers do: FIRST ask the user which account or customer to work on, and THEN launch the UI to display that data.
So in Access you build a search form: a simple unbound form that prompts for, say, the invoice or customer number, and only then opens the data form restricted by a where clause.
So at the end of the day, it is important to ignore posts here that suggest Access to SQL in the cloud is not viable. Access with proper designs will work rather well over typical internet connections to SQL Server running on Azure.
In fact I've seen people use Access to SQL Server over a 56k modem!
One has to adopt sensible designs in which the data pulled for a given task is restricted – this is a hallmark of all good developers. The only issue is that Access does NOT enforce this approach, while most other developer tools don't even let you hang yourself with things like forms bound to large tables! It's not that Access is slow; Access is slow when you make poor design decisions.
Access to SharePoint can be a real winner – especially for poor or spotty bandwidth; even when the connection is lost the application will continue to run, and in 99% of cases it will run faster than the same application with a SQL back end. There is a BIG caveat here, since only certain types of applications work well with SharePoint tables. Explaining the why, how, and when of such applications is beyond a simple post here; one simply needs to be aware that SharePoint can be an incredible solution, but not for all applications, and in many cases SQL Server can and will be the better choice. Whether SharePoint is the "better" choice can only be determined by a case-by-case evaluation of the application in question.
The problem is simply that Azure SQL Database is not very fast running with small DTUs (Database Transaction Units) compared to, say, an in-house instance of SQL Server hosted on even a moderate modern server.
I've checked it out too, and it requires extremely careful design of queries and filtering – far from what you normally can get away with – to obtain acceptable overall speed. On the other hand, this is an instructive experience that brings focus to potential bottlenecks you otherwise wouldn't encounter until it might be too late.
OK, so after almost a week of trying to get this to work (Access front-end to SQL Server back-end on Azure), I've come to the conclusion that it's not a viable solution.
I've tried SQL Server, and set up a SharePoint 2016 server on Azure, which also failed.
What has worked is using a product from Bullzip called MS Access to MySQL to convert the Access tables, then adding a MySQL DB on the server and importing the file generated by Bullzip. The only thing to note here is that Bullzip doesn't like the newer Access formats (it wants an MDB file), so go to Access, create a new, empty file, but make sure you set its file type to MDB. Then import your tables across and run Bullzip.
It's now working a hell of a lot faster than the SQL Server, but I am getting some write conflicts in Access, so I just need to go through the code and do whatever I need to so I can avoid those messages.
Using Access as a front end to Azure SQL tables is the worst solution. But sometimes you have to do it. I have a client who is adamant that she wants to keep her Access database. When she hired her very first employee, it became clear she needed SQL tables behind the scenes.
This was a bit of a nightmare. However, after redesigning some terrible table structures, creating views and many procs, I've been able to do it. I use local tables in some cases, and refill by pulling from a stored proc and inserting into the local table. I use linked tables for basic data edits, and do explicit save records almost constantly.
I also have a first-load module that opens all forms, goes to the last record, back to the first record, and then hides the form until needed. The load limps along for about 3
My only remaining issue is now that Azure will close connections after idle time of (I think) 30 or more minutes -- or maybe it's when the laptop sleeps? That kills the app and it has to be closed and re-opened.
We have a ColdFusion Enterprise server with 2 instances. Each instance has 200+ data-sources to databases on one MSSQL server. This number will keep on growing. Now it seems that requests to a single data-source are getting slower even though the database is small. Is it possible that requests get slower when CF has more data-sources?
Are the datasources partitioned for a reason (e.g. different clients/customers, etc.)? If this is really just a big application with a bunch of databases, you may be able to reduce the number of DSNs through cross-database queries through a single CF datasource.
If the account CF is using to connect to SQL Server has read access to both databases on the server, you can do something like this:
SELECT field1, field2, field3...
FROM [databaseA].[dbo].Table1 T1
JOIN [databaseB].[dbo].Table2 T2 ON ...
I've done this with State and Country tables that are shared across multiple DBs. Set the permissions carefully to prevent damage or errant updates.
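For example, a (hypothetical) shared State lookup table used from a client database could be joined like this:
SELECT c.CustomerName, s.StateName
FROM [ClientA_DB].[dbo].Customers AS c
JOIN [SharedLookups].[dbo].States AS s
    ON s.StateID = c.StateID;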
Of course it's possible. I doubt there are many people with this kind of experience, so we can only guess.
Personally I'd never create that many databases in SQL Server, nor that many datasources in CF. IMHO using DB schemas would be a much better solution: easier to maintain, administer and so on.
How's the situation with memory? It could be that the huge number of JDBC connections is choking the server. I'd check memory consumption first, then SQL stats to see data throughput, and maybe later even SQL Server's performance settings, CF settings for the maximum concurrent JDBC connections, network settings and so on.
Again, just guessing and trying to give you a hint where to look.
There's more to it than just ColdFusion. Each connection costs about 4 KB, and each datasource can use multiple connections. So 200 DSNs might equal 300 or 400 connections (or 800 or 1,000 when aggregated). The DB server itself uses tempdb as a workspace for handling requests. It expands this workspace to handle the traffic, but it is a shared resource of sorts, so one DB can have an impact on another DB on the same server.
I would:
Check the total number of connections on the SQL server (perfmon has some good counters for this; there is also a quick DMV sketch after this list)
Use server monitor to get a sense of the total number of connections on each instance.
Use network monitoring to determine what capacity the network connection on each server is using...
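For that first check, here is a rough sketch against the dynamic management views (SQL Server 2005 and later; sysprocesses is the older compatibility view, but it still gives a handy per-database breakdown):
SELECT COUNT(*) AS total_connections
FROM sys.dm_exec_connections;

SELECT DB_NAME(dbid) AS database_name, COUNT(*) AS connection_count
FROM master.dbo.sysprocesses
WHERE dbid > 0
GROUP BY dbid
ORDER BY connection_count DESC;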
Of course it goes without saying that your databases need to be fine tuned to perform as well (indexed and optimized - with a good schema and backstopped by good query code). Creating a scalable solution requires all of these things :)
PS - it goes without saying you can contact me for more "formal" help. I'll be glad to chat about your problem.
I've got a scenario where sometimes a user selects the right combination of parameters and fires a query which takes several minutes or more to execute. I cannot prevent them from selecting such a combination (it's perfectly legitimate), so I'd like to set a timeout on the query.
Note that I really want to stop the query execution itself and roll back any transactions, because otherwise it hogs most of the server's resources. Add an impatient user who restarts the application and tries the combination again, and you've got a recipe for disaster (read: a SQL Server DoS).
Can this be done and how?
As far as I know, apart from setting the command or connection timeouts in the client, there is no way to change timeouts on a query-by-query basis on the server.
You can indeed change the default 600-second timeout using sp_configure, but that setting is server-scoped.
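For reference, a sketch of that server-scoped change (this appears to be the 'remote query timeout' option, which defaults to 600 seconds and only applies to queries issued by remote sources such as linked servers):
EXEC sp_configure 'remote query timeout', 1200;  -- value in seconds, 0 = no timeout
RECONFIGURE;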
Hmm!
Did you try LOCK_TIMEOUT?
Note down what it was originally before running the query,
set it for your query,
and after running your query set it back to the original value:
SET LOCK_TIMEOUT 1800;
SELECT @@LOCK_TIMEOUT AS [Lock Timeout];
I might suggest 2 things.
1)
If your query takes a lot of time because it's using several tables that might involve locks, a quick solution is to run your queries with the NOLOCK hint.
Simply use SELECT * FROM YourTable WITH (NOLOCK) on all your table references, and that will prevent your query from blocking on concurrent transactions.
2) If you want to be sure that all of your queries run in (let's say) less than 5 seconds, you could add what @talha proposed, which worked sweetly for me.
Just add at the top of your execution
SET LOCK_TIMEOUT 5000; --5 seconds.
That will make your query either complete in less than 5 seconds or fail. You should then catch the exception and roll back if needed.
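A minimal sketch of that pattern on SQL Server 2005+ (the table and column names are hypothetical):
SET LOCK_TIMEOUT 5000;  -- stop waiting for locks after 5 seconds
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.YourTable SET SomeColumn = 1 WHERE SomeKey = 42;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- error 1222 = "Lock request time out period exceeded"
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;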
Hope it helps.
In Management Studio you can set the execution time-out in seconds (note this only applies to queries you run from Management Studio itself):
menu Tools => Options, set the field, and then OK
It sounds like more of an architectural issue, and any timeout/disconnect you can do would be more or less a band-aid. This has to be solved on the SQL Server side, by way of a read-only replica, transaction log shipping (to give you a read-only server to connect to), replication and such. Basically you provide a DMZ SQL server that heavy reads can go to without killing anything. This is very common. A well-designed SQL system won't be taken down by DDoS – that'd be like a car that dies if you step on the gas.
That said, if you are at liberty to change the code, you could guesstimate whether the query is too heavy and either reject it or return only X rows in your stored procedure. If you are tied to some reporting tool and can't control the SELECT it generates, you could point it to a view and then build the safety valve into the view.
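A rough sketch of that safety-valve view (all names are invented); the reporting tool is pointed at the view rather than the base table, so it can never drag back more than the TOP cap:
CREATE VIEW dbo.vw_SalesSafe
AS
SELECT TOP (10000) *
FROM dbo.Sales
ORDER BY SaleDate DESC;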
Also, if up-to-the-minute freshness isn't critical and you can compromise on that – monthly sales data, for example – then pre-compiling the complex joins into a physical table via a scheduled job might do the trick; that way every query would be sub-second.
It entirely depends on what you are doing, but there is always a solution. Sometimes it takes extra coding to optimize it, sometimes it takes extra money to get you the secondary read-only DB, sometimes it needs time and attention in index tuning.
So it entirely depends, but I'd start with "what can I compromise? what can I change?" and go from there.
You can set Execution time-out in seconds.
If you have just one query, I don't know of a way to set a timeout at the T-SQL level.
However, if you have a few queries (e.g. collecting data into temporary tables) inside a stored procedure, you can control the execution time yourself with GETDATE(), DATEDIFF() and a few variables storing the elapsed time of each part.
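A minimal sketch of that approach inside a stored procedure (the source table and the 300-second budget are placeholders):
DECLARE @start DATETIME;
SET @start = GETDATE();

-- part 1: stage data into a temp table
SELECT OrderID, Amount
INTO #stage
FROM dbo.Orders
WHERE OrderDate >= '20150101';

-- give up if the time budget is already spent
IF DATEDIFF(SECOND, @start, GETDATE()) > 300
BEGIN
    RAISERROR('Time budget exceeded, aborting remaining steps.', 16, 1);
    RETURN;
END

-- part 2: further processing would continue here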
You can specify the connection timeout within the SQL connection string, when you connect to the database, like so:
"Data Source=localhost;Initial Catalog=database;Connect Timeout=15"
On the server level, use MSSQLMS to view the server properties, and on the Connections page you can specify the default query timeout.
I'm not quite sure that queries keep on running after the client connection has closed. Queries should not take that long either; MSSQL can handle large databases, and I've worked with GBs of data on it before. Run a performance profile on the queries; perhaps some well-placed indexes could speed them up, or rewriting the query could too.
Update:
According to this list, SQL timeouts happen when waiting for attention acknowledgement from server:
Suppose you execute a command, then the command times out. When this happens the SqlClient driver sends a special 8 byte packet to the server called an attention packet. This tells the server to stop executing the current command. When we send the attention packet, we have to wait for the attention acknowledgement from the server and this can in theory take a long time and time out. You can also send this packet by calling SqlCommand.Cancel on an asynchronous SqlCommand object. This one is a special case where we use a 5 second timeout. In most cases you will never hit this one, the server is usually very responsive to attention packets because these are handled very low in the network layer.
So it seems that after the client connection times out, a signal is sent to the server to cancel the running query too.
For a particular apps I have a set of queries that I run each time the database has been restarted for any reason (server reboot usually). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards.
One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?
It could be done the hard way by having a script scan sysobjects & sysindexes and run SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so end up in RAM. But is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so a priming process that slows users down until it completes – more than an unprimed working set would – is not an issue.
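For what it's worth, a rough sketch of that "hard way" against the newer catalog views (a second loop over sys.indexes would be needed to also touch each nonclustered index):
DECLARE @t NVARCHAR(512), @sql NVARCHAR(1000);
DECLARE tab CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
    FROM sys.tables;
OPEN tab;
FETCH NEXT FROM tab INTO @t;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- INDEX(0) forces a scan of the heap/clustered index, pulling its pages into the buffer cache
    SET @sql = N'SELECT COUNT_BIG(*) FROM ' + @t + N' WITH (INDEX(0))';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM tab INTO @t;
END
CLOSE tab;
DEALLOCATE tab;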
I'd use a startup stored proc that invoked sp_updatestats
It will benefit queries anyway
It already loops through everything anyway (you have indexes, right?)
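A minimal sketch of wiring that up as a startup procedure (the proc has to live in master, and YourAppDb is a placeholder database name):
USE master;
GO
CREATE PROCEDURE dbo.usp_PrimeOnStartup
AS
BEGIN
    -- run sp_updatestats in the application database (placeholder name)
    EXEC ('USE YourAppDb; EXEC sp_updatestats;');
END
GO
EXEC sp_procoption 'dbo.usp_PrimeOnStartup', 'startup', 'on';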
After a large SQL query built by my ASPX pages is run, I see the following two items listed in SQL Profiler.
Event Class          TextData                                        ApplicationName           CPU   Reads  Writes
SQL:BatchCompleted   Select N'Testing Connection...'                 SQLAgent - Alert Engine   1609  0      0
SQL:BatchCompleted   EXECUTE msdb.dbo.sp_sqlagent_get_perf_counters  SQLAgent - Alert Engine   1609  96     0
This CPU figure is the same as my query's, so does that query actually take 1609*3 = 4827?
The same thing happens with the Audit Logout event.
Can I limit this? I am using SQL Server 2005.
First of all, some of what you see in SQL Profiler is cumulative, so you can't always just add the numbers up. For example, an SP:Completed event will show the total time of all the SP:StmtCompleted events that make it up. Not sure if that's your issue here.
The only way to improve the CPU is to actually improve your query. Make sure it's using indexes, minimize the number of rows read, etc. Work with an experienced DBA on some of these techniques, or read a book.
Only other mitigation I can think of is to limit the number of CPUs the query runs on (this is called Degree of Parallelism, or DOP). You can set this at the server level, or specify it at the query level. If you have a multiple processor server, this can ensure that a single long-running query doesn't take over all processors on the box--it will leave one or more processors free for other queries to run.
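A sketch of both levels (the table name is hypothetical, and 2 is just an example value):
-- per query
SELECT SUM(Amount) FROM dbo.BigTable OPTION (MAXDOP 2);

-- server-wide default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;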
No, it takes 1609 milliseconds of CPU in total. What is the duration?
I bet it's the same or slightly more, because I doubt SQL Agent queries use parallelism.
Are you trying to reduce the CPU used by background processes? If so, you can trade away functionality by disabling SQL Agent (no scheduled backups then, for example) and restarting SQL Server with the -x switch.
You also cannot stop "Audit Logout" events... this is simply what is recorded when you disconnect or close a connection.
However, are you maxing the processors? If so, you'll need to differentiate between "user" memory for queries and "system" memory used for paging or (god forbid) generating your parity on RAID 5 disks.
High CPU can often be solved by more RAM and a better disk config.
SQL Server 2008 has a new "Resource Governor" that may help. I don't know if you're using SQL Server 2008 or not but you may want to take a look here
This is a connection string issue. If Audit Logout takes too much of your CPU, then try playing with a different connection string.