I'm looking for the easiest way to view what users are logging into my database. We have some old user accounts that might not be getting used anymore. Instead of just turning them off and seeing who complains, I thought there might be some way to monitor who logs in and runs some type of query over the next month or so. What would be the easiest way to monitor and track this kind of activity?
Edit:
I would like to do this for all databases on the server.
To see who's connecting, you can use logon triggers, which let you log access. Running a trace for a month or two to audit login events may simply not work if you fail over, restart SQL Server, etc.
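A minimal sketch of a logon trigger that writes each login to a table (the table, trigger, and column names are placeholders; try this on a non-production instance first, because a broken logon trigger can lock everyone out):

    USE master;
    GO
    -- Placeholder audit table; every connecting login needs INSERT permission
    -- on it (or create the trigger WITH EXECUTE AS a suitable login)
    CREATE TABLE dbo.LoginAudit
    (
        LoginName sysname       NOT NULL,
        HostName  nvarchar(128) NULL,
        AppName   nvarchar(128) NULL,
        LoginTime datetime      NOT NULL DEFAULT (GETDATE())
    );
    GO
    CREATE TRIGGER trg_LoginAudit
    ON ALL SERVER
    FOR LOGON
    AS
    BEGIN
        INSERT INTO master.dbo.LoginAudit (LoginName, HostName, AppName)
        VALUES (ORIGINAL_LOGIN(), HOST_NAME(), APP_NAME());
    END;
    GO

Because the trigger is created at server scope, it covers every database on the instance; after a month, any login that never appears in the table is a candidate for retirement.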
However, to see what someone is doing after they connect, you'll really have to use Profiler, as Mitch said.
Run a Profiler trace with the Audit Login event selected, or just use the Standard trace template (and perhaps limit the trace size).
See Using SQL Server Profiler
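If you only want a quick point-in-time look at who is connected right now (rather than a month of history), the session DMV is enough; this is just a snapshot, so it won't catch logins that connect and disconnect between checks:

    -- Who is connected right now, and from where
    SELECT login_name,
           host_name,
           program_name,
           login_time
    FROM   sys.dm_exec_sessions
    WHERE  is_user_process = 1
    ORDER BY login_time DESC;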
The easiest way to do this would be with a third-party tool that's custom-written to do the work for you. Otherwise you have to fuss with (not SQL Profiler but) traces, regularly loading the resulting data and processing it, and for my money that just is not an "easy" thing to do.
Not much help. The reason I'm posting is that just because someone (or something) hasn't logged in for a day, a week, or a month does not mean the account has gone derelict; I would only consider it an indication. I would recommend that once you've identified an account as potentially derelict, you disable it and see what happens. Give that a month, a quarter, or even a year (depending on your system) before actually deleting it.
(Of course, tracking that information over a month/quarter/year is yet more fuss and bother. Ideally, all accounts get created with deactivation/deletion rules, and their users/owners are informed of the rules under which they get to access the system. This probably won't help you now, but keep it in mind for the next system you design.)
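When a login does look derelict, disabling it (rather than dropping it) is a one-line, instantly reversible step; the login name below is just a placeholder:

    -- Disable a suspected-derelict login without deleting it
    ALTER LOGIN [OldAppUser] DISABLE;

    -- Re-enable it if someone complains
    ALTER LOGIN [OldAppUser] ENABLE;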
We use InterBase 2020 as our production DB with UTF8 (approx. 250 simultaneous users). With this database we have two main problems that we have not been able to solve.
In the past we had a problem with an older UDF that crashed our database because it was not ready for Unicode string operations. As a result we switched to Unicode-compatible versions.
For the last few years we sometimes get a "hiccup" (as we call it): every client loses its connection and the guardian restarts. The clients can reconnect without us doing anything.
The second problem is that sometimes InterBase does not crash, but everyone loses their connection and it is not possible to reconnect (from our client, or from IBExpert for example). In this case we have to restart the whole server.
These problems occur irregularly. Most of the time it starts with a hiccup; some time later (maybe two to ten hours), the second problem arrives and we need to restart our database. If we are lucky we only have to restart the server 2-3 times; on a bad day the second problem returns again and again (for example every 30 minutes) and we have to restart more often.
We have not yet been able to locate the problem. It doesn't matter whether users are connected to the database or it is just sitting idle over a weekend; it also often happens when nobody is connected.
Even the server logs haven't given us any hints that helped so far.
- We minimized UDF use as much as possible and switched to newer UDFs that support Unicode, etc.
- Functions that (as far as we know) can crash the server are guarded so they don't receive, for example, invalid datetimes.
- We regularly update the database server to the newest version.
- We also updated the client DLLs.
- We also updated the connection components (IBDAC) and Delphi 11.1.
- We wrote an exception tracker into our client software (unfortunately it only ever reports the connection-lost error).
- We regularly check active transactions for anything that hangs, loops, or holds up snapshot creation.
Do you have any information we could use to solve these problems? Is there any way to get more information out of the log files (are other log levels possible)? We don't want to log every procedure call unless we have to, but if there are no other options we will.
Thanks for your help!
Matze,
I suggest you log a Case with our Support team at Embarcadero (https://www.embarcadero.com/support). They will work with you to understand the specifics of the crash, get relevant details (and Performance Monitoring information) from you, and help us work on a resolution (if not addressed already in our latest update).
We have addressed a few corner cases (and other crash reports) in many updates over the past couple years in InterBase 2020, and are eager to get to the bottom of this issue as well. You can see some of the resolved crash reports at https://docwiki.embarcadero.com/InterBase/2020/en/Resolved_Defects
Supporting 250 simultaneous users is not the problem, but understanding how the use cases are running into any potential system resource limits is important.
You do mention that you have the latest updates to InterBase 2020, but I do not see a build number in your message. You can get the most recent update build (14.4.0.804) of the server (if on Windows) from https://my.embarcadero.com/#downloadDetail/1383
I have two database servers, one for main usage and one for standby. A log file is written every day. Last week I noticed that my log files are growing and are using about 12 GB of disk space. I am wondering whether I need to create a crontab scheduled task to delete old logs, say every one or two months. Will that affect my system, or should I just back them up?
Another question: if I am doing streaming replication, will the size of the pg_log files on the standby server be the same as on the primary server?
You may want to look at https://www.postgresql.org/docs/current/static/logfile-maintenance.html for some ideas of what you can do.
Personally, I think it's easiest to rotate logs cyclically, like what's described in this answer: https://dba.stackexchange.com/a/133443. This method also helps you find what you're looking for more quickly, since your logs are separated by day. Of course, if you rotate logs based on the day name and have new logs overwrite the old, you're limiting how far back you can go in the logs, which can be detrimental. But if you're consistently having to go back more than a week to check logs, you might need a more proactive monitoring system in place to alert you when problems are happening.
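As a concrete sketch of that day-name rotation (the values are just one reasonable choice; adjust them to your retention needs, and note that turning on the logging collector requires a server restart):

    -- One log file per weekday, overwritten the following week
    ALTER SYSTEM SET logging_collector = on;
    ALTER SYSTEM SET log_filename = 'postgresql-%a.log';
    ALTER SYSTEM SET log_truncate_on_rotation = on;
    ALTER SYSTEM SET log_rotation_age = '1d';
    SELECT pg_reload_conf();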
As for your second question, I don't have a definitive answer, but I'd expect the log file sizes to be similar, though probably not identical. Different servers might be set up with different logging verbosity, and there may be some log messages unique to each server.
Sorry for the long introduction, but before I ask my question I think the background will help in understanding our problem.
We are using SQL Server 2008 as the backend for our web services, and from time to time it takes far too long to respond to requests that are supposed to run really fast, e.g. more than 20 seconds for a SELECT against a table that has only 22 rows. We went through many potential causes, from indexes to stored procedures, triggers, etc., and tried to optimize whatever we could, like removing indexes that are rarely read but frequently written, or adding NOLOCK to our SELECT queries to reduce locking of the tables (we are OK with dirty reads).
We also had our DBAs review the server and benchmark the components for bottlenecks in CPU, memory, or the disk subsystem, and they found that hardware-wise we are OK as well. And since the spikes occur only occasionally, it is really hard to reproduce the problem in production or development, because most of the time when we rerun the same query it yields the short response times we expect, not the long ones experienced earlier.
Having said that, I have been suspicious about I/O from the start, even though it does not seem to be a bottleneck. I think I was finally able to reproduce the problem after running an index fragmentation report for a specific table on the server, which immediately caused spikes not only in requests against that table but also in requests that query other tables. Since the database and the server are shared with other applications we use, and long-running queries against them are a common scenario for us, my suspicion about an occasional I/O bottleneck is, I believe, becoming a fact.
Therefore I want to find a way to prioritize requests coming from the web services so that they are processed even while other resource-intensive queries are running. I have been looking for this kind of prioritization since the very beginning of the troubleshooting process, and found that SQL Server 2008 has a feature called Resource Governor that allows requests to be prioritized.
However, since I am neither an expert on Resource Governor nor a DBA, I would like to hear from people who have used or are using Resource Governor, and to know whether I can prioritize I/O for a specific login or a specific stored procedure. (For example, if an I/O-intensive process is running at the time we receive a web service request, can SQL Server stop, or slow down, the I/O activity for that process and give priority to the request we just received?)
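For context, the basic setup I have pieced together from the documentation looks roughly like this (all names are placeholders and I have not tried it yet); as far as I can tell, in SQL Server 2008 the limits apply to CPU and memory rather than to disk I/O directly:

    USE master;
    GO
    -- Pool and workload group for the web service sessions (placeholder names)
    CREATE RESOURCE POOL WebServicePool
        WITH (MIN_CPU_PERCENT = 50, MAX_CPU_PERCENT = 100);

    CREATE WORKLOAD GROUP WebServiceGroup
        WITH (IMPORTANCE = HIGH)
        USING WebServicePool;
    GO
    -- Classifier function: route sessions to the group by login name
    CREATE FUNCTION dbo.fnWebServiceClassifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'WebServiceLogin'
            RETURN N'WebServiceGroup';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnWebServiceClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;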
Thanks in advance to anyone who spends time reading this or helping out.
Some Hardware Details:
CPU: 2x Quad Core AMD Opteron 8354
Memory: 64GB
Disk Subsystem: Compaq EVA8100 series (I am not sure, but it should be RAID 0+1 across 8 HP HSV210 SCSI drives)
PS: I am almost 100 percent sure that the application servers are not causing the problem; there is no bottleneck we can identify there.
Update 1:
I'll try to answer the questions that gbn asked below as best I can. Please let me know if you are looking for something else.
1) What kind of index and statistics maintenance do you have please?
We have a weekly job that defragments indexes every Friday. In addition, Auto Create Statistics and Auto Update Statistics are enabled. The spikes also occur at times other than when the fragmentation job runs.
2) What kind of write data volumes do you have?
Hard to answer. In addition to our web services, there is a front-end application that accesses the same database, and to my knowledge resource-intensive queries periodically need to be run against it. However, I don't know how to measure, say, the daily or weekly write volume to the DB.
3) Have you profiled Recompilation and statistics update events?
Sorry, I wasn't able to figure this one out; I didn't understand what you are asking here. Can you provide more information for this question, if possible?
My first thought is that statistics are being updated because the data-change threshold is reached, causing execution plans to be rebuilt.
1) What kind of index and statistics maintenance do you have, please? Note: index maintenance updates index stats, not column stats; you may need separate stats updates (see the sketch after this list).
2) What kind of write data volumes do you have?
3) Have you profiled recompilation and statistics update events?
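For the column statistics point in 1), something along these lines (the table name is a placeholder) shows when each statistic was last updated and refreshes column stats separately from the index job:

    -- When were the statistics on this table last updated?
    SELECT s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM   sys.stats AS s
    WHERE  s.object_id = OBJECT_ID('dbo.YourTable');

    -- Refresh column statistics independently of index maintenance
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN, COLUMNS;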
In response to question 3) of your update to the original question, take a look at the following reference on SQL Server Pedia. It explains what query recompiles are and also goes on to explain how you can monitor for these events. What I believe gbn is asking (feel free to correct me, sir :-) ) is whether you are seeing recompile events prior to the slow execution of the troublesome query. You can look for this by using SQL Server Profiler.
Reasons for Recompiling a Query Execution Plan
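If you would rather not leave Profiler running all the time, a rough sanity check of the recompile counters is also possible from T-SQL; these counters are cumulative since the last restart, so take two readings some minutes apart and compare:

    -- Cumulative compile/recompile counters since the last restart
    SELECT counter_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%SQL Statistics%'
      AND  counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');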
I have 2 websites connecting to the same instance of MSSQL via classic ASP. Both websites are similar in nature and run similar queries.
One website chokes up every once in a while, while the other website is fine. This leads me to believe MSSQL is not the problem; otherwise I would expect the bottleneck to occur in both websites simultaneously.
I've been trying to use Performance Monitor in Windows Server 2008 to locate the problem, but since everything is in aggregate form, it's hard to find the offending ASP page.
So I am looking for some troubleshooting tips...
Is there a simple way to check all recent ASP pages and see the amount of time they ran for?
Is there a simple way to see live page requests as they happen?
I basically need to track down this offending code, but I am having a hard time seeing what is happening in real time through IIS.
If you use "W3C Extended Logging" as the log mode for your IIS logfiles, then you can switch on a column "time-taken" which will give you the execution time of each ASP in milliseconds (by default, this column is disabled). See here for more details.
You may find that something in one application is taking a lock in the database (e.g. through a transaction) and then not releasing it, which causes the other app to time out.
Check your code to make sure transactions are being closed, and possibly consider setting up tracing on the SQL Server to log deadlocks.
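For the deadlock-logging part, one lightweight option (rather than a full trace) is the deadlock trace flag, which writes deadlock details to the SQL Server error log; it stays in effect only until the next restart unless added as a startup parameter:

    -- Log detailed deadlock information to the SQL Server error log
    DBCC TRACEON (1222, -1);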
Your best bet is to run SQL Server Profiler to see which procedure or SQL statement may be taking a long time to execute. You can also use Process Monitor to spot any pages that take a long time to finish executing, and finally don't forget to check your IIS logs.
Hope that helps
I'm quite new to SQL Server and was wondering what the difference is between the SQL Server log and a custom log (in my case, using log4net). I guess there's more choice in what to log with log4net, but what things are logged by the database automatically? For example, if a user signs up to my site, would I have to log that transaction manually, or would it be recorded in the database's log automatically? I'm just starting a project and would like to figure out exactly what I should bother logging.
Thanks
Apples and Oranges.
Log4net and other custom 'logging' is just a way to capture events an application is reporting. 'Log' in this context refers to whatever store this infrastructure uses to persist information about those events.
The database log, on the other hand, is something completely different. In order to maintain consistency and atomicity, databases use a so-called write-ahead log (WAL) protocol. With WAL, all changes are first durably written into a journal, or log, before being applied to the data. This allows recovery to replay the log (the journal) and get the data back into a consistent state, rolling back any uncommitted work.
Database logs have absolutely nothing to do with your application code. Any database update will be automatically logged by the engine, simply because this is how any data is updated in a database. You cannot change that, nor do you have any real access to what's written in the log (strictly speaking you can look into the log, but you won't find any useful information for your application).
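(If you are curious, the undocumented and unsupported fn_dblog function lets you peek at the raw log records, which mostly just proves how little application-friendly information is in there:)

    -- Undocumented/unsupported: raw records from the active transaction log
    SELECT TOP (20) [Current LSN], Operation, Context, [Transaction ID]
    FROM   fn_dblog(NULL, NULL);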
The SQL Server log handles transaction logging for rolling back or committing data. It is usually only dealt with by someone who knows what they are doing, when restoring backups or shipping the logs to use for backups.
log4net and other logging frameworks handle in-code logging of exceptions, warnings, or debug-level info that you want to output for your own purposes. The output can be sent to a table in a database, a console window, a flat file, or a web service. Common logging scenarios are catching unhandled exceptions at the application level to help track down bugs, or writing out the stack trace in try/catch statements.
It keeps track of the transactions so it can roll them back or replay them in case of a crash. That is quite a bit more involved than simple logging.
The two are almost completely unrelated.
A database log is used to rollback transactions, recover from crashes, etc. All good things to ensure database consistency. It has updates/inserts/deletes in it--not really anything about intent or what your app is trying to do unless it directly affects data in the database.
The application log on the other hand (with Log4Net) can be extremely useful when building and debugging your application. It is driven by you and should contain information that traces what your app is doing. This is something that can safely be turned off or reduced (by toggling the log level) when you no longer need it.
The SQL Server log file is actually used for maintaining its own stability, but it's not terribly useful for normal developers. It's not what you might think (and what I thought): a list of SQL statements that have been run. It's a proprietary format designed to help SQL Server recover from a crash or roll back transactions.
If you need to track what's going on in the system, the SQL transaction log won't be helpful, and it would be very difficult to get that information back out. Instead, I would suggest adding triggers on your tables that write information off to another table, or adding some code in your data layer that saves a log of what's going on. It could be as simple as wrapping the SQL command object with your own implementation that saves SQL statements off to log4net in addition to executing them as normal.
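A minimal sketch of the trigger approach, with placeholder table and column names; each row inserted into the watched table gets copied to an audit table:

    -- Placeholder schema: dbo.Users(UserId, ...) is the table being watched
    CREATE TABLE dbo.UserAudit
    (
        UserId    int         NOT NULL,
        Action    varchar(10) NOT NULL,
        AuditedAt datetime    NOT NULL DEFAULT (GETDATE())
    );
    GO
    CREATE TRIGGER trg_Users_Audit
    ON dbo.Users
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.UserAudit (UserId, Action)
        SELECT UserId, 'INSERT'
        FROM   inserted;
    END;
    GO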
It is the mechanism by which the RDBMS can ensure atomicity and consistency; see ACID.