I'm a final-year college student working on my thesis. My title is "Index Suggestion Based on Log Analysis". The project will analyze the PostgreSQL transaction log to give index recommendations for the database under test.
The research will develop an index recommender tool by analyzing which attributes are frequently accessed (via SELECT statements).
However, I'm finding it hard to locate the PostgreSQL log file. My question is: where can I find a PostgreSQL transaction log dataset, or perhaps a transaction log dataset from another database?
You are mixing up the transaction log (WAL) and the regular text log file.
The latter does contain statements (if the configuration enables it), while the transaction log doesn't contain statements at all, just binary information about what has changed in which block.
You won't be able to recommend an index just from looking at the query; I can't do that either.
I have a suggestion for you: if you want to write a tool that suggests indexes, it should take the output of EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT /* your query */ as input.
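For example, a minimal sketch of the kind of input the tool would consume (the table and predicate are placeholders, not from the question):
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT *
FROM orders
WHERE customer_id = 42;
The JSON output contains the plan nodes, estimated and actual row counts, and buffer usage that a recommender could parse.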
Moreover, the tool will have to be connected to the database to query table and index metadata (and perhaps statistics). That makes you dependent on the database version, because metadata can change (and does – see partitioned tables), but that won't concern you so much in a thesis paper.
The task is still not simple (query optimization is essentially an AI problem), but then you at least have a chance.
A bit late to the party here, but the thing you would probably want in practice is pg_stat_statements. Use it to list the queries with the highest total_exec_time, and look at their query plans. Then you would consider adding indexes that would speed up joins or scans in those queries.
This should be possible to automate to some extent. Similarly, recommending indexes to drop should be possible using index usage statistics. Personally, I'd love to have a tool that makes this kind of suggestion automatically; it would be a great example of profile-guided optimization.
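A rough sketch of that approach (the total_exec_time and mean_exec_time columns assume PostgreSQL 13 or later; older versions call them total_time and mean_time):
-- Top statements by cumulative execution time (requires the pg_stat_statements extension).
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Candidate indexes to drop: user indexes that have never been scanned.
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;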
You need to run the statement below and then restart PostgreSQL to enable statement logging persistently. (A parameter set with ALTER SYSTEM SET is written to postgresql.auto.conf rather than postgresql.conf.)
ALTER SYSTEM SET log_statement = 'all';
And you need to run either of the statements below and then restart PostgreSQL to disable logging persistently:
ALTER SYSTEM RESET log_statement;
Or:
ALTER SYSTEM SET log_statement = 'none';
You can also run the statement below and then restart PostgreSQL to enable logging persistently (log_min_duration_statement = 0 logs every statement together with its duration):
ALTER SYSTEM SET log_min_duration_statement = 0;
And you can run either of the statements below and then restart PostgreSQL to disable that logging persistently:
ALTER SYSTEM RESET log_min_duration_statement;
Or:
ALTER SYSTEM SET log_min_duration_statement = -1;
You can see my answer, which explains in more detail how to enable and disable query logs in PostgreSQL.
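Since the original question was also about finding the log file in the first place, these settings show where the text log goes (pg_current_logfile() is available from PostgreSQL 10 onward and returns NULL unless the logging collector is enabled):
SHOW log_destination;
SHOW logging_collector;
SHOW log_directory;
SELECT pg_current_logfile();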
I have a database ("DatabaseA") that I cannot modify in any way, but I need to detect the addition of rows to a table in it and then add a log record to a table in a separate database ("DatabaseB") along with some info about the user who added the row to DatabaseA. (So it needs to be event-driven, not merely a periodic scan of the DatabaseA table.)
I know that normally, I could add a trigger to DatabaseA and run, say, a stored procedure to add log records to the DatabaseB table. But how can I do this without modifying DatabaseA?
I have free rein to do whatever I like in DatabaseB.
EDIT in response to questions/comments ...
Databases A and B are MS SQL 2008/R2 databases (as tagged), users are interacting with the DB via a proprietary Windows desktop application (not my own) and each user has a SQL login associated with their application session.
Any ideas?
Ok, so I have not put together a proof of concept, but this might work.
You can configure an Extended Events session on databaseB that watches for all the procedures on databaseA that can insert into the table, or for any SQL statements that run against the table on databaseA (using a LIKE '%your table name here%' filter).
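A rough sketch of such a session, with the session name, table name, and file path as placeholders (on SQL Server 2008/R2 the file target is package0.asynchronous_file_target rather than package0.event_file):
CREATE EVENT SESSION [WatchDatabaseA_Inserts] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.server_principal_name, sqlserver.client_app_name)
    WHERE sqlserver.database_name = N'DatabaseA'
      AND sqlserver.like_i_sql_unicode_string(statement, N'%YourTableNameHere%')
)
ADD TARGET package0.event_file (SET filename = N'WatchDatabaseA_Inserts.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [WatchDatabaseA_Inserts] ON SERVER STATE = START;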
This is a custom solution that writes the XE session to a table:
https://github.com/spaghettidba/XESmartTarget
You could probably mimic that functionality by writing the XE events to a custom user table every minute or so using a SQL Server Agent job.
Your session would monitor databaseA and write the XE output to databaseB. You would then write a trigger so that, upon each XE output write, it compares the two tables and, if there are differences, writes the differences to your log table. This would be a nonstop running process, but it is still a kind of periodic scan: the XE target only writes when the event happens, but the comparison still runs every couple of seconds.
I recommend you look at a data integration tool that can mine the transaction log for Change Data Capture events. We have recently been using StreamSets Data Collector for Oracle CDC, but it also supports SQL Server CDC. There are many other competing technologies, including Oracle GoldenGate and Informatica PowerExchange (not PowerCenter). We like StreamSets because it is open source and is designed to build real-time data pipelines between databases at the schema level. Until now we had used batch ETL tools like Informatica PowerCenter and Pentaho Data Integration. I can copy all the tables in a schema in near real time in one StreamSets pipeline, provided I have already deployed the DDL in the target. I use this approach between Oracle and Vertica. You can add additional columns to the target and populate them as part of the pipeline.
The only catch might be identifying which user made the change. I don't know whether that information is in the SQL Server transaction log; it seems probable, but I am not a SQL Server DBA.
I looked at both solutions provided at the time of writing this answer (see the answers by Dan Flippo and dfundaka), but found that the first - using Change Data Capture - required modification of the database, and the second - using Extended Events - wasn't really a complete answer, though it got me thinking about other options.
The option that seems cleanest, and doesn't require any database modification, is to use SQL Server Dynamic Management Views. These views and functions, residing in the system database, expose server process history - in this case INSERTs and UPDATEs - for example sys.dm_exec_sql_text and sys.dm_exec_query_stats, which contain records of database statements (and are, in fact, what Extended Events appears to be based on).
Though it's quite an involved process initially to extract the required information, the queries can be tuned and generalized to a degree.
There are restrictions on transaction history retention, etc., but for the purposes of this particular exercise that wasn't an issue.
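As a rough illustration of the kind of query involved (the table name is a placeholder, and the plan cache only keeps recent entries, so the history is incomplete by design):
SELECT TOP (50)
       st.text AS statement_text,
       qs.creation_time,
       qs.last_execution_time,
       qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%YourTableNameHere%'
  AND (st.text LIKE '%INSERT%' OR st.text LIKE '%UPDATE%')
ORDER BY qs.last_execution_time DESC;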
I'm not going to select this answer as the correct one yet partly because it's a matter of preference as to how you approach the problem and also because I'm yet to provide a complete solution. Hopefully, I'll post back with that later. But if anyone cares to comment on this approach - good or bad - I'd be interested in your views.
In on-premises SQL Server databases, it is normal to have a maintenance plan that rebuilds the indexes once in a while, when the server is not being used that much.
How can I set it up in Azure SQL DB?
P.S.: I tried it before, but since I couldn't find any options for that, I thought maybe they are doing it automatically, until I read this post and tried:
SELECT
DB_NAME() AS DBName
,OBJECT_NAME(ps.object_id) AS TableName
,i.name AS IndexName
,ips.index_type_desc
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
ON ps.object_id = i.object_id
AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, null, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id
and found out that I have indexes that need maintenance.
Update: Note that the engineering team has published updated guidance to better codify some of the suggestions in this answer in a more "official" place from Microsoft, as some customers asked for that: SQL Server/DB Index Guidance. Thanks, Conor
original answer:
I'll point out that most people don't need to consider rebuilding indexes in SQL Azure at all. Yes, B+ Tree indexes can become fragmented, and yes this can cause some space overhead and some CPU overhead compared to having perfectly tuned indexes. So, there are some scenarios where we do work with customers to rebuild indexes. (The primary scenario is when the customer may run out of space, currently, as disk space is somewhat limited in SQL Azure due to the current architecture). So, I will encourage you to step back and consider that using the SQL Server model for managing databases is not "wrong" but it may or may not be worth your effort.
(If you do end up needing to rebuild an index, you are welcome to use the models posted here by the other posters - they are generally fine models to script tasks. Note that SQL Azure Managed Instance also supports SQL Agent which you can also use to create jobs to script maintenance operations if you so choose).
Here are some details that may help you decide if you may be a candidate for index rebuilds:
The link you referenced is from a post in 2013. The architecture for SQL Azure was completely redone after that post. Specifically, the hardware architecture moved from a model that was based on local spinning disks to one based on local SSDs (in most cases). So, the guidance in the original post is out of date.
You can have cases in the current architecture where you can run out of space with a fragmented index. You have options to rebuild the index or to move to a larger reservation size for a while (which will cost more money) that supports a larger disk space allocation. [Since the local SSD space on the machines is limited, reservation sizes are roughly linked to proportions of the machine. As we get newer hardware with larger/more drives, you have more scale-up options.]
SSD fragmentation impact is relatively low compared to rotating disks, since the cost of a random IO is not really any higher than that of a sequential one. The CPU overhead of walking a few more B+ Tree intermediate pages is modest. I've usually seen an overhead of perhaps 5-20% max in the average case (which may or may not justify regular rebuilds, which have a much bigger workload impact while running).
If you are using query store (which is on by default in SQL Azure), you can evaluate whether a specific index rebuild helps your performance visibly or not. You can do this as a test to see if your workload improves before bothering to take the time to build and manage index rebuild operations yourself.
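For example, a rough sketch that compares a query's average duration across Query Store intervals before and after a rebuild (the query_id value is a placeholder you would look up first in sys.query_store_query):
SELECT rsi.start_time,
       SUM(rs.count_executions) AS executions,
       AVG(rs.avg_duration) AS avg_duration_microseconds
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
JOIN sys.query_store_plan AS p
    ON rs.plan_id = p.plan_id
WHERE p.query_id = 42
GROUP BY rsi.start_time
ORDER BY rsi.start_time;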
Please note that there is currently no intra-database resource governance within SQL Azure for user workloads. So, if you start an index rebuild, you may end up consuming lots of resources and impacting your main workload. You can try to time things to be done off-hours, of course, but for applications with lots of customers around the world this may not be possible.
Additionally, I will note that many customers have index rebuild jobs "because they want stats to be updated". It is not necessary to rebuild an index just to rebuild the stats. In recent SQL Server and SQL Azure, the algorithm for stats updates was made more aggressive on larger tables, and the model for how we estimate cardinality in cases where customers are querying recently inserted data (since the last stats update) has been changed in later compatibility levels. So, it is often the case that the customer doesn't even need to do any manual stats update at all.
Finally, I will note that the impact of stats being out of date was historically that you'd get plan choice regressions. For repeated queries, a lot of the impact of this was mitigated by the introduction of the automatic tuning feature over query store (which forces prior plans if it notices a large regression in query performance compared to the prior plan).
The official recommendation that I give customers is to not bother with index rebuilds unless they have a tier-1 app where they've demonstrated real need (benefits outweigh the costs), or where they are a SaaS ISV trying to tune a workload over many databases/customers in elastic pools or in a multi-tenant database design, so they can reduce their COGS or avoid running out of disk space (as mentioned earlier) on a very big database. In the largest customers we have on the platform, we sometimes see value in doing index operations manually with the customer, but we often do not need a regular job where we do this kind of operation "just in case". The intent from the SQL team is that you don't need to bother with this at all and can just focus on your app instead. There are always things that we can add or improve in our automatic mechanisms, of course, so I completely allow for the possibility that an individual customer database may have a need for such actions. I've not seen any myself beyond the cases I mentioned, and even those are rarely an issue.
I hope this gives you some context to understand why this isn't being done in the platform yet - it just hasn't been an issue for the vast majority of customer databases we have today in our service compared to other pressing needs. We revisit the list of things we need to build each planning cycle, of course, and we do look at opportunities like this regularly.
Good luck - whatever your outcome here, I hope this helps you make the right choice.
Sincerely,
Conor Cunningham
Architect, SQL
You can use Azure Automation to schedule index maintenance tasks, as explained here: Rebuilding SQL Database indexes using Azure Automation
Below are the steps:
1) Provision an Automation Account if you don't have one, by going to https://portal.azure.com and selecting New > Management > Automation Account
2) After creating the Automation Account, open its details and click on Runbooks > Browse Gallery
3) Type the word "indexes" into the search box, and the runbook "Indexes tables in an Azure database if they have a high fragmentation" appears:
4) Note that the author of the runbook is the SC Automation Product Team at Microsoft. Click on Import:
5) After importing the runbook, now let’s add the database credentials to the assets. Click on Assets > Credentials and then on “Add a credential…” button.
6) Set a Credential name (that will be used later on the runbook), the database user name and password:
7) Now click again on Runbooks and then select the “Update-SQLIndexRunbook” from the list, and click on the “Edit…” button. You will be able to see the PowerShell script that will be executed:
8) If you want to test the script, just click on the "Test Pane" button, and the test window opens. Enter the required parameters and click Start to execute the index rebuild. If any error occurs, it is logged in the results window. Note that depending on the database and the other parameters, this can take a long time to complete:
9) Now go back to the editor and click on the "Publish" button to enable the runbook. If we click on "Start", a window appears asking for the parameters. But since we want to schedule this task, we will click on the "Schedule" button instead:
10) Click on the Schedule link to create a new Schedule for the runbook. I have specified once a week, but that will depend on your workload and how your indexes increase their fragmentation over time. You will need to tweak the schedule based on your needs and by executing the initial queries between executions:
11) Now enter the parameters and run settings:
NOTE: you can play with having different schedules with different settings, i.e. having a specific schedule for a specific table.
With that, you have finished. Remember to change the Logging settings as desired:
Azure Automation works well, and its pricing is negligible.
Some other options you have are:
1. Create an Execute SQL task and schedule it through SQL Agent. The Execute SQL task should contain the index rebuild code along with a statistics rebuild.
2. You can also create a linked server to SQL Azure and create a SQL Agent job. To create a linked server to Azure, see this SO link: I need to add a linked server to a MS Azure SQL Server
As @TheGamiswar suggested, add a linked server, then create a stored procedure like this (create it in the remote Azure database itself, since CREATE PROCEDURE cannot use a four-part linked-server name):
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- Run this while connected to the remote Azure SQL database.
CREATE PROCEDURE [dbo].[sp_RebuildReorganizeIndexes]
AS
BEGIN
    ALTER INDEX PK_MyTable ON MyTable REBUILD WITH (STATISTICS_NORECOMPUTE = ON, ONLINE = ON); -- clustered/primary key index
    ALTER INDEX IX_MyTable ON MyTable REBUILD WITH (STATISTICS_NORECOMPUTE = ON, ONLINE = ON); -- nonclustered index
    -- For lightly fragmented indexes, REORGANIZE is a cheaper alternative to REBUILD:
    -- ALTER INDEX PK_MyTable ON MyTable REORGANIZE;
    -- ALTER INDEX IX_MyTable ON MyTable REORGANIZE;
END
Then on your on-premises server use "SQL Server Agent" to create a new job and a schedule that calls the procedure through the linked server:
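The job step itself can then be as simple as the following, assuming RPC Out is enabled on the linked server (names match the placeholders above):
EXEC [LinkedServerName].[RemoteDB].[dbo].[sp_RebuildReorganizeIndexes];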
For details please see https://learn.microsoft.com/en-us/sql/ssms/agent/create-a-job?view=sql-server-2017
I am using an Azure SQL database in my project, and the same set of queries is executed very frequently.
Recently I received a performance recommendation saying "Non-parameterized queries are causing performance issues" and suggesting that I execute the following statement in my database:
ALTER DATABASE [TestDB] SET PARAMETERIZATION FORCED
I understand that forced parameterization may improve the performance of certain databases by reducing the frequency of query compilations and recompilations.
It is also known that stored procedures are executable code, are automatically cached and shared among users, and can prevent recompilations.
Please help me with the questions listed below.
1) Would turning on forced parameterization for the database work better than turning frequently used queries into stored procedures?
2) Is it safe to enable the forced parameterization option on my database?
1) Would turning on forced parameterization for the database work better than turning frequently used queries into stored procedures?
No. Forced parameterization is a workaround for applications that don't properly parameterize their queries. It's better to use parameters for frequently run queries, and hard-coded values where you want the plan to be based on an individual value.
E.g.:
select *
from Orders
where CustomerId = @customerID
and Active = 1
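If the application builds its SQL strings dynamically, a minimal sketch of explicit parameterization with sp_executesql (the table and parameter names are illustrative) looks like this:
EXEC sp_executesql
    N'SELECT * FROM Orders WHERE CustomerId = @customerID AND Active = 1',
    N'@customerID int',
    @customerID = 42;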
It's hard to say whether it will work better or not without further testing, but if the advisor is telling you that enabling forced parameterization will improve performance, then you should definitely try it. Here's why:
You can apply this recommendation quickly and easily by clicking on the Apply command. Once you apply this recommendation, it will enable forced parameterization within minutes on your database and it starts the monitoring process which approximately lasts for 24 hours. After this period, you will be able to see the validation report that shows CPU usage of your database 24 hours before and after the recommendation has been applied. SQL Database Advisor has a safety mechanism that automatically reverts the applied recommendation in case a performance regression has been detected.
More info here:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-advisor#parameterize-queries-recommendations
I have an INSERT statement that is eating a hell of a lot of log space, so much so that the hard drive is actually filling up before the statement completes.
The thing is, I really don't need this to be logged as it is only an intermediate data upload step.
For argument's sake, let's say I have:
Table A: Initial upload table (populated using bcp, so no logging problems)
Table B: Populated using INSERT INTO B from A
Is there a way that I can copy between A and B without anything being written to the log?
P.S. I'm using SQL Server 2008 with simple recovery model.
From Louis Davidson, Microsoft MVP:
There is no way to insert without logging at all. SELECT INTO is the best way to minimize logging in T-SQL; using SSIS you can do the same sort of light logging using Bulk Insert.
From your requirements, I would probably use SSIS: drop all constraints, especially unique and primary key ones, load the data in, then add the constraints back. I load about 100GB in just over an hour like this, with fairly minimal overhead. I am using the BULK_LOGGED recovery model, which just logs the existence of new extents during the load, and then you can remove them later.
The key is to start with barebones tables, and it just screams. Building the indexes once at the end leaves you with no indexes to maintain during the load, just the one index build per index.
If you don't want to use SSIS, the point still applies: drop all of your constraints and use the BULK_LOGGED recovery model. This greatly reduces the logging done by INSERT INTO statements and thus should solve your issue.
http://msdn.microsoft.com/en-us/library/ms191244.aspx
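As a sketch of what that can look like in practice (table names are placeholders; under the SIMPLE recovery model you are already using, SELECT ... INTO and TABLOCK inserts into an empty heap also qualify for minimal logging):
-- Option 1: SELECT INTO creates and loads the target table with minimal logging.
SELECT *
INTO dbo.B_staging
FROM dbo.A;

-- Option 2: keep the existing table B (drop nonclustered indexes and constraints first),
-- then use a TABLOCK insert, which can be minimally logged into a heap.
INSERT INTO dbo.B WITH (TABLOCK)
SELECT *
FROM dbo.A;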
Upload the data into tempdb instead of your database, and do all the intermediate transformations in tempdb. Then copy only the final data into the destination database. Use batches to minimize individual transaction size. If you still have problems, look into deploying trace flag 610; see The Data Loading Performance Guide and Prerequisites for Minimal Logging in Bulk Import:
Trace Flag 610
SQL Server 2008 introduces trace flag 610, which controls minimally logged inserts into indexed tables.
I have a stored procedure that has now been worked on by 2 people and still takes 2 minutes or more to run. Is there a way to have it pre-run and the results stored in a cache or somewhere else, so that when my client needs to look at this data in a web browser he doesn't want to hang himself or me?
I am nowhere near a DBA, so I am kind of at the mercy of whoever I hire to figure this out for me; having a little knowledge up front would really help me out.
If it truly takes that long to run, you could schedule the process to run using SQL Agent and have the output go to a table, then change the web application to read the table rather than execute the stored procedure. You'd have to decide how often to run the refresh, and deal with the requests that occur while it is being refreshed, but that can be handled by having two output tables, one live and one for the latest refresh.
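A rough sketch of that refresh job (all object names are hypothetical; the Agent job runs this on a schedule, and the web app reads dbo.ReportCache):
-- Refresh a staging copy first so readers keep seeing the old data.
TRUNCATE TABLE dbo.ReportCache_Staging;

INSERT INTO dbo.ReportCache_Staging
EXEC dbo.usp_SlowReport;    -- the existing slow stored procedure

-- Swap the fresh data in within a single short transaction.
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.ReportCache;
    INSERT INTO dbo.ReportCache
    SELECT * FROM dbo.ReportCache_Staging;
COMMIT;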
But I would take another look at the procedure: look at the execution plan, see where it is slow, and make sure it is not doing full table scans.
Preferred solutions in this order:
Analyze the query and optimize accordingly
Cache it in the application (you can use HttpRuntime.Cache, even if it is not an ASP.NET application).
Cache the SPROC results in a table in the DB, and then add triggers to invalidate the cache (delete from the table). A call to the SPROC would first look to see if there is any data in the cache table; if not, it runs the SPROC and stores the result in the cache table, and if so, it returns the data from that table. The triggers on the "source" tables for the SPROC would just DELETE FROM CacheTable to "clear the cache". (Depending on what your SPROC is doing and its dependencies, you may even be able to partially update the cache table based on the trigger, but all of this quickly gets difficult to maintain... but sometimes you gotta do what you gotta do.) This approach allows the cache table to update itself as needed: you will always have the latest data, and the SPROC will only run when needed. A rough sketch follows below.
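A rough sketch of option 3, with all object names hypothetical:
-- Wrapper procedure: serve cached rows when present, otherwise repopulate the cache first.
CREATE PROCEDURE dbo.usp_SlowReport_Cached
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dbo.SprocCache)
    BEGIN
        INSERT INTO dbo.SprocCache
        EXEC dbo.usp_SlowReport;
    END
    SELECT * FROM dbo.SprocCache;
END
GO

-- Trigger on a source table that clears the cache whenever the underlying data changes.
CREATE TRIGGER dbo.trg_SourceTable_InvalidateCache
ON dbo.SourceTable
AFTER INSERT, UPDATE, DELETE
AS
    DELETE FROM dbo.SprocCache;
GO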
Try "Analyze query in database engine tuning advisor" from the Query menu.
I usually script the procedure to a new window, take out the query definition part and try different combinations of temp tables, regular tables and table variables.
You could cache the result set in the application as opposed to the database, either in memory by keeping an instance of the datatable around, or by serializing it to disk. How many rows does it return?
Is it too long to post the code here?
OK, first things first: indexes.
What indexes do you have on the tables and is the execution plan using them?
Do you have indexes on all the foreign key fields?
Second, does the proc use any of the following performance killers:
a cursor
a subquery
a user-defined function
select *
a search criteria that starts with a wildcard
Third:
Can the WHERE clause be rewritten to be sargable? There is more than one way to write almost everything, and some ways perform better than others.
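For example (Orders and OrderDate are placeholder names), the two filters below are logically equivalent, but only the second lets the optimizer seek on an index over OrderDate:
-- Non-sargable: the function wrapped around the column blocks an index seek.
SELECT OrderId, OrderDate, CustomerId
FROM Orders
WHERE YEAR(OrderDate) = 2018;

-- Sargable rewrite: the column is left bare, so an index on OrderDate can be used.
SELECT OrderId, OrderDate, CustomerId
FROM Orders
WHERE OrderDate >= '20180101' AND OrderDate < '20190101';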
I suggest you buy your developers some books on performance tuning.
Likely your proc can be fixed, but without seeing the code, it is hard to guess what the problems might be.