Time consumption in a memory-optimized table

I have set up a small demo: first I created a temp table, then a memory-optimized (In-Memory OLTP) table, then ran an insert query against the temp table, and finally ran the same insert query against the memory-optimized table.
Before running each query I cleared the buffers. The elapsed time for the memory-optimized table is consistently 2-4 seconds longer than for the temp table, which is not what I expected.
Can anyone tell me why this is happening? Thanks in advance.
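For reference, a minimal sketch of the kind of demo described; the table definitions, bucket count, and row count below are assumptions rather than the original script, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:

-- Temp table version
CREATE TABLE #DemoTemp (Id INT NOT NULL, Payload NVARCHAR(100));

-- Memory-optimized version (memory-optimized tables require an index; here a hash primary key)
CREATE TABLE dbo.DemoInMem
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    Payload NVARCHAR(100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

SET STATISTICS TIME ON;

-- Same insert against both tables; compare the elapsed times in the Messages tab
INSERT INTO #DemoTemp (Id, Payload)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), N'x'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

INSERT INTO dbo.DemoInMem (Id, Payload)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), N'x'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

SET STATISTICS TIME OFF;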

Related

Postgres - relallvisible is 0 after one day

We had a performance problem with a table, and I found that relallvisible was 0 while relpages was around 1000000. I ran VACUUM ANALYZE and everything started working fine, but after one day relallvisible for this table was back to 0. Why did this happen? We keep this table as an archive, so only once a day do we move data from the original table into it using a simple function. One solution might be to run VACUUM ANALYZE after the function, but I don't want to do that; I think Postgres should handle this on its own. I understand that over time the table will again end up with relallvisible = 0, but not after one day, when the table has around 10m rows and we only copy 15-20k rows per day.
That is not normal. Most likely someone did something to the table, like VACUUM FULL or CLUSTER or a bulk UPDATE, which cleared the visibility map.
It is possible that each block had a bit of free space in it, and that got filled up by the next bulk insert. But if that is the case, now that the space is full it shouldn't happen again, unless whatever created the scattered free space happens again.
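To keep an eye on this, you can read the counters the question refers to straight from pg_class (the table name below is a placeholder):

SELECT relname,
       relpages,
       relallvisible,
       round(100.0 * relallvisible / NULLIF(relpages, 0), 1) AS pct_all_visible
FROM pg_class
WHERE relname = 'archive_table';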

use same table multiple times in SP performance

I have a stored procedure with a very long body that performs poorly. It requires me to validate data with multiple SELECT statements against the same table, over and over.
Is it a good idea to dump the data from the physical table into a temp table first, or is it OK to reference the table multiple times in multiple SELECT statements within the same SP?
From your description, you would like to improve the performance. Could you please show us the script of your SP and its execution plan, so that we have the right direction and can run some tests?
Here are some simple yet useful tips and optimizations to improve stored procedure performance:
Use SET NOCOUNT ON.
Use fully qualified procedure names.
Use sp_executesql instead of EXECUTE for dynamic queries.
Use IF EXISTS (SELECT 1 ...) rather than IF EXISTS (SELECT * ...).
Avoid naming user stored procedures with the sp_ prefix.
Use set-based queries wherever possible.
Keep transactions short.
For more details, you can refer to: https://www.sqlservergeeks.com/improve-stored-procedure-performance-in-sql-server/
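To make a couple of those tips concrete, a rough skeleton only; the procedure, table, and column names (dbo.usp_GetOrdersByStatus, dbo.Orders, OrderStatus) are made up:

CREATE PROCEDURE dbo.usp_GetOrdersByStatus
    @Status NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;                      -- suppress DONE_IN_PROC messages

    -- fully qualified object names
    IF EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderStatus = @Status)
    BEGIN
        SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        WHERE OrderStatus = @Status;
    END

    -- parameterised dynamic SQL via sp_executesql instead of EXECUTE
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT COUNT(*) FROM dbo.Orders WHERE OrderStatus = @s;';
    EXEC sp_executesql @sql, N'@s NVARCHAR(20)', @s = @Status;
END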
If you think this does not satisfy your requirement, please share more information with us.
Best Regards,
Rachel
Is it a good idea to dump the data from the physical table into a temp table first, or is it OK to reference the table multiple times in multiple SELECT statements within the same SP?
If it is a local temp table, each session that uses this stored procedure will create its own separate temp table. That reduces the load on the original table, but it increases memory and tempdb usage.
If it is a global temp table, there can only be one for all sessions, so we would need to create it manually before anyone uses it and drop it once it is no longer needed.
For me, I would use indexed views: https://learn.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-2017
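A rough sketch of what that could look like; the view, the base table dbo.Transactions, and the Amount column are just placeholders for your own schema:

CREATE VIEW dbo.vw_CompanyTotals
WITH SCHEMABINDING
AS
SELECT iId_company,
       COUNT_BIG(*)           AS RowCnt,
       SUM(ISNULL(Amount, 0)) AS TotalAmount
FROM dbo.Transactions
GROUP BY iId_company;
GO
-- The unique clustered index is what materialises the view on disk.
CREATE UNIQUE CLUSTERED INDEX IX_vw_CompanyTotals
ON dbo.vw_CompanyTotals (iId_company);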
It's hard to answer without the details. However, with such a large SP and such a small table, it is likely that a particular SELECT or join is slow rather than the repeated hits on the table itself (SQL Server is perfectly happy to cache parts of tables or indexes in memory).
If possible, can you get the execution plan of each part of the SP, or log some timings, or run each part with statistics on?
That will tell you which part is slow, and then we can help you fix it.
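For example, wrapping each block like this shows elapsed time, CPU time, and logical reads per statement in the Messages tab:

SET STATISTICS TIME ON;   -- CPU and elapsed time per statement
SET STATISTICS IO ON;     -- logical reads per table

-- run one block of the stored procedure's logic here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;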

What is that makes temp tables more efficient than table variables when working with large data?

In SQL Server, the performance of temp tables is much better (in terms of time) than that of table variables when working with large data (say, inserting or updating 100,000 rows) (reference: SQL Server Temp Table vs Table Variable Performance Testing).
I've seen many articles comparing temp tables and table variables, but I still don't get what exactly makes temp tables more efficient when working with large data. Is it just how they are designed to behave, or something else?
Table variables don't have statistics, so the cardinality estimate for a table variable is 1.
You can force at least a correct cardinality estimate by using the RECOMPILE option, but there is no way to produce column statistics, i.e. there is no data distribution of column values like there is for temporary tables.
The consequence is evident: every query that uses a table variable will suffer from underestimation.
Another downside is this one:
Queries that insert into (or otherwise modify) @table_variables cannot have a parallel plan; #temp_tables are not restricted in this manner.
You can read more on it here:
Parallelism with temp table but not table variable?
The answer in that topic contains another link to additional reading that is very helpful.
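As a small illustration of the RECOMPILE workaround mentioned above (the table variable and the catalog query are made up for the example): the first join is estimated at 1 row from @t, the second picks up the actual row count because the statement is recompiled after the variable is populated, but neither has column statistics.

DECLARE @t TABLE (Id INT);

INSERT INTO @t (Id)
SELECT object_id FROM sys.objects;       -- a few dozen to a few thousand rows

-- Estimated number of rows coming from @t is 1
SELECT o.name
FROM sys.objects AS o
JOIN @t AS t ON t.Id = o.object_id;

-- Estimated number of rows reflects the actual count in @t
SELECT o.name
FROM sys.objects AS o
JOIN @t AS t ON t.Id = o.object_id
OPTION (RECOMPILE);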

Is data copied when selecting into a temp table?

I am wondering, when selecting rows and inserting them into a temp table, is the data actually copied or just referenced?
For example:
SELECT * INTO #Temp FROM SomeTable
If the table is very large, is this going to be a costly operation?
From my tests it seems to execute about as fast as a simple SELECT, but I'd like a better insight about how it actually works.
Cheers.
Temporary tables are allocated in tempdb. SQL Server will generally try to keep the tempdb pages in memory, but large tables may end up being written out to disk.
And yes, the data is always copied. So, for instance, if an UPDATE occurs on another connection between selecting into your temporary table and a later use of it, the temporary table will still contain the old value(s).
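A quick way to convince yourself of that (the table and values are made up):

CREATE TABLE dbo.SomeTable (Id INT PRIMARY KEY, Val INT);
INSERT INTO dbo.SomeTable (Id, Val) VALUES (1, 10);

SELECT * INTO #Temp FROM dbo.SomeTable;   -- rows are physically copied into tempdb

UPDATE dbo.SomeTable SET Val = 99 WHERE Id = 1;

SELECT Val FROM dbo.SomeTable;   -- 99
SELECT Val FROM #Temp;           -- still 10: the temp table has its own copy of the data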

Table variable poor performance on insert in SQL Server Stored Procedure

We are experiencing performance problems using a table variable in a Stored Procedure.
Here is what actually happens :
DECLARE @tblTemp TABLE(iId_company INT)
INSERT INTO @tblTemp(iId_company)
SELECT id FROM .....
The SELECT returns 138 results, but the insert into the table variable takes 1 min 15 s, whereas when I use a temp table with the same SELECT, woops, it takes 0 sec:
CREATE TABLE #temp (iId_company INT)
INSERT INTO #temp(iId_company)
SELECT id FROM ...
What could cause this behavior?
Use a temporary table. You will see much better performance.
A detailed explanation of the reasoning behind this is beyond the scope of the original question; however, to summarise:
A table variable is optimized for one row by SQL Server, i.e. it assumes 1 row will be returned.
A table variable does not create statistics.
Google temp table Vs. table variable for a wealth of resources and discussions. If you then need specific assistance, fire me an email or contact me on Twitter.
Generally, for smaller sets of data, a table variable should be faster than a temp table. For larger sets of data, performance will fall off because table variables don't support parallelism (see this post).
That said, I haven't experienced, or found reports of, a set of data this small being slower in a table variable than in a temp table.
Not that it should matter, but what does your SELECT look like? I had an issue in SQL Server 2005 where my SELECT on its own ran relatively fast for what the query was doing, say 5 minutes to return all the data (about 150,000 rows) over the wire. But when I tried to insert that same SELECT into a temp table or table variable, the statement ran for more than an hour before I killed it. I have yet to figure out what was really going on. I ended up adding the FORCE ORDER query hint and it started inserting faster.
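For reference, that kind of hint goes on the INSERT ... SELECT itself; a self-contained sketch against the system catalogs (the real query and tables were of course different):

CREATE TABLE #t (object_id INT, column_name SYSNAME);

INSERT INTO #t (object_id, column_name)
SELECT o.object_id, c.name
FROM sys.objects AS o
JOIN sys.columns AS c ON c.object_id = o.object_id
OPTION (FORCE ORDER);   -- joins are performed in the order written instead of the optimizer's choice

DROP TABLE #t;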
A key point about temp tables is also that you can put indexes, etc. on them, whereas you can't with table variables.
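For example, with a temp table you can add an index after the table already exists and is populated, which is not possible with a table variable (the names below are illustrative):

CREATE TABLE #companies (iId_company INT);

INSERT INTO #companies (iId_company)
SELECT object_id FROM sys.objects;        -- stand-in data

-- An index can be added to the temp table at any point;
-- a table variable cannot have an index added after it is declared.
CREATE NONCLUSTERED INDEX IX_companies_id ON #companies (iId_company);

DROP TABLE #companies;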
