Does updating statistics in SQL Server refresh the data pages previously cached? - sql-server

Since SQL Server accesses the underlying data pages during the process of updating statistics, does this process also update the data previously cached in RAM? Just to be clear, I am NOT talking about the procedure/query plan cache, I am talking about the actual data pages.
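One way to check this yourself is to count the table's pages in the buffer pool before and after the statistics update. The sketch below assumes a hypothetical table `dbo.MyTable` in the current database; `sys.dm_os_buffer_descriptors` requires the VIEW SERVER STATE permission.

```sql
-- Count buffer-pool pages currently cached for the table.
SELECT COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.MyTable');

-- Force a full scan of the table to rebuild its statistics.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Re-run the count above: an increase indicates the statistics scan
-- pulled data pages into the buffer pool.
```

An increase in the count only shows that the scan read pages into memory; it does not evict or refresh pages that were already cached and unchanged.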

Related

Snowflake Caching Validation

Is the cache valid once it has been retrieved, even after the warehouse has been suspended?
For example, the same query behind a BI dashboard served to multiple users is executed each time a user visits. The cache helps in this case, but is it still possible to use the cache once the warehouse is suspended? If not, the perception is that there is a speed benefit in a system where warehouse uptime is charged for, but on the other hand there is no benefit on the cost side.
@Himanshu gave a good explanation. Adding how it looks in the query profiler for all three scenarios -
Metadata cache - for any query such as -
select count(*) from SNOWFLAKE_SAMPLE_DATA.TPCDS_SF100TCL.CATALOG_SALES;
Are you talking about the Warehouse cache? It gets purged once the warehouse is stopped and restarted.
Snowflake has the following caches available:
a. Metadata cache – holds object information + statistics (it is also called the Metadata layer, Service layer, or Cloud Services layer)
b. Result cache – holds the last 24 hours of your results; a query result can be retained for a maximum of 31 days after being generated, since each reuse extends its life (it is also called the Result set cache, 24-hour result cache, or query result cache)
c. Warehouse cache – holds data locally as long as the warehouse is running. (When the warehouse is suspended the cache is purged; it is not purged on resume.)
(It is also called the local cache, SSD cache, raw data cache, or data cache)
(Users cannot see each other's results, but Snowflake can reuse one user's result cache and serve it to another user)
I think, since you are running a query, you are talking about the Result cache. The cache is used by Snowflake if:
- the query is syntactically the same
- the user has permission to the tables
- the data in the tables used in the query has not changed
- the query does not contain time functions like CURRENT_TIMESTAMP() and does not use UDFs
- the table micro-partitions have not changed or been re-clustered.
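Result-cache reuse can be observed directly by toggling the session parameter `USE_CACHED_RESULT`. A minimal sketch against the sample data share (column name taken from the TPC-DS sample schema):

```sql
-- Allow Snowflake to serve results from the query result cache.
ALTER SESSION SET USE_CACHED_RESULT = TRUE;

-- First run scans micro-partitions (the warehouse must be running).
SELECT cs_ship_mode_sk, COUNT(*)
FROM SNOWFLAKE_SAMPLE_DATA.TPCDS_SF100TCL.CATALOG_SALES
GROUP BY cs_ship_mode_sk;

-- Re-running the identical query text within 24 hours returns the answer
-- from the result cache, even if the warehouse has since been suspended.
SELECT cs_ship_mode_sk, COUNT(*)
FROM SNOWFLAKE_SAMPLE_DATA.TPCDS_SF100TCL.CATALOG_SALES
GROUP BY cs_ship_mode_sk;

-- Disable reuse to force recomputation when benchmarking.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```

The query profile for the second run shows the result being reused rather than a table scan, which is the speed-plus-cost benefit asked about: a suspended warehouse does not need to resume to serve a cached result.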

Joining SQL Query data with Rest Service data on the fly

I need to merge data from an MS SQL Server database and a REST service on the fly. I have been asked not to store the data permanently in the MS SQL database as it changes periodically (caching would be OK, I believe, as long as the cache time was adjustable).
At the moment, I am querying for data, then pulling joined data from a memory cache. If the data is not in cache, I call a rest service and store the result in cache.
This can be cumbersome and slow. Are there any patterns, applications or solutions that would help me solve this problem?
My thought is I should move the cached data to a database table which would speed up joins and have the application periodically refresh the data in the database table. Any thoughts?
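The table-backed cache idea above can be sketched as follows. All names here are hypothetical; the TTL column is what makes the cache duration adjustable, per the requirement.

```sql
-- Cache table the application fills with REST results.
CREATE TABLE dbo.RestCache (
    CacheKey     nvarchar(200) NOT NULL PRIMARY KEY,
    Payload      nvarchar(max) NOT NULL,   -- JSON body from the REST service
    RefreshedAt  datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    TtlSeconds   int           NOT NULL DEFAULT 300
);

-- Joins treat only fresh rows as cache hits; the application re-fetches
-- from the REST service for any key that falls out of this view.
CREATE VIEW dbo.FreshRestCache AS
SELECT CacheKey, Payload, RefreshedAt
FROM dbo.RestCache
WHERE DATEADD(second, TtlSeconds, RefreshedAt) > SYSUTCDATETIME();
```

Joining your SQL data against `dbo.FreshRestCache` keeps the join inside the database engine, which is typically much faster than joining in application memory; the periodic refresh job only has to upsert stale keys.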
You can try Denodo. It allows connecting multiple data sources and has an in-built caching feature.
http://www.denodo.com/en

Configuring SQL Server to automatically take database snapshot and use that instead of actual database

I have been allocated the task of fetching data from a database. However, as per the requirement, the fetched data must not reflect parts that are continuously updated in the database.
In other words, I just need to provide a data instance as of a specific point in time. So I figured I could take a snapshot of the database and use that to show data to the client, which will always be consistent in the sense that updated records in the actual database won't be reflected. What I exactly need is to take an automatic snapshot, for example every hour, and then read from that. Is it possible in SQL Server? In Oracle I did the same using RMAN, but I am lost in SQL Server.
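SQL Server supports this directly with database snapshots. A sketch, where the database, logical file name, and paths are all examples; note that before SQL Server 2016 SP1, snapshots require Enterprise Edition:

```sql
-- Create a point-in-time snapshot of the Sales database.
CREATE DATABASE Sales_Snapshot_0900
ON ( NAME = Sales_Data,                        -- logical file name of the source
     FILENAME = 'D:\Snapshots\Sales_0900.ss' ) -- sparse file backing the snapshot
AS SNAPSHOT OF Sales;

-- Readers query the snapshot exactly like a regular database.
SELECT TOP (10) * FROM Sales_Snapshot_0900.dbo.Orders;

-- An hourly refresh drops the old snapshot and creates a new one.
DROP DATABASE Sales_Snapshot_0900;
```

An hourly SQL Server Agent job can run the drop/create pair, and synonyms or views can hide the changing snapshot name from the client application.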

Data Warehouse Best Practice: Intra-day DW Loads and Reporting

We have intra-day Data Warehouse loads throughout the day (using SSIS, SQL Server 2005).
The reporting is done through Business Objects (XI 3.1 WebI).
We are not currently facing any issues, but what are the best practices for intra-day Data Warehouse loads while reporting from the same database at the same time?
thanks,
Amrit
Not sure if I understood you correctly, but I guess the two main problems you may be facing are:
data availability: your users may want to query data that you have temporarily removed because you're refreshing it (this depends on your data loading approach).
performance: The reporting may be affected by the data loading processes.
If your data is partitioned, I think a partition-switch-based data load would be a nice approach.
You perform the data load on a staging partition that holds the data you're reloading (while the data warehouse partition is still available with all the data for the users). Then, once you have finished loading the data into your staging partition, you can immediately switch the partitions between staging and the data warehouse. This solves the data availability problem and could help reduce the performance one (if, for instance, your staging partition is on a different drive than the data warehouse).
more info on partitioned data load and other data loading techniques here:
http://msdn.microsoft.com/en-us/library/dd425070(v=sql.100).aspx
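The switch described above can be sketched in T-SQL. Table names and the partition number are hypothetical; both tables must be on the same partition scheme with matching schemas, and the target partition of a SWITCH must be empty, so the stale partition is switched out first:

```sql
-- 1. Load the staging table; dbo.FactSales stays fully queryable meanwhile.
INSERT INTO dbo.FactSales_Staging (SaleDate, Amount)
SELECT SaleDate, Amount
FROM dbo.SourceExtract;

-- 2. Switch the stale partition out to an archive table, then the freshly
--    loaded one in. Both SWITCH operations are metadata-only, so readers
--    see the new data almost instantly.
ALTER TABLE dbo.FactSales
    SWITCH PARTITION 3 TO dbo.FactSales_Old PARTITION 3;
ALTER TABLE dbo.FactSales_Staging
    SWITCH PARTITION 3 TO dbo.FactSales PARTITION 3;
```

Because the switch only updates metadata, reporting queries are blocked for milliseconds rather than for the duration of the load.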

SQL Server Procedure Cache

If you run multiple databases on the same SQL Server, do they all fight for procedure cache? What I am trying to figure out is how SQL Server determines how long to hold onto the procedure cache. If other databases are consuming memory, will that impact the procedure cache for a given database on that same server?
I am finding that some initial page loads within our application are slow, but once the queries are cached it is obviously fast. I'm just not sure how long SQL Server keeps the procedure cache and whether other databases will impact that amount of time.
The caching/compiling happens end to end:
- IIS will unload the application after 20 minutes of inactivity by default
- .NET compilation to CLR
- SQL compilation
- loading data into memory
This is why the initial calls take some time.
Generally, stuff stays in cache:
- while still in use
- while there is no memory pressure
- while still valid (e.g. statistics updates will invalidate cached plans)
If you are concerned, add more RAM. Also note that each database will have different load patterns and SQL Server will juggle memory very well. Unless you don't have enough RAM...
From the documentation:
Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan according to the following factors.
This link might also be of interest to you: Most Executed Stored Procedure?

Resources