Reading SQL Server Books Online, my understanding is that the SQL Server buffer pool, or "buffer cache", consists of:
a) "Data page cache" -- pages are always fetched from disk into the data page cache, for both read and write operations, if they are not already found in the cache
b) "Plan cache" -- "procedure cache" may not be the appropriate term, since execution plans are cached for ad hoc SQL and dynamic SQL as well
c) Query workspace -- I believe this is used for joins and sorts (ORDER BY), perhaps
Question: What else is kept in the buffer pool? Is the "log cache" also part of the buffer pool, or are log records cached in a separate area of memory before being hardened to the transaction log on disk?
Check out this article: http://www.toadworld.com/platforms/sql-server/w/wiki/9729.memory-buffer-cache-and-procedure-cache.aspx
Extract from that blog post:
Other portions of buffer pool include:
System level data structures - holds SQL Server instance level data about databases and locks.
Log cache - reserved for reading and writing transaction log pages.
Connection context - each connection to the instance has a small area of memory to record the current state of the connection. This information includes stored procedure and user-defined function parameters, cursor positions and more.
Stack space - Windows allocates stack space for each thread started by SQL Server.
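To see how this memory is actually divided among consumers, you can query the memory clerks DMV. A minimal sketch, assuming SQL Server 2012 or later (earlier versions expose single_pages_kb and multi_pages_kb instead of pages_kb):

-- Top memory consumers inside the SQL Server process, e.g.
-- MEMORYCLERK_SQLBUFFERPOOL (data page cache) and
-- CACHESTORE_SQLCP / CACHESTORE_OBJCP (plan cache).
SELECT TOP (10)
       type,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;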
Hope this helps.
Related
SQL Server 2014 introduced "memory-optimized tables", whose documentation states that "The primary storage for memory-optimized tables is the main memory. Rows in the table are read from and written to memory. A second copy of the table data is maintained on disk, but only for durability purposes."
This seems to imply a significant performance gain, but doesn't SQL Server have an in-memory buffer cache anyway? If frequent queries are going to use the in-memory buffer cache, why does having an in-memory table provide a significant performance gain?
The "memory-optimized tables" live in memory at all times, ready to be read from the memory at all times. The copy on the disk is like a backup copy just in case if the memory had any issues so it can be reloaded from disk into memory again.
The Buffer cache is used to load data pages from disk into memory, only when there is a request to read those pages and they stay in the memory until the pages are required by other process doing similar request. If the data in memory is no longer required and there is a need to load other pages into memory, those pages already loaded in the memory will get flushed out of memory, until there is a request to read those pages again.
Can you see the difference now? "memory-optimized tables" always live in memory whether someone makes a request to read those pages or not. A standard persisted table will only be cached in memory when someone makes a request for those pages, and will be flushed out if those in memory pages are not being used and there is a need to load more pages in memory.
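To make the distinction concrete, here is a minimal T-SQL sketch of how such a table is declared (the database, filegroup, path, and table names are hypothetical); the DURABILITY option is what maintains the secondary disk copy for durability only:

-- A memory-optimized filegroup must exist before the table can be created.
ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDb ADD FILE (NAME = 'imoltp_dir', FILENAME = 'C:\Data\imoltp_dir')
    TO FILEGROUP imoltp_fg;
GO

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); -- disk copy kept for durability only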
I have a batch job querying Sybase and SQL Server databases. This batch job can run for a day or more. We have been running it on a small set of data and have seen no connection timeout errors so far. My questions are:
How do I handle this long-running process? Should I configure a reconnect period so that the connection gets closed and reopened?
How do I handle the result set when it can return 1 million records to the client?
EDIT #1:
This sounds like a general JDBC question, but it isn't, because each database provider has its own options, such as fetch size. It is very much up to each provider whether to support this or not. If Sybase does not support it, the driver will load all results into memory at once.
This is a general question not strictly related to Sybase (SAP) ASE.
If you want the TCP/IP connection not to break, use keep-alive parameters for network connections. If you want to handle network connection breaks, use a connection pooling library.
You don't have to store the whole result set in memory. Just read your rows and process them on the fly. If you do want to fetch all 1 million rows before doing anything with them, then just add more memory to the JVM.
According to https://docs.oracle.com/cd/E13222_01/wls/docs90/jdbc_drivers/sybase.html.
We can call setFetchSize() to determine the maximum number of records kept in memory at one time. If you have enough memory, you can set it to 0. This way we can limit the memory allowance of each fetch so that it doesn't blow up our memory.
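A minimal Java sketch of this streaming approach (the connection URL, credentials, and table name are hypothetical, and exactly how the fetch size is honored depends on the JDBC driver in use):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingBatch {
    public static void main(String[] args) throws Exception {
        // Hypothetical jConnect-style URL; substitute your driver's actual format.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {

            // Hint to the driver to buffer at most 1000 rows at a time
            // instead of materializing the whole result set in memory.
            stmt.setFetchSize(1000);

            try (ResultSet rs = stmt.executeQuery(
                     "SELECT id, payload FROM big_table")) {
                while (rs.next()) {
                    // Process each row on the fly; nothing is accumulated.
                    process(rs.getLong("id"), rs.getString("payload"));
                }
            }
        }
    }

    private static void process(long id, String payload) {
        // Placeholder for the real per-row work.
    }
}

Some drivers only honor the fetch size under particular connection settings, so verify the behaviour against your driver's documentation.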
We have a SQL Server 2012 Enterprise Edition, 128GB of RAM, Windows 2008R2. The SQL Server job runs every day at 3 AM and takes 5 hrs to load data into the database. During this process, SQL Server utilizes 123GB (max memory allocated).
After the job completes, SQL Server is not releasing the RAM.
I queried memory utilization, and the buffer pool shows 97GB. Users don't access the database during this time. I restarted the SQL Server services to bring RAM usage down. I didn't find a correct answer related to this issue. Why is SQL Server not releasing the RAM? How can we bring RAM utilization down?
SQL Server Job -> SSIS package -> Import data from MySQL to SQL Server database
Thanks
This is by design: once SQL Server uses memory, it keeps hold of it and does not release it back to the OS.
Task Manager may show all or nearly all memory as used by SQL Server, but if you want to see how much memory SQL Server is actually using, you can run the following query.
SELECT (physical_memory_in_use_kb/1024) AS Memory_usedby_Sqlserver_MB
FROM sys.dm_os_process_memory;
By design, SQL Server holds on to the RAM that is has allocated. Much of the RAM is used for the buffer pool. The buffer pool is a cache that holds database pages in memory for fast retrieval.
If SQL Server were to release some memory, and someone were to run a query that requests it right afterwards, the query would have to wait for expensive physical I/O to produce the data. Therefore, SQL Server tries to hold as much memory as possible (and as configured) for as long as possible.
The RAM settings here specify the min server memory and the max server memory. Careful setting of the max memory setting allows room for other processes to run. The article quotes a complicated formula for determining how much room to leave:
From the total OS memory, reserve 1GB-4GB for the OS itself. Then subtract the equivalent of potential SQL Server memory allocations outside the max server memory control, which is comprised of stack size * calculated max worker threads + -g startup parameter (or 256MB by default if -g is not set). What remains should be the max server memory setting for a single-instance setup.
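As a rough worked example for the 128GB server in this question (assuming the documented x64 thread stack size of 2MB and the default calculated max worker threads of 576 for a 64-bit instance with 8 logical CPUs): 128GB - 4GB for the OS - (576 x 2MB, about 1.1GB) - 256MB for -g leaves roughly 122GB as the max server memory setting.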
In our servers, we usually just wing it and set the max memory option to several GB below the total physical memory. This leaves plenty of room for the OS and other applications.
If SQL Server memory is over the min server memory, and the OS is under memory pressure, SQL Server can release memory until it is at the min server memory setting.
Reference: Memory Management Architecture Guide.
One of the primary design goals of all database software is to
minimize disk I/O because disk reads and writes are among the most
resource-intensive operations. SQL Server builds a buffer pool in
memory to hold pages read from the database. Much of the code in SQL
Server is dedicated to minimizing the number of physical reads and
writes between the disk and the buffer pool. SQL Server tries to reach
a balance between two goals:
Keep the buffer pool from becoming so big that the entire system is low on memory.
Minimize physical I/O to the database files by maximizing the size of the buffer pool.
When SQL Server is using memory dynamically, it queries the system
periodically to determine the amount of free memory. Maintaining this
free memory prevents the operating system (OS) from paging. If less
memory is free, SQL Server releases memory to the OS. If more memory
is free, SQL Server may allocate more memory. SQL Server adds memory
only when its workload requires more memory; a server at rest does not
increase the size of its virtual address space.
...
As more users connect and run queries, SQL Server acquires the
additional physical memory on demand. A SQL Server instance continues
to acquire physical memory until it either reaches its max server
memory allocation target or Windows indicates there is no longer an
excess of free memory; it frees memory when it has more than the min
server memory setting, and Windows indicates that there is a shortage
of free memory.
As other applications are started on a computer running an instance of
SQL Server, they consume memory and the amount of free physical memory
drops below the SQL Server target. The instance of SQL Server adjusts
its memory consumption. If another application is stopped and more
memory becomes available, the instance of SQL Server increases the
size of its memory allocation. SQL Server can free and acquire several
megabytes of memory each second, allowing it to quickly adjust to
memory allocation changes.
If, for some reason:
You absolutely MUST have that memory back
You know you do not need it for a while
You are willing to pay a penalty for virtual memory allocation and physical I/O to retrieve data from disk the next time you need that memory
Then you can temporarily reconfigure the max server memory setting to a lower value. This can be done through the SSMS user interface, or you can use sp_configure 'max server memory' followed by RECONFIGURE to make the change programmatically.
Full disclosure: I did not try it myself.
You should not try it on your production environment before testing it somewhere else.
This is from a DBA answer:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
4096 should be replaced by the value that you find acceptable as the minimum.
It should later be followed by a similar command to increase the memory back to your original maximum.
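For example, to restore a hypothetical original cap of 120GB (sp_configure takes the value in MB):

sp_configure 'max server memory', 122880;
GO
RECONFIGURE;
GO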
Let's say we have a database whose transaction log has an initial size of 100MB and MAXSIZE set to UNLIMITED.
SQL Server writes to the log sequentially from start to end. In one book I found the following passage:
When SQL Server reaches the end of the file as defined by the size
when it was set up, it will wrap around to the beginning again,
looking for free space to use. SQL Server can wrap around without
increasing the physical log file size when there is free virtual
transaction space. Virtual transaction log space becomes free when SQL
Server can write the data from the transaction log into the underlying
tables within the database.
The last part is really confusing to me. What does the last sentence mean? Does it mean that SQL Server overwrites old, committed transactions with new transactions?
As far as I know, that should not be the case, because all transactions must be preserved until a backup is taken.
I don't know if I was clear enough; I will update the post if some explanation is needed.
This only applies to the SIMPLE recovery model:
Virtual transaction log space becomes free when SQL Server can write the data from the transaction log into the underlying tables within the database.
This means that once the transactions have actually been written to the physical tables, they are no longer needed in the transaction log, because at this point a power outage or another catastrophic failure can no longer cause the transactions to be "lost"; they have already been persisted to disk.
There is no need to wait until a backup is done. However, if you need full point-in-time recovery, you would use the FULL recovery model, and in that case log records are not overwritten until they have been backed up.
The log records are no longer needed in the transaction log if all of the following are true:
The transaction of which it is part has committed.
The database pages it changed have all been written to disk by a checkpoint.
The log record is not needed for a backup (full, differential, or log).
The log record is not needed for any feature that reads the log (such as database mirroring or replication).
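If you want to check which of these conditions is currently holding up log reuse, the documented log_reuse_wait_desc column in sys.databases reports it; a small sketch:

-- Shows what, if anything, is preventing transaction log space reuse
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, or NOTHING).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;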
Further reading:
https://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx
https://technet.microsoft.com/en-us/library/jj835093%28v=sql.110%29.aspx
I am stress testing SQL Server 2008, and I want to know the data flow into tempdb caused by the usage of temporary tables and table variables.
These statistics are also shown in Activity Monitor.
Is it possible somehow to record the data and analyse it afterwards?
I have 2 cases in mind:
Record a SQL Server performance counter (I don't know which one it would be)
Somehow record the data from Activity Monitor
Writing into a database does not equate 1-to-1 with disk IO. Database updates only dirty in-memory pages, which are later copied to disk by the lazy writer or at checkpoint. The only thing written to disk immediately is the write-ahead log activity, for which there is a specific per-database counter: Log Bytes Flushed/sec. Note that tempdb has special logging requirements, as it is never recovered, so it only needs undo information.
Whenever dirty pages are actually flushed, be it at checkpoint or by the lazy writer, there are specific counters for that too: Checkpoint pages/sec and Lazy writes/sec. These are not per database, because the activities themselves are not 'per database'.
Finally, there is the virtual file stats DMV: sys.dm_io_virtual_file_stats, which offers the aggregate number of IO operations and number of bytes for each individual file of each individual database, including tempdb.
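For instance, a small sketch of how that DMV can be queried for tempdb (joining to sys.master_files only to resolve the file names):

-- Aggregate I/O per tempdb file since the instance last started.
SELECT mf.name AS file_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;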
You mention that you want to measure the specific impact of temp tables and table variables, but you won't be able to separate them from the rest of tempdb activities (sort spools, worktables etc). I recommend you go over Working with tempdb in SQL Server 2005, as it still applies to SQL 2008.
If you use Performance Monitor (perfmon.exe) to monitor the SQL Server counters, you can configure it to log to a .csv file to analyse in Excel (for example).
The performance counter in question is Data File(s) Size (KB) under SQLServer:Databases.
I would take "snapshots" at regular intervals (using the following DMVs), loaded into tables, to determine your internal usage of tempdb:
sys.all_objects
sys.dm_db_file_space_usage
sys.dm_db_task_space_usage
sys.dm_db_task_space_usage will break the usage down by session (SPID), etc.; a sketch of such a snapshot follows.
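A minimal sketch of such a snapshot (the snapshot table name is hypothetical; run the INSERT on whatever schedule suits your test):

-- Hypothetical table to hold periodic tempdb usage samples.
CREATE TABLE dbo.TempdbTaskUsageSnapshot
(
    captured_at                       DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    session_id                        SMALLINT  NOT NULL,
    user_objects_alloc_page_count     BIGINT    NOT NULL,
    internal_objects_alloc_page_count BIGINT    NOT NULL
);

-- Run on a schedule (e.g. every minute) during the stress test.
INSERT INTO dbo.TempdbTaskUsageSnapshot
    (session_id, user_objects_alloc_page_count, internal_objects_alloc_page_count)
SELECT session_id,
       SUM(user_objects_alloc_page_count),     -- temp tables and table variables
       SUM(internal_objects_alloc_page_count)  -- sorts, spools, worktables
FROM sys.dm_db_task_space_usage
GROUP BY session_id;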