How to clear the Firebird query cache?

How do I clear the Firebird query cache so I can run my performance tests?
There's an example of how to do this in SQL Server here.
Thanks

Andrei is correct -- Firebird relies heavily on the OS file system cache. Firebird will cache a small number of pages internally (check the buffers property on your database), but it's generally a very small amount of data. Classic defaults to something like 75 pages, I believe; I've seen suggestions of about 1000 pages elsewhere, which comes out to 8 or 16 MB depending on page size.
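If you want to see the configured buffer count for a given database, the monitoring tables expose it directly; a minimal sketch, assuming Firebird 2.1 or later (where the MON$ monitoring tables are available):

    -- Shows the page buffer count and page size for the current database
    -- (run via isql or any Firebird client connected to your database)
    SELECT MON$PAGE_BUFFERS, MON$PAGE_SIZE
    FROM MON$DATABASE;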
In lieu of restarting the OS to clear the file system cache, you could put your database on its own mount. Then to completely clear the cache you could stop Firebird, unmount/mount the partition and start Firebird again. That would invalidate the file system cache.
This shouldn't be too painful -- unlike other databases, Firebird does not have to scan the data files on start and replay transactions from a transaction log. The transaction log is essentially combined with the data file by using careful writes.

Besides its memory buffer for frequently accessed data pages, Firebird relies on the OS file cache. By restarting the server process you can empty the memory buffer, but to clear the file cache, I'm afraid, you need to restart the OS.

I think restarting the Firebird service is the easiest (only?) way.

Related

Why do best practices tell you to disable caching on the log drive of SQL Server on an Azure VM

According to this article and several others I've found:
Performance best practices for SQL Server
It is considered a best practice to disable caching on premium storage disks for the SQL Server Log disks. However, I can't find anywhere that explains why.
Does anyone have some insight?
Let me add that the reason I see disabling read-only cache on the log drive as an issue is that it forces you to set up two separate Storage Pools inside the VM, which makes upgrading/downgrading VMs inside Azure more problematic and considerably less performant.
For example, say you start with a DS V13, which has a 16-drive limit, but only about 6 of those drives can be maxed out before you're throttled (25,000 IOPS). Since best practices say read-only cache for data and no cache for logs, you give 8 of those drives to the data and 8 to the log.
Now the server needs to be upgraded, so you upgrade it to a DS V14. Now you can max out 12 drives before being throttled (50,000 IOPS). However, your data drive's Storage Spaces column count is only 8, which is throttled to 40,000 IOPS (e.g. 8 × 5,000 IOPS per premium P30 disk). So you're not using the VM's full I/O potential.
However, if you start with a DS V13 and assign all 16 of those drives to a single Storage Pool, then put both the log and data on it, you can upgrade/downgrade all the way up to a DS V15 without any concern about not using your full IOPS potential.
Another way to put it is: If you create a single Storage Pool for all 16 drives, you have considerably more flexibility in upgrading/downgrading the VM. If you have to create two Storage Pools, you do not.
We recommend configuring “None” cache on premium storage disks hosting the log files. Log files have primarily write-heavy operations and they do not benefit from the ReadOnly cache. Two reasons:
1: Legitimate cache contents will get evicted by useless data from the log.
2: Log writes will also consume cache bandwidth/IOPS. If the cache is enabled on a disk (ReadOnly or ReadWrite), every write on that disk will also write that data into the cache, and every read will also access/put the data in the cache. Thus every IO will hit the cache if the cache is on.
Thanks,
Aung
Log files are used as part of recovery and can help restore a database to a point in time. Having corrupt data in a log file from a power outage or hard reboot is not good with MSSQL. See the articles below from MS; they relate to older versions of SQL Server, but the purpose of the log file has not changed.
Information about using disk drive caches with SQL Server that every database administrator should know
https://support.microsoft.com/en-us/kb/234656
Description of caching disk controllers in SQL Server
https://support.microsoft.com/en-us/kb/86903

Multiple File groups on a Virtual machine for SQL Server

In many SQL Server articles it is mentioned that the best practice is to use multiple filegroups spread across physical disks to avoid disk contention and disk spindle issues. So my questions are:
1: Does the same theory of having multiple filegroups hold true for a virtual machine?
2: Should I still put tempdb on a different disk, and should I also create multiple tempdb data files to avoid large read/write operations on the same tempdb file, in a virtual machine setup for my production environment?
Your recommendation and reasoning would be helpful in deciding on a best practice.
Thanks.
Yes, it still applies to virtual servers. Part of the contention problem is accessing the Global Allocation Map (GAM) or Shared Global Allocation Map (SGAM), which exists for each database file and can only be accessed by one process or thread at a time. This is the "latch wait" problem.
If your second disk is actually on different spindles, then yes. If the database files would be on different logical disks but identical spindles, then it's not really important.
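For reference, placing data on a separate drive via a filegroup looks roughly like the following T-SQL sketch (the database name, file paths and table are hypothetical):

    -- Add a filegroup and put its data file on a different drive (E:)
    ALTER DATABASE SalesDb ADD FILEGROUP FG_Reference;

    ALTER DATABASE SalesDb
    ADD FILE (
        NAME = 'SalesDb_Reference1',
        FILENAME = 'E:\SQLData\SalesDb_Reference1.ndf',
        SIZE = 1GB,
        FILEGROWTH = 256MB
    ) TO FILEGROUP FG_Reference;

    -- New tables can then be created on that filegroup
    CREATE TABLE dbo.ReferenceData
    (
        Id int NOT NULL PRIMARY KEY,
        Value nvarchar(100) NOT NULL
    ) ON FG_Reference;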
The MS recommendation is that you should create one tempdb data file for each logical processor on your server, up to 8. You should test to see if you find problems with latch contention on tempdb before adding more than 8 data files.
You do not need to (and generally should not) create multiple tempdb log files because those are used sequentially. You're always writing to the next page in the sequence, so there's no way to split up disk contention.
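As a rough illustration of the one-data-file-per-core recommendation above, adding tempdb data files looks roughly like this (file names, paths and sizes are made up; keep all the files equally sized):

    -- Add extra tempdb data files, one per logical processor up to 8;
    -- only go beyond that if you still see allocation latch contention
    ALTER DATABASE tempdb
    ADD FILE (NAME = 'tempdev2', FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = 'tempdev3', FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
    -- ...and so on, keeping every file the same size so allocations stay balanced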
The question needs a bit more information about your environment. If the drives for the VM are hosted on a SAN somewhere and the drives presented to the VM are all spread across the same physical disks on the SAN then you're not going to avoid contention. If, however, the drives are not on the same physical drives then you may see an improvement. Your SAN team will have to advise you on that.
That being said, I still prefer having the log files split from the data files and tempdb on its own drive. The reason is that if a query doesn't go as planned, it can fill the log drive, which may take that database offline, but other databases may still be able to keep running (assuming they have enough empty space in their log files).
Again with tempdb: if it does get filled, the transaction will error out, and everything else should keep running without intervention.

Can a 200GB or bigger SQL Server database load into a pagefile when not enough memory is available on the server?

Here's what we're trying to do.
We will have a 200GB+ SQL Server database that needs to load into memory. Microsoft best practice is to have enough physical memory available on the server and then load the entire database into it. That means we would need 256GB of memory on each of our SQL Servers. This would result in fast access to the database loaded in memory, but at the high cost of that memory. BTW, we're running SQL Server 2008 on Windows Server 2008.
Currently, our server is set up with only 12GB of memory. Just under 3GB is allocated to the OS, and the remaining 9GB is used by SQL Server. Is it possible to increase the pagefile to 256GB and put it on an SSD drive? What we want to do then is load the database into the pagefile located on the SSD. We're hoping the performance will be similar to loading the entire database into memory, since it'll be on an SSD.
Will this work? Is there another alternative we're overlooking? We want to keep the costs down as much as we can, without sacrificing the performance of our environment. Any advice would be appreciated.
Thanks.
If you want the database to be stored in memory, you need to buy more memory. In spite of what the other answer suggests, memory is the absolute best and cheapest way to make a database perform better - SQL Server is designed to use memory well.
While SQL Server will take advantage of the page file when it has to, and while having the page file on an SSD will be slightly faster than on an old-fashioned mechanical disk, it's still I/O and swapping and there is a lot of overhead around that, regardless of the disk type underneath. This may turn out to be a little bit better, in general, than having the same page file on a spinny disk (or no page file at all), but I don't think that it's going to be anywhere near the impact of having real memory, or that it's going to come anywhere close to your expectations of "fast access."
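One way to see how much of each database is actually sitting in real memory at any moment is to query the buffer pool directly; a small sketch (SQL Server 2005 and later, requires VIEW SERVER STATE permission):

    -- Megabytes of each database currently held in the buffer pool (8 KB pages)
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024   AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;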
If you can't buy more memory then you can start with this page file on an SSD, but I'm confident you will need to additionally focus on other tuning opportunities - largely making sure you have indexes that support the type of queries you run, avoiding full table scans as much as possible. For full table aggregates you can consider indexed views (see here and here); for subsets you can consider filtered indexes.
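As a quick illustration of that last point, a filtered index materializes only the subset you actually query, which keeps it small enough to stay in cache (table, column and index names here are hypothetical; filtered indexes require SQL Server 2008 or later):

    -- Index only the "hot" rows instead of the whole 200 GB table
    CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount)
    WHERE Status = 'Open';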
And just to be sure: you are storing the actual data on an SSD drive, right? If not, then I would argue that you should use the SSD for the data and/or log, not for the page file. The page file isn't going to offer you much benefit if it is constantly swapping data in and out by exchanging with a spinny disk.
Need more clarity on the question.
Are you in control of the database or is this a COTS solution that limits your ability to optimize?
Are you clustering? Is that why adding 200+GB of RAM is an issue (now more than 400GB, 200 per node)?
Are you on bare metal or virtualized? Is this why RAM may be an issue?
So far it would seem the "experts" have made some assumptions that may not be fair to your circumstance.
Please update your question... :)

In Memory Database as a backup for Database Failures

Is an in-memory database a viable backup option for performing read operations in case of database failures? One can insert data into an in-memory database once in a while, and in case the database server/web server goes down (a rare occurrence), one can still access the data present in the in-memory database outside of the web server.
If you're going to hold your entire database in memory, you might just as well perform all operations there and hold your backup on disk.
No, since a power outage means your database is gone. Or if the DB process dies, and the OS deallocates all the memory it was using.
I'd recommend a second hard drive, external or internal, and dump the data to that hard drive.
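If the database in question happens to be SQL Server, that dump is just a regular backup written to the second drive; a minimal sketch with a placeholder database name and path:

    -- Full backup written to the second (internal or external) drive
    BACKUP DATABASE SalesDb
    TO DISK = 'E:\Backups\SalesDb_full.bak'
    WITH INIT, CHECKSUM;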
It probably depends on your database usage. For instance, it would be hard for me to imagine StackOverflow doing this.
On the other hand, not every application is SO. If your database usage is limited, you could take a cue from mobile applications, which accept the fact that a server may not always be available, and treat your web application as though it were a mobile client. See Architecting Disconnected Mobile Applications Using a Service Oriented Architecture.

Load readonly database tables into memory

In one of my applications I have a 1GB database table that is used for reference data. There is a huge amount of reads coming off that table, but there are never any writes. I was wondering if there's any way that data could be loaded into RAM so that it doesn't have to be accessed from disk?
I'm using SQL Server 2005.
If you have enough RAM, SQL will do an outstanding job determining what to load into RAM and what to seek on disk.
This question is asked a lot and it reminds me of people trying to manually set which "core" their process will run on -- let the OS (or in this case the DB) do what it was designed for.
If you want to verify that SQL is in fact reading your look-up data out of cache, you can run a load test and use Sysinternals FileMon, Process Explorer and Process Monitor to verify that the 1GB table is not being read from disk. For this reason, we sometimes put our "lookup" data into a separate filegroup so that it is very easy to monitor when it is being accessed on disk.
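A complementary check from inside SQL Server (alongside the Sysinternals tools) is to look at the physical-read counts for a representative query; a quick sketch against a made-up lookup table:

    -- With a warm cache, "physical reads" and "read-ahead reads" in the
    -- Messages output should be 0, meaning the pages came from the buffer pool
    SET STATISTICS IO ON;
    SELECT COUNT(*) FROM dbo.ReferenceData;
    SET STATISTICS IO OFF;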
Hope this helps.
You're going to want to take a look at memcached. It's what a lot of huge (and well-scaled) sites used to handle problems just like this. If you have a few spare servers, you can easily set them up to keep most of your data in memory.
http://en.wikipedia.org/wiki/Memcached
http://www.danga.com/memcached/
http://www.socialtext.net/memcached/
Just to clarify the issue for SQL Server 2005 and up:
"This functionality was introduced for performance in SQL Server version 6.5. DBCC PINTABLE has highly unwanted side-effects. These include the potential to damage the buffer pool. DBCC PINTABLE is not required and has been removed to prevent additional problems. The syntax for this command still works but does not affect the server."
DBCC PINTABLE will explicitly pin a table in core if you want to make sure it remains cached.
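For completeness, the old usage looked like the sketch below; on SQL Server 2005 and later the command still parses but is a no-op, as the quote above explains (database and table names are placeholders):

    -- Deprecated: pins a table's pages in the buffer pool
    -- (no effect from SQL Server 2005 onward)
    DECLARE @db int, @tbl int;
    SET @db  = DB_ID('SalesDb');
    SET @tbl = OBJECT_ID('dbo.ReferenceData');
    DBCC PINTABLE (@db, @tbl);
    -- DBCC UNPINTABLE (@db, @tbl);  -- reverses the pin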
