Does anyone know how I can query the database to find out which devices a database uses?
There are the sysdatabases and sysdevices tables, but I don't know how to link them.
Anyone know?
The best way is to run sp_helpdb against the database you're interested in:
1> sp_helpdb tempdb2
2> go
... other stuff here...
device_fragments               size          usage                created                   free kbytes
------------------------------ ------------- -------------------- ------------------------- ----------------
tempdb2data                    2048.0 MB     data only            Dec 17 2008 11:42AM       2086568
tempdb2log                     2048.0 MB     log only             Dec 17 2008 11:42AM       not applicable
tempdb2log                     2048.0 MB     log only             Dec 17 2008 11:42AM       not applicable
tempdb2data                    2048.0 MB     data only            Dec 17 2008 11:43AM       2088960
tempdb2log                     4096.0 MB     log only             Dec 17 2008 11:44AM       not applicable
--------------------------------------------------------------
log only free kbytes = 8355836
1. Just a note re your first question: if you USE the database first, you will get even more detail in the report.
2. Do you still need the second question answered (how to link sysdatabases and sysdevices), as in, are you writing queries against the catalogue? If so, I need your ASE version; the answers are different.
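In the meantime, for reference: on pre-15 ASE the link typically goes through master..sysusages rather than directly between sysdatabases and sysdevices. A rough sketch, assuming a pre-15 catalogue and a 2K page size (verify against your version before relying on it):

select db_name(u.dbid) as database_name,
       d.name          as device_name,
       u.size / 512    as size_mb   -- assumes 2K pages (512 pages per MB)
from   master..sysusages  u,
       master..sysdevices d
where  u.vstart between d.low and d.high   -- fragment's virtual start page lies in the device's page range
  and  d.cntrltype = 0                     -- database devices only (excludes dump devices)
order  by 1, 2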
I am running mirroring on several databases on a SQL Server 2008 R2 system (I know, don't judge me).
I ran this to look at the mirroring log:
EXEC sp_dbmmonitorresults
    GANDALF, -- database
    4,       -- rows from the last two days
    0;       -- don't update status
And I noticed that there is a gap of several hours:
time_recorded
--------------------------
2021-03-04 10:15:39.930
2021-03-04 10:15:39.930
**
2021-03-04 10:15:09.913
**
2021-03-03 20:43:36.690
2021-03-03 20:43:36.690
2021-03-03 20:43:20.743
Can anyone give me a clue as to why this might happen?
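One way to narrow this down (a sketch; it assumes the default setup, where sp_dbmmonitorresults reads from the msdb.dbo.dbm_monitor_data table populated by the mirroring monitor job) is to query the raw rows directly and check whether the monitor job simply did not run during those hours:

-- Sketch: look at the raw monitor rows around the gap. If rows are
-- missing here too, the monitor job itself was not running during
-- that window (e.g. SQL Agent was stopped or the job was disabled).
SELECT time_recorded, local_time
FROM msdb.dbo.dbm_monitor_data
WHERE database_id = DB_ID('GANDALF')
ORDER BY time_recorded DESC;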
I'm using Visual Studio 2019 Community and SQL Server LocalDB on Windows 10 Pro 20H2. I've been trying to restore the WideWorldImporters database from WideWorldImporters-Full.bak (downloaded from GitHub) to my LocalDB instance, without any success.
This is what has happened so far:
Query (*** = my username):
USE master;
RESTORE DATABASE WideWorldImporters
FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\WideWorldImporters-Full.bak'
WITH MOVE 'WWI_Primary' TO 'C:\Users\***\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MSSQLLocalDB\WideWorldImporters.mdf',
     MOVE 'WWI_UserData' TO 'C:\Users\***\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MSSQLLocalDB\WideWorldImporters_UserData.ndf',
     MOVE 'WWI_Log' TO 'C:\Users\***\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MSSQLLocalDB\WideWorldImporters.ldf',
     MOVE 'WWI_InMemory_Data_1' TO 'C:\Users\***\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MSSQLLocalDB\WideWorldImporters_InMemory_Data_1',
     REPLACE;
Message pane:
Processed 1464 pages for database 'WideWorldImporters', file 'WWI_Primary' on file 1.
Processed 53096 pages for database 'WideWorldImporters', file 'WWI_UserData' on file 1.
Processed 33 pages for database 'WideWorldImporters', file 'WWI_Log' on file 1.
Processed 3862 pages for database 'WideWorldImporters', file 'WWI_InMemory_Data_1' on file 1.
100% | No issues found
Executing query... (it just keeps running)
Output (General) is empty
All the files mentioned in the message window can be found in the destination folder:
WideWorldImporters_UserData.ndf (2 097 152 kb)
WideWorldImporters.mdf (1 048 576 kb)
WideWorldImporters.ldf (102 400 kb)
\WideWorldImporters_InMemory_Data_1\
filestream.hdr ( 1 kb)
\$FSLOG (empty )
\$HKv2
{1E6DC7E6-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 2 048 kb)
{3E231B6B-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 20 kb)
{4B9D83BE-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 2 048 kb)
{6E82296C-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 16 384 kb)
{6F44D507-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
{07FEB052-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 20 kb)
{7C4940C1-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
{9A77966E-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 4 096 kb)
{28CE0994-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 16 384 kb)
{63F1F945-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 2 048 kb)
{79B6C099-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 4 096 kb)
{122A2C90-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 16 384 kb)
{285FCA71-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 4 kb)
{421C57F0-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 4 096 kb)
{A54BA375-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
{C818BEE6-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 30 836 kb)
{CB6FF974-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
{F6F88B52-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
{F756E9B8-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.hkckp ( 1 024 kb)
Executing query... (still running; no change in the size of the files for the last two hours)
This is where I am now, and I'm getting mad.
Since you went for the RESTORE WITH REPLACE option, it has overwritten the existing WideWorldImporters database. Because the restore used REPLACE, there could be uncommitted transactions that it is still trying to roll back; the database should come back online after some time.
You have to be careful with the REPLACE option. From MSDN:
REPLACE Option Impact
REPLACE should be used rarely and only after
careful consideration. Restore normally prevents accidentally
overwriting a database with a different database. If the database
specified in a RESTORE statement already exists on the current server
and the specified database family GUID differs from the database
family GUID recorded in the backup set, the database is not restored.
This is an important safeguard.
The REPLACE option overrides several important safety checks that
restore normally performs. The overridden checks are as follows:
Restoring over an existing database with a backup taken of another
database.
With the REPLACE option, restore allows you to overwrite an existing
database with whatever database is in the backup set, even if the
specified database name differs from the database name recorded in the
backup set. This can result in accidentally overwriting a database by
a different database.
Restoring over a database using the full or bulk-logged recovery model
where a tail-log backup has not been taken and the STOPAT option is
not used.
With the REPLACE option, you can lose committed work, because the log
written most recently has not been backed up.
Overwriting existing files.
For example, a mistake could allow overwriting files of the wrong
type, such as .xls files, or that are being used by another database
that is not online. Arbitrary data loss is possible if existing files
are overwritten, although the restored database is complete.
You can read how to get a database out of the restoring state here: https://www.mssqltips.com/sqlservertip/5460/sql-server-database-stuck-in-restoring-state/
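To check from a second connection whether the restore is actually still making progress, a standard DMV query helps (a sketch; percent_complete is populated while a RESTORE runs):

-- Shows whether the restore is progressing or stuck on a wait.
SELECT session_id, command, percent_complete, wait_type
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';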
One of my production machines (SQL Server Express 2012) has not been performing too well. I started running the sp_WhoIsActive script, and I've been getting a lot of these wait types:
(10871ms)PREEMPTIVE_OS_AUTHZINITIALIZECON
They always occur when calling a function that checks certain user privileges. If I understand this correctly, the function had to wait almost 11 seconds for the Windows function AuthzInitializeContextFromSid (see https://www.sqlskills.com/help/waits/preemptive_os_authzinitializecontextfromsid/).
Am I correct in my assumption? (full output below)
I couldn't find any info online about this wait type going haywire. What could be causing this?
Full output:
00 00:00:10.876 75
<?query --
select @RetValue = ([dbo].[Users_IsMember]('some_role_name', @windowsUserName)
| is_srvrolemember('SysAdmin', @windowsUserName))
--?>
<?query --
MyDB.dbo.StoredProcName;1
--?>
DOMAIN\User (10871ms)PREEMPTIVE_OS_AUTHZINITIALIZECON master: 0 (0 kB),tempdb: 0 (0 kB),MyDB: 0 (0 kB) 10,875 0 0 NULL 93 0 0 <ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" Version="1.5" Build="11.0.7001.0"><BatchSequence><Batch><Statements><StmtSimple StatementText="select @RetValue = ([dbo].[Users_IsMember]('some_role_name', @windowsUserName)
| is_srvrolemember('SysAdmin', @windowsUserName))
" StatementId="1" StatementCompId="49" StatementType="ASSIGN WITH QUERY" RetrievedFromCache="true" /></Statements></Batch></BatchSequence></ShowPlanXML> 3 runnable NULL 0 NULL ServerName AppName .Net SqlClient Data Provider 2018-12-17 09:29:35.413 2018-12-17 09:29:35.413 0 2018-12-17 09:29:46.447
In my experience the PREEMPTIVE_OS wait stats are related to an imbalance between how much memory is allocated to SQL Server versus the Windows OS itself.
In this case, the OS is being starved of memory resources. You might try adding more total memory to the box, or ensuring that SQL Server is configured to use only about 80 percent of the total memory installed on the machine. Or both.
Note: this is not a blanket statement on how to configure memory for SQL Server, but rather a good place to start when tuning for PREEMPTIVE_OS-related wait types.
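For example, capping SQL Server's memory could look like the sketch below. The 6554 MB value assumes a hypothetical 8 GB box (80 percent of 8192 MB); adjust it to your actual hardware.

-- Cap SQL Server's memory so the OS keeps a share for itself.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 6554;  -- ~80% of an assumed 8 GB
RECONFIGURE;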
I've found this post about the usual size of a SonarQube database:
How big is a sonar database?
In our case, we have 3,584,947 LOC to analyze. If every 1,000 LOC stores about 350 KB of data, it should use about 1.2 GB, but we've found that our SonarQube database actually stores more than 20 GB...
The official documentation (https://docs.sonarqube.org/display/SONAR/Requirements) says that for 30 million LOC with 4 years of history, they use less than 20 GB...
In our General Settings > Database Cleaner we have all default values, except for "Delete all analyses after", which is set to 360 instead of 260.
What can create so much data in our case?
We use SonarQube version 6.7.1.
EDIT
As @simonbrandhof asked, here are our biggest tables:
| Table Name               | # Records  | Data (KB)  |
| ------------------------ | ---------- | ---------- |
| `dbo.project_measures`   | 12'334'168 | 6'038'384  |
| `dbo.ce_scanner_context` | 116'401    | 12'258'560 |
| `dbo.issues`             | 2'175'244  | 2'168'496  |
20 GB of disk sounds way too big for 3.5M lines of code. For comparison, the internal PostgreSQL schema at SonarSource is 2.1 GB for 1M lines of code.
I recommend cleaning up the database in order to refresh statistics and reclaim dead storage. The command is VACUUM FULL on PostgreSQL; there are probably similar commands on other databases. If it's not better after that, then please provide the list of the biggest tables.
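On PostgreSQL that clean-up could look like this (a sketch; note that VACUUM FULL rewrites each table and takes an exclusive lock, so run it in a maintenance window):

-- Rewrites tables to reclaim dead space, then refreshes planner statistics.
VACUUM FULL ANALYZE;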
EDIT
The unexpected size of table ce_scanner_context is due to https://jira.sonarsource.com/browse/SONAR-10658. This bug is going to be fixed in 6.7.4 and 7.2.
I am using sqlite3 as the DB manager for my application, developed on a Raspberry Pi 3.
My table is composed of around 200 columns (not that many), mostly boolean and numeric fields.
I add a (complete) record every minute. The DB is accessed from a C program using transactions; each transaction includes one insert and six updates (to keep the code readable, instead of one very long insert query).
The DB file is on the filesystem (hence on the SD card) inside the home folder.
For every transaction the DB is opened, the pragmas
PRAGMA synchronous = NORMAL;
PRAGMA journal_mode = WAL;
are set, and the query is performed.
Performance is good on average, but in the timing log I see a peak every once in a while.
An extract of the log is reported below:
Apr 28 07:06:13 db write took 45.200000 ms
Apr 28 07:07:13 db commit took 0.302000 ms
Apr 28 07:07:13 db write took 75.858000 ms
Apr 28 07:08:13 db commit took 0.354000 ms
Apr 28 07:08:13 db write took 75.395000 ms
Apr 28 07:09:13 db commit took 0.268000 ms
Apr 28 07:09:13 db write took 40.620000 ms
Apr 28 07:10:13 db commit took 0.437000 ms
Apr 28 07:10:13 db write took 81.910000 ms
Apr 28 07:11:13 db commit took 0.205000 ms
Apr 28 07:11:13 db write took 43.315000 ms
Apr 28 07:12:13 db commit took 0.301000 ms
Apr 28 07:12:13 db write took 75.456000 ms
Apr 28 07:13:15 db commit took 1872.488000 ms <-----
Apr 28 07:13:15 db write took 1951.572000 ms <-----
Apr 28 07:14:13 db commit took 7.934000 ms
Apr 28 07:14:13 db write took 62.853000 ms
Apr 28 07:15:13 db commit took 0.274000 ms
Apr 28 07:15:13 db write took 80.568000 ms
Apr 28 07:16:13 db commit took 0.277000 ms
The arrow points to one of the time peaks that recur (with variable periods) during execution.
To better understand the situation: analyzing the benchmark, I had two peaks in the last 12 hours; one was about 1 s (not reported) and this one.
Could the time peaks happen because of filesystem activity on the sd?
Could making a different partition on the sd card have an impact on such performance?
Is there any other pragma that could protect my application from this behaviour?
Adding the pragmas has significantly improved the situation so far, but I think it is not acceptable yet.
Thanks for your time and patience.
Any hint is welcome.
Regards,
mopyot
The database regularly moves the data from the write-ahead log into the actual database file; this is called checkpointing:
By default, SQLite will automatically checkpoint whenever a COMMIT occurs that causes the WAL file to be 1000 pages or more in size, or when the last database connection on a database file closes. […]
But programs that want more control can force a checkpoint using the wal_checkpoint pragma … The automatic checkpoint threshold can be changed or automatic checkpointing can be completely disabled using the wal_autocheckpoint pragma …
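So one way to keep the checkpoint cost out of the once-a-minute write path (a sketch; whether it is acceptable depends on how large you can let the WAL file grow) is to disable automatic checkpointing on the writer connection and checkpoint explicitly at a non-critical moment:

PRAGMA wal_autocheckpoint = 0;   -- COMMIT never triggers a checkpoint on this connection
-- ... later, from a point in the code where a pause is harmless:
PRAGMA wal_checkpoint(PASSIVE);  -- copies WAL pages back without blocking readers or writers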