How can I run a command to check to see how much space I have in total in my database and how much free space I have?
CALL sa_disk_free_space( );
Reports information about space available for a dbspace, transaction log, transaction log mirror, and/or temporary file.
Result set:
dbspace_name - The dbspace name, transaction log file, transaction log mirror file, or temporary file.
free_space - The number of free bytes on the volume.
total_space - The total amount of disk space on the drive where the dbspace resides.
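For a quick OS-level cross-check of what the procedure reports, the free and total space of the volume holding the database files can be read with Python's shutil.disk_usage. This is only a sketch: the directory path is an assumption, with the current directory used as a stand-in.

```python
import shutil

# Directory holding the dbspace files -- hypothetical; "." is a stand-in.
db_dir = "."

# shutil.disk_usage returns a named tuple (total, used, free), in bytes,
# for the volume that contains the given path.
usage = shutil.disk_usage(db_dir)
print(f"total: {usage.total} bytes, free: {usage.free} bytes")
```

Note this reports the whole volume, like the total_space/free_space columns above; it says nothing about space allocated inside the dbspace file itself.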
I'm using TDengine 3.0. I've found that a large number of 0000000.log files are generated under /var/lib/taos/vnode/vnode2/wal/, which takes up a lot of space.
How should these log files be configured, and how should they be cleaned up?
You can set WAL_RETENTION_PERIOD to 0; each WAL file is then deleted immediately after its contents are written to disk, which decreases the space usage right away.
See https://docs.tdengine.com/taos-sql/database/
I know that file systems use clusters (n × 512 B sectors, usually 4 KB in total) to store files. If I have a file of 5 KB, it uses two clusters, and the unused remainder is called slack space. My question concerns the situation where a user reads a file from disk, modifies it (adds a few characters), and saves it again. Will the OS overwrite the file in place, starting from the location it was read from, or will the file be written to entirely new clusters, with the address of the file's starting cluster erased and replaced by the new cluster address?
New part:
I just read in the book "Information Technology: An Introduction for Today's Digital World" that if a file uses 2 blocks (clusters) and a second file occupies the 4 consecutive blocks after it, then when the first file is edited and grows to 3 blocks, it is written after the second file and the 2 blocks it previously occupied are freed. But I still don't know what happens if I grow the file by, say, one character, so that it still fits within the original 2 blocks. Will the data be added to the existing two blocks, or will the file be stored at a new physical location on disk (a new set of blocks)?
When a user stores a file, it occupies space on disk in clusters (a cluster combines several sectors; since a sector is usually 512 bytes, a cluster is typically 4 KB). If a file takes 3 KB, then 1 KB of its cluster stays unused. What happens when you add a little data to the file depends on the procedure used to modify it:
1. If you append data manually (e.g. echo "some text" >> filename), the data is added in the existing cluster, since 1 KB is still available there. If the file grows beyond the cluster, it takes additional free clusters (the file system uses "extents" to address all of them).
2. If you use a text editor, it typically copies the file to another location on disk (partly because of multi-user situations where two users access the same file at the same time). The previous location is freed (the sector contents remain, but the file system no longer references them) and replaced with the new location.
Since the majority of users edit files with an editor, scenario 2 is the most common.
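The difference between the two scenarios can be observed from user space by watching the file's inode number: appending keeps the same inode (the same on-disk allocation), while an editor-style save that writes a new file and renames it over the old one swaps in a new inode. A minimal sketch (the file names are illustrative, and many, but not all, editors save via rename):

```python
import os
import tempfile

# Work in a throwaway directory; names are illustrative.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "note.txt")

with open(path, "w") as f:
    f.write("hello")
ino_before = os.stat(path).st_ino

# Scenario 1: append in place (like `echo "..." >> file`).
# The path still refers to the same inode, so the data lands in the
# clusters the file already occupies (plus new ones if it outgrows them).
with open(path, "a") as f:
    f.write(" world")
assert os.stat(path).st_ino == ino_before

# Scenario 2: editor-style save -- write a new file, then rename it
# over the old one.  The path now points at a new inode, i.e. new
# clusters; the old clusters are freed.
tmp_path = path + ".tmp"
with open(tmp_path, "w") as f:
    f.write("hello world!")
os.replace(tmp_path, path)
assert os.stat(path).st_ino != ino_before
```

The asserts pass on ordinary POSIX filesystems; the inode number is exactly the "address" that stays put in scenario 1 and changes in scenario 2.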
I want to truncate a file (hive.log) on Unix, keeping only the last 20 MB. The file is being used by other applications, so I don't want to take any risk by recreating it.
I have tried the Unix truncate command, but it does not keep the part of the file I need, and I could not find an option to meet my requirement.
Hive uses Log4j to maintain its logs, so what you want to achieve can be done by modifying the log4j properties file.
File location: /etc/hive/conf/hive-log4j.properties
The content you should be interested in:
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex= 30
log4j.appender.DRFA.MaxFileSize = 256MB
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
Says it is a daily rolling file appender.
log4j.appender.DRFA.MaxBackupIndex= 30
Says it will keep 30 backups of logs.
log4j.appender.DRFA.MaxFileSize = 256MB
Says the maximum file size would be 256MB.
Now you know which properties you need to change.
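If changing the Log4j configuration is not possible, the "keep only the last 20 MB" behaviour from the question can be approximated in place, without recreating the file. This is only a sketch under assumptions: the path is hypothetical, there is an unavoidable race if another process writes to the file while it runs, and writers that track their own file offset (rather than opening with O_APPEND) may misbehave afterwards.

```python
import os

def keep_tail(path, keep_bytes):
    """Rewrite `path` in place so only its last `keep_bytes` bytes remain."""
    size = os.path.getsize(path)
    if size <= keep_bytes:
        return  # nothing to trim
    with open(path, "rb+") as f:
        f.seek(size - keep_bytes)
        tail = f.read(keep_bytes)  # grab the last keep_bytes bytes
        f.seek(0)
        f.write(tail)              # move them to the front of the file
        f.truncate(keep_bytes)     # drop everything after them

# Hypothetical usage -- adjust the path to your installation:
# keep_tail("/var/log/hive/hive.log", 20 * 1024 * 1024)
```

Because the file is rewritten rather than replaced, its inode stays the same, so processes holding it open keep writing to the same file, which is the property the question asks for.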
Is there any recommended way of dealing with growing log files? My log files are 41 GB and my primary data files are 77 GB. I only discovered that one of my purge tasks that deletes old data had been failing for over two months.
How can I reduce the size of my log files back to normal?
This is because a valid checkpoint has not been reached. Your backup strategy depends on several factors, but the reason for a large log file that you cannot shrink is that the database must remain in a consistent state.
The database has not been told when it can clean up the log file, so think of your large log file as potential changes that have yet to be fully committed to disk.
Log file: Factors that can delay log truncation - docs.microsoft.com
You can also check out Manage the Size of the Transaction Log File from the Docs as well.
I shrank the log files and it worked. Please see the code I used below:
/* Use the database */
USE [GiantLogs]
GO
/* Check the name and size of the transaction log file*/
/* The log is fileid=2, and usage says "log only" */
/* Bonus: make sure you do NOT have more than one log file, that does not help performance */
exec sp_helpfile;
GO
/* Shrink the log file */
/* The file size is stated in megabytes*/
DBCC SHRINKFILE (GiantLogs_log, 2048);
GO
/* Check if it worked. It won't always do what you want if the database is active */
/* You may need to wait for a log backup or more activity to get the log to shrink */
exec sp_helpfile;
GO
We are talking about an HDD with a single NTFS partition of about 650 GB.
We did the following:
deleted the partition scheme, i.e. the first 512 KB from the beginning of the disk;
overwrote the first 50 GB with \xff during a write test;
restored the partition scheme, i.e. loaded the MBR backup.
The question: how can we restore the NTFS in this case?
What we tried:
testdisk with a deep search, which did not find any NTFS.
Additional info:
NTFS Boot Sector |Master File Table | File System Data | Master File Table Copy
To prevent the MFT from becoming fragmented, NTFS reserves 12.5 percent of the volume by default for the exclusive use of the MFT.
12.5% of 650 GB is about 81 GB, so our 50 GB wipe did not cover the whole reserved zone; however, the boot sector and the start of the MFT sit at the beginning of the volume, so we still destroyed the data that is vital for NTFS recovery.
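A quick check of the reserved-zone arithmetic (sizes in GB; note the wiped region is actually smaller than the reserved MFT zone, so the loss comes from the structures located at the start of the volume rather than from covering the whole zone):

```python
# Back-of-the-envelope check of the sizes quoted above (in GB).
volume_gb = 650
wiped_gb = 50

mft_zone_gb = 0.125 * volume_gb  # default 12.5% MFT zone reservation
print(mft_zone_gb)               # 81.25

# The 50 GB wipe is smaller than the reserved zone, but the boot sector
# (offset 0) and the start of the MFT lie inside the wiped region.
assert wiped_gb < mft_zone_gb
```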