Memory keeps increasing but is not released in TDengine database - tdengine

I use TDengine database 3.0.1.6 to insert data (a normal insert, not an auto-create-table insert).
I wonder why my memory keeps increasing instead of stabilizing.
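For context, this is a minimal sketch of what I mean by a normal insert into a pre-created subtable, i.e. no auto-create in the INSERT statement itself; the table and column names are just placeholders, not my real schema:

-- super table and pre-created subtable (placeholder names)
CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location BINARY(64));
CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ('SanFrancisco');
-- "normal" insert into the existing subtable, no USING ... TAGS clause in the INSERT
INSERT INTO d1001 VALUES (NOW, 10.3, 219);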

Related

Fully delete data from Clickhouse DB to save disk space

In order to free up disk space, I've gotten rid of a number of old tables in a clickhouse database via
DROP TABLE mydb.mytable
However, disk usage did not change at all. In particular I expected /var/lib/clickhouse/data/store to shrink.
What am I missing here? Is there a PostgreSQL VACUUM equivalent in ClickHouse that I should be running?
Atomic databases keep dropped tables for 8 minutes.
You can use DROP TABLE mydb.mytable NO DELAY instead.
https://kb.altinity.com/engines/altinity-kb-atomic-database-engine
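For example, a sketch of the immediate drop (if I recall correctly, SYNC is the documented spelling on recent ClickHouse versions, with NO DELAY as the older alias):

-- removes the table's data immediately instead of after the 8-minute delay
DROP TABLE mydb.mytable NO DELAY;
-- equivalent form on newer versions:
DROP TABLE mydb.mytable SYNC;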

ALTER TABLE on an AWS Aurora cluster with a large table (700 GB)

I have a very large table (700 GB) in an Aurora database cluster. When I run an ALTER command to add new fields, it fails with the error 'Table is full'. This happens because the operation fills up the temporary space on the instance, which is sized according to the host instance size we selected.
In my case, free local storage is around 300 GB (db.r5.large). What is an alternative way of adding a field to the table? I am thinking of creating a new table with the updated field and then copying the data over from the original table.
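A rough sketch of the copy-and-swap idea I have in mind, assuming MySQL-compatible Aurora; the table and column names are placeholders:

-- build the new table with the extra field, then copy and swap
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ADD COLUMN new_field VARCHAR(255) NULL;
-- in practice the copy would probably need to be batched by primary-key range
INSERT INTO mytable_new SELECT t.*, NULL FROM mytable AS t;
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;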

Memory optimized table - Partially load data into memory

Is it possible for a memory optimized table in SQL server to have part of its data in memory and the rest on disk?
I have a requirement to load the last 3 months' data into memory; the rest of it isn't really required to be in memory because it will not be queried.
Is there any way to do this with memory optimized tables? If not, is there any alternate way to do this?
Use a view to union an in-memory table with a standard table (a partitioned view). Run a maintenance process to move data from the in-memory table to the standard table as needed.
You can add check-constraints to the standard table to help eliminate it from the query if that data will not be touched.
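A rough sketch of that partitioned-view shape, with illustrative table and column names; the date boundary is a fixed value that the maintenance job would move forward as it migrates rows:

-- recent rows live in the memory-optimized table
CREATE TABLE dbo.Orders_Hot (
    OrderId   INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    OrderDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- older rows live on disk; the check constraint lets the optimizer skip this table
CREATE TABLE dbo.Orders_Cold (
    OrderId   INT       NOT NULL PRIMARY KEY,
    OrderDate DATETIME2 NOT NULL,
    CONSTRAINT CK_Orders_Cold_Old CHECK (OrderDate < '20240101')
);
GO

CREATE VIEW dbo.Orders AS
    SELECT OrderId, OrderDate FROM dbo.Orders_Hot
    UNION ALL
    SELECT OrderId, OrderDate FROM dbo.Orders_Cold;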
The database can have some tables on disk and some in memory, but it is not possible to have part of one table's data on disk and part in memory.
"I have a requirement to load the last 3 months' data into memory, and the rest of it isn't really required to be in memory because it will not be queried"
Why not archive the table regularly, so it retains only three months of data, and optimize that for in-memory usage?

Tablespace is not freed after dropping tables (Oracle 11g)

I have an Oracle 11g database with a block size of 8192, so if I'm correct the maximum (smallfile) datafile size will be 32 GB (a datafile is limited to roughly 4 million blocks, and 4M × 8 KB ≈ 32 GB).
I have a huge table containing around 10 million records. Data in this table will be purged often. For purging we chose CTAS (create table as select) as the better option, since we are going to delete the greater portion of the data.
We drop the old table after the CTAS, but the old tables are not releasing their space for new tables. I understand that a tablespace has an AUTOEXTEND option but no AUTOSHRINK; still, the space occupied by the old tables should become available to new tables, which is not happening in this case.
I'm getting an exception saying:
ORA-01652: unable to extend temp segment by 8192 in tablespace
FYI, the only operations running are the CTAS and the drop of the old table; nothing else. The first time this works fine, but when the same operation is done a second time, the exception arises.
I tried adding an additional datafile to the tablespace, but after a few more purge operations on the table that one also fills up to 32 GB, and the issue continues.
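For reference, the purge cycle is essentially the following (table names and the retention predicate are placeholders):

-- CTAS keeps only the rows we want to retain
CREATE TABLE big_table_new AS
    SELECT * FROM big_table WHERE created_at >= SYSDATE - 90;

-- would adding PURGE here (to bypass the recycle bin) make the space reusable immediately?
DROP TABLE big_table;
RENAME big_table_new TO big_table;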

Fastest way to insert a huge amount of data into partitioned table SQL Server

I have a table that is 13 TB big (due to historical reasons).
I want to reload the data because I have corrupt and duplicate data in that table.
The question is what is the fastest way to load data to an empty, partitioned table (partitioned by month)?
My thoughts:
Fill the table by filling partition slices: I create two or three (depending on the I/O cap) temp tables and load the data via SSIS OLE DB (OPENROWSET BULK) into the three temp tables at once. Afterwards I switch the partitions in and go on with the next three.
Insert latest and oldest data at once via "normal" insert (I don't think the clustered index will like that)
???
So what would be the best and fastest way?
The biggest speed gain you can achieve is to drop all the indexes on the target table, load the data (probably using choice 1), and then rebuild all your indexes (clustered index first). Keep the source data and the target mdf/ndf on separate physical drives (RAID, hopefully).
You can script out the indexes by right-clicking an index and selecting Script Index as > CREATE To. Make sure that if the index is partitioned, the partitioning info is included.
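For choice 1, the switch step would look something like this, assuming each staging table sits on the same filegroup as the target partition and carries a check constraint matching that partition's boundaries (names and the partition number are placeholders):

-- move the loaded slice into the partitioned table as a metadata-only operation
ALTER TABLE dbo.Staging_2019_01
    SWITCH TO dbo.BigTable PARTITION 37;  -- 37 = the partition holding January 2019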
The largest gain would come from using bulk insert, which is designed to deal in sets rather than row-based operations. I don't know where you're importing the data from, but I imagine it would be a backup if you're dealing with corruption.
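A sketch of what that could look like if the source is exported to flat files; the path and options shown are illustrative, not a recommendation for your exact setup:

-- TABLOCK enables minimal logging under the simple or bulk-logged recovery model
BULK INSERT dbo.Staging_2019_01
FROM 'D:\export\bigtable_2019_01.dat'
WITH (TABLOCK, BATCHSIZE = 1000000, DATAFILETYPE = 'native');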
