Is TDengine suitable to store large text fields? - tdengine

I need to migrate my database from PostgreSQL to TDengine since most of the data is time-series data. But there is a text field in the table for storing large chunks of text.
May I know whether TDengine is suitable for storing large chunks of text with any data type?
What is the maximum length of nchar?

You can create a TDengine table with a binary column of up to 16374 bytes and an nchar column of up to 16374/4 = 4093 characters (each nchar character occupies 4 bytes):
create table tb1(ts timestamp, vv binary(16374), nv nchar(4093))
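A usage sketch for writing a row with large text values into that table (the literals are shortened here; now is TDengine's keyword for the current timestamp):
insert into tb1 values (now, 'large ascii text ...', 'large unicode text ...');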

Related

How are Oracle CLOBs handled internally?

I have a column (CLOB type) in a database which holds JSON strings. The size of these JSON strings can be quite variable. In the case where these strings are less than 4000 characters, I have heard that Oracle treats these CLOBs as VARCHAR internally. However, I am curious how exactly this process works. My interest is in the performance and in the ability to visually see the JSON being stored.
If a CLOB in the DB has 50 characters, does Oracle treat this single object as VARCHAR2(50)? Do all CLOBs stored in the column need to be less than 4000 characters for Oracle to treat the whole column as a VARCHAR? How does this all work?
Oracle does not always treat short CLOB values as VARCHAR2 values. It only does this if you allow it to do so, using the CLOB storage option of ENABLE STORAGE IN ROW. E.g.,
create table clob_test (
id number NOT NULL PRIMARY KEY,
v1 varchar2(60),
c1 clob
) lob(c1) store as (enable storage in row);
In this case, Oracle will store the data for C1 in the table blocks, right next to the values for ID and V1. It will do this as long as the length of the CLOB value is a little under 4000 bytes (i.e., 4000 minus the system control information that takes space in the CLOB).
In this case, the CLOB data will be read like a VARCHAR2 (e.g., the storage CHUNK size becomes irrelevant).
If the CLOB grows too big, Oracle will quietly move it out of the block into separate storage, like any big CLOB value.
If a CLOB in the DB has 50 characters does Oracle treat this single object as VARCHAR2(50)?
Basically, yes, if the CLOB was created with ENABLE STORAGE IN ROW. This option cannot be altered after the fact. I wouldn't count on Oracle treating the CLOB exactly like a VARCHAR2 in every respect. E.g., there is system control information stored in the in-row CLOB that is not stored in a VARCHAR2 column. But for many practical purposes, including performance, they're very similar.
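If you want to verify how an existing CLOB column is configured, the data dictionary exposes the in-row setting; a minimal sketch against the clob_test table above:
-- IN_ROW = 'YES' means small CLOB values are kept in the table blocks next to the other columns
select table_name, column_name, in_row, chunk
from user_lobs
where table_name = 'CLOB_TEST';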
Do all CLOBs stored in the column need to be less than 4000 characters for Oracle to treat the whole column as a VARCHAR?
No. It's on a row-by-row basis.
How does this all work?
I explained what I know as best I could. Oracle doesn't publish its internal algorithms.

PostgreSQL Clob datatype

PostgreSQL supports both clob and text data types for storing large texts. I've used the clob data type since we are migrating the DB from Oracle. I understand that the clob data type in PostgreSQL can only store up to 1 GB of text rather than the 4 GB in Oracle.
Since my text size is well below 1 GB, I am fine with using either of these types. So can I use the PostgreSQL clob datatype, or is there any advantage of the text datatype over clob?
Any help will be much appreciated. Thanks in advance.
The clob data type is unsupported in Postgres. However, it can easily be defined as a synonym for the text type:
create domain clob as text;
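A minimal usage sketch once the domain exists (the documents table and its columns are made up for illustration):
create table documents (
  id   bigserial primary key,
  body clob   -- behaves exactly like text
);
insert into documents (body) values ('... a large chunk of text ...');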

Hash key (MD5) generation for a JSON data (ntext) column in SQL Server

We have to generate a hash key column in a table for incremental load. The table has multiple JSON (ntext) columns whose contents can exceed 40,000 characters, and the length varies.
Currently we are converting them to varchar before hashing, but varchar has a length limitation. Could you please suggest an approach?
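One possible approach, sketched under the assumption of SQL Server 2016 or later, where HASHBYTES no longer limits its input to 8000 bytes; the table and column names below are made up, and the ntext columns are cast because HASHBYTES does not accept ntext directly:
-- MD5 over the full JSON content (SQL Server 2016+ removed the 8000-byte input limit of HASHBYTES)
SELECT HASHBYTES('MD5',
           CONCAT(CAST(JsonCol1 AS nvarchar(max)), '|',
                  CAST(JsonCol2 AS nvarchar(max)))) AS HashKey
FROM dbo.StagingTable;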

SQL Server: how to shrink table datatypes to the smallest possible space to overcome the maximum allowable table row size of 8060 bytes?

I am getting a similar error as here:
Creating or altering table 'MediaLibrary' failed because the minimum row size would be 14273, including 9 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.
after importing a CSV with SQL Server Management Studio. Number columns are interpreted as strings, so instead of an efficient (n)varchar(x) datatype, the datatype may be nchar(1000), taking a lot of unnecessary space.
How can I see the datatypes of the table imported to SQL Server and update them to take the smallest amount of space?
Create the table in the SQL database and define its columns with proper data types first, then import the CSV into that table.
If you have only string columns in the table and each column can hold 1000 characters, I think you need to split the table and join the pieces back together in a view.
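If the table has already been imported, a minimal sketch of inspecting and then shrinking the column types; only the table name 'MediaLibrary' comes from the error message above, the column names and target types are made up:
-- See what the import wizard created
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MediaLibrary';

-- Shrink an over-sized text column and convert a numeric column that was imported as text
ALTER TABLE MediaLibrary ALTER COLUMN Title nvarchar(200) NULL;
ALTER TABLE MediaLibrary ALTER COLUMN ReleaseYear int NULL;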

Tables with multiple varbinary columns

I have a table with a varbinary(max) column that has the FILESTREAM attribute. Now I need to store other binary data, but without the FILESTREAM attribute. If I add another VARBINARY(MAX) column to the same table, would there be any performance issue? Would I get better performance if I kept the FILESTREAM column in one table and created a separate table to store the other VARBINARY(MAX) data?
For your question: yes, you can.
FILESTREAM is a feature introduced in SQL Server 2008; in SQL Server 2012 a related feature called FileTable was added.
I have tested it. With this feature the database manages the files, and in my tests uploads ran at about 5 MB/s.
For your other column, if you do not enable FILESTREAM, the file is converted to binary and stored in the SQL Server data file.
With FILESTREAM enabled, the file is stored on the server's file system and managed by SQL Server.
For your second question I am not 100% sure, but using FILESTREAM should be more efficient; you do need to pay attention to backup and storage.
A year ago I implemented this in our system and I still have the schema; if you want, I can send it to you.
Sorry, my English is not good.
Your performance might be affected if you add another VARBINARY(MAX) column to the same table.
When the FILESTREAM attribute is set, SQL Server stores the BLOB data in the NT file system and keeps a pointer to the file in the table. This allows SQL Server to take advantage of the NTFS I/O streaming capabilities and reduces overhead on the SQL engine.
The MAX types (varchar(max), nvarchar(max) and varbinary(max)), and in your case the VARBINARY(MAX) datatype, cannot be stored internally as a contiguous memory area, since they can grow up to 2 GB, so they have to be handled through a streaming interface. They can affect performance considerably.
If you are sure your files are small, you can go for VARBINARY(MAX); otherwise, if they are larger than 2 GB, FILESTREAM is the best option for you.
And yes, I would suggest you create a separate table to store the other VARBINARY(MAX) data.
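A minimal sketch of the two-column layout being discussed, assuming the database already has a FILESTREAM filegroup; the table and column names are made up:
CREATE TABLE dbo.MediaFiles
(
    Id        uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    LargeFile varbinary(max) FILESTREAM NULL,  -- stored in the NTFS file system, managed by SQL Server
    Thumbnail varbinary(max) NULL              -- stored inside the database data files
);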
