How many columns does TDengine database support?

I think the TDengine database only supports 6 tags, but I want to know about non-tag columns: is there an unlimited number of non-tag columns?

I think you are using version 1.6 of TDengine, which only supports a maximum of 6 tags, but the current version of TDengine (2.4) supports a maximum of 128 tags and a total of 4096 columns.
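As a sketch of how those limits apply (the table and column names here are made up), a TDengine 2.x super table declares regular columns and tags separately; the total column count, tags included, must stay within 4096, with at most 128 of them tags:

```sql
-- Hypothetical super table; up to 4096 total columns, at most 128 tags.
CREATE STABLE meters (
    ts      TIMESTAMP,   -- first column must be the timestamp
    current FLOAT,
    voltage INT
) TAGS (
    location BINARY(64), -- tag columns: max 128 in TDengine 2.x
    group_id INT
);
```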

Related

Does TDengine database support BLOB data type?

Currently, I'm using TDengine in a project and want to have a BLOB column to store variable-length unstructured binary data (e.g. CSV files). Does TDengine support this kind of data type for columns or tags, like MySQL, which has TINYBLOB, MEDIUMBLOB, LONGBLOB, etc.?
TDengine supports the BINARY type, with a maximum length of 16 KB.
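For illustration (the table name and exact declared length are assumptions; check your version's documented cap), a BINARY column can hold a small blob-like payload:

```sql
-- Hypothetical table; the BINARY length must stay under the ~16 KB cap.
CREATE TABLE file_chunks (
    ts      TIMESTAMP,
    payload BINARY(16000)  -- raw bytes, e.g. a CSV fragment
);
```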

CLOB in Postgresql

I'm trying to migrate from Oracle to Postgresql database.
I have some clob column types at Oracle DB:
Here are my questions:
Is the TEXT type the equivalent of CLOB in Oracle DB?
Are there any risks in converting it directly to TEXT? I think the TEXT limit is 1 GB in PostgreSQL while the CLOB limit is 4 GB in Oracle.
Yes, TEXT is a good PostgreSQL equivalent to CLOB in Oracle, but the max size for TEXT is roughly 1 GB whereas the max for CLOB is 4 GB.
Almost: a CLOB can be larger (2 GB, if I'm right) than TEXT, which is "just" 1 GB.
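As a sketch of the migration itself (the table and column names are invented), the Oracle CLOB column maps directly to TEXT in the PostgreSQL DDL, with no length to declare:

```sql
-- Oracle:  body CLOB
-- PostgreSQL equivalent:
CREATE TABLE documents (
    id   BIGINT PRIMARY KEY,
    body TEXT  -- PostgreSQL TEXT, up to roughly 1 GB per value
);
```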

Interbase to Firebird Migration

Besides doing a data pump. Is there any other solutions for migrating?
Can you take a GBK and restore it to Firebird? Are there any other migration issues you may have run into?
Besides doing a data pump. Is there any other solutions for migrating?
No, this is the only solution.
Can you take a GBK and restore it to firebird?
No, the backup files are not compatible.
Is there any other migration issues you may have run into?
You can run into many issues and, as Mark Rotteveel says, the question is too broad. You can ask about the specific issue you have.
I can point you to a few issues:
Ambiguous field names between tables: Interbase allows you to select from two tables with the same field names and put those names in the WHERE clause without aliasing them.
Fields not contained in the aggregate: Interbase is buggy in checking fields when you GROUP BY.
ORDER BY in an aggregate, like SELECT COUNT(*) FROM table_name ORDER BY some_field: Interbase allows this, Firebird does not.
COUNT(*) returns Int64 in Firebird; in Interbase it is Integer.
Identifiers longer than 31 characters are not allowed in current Firebird; Interbase allows them but does not handle them correctly, as it only understands the first 31 characters.
If you use Delphi and IBX, you cannot use Boolean fields in Firebird, as IBX's handling of them is not compatible with Firebird.
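To illustrate the first issue in the list above (table and column names are made up), a query that Interbase tolerates with ambiguous field names fails in Firebird unless the references are qualified or aliased:

```sql
-- Accepted by Interbase, rejected as ambiguous by Firebird:
--   SELECT * FROM orders, customers WHERE id = 1;
-- Firebird requires the column reference to be qualified:
SELECT o.*
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.id = 1;
```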

SQL Server 2012 MDS - Limitations

I have started using SQL Server 2012 MDS for maintaining our huge customer base. My question is: does MDS support more than 10 million records? If so, how is that handled in Excel? Excel has a row limitation of 1 million.
Below is a quote from TechNet on the same topic; posting the relevant content here:
Create Entity: Creating an entity from an Excel table is dependent on both the number of records and the number of columns, and appears to be linear in its progression. The number of attributes supported is based on SQL table limits, while the number of members will be constrained by Excel worksheet row limitations of 1M rows.

Compressing a text field in Sql Server 2k8 R2

So I have an application that stores a lot of text in a text field in SQL Server 2008 R2. I'm adding about 5000 records a day, and that is going to grow. The amount of data in the field can be between 4 KB and 100 KB.
I can change the field to be a blob field and store a byte stream in there (eg. zipped text), but I'm wondering if there is any compression option that I can use in SQL Server 2k8 (perhaps something designed for storing a lot of text?) that I could leverage using SQL Server out of the box?
thanks
SQL Server 2008 R2 has three compression options:
row compression
page compression (implies row compression)
unicode compression
All three options apply only to data (rows), so none can help with large documents (BLOBs). So your best option is to compress/decompress in the client (e.g. ZIP). I would not take this option lightly: it means you're trading off queryability of the data.
In addition to row/page compression, you can use FILESTREAM to store the field on a compressed NTFS drive. But your files are not that big, so compression will be the better choice.
Note:
Regarding compatibility of FILESTREAM:
FILESTREAM feature is available with all versions of SQL Server 2008, including SQL Server Express.
SQL Server Express database has a 4 GB limitation; however this limitation does not apply to the FILESTREAM data stored in a SQL Server Express database.
However, you need Developer Edition or Enterprise Edition for row/page compression:
alter table pagevisit rebuild with (data_compression=page);
Msg 7738, Level 16, State 2, Line 2
Cannot enable compression for object 'PageVisit'. Only SQL Server Enterprise Edition supports compression.
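If you do have a qualifying edition and want to check whether compression is worth it before rebuilding (the table name here is just the one from the example above), SQL Server 2008+ provides an estimator procedure:

```sql
-- Estimate how much space PAGE compression would save on the table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'PageVisit',
    @index_id         = NULL,   -- all indexes
    @partition_number = NULL,   -- all partitions
    @data_compression = 'PAGE';
```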
