PostgreSQL supports both the clob and text data types for storing large texts. I've used the clob data type because we are migrating the database from Oracle. I understand that clob in PostgreSQL can only store up to 1 GB of text, rather than the 4 GB in Oracle.
Since my text size is well below 1 GB, I am fine with either type. So can I use the PostgreSQL clob data type, or is there any advantage of text over clob?
Any help will be much appreciated; thanks in advance.
The clob data type is unsupported in Postgres. However, it can be easily defined as a synonym to the text type:
create domain clob as text;
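With that domain in place, Oracle DDL that references clob can be kept as-is. A minimal sketch (the table and column names here are illustrative, not from the original question):

```sql
-- assumes the clob domain above has already been created
create table documents (
    id   serial primary key,
    body clob  -- behaves exactly like text
);

insert into documents (body) values ('a large block of text ...');
```

Since the domain is just an alias for text, all of PostgreSQL's string functions and operators work on such a column unchanged.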
Related
I need to migrate my database from PostgreSQL to TDengine, since most of the data is time-series. But there is a text field in one table for storing large chunks of text.
Is TDengine suitable for storing large chunks of text, and with which data type?
What is the maximum length of nchar?
You can create a TDengine table with a binary column of up to 16374 bytes, or an nchar column of up to 16374/4 = 4093 characters:
create table tb1(ts timestamp, vv binary(16374), tt nchar(4093));
I'm using VB.NET, Entity Framework 6, and SQL Server 2008 R2.
I have a case where I need to save very large text in a varchar field, but the number of characters is not specified and could be very large. (I know there's VARCHAR(MAX), but I also want to keep the data size of this field as low as possible.) Also, for several reasons I can't save the text in a file and keep only the filename in the database.
So I'm asking whether there is any way to keep this large text zipped in the database, and of course to unzip it when reading it back.
I'm looking for a solution that works with Entity Framework.
So if right now I'm using:
Myobject.mytext="..... My text...."
how can I transform this to store the zipped text (if that's possible)?
Thank you !
SQL Server does not support out-of-the-box compression of LOBs (VARCHAR(MAX) values longer than 8 KB), so the best approach is to handle compression and decompression on the client, before the data goes over the network to SQL Server.
So, when writing:
Change your column type to VARBINARY(MAX)
Compress your string (with for example https://stackoverflow.com/a/7343623/1202332)
Use entity framework to save the value in the DB
When reading:
Read column with compressed data (with EF)
Decompress your string
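On the schema side, the column change could be staged like this (the table and column names are illustrative, and existing rows would need a one-time migration pass through the application):

```sql
-- add a column for the client-compressed bytes alongside the old one
alter table Documents add MyTextCompressed varbinary(max) null;

-- once all rows have been re-saved through the application,
-- the original plain-text column can be dropped:
-- alter table Documents drop column MyText;
```

Adding a new column rather than altering the old one in place avoids an implicit varchar-to-varbinary conversion, which SQL Server does not allow.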
If I have a table with a varbinary(max) column that has the FILESTREAM attribute, and I now need to store other binary data without the FILESTREAM attribute: if I add another VARBINARY(MAX) column to the same table, would there be a performance issue? Would I gain performance by keeping the FILESTREAM column in one table and creating a separate table to store the other VARBINARY(MAX) data?
To your first question: yes, you can.
FILESTREAM is a feature introduced in SQL Server 2008; SQL Server 2012 added a related feature called FileTable.
I have tested it. With FILESTREAM the database manages the files, and upload speed was about 5 MB/s in my tests.
For your other column, if you do not enable FILESTREAM, the file is converted to binary and stored inside the SQL Server data file; with FILESTREAM enabled, the file is stored on the server's file system and managed by SQL Server.
To your second question: I am not 100% sure, but using FILESTREAM is generally more efficient; you just need to pay attention to backup and storage.
A year ago I implemented this in our system; I have the schema and can send it to you if you want.
Your performance might be affected if you add another VARBINARY(MAX) column to the same table.
When the FILESTREAM attribute is set, SQL Server stores the BLOB data in the NTFS file system and keeps a pointer to the file in the table. This allows SQL Server to take advantage of NTFS I/O streaming capabilities and reduces overhead on the SQL engine.
The MAX types (varchar(max), nvarchar(max) and varbinary(max)), and in your case VARBINARY(MAX), cannot be stored internally as a contiguous memory area, since they can grow up to 2 GB, so they have to be represented by a streaming interface, and that can affect performance significantly.
If you are sure your files are small, you can go with VARBINARY(MAX); otherwise, if they are large (and anything over 2 GB won't fit in VARBINARY(MAX) at all), FILESTREAM is the better option.
And yes, I would suggest you create a separate table to store the other VARBINARY(MAX) data.
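A sketch of the suggested two-table layout (all names are illustrative; note that a FILESTREAM column requires a ROWGUIDCOL uniqueidentifier column on the table, and the database must have a FILESTREAM filegroup configured):

```sql
-- large files, streamed through the NTFS file system
create table FileBlobs (
    Id   uniqueidentifier rowguidcol not null unique default newid(),
    Blob varbinary(max) filestream
);

-- smaller binary values, stored inline in the data file
create table InlineBlobs (
    Id   int identity primary key,
    Blob varbinary(max)
);
```

Keeping the non-FILESTREAM data in its own table means scans and backups of each table only touch the storage path they actually use.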
Can I use the SqlServer NTEXT data type in LightSwitch?
I know how to add extension business types, but they always subclass the existing LightSwitch base types. The LightSwitch base type 'String' maps to the SQL Server data type NVARCHAR(4000), which has a limit of 4000 characters (if I'm not mistaken).
I need more than 4000 characters!
Paul -- nvarchar(4000) is what LightSwitch defaults to, but you can change the properties of the field by clearing the Maximum Length value, which changes it to nvarchar(max). nvarchar(max) can store about 2 GB (much, much more than 4000 characters!).
Since NTEXT is deprecated, in order to use the proper data type (NVARCHAR(MAX)) in LightSwitch, create the table in SQL Server and then attach to it as an external table from LightSwitch. Reference.
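A minimal sketch of such an external table (the table and column names are illustrative):

```sql
create table Notes (
    Id   int identity primary key,
    Body nvarchar(max) not null  -- no 4000-character cap
);
```

After attaching to the database containing this table, LightSwitch should surface Body as an ordinary String property without a maximum length.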
I'm making a blog/article website, so I decided to use the NTEXT data type for blog and article contents, until I saw this:
Important
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
Fixed and variable-length data types for storing large non-Unicode and Unicode character and binary data. Unicode data uses the UNICODE UCS-2 character set.
(http://msdn.microsoft.com/en-us/library/ms187993.aspx)
I'm sure that blog and article contents are going to exceed the 4000-character limit of nvarchar(4000).
What data type should I use in this case?
You should use nvarchar(max)/varchar(max) - that's the current pair of large-text types.
With these types there is effectively no limit on field size (well, actually the limit is 2 GB, but I don't think you'll hit it).
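If you already have an ntext column, it can be converted in place. A hedged sketch (table and column names are illustrative):

```sql
-- ntext -> nvarchar(max); existing data is preserved
alter table Articles alter column Content nvarchar(max);

-- optional, commonly cited follow-up: rewriting each value lets
-- SQL Server move data that still fits in-row back into the row
update Articles set Content = Content;
```

The ALTER itself is a metadata-plus-conversion operation; the follow-up UPDATE is only worth it on tables where many values are small enough to live in-row.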
See MSDN for more details:
data types in SQL Server
nchar and nvarchar types