Compressing a text field in SQL Server 2k8 R2 - sql-server

So I have an application that stores a lot of text in a text field in SQL Server 2008 R2. I'm adding about 5,000 records a day, and that number is going to grow. The amount of data in the field can be anywhere between 4 KB and 100 KB.
I could change the field to a blob field and store a byte stream in there (e.g. zipped text), but I'm wondering if there is any compression option in SQL Server 2k8 (perhaps something designed for storing a lot of text?) that I could leverage out of the box.
Thanks

SQL Server 2008 R2 has three compression options:
row compression
page compression (implies row compression)
unicode compression
All three options apply only to row data, so none of them helps with large documents (BLOBs). Your best option is therefore to compress/decompress in the client (e.g. ZIP). I would not take that option lightly, though: it means trading away the queryability of the data.
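If you do go the client-side route, the storage side is just a binary column. A minimal sketch of such a table follows (all names are hypothetical); note that SQL Server 2016 later added built-in COMPRESS/DECOMPRESS functions, but on 2008 R2 the compression has to happen in the application:

-- Minimal sketch: store client-compressed text as a binary blob.
-- Table and column names are hypothetical.
CREATE TABLE dbo.CompressedDocument (
    DocumentId INT IDENTITY(1,1) PRIMARY KEY,
    CreatedAt  DATETIME NOT NULL DEFAULT GETDATE(),
    BodyZipped VARBINARY(MAX) NOT NULL -- zip/gzip bytes produced by the application
);

Once the text lives in BodyZipped you lose LIKE and full-text search over it, which is exactly the queryability trade-off mentioned above.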

In addition to row/page compression, you can use FILESTREAM to store the field on a compressed NTFS volume. But your values are not that big, so compression will be the better choice.

Note:
Regarding compatibility of FILESTREAM:
FILESTREAM feature is available with all versions of SQL Server 2008, including SQL Server Express.
SQL Server Express database has a 4 GB limitation; however this limitation does not apply to the FILESTREAM data stored in a SQL Server Express database.
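For reference, a FILESTREAM column requires a ROWGUIDCOL uniqueidentifier column and a FILESTREAM filegroup. A minimal sketch, assuming the filegroup already exists and the feature is enabled on the instance (names are hypothetical):

-- Minimal FILESTREAM sketch; assumes the database already has a
-- FILESTREAM filegroup and the feature is enabled on the instance.
CREATE TABLE dbo.DocumentFS (
    DocumentId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Body       VARBINARY(MAX) FILESTREAM NULL
);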
However, you need Developer Edition or Enterprise Edition for row/page compression:
ALTER TABLE PageVisit REBUILD WITH (DATA_COMPRESSION = PAGE);

Msg 7738, Level 16, State 2, Line 2
Cannot enable compression for object 'PageVisit'. Only SQL Server Enterprise Edition supports compression.

Related

SQL Express - Could I partition an existing table to solve the 10 GB file size limitation?

One of our customers reached the 10 GB size limitation of the SQL Server Express edition. Two tables contain too much training data. Could we partition tables on SQL Server Express edition? Is there any help link for this issue?
We have designed a solution to refactor the tables and code, but partitioning the tables sounds much easier.
No, man - unfortunately there is nothing you can do about the limit itself. The database size limits are:
SQL Express 2005 & 2008 (R1): 4 GB database size limit
SQL Express 2008 R2: 10 GB database size limit
SQL Express 2012: 10 GB database size limit
SQL Express 2014: 10 GB database size limit
SQL Express 2016: 10 GB database size limit
I tried before to find a workaround because I wanted to use T-SQL syntax to manipulate some data, and did not succeed. Also, even if you find a way, there is always the possibility of violating the SQL Server license.
Use another database (there are open source solutions) or upgrade to standard edition.
Just create another database, move your table to the newly created database, and do a cross-database query. You can even create a synonym for it so everything stays transparent to your front-end code; see the sketch below.
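A minimal sketch of that approach (the database and table names are hypothetical):

-- Hypothetical names throughout: move the big table into a second
-- Express database, then alias it back under its old name.
CREATE DATABASE OverflowDb;
GO
SELECT * INTO OverflowDb.dbo.TrainingData
FROM dbo.TrainingData;          -- copy the rows across
DROP TABLE dbo.TrainingData;    -- remove the original
GO
CREATE SYNONYM dbo.TrainingData -- the old name now resolves to the new home
    FOR OverflowDb.dbo.TrainingData;

The size limit applies per database, so splitting the data this way keeps each database under the 10 GB cap.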

Oracle to SQL Server export

I have to move data from an existing Oracle database to which I don't have direct access. The data is about 11 tables, 5 GB each. The database admin can export the tables to .csv or .xml. The problem with CSV is that some data is textual with lots of special characters. The problem with XML is that the markup is an overhead which will significantly increase the size of the files. The DBA is not competent enough to provide a working and neat solution; he uses Toad as the database tool. Can you provide some ideas on how to perform such a migration in the best possible way?
Please refer to the steps below to migrate the data from Oracle to SQL Server.
Recommended Migration Process
To successfully migrate objects and data from Oracle databases to SQL Server, Azure SQL DB, or Azure SQL Data Warehouse, use the following process:
1. Create a new SSMA project.
2. After you create the project, you can set project conversion, migration, and type mapping options. For information about project settings, see Setting Project Options (OracleToSQL). For information about how to customize data type mappings, see Mapping Oracle and SQL Server Data Types (OracleToSQL).
3. Connect to the Oracle database server.
4. Connect to an instance of SQL Server.
5. Map Oracle database schemas to SQL Server database schemas.
6. Optionally, create assessment reports to assess database objects for conversion and to estimate the conversion time.
7. Convert Oracle database schemas into SQL Server schemas.
8. Load the converted database objects into SQL Server. You can do this in one of the following ways:
   * Save a script and run it in SQL Server.
   * Synchronize the database objects.
9. Migrate data to SQL Server.
10. If necessary, update database applications.
For more details:
https://learn.microsoft.com/en-us/sql/ssma/oracle/migrating-oracle-databases-to-sql-server-oracletosql?view=sql-server-2017
After the admin exports the data into CSV, try to convert it into a character set which preserves all the special characters.
Then try to follow the steps from this link: link, it might work.
If after the import there are still broken special characters, try to convert them manually.
Get the DBA to export the tables using the ASCII delimiters which were designed for this purpose:
Row delimiter: Decimal 30 / 0x1E
Column delimiter: Decimal 31 / 0x1F
Then you can use BCP (or any other similar product) to upload the data to SQL Server.
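If you load with BULK INSERT instead of BCP, the equivalent options look roughly like the sketch below. The file path, staging table, and code page are assumptions; hexadecimal terminators and CODEPAGE = '65001' (UTF-8) need a reasonably recent SQL Server version:

-- Sketch only: load one exported file delimited with the ASCII
-- unit/record separators. Path and table name are hypothetical.
BULK INSERT dbo.StagingTable
FROM 'C:\export\table1.dat'
WITH (
    FIELDTERMINATOR = '0x1F', -- ASCII 31, unit separator (columns)
    ROWTERMINATOR   = '0x1E', -- ASCII 30, record separator (rows)
    CODEPAGE        = '65001' -- UTF-8, to keep the special characters intact
);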

Speed up export from SQL server 2008 R2 to Microsoft Access

I need to export 700,000 records from a SQL Server 2008 R2 table to a Microsoft Access database in 2002-2003 format. I am using the SQL Server Import and Export Wizard. This currently takes over 2.5 hours. Because this all takes place on a high-security server, I am limited in my choice of tools. I could export to a text file, but that loses some of the formatting.
I need a copy of one table from the database in either Access or Excel with formatting preserved. Exporting to text/CSV is not available, as some of the fields may contain commas. Also, I cannot use Excel as the target, because the export does not support more than 64K rows.
Are there any ways to speed this up?
Using Access it should be a snap:
Link the table via ODBC, create an empty table in Access as it should appear.
Then run an append query using the linked table as the source, writing the data to the local table. The query can also rename the fields (alias) and perform minor modifications to suit your needs; a sketch follows below.
If you don't have Access (Office) 2016 installed, I believe a 30-day evaluation version is available for download.
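The append query itself is plain SQL. A minimal sketch, assuming the ODBC-linked table shows up as dbo_PageVisit and the pre-created local table is LocalPageVisit (all names hypothetical):

-- Access append query sketch: pull everything from the ODBC-linked
-- SQL Server table into the local Access table. Names are hypothetical.
INSERT INTO LocalPageVisit (VisitId, VisitDate, PageUrl)
SELECT VisitId, VisitDate, PageUrl
FROM dbo_PageVisit;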

Is it possible to compress data in SQL Server using zlib?

Is it possible to compress data (programmatically) in SQL Server using zlib compression? It's just one column in a specific table that needs compressing.
I'm not a SQL Server person myself - I've tried to find out whether anything is available, but haven't had much joy.
Thanks

How can I migrate a database from SQL Server 2008 to SQL Server 2000

I am replacing an Access application with a web app, but the client is using SQL Server 2000, and I am using SQL Server 2008.
So, I have the database redesigned, with foreign keys, but now I need to get the data on the client's system.
Part of the problem is that they have images that are over 32k, so osql failed as the command buffer filled up.
I should be able to use osql to import the new schema at least, and perhaps all of the data except for the images.
The Export wizard just wouldn't work, even though I tried both the SQL Native Client driver and the OLE DB SQL driver.
Flat files seem like a bad choice, as I don't know whether they can handle the images.
So, what is a good way to copy a 330M database from 2008 -> 2000?
Not sure about performance or time needed, but you could always try a tool like:
Red-Gate SQL Compare / SQL Data Compare
Apex SQL Diff / SQL Data Diff
These allow you to compare both the schema and the data of two databases, and to create synchronization scripts or synchronize online.
Marc
I set the image column to null, which reduced the size of the insert statements.
This enabled me to import the data into the target database.
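The answer doesn't say how the images were moved afterwards. One hypothetical way to backfill them is a linked-server update run on the 2000 target once the rest of the data is in place (the linked server, database, table, column, and key names below are all made up):

-- Sketch only: backfill the image column after the scripted import.
-- Assumes a linked server named SQL2008SRV pointing at the 2008 source;
-- every other name here is hypothetical.
UPDATE t
SET    t.Photo = s.Photo
FROM   dbo.Products AS t
JOIN   SQL2008SRV.SourceDb.dbo.Products AS s
       ON t.ProductId = s.ProductId;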
