Is using Filestream a good idea? - sql-server

I have SQL Server 2012 and started looking into Filestream as a way to link "attachments" (> 1 GB) such as Excel documents and PDF files to database table records. While I have been successful in finding the "hello world" T-SQL examples that allow me to do some rudimentary tasks (enable Filestream, create table with Filestream column, insert row, etc.) I also encounter "beware" statements.
Is Filestream really as temperamental and full of "gotchas" as the various forum posts suggest, or is it straightforward, with at least predictable quirks?
Thank you for any insight

I would recommend using FileTable, which is implemented on top of FILESTREAM and is a really neat enhancement. It preserves the functionality of FILESTREAM but, in contrast to plain FILESTREAM, it can be configured to allow access to the files outside the database engine as well. For example, you can let IIS serve those files directly. It has been in production at a big telecom company since 2013, flawless!
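As a rough illustration, here is a minimal T-SQL sketch (database, directory, and table names are placeholders) of enabling non-transacted access and creating a FileTable, assuming FILESTREAM is already enabled on the instance:

    -- Allow the file system (and e.g. IIS) to reach FILESTREAM data directly.
    ALTER DATABASE MyDb
        SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'MyDb');
    GO

    -- Rows in a FileTable show up as files under the share SQL Server exposes.
    CREATE TABLE dbo.Attachments AS FILETABLE
        WITH (FILETABLE_DIRECTORY = 'Attachments');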
P.S. This is maybe more suitable as a comment, but I don't have the reputation to write one :)

Related

Storing and retrieving any sort of file extension in a SQL Server database

I am writing an asp.net web application that stores APPLICANTS data in a SQL Server database.
An applicant might post a name, address, telephone number and a file.
The file might be of any extension, including .docx for a resume, or .jpg and .pdf for photos,
or even an Excel file.
Is it possible to store all these file extensions in my database?
Or will that be lengthy?
Please help
Good question! Personally, I would use FILESTREAM in your case, and here's why.
In SQL Server, BLOBs can be standard varbinary(max) data that stores the data in tables, or FILESTREAM varbinary(max) objects that store the data in the file system. The size and use of the data determines whether you should use database storage or file system storage. If the following conditions are true, you should consider using FILESTREAM:
- Objects that are being stored are, on average, larger than 1 MB.
- Fast read access is important.
- You are developing applications that use a middle tier for application logic.
For smaller objects, storing varbinary(max) BLOBs in the database often provides better streaming performance.
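For reference, a minimal sketch of what a table with a FILESTREAM column looks like (all names are hypothetical; it assumes the database already has a FILESTREAM filegroup):

    -- FILESTREAM tables require a unique ROWGUIDCOL column.
    CREATE TABLE dbo.ApplicantFiles
    (
        Id       UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        FileName NVARCHAR(260) NOT NULL,
        Content  VARBINARY(MAX) FILESTREAM NULL
    );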
You can read up on FILESTREAM in the SQL Server documentation.
Also consider using it in conjunction with FILETABLE.
Finally, it's worth looking at a .NET C# example of how to read from a FILESTREAM column.
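The T-SQL half of such an example usually follows this pattern (a sketch; the table and column come from the hypothetical DDL above): the C# side opens a System.Data.SqlTypes.SqlFileStream using the path and transaction context returned by the query.

    -- The transaction context is only valid while the transaction is open,
    -- so the client streams the file before committing.
    BEGIN TRANSACTION;

    SELECT Content.PathName() AS FilePath,
           GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
    FROM   dbo.ApplicantFiles
    WHERE  Id = @Id;

    -- ... client reads via SqlFileStream(FilePath, TxContext, ...) ...

    COMMIT TRANSACTION;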
Please note, FILESTREAM is available in SQL Server starting with the 2008 version.
Hope it helps!

T-SQL File Stream Enable Database

I am using T-SQL and Microsoft Management Studio 2008 R2. I want to create a database in which I can store video files.
After a Google search and some reading I have learned that there is an option to use a "FILESTREAM-enabled database". It was said that this kind of database should be used only when your files are larger than 2 MB. I want to store video files, so I think this is suitable for my goals.
Please give me more information about the main differences between using a BLOB, using a FILESTREAM-enabled database, or just storing the files in a given directory and saving only the URL in a database table column.
Thanks in advance.
Filestream was an interesting change when it came in, for me; the bit that surprised me was that Full-Text Search was taken out of the operating system because it caused issues, but Filestream put files back into the operating system because BLOBs in the database caused issues.
Using Filestream is basically transparent to your application, and it even backs the files up as if they were in the database - that's the big benefit (or cost) in the save-in-database vs. save-a-pointer-in-the-database choice.
You can insert files the same way as you did before, and you can read them back in SQL in exactly the same way. The difference and benefit is that SQL Server can take advantage of the Windows system cache for reading the files, saving its own resources so that other queries run quicker.
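For example, loading a file from disk is an ordinary varbinary(max) insert; a hedged sketch with made-up table and file names:

    -- OPENROWSET ... SINGLE_BLOB reads the whole file as one varbinary(max).
    INSERT INTO dbo.Videos (Id, FileName, VideoData)
    SELECT NEWID(), N'intro.mp4', BulkColumn
    FROM OPENROWSET(BULK N'C:\videos\intro.mp4', SINGLE_BLOB) AS f;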
"Please, give me more information about the main difference in using BLOB and FileStream Enable database"
The feature you are asking for is called "FILESTREAM", not "FileStream Enable".
Some blogs are also around, like http://blogs.msdn.com/b/rdoherty/archive/2007/10/12/getting-traction-with-sql-server-2008-filestream.aspx
At least try reading the documentation before running around and having other people do your basic groundwork.

How to build a big and complex database in SQL - THE EASY WAY?

I have installed Oracle XE. I build a small database every day to practice from the command prompt, but now I want to have more. I want a bigger database with a lot of different data to practice on and make exercises with.
So, is it possible to get a big data file from somewhere and upload it to an XE database?
You can't get "big" data into Oracle Express Edition, as it is limited to 4 GB (10g) or 11 GB (11g).
That said, there are public datasets available. Personally, I like the FAA data on registered aircraft owners/operators.
As you are practicing with Oracle, perhaps a good solution (which will also generate exactly the data you need) would be to write your own stored procedures to generate your data in a loop (or similar construct).
You could then generate as much as you like whilst also practicing your handling of large datasets and writing of efficient PL/SQL and SQL code.
This way your data will match your current database structure too without having to build a new database matching whichever dataset you download from the web.
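As an illustrative sketch (the table and columns are made up), a PL/SQL loop along these lines can generate as much practice data as you want:

    -- Generate a million rows, committing in batches to keep undo small.
    BEGIN
      FOR i IN 1 .. 1000000 LOOP
        INSERT INTO practice_data (id, label, created_at)
        VALUES (i, 'row ' || i, SYSDATE - MOD(i, 365));
        IF MOD(i, 10000) = 0 THEN
          COMMIT;
        END IF;
      END LOOP;
      COMMIT;
    END;
    /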
IIRC there are sample schemas, such as HR, that can be enabled; see the Oracle documentation on the sample schemas.

Transferring data between different DBMSs

I would like to transfer a whole database from Informix to Oracle. We have an application which works on both databases; one of our customers is moving from Informix to Oracle and needs the whole database transferred (the structure is the same).
We also often need to transfer data between Oracle/MSSQL/Informix, sometimes only one table and not the whole database.
Does anybody know about any good program which does this kind of job?
The Pentaho Data Integration ETL tools (also known under their former name "Kettle") are available as open source, and cover cross-database migration and many other use cases.
From their data sheet:
Common Use Cases
- Data warehouse population with built-in support for slowly changing dimensions and junk dimensions
- Export of database(s) to text file(s) or other databases
- Import of data into databases, ranging from text files to Excel sheets
- Data migration between database applications
- ...
A list of input / output data formats can be found in the accepted answer of this question: Does anybody know the list of Pentaho Data Integration (Kettle) connectors list?
It supports all databases with a JDBC driver, which means most of them.
Check this question of mine, it includes some very good ideas: Searching for (freeware) database migration tool
You could give the Oracle Migration Workbench a try; see http://download.oracle.com/docs/html/B15858_01/toc.htm. If you want to read Informix data into Oracle on a regular basis, using the Heterogeneous Services might be a better option. Check for hs4odbc or dg4odbc, depending on the Oracle release you have.
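As a sketch of the Heterogeneous Services route (all names are hypothetical, and it assumes the dg4odbc gateway and a matching TNS entry are already configured), a database link lets you query Informix straight from Oracle:

    -- Create a link through the ODBC gateway, then query remote tables.
    CREATE DATABASE LINK informix_link
      CONNECT TO "ifxuser" IDENTIFIED BY "secret"
      USING 'DG4ODBC_INFORMIX';

    SELECT COUNT(*) FROM customers@informix_link;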
I hope this helps,
Ronald.
I have done this in the past and it is not a trivial task. We ended up writing each table out to a pipe-delimited flat file and reloading each table into Oracle with Oracle SQL*Loader. There were a ton of Perl scripts to scrub the source data and shell scripts to automate the process as much as possible and run things in parallel.
Gotchas that can come up:
1. Pick a delimiter that is as unique as possible.
2. Try to find data types that match the Informix ones as closely as possible, e.g. date vs. timestamp.
3. Try to get the data as clean as possible prior to dumping out the flat files.
4. HS will most likely be too slow.
This was done years ago. You may want to investigate GoldenGate (now owned by Oracle), which may help with the process (GoldenGate did not exist when I did it).
Another idea is to use an ETL tool to read Informix and dump the data into Oracle (Informatica comes to mind).
Good luck :)
sqlldr - Oracle's import utility
Here's what I did to transfer 50 TB of data from MySQL to Oracle: generated CSV files from MySQL and used the sqlldr utility in Oracle to load all the data from the files into the Oracle DB. It is the fastest way to import data. I researched this for a few weeks and ran a lot of benchmark test cases, and sqlldr is hands down the best and fastest way to import into Oracle.
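As a rough sketch (file, table, and column names are all made up), the control file for such a load looks like this; direct-path mode (direct=true on the sqlldr command line) is what makes it fast:

    -- SQL*Loader control file: append a comma-delimited dump into a table.
    LOAD DATA
    INFILE 'orders.csv'
    APPEND
    INTO TABLE staging_orders
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (order_id, customer_id, amount,
     created_at DATE "YYYY-MM-DD HH24:MI:SS")

It would be invoked with something like sqlldr userid=user/pass control=orders.ctl direct=true.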

Compress data from Database

Quick q, could be a silly one given my (lack of) findings on Google so far.
I have a Database. In this database is a Table with some Data. The Data is a large BLOB but can't be compressed (for reasons out of my control).
I have an Application that talks to this Database. I would really like to be able to ensure that the Data is compressed during transit.
As I understand it, the Database Provider would handle compression etc.
Is this the case? Are there settings on common ones, say SQL Server to enable compression?
For SQL Server, I found a Connect entry on this, but no: I don't think TDS is currently compressed. You could (although I don't like it much) use SQLCLR to compress it in .NET code, but it could have too much overhead.
I know it isn't an option in this case (from the question), but it is usually preferable to store BLOBs the way you want to get them. So if you want to get them compressed, store them compressed. SQL isn't a good tool for manipulating binary ;-p Such a strategy also means that you aren't using vendor-specific features - just the ability to store an opaque BLOB.
If your database access layer does not provide compression, you can set up a VPN link between the database server and the application host. Most serious VPN solutions compress data in transit; OpenVPN is a simple and easy-to-set-up solution for quickly creating such a tunnel. It probably won't be as efficient as native compression, but it's a possible solution - and you get encryption thrown in for free :).
SQL Server 2008 is the first version of SQL Server to natively support compression of backups. Pre-2008, you need to do it with third-party products.
