BLOB or XML for saving XSD in SQL Server - sql-server

I have an application where I'm about to save an XSD or XML file to the DB. I'm using SQL Server 2008 R2 (or later) with Entity Framework. I have a form where the user will upload an XSD or XML file. The contents of this file should be stored in the DB.
I will never have to look at the specific contents of the XML/XSD once it's stored in the DB, but it will be downloaded later on.
As I see it, I have two approaches:
Approach A.
Upload file.
Read contents of file.
Save contents of file as XML.
Approach B.
Upload file.
Save file as BLOB.
What's the better option here, or is there a (better) third option as well?

Approach C.
Save the XSD file to shared access storage.
Store the path in SQL Server.
I will never have to look at the specific contents of the XML/XSD
So why put it in the DB at all?
it will be downloaded later on
Much easier to retrieve it from a file share than a DB.
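Approach C can be sketched in a few lines. This is a minimal illustration, not production code: the share root here is a temp directory standing in for a real share (in practice it would be a UNC path such as a hypothetical `\\fileserver\schemas`), and the database insert is only indicated in a comment.

```python
import os
import tempfile

# Stand-in for the shared access storage root; in practice a mounted
# share or UNC path (hypothetical example: r"\\fileserver\schemas").
share_root = tempfile.mkdtemp()

def save_to_share(filename: str, content: bytes) -> str:
    """Write the uploaded file to the share; return the path to store in the DB."""
    path = os.path.join(share_root, filename)
    with open(path, "wb") as f:
        f.write(content)
    # The DB row would then hold only this path, e.g.:
    #   INSERT INTO Schemas (Name, FilePath) VALUES (?, ?)
    return path

stored_path = save_to_share("order.xsd", b"<xs:schema/>")

# Later, "downloading" is just reading the file back from the share.
with open(stored_path, "rb") as f:
    assert f.read() == b"<xs:schema/>"
```

The table and column names in the comment are hypothetical; the point is simply that the DB stores a path, and retrieval is a plain file read.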

How to put a SQL query output as an Excel file into Azure Data Lake in SSIS

So basically, I want to get all this done in the SSIS package itself.
SQL Query output -> Convert to Excel file without any local files -> Upload to Azure Blob.
The "Azure Blob Destination" only has options for CSV, etc., but nothing to produce Excel directly.
Converting to Excel requires a local file, as the Excel file destination only offers file system options.
Is there any way to get the output directly as an Excel file instead of CSV, and then upload it to Azure Blob?
Any help would be appreciated. Thanks.
In SSIS, we cannot put a SQL query output into Azure Data Lake as an Excel file without using a local Excel file.
Not only does SSIS not support this, we can't find any other tools or scripts that can achieve it.
Hope this helps.
Alright. So basically, as Leon Yue said, there is no other way except going through a local file.
The workaround is to use an Azure file share mounted to your local drive instead.
That way SSIS thinks it's writing locally, while the file is actually uploaded directly to the Azure file share of the storage account; from there you can move it to Blob Storage.

Importing Data Using Unix to Oracle DB

I want to import data on a weekly basis to an Oracle DB.
I'm receiving this data at a specific location on a server in EDR format. For now I'm uploading it manually using the Toad for Oracle upload wizard. Is there any way to upload it automatically using Unix or some kind of scripting?
I would suggest trying out SQL*Loader through a shell script.
Code:
sqlldr userid=username/password@server control=loader.ctl
There are two important files:
a. Your data file to be uploaded.
b. The control file, which states the table to be inserted into, the delimiter character, the column fields, etc.; basically, it describes how to load the data.
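An illustrative control file might look like the following. The data file name, table name, and column list are all hypothetical; adjust them to your own EDR layout and delimiter.

```
-- loader.ctl (illustrative; file, table, and column names are hypothetical)
LOAD DATA
INFILE 'weekly_data.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(id, event_date, amount)
```

Wrapped in a shell script, the `sqlldr` call above can then be scheduled with cron for the weekly load.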
Oracle Reference

How to handle an Excel file after data exported to SQL Server?

I have thousands of Excel files, and the data stored in them needs to be imported into SQL Server. The files range in size from 250 KB to 50 MB.
Currently, I store the files in a server location and import each file's contents into SQL Server. Once the data is imported, the physical file remains in the system for future reference.
But the files now occupy more than 25 GB of our server space, and I don't want to delete the source files.
Can anyone help me sort out this problem?
I'm planning to convert the source files into bytes and store those bytes in SQL Server, but I don't know if that is the right way of handling it.
CSV is the best way to keep your files. You should try converting your .xls files to .csv. The same thing happened to me, and I resolved it with this method.
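A sketch of the conversion step, using only the standard library: it assumes the sheet's rows have already been extracted (actually reading an .xls would require a third-party library such as openpyxl or xlrd, not shown here), and simply writes the raw values as CSV, which is typically far smaller than the original workbook.

```python
import csv
import os
import tempfile

# Rows as they might come out of one worksheet (sample data).
rows = [
    ["id", "name", "amount"],
    [1, "alpha", 10.5],
    [2, "beta", 20.0],
]

# Write the raw values out as CSV; no formatting, formulas, or styles
# are kept, which is where the space saving comes from.
csv_path = os.path.join(tempfile.mkdtemp(), "export.csv")
with open(csv_path, "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Reading it back: note that CSV is untyped, so every cell is a string.
with open(csv_path, newline="") as f:
    read_back = list(csv.reader(f))
assert read_back[0] == ["id", "name", "amount"]
```

The trade-off is that CSV keeps only values: formulas, multiple sheets, and formatting are lost, so this only fits if the files are kept purely for the data.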

Retrieve a file from a SQL Filestream column using vb.net

To be honest, I don't have any idea how filestream works. It's my first time using and experimenting with it.
So, I was able to store data in a filestream column, but I have no idea how to retrieve it or what it should look like after retrieving it.
Is it possible to just click a button and have the file in the filestream column open? For example, if I stored a Microsoft Word document in the database, the file would open in Microsoft Word, or if I stored a PDF, the file would open in a PDF reader. Is that possible?
I'm sorry if this is a dumb question. hehe. Thank you.
FILESTREAM, from the developer's point of view, looks no different than a normal varbinary(max) column. This means you will be storing binary large objects (BLOBs). SQL Server will then store those BLOBs as files on the file system, rather than storing them directly in the database.
You can treat it exactly as a varbinary column on the .NET side. Take whatever data you want to store, turn it into a byte array, and save it to the DB.
When you retrieve it, it will again be a byte array. You will need to do something with it in order for it to be useful (write it to a file locally, process it and display it, etc.).
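The retrieve-and-open flow can be sketched as follows (shown in Python for brevity; the VB.NET version is the same idea with a byte array and `File.WriteAllBytes`). The blob literal here is a stand-in for what a SELECT against the varbinary(max)/FILESTREAM column would return.

```python
import os
import tempfile

# Stand-in for the byte array returned by querying the FILESTREAM column.
blob = b"%PDF-1.4 fake document bytes"
file_name = "report.pdf"  # you'd normally keep the name/extension in another column

# Write the bytes to a file with the right extension...
path = os.path.join(tempfile.mkdtemp(), file_name)
with open(path, "wb") as f:
    f.write(blob)

# ...then hand the file to the OS. On Windows, os.startfile(path) launches
# the default viewer (Word for .docx, a PDF reader for .pdf, etc.);
# commented out so the sketch stays portable.
# os.startfile(path)
assert open(path, "rb").read() == blob
```

So yes, the "click a button and it opens" behavior is possible; your code just has to materialize the bytes as a temp file first and ask the shell to open it.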
Side note, you can also access the FILESTREAM BLOBs using Win32 APIs if you enable it. See this link for more info.

Save Access Report as PDF/Binary

I am using an Access 2007 (VBA, ADP) front end with a SQL Server 2005 back end. I have a report that I want to save as a PDF and store as a binary file in SQL Server.
Report Opened
Report Closed - Closed Event Triggered
Report Saved as PDF and uploaded into SQL Server table as Binary File
Is this possible and how would I achieve this?
There are different opinions if it's a good idea to store binary files in database tables or not. Some say it's ok, some prefer to save the files in the file system and only store the location of the file in the DB.
I'm one of those who say it's OK: we have a >440 GB SQL Server 2005 database in which we store PDF files and images. It runs perfectly well and we don't have any problems with it (for example with speed, which is usually the main argument of the "file system" people).
If you don't know how to save the files in the database, google "GetChunk" and "AppendChunk" and you will find examples like this one.
Concerning database design:
It's best if you make two tables: one with only an ID and the blob field (where the PDF files are stored), and one with the ID and additional fields for filtering.
If you do it this way, all the searching and filtering happens on the small table; only when you know the ID of the file you want to load do you hit the big table, exactly once, to load the file.
We do it like this and like I said before - the database contains nearly 450 GB of files, and we have no speed problems at all.
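The two-table design above can be demonstrated end to end with sqlite3 standing in for SQL Server (the table and column names are made up for the example): a small metadata table for searching, and a separate blob table hit exactly once, by ID.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Small table: ID plus the fields used for searching and filtering.
con.execute("CREATE TABLE doc_meta (id INTEGER PRIMARY KEY, filename TEXT, category TEXT)")
# Big table: ID plus the blob, and nothing else.
con.execute("CREATE TABLE doc_blob (id INTEGER PRIMARY KEY, content BLOB)")

pdf_bytes = b"%PDF-1.4 fake report"
con.execute("INSERT INTO doc_meta VALUES (1, 'report.pdf', 'invoices')")
con.execute("INSERT INTO doc_blob VALUES (1, ?)", (pdf_bytes,))

# All filtering happens on the small table...
(doc_id,) = con.execute(
    "SELECT id FROM doc_meta WHERE category = 'invoices'"
).fetchone()

# ...and the big table is touched exactly once, by primary key.
(content,) = con.execute(
    "SELECT content FROM doc_blob WHERE id = ?", (doc_id,)
).fetchone()
assert content == pdf_bytes
```

The same split works in SQL Server with an `int`/`uniqueidentifier` key and a `varbinary(max)` column for the blob.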
The easiest way to do this is to save the report out to disk as a PDF (if you don't know how to do that, I recommend this thread on the MSDN forums). After that, you'll need to use ADO to import the file, via OLE embedding, into a binary field. I'm rusty on that, so I can't give specifics, and Google searching has been iffy so far.
I'd recommend against storing PDF files in Access databases -- Jet has a strict limit to database size, and PDFs can fill up that limit if you're not careful. A better bet is to use OLE linking to the file, and retrieving it from disk each time the user asks for it.
The last bit of advice is to use an ObjectFrame to show the PDF on disk, which MSDN covers very well here.