I am using an Access 2007 (VBA, ADP) front end with a SQL Server 2005 back end. I have a report that I want to save as a PDF and store as a binary file in SQL Server.
Report Opened
Report Closed - Closed Event Triggered
Report Saved as PDF and uploaded into SQL Server table as Binary File
Is this possible and how would I achieve this?
There are different opinions on whether it's a good idea to store binary files in database tables. Some say it's OK; some prefer to save the files in the file system and only store the location of the file in the DB.
I'm one of those who say it's OK - we have a >440 GB SQL Server 2005 database in which we store PDF files and images. It runs perfectly well and we don't have any problems with it (for example with speed... that's usually one of the main arguments of the "file system" people).
If you don't know how to save the files in the database, google "GetChunk" and "AppendChunk" and you will find examples like this one.
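If the PDF has already been written to disk, another option (instead of the client-side GetChunk/AppendChunk approach) is to let SQL Server pull the file in directly with OPENROWSET(BULK ..., SINGLE_BLOB). A minimal sketch, assuming a made-up table dbo.ReportPdfs with a VARBINARY(MAX) column and a path the SQL Server service account can read:
-- Reads the whole file into a single varbinary value and inserts it as one row
INSERT INTO dbo.ReportPdfs (PdfData)
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\Temp\MyReport.pdf', SINGLE_BLOB) AS src;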
Concerning database design:
It's best if you make two tables: one with only an ID and the blob field (where the PDF files are stored) and one with the ID and additional fields for filtering.
If you do it this way, all the searching and filtering happens on the small table, and only when you know the ID of the file you want to load do you hit the big table, exactly one time, to load the file.
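A minimal sketch of that two-table layout (the table and column names here are just examples):
-- Small table: the ID plus the fields you search and filter on
CREATE TABLE dbo.DocumentInfo (
    DocumentID INT IDENTITY(1,1) PRIMARY KEY,
    FileName   NVARCHAR(260) NOT NULL,
    CreatedAt  DATETIME NOT NULL DEFAULT GETDATE()
);

-- Big table: the same ID plus the blob, and nothing else
CREATE TABLE dbo.DocumentData (
    DocumentID INT NOT NULL PRIMARY KEY
        REFERENCES dbo.DocumentInfo (DocumentID),
    PdfData    VARBINARY(MAX) NOT NULL
);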
We do it like this and like I said before - the database contains nearly 450 GB of files, and we have no speed problems at all.
The easiest way to do this is to save the report out to disk as a PDF (if you don't know how to do that, I recommend this thread on the MSDN forums). After that, you'll need to use ADO to import the file using OLE embedding into a binary type of field. I'm rusty on that, so I can't give specifics, but Google searching has been iffy so far.
I'd recommend against storing PDF files in Access databases -- Jet has a strict limit on database size, and PDFs can fill up that limit if you're not careful. A better bet is to use OLE linking to the file and retrieve it from disk each time the user asks for it.
The last bit of advice is to use an ObjectFrame to show the PDF on disk, which MSDN covers very well here.
I was trying to solve a small integration through Logic Apps in Azure.
I have a stored procedure that selects data from a database and outputs XML as the result.
The thing is that the XML result is about 50k rows and pretty large.
I made an on-premises gateway connection to run the stored procedure through Logic Apps, but when I get the result, not only does it split the big XML, it also cuts the whole result off after about 15k rows.
I know I could use blobs, which means I would need to export the SQL XML to files first, which also means I would need to use BCP with something like PowerShell to export the XML to a file. But I'm trying to skip most of the on-premises steps; I want this solution to be as cloud-based as possible.
Anyone have a solution for this?
Ok, So...
I have boiled it down to two possible explanations for why this problem occurs.
The first one is that I noticed I got this error when trying to open the XML in SQL Server:
'~vs8D51.xml' is too large to open with XML editor. The maximum file size is '10' MB. Please update the registry key 'HKCU\Software\Microsoft\SQL Server Management Studio\13.0_Config\XmlEditor\MaxFileSizeSupportedByLanguageService' to change the maximum size.
Which makes me think that the stored procedure call in Azure Logic Apps doesn't fetch a result larger than 10 MB because of that restriction in SQL Server.
I have tried to change it in regedit, but every time I restart SQL Server Management Studio it resets to 10 MB.
I have no idea if this is a correct assessment of the problem, but it's a thought...
Second, a colleague told me he had a similar problem with a file from an FTP.
He said that in some weird way the logic app doesn't get all the data because of some kind of timeout that happens in the background...
He had to fetch the file content in split pieces and somehow stream it through the workflow of the logic app and then recreate the whole thing and save it to a file on the other end of the integration.
That made me think of trying out this: SQL Pagination for bulk data transfer with Logic Apps
It works, but not quite how I want it to work. I can stream the data and save it to blob storage, but what I get is rows from the table itself, not split pieces of the whole XML built from that same data...
Anyone know of a way to maybe iterate/paginate through a whole XML result in SQL in a good way, with root tags and all?
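One idea (an untested sketch, not something I have run against Logic Apps): build the whole XML once as NVARCHAR(MAX) and have the procedure hand it back in fixed-size character chunks, so the caller can page through the document itself rather than the rows. The table name, chunk size, and @ChunkIndex parameter below are all made up:
DECLARE @ChunkIndex INT = 1;         -- which piece to return (would be a procedure parameter)
DECLARE @ChunkSize  INT = 500000;    -- characters per piece

DECLARE @Xml NVARCHAR(MAX) = CONVERT(NVARCHAR(MAX),
    (SELECT * FROM dbo.MyTable FOR XML PATH('Row'), ROOT('Rows')));

SELECT SUBSTRING(@Xml, (@ChunkIndex - 1) * @ChunkSize + 1, @ChunkSize) AS XmlChunk,
       CEILING(LEN(@Xml) * 1.0 / @ChunkSize) AS TotalChunks;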
In SSMS 18 in order to have the MaxFileSizeSupportedByLanguageService value persist I needed to edit that key's value in the C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\CommonExtensions\Platform\Shell\Microsoft.XmlEditor.pkgdef file.
To be honest, I don't have any idea how FILESTREAM works. It's my first time using and experimenting with it.
So, I was able to store data in a filestream column, but I have no idea how to retrieve it or how it should look after retrieving it.
Is it possible to just click a button and have the file in the FILESTREAM column open? For example, if I stored a Word document in the database, the file would open in Microsoft Word, or if I stored a PDF, it would open in a PDF reader. Is that possible?
I'm sorry if this is a dumb question. hehe. Thank you.
FILESTREAM, from the developer's point of view, looks no different than a normal varbinary(max) column. This means you will be storing binary large objects (BLOBs). SQL Server will then store those BLOBs as files on the file system, rather than storing them directly in the database.
You can treat it exactly as a varbinary column on the .NET side. Take whatever data you want to store, turn it into a byte array, and save it to the DB.
When you retrieve it, it will again be a byte array. You will need to do something with it in order for it to be useful (write it to a file locally, process it and display it, etc.).
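For illustration, a minimal T-SQL sketch (it assumes the database already has FILESTREAM enabled and a FILESTREAM filegroup; the table and names are made up):
-- FILESTREAM requires a ROWGUIDCOL column with a unique constraint
CREATE TABLE dbo.StoredFiles (
    FileId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName NVARCHAR(260) NOT NULL,
    FileData VARBINARY(MAX) FILESTREAM NULL
);

-- Saving: the byte array from your application is just the varbinary value
INSERT INTO dbo.StoredFiles (FileName, FileData)
VALUES (N'example.pdf', 0x255044462D312E34);   -- a few literal bytes, for illustration only

-- Retrieving: you get the bytes back and decide what to do with them
SELECT FileName, FileData
FROM dbo.StoredFiles
WHERE FileName = N'example.pdf';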
Side note, you can also access the FILESTREAM BLOBs using Win32 APIs if you enable it. See this link for more info.
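On the T-SQL side, that streaming access needs two things from the server: the file's path and a transaction token. A rough sketch against the hypothetical table above:
BEGIN TRANSACTION;

SELECT FileData.PathName() AS FilePath,                      -- logical path of the BLOB on the file system
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext     -- token the Win32/SqlFileStream API needs
FROM dbo.StoredFiles
WHERE FileName = N'example.pdf';

-- The client opens the file with OpenSqlFilestream / SqlFileStream using these two values,
-- reads or writes the stream, and then the transaction is committed.
COMMIT;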
The National Weather Service's Climate Prediction Center maintains recent weather data from about 1400 weather stations across the United States. The data for the previous day can always be found at the following address:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/cdus/prcp_temp_tables/dly_glob1.txt
In an ambitious attempt to store weather data for future reference, I want to store this data by row using SQL Server 2012. Five years ago a similar question was asked, and this answer mentioned the BULK INSERT command. I do not have access to this option.
Is there an option which allows for direct import of a web hosted text file which does not use the BULK statement? I do not want to save the file as I plan on automating this process and having it run daily direct to the server.
Update: I have found another option in Ad Hoc Distributed Queries. This option is also unavailable to me based on the nature of the databases in question.
Why do you NOT have access to Bulk Insert? I can't think of a reason that would be disabled on your version of SQL Server.
I can think of a few ways of doing the work.
#1) Record a macro, using excel, to do everything from the data import, to the parsing of the data sets, and then to saving as a CSV file. I just did it; very easy. Then, use BULK INSERT to get the data from the CSV to SQL Server.
#2) Record a macro, using excel, to do everything from the data import, to the parsing of the data sets. Then use a VBA script to send the data to SQL Server. You will find several ideas from the link below.
http://www.excel-sql-server.com/excel-sql-server-import-export-using-vba.htm#Excel%20Data%20Export%20to%20SQL%20Server%20using%20ADO
#3) You could actually use Python or R to get the data from the web. Both have excellent HTML parsing packages. Then, as mentioned in point #1 above, save the data as a CSV (using Python or R) and BULK INSERT it into SQL Server (a minimal BULK INSERT sketch follows the R example below).
R is probably a bit off topic here, but still a viable option. I just did it to test my idea and everything is done in just two lines of code!! How efficient is that!!
# Read the daily climate report straight from the CPC URL
X <- read.csv(url("http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/cdus/prcp_temp_tables/dly_glob1.txt"))
# Write it back out as a local CSV, ready to be loaded into SQL Server
write.csv(X, file = "C:\\Users\\rshuell001\\Desktop\\foo.csv")
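And the BULK INSERT step mentioned in #1 and #3 would look roughly like this (the staging table, column layout, and file path are made up; write.csv adds a header row, hence FIRSTROW = 2):
BULK INSERT dbo.DailyClimateStaging
FROM 'C:\Users\rshuell001\Desktop\foo.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2    -- skip the header row written by write.csv
);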
We have an application that stores Word and PDF documents in a share on a server. I'm looking into the possibility of storing these as BLOBs in the associated Microsoft SQL database instead, which seems like it's probably a good idea.
Separately, an idea which I'm investigating is the possibility of allowing users to easily view all of the documents in the share associated with a case (let's imagine they're grouped into folders by case) as one continuous stream on a tablet, as if they were all one big PDF file.
I think I've worked out how to do the latter, running a web service to convert the Word documents to PDFs and then concatenate them and the extant PDFs. But that's if we continue to store the documents as files in an NTFS share. What if we stored the documents as BLOBs in MSSQL instead?
Is there a way to concatenate BLOB data so that for every, say, 10 BLOB records (which might represent Word or PDF files), I could create an 11th record which was a concatenation of the other 10 as one giant PDF?
SQL Server BLOBs are not an effective way of storing files. SQL Server 2008 brought about a better mechanism for this called FILESTREAM (http://technet.microsoft.com/en-us/library/gg471497.aspx), which can store the files directly on the file system while still being managed by SQL Server.
As for the files, you would not be able to simply concatenate the PDF binaries to form one continuous file, but there are several libraries you could use to merge them, potentially on the fly. That would also remove the need to store the concatenated document.
I'm building a huge inventory and sales management program on top of Dynamics CRM 2011. I've got a lot done but I'm kinda stuck on one part:
Images are stored in the database encoded as base64 with a MimeType column. I'm wondering how I might extract those images programmatically on a schedule to be sent as part of a data transfer to synchronize another DB.
I have a SQL Server Agent job that exports a view I created. I'm thinking about writing a program that will take that resultant CSV and use it to get a list of products we need to pull images for, and then it queries the DB and saves the files as say productserial-picnum.ext
Is that the best way to do that? Is there an easier way to pull the images out of the DB and into files?
I'm hoping it will be able to export only images that have changed, based on, say, a Last Modified column or something.
I don't know C# at all; I know VB, PHP, and JavaScript well enough to do some damage, though.
You should be able to achieve this in T-SQL itself. In outline:
OPEN a cursor over the qualifying records (WHERE the Last Modified value is newer than the last export, etc.)
For each record:
    Select the binary data into @BinaryData (VARBINARY(MAX))
    Convert @BinaryData to base64 text in @VarcharData; something like the below will work:
    SET @VarcharData = CAST(N'' AS XML).value('xs:base64Binary(xs:hexBinary(sql:variable("@BinaryData")))', 'VARCHAR(MAX)')
    Write @VarcharData to a file (on the server, or a network drive, if the Agent account is configured to write out)
    Close the file
Next record
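The "write to file" step is the one piece T-SQL doesn't do natively; a common workaround is to shell out to bcp with queryout (this assumes xp_cmdshell is enabled, which is a security trade-off, and the table, column, server, and path names below are made up):
DECLARE @Cmd VARCHAR(4000);
SET @Cmd = 'bcp "SELECT ImageBase64 FROM MyDb.dbo.ProductImages WHERE ImageId = 42" '
         + 'queryout "C:\Export\productserial-1.txt" -c -T -S MyServer';
-- Runs bcp on the server; -c writes plain character data, -T uses a trusted connection
EXEC master..xp_cmdshell @Cmd;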