I'm building a huge inventory and sales management program on top of Dynamics CRM 2011. I've got a lot done but I'm kinda stuck on one part:
Images are stored in the database encoded as base64 with a MimeType column. I'm wondering how I might extract those images programmatically on a schedule to be sent as part of a data transfer to synchronize another DB.
I have a SQL Server Agent job that exports a view I created. I'm thinking about writing a program that takes the resulting CSV, uses it to get a list of products we need to pull images for, then queries the DB and saves the files as, say, productserial-picnum.ext.
Is that the best way to do that? Is there an easier way to pull the images out of the DB and into files?
I'm hoping it can export only images that have changed, based on something like a Last Modified column.
I don't know C# at all; I know enough VB, PHP, and JavaScript to do some damage, though.
You should be able to achieve this in T-SQL itself:
OPEN a cursor over the qualifying records (WHERE LastModified > @LastExportTime, etc.)
For each record:
Select the binary data into @BinaryData
Convert @BinaryData to @VarcharData (something like the line below will work)
SET @VarcharData = CAST(N'' AS XML).value('xs:base64Binary(xs:hexBinary(sql:variable("@BinaryData")))', 'VARCHAR(MAX)')
Write @VarcharData to a file (on the server, or a network drive if the agent account is configured to write out)
Close the file
Next record
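Here is a minimal sketch of that outline, going the other direction since the question says the images are already stored as base64 text: decode each row back to varbinary and write the raw bytes to disk through OLE Automation. The table and column names (dbo.ProductImage, Body, ProductSerial, PicNum, MimeType, ModifiedOn) are placeholders for your CRM view, and it assumes the 'Ole Automation Procedures' option is enabled and the SQL Server service account can write to the export folder:

DECLARE @ExportPath NVARCHAR(260) = N'C:\ImageExport\';
DECLARE @Base64 VARCHAR(MAX), @Binary VARBINARY(MAX), @FileName NVARCHAR(260), @FullPath NVARCHAR(520);
DECLARE @ole INT;

DECLARE img CURSOR LOCAL FAST_FORWARD FOR
    SELECT Body,
           CAST(ProductSerial AS VARCHAR(50)) + '-' + CAST(PicNum AS VARCHAR(10)) + '.' +
           CASE MimeType WHEN 'image/jpeg' THEN 'jpg'
                         WHEN 'image/png'  THEN 'png'
                         ELSE 'bin' END
    FROM dbo.ProductImage
    WHERE ModifiedOn > DATEADD(DAY, -1, GETUTCDATE());   -- only rows changed since the last run

OPEN img;
FETCH NEXT FROM img INTO @Base64, @FileName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Decode the base64 text back into raw image bytes.
    SET @Binary = CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@Base64"))', 'VARBINARY(MAX)');
    SET @FullPath = @ExportPath + @FileName;

    -- Write the bytes to disk with an ADODB.Stream object.
    EXEC sp_OACreate 'ADODB.Stream', @ole OUTPUT;
    EXEC sp_OASetProperty @ole, 'Type', 1;                    -- 1 = binary stream
    EXEC sp_OAMethod @ole, 'Open';
    EXEC sp_OAMethod @ole, 'Write', NULL, @Binary;
    EXEC sp_OAMethod @ole, 'SaveToFile', NULL, @FullPath, 2;  -- 2 = overwrite existing file
    EXEC sp_OAMethod @ole, 'Close';
    EXEC sp_OADestroy @ole;

    FETCH NEXT FROM img INTO @Base64, @FileName;
END

CLOSE img;
DEALLOCATE img;

Scheduled from a SQL Server Agent job step, this would replace the CSV round trip entirely.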
Related
I was trying to solve a smaller integration through logic apps in Azure.
I have a Stored Procedure that selects data from a database and outputs XML as result.
The thing is that the Xml result is about 50k rows and pretty large.
I made an on-premises gateway connection to run the Stored Procedure through Logic Apps, but when I get the result, not only does it split the big XML, it also cuts the whole result off after about 15k rows.
I know I could use blobs, which means I'd need to export the SQL XML to files first, which in turn means using BCP with something like PowerShell to export the XML to a file. But I'm trying to skip most of the on-premises steps; I want this solution to be as cloud-based as possible.
Anyone have a solution for this?
Ok, So...
I have boiled it down to two possible explanations for why this problem occurs.
The first is that I noticed I get this error when trying to open the XML in SQL Server:
'~vs8D51.xml' is too large to open with XML editor. The maximum file size is '10' MB. Please update the registry key 'HKCU\Software\Microsoft\SQL Server Management Studio\13.0_Config\XmlEditor\MaxFileSizeSupportedByLanguageService' to change the maximum size.
Which makes me think that the stored procedure action in Azure Logic Apps doesn't fetch a result larger than 10 MB because of that constraint in SQL Server.
I have tried to change it in regedit, but every time I restart SQL Server Management Studio it resets to 10 MB.
I have no idea if this is a correct assessment of the problem, but it's a thought...
Second, a colleague told me he had a similar problem with a file from an FTP server.
He said that in some weird way the logic app doesn't get all the data because of some kind of timeout that happens in the background...
He had to fetch the file content in split pieces and somehow stream it through the workflow of the logic app and then recreate the whole thing and save it to a file on the other end of the integration.
That made me think of trying out this: SQL Pagination for bulk data transfer with Logic Apps
It works, but not quite how I want it to. I can stream the data and save it to blobs, but what I get are results from the table itself, not split pieces of the whole XML of that same data...
Does anyone know of a way to iterate/paginate through a whole XML result in SQL in a good way, with root tags and all?
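One way to do this is to page the rows server-side and wrap each page in its own root element, so every chunk is a complete XML document on its own. A rough sketch, assuming SQL Server 2012 or later; the table and column names (dbo.SourceTable, Id, Col1, Col2) are placeholders:

CREATE PROCEDURE dbo.GetXmlPage
    @PageNumber INT,           -- 1-based page index
    @PageSize   INT = 5000     -- rows per XML chunk
AS
BEGIN
    SET NOCOUNT ON;

    SELECT (
        SELECT Id, Col1, Col2
        FROM dbo.SourceTable
        ORDER BY Id
        OFFSET (@PageNumber - 1) * @PageSize ROWS
        FETCH NEXT @PageSize ROWS ONLY
        FOR XML PATH('row'), ROOT('rows'), TYPE
    ) AS XmlChunk;
END

A Logic App Until loop can then call the procedure with an increasing @PageNumber, write each chunk to blob storage, and stop when an empty page comes back; each individual result stays well under the connector's size limit.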
In SSMS 18, in order to have the MaxFileSizeSupportedByLanguageService value persist, I needed to edit that key's value in the C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\CommonExtensions\Platform\Shell\Microsoft.XmlEditor.pkgdef file.
The National Weather Service's Climate Prediction Center maintains recent weather data from about 1,400 weather stations across the United States. The data for the previous day can always be found at the following address:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/cdus/prcp_temp_tables/dly_glob1.txt
In an ambitious attempt to store weather data for future reference, I want to store this data by row using SQL Server 2012. Five years ago a similar question was asked, and this answer mentioned the BULK INSERT command. I do not have access to this option.
Is there an option that allows direct import of a web-hosted text file without using the BULK statement? I do not want to save the file, as I plan on automating this process and having it run daily, directly on the server.
Update: I have found another option in Ad Hoc Distributed Queries. This option is also unavailable to me based on the nature of the databases in question.
Why do you NOT have access to Bulk Insert? I can't think of a reason that would be disabled on your version of SQL Server.
I can think of a couple ways of doing the work.
#1) Record a macro, using Excel, to do everything from the data import to the parsing of the data sets, and then to saving as a CSV file. I just did it; very easy. Then use BULK INSERT to get the data from the CSV into SQL Server.
#2) Record a macro, using Excel, to do everything from the data import to the parsing of the data sets. Then use a VBA script to send the data to SQL Server. You will find several ideas at the link below.
http://www.excel-sql-server.com/excel-sql-server-import-export-using-vba.htm#Excel%20Data%20Export%20to%20SQL%20Server%20using%20ADO
#3) You could actually use Python or R to get the data from the web. Both have excellent HTML parsing packages. Then, as mentioned in point #1 above, save the data as a CSV (using Python or R) and BULK INSERT into SQL Server.
R is probably a bit off topic here, but still a viable option. I just did it to test my idea and everything is done in just two lines of code!! How efficient is that!!
X <- read.csv(url("http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/cdus/prcp_temp_tables/dly_glob1.txt"))
write.csv(X, file = "C:\\Users\\rshuell001\\Desktop\\foo.csv")
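For the load step mentioned in #1 and #3, once the CSV is on disk the BULK INSERT is a one-liner. A sketch, assuming a staging table dbo.DailyWeatherStaging whose columns match the file, and a CSV written without quoting or row names (e.g. write.csv(..., row.names = FALSE, quote = FALSE)):

BULK INSERT dbo.DailyWeatherStaging
FROM 'C:\Users\rshuell001\Desktop\foo.csv'
WITH (
    FIRSTROW = 2,              -- skip the header row written by write.csv
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK
);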
I want to link an invoice template in Excel to a SQL Server database.
The invoice currently contains only a few very basic fields like user/customer/date/item-id/description/quantity/Total etc.
In future more fields will be required.
What's the easiest way to store all that invoice data, in real time, to a SQL Server database when the user presses Enter in the invoice?
How many tables will I need to create in SQL Server?
The end users are not tech savvy at all, I need to deploy this solution without any technical requirements from them.
Thank you.
The best way to do this is to create an SSIS package that sucks up the Excel spreadsheet and captures the data, say, once a day. The Excel file will need to be in the same location, say a network folder, and the structure will also have to stay the same, meaning the columns and their names, once set up, cannot change. Otherwise, it's best to have something like a web front end or a fat client (basically build an app in Visual Studio or something) that allows data input directly into SQL Server.
Check out this link to learn about it:
http://knowlton-group.com/using-ssis-to-export-data-to-excel/
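Whichever route you take, the destination usually ends up as at least two tables, an invoice header and an invoice-line table, which is also the simplest answer to the "how many tables" question. A rough sketch with illustrative names only; real invoices normally grow customer and product tables as well:

CREATE TABLE dbo.Invoice (
    InvoiceId    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    InvoiceDate  DATE          NOT NULL,
    CreatedBy    NVARCHAR(50)  NOT NULL
);

CREATE TABLE dbo.InvoiceLine (
    InvoiceLineId INT IDENTITY(1,1) PRIMARY KEY,
    InvoiceId     INT           NOT NULL REFERENCES dbo.Invoice(InvoiceId),
    ItemId        NVARCHAR(50)  NOT NULL,
    Description   NVARCHAR(200) NULL,
    Quantity      DECIMAL(18,3) NOT NULL,
    LineTotal     DECIMAL(18,2) NOT NULL
);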
I am trying to reconcile data from a website and a database programmatically. Right now my process is manual. I download data from the website, download data from my database, and reconcile using an Excel vlookup. Within Excel, I am only reconciling 1 date for many items.
I'd like to programmatically reconcile the data for multiple dates and multiple items. The problem is that I have to download the data from the website manually. I have heard of people doing "outer joins" and "table joins" but I do not know where to begin. Is this something that I code in VBA or notepad?
Generally I do this by bulk inserting the website data into a staging table and then writing SELECT statements to join that table to my data in the database. You may need to do some cleanup first to be able to match the records if they are stored differently.
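A sketch of what that comparison can look like, with placeholder names (dbo.WebsiteStaging for the bulk-inserted website extract, dbo.Sales for the database side); a FULL OUTER JOIN surfaces rows that exist on only one side or disagree on the amount:

SELECT COALESCE(w.ItemId, d.ItemId)     AS ItemId,
       COALESCE(w.SaleDate, d.SaleDate) AS SaleDate,
       w.Amount                         AS WebsiteAmount,
       d.Amount                         AS DatabaseAmount
FROM dbo.WebsiteStaging AS w
FULL OUTER JOIN dbo.Sales AS d
       ON  d.ItemId   = w.ItemId
       AND d.SaleDate = w.SaleDate
WHERE w.ItemId IS NULL                  -- only in the database
   OR d.ItemId IS NULL                  -- only on the website
   OR w.Amount <> d.Amount;             -- present in both but different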
Python is a scripting language. http://www.python.org
There are tools to allow you to read Excel spreadsheets. For example:
http://michaelangela.wordpress.com/2008/07/06/python-excel-file-reader/
You can also use Python to talk to your database server.
http://pymssql.sourceforge.net/
http://www.oracle.com/technology/pub/articles/devlin-python-oracle.html
http://sourceforge.net/projects/pydb2/
Probably the easiest way to automate this is to save the Excel files you get to disk, and use Python to read them, comparing that data with what is in your database.
This will not be a trivial project, but it is very flexible and straightforward. Trying to do it all in SQL will be, IMHO, a recipe for frustration, especially if you are new to SQL.
Alternatively:
You could also do this by using VBA to read in your Excel files and generate SQL INSERT statements that are compatible with your DB schema. Then use SQL to compare them.
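For the "use SQL to compare them" step, once both sides are in tables, EXCEPT is a compact option; the table and column names here are placeholders:

-- Rows that came from the spreadsheet import but are missing from the database.
SELECT ItemId, SaleDate, Amount FROM dbo.ExcelImport
EXCEPT
SELECT ItemId, SaleDate, Amount FROM dbo.Sales;

-- Swap the operands to see rows the database has that the spreadsheet does not.
SELECT ItemId, SaleDate, Amount FROM dbo.Sales
EXCEPT
SELECT ItemId, SaleDate, Amount FROM dbo.ExcelImport;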
I am using an Access 2007 (VBA, ADP) front end with a SQL Server 2005 back end. I have a report that I want to save a copy of as a PDF and store as a binary file in SQL Server.
Report Opened
Report Closed - Closed Event Triggered
Report Saved as PDF and uploaded into SQL Server table as Binary File
Is this possible and how would I achieve this?
There are different opinions if it's a good idea to store binary files in database tables or not. Some say it's ok, some prefer to save the files in the file system and only store the location of the file in the DB.
I'm one of those who say it's ok - we have a >440 GB SQL Server 2005 database in which we store PDF files and images. It runs perfectly well and we don't have any problems with it (for example with speed...that's usually one main argument of the "file system" people).
If you don't know how to save the files in the database, google "GetChunk" and "AppendChunk" and you will find examples like this one.
Concerning database design:
It's best if you make two tables: one with only an ID and the blob field (where the PDF files are stored), and one with the ID and additional fields for filtering.
If you do it this way, all the searching and filtering happens on the small table, and only when you know the ID of the file you want to load do you hit the big table, exactly once, to load the file.
We do it like this and like I said before - the database contains nearly 450 GB of files, and we have no speed problems at all.
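For what it's worth, the two-table layout described above can be as simple as this; the names and extra metadata columns are only illustrative:

CREATE TABLE dbo.ReportFileInfo (
    FileId    INT IDENTITY(1,1) PRIMARY KEY,
    FileName  NVARCHAR(260) NOT NULL,
    MimeType  VARCHAR(100)  NOT NULL DEFAULT 'application/pdf',
    CreatedOn DATETIME      NOT NULL DEFAULT GETDATE()
);

CREATE TABLE dbo.ReportFileData (
    FileId      INT            NOT NULL PRIMARY KEY
                REFERENCES dbo.ReportFileInfo(FileId),
    FileContent VARBINARY(MAX) NOT NULL   -- the PDF itself
);

Searching and filtering only ever touch dbo.ReportFileInfo; dbo.ReportFileData is read once, by FileId, when the file is actually needed.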
The easiest way to do this is to save the report out to disk as a PDF (if you don't know how to do that, I recommend this thread on the MSDN forums). After that, you'll need to use ADO to import the file using OLE embedding into a binary type of field. I'm rusty on that, so I can't give specifics, but Google searching has been iffy so far.
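If the import can run on the server side instead of through ADO, another option is OPENROWSET with SINGLE_BLOB, which reads the saved PDF straight into a varbinary(max) column. The path and table names below are placeholders, and the file must be readable by the SQL Server service account:

INSERT INTO dbo.ReportFileData (FileId, FileContent)
SELECT 1, blob.BulkColumn
FROM OPENROWSET(BULK N'C:\Reports\Invoice123.pdf', SINGLE_BLOB) AS blob;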
I'd recommend against storing PDF files in Access databases -- Jet has a strict limit on database size, and PDFs can fill up that limit if you're not careful. A better bet is to use OLE linking to the file, and retrieve it from disk each time the user asks for it.
The last bit of advice is to use an ObjectFrame to show the PDF on disk, which MSDN covers very well here.