I have a Visual Studio SQL Server database project with an Images table. I need to insert a default image in a post-deployment script, because this image will be used for all entities with an empty image. How can I store this image in the project, and how can I access it in the script?
Is inserting this image from the database project good practice? I also have a Windows Service project and an ASP.NET MVC project with Entity Framework. Or should I insert this image in some kind of initial verification in each of those two projects?
There are a couple of ways to do this.
Given a table
create table testImage
(id int,
myImage varbinary(max));
You can insert an image from a file with something like:
INSERT INTO testImage (id, myImage)
SELECT 1, bulkcolumn
FROM openrowset(BULK 'D:\x\dZLx1.png', single_blob) as myImage
There are a couple of potential headaches here: you need to keep track of the path to the image in your project somehow, and I think there are a few security-related scenarios where OPENROWSET doesn't work anyway.
It might be more reliable to do this once on your desktop, then SELECT the value out again to use in an insert statement such as
IF NOT EXISTS (SELECT * FROM testImage WHERE id = 2)
BEGIN
INSERT INTO testImage VALUES
(2,0x89504E470D0A1A0.....)
END
(full script here: https://gist.github.com/gavincampbell/a25431dffd3555563a052c297a32415e)
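To get the hex literal in the first place, you can run the OPENROWSET insert once on your desktop, then select the value back out and copy it from the results grid:
SELECT myImage FROM testImage WHERE id = 1;
--copy the 0x... value from the grid into the insert script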
As you will realise when you try this, the resulting string will be long. I think it would be a good idea to keep this in a separate script and reference it from the main post-deploy script with :r (apologies if you knew this already!). Also remember that you don't need quotes around the binary "string".
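For example, the main post-deploy script might pull the image script in like this (the file name DefaultImage.sql is just an illustration):
--in Script.PostDeployment.sql
:r .\DefaultImage.sql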
It might sound insane what I am asking, but I thought I'd give it a try. I have the following statements that create a table and load an XML file from disk.
CREATE TABLE Ts (IntCol int, XmlCol xml);
GO
INSERT INTO Ts(XmlCol)
SELECT * FROM OPENROWSET(
BULK 'C:\Users\caf\Desktop\EEEEE\StoreDocument.xml',
SINGLE_BLOB) AS x;
GO
However, when I use SELECT XmlCol FROM Ts, the result appears as a link, and if I click on it the content of the XML file is displayed in a new window. So far, so good.
Is there any way, after the select statement is executed, to automatically open the results in a new tab without having to click myself? Thanks
Not that I know...
If you choose "output to grid" (that's the case where you'd get the "link"), SSMS expects a multi-column result; only when the XML is the only column might this work.
But SSMS still expects many rows; only when your XML is a scalar result (one single value) would this make sense.
SQL and SSMS are built for set-based thinking; scalar values are procedural thinking.
If you know that your XML is the one and only result, you could use "output to text", but that is not the same...
So, I'm sorry, I think you will have to perform that click manually :-D
I am working on an import script; the idea is to import multiple workbooks into one table. I have made progress: I am able to import one workbook successfully into my table. What I want is a query that will loop over a folder, read the file names, and import the data into my database in Microsoft SQL Server Management Studio.
--Creating the TABLE--
CREATE TABLE BrewinDolphinHoldings
(
recordID INT IDENTITY(1,1),
FUNDNAME VARCHAR(25),
SEDOL VARCHAR(7),
ISIN VARCHAR(11),
NAME VARCHAR(20),
WEIGHT INT,
CONSTRAINT [pk_recordID] PRIMARY KEY ([recordID] ASC)
)
INSERT INTO BrewinDolphinHoldings (FUNDNAME, ISIN, NAME, WEIGHT)
VALUES
('HoldingsData', 'GB125451241', 'DavidsHoldings', 22)
--SELECTING THE SHEET--
SELECT/UPDATE? *
FROM OPENROWSET('Microsoft.JET.OLEDB.4.0',
'Excel 8.0;Database=X:\CC\sql\DEMO\SpreadsheetName.xlsx',
'SELECT * FROM [Sheet1$]') AS HoldingsData
So essentially my question is: I want to create a loop that reads the file names in a directory, so that on each iteration the import reads the next name and imports the relevant spreadsheet. For example:
DECLARE SpreadsheetName as STRING
DECLARE FileExtension as '.xlsx'
FOR EACH ITEM IN DIRECTORY
X=1
Y=MAX
FILENAME THAT LOOP READS = SpreadsheetName
SELECT * FROM
OPENROWSET('Microsoft.JET.OLEDB.12.0',
'Excel 8.0;Database=X:\CC\sql\DEMO\SpreadsheetName + fileExtension.xls
END LOOP
So, I'm thinking maybe something like this? Although I don't know if the loop will overwrite my database; maybe instead of UPDATE I should use INSERT?
I don't want to use SSIS, preferably just a query, although if anyone can recommend anything I could look into, or help me with this loop, it would greatly help.
I'm open to new ideas, so if anyone can fix my code, or give me a few examples of imports for multiple Excel sheets, it would be greatly appreciated!
I'm new to SQL Server, but I do have some previous programming experience!
Thanks!
You can use bcp to do what you are talking about for any type of delimited text file, such as CSV or tab-delimited text. If possible, generate/save the spreadsheets as CSV and use this method. See these links.
Import Multiple CSV Files to SQL Server from a Folder
http://www.databasejournal.com/features/mssql/article.php/3325701/Import-multiple-Files-to-SQL-Server-using-T-SQL.htm
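If you go the CSV route, the T-SQL counterpart to bcp is BULK INSERT. A minimal sketch, assuming a comma-separated file with a header row and a staging table whose columns match the file (the path and names are illustrative):
BULK INSERT dbo.HoldingsStaging
FROM 'X:\CC\sql\DEMO\Holdings.csv'
WITH (
    FIELDTERMINATOR = ',',  --column delimiter
    ROWTERMINATOR = '\n',   --row delimiter
    FIRSTROW = 2            --skip the header row
);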
If it has to be Excel, then you can't use bcp, but these links should still help you with the logic for looping over the files in the folder. I have never used the Excel OPENROWSET before, but since you have it working as you said, inserting should work just the same. You can still use xp_cmdshell/xp_dirtree to list the files and build the paths, even though you can't import the workbooks themselves with bcp.
How to list files inside a folder with SQL Server
I would then say it would be easiest to do an INSERT from a SELECT over the OPENROWSET to put the data into the table.
http://www.w3schools.com/sql/sql_insert_into_select.asp
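Putting those pieces together, here is a minimal sketch of the loop, assuming the ACE OLE DB provider is installed, ad hoc distributed queries are enabled, and every workbook has a Sheet1 whose columns line up with the table (the folder path is the one from the question):
--list the files in the folder with xp_dirtree (depth 1, files included)
DECLARE @files TABLE (subdirectory NVARCHAR(260), depth INT, isFile BIT);
INSERT INTO @files
EXEC master.sys.xp_dirtree 'X:\CC\sql\DEMO', 1, 1;

DECLARE @fileName NVARCHAR(260), @sql NVARCHAR(MAX);
DECLARE fileCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT subdirectory FROM @files
    WHERE isFile = 1 AND subdirectory LIKE N'%.xlsx';
OPEN fileCursor;
FETCH NEXT FROM fileCursor INTO @fileName;
WHILE @@FETCH_STATUS = 0
BEGIN
    --OPENROWSET won't take a variable for the path, so build dynamic SQL
    SET @sql = N'INSERT INTO BrewinDolphinHoldings (FUNDNAME, SEDOL, ISIN, NAME, WEIGHT)
        SELECT * FROM OPENROWSET(''Microsoft.ACE.OLEDB.12.0'',
            ''Excel 12.0;Database=X:\CC\sql\DEMO\' + @fileName + N''',
            ''SELECT * FROM [Sheet1$]'')';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM fileCursor INTO @fileName;
END
CLOSE fileCursor;
DEALLOCATE fileCursor;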
Make sure xp_cmdshell is enabled on your SQL Server instance as well.
https://msdn.microsoft.com/en-us/library/ms190693(v=sql.110).aspx
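For reference, enabling xp_cmdshell (and ad hoc distributed queries, which the OPENROWSET calls above need) is done with sp_configure:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
--needed for OPENROWSET against ad hoc data sources
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;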
When publishing a dacpac with SqlPackage.exe, it runs the schema compare first, followed by the pre-deployment scripts. This causes a problem when, for instance, you need to drop a table or rename a column: the schema compare was done before the object was modified, so the deployment fails, and the publish must be repeated to take the new schema into account.
Anyone have a work-around for this that does not involve publishing twice?
Gert Drapers called it the pre-pre-deployment script here.
It really is a challenge. If you need to add a non-nullable, foreign-key column to a table full of data, you can only do it with a separate script (see the sketch below).
If you are the only developer that is not a problem, but when you have a large team, that "separate script" has to somehow be executed before every DB publish.
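To illustrate, a minimal sketch of what that separate script typically has to do (table and column names are hypothetical): the data must be backfilled before the NOT NULL and foreign-key constraints can be applied, which the generated diff script cannot do by itself.
--illustrative only: add the column as NULL, backfill it,
--then tighten it to NOT NULL and add the foreign key
ALTER TABLE dbo.OrderLine ADD CustomerId INT NULL;
GO
UPDATE ol SET ol.CustomerId = o.CustomerId
FROM dbo.OrderLine AS ol
JOIN dbo.[Order] AS o ON o.OrderId = ol.OrderId;
GO
ALTER TABLE dbo.OrderLine ALTER COLUMN CustomerId INT NOT NULL;
ALTER TABLE dbo.OrderLine ADD CONSTRAINT FK_OrderLine_Customer
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customer (CustomerId);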
The workaround we used:
Create a separate SQL "before-publish" script (in the DB project) with its Build Action property set to None.
Create a custom MSBuild task that calls the SQLCMD.EXE utility, passing the "before-publish" script as a parameter, and then calls the SQLPACKAGE.EXE utility, passing DB.dacpac.
Add a call to the custom MSBuild task to the db.sqlproj file. For example:
<UsingTask
  TaskName="MSBuild.MsSql.DeployTask"
  AssemblyFile="$(MSBuildProjectDirectory)\Deploy\MsBuild.MsSql.DeployTask.dll" />
<Target Name="AfterBuild">
  <DeployTask
    Configuration="$(Configuration)"
    DeployConfigPath="$(MSBuildProjectDirectory)\Deploy\Deploy.config"
    ProjectDirectory="$(MSBuildProjectDirectory)"
    OutputDirectory="$(OutputPath)"
    DacVersion="$(DacVersion)">
  </DeployTask>
</Target>
MsBuild.MsSql.DeployTask.dll above is that custom MSBuild task.
Thus the "before-publish" script can be called from Visual Studio.
For CI we used a batch file (*.bat) in which the same two utilities (SQLCMD.EXE and SQLPACKAGE.EXE) are called.
The final process we ended up with is a little complicated and deserves to be described in a separate article; here I've only sketched the direction :)
Move from using Visual Studio to using scripts that drive SqlPackage.exe, and you have the flexibility to run scripts before the compare:
https://the.agilesql.club/Blog/Ed-Elliott/Pre-Deploy-Scripts-In-SSDT-When-Are-They-Run
ed
We faced a situation where we needed to transform data from one table into another during deployment of the database project. This is hard to do with a DB project because in the pre-deployment script the destination table (column) doesn't exist yet, while in the post-deployment script the source table (column) is already gone.
To transform data from TableA to TableB we used the following idea (this approach can be used for any data modification):
A developer adds the destination table (dbo.TableB) to the DB project and deploys it onto the local DB (without committing to SVN).
He or she creates a pre-deployment transformation script. The trick is that the script puts the resulting data into a temporary table: #TableB.
The developer deletes dbo.TableA from the DB project. It is assumed that the table will be dropped during execution of the main generated script.
The developer writes a post-deployment script that copies the data from #TableB to dbo.TableB, which has just been created by the main script.
All of the changes are committed to SVN.
This way we don't need the pre-pre-deployment script, because we store the intermediate data in temporary tables.
Note that the approach using the pre-pre-deployment script has the same intermediate (temporary) data; it is just stored in real tables instead of temporary ones. It exists between the pre-pre-deployment and pre-deployment steps, and disappears after the pre-deployment script has executed.
What's more, the approach with temporary tables lets us handle the following complicated but real situation. Imagine that we have two transformations in our DB project:
TableA -> TableB
TableB -> TableC
Apart from that we have two databases:
DatabaseA, which has TableA
DatabaseB, where TableA has already been transformed into TableB; TableA is absent in DatabaseB
Nonetheless we can deal with this situation. We need just one new action in the pre-deployment: before the transformation, we try to copy the data from dbo.TableA into #TableA, and the transformation scripts work with temporary tables only.
Let me show you how this idea works in DatabaseA and DatabaseB.
It is assumed that the DB project has two pairs of pre- and post-deployment scripts: "TableA -> TableB" and "TableB -> TableC".
Below is an example of the scripts for the "TableB -> TableC" transformation.
Pre-deployment script
----[The data preparation block]---
--We must prepare for a possible transformation
--The condition should verify the existence of the necessary columns
IF OBJECT_ID('dbo.TableB') IS NOT NULL AND
OBJECT_ID('tempdb..#TableB') IS NULL
BEGIN
CREATE TABLE #TableB
(
[Id] INT NOT NULL PRIMARY KEY,
[Value1] VARCHAR(50) NULL,
[Value2] VARCHAR(50) NULL
)
INSERT INTO [#TableB]
SELECT [Id], [Value1], [Value2]
FROM dbo.TableB
END
----[The data transformation block]---
--The condition for starting the transformation
--It is very important. It must be as strict as possible to ward off wrong executions.
--The condition should verify the existence of the necessary columns
--Note that the condition and the transformation must use #TableB instead of dbo.TableB
IF OBJECT_ID('tempdb..#TableB') IS NOT NULL
BEGIN
CREATE TABLE [#TableC]
(
[Id] INT NOT NULL PRIMARY KEY,
[Value] VARCHAR(50) NULL
)
--Data transformation. The source and destination tables must be temporary tables.
INSERT INTO [#TableC]
SELECT [Id], Value1 + ' '+ Value2 as Value
FROM [#TableB]
END
Post-deployment script
--Here there must be a strict condition to ward off a failure
--Checking the existence of the fields is a good idea
IF OBJECT_ID('dbo.TableC') IS NOT NULL AND
OBJECT_ID('tempdb..#TableC') IS NOT NULL
BEGIN
INSERT INTO [TableC]
SELECT [Id], [Value]
FROM [#TableC]
END
In DatabaseA, the "TableA -> TableB" pre-deployment script has already created #TableB. Therefore the data preparation block won't be executed, because there is no dbo.TableB in the database.
However, the data transformation will be executed, because #TableB exists in tempdb, created by the transformation block of the "TableA -> TableB" script.
In DatabaseB, the data preparation and transformation blocks of the "TableA -> TableB" script won't be executed, but we already have the transformed data in dbo.TableB. Hence the data preparation and transformation blocks for "TableB -> TableC" will be executed without any problem.
I use the workaround below in such scenarios.
If you would like to drop a table:
Retain the table within the dacpac (under the Tables folder).
Create a post-deployment script to drop the table.
If you would like to drop a column:
Retain the column in the table definition within the dacpac (under the Tables folder).
Create a post-deployment script to drop the column.
This way you can drop tables and columns from your database; in the next deployment (maybe after a few days or even months), exclude the table/column from the dacpac so that it is up to date with the latest schema. A sketch of such a post-deployment script follows.
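A minimal sketch of the post-deployment drop script (the object names are hypothetical); the IF guards make it safe to re-run on databases where the objects are already gone:
--drop an obsolete table, if it still exists
IF OBJECT_ID('dbo.ObsoleteTable', 'U') IS NOT NULL
    DROP TABLE dbo.ObsoleteTable;

--drop an obsolete column, if it still exists
IF COL_LENGTH('dbo.Customer', 'LegacyCode') IS NOT NULL
    ALTER TABLE dbo.Customer DROP COLUMN LegacyCode;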
I am trying to use a SQL Server database project to keep all our table, stored procedure, and view scripts. I now want a way to keep all our reference (static) data as well, so that when the project is deployed it installs all the DB objects and then inserts all the reference data.
I found similar articles for VS 2010, but they were using things like Team Edition for Database Professionals. My goals are to:
Get our DB under source control.
Synchronize our local development DB with latest version in source control.
Work with Visual Studio 2012 and SQL Server 2012
Use .NET tools as far as possible, and not something like Redgate (Redgate is great, but I don't want to fork out for it just yet if I can use the tools in VS 2012)
You can use this approach:
Put your reference data into XML files, one per table
Add XML files with reference data to your database project
Use a Post-Deployment script to extract the data from XML and merge it into your tables
Here is a more detailed description of each step, illustrated with an example. Let's say that you need to initialize a table of countries that has this structure:
create table Country (
CountryId uniqueidentifier NOT NULL,
CountryCode varchar(2) NOT NULL,
CountryName varchar(254) NOT NULL
)
Create a new folder called ReferenceData under your database project. It should be a sibling of the Schema Objects and Scripts folders.
Add a new XML file called Country.xml to the ReferenceData folder. Populate the file as follows:
<countries>
<country CountryCode="CA" CountryName="Canada"/>
<country CountryCode="MX" CountryName="Mexico"/>
<country CountryCode="US" CountryName="United States of America"/>
</countries>
Find Script.PostDeployment.sql, and add the following code to it:
DECLARE @h_Country int
DECLARE @xmlCountry xml = N'
:r ..\..\ReferenceData\Country.xml
'
EXEC sp_xml_preparedocument @h_Country OUTPUT, @xmlCountry
MERGE Country AS target USING (
    SELECT c.CountryCode, c.CountryName
    FROM OPENXML(@h_Country, '/countries/country', 1)
    WITH (CountryCode varchar(2), CountryName varchar(254)) AS c) AS source (CountryCode, CountryName)
ON (source.CountryCode = target.CountryCode)
WHEN MATCHED THEN
    UPDATE SET CountryName = source.CountryName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CountryId, CountryCode, CountryName) VALUES (newid(), source.CountryCode, source.CountryName)
;
--free the handle created by sp_xml_preparedocument
EXEC sp_xml_removedocument @h_Country
I tried this solution only in VS 2008, but it should be agnostic to your development environment.
How can I generate a script instead of manually writing
if exists (select ... where id = 1)
insert ...
else
update ...
It is very tedious to do that with many records!
Using Management Studio to generate a 'Data only' script generates only inserts, so running that against an existing DB gives errors on primary keys.
For SQL Server 2008 onwards you could start using MERGE statements along with a CTE.
A simple example for a typical id/description lookup table
WITH stuffToPopulate(Id, Description)
AS
(
SELECT 1, 'Foo'
UNION SELECT 2, 'Bar'
UNION SELECT 3, 'Baz'
)
MERGE Your.TableName AS target
USING stuffToPopulate as source
ON (target.Id = source.Id)
WHEN MATCHED THEN
UPDATE SET Description=source.Description
WHEN NOT MATCHED THEN
INSERT (Id, Description)
VALUES (source.Id, source.Description);
Merge statements have a bunch of other useful functionality (such as WHEN NOT MATCHED BY TARGET and WHEN NOT MATCHED BY SOURCE). The MERGE documentation will give you much more info; a small extension of the example is sketched below.
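For instance, extending the example above (same hypothetical table) with a NOT MATCHED BY SOURCE clause removes rows that are no longer in the reference list, so the table exactly mirrors the CTE:
WITH stuffToPopulate(Id, Description)
AS
(
SELECT 1, 'Foo'
UNION SELECT 2, 'Bar'
UNION SELECT 3, 'Baz'
)
MERGE Your.TableName AS target
USING stuffToPopulate AS source
ON (target.Id = source.Id)
WHEN MATCHED THEN
UPDATE SET Description = source.Description
WHEN NOT MATCHED BY TARGET THEN
INSERT (Id, Description)
VALUES (source.Id, source.Description)
--removes rows that are absent from the reference list
WHEN NOT MATCHED BY SOURCE THEN
DELETE;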
MERGE is one of the most effective methods for doing this.
However, writing a MERGE statement is not very intuitive at first, and generating the statements for many rows or many tables is a time-consuming process.
I'd suggest using one of the tools to simplify this challenge:
Data Script Writer (Desktop Application for Windows)
Generate SQL Merge (T-SQL Stored Procedure)
I recently wrote a blog post about these tools and an approach to leveraging SSDT to deploy a database together with its data. Find out more:
Script and deploy the data for database from SSDT project
I hope this helps.