Bulk Insert in Post Deployment Script - sql-server

So, I have a script that uses bulk insert to pull text from files and insert their contents into a table. I am loading from text files because the text may be large, and this way I do not need to worry about escaping. I have the script working locally with a hard-coded directory, e.g. 'C:\Users\me\Files\File.txt'. But I need to run this script in a Post Deployment script, and the text files that I am reading from are in the same database project. I cannot use a hard-coded directory, as the directory may be different depending on the environment the project is published to. Is there a way to get a relative path, or to get what the solution/project's directory is after deployment?

So: Bulk Insert needs an absolute path, scripts have no concept of relative paths, and this will be deployed to multiple environments where I do not know the absolute path in advance. I decided to use PowerShell and Bulk Insert together. In the database project's pre-build event I call my PowerShell script, which is able to figure out its own current directory. It builds a SQL file that is called from the post-deployment script, and in that SQL file I Bulk Insert using the resolved directory.
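Roughly, the pre-build script looks like the sketch below. This is only an outline of the approach, not the exact script: the Files folder, the target table dbo.FileContents and the generated file name are all placeholder names, and the BULK INSERT options will depend on how your text files are laid out.
# Pre-build sketch: resolve this script's own folder and emit a SQL file whose
# BULK INSERT statements use absolute paths built from it.
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$filesDir  = Join-Path $scriptDir 'Files'                     # where the .txt files live (assumed)
$outFile   = Join-Path $scriptDir 'LoadFileContents.sql'      # pulled into post-deployment via :r

$statements = foreach ($f in Get-ChildItem -Path $filesDir -Filter '*.txt') {
    # Add WITH (...) options to match the layout of your text files.
    "BULK INSERT dbo.FileContents FROM '$($f.FullName)';"
}
Set-Content -Path $outFile -Value $statements
The post-deployment script then only needs a :r .\LoadFileContents.sql line, so it never has to know the absolute path itself.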

Why not use BCP (http://msdn.microsoft.com/en-us/library/ms162802.aspx)? It can handle relative paths, and if you are able to call PowerShell, I don't see why you wouldn't be able to call BCP.EXE. It is essentially the same API as BULK INSERT.
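For example, a sketch of driving bcp.exe from PowerShell with a path relative to the script's own location; the server, database, table and file names are placeholders:
# Sketch only: bcp.exe ships with the SQL Server client tools and must be on PATH.
# -c = character data, -T = trusted connection (swap -T for -U/-P with SQL auth).
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
Set-Location $scriptDir                       # makes the relative path below predictable
bcp.exe 'MyDatabase.dbo.FileContents' in '.\Files\File.txt' -S 'localhost' -T -c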

Have you considered using a standard location on the file system? When I need to write DOS/CMD scripts that are portable (including install stuff for later consumption via T-SQL, such as CREATE ASSEMBLY FROM), I do something like:
IF NOT EXIST C:\TEMP\MyInstallFolder (
MKDIR C:\TEMP\MyInstallFolder
)
REM put stuff into C:\TEMP\MyInstallFolder now that it is certain to be there
REM CALL some process that looks in C:\TEMP\MyInstallFolder
The MKDIR creates all missing parent folders. So a folder like C:\TEMP, which used to be standard on Windows PCs but is typically not there anymore now that per-user temp folders are used, is created first, then MyInstallFolder, with no errors. The IF NOT EXIST check also ensures that re-running the script after the first run does not error.
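If PowerShell drives your deployment instead of a CMD script, the same pattern might look like the sketch below (the paths are the example ones from above, and the Copy-Item source is purely illustrative):
# -Force behaves like the MKDIR above: it creates any missing parents and does
# not complain if the folder already exists, so the step stays re-runnable.
New-Item -ItemType Directory -Path 'C:\TEMP\MyInstallFolder' -Force | Out-Null
Copy-Item '.\install\*' -Destination 'C:\TEMP\MyInstallFolder'   # stage whatever the T-SQL step expects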

Related

Including scripts in QueryBank / Saved Export queries?

Can I incorporate scripts setting variables and while loops, etc. in the "QB Query" of the Query Bank?
I have a SQL Server script that works perfectly in my local dev DB, but it doesn't play nicely with Volusion.
I don't know if I should spend more time figuring it out or just stop, because it may not even be possible.
You can't do it in the Custom Queries / Query Bank area; I believe their system stops executing the script when it encounters certain keywords or punctuation. A workaround is to create a .sql file that contains your script and place it in your vspfiles/schema/Generic folder. You'll also need an .xsd file with the same name. The contents of the xsd file aren't very important - you can reuse the contents from an existing one (search their support pages for Developer Resources to find examples). Once the sql and xsd files are in place, you can execute the SQL in the .sql file by using the URL/API method, like this...
http://www.MYWEBSITE.com/net/WebService.aspx?Login=USER@MYWEBSITE.com&EncryptedPassword=XXXXXXXXXXXXXXXXX&EDI_Name=Generic\FILENAME (<-- minus the .sql extension)
You'll need to replace several things above, of course, but this works well for us. One thing to note: if you automate the creation and execution of these files, it's slow/inefficient on their system and could slow your site down, depending on how often you do it.
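If you do automate the execution side, a rough sketch of making that call from PowerShell is below; the site, login and encrypted password are the same placeholders used in the URL above and must be replaced with your own values:
# Sketch: executes the uploaded Generic\FILENAME.sql via the URL/API method.
$site     = 'www.MYWEBSITE.com'
$login    = 'USER@MYWEBSITE.com'              # replace with your Volusion API login
$password = 'XXXXXXXXXXXXXXXXX'               # replace with your encrypted password
$ediName  = 'Generic\FILENAME'                # the .sql file name, minus the extension

$url = "http://$site/net/WebService.aspx?Login=$login&EncryptedPassword=$password&EDI_Name=$ediName"
Invoke-WebRequest -Uri $url -UseBasicParsing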

How to delete my temp folder after Powershell is forced to stop

I have a security question regarding a script that I have. My script creates two temporary CSV files, and after the script runs those files are deleted. However, when the script crashes or the user stops it, those files remain in the folder. How can I make sure that those files get deleted if this happens?
I was thinking about using the Windows temp folder ($TempDir = [System.IO.Path]::GetTempPath()), but this will not make any difference, since the temp folder is only cleared at boot time.
Any thoughts/suggestions?
One solution to your problem may be to write the temporary files to a location that only a limited number of people have access to.
Then, if the existence of these files could affect future executions of the script, a good practice would be to first check for leftovers from previous runs and, if any are found, start the script by cleaning them up.
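Another option, sketched below, is to wrap the work in try/finally so the temporary files are removed even when the script throws or the user presses Ctrl+C mid-run; the file names here are placeholders:
# Sketch: the finally block runs on normal completion, on exceptions and on
# Ctrl+C, so the temporary CSVs do not linger. It will not run if the process
# is killed outright, so checking for leftovers at start-up is still worthwhile.
$tempA = Join-Path ([System.IO.Path]::GetTempPath()) 'report_a.csv'
$tempB = Join-Path ([System.IO.Path]::GetTempPath()) 'report_b.csv'
try {
    # ... create and use the two CSV files here ...
    'col1,col2' | Set-Content $tempA
    'col1,col2' | Set-Content $tempB
}
finally {
    Remove-Item $tempA, $tempB -ErrorAction SilentlyContinue
}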

is it possible to update .properties files from batch script?

Is it possible to update .properties files from a batch script?
We have an external system which reads a configuration properties file and performs some tasks based on it. Our plan is to accept some dynamic values (like a version number) from user input (a Jenkins parameterized build) and update those new values in the existing config.properties file.
Please help!
If I'm understanding correctly, yes, I do something similar. JobA feeds properties to JobB.
JobA runs a task that creates an Amazon EC2 instance. The public URL is detected after instance creation, and I write that to an output.properties file. That file is saved to the jenkins node running JobA.
In Post-build Actions, use "Trigger parameterized build on other projects" (from the Parameterized Trigger Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin).
"Projects to build" is JobB.
In "Parameters from properties file," point it at the output.properties file (relative to the workspace).
When JobB is triggered it'll use that file just like any other properties file, so you have a downstream job that can consume more or less arbitrary output from an upstream job.
[jenkins@burl-aet-jenk01 poc]$ cat output.properties
INSTANCE_HOSTNAME = ec2-54-84-37-183.compute-1.amazonaws.com
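For what it's worth, a sketch of how JobA's build step might produce that output.properties file is below; the hostname value stands in for whatever your EC2-creation step detects, and $env:WORKSPACE is the workspace path Jenkins sets for the running job:
# Sketch: write the key=value pair the Parameterized Trigger Plugin will read.
$hostname = 'ec2-54-84-37-183.compute-1.amazonaws.com'    # placeholder for the detected public URL
"INSTANCE_HOSTNAME = $hostname" |
    Set-Content -Path (Join-Path $env:WORKSPACE 'output.properties')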

Creating A Log Of Files In A Folder and update into table

Can anyone help me build a table that lists all files in a specified folder, so that whenever a file is copied to that folder the table is updated and keeps a log of the files?
I need the list to retain the names even if a file is later moved out of that folder or deleted. The data would later be deleted by a scheduler.
I also need the table to record exactly when the file was copied into the folder, not its modification or creation time.
I am using Windows 7; how can I build a system with this behaviour?
Just turn on Windows file auditing for that folder; there are video walkthroughs (for example on YouTube) that take you through the process.
Microsoft provides information on its TechNet site on how to use the LogParser tool to extract Security events from the event log.
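If you'd rather stay in PowerShell than use LogParser, a rough sketch of pulling the same audit events out of the Security log is below; it assumes auditing is already enabled for the folder, that event ID 4663 is what your audit policy produces, and that C:\WatchedFolder is the folder in question (run it elevated, since reading the Security log needs admin rights):
# Sketch: read file-access audit events and keep the ones for the watched folder.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } -MaxEvents 200
foreach ($e in $events) {
    $xml = [xml]$e.ToXml()
    $objectName = ($xml.Event.EventData.Data | Where-Object { $_.Name -eq 'ObjectName' }).'#text'
    if ($objectName -like 'C:\WatchedFolder\*') {
        # TimeCreated is when the access was audited, i.e. when the file arrived.
        [pscustomobject]@{ FileName = $objectName; CopiedAt = $e.TimeCreated }
    }
}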
Note: Admin questions should really be posted to the SuperUser site.

Data tier applications - Post Deployment

This is such a simple thing that even asking here is making me feel stupid, but since I have been stuck on this for a long time, I will ask it here. I am working on a data-tier application in Visual Studio. I have the usual things like tables, stored procs and some post-deployment data. By default, a data-tier application comes with a Scripts/Post-Deployment folder. Inside this folder there is a file called Script.PostDeployment.sql. Just to be a little more organised, I am creating folders inside Post-Deployment called StaticData and TestData. My insert statements for data creation are located inside these folders. So, based on this structure, I am adding the following code to my Script.PostDeployment.sql:
/*
Post-Deployment Script Template
--------------------------------------------------------------------------------------
This file contains SQL statements that will be appended to the build script.
Use SQLCMD syntax to include a file in the post-deployment script.
Example: :r .\myfile.sql
Use SQLCMD syntax to reference a variable in the post-deployment script.
Example: :setvar TableName MyTable
SELECT * FROM [$(TableName)]
--------------------------------------------------------------------------------------
*/
:r .\StaticData\States.sql
:r .\TestData\Logins.sql
The problem is that the above code does not work. For some strange reason, the deploy command just ignores the paths and looks for States.sql and Logins.sql in Scripts/Post-Deployment, not in the appropriate subfolders. Has anyone else encountered anything similar? It's a very simple issue, but it is taking me forever to get around it. I have tried my best to explain, but ask questions and I will try to make things clearer.
Thanks!
I took a look at your sample code. When I first tried to reproduce this I was using a SQL 2008 database project in Visual Studio 2010, but your project is a data-tier application, and that is very different; when I switched to a data-tier application, I was able to reproduce what you're seeing.
Data-tier applications produce DAC packages that contain the definitions of objects and also contain user-defined scripts, like the pre and post deployment scripts. Now, I'm not 100% certain (I haven't used DAC packages before, so I'm basing this on observation and research), but I'm guessing that the file structure of the DAC package doesn't support sub-folders under the Script\Post-deployment folder; I am assuming it has a pretty strict folder structure internally. Consequently, the DACCompiler appears designed to strip out just the filenames from your file references in the post-deployment script, and it ignores the directory path.
There is a whitepaper on data-tier applications here. In it is a section on adding a post-deployment script to the package, and in that section are some best practices, including the following:
• When you work in Solution Explorer, it is recommended that you include all post-deployment commands in the Script.PostDeployment.sql script file. This is because only one post-deployment file is included in the DAC package. In other words, you should not create multiple files.
Now, technically, that's what the :r command does, but you may find it easier to just embed the commands directly into the file manually.
It's also possible that this is simply a bug in the design of the DACCompiler.
Here's what I recommend that you do:
1. For now, the easiest thing to do, I believe, is just to move the scripts up directly under the Post-Deployment folder; give them unique, descriptive names to compensate for not having the subdirectories.
2. Alternatively, if you really want to keep the subdirectories, add a pre-build command to your project and have it copy the scripts from the subdirectories into the post-deployment directory before the build starts (you'll need to ensure the scripts have unique filenames) - see the sketch after this list.
3. If you feel that this is a bug, or a feature that should exist, go to http://connect.microsoft.com/SQLServer and recommend that the product team address it in a future version of the product. This is a great place to make these kinds of recommendations, because the feedback goes to the product team, the user community at large can vote on feedback to increase its weight, and the product team can communicate back to you with information about the feedback.
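A minimal sketch of that pre-build copy (option 2), assuming the StaticData and TestData folders from the question sit under Scripts\Post-Deployment and that the script runs from the project directory; the paths and the way it is wired into the build are illustrative:
# Sketch: flatten the post-deployment subfolders before the DAC package is built.
$projectDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$postDeploy = Join-Path $projectDir 'Scripts\Post-Deployment'
foreach ($sub in 'StaticData', 'TestData') {
    Get-ChildItem -Path (Join-Path $postDeploy $sub) -Filter '*.sql' |
        Copy-Item -Destination $postDeploy -Force    # filenames must be unique across subfolders
}
The :r references in Script.PostDeployment.sql would then point at the flat copies (for example :r .\States.sql).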
And, of course, you could hold out and see if somebody else has a different answer, and if there is, great! But I'm guessing if nobody else has responded yet, then probably there isn't one; I certainly couldn't find anything in my digging.
I hope overall this information is helpful. I wish I could give you a way to have it work now, but I think your best bet is to work within the limitations of the current design and post feedback to Connect.
Good luck.
I have a feeling that this will be too late in the pipeline to help with your problem but it might be worth a look. The dacpac format is just a zip file that contains a series of xml files and SQL scripts. If you change the extension of the file to zip then you will be able to access the files that it contains. The postdeploy.sql file should contain the aggregation of your post deployment script and any others that it references.
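For example, a quick way to peek inside from PowerShell, assuming the package is at .\MyDatabase.dacpac (the name is illustrative, and Expand-Archive needs PowerShell 5 or later):
# Sketch: copy the package with a .zip extension, as described above, then expand it.
Copy-Item .\MyDatabase.dacpac .\MyDatabase.zip
Expand-Archive -Path .\MyDatabase.zip -DestinationPath .\MyDatabase_contents
Get-ChildItem .\MyDatabase_contents          # postdeploy.sql should be among the extracted files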
I just tried this using Visual Studio 2013 and it works.
IF ( '$(DeployType)' = 'Qualification' )
BEGIN --Run scripts
PRINT 'Deploying Qualification Specific scripts.'
:r .\Qualification\"QualificationSpecificTestScript.sql"
END
ELSE IF ( '$(DeployType)' = 'Production' )
BEGIN --Run scripts
PRINT 'Deploying Production Specific scripts.'
:r .\Production\"ProductionSpecificTestScript.sql"
END
The contents of QualificationSpecificTestScript.sql and ProductionSpecificTestScript.sql are inserted into the generated post-deployment script.
Here is the generated script file (just the relevant section):
IF ( '$(DeployType)' = 'Qualification' )
BEGIN --Run scripts
PRINT 'Deploying Qualification Specific scripts.'
begin transaction;
PRINT 'IN QUALIFICATION ENVIRONMENT POST DEPLOYMENT SCRIPT'
commit transaction;
END
ELSE IF ( '$(DeployType)' = 'Production' )
BEGIN --Run scripts
PRINT 'Deploying Production Specific scripts.'
begin transaction;
PRINT 'IN PRODUCTION ENVIRONMENT POST DEPLOYMENT SCRIPT'
-- TODO: Confirm this record should be deleted
--DELETE TB_VariableName where Id = 9514
commit transaction;
END
