Releasing a local SQL database after it has been tested

I'm developing software that, during start-up, will check whether the attached local database (by which I mean a separate .mdf file that is attached using an open dialog box) is the appropriate database for the software. If it is, I'll copy the source file and paste it where my software can always find it (e.g. C:\Program Files(my system generated folder)). To do that, I first have to release the .mdf file so I can copy it to my folder.
How can I release the .mdf file so I can create a copy of it in my desired folder at runtime? I'm using VB.NET.

Not sure if this is what you actually mean, but it seems likely. To copy or move database files you have to kick everyone off, i.e. take the database offline (or detach it). You can do this via SSMS.
If you are looking to do this via code, this may be what you are looking for.
alter database mydbname set offline with rollback immediate
Note that this kicks everyone off immediately, potentially causing unhappy users.
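As a minimal sketch of the full offline/copy/online cycle (the database name is a placeholder; your VB.NET code could execute these statements via a SqlCommand against the master database and do the file copy in between):
-- Disconnect all sessions immediately and take the database offline
alter database mydbname set offline with rollback immediate
go

-- ...copy the .mdf/.ldf files from your application code here...

-- Bring the database back online once the copy is finished
alter database mydbname set online
go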
ADDED
After I answered, it occurred to me that this had to be a duplicate, and it was.

Related

SQL Server Management Studio Restore Database only restores databases one at a time

I am using SQL Server Management Studio 2014; I am trying to restore a number of databases and encountered some strange behavior.
As expected, the "Add" button allows me to add several .bak files.
Everything proceeds normally. However, when it finishes importing, it shows me a dialog saying "Database '[name]' imported successfully", where [name] is the name of the first .bak file I added.
Indeed, looking at the Databases node in Object Explorer, I can confirm that it added only the first .bak file on the list; none of the other items were restored.
I've seen posts that suggest how to write scripts to do this kind of an import, but I'd prefer to do it from the UI.
Does anyone know what could be causing this behavior and how to fix it?
Yeah, I suppose I could just do them one at a time as a workaround, but that's really time-consuming and error-prone - I'd rather just be able to restore all of them at once.
That is the defined behaviour. The Add button allows you to select a backup that uses more than one file. The full process involves selecting a database first and then restoring it. As you mention in your question, there is the option of using scripts, and you can use the interface to generate the script for you. All you need to do then is run the scripts as a batch.
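A minimal sketch of what such a batch of generated restore scripts might look like (database names and paths are placeholders):
restore database Db1 from disk = 'C:\Backups\Db1.bak' with recovery
go
restore database Db2 from disk = 'C:\Backups\Db2.bak' with recovery
go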
If you open a new instance of SQL Server Management Studio you can run more than one restore at once. This is a way to work around the built-in GUI limitation if you are doing the restore/backup manually.

How to version control SQL Server database with Visual Studio's Git Source Control Provider

I have Git Source Control Provider setup and running well.
For Visual Studio projects, that is.
The problem is that each such project is tightly tied to a SQL Server database.
I know how to version control a database in its own .git repository but this is neither convenient nor truly robust because ideally I would want the same ADD, COMMIT, TAG and BRANCH commands to operate on both directory trees simultaneously, in a synchronized manner.
Is there a way to Git a SQL Server database with Visual Studio's Git Source Control Provider in the manner I described?
You can install the SQL Server Data Tools if you want to, but you don't have to: you can use the Database Publishing Wizard to script your table data right from Visual Studio into the solution's folder, then Git it just like you do with the other project files in that folder.
You can store your database schema as a Visual Studio project using SQL Server Data Tools and then version control this project using Git.
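In an SSDT project each object lives as a declarative CREATE script in its own .sql file, which diffs and merges well under Git. A minimal sketch (the table and file path are hypothetical):
-- Tables\Customers.sql in the SSDT project (hypothetical object)
create table dbo.Customers
(
    CustomerId int not null primary key,
    Name nvarchar(100) not null
)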
Having been in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact: you cannot treat database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes one developer makes do not affect other developers. Likewise, a developer is not affected by changes made by colleagues. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository. A developer who wants the latest code needs to request it from the source control tool. In a database, the change already exists and impacts others even if it was never checked in to the repository.
During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. There is no such check in the database. If you alter a procedure from your local PC and at the same time I modify the same procedure from my local PC, we overwrite each other's changes.
The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build (compile). The output is binaries, which we copy over the existing ones; we don't care what was there before. With a database we cannot recreate it from scratch, because we need to preserve the data! Instead, deployment executes SQL scripts that were generated in the build process.
When executing the SQL scripts (with the DDL, DCL, and DML (for static content) commands), you assume the current structure of the environment matches the structure at the time you created the scripts. If not, your scripts can fail, for example by trying to add a new column that already exists (see the sketch after this list).
Treating SQL scripts as code and generating them manually leads to syntax errors, database dependency errors, and scripts that are not reusable, all of which complicates developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment that is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
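To make the structure-drift failure concrete, a change script can be written defensively. A minimal sketch (table and column names are hypothetical):
-- Only add the column if it does not already exist, so the script
-- is safe to run against an environment whose structure has drifted
if not exists (
    select 1 from sys.columns
    where object_id = object_id('dbo.Customers')  -- hypothetical table
      and name = 'LoyaltyTier'                    -- hypothetical column
)
begin
    alter table dbo.Customers add LoyaltyTier int null
end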
What I found that works is the following:
Use an enforced version control system that enforces check-out/check-in operations on the database objects. This makes sure the version control repository matches the code that was checked in, because it reads the object's metadata during the check-in operation rather than as a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overwriting each other's code.
Use impact analysis that utilizes baselines as part of the comparison to identify conflicts, and to determine whether a difference (when comparing the object's structure between the source control repository and the database) is a real change originating from development, or a difference that originated from a different path, such as a different branch or an emergency fix, and should therefore be skipped.
Use a solution that knows how to perform impact analysis for many schemas at once, via UI or API, in order to eventually automate the build & deploy process.
An article I wrote on this was published here; you are welcome to read it.

SSMS And Visual Basic Express .... cannot backup

Let us start with...yes I am new to SQL and really only a lightweight programmer. So I am assuming that I am doing something horribly wrong. I have spent days on the MS forums looking for an answer to no avail. So I am going to give as much info as possible.
The application language is VB Express 2010, using SQL Express 2008. The database contains basic tables: no stored procs, no views, no diagrams. The application has configured diagrams where one of the tables has inner joins... Tables were originally built in SSMS, but have been altered in VBE.
Anytime I run the application, even after exiting the application, if I then go to SSMS I can see the database name but I cannot open it up (no + beside it). If I try I get an error that says:
"One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup."
When I look at the files, I see two log files, one with _1 appended on to it. If I delete the logfiles before opening SSMS, everything opens fine. If I had already opened SSMS then I have to delete the files, reboot my computer and then I can access the database through SSMS...
I recently found that if I go into SSMS, take the database offline and then bring it back online I can get access back.
Anytime I open SSMS, I have to completely reboot my computer before VBE will reconnect the database.
The bottom line is that I cannot back up the database without either deleting the log files or doing an offline/online cycle in SSMS....
This is driving me nuts. I cannot possibly deploy the application if I cannot achieve a normal backup procedure. And I cannot seem to get any kind of answer about why this is happening.
What am I doing so WRONG?
If you are using SQL Server 2008, reattach the database files; by default they are stored in C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA.
Then check the database in that folder; backup and restore operate on the primary files of the database. Don't modify or delete the database files: if the location of the log files has changed, it will show that error. Please give your mail ID and I'll send a program to back up and restore the database files in a WinRAR archive.
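For reference, a plain T-SQL full backup, independent of any tooling, looks along these lines (database name and path are placeholders):
backup database MyAppDb
to disk = 'C:\Backups\MyAppDb.bak'
with init, name = 'MyAppDb full backup'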

Is it possible to access the FILESTREAM share?

What I mean is being able to access it through Windows Explorer or other programs. I believe the answer is that it isn't possible. But I really want to know why it's not allowed. It seems that the files could be made available read-only through the network share.
You can't access the Filestream share directly and explore around. Any open of a Filestream file needs to be done using the path retrieved from SQL Server and by using NtCreateFile (or a wrapper) with the appropriate transaction context passed in through the EA buffer.
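A minimal T-SQL sketch of the supported pattern for retrieving that path and transaction context (table and column names are hypothetical):
begin transaction

-- PathName() returns the logical UNC path of the FILESTREAM cell;
-- the transaction context is needed to open it through the Win32 API
select Document.PathName() as FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() as TxContext
from dbo.Documents          -- hypothetical table with a FILESTREAM column
where DocumentId = 1

-- (open the file via OpenSqlFilestream / NtCreateFile here)

commit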
It is possible to create a new share and point it to the physical location of the files, however this is pretty pointless as there's no supported way to resolve a SQL Filestream row to a physical file location (the RsFx filter driver handles these conversions internally), the file location may change at any time due to concurrent updates / partition changes, and you'll need to relax security on the folder to an unacceptable level. It can also cause corruptions in the database if you move or delete files without the knowledge of SQL Server. Any locks held on physical files will interfere with deletes as mentioned in dportas' comment.
I agree it would be great to be able to browse a namespace of the Filestream files through explorer and open files directly through applications without requiring an application rewrite.
Yes it is possible. The point of filestream however is that you get that access via the filestream API rather than direct through the filesystem. Bear in mind that the file name could change without warning - for example updates may cause a new filestream file to be created. Possibly if you are holding file system locks (even shared locks) on a file that is needed by SQL Server then that may cause a contention problem. So if you access the data direct through the file system the results will be unsupported and may be unreliable - but then again it might work :-)
Yes, it is possible if you are also using FileTables (I am using SQL Express 2017). In SQL Server Configuration Manager, right-click on your server instance, select Properties, and then go to the FILESTREAM tab. Check "Allow remote clients access to FILESTREAM data". You may have to stop/start your instance. Now you can browse to the share, which is named after your instance (in my case SqlExpress). In my database (SimioPortal) I had created a FileTable (BlobStore) where I stored my files.
So, at the command prompt I can now type dir \\localhost\sqlexpress\SimioPortal\blobstore and see a list of my files. You can do a similar thing in File Explorer.
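For reference, a FileTable like that BlobStore could be created along these lines (a sketch; it assumes the database already has a FILESTREAM filegroup and a FileTable directory configured):
-- Creates a FileTable whose contents appear as a folder on the share
create table dbo.BlobStore as FileTable
with (FileTable_Directory = 'BlobStore')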

Keeping development databases in multiple environments in sync

I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well?
I have a VS database project in SVN containing create scripts which I re-generate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly.
I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions?
There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon.
Database Projects in Visual Studio .NET
Data Schema - How Changes are to be Implemented
Is Your Database Under Version Control?
Get Your Database Under Version Control
Also look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control
However, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following:
Red Gate SQL Compare
Red Gate SQL Data Compare
Holy cow! Talk about making life easy! I had a project get away from me and had multiple people in making schema changes and had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases and see the differences and then sync them up.
In addition to your database CREATE script, why don't you maintain a default data or sample data script as well?
This is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to recreate bugs using the same data that you have.
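A minimal sketch of such a sample-data script (table and values are hypothetical), written so it can be re-run safely:
-- Idempotent seed: insert the sample row only if it is missing
if not exists (select 1 from dbo.Customers where CustomerId = 1)
    insert into dbo.Customers (CustomerId, Name)
    values (1, N'Sample Customer')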
You might also want to take a look at a question I posted some time ago:
Best tool for auto-generating SQL change scripts
You can store a backup (.bak file) of your database rather than the .MDF & .LDF files.
You can restore your db easily using the following script:
use master
go

-- Drop any existing copy of the database, kicking out all connections first
if exists (select * from master.dbo.sysdatabases where name = 'your_db')
begin
    alter database your_db set SINGLE_USER with rollback IMMEDIATE
    drop database your_db
end

-- Restore from the .bak file, relocating the data and log files
restore database your_db
from disk = 'path\to\your\bak\file'
with move 'Name of dat file' to 'path\to\mdf\file',
     move 'Name of log file' to 'path\to\ldf\file'
go
You can put the above script in a text file, restore.sql, and call it from a batch file using the following command:
osql -E -i restore.sql
That way you can create a script file to automate the whole process:
1. Get the latest db backup from the SVN repository or any suitable storage.
2. Restore the current db using the .bak file.
We use a combination of taking backups from higher environments down, as well as using ApexSQL to handle the initial setup of the schema.
Recently we've been using SubSonic migrations as a coded, source-controlled, run-through-CI way to get change scripts in; there is also the "tarantino" project developed by Headspring out of Texas.
Most of these approaches, especially the latter, are safe to use on top of most test data. I particularly like the last two automated ones because I can make a change, and the next time someone gets latest they just run the "updater" and they are ushered to the latest version.
