Database integration tests in Visual Studio Online - sql-server

I'm enjoying the new Build tool in Visual Studio Online. It allows me to do almost everything I do on my local build server. But one thing I'm missing is database integration tests: for every build run I re-create the test database from scripts and run DB tests against it.
In Visual Studio Online I can't seem to find any database instance available for my needs.
I have tried creating an Azure SQL database (via PowerShell) for every build run and deleting it after the build completes. But it takes forever (compared to the rest of the build process) to create a database. And even when the PowerShell scripts are done, the database is not yet ready to accept requests, so I have to keep polling until it actually is. This scenario becomes too complex and unreliable.
Are there other options to do database (SQL Server) integration tests in Visual Studio Online?
Update: I think I wasn't very clear about what I need. I need a free (or very cheap) SQL Server instance to connect to that runs on the build agent in VSO - something like SQL Express, SQL CE or LocalDB - where I can connect and re-create a database to run C# tests against. Re-creating the database or running the tests is not the problem; having a valid connection string is the problem.
Update Oct 2016: I've blogged about how I do integration testing in VSTS

The TFS build servers come with MSSQL Server 2012 and MSSQL Server 2014 LocalDBs preinstalled.
Source: TFS Service - Software on the hosted build server
So, just put the following one-liner into your solution's post-build event to create a MYTESTDB LocalDB instance for your needs. This will allow you to connect to (LocalDB)\MYTESTDB and run the database integration tests just fine.
"C:\Program Files\Microsoft SQL Server\120\Tools\Binn\SqlLocalDB.exe" create "MYTESTDB" 12.0 -s
Source: SqlLocalDB Utility
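For example, a test fixture could then connect to that instance and re-create its database before the tests run. This is a minimal sketch, not part of the original answer: the database name MyTestDb and the helper class are assumptions.

using System.Data.SqlClient;

public static class TestDatabase
{
    // (LocalDB)\MYTESTDB is the instance created by the post-build step above;
    // "MyTestDb" is a hypothetical database name.
    private const string MasterConnectionString =
        @"Data Source=(LocalDB)\MYTESTDB;Initial Catalog=master;Integrated Security=True";

    public const string ConnectionString =
        @"Data Source=(LocalDB)\MYTESTDB;Initial Catalog=MyTestDb;Integrated Security=True";

    public static void Recreate()
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // Drop and re-create the test database before running the schema scripts.
                command.CommandText =
                    "IF DB_ID('MyTestDb') IS NOT NULL DROP DATABASE MyTestDb; CREATE DATABASE MyTestDb;";
                command.ExecuteNonQuery();
            }
        }
    }
}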

In Azure DevOps, with .NET Core and EF Core, I use a different technique.
I use a SQLite in-memory database to execute both integration and end-to-end tests.
Currently in .NET Core you can use both the InMemory provider and SQLite with the in-memory option to run any integration test on the default Azure DevOps CI agent.
InMemory: https://learn.microsoft.com/en-us/ef/core/miscellaneous/testing/in-memory
Note that the InMemory database is not a relational database; it is a general-purpose one, and just to mention one limitation:
InMemory will allow you to save data that would violate referential
integrity constraints in a relational database
SQLite in memory mode https://learn.microsoft.com/en-us/ef/core/miscellaneous/testing/sqlite
This approach offers a more realistic platform to test against.
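As a minimal sketch of the SQLite option (MyDbContext is a placeholder for your own EF Core context, not a name from the original answer), the key point is that the in-memory database lives only while a connection is open:

using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// The in-memory database exists only while at least one connection is open,
// so the connection is opened explicitly and kept alive for the test.
var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();

var options = new DbContextOptionsBuilder<MyDbContext>()
    .UseSqlite(connection)
    .Options;

using (var context = new MyDbContext(options))
{
    context.Database.EnsureCreated(); // build the schema from the EF Core model
    // ... seed data and exercise the component under test ...
}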
Now, I went a bit further: I didn't want to just be able to run integration tests with a database dependency in Azure DevOps, I also wanted to host my WebAPIs in the CI agent and share the database between the API DbContext and my Persister object (the Persister is a helper class that allows me to automatically generate any kind of entity and save it to the database).
A quick note on Integration Tests and End to End tests:
Integration Tests
An example of an integration test involving a database could be a test of the Data Access Layer. In this case you would normally create a DbContext when starting the test, fill the target database with some data, use the component under test to manipulate the data, and then use the DbContext again to make sure the assertions are satisfied.
This scenario is quite straightforward: in the same code you can share the same DbContext to generate the data and inject it into the component, as in the sketch below.
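Something along these lines (CategoryRepository and its GetActive method are hypothetical, just to illustrate the shape of such a test; _context is the seeded context from the test class shown further down):

[TestMethod]
public void GetActive_returns_only_active_categories()
{
    // Arrange: seed through the same context that is injected into the component.
    _context.Categories.Add(new Category { Name = "A", Active = true });
    _context.Categories.Add(new Category { Name = "B", Active = false });
    _context.SaveChanges();

    var repository = new CategoryRepository(_context); // hypothetical component under test

    // Act
    var result = repository.GetActive();

    // Assert: verify the filtering through the same context.
    Assert.AreEqual(1, result.Count);
    Assert.AreEqual("A", result[0].Name);
}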
End to End Tests
Imagine you have, as in my case, a RESTful .NET Core WebAPI you want to test, making sure all your CRUD operations work as expected, and you also want to verify that filtering, pagination and so on are correct.
In this case it is much more complex to share the same DbContext between the test (data setup and/or verification) and the WebAPI stack.
Before .NET Core, EF Core and WebHostBuilder
So far, the only way I knew was to have a dedicated server, VM or Docker image responsible for serving the API, which also had to be accessible from the web or from Azure DevOps.
I had to set up my integration tests to either re-create the DB, or be clever/limited enough to completely ignore the existing data, and make sure each test was resilient to data corruption and fully reliable (no false negative or positive results).
Then I had to configure my build definition to run the tests.
Leveraging SQLite in memory with cache=shared and WebHostBuilder
Below I first describe the two major technologies I use, then I add some code to show how to do it.
SQLite file::memory:?cache=shared
SQLite allows you to work in memory instead of using a traditional file, which already gives a huge performance boost by removing the I/O bottleneck. On top of that, using the option cache=shared, multiple connections within the same process can access the same data. If you need more than one database, you can specify a name.
More info: https://www.sqlite.org/inmemorydb.html
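To make the effect concrete, here is a small sketch (my illustration, using Microsoft.Data.Sqlite; the table name Demo is arbitrary) showing two connections in the same process reading the same shared in-memory database:

using Microsoft.Data.Sqlite;

// The first open connection creates the shared in-memory database and keeps it alive.
var first = new SqliteConnection("Data Source=file::memory:?cache=shared");
first.Open();

using (var create = first.CreateCommand())
{
    create.CommandText = "CREATE TABLE Demo (Id INTEGER); INSERT INTO Demo VALUES (1);";
    create.ExecuteNonQuery();
}

// A second connection with the same data source sees the same data.
using (var second = new SqliteConnection("Data Source=file::memory:?cache=shared"))
{
    second.Open();
    using (var query = second.CreateCommand())
    {
        query.CommandText = "SELECT COUNT(*) FROM Demo";
        var count = (long)query.ExecuteScalar(); // returns 1
    }
}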
WebHostBuilder
.NET Core offers host builders; WebHostBuilder allows us to create a server that starts up and hosts our WebAPI, so it can be reached as if it were hosted on a real server.
When you use the WebHostBuilder in a test class, the two live within the same process.
More info: https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.hosting.webhostbuilder?view=aspnetcore-2.2
The Solution
When initialising an E2E test, create a new client to connect to the API, and create a DbContext that you will use to seed the database and possibly to assert.
Test initialisation:
[TestClass]
public class CategoryControllerTests
{
    private TestServerApiClient _client;
    private Persister<Category> _categoryPersister;
    private Builder<Category> _categoryBuilder;
    private IHouseKeeperContext _context;
    protected IDbContextTransaction Transaction;

    [TestInitialize]
    public void TestInitialize()
    {
        _context = ContextProvider.GetContext();
        _client = new TestServerApiClient();
        ContextProvider.ResetDatabase();
        _categoryPersister = new Persister<Category>(_context);
        _categoryBuilder = new Builder<Category>();
    }

    [TestCleanup]
    public void Cleanup()
    {
        _client?.Dispose();
        _context?.Dispose();
        _categoryPersister?.Dispose();
        ContextProvider.Dispose();
    }

    [...]
}
TestServerApiClient class:
public class TestServerApiClient : System.IDisposable
{
    private readonly HttpClient _client;
    private readonly TestServer _server;

    public TestServerApiClient()
    {
        var webHostBuilder = new WebHostBuilder();
        webHostBuilder.UseEnvironment("Test");
        webHostBuilder.UseStartup<Startup>();
        _server = new TestServer(webHostBuilder);
        _client = _server.CreateClient();
    }

    public void Dispose()
    {
        _server?.Dispose();
        _client?.Dispose();
    }
}
The ContextProvider class is used to generate the DbContext, which can be used to seed data or perform database queries for assertions.
public static class ContextProvider
{
    private static bool _requiresDbDeletion;
    private static IConfiguration _applicationConfiguration;

    public static IConfiguration ApplicationConfiguration
    {
        get
        {
            if (_applicationConfiguration != null) return _applicationConfiguration;

            _applicationConfiguration = new ConfigurationBuilder()
                .AddJsonFile("Config/appsettings.json", optional: false, reloadOnChange: true)
                .AddEnvironmentVariables()
                .Build();

            return _applicationConfiguration;
        }
    }

    private static ServiceProvider _serviceProvider;

    public static ServiceProvider ServiceProvider
    {
        get
        {
            if (_serviceProvider != null) return _serviceProvider;

            var serviceCollection = new ServiceCollection();
            serviceCollection.AddSingleton<IConfiguration>(ApplicationConfiguration);

            var databaseType = ApplicationConfiguration?.GetValue<DatabaseType>("DatabaseType") ?? DatabaseType.SQLServer;
            _requiresDbDeletion = databaseType == DatabaseType.SQLServer;

            IocConfig.RegisterContext(serviceCollection, null);

            _serviceProvider = serviceCollection.BuildServiceProvider();
            return _serviceProvider;
        }
        set
        {
            _serviceProvider = value;
        }
    }

    /// <summary>
    /// Generate the db context
    /// </summary>
    /// <returns>DB Context</returns>
    public static IHouseKeeperContext GetContext()
    {
        return ServiceProvider.GetService<IHouseKeeperContext>();
    }

    public static void Dispose()
    {
        ServiceProvider?.Dispose();
        ServiceProvider = null;
    }

    public static void ResetDatabase()
    {
        if (_requiresDbDeletion)
        {
            GetContext()?.Database?.EnsureDeleted();
            GetContext()?.Database?.EnsureCreated();
        }
    }
}
The IocConfig class is a helper class I use in my framework to set up dependency injection. The method used above, RegisterContext, is responsible for registering the DbContext and setting it up as desired; because this is the same class used by the WebAPI, it uses the DatabaseType configuration value to determine what to do.
Inside this class you can probably find most of the "complexity".
When using SQLite in memory, you have to remember that:
The connection is not opened and closed automatically as it is with SQL Server (that's why I used context.Database.OpenConnection();).
If no connection is active, the database is deleted (that's why I used services.AddSingleton<IHouseKeeperContext>(...)). It is important that one connection is left open so that the database is not destroyed, but on the other hand you have to be careful to close all connections when a test ends, so that the database is destroyed and the next test correctly creates a new empty one.
The rest of the class handles the SQL Server configuration for both the Production and Testing setups. I can at any time point the tests at a real instance of SQL Server; all tests will remain fully independent of each other, but it will definitely be slower, and maybe suitable only for a nightly build (if needed at all; it depends on the size of your system).
public class IocConfig
{
    public static void RegisterContext(IServiceCollection services, IHostingEnvironment hostingEnvironment)
    {
        var serviceProvider = services.BuildServiceProvider();
        var configuration = serviceProvider.GetService<IConfiguration>();
        var connectionString = configuration.GetConnectionString(Constants.ConfigConnectionStringName);

        var databaseType = DatabaseType.SQLServer;
        try
        {
            databaseType = configuration?.GetValue<DatabaseType>("DatabaseType") ?? DatabaseType.SQLServer;
        }
        catch
        {
            MyLoggerFactory.CreateLogger<IocConfig>()?.LogWarning("Missing or invalid configuration: DatabaseType");
            databaseType = DatabaseType.SQLServer;
        }

        if (hostingEnvironment != null && hostingEnvironment.IsProduction())
        {
            if (databaseType == DatabaseType.SQLiteInMemory)
            {
                throw new ConfigurationErrorsException($"Cannot use database type {databaseType} for production environment");
            }
        }

        switch (databaseType)
        {
            case DatabaseType.SQLiteInMemory:
                // Use SQLite in-memory database for testing
                services.AddDbContext<HouseKeeperContext>(options =>
                {
                    options.UseSqlite("DataSource='file::memory:?cache=shared'");
                });

                // When using SQLite in memory, if the connection is closed the database
                // is destroyed, so we must use a singleton context, open the connection
                // and manually close it when disposing the context.
                services.AddSingleton<IHouseKeeperContext>(s =>
                {
                    var context = s.GetService<HouseKeeperContext>();
                    context.Database.OpenConnection();
                    context.Database.EnsureCreated();
                    return context;
                });
                break;
            case DatabaseType.SQLServer:
            default:
                // Use SQL Server testing configuration
                if (hostingEnvironment == null || hostingEnvironment.IsTesting())
                {
                    services.AddDbContext<HouseKeeperContext>(options =>
                    {
                        options.UseSqlServer(connectionString);
                    });

                    services.AddSingleton<IHouseKeeperContext>(s =>
                    {
                        var context = s.GetService<HouseKeeperContext>();
                        context.Database.EnsureCreated();
                        return context;
                    });
                    break;
                }

                // Use SQL Server production configuration
                services.AddDbContextPool<HouseKeeperContext>(options =>
                {
                    // Production setup using SQL Server
                    options.UseSqlServer(connectionString);
                    options.UseLoggerFactory(MyLoggerFactory);
                }, poolSize: 5);

                services.AddTransient<IHouseKeeperContext>(service =>
                    services.BuildServiceProvider()
                        .GetService<HouseKeeperContext>());
                break;
        }
    }

    [...]
}
Here is a sample test, where I first use the persister to generate data that is seeded into the database, then use the API to get the data. The test can also be reversed: use a POST request to create data, then use the DbContext to read the db and make sure the creation succeeded.
[TestMethod]
public async Task GET_support_orderBy_Id()
{
    _categoryPersister.Persist(3, (c, i) =>
    {
        c.Active = i % 2 == 0;
        c.Name = $"Name_{i}";
        c.Description = $"Desc_{i}";
    });

    var response = await _client.GetAsync("/api/category?$orderby=Id");
    var categories = response.To<List<Category>>();
    Assert.That.All(categories).HaveCount(3);
    Assert.IsTrue(categories[0].Id < categories[1].Id &&
                  categories[1].Id < categories[2].Id);

    response = await _client.GetAsync("/api/category?$orderby=Id desc");
    categories = response.To<List<Category>>();
    Assert.That.All(categories).HaveCount(3);
    Assert.IsTrue(categories[0].Id > categories[1].Id &&
                  categories[1].Id > categories[2].Id);
}
Conclusions
I love the fact that I can run E2E tests in Azure DevOps for free. Performance is incredibly good, and this gives me a lot of confidence; it is ideal when you want to set up a continuous delivery environment.
Sorry this ended up being longer than expected.

There is a "Redgate SQL CI" extension for VSTS in the marketplace you may want to try. See this link for details:
Within the extension, there are four actions available:
•Build – builds your database into a NuGet package from the database
scripts folder in source control
•Test – runs your tSQLt tests against the database
•Sync – synchronizes the package to an integration database
•Publish – publishes the package to a NuGet stream

You should push the integration tests (anything that needs an instance of your application) out to run in an environment as part of your release pipeline.
In your build, just compile and run unit tests. If that completes, trigger a release, and as part of your release pipeline your first step should be to deploy your database to an Azure server.
Instead of trying to use SQL Azure, you can create a VM in Azure from an existing image that has SQL Server installed. Use remote scripting to deploy the database and execute your tests.
Even if you are not using the release tools to release, this would work for you.

Related

Entity Framework on SQL Server CE - lazy vs eager loading, performance considerations

I have an application which is built with Entity Framework. The entities have virtual navigation properties, which means lazy loading. Everything is working, however, I have noticed a performance issue.
The lazy loading on some of the properties causes multiple queries to the database. e.g. if I .Where(...) on a collection of 10 items - it will generate 10 additional calls to the database. Let's say that the SQL time on those 10 calls is a total of 20ms. But the overall time it takes to complete the query is much higher.
If I eager-load with .Include(...), I see a similar SQL time (i.e. 20ms), but the operation completes much faster.
I haven't run Profiler yet, but I suspect that the bottleneck is opening and closing the SQL Server CE database, or some other similar 'infrastructure' operation.
I really want to use the lazy loading; it makes my code much simpler. Is there any way I can optimize the SQL Server CE connection, or is there anything I could do at all to increase performance with SQL Server CE?
My connection string right now
<add name="dataRepositoryConnection"
connectionString="Data Source=|DataDirectory|XXX.sdf"
providerName="System.Data.SqlServerCe.4.0" />
I have really narrowed down the issue to SQL CE, because when run against SQL Server, the two scenarios (eager vs lazy loading) give the same performance.
Open a connection to the database in your application startup code, and leave it open for the lifetime of your app. Do not use this connection for any data access.
That will open the SQL Compact file and load the SQL Compact dll files at startup (and only at startup).
It is unclear if your application is a web app or desktop app, but you can use code similar to this and call it from Application_Start / App_Startup etc.:
public static class ContextHelper
{
    private static ChinookEntities context;
    private static object objLock = new object();

    public static void Open()
    {
        lock (objLock)
        {
            if (context != null)
                throw new InvalidOperationException("Already opened");
            context = new ChinookEntities();
            context.Connection.Open();
        }
    }
}
See the deployment section in my blog post here: http://erikej.blogspot.dk/2011/01/entity-framework-with-sql-server.html
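For a web app, the call could be wired up like this (a sketch; the rest of Application_Start is omitted):

// Global.asax.cs
protected void Application_Start()
{
    // Open the keep-alive connection once, before any data access happens.
    ContextHelper.Open();
    // ... routes, bundles and other startup code ...
}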

Bluemix connecting to external SQL Server Database

I have an application built using the ASP.NET 5 runtime - I would like to connect it to an on-premise SQL Server Database.
After some research I've already created the user-provided service with the relevant credentials; however, I am unsure what to do next (i.e. writing the necessary code to connect to it in ASP.NET).
Some further googling suggests using Secure Gateway, but is this the only way? The cloud I am working on is dedicated and does not have the Secure Gateway service. Is there a workaround for this?
(Note: The application I'm working on is based on the ASP.NET-Cloudant example on IBM Github, if that helps).
https://github.com/IBM-Bluemix/asp.net5-cloudant
The Secure Gateway service isn't required as long as the Bluemix environment can connect to the server running SQL Server. This might require your firewall rules to be a little more relaxed on the SQL Server, or you can contact IBM to create a secure tunnel as Hobert suggested in his answer.
Aside from that issue, if you're planning to use Entity Framework to connect to your SQL Server, it should work similarly to the existing tutorials on the asp.net site. The only difference will be in how you access the environment variables to create your connection string.
Assuming that you created your user-provided service with a command similar to this:
cf cups my-sql-server -p '{"server":"127.0.0.1","database":"MyDB","user":"sa","password":"my-password"}'
Your connection string in your Startup.cs file's ConfigureServices method would then look something like this:
string vcapServices = Environment.GetEnvironmentVariable("VCAP_SERVICES");
string connection = "";
if (vcapServices != null)
{
    string myServiceName = "my-sql-server";
    JArray userServices = (JArray)JObject.Parse(vcapServices)?["user-provided"];
    dynamic creds = ((dynamic)userServices
        .FirstOrDefault(m => ((dynamic)m).name == myServiceName))?.credentials;
    connection = string.Format(@"Server={0};Database={1};User Id={2};Password={3};",
        creds.server, creds.database, creds.user, creds.password);
}
Update
The cloudant boilerplate that you're modifying doesn't use Entity Framework, because Cloudant is a NoSQL database, so it's a bit different from connecting to SQL Server. The reason the boilerplate calls .Configure to register the creds class is that it needs to use that class from another location; when using Entity Framework you simply need the credentials when adding EF to the services in the Startup.cs file, so you don't need to use .Configure<creds>.
If you follow the guide here, the only part you'll need to change is the line var connection = @"Server=(localdb)\mssqllocaldb;Database=EFGetStarted.AspNet5.NewDb;Trusted_Connection=True;";, replacing it with the code above to create the connection string instead of hard-coding it like they did in the example tutorial.
Eventually, your ConfigureServices method should look something like this, assuming your DbContext class is named BloggingContext like in the example:
public void ConfigureServices(IServiceCollection services)
{
    string vcapServices = Environment.GetEnvironmentVariable("VCAP_SERVICES");
    string connection = "";
    if (vcapServices != null)
    {
        string myServiceName = "my-sql-server";
        JArray userServices = (JArray)JObject.Parse(vcapServices)?["user-provided"];
        dynamic creds = ((dynamic)userServices
            .FirstOrDefault(m => ((dynamic)m).name == myServiceName))?.credentials;
        connection = string.Format(@"Server={0};Database={1};User Id={2};Password={3};",
            creds.server, creds.database, creds.user, creds.password);
    }

    services.AddEntityFramework()
        .AddSqlServer()
        .AddDbContext<BloggingContext>(options => options.UseSqlServer(connection));

    services.AddMvc();
}
And then your Startup constructor would be simplified to:
public Startup(IHostingEnvironment env)
{
    var configBuilder = new ConfigurationBuilder()
        .AddJsonFile("config.json", optional: true);
    Configuration = configBuilder.Build();
}
Excellent!
In Public Bluemix Regions, you would create and use the Secure Gateway Service to access the On-Premise MS SQL Server DB.
In your case, as a Bluemix Dedicated client, you should engage your IBM Bluemix Administration Team so they can work with your Network Team to create a tunnel between the Dedicated Bluemix Region and your On-Premise MS SQL DB Server.
If you want to connect directly from your ASP.NET Core application to a SQL Server, you actually don't need a Secure Gateway.
For example, if you want to use SQL Azure as your database, you can simply add the given connection string to your application.
But, for practical and security reasons, you should create a User-Provided Service to store your credentials (rather than hard-coding them), and pull the credentials from your VCAP_SERVICES by simply adding Steeltoe to your ConfigurationBuilder (instead of parsing the configuration manually with JObjects and JArrays).
Step-by-step:
In your Cloud Foundry console, create a User-Provided Service using JSON:
cf cups MySqlServerCredentials -p '{"server":"tcp:example.database.windows.net,1433", "database":"MyExampleDatabase", "user":"admin", "password":"password"}'
Obs.: If you use the Windows console/PowerShell, you should escape your double quotes in the JSON, like:
'{\"server\":\"myserver\",\"database\":\"mydatabase\",\"user\":\"admin\",\"password\":\"password\"}'
After you have created your User-Provided Service, you should connect this service to your application in the Bluemix console.
Then, in your application, add a reference to the Steeltoe CloudFoundry configuration package, Steeltoe.Extensions.Configuration.CloudFoundry.
In your Startup class add:
using Steeltoe.Extensions.Configuration;
...
var builder = new ConfigurationBuilder()
    .SetBasePath(basePath)
    .AddJsonFile("appsettings.json")
    .AddCloudFoundry();
var config = builder.Build();
Finally, to access your configurations just use:
var mySqlName = config["vcap:services:user-provided:0:name"];
var database = config["vcap:services:user-provided:0:credentials:database"];
var server = config["vcap:services:user-provided:0:credentials:server"];
var password = config["vcap:services:user-provided:0:credentials:password"];
var user = config["vcap:services:user-provided:0:credentials:user"];
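From there, a sketch mirroring the earlier ConfigureServices example (BloggingContext is again a placeholder context name, and this assumes you are inside ConfigureServices with the values read above in scope):

var connection = string.Format(@"Server={0};Database={1};User Id={2};Password={3};",
    server, database, user, password);

// Register the EF Core context using the composed connection string.
services.AddDbContext<BloggingContext>(options => options.UseSqlServer(connection));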
OBS.: If you're using Azure, remember to configure your database firewall to accept the IP of your Bluemix application; but since by default Bluemix doesn't give you a static IP address, you have some options:
Buy a Bluemix Statica service for your application (expensive)
Update the firewall rules with a REST PUT carrying the current IP of the application (workaround)
Open your Azure database firewall to a broad range of IPs (just DON'T)
More info about Steeltoe CloudFoundry:
https://github.com/SteeltoeOSS/Configuration/tree/master/src/Steeltoe.Extensions.Configuration.CloudFoundry

Transaction with multiple DbContext and each one with its own connection on Entity Framework

I know it looks like a duplicate, since there are tons of questions regarding transactions on multiple contexts, but none of them covers this scenario.
The setup
SQL Server 2016 (latest preview) on develop machine or SQL Azure v12 on production
Entity Framework 6.1.3
.Net 4.5 on a regular application
Application can run on an Azure Cloud Service or a VM, doesn't really matter
What we have in code
A single TransactionScope
Two DbContexts created outside the TransactionScope (they are created by our dependency injection mechanics, which are tied to ASP.NET MVC, so I left that out of the sample for simplicity)
Each context creates and maintains its own connection, which points to a different database on a different server (it can also point to the same server when running on a development machine, but still to different databases).
We use regular SQL authentication - user ID and password (just in case someone points to some post about problems with Integrated Security and MSDTC: we don't use it locally, since it is not supported on Azure SQL).
Our code sample
When we do something like:
var contextA = new ContextA();
var contextB = new ContextB();

using (var scope = new TransactionScope())
{
    var objA = new EntityA();
    objA.Name = "object a";
    contextA.EntitiesA.Add(objA);
    contextA.SaveChanges();

    var objB = new EntityB();
    objB.Name = "object B";
    contextB.EntitiesB.Add(objB);
    contextB.SaveChanges();

    scope.Complete();
}
What we receive
A System.Data.Entity.Core.EntityException is thrown on the call to contextA.SaveChanges() with the following messages:
Root Exception: The underlying provider failed on EnlistTransaction.
Inner Exception: Connection currently has transaction enlisted. Finish current transaction and retry.
So, does anyone have a clue about what exactly is going wrong with this sample?
We are trying to have a single transaction spanning multiple contexts, each context with its own connection to the database. Obviously, since the data of each context lives on different database servers (in production), we can't use the DbContext constructor that receives a DbConnection and share the connection between both contexts, so sharing a DbConnection is not an option.
Thank you very much, I really appreciate any help.
Distributed transactions with TransactionScope are now supported by Azure SQL Database. See TransactionScope() in Sql Azure.
AFAIK, distributed transactions are not supported in SQL Azure.

ELMAH for ASP.NET MVC 4 using SQL SERVER 2008 R2

I ran the ELMAH SQL scripts in the test DB (they created the ELMAH_Error table and 3 stored procedures) and configured ELMAH in the MVC application using NuGet.
I modified web.config as specified and I'm able to log exceptions into
http://mysite/elmah.axd
But instead, I want to log the exceptions into SQL Server.
I added the class below to achieve that:
public class ElmahHandleErrorAttribute : System.Web.Mvc.HandleErrorAttribute
{
    public override void OnException(System.Web.Mvc.ExceptionContext context)
    {
        LogException(context.Exception);
    }

    private static void LogException(Exception e)
    {
        // Call to Database and insert the exception info
    }
}
The final step was to register the filter:
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new ElmahHandleErrorAttribute());
}
Is this the correct way to use ELMAH to log all exceptions, or am I missing something?
Once you have the database set up, all you need to do is add the following to the <elmah> section of your web.config to configure ELMAH to log to the SQL database:
<elmah>
  <errorLog type="Elmah.SqlErrorLog, Elmah"
            connectionStringName="<DBConnString>"
            applicationName="<YourApp>" />
</elmah>
Replace <DBConnString> and <YourApp> with appropriate values for your configuration.
Once you have done this you will not need to use your custom ElmahHandleErrorAttribute class.
I am not sure which NuGet package you installed, but I would recommend the Elmah.MVC package, as it integrates ELMAH into MVC exceptionally well by setting up all of the error handlers and error filters for you.

windows service application interacting with SQL server database

I have a Windows service application that is meant to interact with a SQL Server database (INSERT, UPDATE, etc.). The Windows service application is also multi-threaded.
I created an "App_Data" folder to keep my database and used the app.config file for connection information, etc.
After installing and starting the service, nothing happens: the database doesn't get updated, etc.
Has anyone ever written a Windows service application that interacts with a database? Kindly advise me on how to overcome this problem.
Thanks
From what you've described, you don't necessarily have a database problem. What you need is a way to debug your Windows service, particularly the OnStart.
Here's what I often put in the OnStart of a Windows service written in C#:
// "timer" is a timer field declared on the service class (presumably a
// System.Timers.Timer); Debugger comes from System.Diagnostics.
protected override void OnStart(string[] args)
{
    foreach (string arg in args)
    {
        if (arg == "DEBUG_SERVICE")
            DebugMode();
    }
#if DEBUG
    DebugMode();
#endif
    timer.Interval = 1;
    timer.Start();
}

private static void DebugMode()
{
    Debugger.Break();
}
Now when you want to debug the OnStart, you can add the "DEBUG_SERVICE" command argument from the Services control panel. Otherwise you'll have to try to attach the debugger manually, which might not happen in time.
Also note how I start a timer. This allows a separate thread to do the actual work, which is important because you want OnStart to finish in a timely fashion. A timer isn't required - some Windows services respond to an event, like a file watcher - but more often than not, polling at intervals is what people do in Windows services.
As far as I know, the App_Data folder - and therefore connection strings pointing to it - is only available in ASP.NET web apps and web sites, not in other types of Windows apps.
My recommendation: put your SQL Server database on a database server - it can be your local machine with a SQL Server Express instance - and connect to that server instance!
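As a minimal sketch (the instance name .\SQLEXPRESS, the database MyServiceDb and the Jobs table are all assumptions for illustration), the service would then use a server-based connection string instead of an |DataDirectory| file path:

using System.Data.SqlClient;

// Server-based connection string; Integrated Security assumes the service
// account has access to the database - SQL authentication works too.
var connectionString =
    @"Data Source=.\SQLEXPRESS;Initial Catalog=MyServiceDb;Integrated Security=True";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var command = new SqlCommand(
        "UPDATE Jobs SET Status = @status WHERE Id = @id", connection))
    {
        command.Parameters.AddWithValue("@status", "Done");
        command.Parameters.AddWithValue("@id", 42);
        command.ExecuteNonQuery();
    }
}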
