Using Always Encrypted with SQL Server from .NET Core 2.1 - Dapper

I am aware that Microsoft has not specified any plan to support Always Encrypted from within Core, yet.
However, this is supported by classical .NET Framework (4.5 onward).
Our Core 2.1 project's usage of Always Encrypted is limited to two simple queries, and we are willing to take alternative routes to use it other than downgrading from Core 2.1.
There are many out-of-process ways to solve this, but they involve remoting (WCF or REST to another classical .NET process) or going through a Service Fabric Actor (which we were using before).
Is there any clean in-process way to do it? Connecting through ODBC for these two queries, for example, or something like that?
N.B. We are running on Windows.

According to Microsoft Docs, the latest ODBC drivers support Always Encrypted. The following code works:
using System;
using System.Data.Odbc;

namespace ConsoleApp2
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var connection = new OdbcConnection(
                "Driver={ODBC Driver 17 for SQL Server};DSN=SQL2016;Server={SQL2016};Trusted_Connection=yes;ColumnEncryption=Enabled;Database=AndreyTest;"))
            {
                using (var cmd = new OdbcCommand("select SSN from TestTable", connection))
                {
                    connection.Open();
                    Console.WriteLine("Connected");
                    Console.WriteLine("SSN: " + Convert.ToString(cmd.ExecuteScalar()));
                    Console.ReadLine();
                    connection.Close();
                }
            }
        }
    }
}
If I remove "ColumnEncryption=Enabled" from the connection string, the output becomes "SSN: System.Byte[]", i.e. the SSN isn't decrypted.
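One caveat worth noting: to insert into or filter on an encrypted column, the statement generally has to be parameterized so the driver can encrypt the value client-side; a literal inside the SQL text will fail. A minimal, untested sketch, reusing the connection from the sample above and assuming SSN is char(11) as in the usual Always Encrypted walkthroughs:
using (var insert = new OdbcCommand("insert into TestTable (SSN) values (?)", connection))
{
    // The ODBC driver transparently encrypts the parameter value
    // before sending it, because ColumnEncryption=Enabled is set.
    insert.Parameters.Add("@SSN", OdbcType.Char, 11).Value = "795-73-9838";
    insert.ExecuteNonQuery();
}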
I hope this will help!

Related

SQL Server 2016 Always Encrypted Timeout at Published IIS

I have a strange problem when I try to publish my ASP.NET MVC application to my local (PC) IIS with Always Encrypted enabled.
My application keeps timing out when I try to access the database using EF6 on local IIS (not Express).
But if I access & debug my ASP.NET MVC app using Visual Studio 2017, the database with Always Encrypted enabled can be accessed perfectly, without timeouts.
I can also access it with SQL Server Management Studio without problems.
Both (SSMS & the ASP.NET web config) use this configuration:
Column Encryption Setting=enabled;
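For reference, this is roughly what that looks like in web.config (a hedged sketch; the server name is a placeholder, the entity name is taken from the code below, and an EF6 model additionally wraps this in an EntityClient metadata string):
<connectionStrings>
  <add name="TestDevEntities"
       connectionString="Data Source=.;Initial Catalog=TestDev;Integrated Security=True;Column Encryption Setting=enabled"
       providerName="System.Data.SqlClient" />
</connectionStrings>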
Note: I'm using ASP.NET MVC 5 & EF 6, SQL Server 2016 Developer Edition.
Sorry for my bad English.
UPDATED:
I have tried using the .NET Framework Data Provider to see if there's any clue that'll help me solve this issue, using the following code:
var context = new TestDevEntities();
StringBuilder sb = new StringBuilder();
string connectionString = context.Database.Connection.ConnectionString;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlCommand cmd = new SqlCommand(
        @"SELECT [id],[name],[CCno] FROM [TestDev].[dbo].[testEncCol]",
        connection, null, SqlCommandColumnEncryptionSetting.ResultSetOnly))
    {
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    sb.Append(reader[2] + ";");
                }
            }
        }
    }
}
The above code showed me a clear error about the certificate (screenshot omitted).
Now, with this kind of error, I know exactly what I must do :)
Change the identity of the application pool to the user who previously generated the certificate.
Export the current-user certificate (used by Always Encrypted) and import it for the user that you want to use as the application pool identity.
Now it works!
EF should throw errors as clear as the .NET Data Providers do, instead of a timeout failure that really confused me #_#
UPDATED (1):
Now the question is how to use the certificate with the default ApplicationPoolIdentity instead of a custom account.
UPDATED (2):
I have done what Jakub suggested, but still no luck.
Thanks
One way (it could be the only way) to use the DefaultAppPool identity instead of a custom (user) account is to store the certificate in the Local Machine certificate store (not Current User).
Once you create a certificate in the Local Machine certificate store, you need to grant DefaultAppPool access to the cert. You can do that using the Microsoft Management Console (with the snap-in for Local Computer certificates):
Right-click the cert and select All Tasks > Manage Private Keys.
Click Add.
Set the location to your computer (not your domain).
Enter IIS AppPool\DefaultAppPool as the object name.
Click OK twice.
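If you want to verify the result from code rather than through the GUI, here is a minimal, hypothetical C# sketch (the subject-name filter is a placeholder) that can be run under the app pool identity; retrieving the key should fail if the private key is still inaccessible:
using System;
using System.Security.Cryptography.X509Certificates;

class CertAccessCheck
{
    static void Main()
    {
        // Look for the Always Encrypted certificate in the Local Machine store.
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        foreach (var cert in store.Certificates.Find(
            X509FindType.FindBySubjectName, "Always Encrypted", validOnly: false))
        {
            // Accessing the private key throws a CryptographicException
            // when the current identity has no permission on the key.
            using (var key = cert.GetRSAPrivateKey())
            {
                Console.WriteLine(cert.Subject + ": private key accessible = " + (key != null));
            }
        }
        store.Close();
    }
}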

Generate reports that query data from MS SQL database on Linux/CentOS?

I'm trying FastReport.Mono on Linux (CentOS 7) to see if I can make an app to generate reports. The example Mono app seems to work fine with PDF and JPG export from a custom dataset.
Now I need to query data from an MS SQL database. I have a sample report that works well under Windows but fails on Linux:
private static void ReportExportJPG()
{
    Report report = new Report();
    report.Load(@"sql-report.frx");
    report.Prepare(); // <<<--- Error here
    ...
}
The error message is:
Cant find object MsSqlDataConnection
The feature table says that MS SQL connectivity (as well as ODBC and many others) is not available in FastReport.Mono. Does this mean it's entirely missing, or should I use other means and provide a ready-made connection to FastReport somehow? If so, how?
P.S. Running the Windows report generator with an MS SQL connection under Wine works well, so I assume connecting to MS SQL from CentOS is somehow viable.
I was able to resolve this by adding the following to my project source:
using FastReport.Data;
using FastReport.Utils;
...
RegisteredObjects.AddConnection(typeof(MsSqlDataConnection));
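For context, a short sketch of how this fits together with the loading code from the question; the registration just needs to run once, before any report that uses an MS SQL connection is loaded:
using FastReport;
using FastReport.Data;
using FastReport.Utils;

// Register the MS SQL connection type once at application startup...
RegisteredObjects.AddConnection(typeof(MsSqlDataConnection));

// ...then loading and preparing the report no longer fails with
// "Cant find object MsSqlDataConnection".
Report report = new Report();
report.Load(@"sql-report.frx");
report.Prepare();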

Database integration tests in Visual Studio Online

I'm enjoying the new Build tool in Visual Studio Online. It allows me to do almost everything that I do on my local build server. But one thing I'm missing is database integration tests: for every build run I re-create the test database from scripts and run DB tests against it.
In Visual Studio Online I can't seem to find any database instance available for my needs.
I have tried creating an Azure SQL database (via PowerShell) for every build run and then deleting it after the build is complete. But it takes forever (compared to the rest of the build process) to create a database. And even when the PowerShell scripts are done, the database is not yet ready to accept requests; I need to keep checking whether it is actually ready. So this scenario becomes too complex and unreliable.
Are there other options to do database (SQL Server) integration tests in Visual Studio Online?
Update: I think I was not very clear about what I need: a free (or very cheap) SQL Server instance to connect to that runs on the build agent in VSO, something like SQL Express, SQL CE or LocalDB, where I can connect and re-create the database to run C# tests against. Re-creating the database or running the tests is not a problem; having a valid connection string is.
Update Oct 2016: I've blogged about how I do integration testing in VSTS
The TFS build servers come with MSSQL Server 2012 and MSSQL Server 2014 LocalDBs preinstalled.
Source: TFS Service - Software on the hosted build server
So, just put the following one-liner into your solution's post-build event to create a MYTESTDB LocalDB instance for your needs. This will allow you to connect to (LocalDB)\MYTESTDB and run the database integration tests just fine.
"C:\Program Files\Microsoft SQL Server\120\Tools\Binn\SqlLocalDB.exe" create "MYTESTDB" 12.0 -s
Source: SqlLocalDB Utility
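From there, your test setup can (re)create a scratch database on that instance before the tests run; a minimal sketch (the database name and connection string are illustrative):
using System.Data.SqlClient;

static class TestDatabase
{
    public static void Recreate()
    {
        // Connect to the MYTESTDB LocalDB instance created in the
        // post-build event and rebuild the test database from scratch.
        const string master =
            @"Data Source=(LocalDB)\MYTESTDB;Initial Catalog=master;Integrated Security=True";
        using (var connection = new SqlConnection(master))
        {
            connection.Open();
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText =
                    "IF DB_ID('TestDb') IS NOT NULL DROP DATABASE TestDb; " +
                    "CREATE DATABASE TestDb;";
                cmd.ExecuteNonQuery();
            }
        }
    }
}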
In Azure DevOps, with .NET Core and EF Core, I use a different technique.
I use a SQLite in-memory database to execute both integration and end-to-end tests.
Currently in .NET Core you can use both the InMemory provider and SQLite with the in-memory option to run any integration test in the default Azure DevOps CI agent.
InMemory: https://learn.microsoft.com/en-us/ef/core/miscellaneous/testing/in-memory
Note that the InMemory database is not a relational database, it is a multi-purpose one, and just to mention one limitation:
InMemory will allow you to save data that would violate referential integrity constraints in a relational database.
SQLite in memory mode https://learn.microsoft.com/en-us/ef/core/miscellaneous/testing/sqlite
This approach offers a more realistic platform to test against.
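As a minimal sketch of the SQLite option (the context name MyContext is illustrative; this follows the pattern from the EF Core docs linked above):
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// The in-memory database only exists while this connection stays open.
var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();

var options = new DbContextOptionsBuilder<MyContext>()
    .UseSqlite(connection)
    .Options;

using (var context = new MyContext(options))
{
    context.Database.EnsureCreated();
    // seed data and run the code under test against this context
}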
Now, I went a bit further: I didn't want to just be able to run integration tests with database dependencies in Azure DevOps, I also wanted to be able to host my WebAPIs in the CI agent and to share the database between the API DbContext and my Persister object (the Persister is a helper class that allows me to automatically generate any kind of entity and save it to the database).
A quick note on integration tests and end-to-end tests:
Integration Tests
An example of an integration test involving a database could be a test of the Data Access Layer. In this case you would normally create a DbContext when starting a test, fill the target database with some data, use the component under test to manipulate the data, and then use the DbContext again to make sure the assertions are satisfied.
This scenario is quite straightforward; in the same code you can share the same DbContext to generate the data and inject it into the component.
End to End Tests
Imagine you have, as in my case, a RESTful .NET Core WebAPI you want to test, making sure all your CRUD operations are working as expected, and you want to check that filtering, pagination and so on are also correct.
In this case, it is much more complex to share the same DbContext between the test (data setup and/or verification) and the WebAPI stack.
Before .NET EF Core and WebHostBuilder
Until then, the only way I knew of was to have a dedicated server, VM or Docker image responsible for serving the API, which had to be accessible from the web or from Azure DevOps.
I had to set up my integration tests to either re-create the DB, or be clever/limited enough to completely ignore the existing data, and make sure each test was resilient to data corruption and fully reliable (no false negative or positive results).
Then I had to configure my build definition to run the tests.
Leveraging SQLite in memory with cache=shared and WebHostBuilder
Below I first describe the two major technologies I use, then I add some code to show how to do it.
SQLite file::memory:?cache=shared
SQLite allows you to work in memory instead of using a traditional file, which already gives us a huge performance boost by removing the I/O bottleneck; on top of this, using the option cache=shared, we can use multiple connections within the same process to access the same data. If you need more than one database, you can specify a name.
More info: https://www.sqlite.org/inmemorydb.html
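A tiny sketch of what the shared cache buys you (mirroring the connection string used in the code further down): two connections in the same process see the same in-memory database, as long as at least one connection stays open.
using Microsoft.Data.Sqlite;

// The first connection keeps the shared in-memory database alive.
var keepAlive = new SqliteConnection("Data Source=file::memory:?cache=shared");
keepAlive.Open();

using (var other = new SqliteConnection("Data Source=file::memory:?cache=shared"))
{
    other.Open();
    // Both connections now read and write the same database;
    // once the last open connection closes, the database is gone.
}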
WebHostBuilder
.NET Core offers host builders; WebHostBuilder allows us to create a server that starts up and hosts our WebAPI, so that it can be reached as if it were hosted on a real server.
When you use the WebHostBuilder in a test class, the two live within the same process.
More info: https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.hosting.webhostbuilder?view=aspnetcore-2.2
The Solution
When initializing an E2E test, create a new client to connect to the API, and create a DbContext that you will use to seed the database and maybe to assert.
Test initialization:
[TestClass]
public class CategoryControllerTests
{
    private TestServerApiClient _client;
    private Persister<Category> _categoryPersister;
    private Builder<Category> _categoryBuilder;
    private IHouseKeeperContext _context;
    protected IDbContextTransaction Transaction;

    [TestInitialize]
    public void TestInitialize()
    {
        _context = ContextProvider.GetContext();
        _client = new TestServerApiClient();
        ContextProvider.ResetDatabase();
        _categoryPersister = new Persister<Category>(_context);
        _categoryBuilder = new Builder<Category>();
    }

    [TestCleanup]
    public void Cleanup()
    {
        _client?.Dispose();
        _context?.Dispose();
        _categoryPersister?.Dispose();
        ContextProvider.Dispose();
    }

    [...]
}
TestServerApiClient class:
public class TestServerApiClient : System.IDisposable
{
    private readonly HttpClient _client;
    private readonly TestServer _server;

    public TestServerApiClient()
    {
        var webHostBuilder = new WebHostBuilder();
        webHostBuilder.UseEnvironment("Test");
        webHostBuilder.UseStartup<Startup>();
        _server = new TestServer(webHostBuilder);
        _client = _server.CreateClient();
    }

    public void Dispose()
    {
        _server?.Dispose();
        _client?.Dispose();
    }
}
The ContextProvider class is used to generate the DbContext, which can be used to seed data or perform database queries for assertions.
public static class ContextProvider
{
    private static bool _requiresDbDeletion;

    private static IConfiguration _applicationConfiguration;
    public static IConfiguration ApplicationConfiguration
    {
        get
        {
            if (_applicationConfiguration != null) return _applicationConfiguration;

            _applicationConfiguration = new ConfigurationBuilder()
                .AddJsonFile("Config/appsettings.json", optional: false, reloadOnChange: true)
                .AddEnvironmentVariables()
                .Build();

            return _applicationConfiguration;
        }
    }

    private static ServiceProvider _serviceProvider;
    public static ServiceProvider ServiceProvider
    {
        get
        {
            if (_serviceProvider != null) return _serviceProvider;

            var serviceCollection = new ServiceCollection();
            serviceCollection.AddSingleton<IConfiguration>(ApplicationConfiguration);

            var databaseType = ApplicationConfiguration?.GetValue<DatabaseType>("DatabaseType") ?? DatabaseType.SQLServer;
            _requiresDbDeletion = databaseType == DatabaseType.SQLServer;

            IocConfig.RegisterContext(serviceCollection, null);

            _serviceProvider = serviceCollection.BuildServiceProvider();
            return _serviceProvider;
        }
        set
        {
            _serviceProvider = value;
        }
    }

    /// <summary>
    /// Generate the db context
    /// </summary>
    /// <returns>DB Context</returns>
    public static IHouseKeeperContext GetContext()
    {
        return ServiceProvider.GetService<IHouseKeeperContext>();
    }

    public static void Dispose()
    {
        ServiceProvider?.Dispose();
        ServiceProvider = null;
    }

    public static void ResetDatabase()
    {
        if (_requiresDbDeletion)
        {
            GetContext()?.Database?.EnsureDeleted();
            GetContext()?.Database?.EnsureCreated();
        }
    }
}
IocConfig is a helper class I use in my framework to set up dependency injection. The method used above, RegisterContext, is responsible for registering the DbContext and setting it up as desired; because this is the same class used by the WebAPI, it uses the DatabaseType configuration value to determine what to do.
Most of the "complexity" can probably be found inside this class.
When using SQLite in memory, you have to remember that:
The connection is not opened and closed automatically as it is when using SQL Server (that's why I used context.Database.OpenConnection();).
If no connection is active, the database is deleted (that's why I used services.AddSingleton<IHouseKeeperContext>(s ...). It is important that one connection is left open so that the database is not destroyed, but on the other hand you have to be careful to close all connections when a test ends, so that the database is eventually destroyed and the next test correctly creates a new, empty one.
The rest of the class handles the SQL Server configuration for both the Production and Testing setups. I can at any time set the tests up to use a real instance of SQL Server; all tests will remain fully independent of each other, but it will definitely be slow, and may be suitable only for a nightly build (if needed; it depends on the size of your system).
public class IocConfig
{
    public static void RegisterContext(IServiceCollection services, IHostingEnvironment hostingEnvironment)
    {
        var serviceProvider = services.BuildServiceProvider();
        var configuration = serviceProvider.GetService<IConfiguration>();
        var connectionString = configuration.GetConnectionString(Constants.ConfigConnectionStringName);

        var databaseType = DatabaseType.SQLServer;
        try
        {
            databaseType = configuration?.GetValue<DatabaseType>("DatabaseType") ?? DatabaseType.SQLServer;
        }
        catch
        {
            MyLoggerFactory.CreateLogger<IocConfig>()?.LogWarning("Missing or invalid configuration: DatabaseType");
            databaseType = DatabaseType.SQLServer;
        }

        if (hostingEnvironment != null && hostingEnvironment.IsProduction())
        {
            if (databaseType == DatabaseType.SQLiteInMemory)
            {
                throw new ConfigurationErrorsException($"Cannot use database type {databaseType} for production environment");
            }
        }

        switch (databaseType)
        {
            case DatabaseType.SQLiteInMemory:
                // Use SQLite in-memory database for testing
                services.AddDbContext<HouseKeeperContext>(options =>
                {
                    options.UseSqlite($"DataSource='file::memory:?cache=shared'");
                });

                // Use a singleton context when using SQLite in memory: if the connection is
                // closed the database is destroyed, so open the connection here and manually
                // close it when disposing the context
                services.AddSingleton<IHouseKeeperContext>(s => {
                    var context = s.GetService<HouseKeeperContext>();
                    context.Database.OpenConnection();
                    context.Database.EnsureCreated();
                    return context;
                });
                break;

            case DatabaseType.SQLServer:
            default:
                // Use SQL Server testing configuration
                if (hostingEnvironment == null || hostingEnvironment.IsTesting())
                {
                    services.AddDbContext<HouseKeeperContext>(options =>
                    {
                        options.UseSqlServer(connectionString);
                    });

                    services.AddSingleton<IHouseKeeperContext>(s => {
                        var context = s.GetService<HouseKeeperContext>();
                        context.Database.EnsureCreated();
                        return context;
                    });
                    break;
                }

                // Use SQL Server production configuration
                services.AddDbContextPool<HouseKeeperContext>(options =>
                {
                    // Production setup using SQL Server
                    options.UseSqlServer(connectionString);
                    options.UseLoggerFactory(MyLoggerFactory);
                }, poolSize: 5);

                services.AddTransient<IHouseKeeperContext>(service =>
                    services.BuildServiceProvider()
                        .GetService<HouseKeeperContext>());
                break;
        }
    }

    [...]
}
Here is a sample test. First I use the persister to generate data that is seeded into the database, then I use the API to get data. The test can also be reversed: use a POST request to create data and then use the DbContext to read the db and make sure the creation was successful.
[TestMethod]
public async Task GET_support_orderBy_Id()
{
    _categoryPersister.Persist(3, (c, i) =>
    {
        c.Active = i % 2 == 0;
        c.Name = $"Name_{i}";
        c.Description = $"Desc_{i}";
    });

    var response = await _client.GetAsync("/api/category?$orderby=Id");
    var categories = response.To<List<Category>>();
    Assert.That.All(categories).HaveCount(3);
    Assert.IsTrue(categories[0].Id < categories[1].Id &&
                  categories[1].Id < categories[2].Id);

    response = await _client.GetAsync("/api/category?$orderby=Id desc");
    categories = response.To<List<Category>>();
    Assert.That.All(categories).HaveCount(3);
    Assert.IsTrue(categories[0].Id > categories[1].Id &&
                  categories[1].Id > categories[2].Id);
}
Conclusions
I love the fact that I can run E2E tests in Azure DevOps for free; performance is incredibly good and this gives me a lot of confidence, which is ideal when you want to set up a continuous delivery environment.
Here is a screenshot of part of the build execution of this code in Azure DevOps (free version). [screenshot omitted]
Sorry this ended up being longer than expected.
There is a "Redgate SQL CI" extension for VSTS in the marketplace that you may want to try.
Within the extension, there are four actions available:
• Build – builds your database into a NuGet package from the database scripts folder in source control
• Test – runs your tSQLt tests against the database
• Sync – synchronizes the package to an integration database
• Publish – publishes the package to a NuGet feed
You should push the integration tests (anything that needs an instance of your application) out to be run in an environment as part of your release pipeline.
In your build, just compile and run unit tests. If that completes, trigger a release, and as part of your release pipeline the first step should be to deploy your database to an Azure server.
Instead of trying to use SQL Azure, you can create a VM in Azure from an existing image that has SQL Server installed. Use remote scripting to deploy the database and execute your tests.
Even if you are not using the release tools to release, this would work for you.

Why does GetDate() show the current date - 2 days on MS SQL Server? [duplicate]

I see strange effects when retrieving columns of type DATE from SQL Server 2008 using the Microsoft JDBC driver version 3.0 running under the official Oracle JDK 1.7.0. The host OS is Windows Server 2003.
All DATE columns are retrieved as two days in the past relative to the value actually stored in the column.
I cooked up a minimal code example to test this out (test table and data):
CREATE TABLE Java7DateTest (
    dateColumn DATE
);

INSERT INTO Java7DateTest VALUES('2011-10-10');
Code:
public class Java7SQLDateTest {
    public static void main(final String[] argv) {
        try {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            Connection connection = DriverManager.getConnection(
                    "jdbc:sqlserver://192.168.0.1:1433;databaseName=dbNameHere",
                    "user", "password");
            PreparedStatement statement = connection.prepareStatement("SELECT * FROM Java7DateTest");
            ResultSet resultSet = statement.executeQuery();
            while (resultSet.next()) {
                final java.sql.Date date = resultSet.getDate("dateColumn");
                final String str = resultSet.getString("dateColumn");
                System.out.println(date + " (raw: " + str + ")");
            }
            resultSet.close();
            statement.close();
            connection.close();
        } catch (final Throwable t) {
            throw new RuntimeException(t.getMessage(), t);
        }
    }
}
Running this code on the above configuration prints: "2011-10-08 (raw: 2011-10-08)".
Under JRE 1.6.0_27 it prints: "2011-10-10 (raw: 2011-10-10)".
I could not find anything relating to my problem with Google, so I'm assuming that it's either something stupid I overlooked or nobody is using Java 7 yet.
Can anybody confirm this problem? What are my alternatives if I still want to use Java 7?
Edit: The problem occurs even when running with -Xint, so it's not caused by Hotspot bugs.
Edit 2: Old drivers (Microsoft 1.28) work properly with JDK 1.7.0 (we were using that driver until maybe two years ago, I think).
jTDS also works perfectly fine with the example. I am considering switching to jTDS, but I am reluctant to do so because I have not the faintest idea what the effects on our production environment may be. Ideally it should just work, but that's what I believed when I switched my dev box to Java 7, too.
There is one pretty fat database in the production environment that is too big to create a copy of for testing (or rather, our server has too little disk space left). So setting up a test environment for that one app is not straightforward; I would have to stitch up a shrunken database for that.
Edit 3: jTDS has its own set of catches attached. I found a behavioral difference that breaks one of our applications: ResultSet.getObject() returns different object types for SmallInt columns depending on the driver (Short vs. Integer). Also, jTDS does not implement the JDBC4 Connection interface; Connection.isValid() is not supported.
Edit 4: I noticed last week that MSSQL-JDBC 3.0 refuses to connect to any DB after I updated to JDK 1.6.0_29. jTDS it is, then... we switched the production server yesterday (I fixed two places where the application was relying on peculiarities of the driver), and so far we have had no problems.
Thank you for your feedback. The Microsoft JDBC Driver for SQL Server does not yet support JRE 1.7.
We are aware of the getDate issue between our JDBC driver & JRE 1.7 and we are looking into publishing a hotfix to enable customers to move forward with non-production testing of our driver with JRE 1.7.
We will publish a link to the hotfix on our blog once available.
http://blogs.msdn.com/b/jdbcteam/
The hotfix is now available. http://blogs.msdn.com/b/jdbcteam/archive/2012/01/20/hotfix-available-for-date-issue-when-using-jre-1-7.aspx
Our blog also contains information on the known issues with JRE 1.6u29 & 1.6u30.
Shamitha Reddy
Program Manager - Microsoft JDBC Driver for SQL Server
I don't quite have an answer for you, but I've recreated your situation as you described. It is the same with the JDBC driver v3.101, v3.202 and v4.ctp3 when run under JDK 1.7. However, the v2 driver from MS gives your expected answer under both JDK 1.6 and JDK 1.7. If you need a quick fix and can move to an older JDBC driver, that may work for you.
Other thoughts are on how the MS JDBC driver handles dates and the conversion of Date objects between SQL Server and the JVM. Since the date is stored without a time zone, the driver's interpretation of the Date object is based on the default time zone of the machine running the JDBC driver. For instance, if you store a smalldatetime of '2011-10-11 12:00' and retrieve it from a machine whose default time zone is GMT-7, then the resulting UTC time of the Date object would be '2011-10-11 19:00'. It could be that some change in JDK 1.7 impacts this conversion process in the driver, resulting in a wild offset. You might experiment with the ResultSet.getDate(column, Calendar) method to see if a Calendar with a specific time zone gets you the result you want, or helps make sense of why you are seeing the strange offset in the conversion.
I don't have a SQL Server setup, but I can't reproduce your problem with PostgreSQL 9.0 and MySQL 5.1 on Windows 7 x64 with JDK 1.7.0, so JDK 1.7.0 can be excluded as a suspect. I have the impression that the SQL Server JDBC driver is to blame here. I'd suggest using the jTDS JDBC driver instead. It has always been praised for its better performance and stability compared to the MS-provided SQL Server JDBC driver.
Information and a download link for the hotfix from Microsoft Support can be found here:
http://support.microsoft.com/kb/2652061
I was experiencing the same issue, where the date would be off by two days, and this hotfix fixed it.
This is also an issue in OpenJDK 1.6.0_20. However, the MSSQL driver works fine with Sun's JRE 1.6.0_16.

Node.js and Microsoft SQL Server

Is there any way I can get my Node.js app to communicate with Microsoft SQL Server?
I haven't seen any MS SQL drivers out there in the wild.
I'm putting a very simple app together and need to be able to communicate with an existing MS SQL database (otherwise I would have gone with MongoDB or Redis).
The original question is old; these days, using node-mssql as answered by @Patrik Šimek, which wraps Tedious as answered by @Tracker1, is the best way to go.
The Windows/Azure node-sqlserver driver mentioned in the accepted answer requires you to install a crazy list of prerequisites: Visual C++ 2010, SQL Server Native Client 11.0, Python 2.7.x and probably also the Windows 7 SDK for 64-bit on your server. You don't want to install all these GBs of software on your Windows Server, if you ask me.
You really want to use Tedious. But also use node-mssql to wrap it and make coding a lot easier.
Update August 2014
Both modules are still actively maintained. Issues are responded on quite quickly and efficiently.
Both modules support SQL Server 2000 - 2014
Streaming supported since node-mssql 1.0.1
Update February 2015 - 2.x (stable, npm)
Updated to latest Tedious 1.10
Promises
Pipe request to object stream
Detailed SQL errors
Transaction abort handling
Integrated type checks
CLI
Minor fixes
This is plain Tedious:
var Connection = require('tedious').Connection;
var Request = require('tedious').Request;

var config = {
    server: '192.168.1.212',
    userName: 'test',
    password: 'test'
};

var connection = new Connection(config);

connection.on('connect', function(err) {
    executeStatement();
});

function executeStatement() {
    request = new Request("select 42, 'hello world'", function(err, rowCount) {
        if (err) {
            console.log(err);
        } else {
            console.log(rowCount + ' rows');
        }
        connection.close();
    });

    request.on('row', function(columns) {
        columns.forEach(function(column) {
            if (column.value === null) {
                console.log('NULL');
            } else {
                console.log(column.value);
            }
        });
    });

    request.on('done', function(rowCount, more) {
        console.log(rowCount + ' rows returned');
    });

    // In SQL Server 2000 you may need: connection.execSqlBatch(request);
    connection.execSql(request);
}
Here comes node-mssql which has Tedious as a dependency. Use this!
var sql = require('mssql');

var config = {
    server: '192.168.1.212',
    user: 'test',
    password: 'test'
};

sql.connect(config, function(err) {
    var request = new sql.Request();
    request.query("select 42, 'hello world'", function(err, recordset) {
        console.log(recordset);
    });
});
A couple of new Node.js SQL Server clients were released recently. I wrote one called node-tds and there is another called tedious.
We just released preview drivers for Node.JS for SQL Server connectivity. You can find them here:
http://blogs.msdn.com/b/sqlphp/archive/2012/06/08/introducing-the-microsoft-driver-for-node-js-for-sql-server.aspx
(duplicating my answer from another question).
I would recommend node-mssql, which is a nice wrapper for other connectors, the default being my previous choice (Tedious), bringing a somewhat nicer interface. This is a JavaScript implementation with no compilation requirements, meaning you can work in Windows and non-Windows environments alike.
Another option, if you don't mind bringing in .NET or Mono with a binary bridge, would be to use edge.js, which can be very nice if you want to leverage .NET libraries in Node.js.
node-tds is abandoned, node-odbc doesn't work with Windows, and the MS node-sqlserver driver doesn't seem to work on non-Windows platforms (and has some goofy requirements).
There is another module you can use: node-mssql. It uses other TDS modules as drivers and offers an easy-to-use unified interface. It also adds extra features and bug fixes.
Extra features:
Unified interface for multiple MSSQL drivers
Connection pooling with Transactions and Prepared statements
Parametrized Stored Procedures for all drivers
Serialization of Geography and Geometry CLR types
Smart JS data type to SQL data type mapper
Support both Promises and standard callbacks
You could maybe use node-tds.js:
An exciting implementation of the TDS protocol for node.js to allow communication with sql server...
USAGE:
var mssql = require('./mssql');
var sqlserver = new mssql.mssql();
sqlserver.connect({'Server':__IP__,'Port':'1433','Database':'','User Id':'','Password':''});
var result = sqlserver.execute("SELECT * FROM wherever;");
TSQLFTW - T-SQL For The WIN(dows) - by Fosco Marotto
https://github.com/gfosco/tsqlftw
It is a C# and ADO.NET managed-code solution, with a C++ wrapper that Node.js can import and work with.
If you know .NET you could try WCF Data Services (ADO.NET Data Services): write a WCF app for data access and use OData (REST on steroids) to interact with the database.
WCF Data Services: http://msdn.microsoft.com/en-us/data/bb931106
OData: http://www.odata.org/
If you are into SOA and use SQL Server 2005, you could check out Native XML Web Services for Microsoft SQL Server 2005:
http://msdn.microsoft.com/en-us/library/ms345123(v=sql.90).aspx
You can access SQL Server as a web service (HTTP, SOAP).
Microsoft (the Windows Azure team) just released a Node driver for SQL Server.
It has no npm package yet, as far as I know, but it is open source and accepting community contributions too.
https://github.com/WindowsAzure/node-sqlserver
Introduction blog post here:
http://blogs.msdn.com/b/sqlphp/archive/2012/06/08/introducing-the-microsoft-driver-for-node-js-for-sql-server.aspx
I'd suggest taking a look at Prisma. We just (October 2020) announced preview support for SQL Server.
Prisma is an ORM that puts the emphasis on type-safety and developer experience. Unlike traditional ORMs that typically map tables to classes, Prisma maps queries to types (in TypeScript) and returns plain objects from queries.
To get started with Prisma and SQL Server, check out this example and the start-from-scratch guide in the docs.
If you are running on .NET, look at entityspaces.js; we are creating an entire universal ORM for Node.js that will not require a WCF JSON service: https://github.com/EntitySpaces/entityspaces.js
If you are using MSFT backend technology you can use it now; however, we are creating a universal Node.js ORM and will have more information on that soon.
There is an update from Microsoft. Here is a series of blog posts (part 1 and part 2).
Node.js SQL Server drivers seem very immature: there's a mish-mash of different projects with varying dependencies, performance, and levels of completeness, none of which inspires confidence.
I'd propose using edge-sql. This leverages .NET's mature database driver ecosystem, and depends only on .NET (a no-brainer if you are running Node on Windows; if not, there is Mono, but I have not tried that).
Here is a node example (server.js) using edge-sql (note you need to put your connection string into an environment variable as per edge-sql docs):
var edge = require('edge');

// edge-sql has built-in support for T-SQL / MSSQL Server
var getData = edge.func('sql', function () {/*
    select top 10 * from sometable
*/});

getData(null, function (error, result) {
    if (error) throw error;
    console.log(result);
});
You can also leverage Edge.js with .NET to access other databases, such as Oracle. I have given an example of that approach here.
The status as of May 2016 is as follows.
The official Microsoft SQL Driver for Node, called node-sqlserver, has not been updated for a number of years.
There is a new fork of it called node-sqlserver-v8 that works with Node versions 0.12.x and >= 4.1.x. This fork also has pre-compiled binaries for x64 and x86 targets.
The package is available on NPM as msnodesqlv8.
I recommend this package because it is lightweight (it has no dependencies) and it is the only one that works with all recent versions of SQL Server, including SQL LocalDB.
Now (2016) you can use the Sequelize ORM, which supports:
MySQL / MariaDB,
PostgreSQL
SQLite
Microsoft SQL Server
It is widely used, according to its GitHub stars.
That link details only a SQL 2000 solution, not SQL 2005 or SQL 2008; the code also only allows sending SQL text and does not allow executing stored procedures.
The real solution would be to install Node.js on a Linux server, or on a virtual Linux server on a Windows machine, then go to the Microsoft web site, download the JDBC Java drivers, and install those Microsoft MS SQL Java JDBC drivers on the Linux server or Linux virtual server.
