I'm fairly new to using NUnit as a test framework and have come across something I don't really understand.
I am writing an integration test which inserts a row into a database table. As I want to run this test repeatedly, I want to delete the row once the test has completed. The test runs fine, but when I look in the database table the row in question is still there, even though the delete command ran. I can even see it run when profiling the database while the test executes.
Does NUnit somehow roll back database transactions? If anyone has any idea why this is happening, please let me know. I have been unable to find any information about NUnit rolling back transactions, which makes me think I'm doing something wrong. But the test runs, the row appears in the database, and it is just not being deleted afterwards, even though the profiler shows the delete command being run.
Here is the test code I am running:
[Test]
[Category("Consultant split integration test")]
public void Should_Save_Splits()
{
    var splitList = new List<Split>();
    splitList.Add(new Split()
    {
        UnitGroupId = 69,
        ConsultantUserId = 1,
        CreatedByUserId = 1,
        CreatedOn = DateTime.Now,
        UpdatedByUserId = 1,
        UpdatedOn = DateTime.Now,
        Name = "Consultant1",
        Unit = "Unit1",
        Percentage = 100,
        PlacementId = 47
    });
    var connection = Helpers.GetCpeDevDatabaseConnection();
    var repository = new Placements.ConsultantSplit.DAL.PronetRepository();
    var notebookManager = Helpers.GetNotebookManager(connection);
    var userManager = Helpers.GetUserManager(connection);
    var placementManager = Helpers.GetPlacementManager(connection);
    var sut = new Placements.ConsultantSplit.ConsultantSplitManager(repository, connection, notebookManager, userManager, placementManager);
    IEnumerable<string> errors;
    sut.SaveSplits(splitList, out errors);
    try
    {
        using (connection.Connection)
        {
            using (var cmd = new SqlCommand("Delete from ConsultantSplit where placementid=47",
                connection.Connection))
            {
                connection.Open();
                cmd.CommandType = CommandType.Text;
                connection.UseTransaction = false;
                cmd.Transaction = connection.Transaction;
                cmd.ExecuteNonQuery();
                connection.Close();
            }
        }
    }
    catch (Exception exp)
    {
        throw new Exception(exp.Message);
    }
}
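One likely explanation, though it is an assumption about the Helpers.GetCpeDevDatabaseConnection wrapper (its Transaction and UseTransaction members suggest it manages its own SqlTransaction): the delete enlists in a transaction that is never committed, and SQL Server rolls back uncommitted work when the connection closes, which would explain why the profiler shows the command yet the row survives. A minimal sketch of the difference, using plain ADO.NET rather than the wrapper:

```csharp
// Sketch only: connectionString stands in for whatever the helper uses.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    using (var cmd = new SqlCommand(
        "delete from ConsultantSplit where placementid = @placementId", conn, tx))
    {
        cmd.Parameters.AddWithValue("@placementId", 47);
        cmd.ExecuteNonQuery();

        // Without this line, the work is rolled back when the connection
        // is closed or disposed: exactly the symptom described above.
        tx.Commit();
    }
}
```

If the wrapper exposes its transaction, calling Commit on it before closing (or setting UseTransaction = false before the connection is first opened, if that is when the wrapper decides) would be worth trying.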
I have two SSIS packages, each of which performs two actions like below:
First it truncates the contents of a table, then it executes a script task that calls an API and inserts the response into the table.
[Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
{
    public async void Main()
    {
        try
        {
            var sqlConn = new System.Data.SqlClient.SqlConnection();
            ConnectionManager cm = Dts.Connections["SurplusMouse_ADONET"];
            string serviceUrl = Dts.Variables["$Project::RM_ServiceUrl"].Value.ToString();
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;
            HttpClient client = new HttpClient();
            client.BaseAddress = new Uri(serviceUrl);
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            string APIUrl = string.Format(serviceUrl + "/gonogo");
            var response = await client.GetAsync(APIUrl);
            if (response.IsSuccessStatusCode)
            {
                var result = await response.Content.ReadAsStringAsync();
                try
                {
                    sqlConn = (System.Data.SqlClient.SqlConnection)cm.AcquireConnection(Dts.Transaction);
                    const string query = @"INSERT INTO [dbo].[RM_Approved_Room_State]
                        (APPROVED_ROOM_STATEID, SOURCE_ROOMID, DEST_ROOMID, ENTITY_TYPEID)
                        SELECT id, sourceRoomRefId, destinationRoomRefId, entityRefId
                        FROM OPENJSON(@json)
                        WITH (
                            id int,
                            sourceRoomRefId int,
                            destinationRoomRefId int,
                            entityRefId int
                        ) j;";
                    using (var sqlCmd = new System.Data.SqlClient.SqlCommand(query, sqlConn))
                    {
                        sqlCmd.Parameters.Add("@json", SqlDbType.NVarChar, -1).Value = result;
                        await sqlCmd.ExecuteNonQueryAsync();
                    }
                }
                catch (Exception ex)
                {
                    Dts.TaskResult = (int)ScriptResults.Failure;
                }
                finally
                {
                    if (sqlConn != null)
                        cm.ReleaseConnection(sqlConn);
                }
            }
        }
        catch (Exception ex)
        {
            Dts.TaskResult = (int)ScriptResults.Failure;
        }
    }

    #region ScriptResults declaration
    enum ScriptResults
    {
        Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
        Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
    };
    #endregion
}
Similar to the above package, I have another one which inserts records into a different table from the response of a different endpoint. When I execute the packages locally, or execute them separately after deploying them to the server, they work fine. But when I add them to a SQL Server Agent job like below and run them on a schedule, the jobs run successfully and don't show any errors, yet I can see data from only one package: the other one truncates the records, but I don't think its script task is getting executed, as I don't see any records inserted. I don't think there are any access issues, because when I run them separately and manually the data is inserted. It is only when running on a schedule that it does not work as expected. Any idea what could be happening here? Any help is greatly appreciated.
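One thing that may be worth checking (an assumption on my part, not something visible in the job definition): the entry point above is declared async void, so Main can return control to the SSIS runtime before the awaited HTTP call and insert have completed, and a scheduled, unattended run may tear the task down mid-flight while an interactive run happens to finish in time. A common sketch is to keep Main synchronous and block on the async work:

```csharp
public void Main()
{
    try
    {
        // Block until the async work has genuinely finished, so the task
        // cannot report success before the API call and insert complete.
        MainAsync().GetAwaiter().GetResult();
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception)
    {
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}

private async Task MainAsync()
{
    // ... the existing HttpClient call and SqlCommand insert move here,
    // unchanged, with their awaits intact ...
}
```

This keeps the awaits but guarantees the SSIS runtime only sees the result after the work is done.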
I have an ASP.NET Core project that downloads large files stored in SQL Server. It works fine for small files, but large files often time out because they are read entirely into memory before being downloaded.
So I am working to improve that.
Based on the SQL Client streaming support examples, I have updated the code to the following:
public async Task<FileStreamResult> DownloadFileAsync(int id)
{
    ApplicationUser user = await _userManager.GetUserAsync(HttpContext.User);
    var file = await this._attachmentRepository.GetFileAsync(id);
    using (SqlConnection connection = new SqlConnection(this.ConnectionString))
    {
        await connection.OpenAsync();
        using (SqlCommand command = new SqlCommand("SELECT [Content] FROM [Attachments] WHERE [AttachmentId] = @id", connection))
        {
            command.Parameters.AddWithValue("@id", file.AttachmentId);
            SqlDataReader reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess);
            if (await reader.ReadAsync())
            {
                if (!(await reader.IsDBNullAsync(0)))
                {
                    Stream stream = reader.GetStream(0);
                    var result = new FileStreamResult(stream, file.ContentType)
                    {
                        FileDownloadName = file.FileName
                    };
                    return result;
                }
            }
        }
    }
    return null;
}
But when I test it, it throws this exception:
Cannot access a disposed object. Object name: 'SqlSequentialStream'
Is there a way to fix this exception?
Your using statements all trigger when you return, disposing your connection and command, but the whole point of this pattern is to let the stream copy happen in the background after your method has returned.
For this pattern you're going to have to remove the using blocks and let cleanup happen once the stream copy is done. FileStreamResult will at the very least call Dispose on the stream you give it, which should un-root the command and connection so they can later be finalized and closed.
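An alternative to relying on garbage collection, sketched here as an option rather than a requirement: ASP.NET Core can be told to dispose objects when the response completes, via HttpContext.Response.RegisterForDispose, which makes the cleanup deterministic. Inside a controller action this would look roughly like:

```csharp
SqlConnection connection = new SqlConnection(connectionString);
await connection.OpenAsync();
SqlCommand command = new SqlCommand(
    "SELECT [Content] FROM [Attachments] WHERE [AttachmentId] = @id", connection);
command.Parameters.AddWithValue("@id", id);
SqlDataReader reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess);

// Tie the lifetime of the ADO.NET objects to the response instead of to
// this method: they are disposed after the stream has been copied out.
HttpContext.Response.RegisterForDispose(reader);
HttpContext.Response.RegisterForDispose(command);
HttpContext.Response.RegisterForDispose(connection);
```

The rest of the action (reading the first row, wrapping GetStream(0) in a FileStreamResult) stays the same as below.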
This is the working code, which is dramatically faster than without the streaming:
[HttpGet("download")]
public async Task<FileStreamResult> DownloadFileAsync(int id)
{
    var connectionString = _configuration.GetConnectionString("DefaultConnection");
    ApplicationUser user = await _userManager.GetUserAsync(HttpContext.User);
    var fileInfo = await this._attachmentRepository.GetAttachmentInfoByIdAsync(id);
    SqlConnection connection = new SqlConnection(connectionString);
    await connection.OpenAsync();
    SqlCommand command = new SqlCommand("SELECT [Content] FROM [Attachments] WHERE [AttachmentId] = @id", connection);
    command.Parameters.AddWithValue("@id", fileInfo.Id);
    // The reader needs to be executed with the SequentialAccess behavior to enable network streaming.
    // Otherwise ReadAsync buffers the entire BLOB into memory, which can cause scalability issues or even OutOfMemoryExceptions.
    SqlDataReader reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess);
    if (await reader.ReadAsync())
    {
        if (!(await reader.IsDBNullAsync(0)))
        {
            Stream stream = reader.GetStream(0);
            var result = new FileStreamResult(stream, fileInfo.ContentType)
            {
                FileDownloadName = fileInfo.FileName
            };
            return result;
        }
    }
    return null;
}
I am using Hangfire in an ASP.NET MVC project to manage long-running background jobs.
I am trying to use a lock statement block around a database operation. Here is my lock statement code:
public class LockedTransaction
{
    private Object thisLock = new Object();
    public LockedTransaction() { }

    public void UpdateCustomerBalance(long CustomerId, decimal AmountToDeduct, string ConnectionString)
    {
        lock (thisLock)
        {
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                using (SqlTransaction transaction = connection.BeginTransaction(System.Data.IsolationLevel.ReadCommitted))
                {
                    using (SqlCommand command = new SqlCommand())
                    {
                        command.Connection = connection;
                        command.Transaction = transaction;
                        command.CommandText = "SELECT Balance FROM Customer WHERE Id=" + CustomerId;
                        var userBalance = Convert.ToDecimal(command.ExecuteScalar());
                        userBalance = userBalance - AmountToDeduct;
                        command.CommandText = "UPDATE Customer SET Balance=" + userBalance + " WHERE Id=" + CustomerId;
                        command.ExecuteNonQuery();
                        transaction.Commit();
                    }
                }
            }
        }
    }
}
Here is how I'm calling the above code:
foreach (var queue in queues)
{
    queue.Send();
    LockedTransaction lockedTransaction = new LockedTransaction();
    lockedTransaction.UpdateCustomerBalance(queue.CustomerId, queue.cost, "ConnectionString");
}
The problem is, the database value is not updated as expected. For example, I have 5 queues as follows:
queue[0].cost = 0.50;
queue[1].cost = 0.50;
queue[2].cost = 0.50;
queue[3].cost = 0.50;
queue[4].cost = 0.50;
The database value should be reduced by 2.50 (the cost total) after the loop completes. But that's not happening: sometimes the deducted value is 2.00, sometimes 1.50, etc.
Any suggestions?
Your lock object (thisLock) is an instance field. And because you create a new instance of LockedTransaction for each element in the foreach loop, the lock doesn't prevent concurrent execution (each call to UpdateCustomerBalance uses its own lock object).
Changing thisLock to a static field should help you:
private static readonly object thisLock = new object();
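A static lock only serializes calls within a single process, though; with Hangfire the job may run on several worker processes or servers. A sketch that sidesteps the read-modify-write race entirely, by letting the database do the arithmetic in one parameterized statement against the same Customer table:

```csharp
public void UpdateCustomerBalance(long customerId, decimal amountToDeduct, string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "UPDATE Customer SET Balance = Balance - @amount WHERE Id = @id", connection))
    {
        command.Parameters.AddWithValue("@amount", amountToDeduct);
        command.Parameters.AddWithValue("@id", customerId);
        connection.Open();
        // The deduction happens atomically inside SQL Server, so concurrent
        // workers cannot overwrite each other's updates, and no C# lock is needed.
        command.ExecuteNonQuery();
    }
}
```

Parameterizing the values also avoids the SQL injection and decimal-formatting risks of building the statement by string concatenation.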
I have been facing a strange issue. I have a code snippet like below:
SqlTransaction transaction = null;
SqlDataReader reader = null;
using (SqlConnection sqlConnection = new SqlConnection(_sqlEndpoint.ConnectionString))
{
    sqlConnection.Open(); // Randomly fails here
    try
    {
        switch (_sqlEndpoint.ResultType)
        {
            case ResultSetType.None:
                transaction = sqlConnection.BeginTransaction();
                using (SqlCommand command = new SqlCommand(_sqlEndpoint.Query, sqlConnection, transaction))
                {
                    command.CommandTimeout = _sqlEndpoint.CommandTimeout;
                    int rowsAffected = command.ExecuteNonQuery();
                    exchange.Message.SetHeader("RowsAffected", rowsAffected);
                }
                break;
            case ResultSetType.Single:
                using (SqlCommand command = new SqlCommand(_sqlEndpoint.Query, sqlConnection))
                {
                    command.CommandTimeout = _sqlEndpoint.CommandTimeout;
                    exchange.Message.Body = command.ExecuteScalar();
                }
                break;
            default:
                using (SqlCommand command = new SqlCommand(_sqlEndpoint.Query, sqlConnection))
                {
                    command.CommandTimeout = _sqlEndpoint.CommandTimeout;
                    reader = command.ExecuteReader(CommandBehavior.CloseConnection);
                    exchange.Message.Body = reader;
                }
                break;
        }
    }
    catch (SqlException ex)
    {
        Log.ErrorFormat("[{0}] Error occurred while fetching data from sql server: {1}", _sqlEndpoint.RouteName, ex.Message);
    }
}
The connection opens 9 times out of 10 and then fails with the following exception:
"The operation is not valid for the state of the transaction"
It connects again on the 11th attempt, then fails randomly around the 19th, and so on. This doesn't happen in the dev environment, only in production.
I have tried/checked the following:
Changing the connection string to use UserId and Password credentials instead of SSPI, to eliminate any AD-related issues
Clearing the specific connection from the pool by calling SqlConnection.ClearPool(conn), so that a new connection is created rather than taken from the pool
Checking that the MSDTC service is running on the production server
I am running out of ideas on this one. I am not even sure how to reproduce this error on a dev machine.
Any help or pointers will be appreciated.
Thanks
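For what it's worth, one more thing I have been considering ruling out (an assumption, not a diagnosis): "The operation is not valid for the state of the transaction" often points at an ambient System.Transactions/DTC transaction rather than the local SqlTransaction in the code above. A sketch that opts the connection out of ambient enlistment, purely to test that theory:

```csharp
// Enlist=false prevents the pooled connection from auto-enlisting in any
// ambient (System.Transactions/DTC) transaction. If the random failures
// stop, an ambient transaction flowing into this code is the likely cause.
var builder = new SqlConnectionStringBuilder(_sqlEndpoint.ConnectionString)
{
    Enlist = false
};
using (var sqlConnection = new SqlConnection(builder.ConnectionString))
{
    sqlConnection.Open();
    // ... same switch over _sqlEndpoint.ResultType as above ...
}
```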
I'm thinking of caching permissions for every user on our application server. Is it a good idea to use a SqlCacheDependency for every user?
The query would look like this:
SELECT PermissionId, PermissionName FROM Permissions WHERE UserId = @UserId
That way I know if any of those records change then to purge my cache for that user.
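For reference, a sketch of wiring that per-user query to the cache with a command-based SqlCacheDependency (the System.Web.Caching API; LoadPermissions and the key format are illustrative). Note that Query Notifications impose rules on the query, which this version follows: an explicit column list and a two-part table name (dbo.Permissions):

```csharp
SqlDependency.Start(connectionString); // once, at application startup

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT PermissionId, PermissionName FROM dbo.Permissions WHERE UserId = @UserId",
    connection))
{
    command.Parameters.AddWithValue("@UserId", userId);
    var dependency = new SqlCacheDependency(command);

    connection.Open();
    var permissions = LoadPermissions(command); // hypothetical helper that runs the reader

    // The cache entry for this user is purged whenever the notification fires.
    HttpContext.Current.Cache.Insert("Permissions:" + userId, permissions, dependency);
}
```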
If you read how Query Notifications work, you'll see why creating many dependency requests from a single query template is good practice. For a web app, which is implied by the fact that you use SqlCacheDependency and not SqlDependency, what you plan to do should be OK. If you use Linq2Sql you can also try LinqToCache:
var queryUsers = from u in repository.Users
                 where u.UserId == currentUserId
                 select u;
var user = queryUsers.AsCached("Users:" + currentUserId.ToString());
For a fat-client app it would not be OK. Not because of the query per se, but because SqlDependency in general is problematic with a large number of connected clients (it blocks a worker thread per connected app domain):
SqlDependency was designed to be used in ASP.NET or middle-tier
services where there is a relatively small number of servers having
dependencies active against the database. It was not designed for use
in client applications, where hundreds or thousands of client
computers would have SqlDependency objects set up for a single
database server.
Updated
Here is the same test as @usr did in his post. Full C# code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.SqlClient;
using DependencyMassTest.Properties;
using System.Threading.Tasks;
using System.Threading;

namespace DependencyMassTest
{
    class Program
    {
        static volatile int goal = 50000;
        static volatile int running = 0;
        static volatile int notified = 0;
        static int workers = 50;
        static SqlConnectionStringBuilder scsb;
        static AutoResetEvent done = new AutoResetEvent(false);

        static void Main(string[] args)
        {
            scsb = new SqlConnectionStringBuilder(Settings.Default.ConnString);
            scsb.AsynchronousProcessing = true;
            scsb.Pooling = true;
            try
            {
                SqlDependency.Start(scsb.ConnectionString);
                using (var conn = new SqlConnection(scsb.ConnectionString))
                {
                    conn.Open();
                    using (SqlCommand cmd = new SqlCommand(@"
if object_id('SqlDependencyTest') is not null
    drop table SqlDependencyTest
create table SqlDependencyTest (
    ID int not null identity,
    SomeValue nvarchar(400),
    primary key(ID)
)
", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
                for (int i = 0; i < workers; ++i)
                {
                    Task.Factory.StartNew(() =>
                    {
                        RunTask();
                    });
                }
                done.WaitOne();
                Console.WriteLine("All dependencies subscribed. Waiting...");
                Console.ReadKey();
            }
            catch (Exception e)
            {
                Console.Error.WriteLine(e);
            }
            finally
            {
                SqlDependency.Stop(scsb.ConnectionString);
            }
        }

        static void RunTask()
        {
            Random rand = new Random();
            SqlConnection conn = new SqlConnection(scsb.ConnectionString);
            conn.Open();
            SqlCommand cmd = new SqlCommand(@"
select SomeValue
from dbo.SqlDependencyTest
where ID = @id", conn);
            cmd.Parameters.AddWithValue("@id", rand.Next(50000));
            SqlDependency dep = new SqlDependency(cmd);
            dep.OnChange += new OnChangeEventHandler((ob, qnArgs) =>
            {
                Console.WriteLine("Notified {3}: Info:{0}, Source:{1}, Type:{2}", qnArgs.Info, qnArgs.Source, qnArgs.Type, Interlocked.Increment(ref notified));
            });
            cmd.BeginExecuteReader(
                (ar) =>
                {
                    try
                    {
                        int crt = Interlocked.Increment(ref running);
                        if (crt % 1000 == 0)
                        {
                            Console.WriteLine("{0} running...", crt);
                        }
                        using (SqlDataReader rdr = cmd.EndExecuteReader(ar))
                        {
                            while (rdr.Read())
                            {
                            }
                        }
                    }
                    catch (Exception e)
                    {
                        Console.Error.WriteLine(e.Message);
                    }
                    finally
                    {
                        conn.Close();
                        int left = Interlocked.Decrement(ref goal);
                        if (0 == left)
                        {
                            done.Set();
                        }
                        else if (left > 0)
                        {
                            RunTask();
                        }
                    }
                }, null);
        }
    }
}
After the 50k subscriptions are set up (which takes about 5 minutes), here is the statistics time output for a single insert:
set statistics time on
insert into Test..SqlDependencyTest (SomeValue) values ('Foo');
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 16 ms.
Inserting 1000 rows takes about 7 seconds, which includes firing several hundred notifications. CPU utilization is about 11%. All this is on my T420s ThinkPad.
set nocount on;
go
begin transaction
go
insert into Test..SqlDependencyTest (SomeValue) values ('Foo');
go 1000
commit
go
The documentation says:
SqlDependency was designed to be used in ASP.NET or middle-tier
services where there is a relatively small number of servers having
dependencies active against the database. It was not designed for use
in client applications, where hundreds or thousands of client
computers would have SqlDependency objects set up for a single
database server.
It tells us not to open thousands of cache dependencies. That is likely to cause resource problems on the SQL Server.
There are a few alternatives:
Have a dependency per table
Have 100 dependencies per table, one for every percent of rows. This should be an acceptable number for SQL Server yet you only need to invalidate 1% of the cache.
Have a trigger output the ID of all changed rows into a logging table. Create a dependency on that table and read the IDs. This will tell you exactly which rows have changed.
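The trigger-plus-logging-table alternative can be sketched the same way the benchmark below sets up its table, with the SQL embedded in C#. The table, trigger, and column names here are illustrative, assuming a Permissions table keyed by UserId:

```csharp
// One-time setup: a change-log table plus a trigger that records which
// users' rows changed. CREATE TRIGGER must be its own batch, hence two commands.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new SqlCommand(@"
create table dbo.PermissionsChangeLog (
    LogId int not null identity primary key,
    UserId int not null)", conn))
    {
        cmd.ExecuteNonQuery();
    }
    using (var cmd = new SqlCommand(@"
create trigger dbo.trgPermissionsChanged
on dbo.Permissions after insert, update, delete as
insert into dbo.PermissionsChangeLog (UserId)
select UserId from inserted
union
select UserId from deleted", conn))
    {
        cmd.ExecuteNonQuery();
    }
}
// A single dependency on 'select UserId from dbo.PermissionsChangeLog' then
// reports when the log grows; reading the logged IDs tells you exactly which
// users' cached permissions to invalidate.
```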
In order to find out whether SqlDependency is suitable for mass usage, I did a benchmark:
static void SqlDependencyMassTest()
{
    var connectionString = "Data Source=(local); Initial Catalog=Test; Integrated Security=true;";
    using (var dependencyConnection = new SqlConnection(connectionString))
    {
        dependencyConnection.EnsureIsOpen();
        dependencyConnection.ExecuteNonQuery(@"
if object_id('SqlDependencyTest') is not null
    drop table SqlDependencyTest
create table SqlDependencyTest (
    ID int not null identity,
    SomeValue nvarchar(400),
    primary key(ID)
)
--ALTER DATABASE Test SET ENABLE_BROKER with rollback immediate
");
        SqlDependency.Start(connectionString);
        for (int i = 0; i < 1000 * 1000; i++)
        {
            using (var sqlCommand = new SqlCommand("select ID from dbo.SqlDependencyTest where ID = @id", dependencyConnection))
            {
                sqlCommand.AddCommandParameters(new { id = StaticRandom.ThreadLocal.GetInt32() });
                CreateSqlDependency(sqlCommand, args =>
                {
                });
            }
            if (i % 1000 == 0)
                Console.WriteLine(i);
        }
    }
}
You can see the number of dependencies created scroll through the console. It gets slow very quickly. I did not do a formal measurement, because it was not necessary to prove the point.
Also, the execution plan for a simple insert into the table shows 99% of the cost being associated with maintaining the 50k dependencies.
Conclusion: it does not work at all for production use. After 30 minutes I had 55k dependencies created, with the machine at 100% CPU the whole time.