Redis vs SQL Server performance

Application performance is one of the main reasons for putting a cache in front of a relational database. Because a cache stores data in memory as key-value pairs, we can keep frequently accessed data that rarely changes in the cache, and reading from the cache is much faster than reading from the database. Redis is one of the leading solutions in the distributed-cache market.
I was doing a performance test between Azure Cache for Redis and Azure SQL Server. I created a simple ASP.NET Core application that reads the same data from the SQL Server database and from Redis many times and compares the read durations. For the database reads I used Entity Framework Core, and for the Redis reads I used 'Microsoft.Extensions.Caching.StackExchangeRedis'.
Model
using System;

namespace WebApplication2.Models
{
    [Serializable]
    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public string Subject { get; set; }

        public Student()
        {
            Name = string.Empty;
            Subject = string.Empty;
        }
    }
}
Entity Framework Core data context.
using Microsoft.EntityFrameworkCore;
using WebApplication2.Models;

namespace WebApplication2.Data
{
    public class StudentContext : DbContext
    {
        public StudentContext(DbContextOptions<StudentContext> options)
            : base(options)
        {
        }

        public DbSet<Student>? Students { get; set; }
    }
}
Startup class
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    string studentDbConnectionString = Configuration.GetConnectionString("StudentDbConnectionString");
    services.AddDbContext<StudentContext>(option => option.UseSqlServer(studentDbConnectionString));

    string redisConnectionString = Configuration.GetConnectionString("RedisConnectionString");
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = redisConnectionString;
    });
}
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "StudentDbConnectionString": "[Azure SQL Server connection string]",
    "RedisConnectionString": "[Azure Redis cache connection string]"
  }
}
Home controller
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Distributed;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;
using WebApplication2.Data;
using WebApplication2.Models;

namespace WebApplication2.Controllers
{
    public class HomeController : Controller
    {
        private readonly StudentContext _studentContext;
        private readonly IDistributedCache _cache;

        public HomeController(StudentContext studentContext, IDistributedCache cache)
        {
            _studentContext = studentContext;
            _cache = cache;
        }

        public IActionResult Index()
        {
            List<Student>? students = null;
            var counter = 10000;

            // Read the full list from SQL Server 10,000 times via EF Core.
            var sw = Stopwatch.StartNew();
            for (var i = 0; i < counter; i++)
            {
                students = _studentContext.Students.OrderBy(student => student.Id).ToList();
            }
            sw.Stop();
            ViewData["DatabaseDuration"] = $"Database: {sw.ElapsedMilliseconds}";

            if (students != null && students.Count > 0)
            {
                List<Student> studentsFromCache;
                var key = "Students";

                // Store the serialized list once, then read it back from Redis 10,000 times.
                _cache.Set(key, ObjectToByteArray(students));

                sw.Restart();
                for (var i = 0; i < counter; i++)
                {
                    studentsFromCache = (List<Student>)ByteArrayToObject(_cache.Get(key));
                }
                sw.Stop();
                ViewData["RedisDuration"] = $"Redis: {sw.ElapsedMilliseconds}";
            }

            return View();
        }

        private byte[] ObjectToByteArray(object obj)
        {
            var bf = new BinaryFormatter();
            using var ms = new MemoryStream();
            bf.Serialize(ms, obj);
            return ms.ToArray();
        }

        private object ByteArrayToObject(byte[] arrBytes)
        {
            using var memStream = new MemoryStream();
            var binForm = new BinaryFormatter();
            memStream.Write(arrBytes, 0, arrBytes.Length);
            memStream.Seek(0, SeekOrigin.Begin);
            object obj = binForm.Deserialize(memStream);
            return obj;
        }
    }
}
Home\Index.cshtml view
@{
    ViewData["Title"] = "Home Page";
}
<div class="text-center">
    <p>@ViewData["DatabaseDuration"]</p>
    <p>@ViewData["RedisDuration"]</p>
</div>
I have found that SQL Server is faster than Redis.
The ASP.NET Core application is hosted in Azure App Service, in the same region as the Azure SQL Server and Azure Redis instances.
Can someone explain why Redis is slower than SQL Server here?

I have used github.com/dotnet/BenchmarkDotNet to benchmark the Azure SQL Server database and Azure Cache for Redis for 10,000 reads. SQL Server database mean: 16.48 sec; Redis mean: 29.53 sec.
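The benchmark was set up roughly like the sketch below. This is my own simplified reconstruction, not the original benchmark code: the class name, the placeholder connection strings, and the System.Text.Json serialization are all assumptions.

using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.StackExchangeRedis;
using Microsoft.Extensions.Options;
using WebApplication2.Data;
using WebApplication2.Models;

// Minimal sketch: reuses the Student model and StudentContext from the question.
[MemoryDiagnoser]
public class SqlVsRedisBenchmarks
{
    private StudentContext _db = null!;
    private IDistributedCache _cache = null!;
    private const string Key = "Students";

    [GlobalSetup]
    public void Setup()
    {
        var dbOptions = new DbContextOptionsBuilder<StudentContext>()
            .UseSqlServer("[Azure SQL Server connection string]")
            .Options;
        _db = new StudentContext(dbOptions);

        _cache = new RedisCache(Options.Create(new RedisCacheOptions
        {
            Configuration = "[Azure Redis cache connection string]"
        }));

        // Prime the cache once so the Redis benchmark measures reads only.
        var students = _db.Students.OrderBy(s => s.Id).ToList();
        _cache.Set(Key, System.Text.Json.JsonSerializer.SerializeToUtf8Bytes(students));
    }

    [Benchmark]
    public List<Student> ReadFromSqlServer() =>
        _db.Students.OrderBy(s => s.Id).ToList();

    [Benchmark]
    public byte[] ReadFromRedis() => _cache.Get(Key);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SqlVsRedisBenchmarks>();
}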
I have also used JMeter to simulate 100 users, each reading the SQL Server database/Redis 1,000 times. There is not much difference in the total time it took to finish reading from SQL Server versus Redis (both are around 3 minutes 30 seconds), but I did see load on the Azure SQL Server database: DTU usage went close to 100% during the test.
In conclusion, I think speed is not the only reason to use a Redis cache in front of a SQL Server database; another reason is that the cache takes a good amount of load off the database.

BTW, it is not only about the performance difference here. For caching, Redis also gives you cache invalidation logic (expiry and eviction) out of the box, which you would have to build yourself on top of a SQL in-memory table. So Redis all the way when it comes to caching.
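For illustration, here is a minimal sketch (assuming the same IDistributedCache registration as in the question; the helper name is made up) of letting Redis expire an entry instead of hand-rolling invalidation:

using System;
using Microsoft.Extensions.Caching.Distributed;

public static class StudentCacheExtensions
{
    // Hypothetical helper: caches the serialized student list and lets Redis
    // drop it automatically via absolute/sliding expiration.
    public static void CacheStudents(this IDistributedCache cache, byte[] serializedStudents)
    {
        var options = new DistributedCacheEntryOptions
        {
            // Entry is removed 5 minutes after it was written...
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5),
            // ...or 1 minute after it was last read, whichever comes first.
            SlidingExpiration = TimeSpan.FromMinutes(1)
        };

        cache.Set("Students", serializedStudents, options);
    }
}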

Think about what's happening here
In SQL
Process -> TCP -> read optimised store (single table) -> Serialisation into application models
In Redis
Process -> check for cache hit -> TCP -> read optimised store (single table) -> Serialisation into application models
Redis is great, but don't mistake its purpose: if you are doing a read from a table on a well optimised index, then SQL is going to be quick, so why would Redis be any quicker? The power of a distributed cache comes in when your authoritative store or your process has to do some computation to produce the result, so what you are effectively saving by caching is CPU time and disk I/O (be it in SQL or in-process).
If you really want to increase speed, it's an in-memory cache that you want. This, however, isn't as simple as it first sounds; the real trick is a way to invalidate the in-memory cache across a distributed cluster when the authoritative store changes.
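A rough sketch of that idea (not from the answer; it assumes StackExchange.Redis and Microsoft.Extensions.Caching.Memory are available, and the class and channel names are invented): each node keeps a local MemoryCache and subscribes to a Redis pub/sub channel on which writers publish the keys they have changed.

using System;
using Microsoft.Extensions.Caching.Memory;
using StackExchange.Redis;

// Hypothetical per-node in-memory cache whose entries are evicted cluster-wide
// via a Redis pub/sub channel whenever the underlying data changes.
public sealed class InvalidatingCache : IDisposable
{
    private const string Channel = "cache-invalidation";
    private readonly MemoryCache _local = new(new MemoryCacheOptions());
    private readonly ConnectionMultiplexer _redis;
    private readonly ISubscriber _subscriber;

    public InvalidatingCache(string redisConnectionString)
    {
        _redis = ConnectionMultiplexer.Connect(redisConnectionString);
        _subscriber = _redis.GetSubscriber();

        // Every node drops its local copy when any node announces a change.
        _subscriber.Subscribe(Channel, (_, key) => _local.Remove(key.ToString()));
    }

    public T? GetOrAdd<T>(string key, Func<T> load) =>
        _local.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return load();
        });

    // Call this after writing to the authoritative store.
    public void Invalidate(string key)
    {
        _local.Remove(key);
        _subscriber.Publish(Channel, key);
    }

    public void Dispose() => _redis.Dispose();
}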
Hope this helps

Related

MSSQL replication triggers, how to handle conditional HasTrigger in EntityFrameworkCore

I am using EntityFrameworkCore version 7 to implement data access across a number of client databases.
I have recently run into the error 'Could not save changes because the target table has database triggers.' on one of the clients. The error is self-explanatory and I understand how to fix it using HasTrigger.
The problem is that this error occurred because this specific client is replicated and has what I assume are auto-generated triggers: MSmerge_upd, MSmerge_ins, MSmerge_del. Conversely, the majority of my clients are not replicated and therefore do not have any of these triggers in their database.
So, what is the correct way to handle replication triggers in EntityFrameworkCore, particularly when your clients are a mishmash where some are replicated and some are not? Is there a way to check inside IEntityTypeConfiguration whether you are running on a replicated database and conditionally add the replication triggers? Is there some sort of best practice for handling this scenario with the new HasTrigger requirement?
Given that nobody has posted an answer, I will post my current workaround.
I have created a class called AutoTriggerBuilderEntityTypeConfiguration which attempts to configure all the triggers for a given EF model.
There are some performance implications with this approach, and it could potentially be improved by caching the triggers for all tables across the database, but it's sufficient for my use case.
It looks like this:
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public abstract class AutoTriggerBuilderEntityTypeConfiguration<TEntity> : IEntityTypeConfiguration<TEntity>
    where TEntity : class
{
    private readonly string _connectionString;

    public AutoTriggerBuilderEntityTypeConfiguration(string connectionString)
    {
        this._connectionString = connectionString;
    }

    public void Configure(EntityTypeBuilder<TEntity> builder)
    {
        this.ConfigureEntity(builder);

        var tableName = builder.Metadata.GetTableName();
        var tableTriggers = this.GetTriggersForTable(tableName);
        var declaredTriggers = builder.Metadata.GetDeclaredTriggers();

        // Register every trigger found in the database that was not already declared on the model.
        builder.ToTable(t =>
        {
            foreach (var trigger in tableTriggers)
            {
                if (!declaredTriggers.Any(o => o.ModelName.Equals(trigger, StringComparison.InvariantCultureIgnoreCase)))
                    t.HasTrigger(trigger);
            }
        });
    }

    private IEnumerable<string> GetTriggersForTable(string tableName)
    {
        var result = new List<string>();

        using (var connection = new SqlConnection(this._connectionString))
        using (var command = new SqlCommand(@"SELECT sysobjects.name AS Name FROM sysobjects WHERE sysobjects.type = 'TR' AND OBJECT_NAME(parent_obj) = @TableName", connection)
        {
            CommandType = CommandType.Text
        })
        {
            connection.Open();
            command.Parameters.AddWithValue("@TableName", tableName);

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    result.Add(reader.GetString(reader.GetOrdinal("Name")));
            }
        }

        return result;
    }

    public abstract void ConfigureEntity(EntityTypeBuilder<TEntity> builder);
}
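As a possible refinement of the caching idea mentioned above (a sketch only, not part of the original workaround; the TriggerLookup name and its shape are my own), the triggers for every table could be loaded once and cached, so the database is queried a single time rather than once per entity type:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using Microsoft.Data.SqlClient;

// Hypothetical helper: loads every table trigger in the database once and
// answers per-table lookups from the cached dictionary afterwards.
public static class TriggerLookup
{
    private static readonly ConcurrentDictionary<string, Lazy<Dictionary<string, List<string>>>> _cache = new();

    public static IReadOnlyList<string> GetTriggersForTable(string connectionString, string tableName)
    {
        var byTable = _cache.GetOrAdd(connectionString,
            cs => new Lazy<Dictionary<string, List<string>>>(() => LoadAllTriggers(cs))).Value;

        return byTable.TryGetValue(tableName, out var triggers)
            ? (IReadOnlyList<string>)triggers
            : Array.Empty<string>();
    }

    private static Dictionary<string, List<string>> LoadAllTriggers(string connectionString)
    {
        var result = new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(
            @"SELECT t.name AS TriggerName, OBJECT_NAME(t.parent_id) AS TableName
              FROM sys.triggers t
              WHERE t.parent_class = 1", connection);

        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            var trigger = reader.GetString(0);
            var table = reader.GetString(1);
            if (!result.TryGetValue(table, out var list))
                result[table] = list = new List<string>();
            list.Add(trigger);
        }

        return result;
    }
}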

Two DbContext's on same server throws: This platform does not support distributed transactions

I am not able to figure out why TransactionScope is starting a distributed transaction (which is not configured on the SQL Server). I would like to use a local transaction instead, which should be possible when the two databases are located in the same SQL Server instance. What is wrong with my code, and how can I fix it? Can I force TransactionScope to try a local transaction first?
Databases
appsettings.json
{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=DESKTOP;Initial Catalog=test;Integrated Security=True",
    "Test2Connection": "Data Source=DESKTOP;Initial Catalog=test2;Integrated Security=True"
  }
}
startup.cs registering TestContext and Test2Context
services.AddDbContext<TestContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

services.AddDbContext<Test2Context>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("Test2Connection")));

services.AddTransient<ICustomerRepository, CustomerRepository>();
services.AddTransient<IMaterialRepository, MaterialRepository>();

// This service injects TestContext and Test2Context
services.AddTransient<ICustomerService, CustomerService>();

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
CustomerRepository using TestContext
public class CustomerRepository : ICustomerRepository
{
    private readonly TestContext _context;

    public CustomerRepository(TestContext context)
    {
        _context = context;
    }

    public Customer Retrieve(int id)
    {
        return _context.Customers.Where(x => x.Id == id).FirstOrDefault();
    }
}
MaterialRepository using Test2Context
public class MaterialRepository : IMaterialRepository
{
    private readonly Test2Context _context;

    public MaterialRepository(Test2Context context)
    {
        _context = context;
    }

    public Material Retrieve(int id)
    {
        return _context.Materials.Where(x => x.Id == id).FirstOrDefault();
    }
}
CustomerService
public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository _customerRepository;
    private readonly IMaterialRepository _materialRepository;

    public CustomerService(
        ICustomerRepository customerRepository,
        IMaterialRepository materialRepository)
    {
        _customerRepository = customerRepository;
        _materialRepository = materialRepository;
    }

    public void DoSomething()
    {
        using (var transaction = new TransactionScope(TransactionScopeOption.Required
            //, new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead }
            ))
        {
            var customer = _customerRepository.Retrieve(1);
            var material = _materialRepository.Retrieve(1); // The exception is thrown here!
            // _customerRepository.Save(customer);

            transaction.Complete();
        }
    }
}
Reading from the second context throws the "This platform does not support distributed transactions" exception.
The distributed transaction is also started when the same connection string is used for two database contexts:
startup.cs
services.AddDbContext<TestContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

services.AddDbContext<TestReadOnlyContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
CustomerReadOnlyRepository
public class CustomerReadOnlyRepository : ICustomerReadOnlyRepository
{
    private readonly TestReadOnlyContext _context;

    public CustomerReadOnlyRepository(TestReadOnlyContext context)
    {
        _context = context;
    }

    public Customer Retrieve(int customerId)
    {
        Customer customer = _context.Customers.Where(x => x.Id == customerId).Include("Offices").FirstOrDefault();
        return customer;
    }
}
CustomerService
var customer = _customerRepository.Retrieve(1);
var customerReadOnly = _customerReadOnlyRepository.Retrieve(1); // Throws the same error.
why TransactionScope is starting distributed transaction?
Because you have two different SQL Server Sessions. The client has no way to coordinate transactions on separate sessions without promoting the transaction to a distributed transaction.
Can I force Transaction Scope to try local transaction first?
If you use a single Sql Server session for both DbContext instances, then it won't need to promote to a distributed transaction.
You should be able to simply use identical connection strings for both DbContexts, and SqlClient will automagically cache and reuse a single connection for both. When a SqlConnection enlisted in a Transaction is closed or disposed, it's actually set aside pending the outcome of the transaction. Any subsequent attempt to open a new SqlConnection using the same connection string will return this same connection. A DbContext will, by default, open and close the SqlConnection for each operation, so it should benefit from this behavior.
If the same connection string doesn't work, you might have to open the SqlConnection and use it to construct both DbContext instances.
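A minimal sketch of that idea (my own illustration, not code from the answer; it assumes EF Core's UseSqlServer overload that accepts an existing DbConnection, and the scoped-factory wiring is an assumption):

// Inside Startup.ConfigureServices — a sketch, assuming Microsoft.Data.SqlClient
// and Microsoft.Extensions.DependencyInjection are referenced.
public void ConfigureServices(IServiceCollection services)
{
    // One scoped SqlConnection shared by both contexts: TransactionScope then
    // sees a single session and does not escalate to a distributed transaction.
    services.AddScoped(_ =>
        new SqlConnection(Configuration.GetConnectionString("DefaultConnection")));

    services.AddDbContext<TestContext>((sp, options) =>
        options.UseSqlServer(sp.GetRequiredService<SqlConnection>()));

    // With the tables living in different databases, synonyms in the 'test'
    // database (see below) let Test2Context reach them through this one connection.
    services.AddDbContext<Test2Context>((sp, options) =>
        options.UseSqlServer(sp.GetRequiredService<SqlConnection>()));
}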
But wait, the tables are in different databases. Yep, and if there's a good reason for that you can leave them there. You need to do some work to enable a single SqlConnection to access the objects in both databases. The best way to do this is to CREATE SYNONYMs so your application can connect to a single database and access the remote objects with local names. This also enables you to host multiple instances of your application on a single SQL Server instance (handy for dev/test).
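If the schema is managed with EF Core migrations, the synonym could be created there; this is only a sketch, and the migration name, synonym name, and database names are made up for illustration:

using Microsoft.EntityFrameworkCore.Migrations;

// Hypothetical migration: exposes test2.dbo.Materials inside the 'test' database
// so a single connection (and a single session) can reach both tables.
public partial class AddMaterialSynonym : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("CREATE SYNONYM [dbo].[Materials] FOR [test2].[dbo].[Materials];");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("DROP SYNONYM [dbo].[Materials];");
    }
}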

net core 1 (dnx 4.5.1) with enterpriselibrary 6 - setting up the connection string

I'm having big problems running the Enterprise Library Data Access block with .NET Core 1 (dnx 4.5.1).
How can I set up the default connection string for EntLib?
My appsettings.json:
"ConnectionString": "Server=localhost\\sqlexpress;Initial Catalog=blind;User Id=blind;Password=blind"
Here is my problem (no default connection string):
Database db = DatabaseFactory.CreateDatabase();
How can I pass the appsettings ConnectionString to the EntLib DatabaseFactory?
Any help would be greatly appreciated.
I know it's an old question, but I have a similar setup (using .NET Core 2.0) and it took me a while to figure out how to set the default database connection without using the web.config to manage it.
What I did was include the default database name and all of the connection strings in appsettings.json, then in my Startup class read appsettings.json into an object that stores the default db name and the connection strings, and configure the default and named databases using DatabaseFactory.SetDatabases.
DatabaseFactory.SetDatabases() Definition
public class DataConfiguration
{
    public string DefaultDatabase { get; set; }
    public List<ConnectionStringSettings> ConnectionStrings { get; set; }
}

public class Startup
{
    public static DataConfiguration DataConfig { get; private set; }
    public IConfiguration Configuration { get; }

    public Startup(IConfiguration configuration)
    {
        // Get the database connections from appsettings.json
        DataConfig = configuration.Get<DataConfiguration>();
        var defaultDb = DataConfig.ConnectionStrings?.Find(c => c.Name == DataConfig.DefaultDatabase);

        DatabaseFactory.SetDatabases(() => new SqlDatabase(defaultDb.ConnectionString), GetDatabase);

        Configuration = configuration;
    }

    public Database GetDatabase(string name)
    {
        var dbInfo = DataConfig.ConnectionStrings.Find(c => c.Name == name);
        if (dbInfo.ProviderName == "System.Data.SqlClient")
        {
            return new SqlDatabase(dbInfo.ConnectionString);
        }

        return new MySqlDatabase(dbInfo.ConnectionString);
    }
}
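With that in place, the original call should resolve the default database; a small usage sketch ("ReportingDb" is just a made-up named connection for illustration):

// Default database (resolved via the factory delegate registered above)
Database db = DatabaseFactory.CreateDatabase();

// A named database, resolved through GetDatabase("ReportingDb")
Database reportingDb = DatabaseFactory.CreateDatabase("ReportingDb");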
Whenever there is documentation, I always suggest reading it, as it is usually good. This is one of those examples: check out "Getting Started with ASP.NET 5 and Entity Framework 6". There are several things that you need to do to ensure that you are correctly configured.
Set up your connection string and DI:
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(string nameOrConnectionString)
        : base(nameOrConnectionString)
    {
    }
}
Also, notice the path used to read the configuration; it differs from yours.
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped((_) =>
        new ApplicationDbContext(
            Configuration["Data:DefaultConnection:ConnectionString"]));

    // Configure remaining services
}

ServiceStack Ormlite transaction between services

I'm having trouble with a rather complex save operation inside a ServiceStack service.
To simplify the explanation: the service starts an OrmLite transaction and, within it, calls another service through ResolveService:
public ApplicationModel Post(ApplicationModel request)
{
    using (IDbTransaction tr = Db.OpenTransaction())
    {
        using (var cases = ResolveService<CaseService>())
        {
            request.Case = cases.Post(request.Case);
        }
    }
    Db.Save<Application>(request.Application, true);
}
The other service (CaseService) uses also a transaction to perform its logic:
public CaseModel Post(CaseModel request)
{
    using (IDbTransaction tr = Db.OpenTransaction())
    {
        Db.Insert<Case>(request);
        Db.SaveAllReferences<CaseModel>(request);
    }
}
In a similar situation, with a deeper hierarchy of services calling other services, a "Timeout expired" error is thrown, and so far I've not been able to resolve it, although I closely monitored the SQL Server for deadlocks.
My question is whether this is the right way of using/sharing OrmLite transactions across services, or whether there is another mechanism.
Thanks in advance.
You shouldn't have nested transactions. Rather than calling across services to perform DB operations, you should extract the shared logic out, either into a separate shared repository or into re-usable extension methods:
public static class DbExtensions
{
    public static void SaveCaseModel(this IDbConnection db,
        CaseModel caseModel)
    {
        db.Insert<Case>(caseModel);
        db.SaveAllReferences<CaseModel>(caseModel);
    }
}
Then your Services can maintain their own transactions whilst being able to share logic, e.g:
public ApplicationModel Post(ApplicationModel request)
{
    using (var trans = Db.OpenTransaction())
    {
        Db.SaveCaseModel(request.Case);
        Db.Save<Application>(request.Application, true);

        trans.Commit();
    }
}

public CaseModel Post(CaseModel request)
{
    using (var trans = Db.OpenTransaction())
    {
        Db.SaveCaseModel(request);

        trans.Commit();
    }
}

Testing EF SQL Server based application with in-memory SQLite?

I am using SQL Server with Entity Framework for the development of a web app in .NET 4 with VS2010 RC. I would like to prepare a testing database with sample data.
Should I prepare a copy of the real database (say, another SQL Server database) for testing, or can I use SQLite in memory for better performance?
If using SQLite, can I use the same model EF has created for the SQL Server database? How do I migrate the schema from SQL Server to in-memory SQLite?
How are you testing your code that uses EF with SQL Server?
Thanks for sharing.
I use LINQ to Objects and DI.
So let's say I have a service which uses a repository:
public class FooService : Service, IFooService
{
    private IFooRepository Repository { get; set; }

    public IQueryable<Foo> GetSpecialFoos()
    {
        return from f in Repository.SelectAll()
               where f.IsSpecial
               select f;
    }

    public FooService(IFooRepository repository)
    {
        this.Repository = repository;
    }
}
Now I can use constructor injection to inject a mock repository for testing. Generally, you'd use a DI framework for this. But the important thing is the mock repository can use LINQ to Objects:
public class MockFooRepository : IFooRepository
{
    public IList<Foo> Data { get; set; }

    public IQueryable<Foo> SelectAll()
    {
        return Data.AsQueryable();
    }
}
Now I can test:
[TestMethod]
public void GetSpecialFoos_returns_only_special_foos()
{
    var specialId = 1;
    var notSoSpecialId = 2;
    var foos = new List<Foo>
    {
        new Foo
        {
            Id = specialId,
            IsSpecial = true
        },
        new Foo
        {
            Id = notSoSpecialId,
            IsSpecial = false
        }
    };

    // use a DI framework here instead, in the real world
    var repository = new MockFooRepository
    {
        Data = foos
    };
    var service = new FooService(repository);

    var actual = service.GetSpecialFoos();

    var returned = actual.First();
    Assert.AreEqual(true, returned.IsSpecial);
    Assert.AreEqual(specialId, returned.Id);
}
