We are using ASP.NET Core 3.x with EF Core 3.x.
We have authorization on a couple of entities (so that only some of the records from a table are returned in the response). This is achieved through a SQL view (which internally joins two tables); we query against that view, and it returns only those records the logged-in user is authorized to see.
To achieve this, we need to insert the logged-in user id and @@SPID (SQL Server) into the Session table (used by the view above) just before executing select queries (on the Application table), and we need to delete that record immediately after the query has executed. For this we are using a DbInterceptor.
Session table:

userId | sessionId
1      | 32
2      | 26
Application table:

id | userId | text
1  | 1      | I need help to ...
2  | 2      | I don't speak english...
Db interceptor implementation:
public class DbInterceptor : DbCommandInterceptor
{
    private readonly IExecutionContext _executionContext;

    public DbInterceptor(IExecutionContext executionContext)
    {
        _executionContext = executionContext;
    }

    public override async Task<InterceptionResult<DbDataReader>> ReaderExecutingAsync(DbCommand command,
        CommandEventData eventData, InterceptionResult<DbDataReader> result,
        CancellationToken cancellationToken = default)
    {
        var sqlParameter = new SqlParameter("@UserId",
            _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value);
        await eventData.Context.Database.ExecuteSqlRawAsync("EXEC InsertUserSP @UserId", sqlParameter);
        return await base.ReaderExecutingAsync(command, eventData, result, cancellationToken);
    }

    public override async Task<DbDataReader> ReaderExecutedAsync(DbCommand command,
        CommandExecutedEventData eventData, DbDataReader result,
        CancellationToken cancellationToken = default)
    {
        var sqlParameter = new SqlParameter("@UserId",
            _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value);
        await eventData.Context.Database.ExecuteSqlRawAsync("EXEC DeleteUserSP @UserId", sqlParameter);
        return await base.ReaderExecutedAsync(command, eventData, result, cancellationToken);
    }
}
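For reference, this is roughly how such an interceptor can be wired up in EF Core 3.x. This is a sketch, not from the original post: the ApplicationDbContext name and the connection string key are assumptions; AddInterceptors on DbContextOptionsBuilder is the EF Core 3.x API.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<DbInterceptor>();

    // Resolve the interceptor from DI so it can see the current principal.
    services.AddDbContext<ApplicationDbContext>((provider, options) =>
        options.UseSqlServer(Configuration.GetConnectionString("Default"))
               .AddInterceptors(provider.GetRequiredService<DbInterceptor>()));
}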
Now with this we get an exception:
System.InvalidOperationException: 'There is already an open DataReader associated with this Command which must be closed first.' on the line await eventData.Context.Database.ExecuteSqlRawAsync("EXEC DeleteUserSP @UserId", sqlParameter); in the `ReaderExecutedAsync` method of the interceptor.
I googled this exception and found that it can be overcome by setting MultipleActiveResultSets=true in the connection string.
Is there any side effect of using MultipleActiveResultSets?
While googling around that topic, I came across several articles stating that the connection instance may be shared among different requests when MultipleActiveResultSets is set to true. If the same connection is shared among live request threads, that would be problematic, since the authorization relies on each running thread having a unique @@SPID.
How will the DbContext be provided with a connection instance from the connection pool?
At ReaderExecutedAsync the data reader is still open and fetching rows. So it's too early to unset the user. Try hooking DataReaderDisposing instead.
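A rough sketch of what that could look like in the same interceptor. This is an illustration under assumptions, not a tested fix: it reuses the question's DeleteUserSP call, uses the synchronous DataReaderDisposing override that EF Core 3.x exposes, and runs the cleanup on the command's own connection.

public override InterceptionResult DataReaderDisposing(DbCommand command,
    DataReaderDisposingEventData eventData, InterceptionResult result)
{
    // By the time the reader is being disposed its results have typically been consumed;
    // closing it frees the connection for the cleanup command when MARS is off.
    eventData.DataReader.Close();

    using (var cleanup = command.Connection.CreateCommand())
    {
        cleanup.Transaction = command.Transaction;
        cleanup.CommandText = "EXEC DeleteUserSP @UserId";
        cleanup.Parameters.Add(new SqlParameter("@UserId",
            _executionContext.CurrentPrincipal.FindFirst(Claims.TSSUserId).Value));
        cleanup.ExecuteNonQuery();
    }

    return base.DataReaderDisposing(command, eventData, result);
}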
If that doesn't work, force the connection open and call the procedure outside an interceptor, e.g.
var con = db.Database.GetDbConnection();
await con.OpenAsync();
await db.Database.ExecuteSqlRawAsync("EXEC InsertUserSP @UserId", sqlParameter);
This will ensure that the connection is not returned to the connection pool until the DbContext is Disposed.
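Putting that together, a per-request usage sketch (db is the injected DbContext; db.Applications and userId are placeholder names, while the stored procedure names come from the question):

// Sketch only: pin the pooled connection to this DbContext for the unit of work,
// register the user for @@SPID-based filtering, query, then clean up.
var con = db.Database.GetDbConnection();
await con.OpenAsync();

await db.Database.ExecuteSqlRawAsync("EXEC InsertUserSP @UserId",
    new SqlParameter("@UserId", userId));
try
{
    var rows = await db.Applications.ToListAsync(); // sees only authorized rows via the view
}
finally
{
    await db.Database.ExecuteSqlRawAsync("EXEC DeleteUserSP @UserId",
        new SqlParameter("@UserId", userId));
}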
Related
So I have an ABP console application which processes background jobs using Hangfire + Redis. The jobs access a MySQL database to select and insert records. Inside a job, a single record is selected and, after some processing (which can take ~1 sec to ~15 min), it is updated in the db. One job is scheduled more than 300K times and growing. The problem occurs when the job is executed ~7000 times in one go: it causes MySQL connection problems. I want to use a single connection and always keep it open. Any other suggestions are more than welcome!
public class MyJob : AsyncBackgroundJob<MyJobArgs>, ITransientDependency
{
    private readonly IRepository<MyTable, Guid> _repository;

    public MyJob(IRepository<MyTable, Guid> repository)
    {
        _repository = repository;
    }

    public override async Task ExecuteAsync(MyJobArgs args)
    {
        var data = await _repository.GetAsync(args.Id);
        // processing // --- ~1 sec - ~15 mins
        await _repository.UpdateAsync(data);
    }
}
Some Background
In ASP.NET Core, when using SQL Server to store sessions, oddly enough the Id column in the SQL Server table gets set to the value of the sessionKey, which is a Guid generated by the SessionMiddleware. I say oddly enough because there is a SessionId, but the Id in the table isn't set to that; it is set to the SessionKey. (I'm not making this up.)
This sessionKey used for the Id in the table is also the value that is encrypted and placed in the session cookie. Here is that SessionMiddleware code:
var guidBytes = new byte[16];
CryptoRandom.GetBytes(guidBytes);
sessionKey = new Guid(guidBytes).ToString();
cookieValue = CookieProtection.Protect(_dataProtector, sessionKey);
var establisher = new SessionEstablisher(context, cookieValue, _options);
tryEstablishSession = establisher.TryEstablishSession;
isNewSessionKey = true;
The SessionId however, is a Guid generated by the DistributedSession object in the following line of code:
_sessionId = new Guid(IdBytes).ToString();
Interestingly, the ISession interface provides a property for the SessionId but not the SessionKey. So it's often much easier in code to get access to a SessionId than a SessionKey, for example when you have access to an HttpContext object.
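For example (a trivial sketch; it assumes session support has been configured with app.UseSession()):

// ISession exposes the SessionId, but nothing on it exposes the SessionKey.
var sessionId = HttpContext.Session.Id;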
This makes it hard to match up the session to the database record if you want to do that. This was noted by another user on Stack Overflow as well: "How to Determine Session ID when using SQL Server session storage".
Why?
What I want to know is why the system is designed this way. Why aren't the SessionId and SessionKey one and the same? Why use two different Guids? I ask because I'm creating my own implementation of ISession, and I'm tempted to use the SessionKey as the SessionId in my implementation so that it's easier to match up a record in the database to a session. Would that be a bad idea? Why wasn't the DistributedSession object designed that way, rather than generating a SessionId that is different from the SessionKey? The only reason I can think of is perhaps to increase security by obfuscating the linkage between the database record and the session it belongs to. But in general, security professionals don't find security through obfuscation effective. So I'm left wondering why such a design was implemented.
I also posted the question on GitHub https://github.com/aspnet/Session/issues/151#issuecomment-287894321 to try to get an answer as well.
@Tratcher answered the question there, so I'm pasting his answer below so that it's available here on Stack Overflow too.
The lifetimes are different. The true lifetime of a session (and SessionId) is controlled by the server. SessionKey is stored in the cookie and lives on the client for an indeterminate amount of time. If the session expires on the server and then the client sends a new request with the old SessionKey, a new session instance with a new SessionId is created, but stored using the old SessionKey so that we don't have to issue a new cookie.
Put another way, don't depend on things outside of your control. The client can keep and replay their SessionKey indefinitely, but it's the server that decides if that is really still the same session.
In case someone needs to get the session key in ASP.NET Core 3:
Add DI for IDataProtector (IMPORTANT: when creating the protector, the purpose must be nameof(SessionMiddleware))
public IDataProtector _dataProtector;

public TestController(IDataProtectionProvider dataProtectionProvider)
{
    _dataProtector = dataProtectionProvider.CreateProtector(nameof(SessionMiddleware));
}
Create a method which restores the Base64 padding of the session cookie value
// Restores the Base64 padding ('=') stripped from the cookie value.
private string Pad(string text)
{
    var padding = 3 - ((text.Length + 3) % 4);
    if (padding == 0)
    {
        return text;
    }
    return text + new string('=', padding);
}
Use it
public ActionResult TestSession()
{
    var protectedText = HttpContext.Request.Cookies[".AspNetCore.Session"];
    if (string.IsNullOrEmpty(protectedText))
    {
        return Content(string.Empty);
    }

    var protectedData = Convert.FromBase64String(Pad(protectedText));
    var userData = _dataProtector.Unprotect(protectedData);
    if (userData == null)
    {
        return Content(string.Empty);
    }

    var sessionKey = Encoding.UTF8.GetString(userData);
    return Content(sessionKey);
}
For the last few weeks we have been experiencing this error message while using the Azure Search SDK (1.1.1 - 1.1.2) and performing searches.
We consume the Search SDK from internal APIs (deployed as Azure Web Apps) that scale up and down based on traffic (so there could be more than one instance of the APIs doing the searches).
Our API queries 5 different indexes and maintains an in-memory copy of the SearchIndexClient object that corresponds to each index. A very simple implementation would look like this:
public class AzureSearchService
{
    private readonly SearchServiceClient _serviceClient;
    private Dictionary<string, SearchIndexClient> _clientDictionary;

    public AzureSearchService()
    {
        _serviceClient = new SearchServiceClient("myservicename", new SearchCredentials("myservicekey"));
        _clientDictionary = new Dictionary<string, SearchIndexClient>();
    }

    public SearchIndexClient GetClient(string indexName)
    {
        try
        {
            if (!_clientDictionary.ContainsKey(indexName))
            {
                _clientDictionary.Add(indexName, _serviceClient.Indexes.GetClient(indexName));
            }
            return _clientDictionary[indexName];
        }
        catch
        {
            return null;
        }
    }

    public async Task<SearchResults> SearchIndex(SearchIndexClient client, string text)
    {
        var parameters = new SearchParameters();
        parameters.Top = 10;
        parameters.IncludeTotalResultCount = true;

        var response = await client.Documents.SearchWithHttpMessagesAsync(text, parameters, null, null);
        return response.Body;
    }
}
And the API would invoke the service by:
public class SearchController : ApiController
{
    private readonly AzureSearchService service;

    public SearchController()
    {
        service = new AzureSearchService();
    }

    public async Task<HttpResponseMessage> Post(string indexName, [FromBody] string text)
    {
        var indexClient = service.GetClient(indexName);
        var results = await service.SearchIndex(indexClient, text);
        return Request.CreateResponse(HttpStatusCode.OK, results, Configuration.Formatters.JsonFormatter);
    }
}
We are using SearchWithHttpMessagesAsync instead of the SearchAsync method due to a requirement to receive custom HTTP headers.
This way we avoid opening/closing the client under traffic bursts. Before using this in-memory cache (when we wrapped each client in a using clause), we would get port exhaustion alerts on Azure App Services.
Is this a good pattern? Could we be receiving this error because of the multiple instances running in parallel?
In case it is needed, the stack trace shows:
System.Net.Http.HttpRequestException: Only one usage of each socket address (protocol/network address/port) is normally permitted service.ip.address.hidden:443
[SocketException:Only one usage of each socket address (protocol/network address/port)is normally permitted service.ip.address.hidden:443]
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure,Socket s4,Socket s6,Socket& socket,IPAddress& address,ConnectSocketState state,IAsyncResult asyncResult,Exception& exception)
[WebException:Unable to connect to the remote server]
at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult,TransportContext& context)
at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)
EDIT: We are also receiving the error "A connection attempt failed because the connected party did not properly respond after a period of time":
System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond service.ip.address.hidden:443
[SocketException:A connection attempt failed because the connected party did not properly respond after a period of time,or established connection failed because connected host has failed to respond service.ip.address.hidden:443]
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure,Socket s4,Socket s6,Socket& socket,IPAddress& address,ConnectSocketState state,IAsyncResult asyncResult,Exception& exception)
[WebException:Unable to connect to the remote server]
at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult,TransportContext& context)
at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)
As implemented in the code in your question, the cache will not prevent port exhaustion. This is because you're instantiating it as a field of the ApiController, which is created once per request. If you want to avoid port exhaustion, the cache must be shared across all requests. To make it concurrency-safe, you should use something like ConcurrentDictionary instead of Dictionary.
The "connection attempt failed" error is likely unrelated.
I'm developing a class library that contains generic methods for these scenarios:
Live support chat (1 on 1 private text chat, with many admins and guests)
Rooms with many users where you can send broadcast and private messages
These two features above are already implemented and now it's necessary for my application to save messages.
My question is, what is the best way to store chat conversations in a SQL database:
Every time I click send, I insert the message in the database?
Create a List for each user, and every time I click send, the message is saved in the list of the user who sent it. Then, when a user disconnects, I iterate over the list of messages and insert each of them into the db.
Are there other solutions?
What I'm doing now is the following. I have this method which is located on my Hub class:
public void saveMessagetoDB(string userName, string message)
{
    // Dispose the context when done so connections return to the pool.
    using (var ctx = new TestEntities1())
    {
        var msg = new tbl_Conversation { Msg = message };
        ctx.tbl_Conversation.Add(msg);
        ctx.SaveChanges();
    }
}
I call this method saveMessagetoDB from my client-side HTML file like this:
$('#btnSendMessage').click(function () {
    var msg = $("#txtMessage").val();
    if (msg.length > 0) {
        var userName = $('#hUserName').val();
        // <<<<<-- ***** Return to Server [ SaveMessagetoDB ] *****
        objHub.server.saveMessagetoDB(userName, msg);
    }
});
SignalR is great for a chat application and you wouldn't even need to store anything in SQL unless you want to create a transcript of the chat at a later time (which may not even be necessary).
I suggest getting the chat working with SignalR first (don't do anything with SQL). Then, once that is working, you can add SQL logging as necessary in your SignalR hub.
It most likely makes sense to write to SQL on each message.
If you decide to store the chats in a database, then you will need to insert/update the messages as they happen.
If you are using the PersistentConnection then you can hook into the OnReceivedAsync event and insert / update data from that event:
protected override Task OnConnectedAsync(IRequest request, string connectionId)
{
    _clients.Add(connectionId, string.Empty);

    ChatData chatData = new ChatData("Server", "A new user has joined the room.");

    return Connection.Broadcast(chatData);
}
Or, in the SignalR class that inherits from Hub, you can persist to the db right before you notify any clients.
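For example, a minimal sketch of that idea (the ChatHub name and the addMessage client callback are assumptions; TestEntities1 and tbl_Conversation are taken from the question's code):

// Sketch only: save the message first, then broadcast it to connected clients.
public class ChatHub : Hub
{
    public void Send(string userName, string message)
    {
        // Persist before notifying clients.
        using (var ctx = new TestEntities1())
        {
            ctx.tbl_Conversation.Add(new tbl_Conversation { Msg = message });
            ctx.SaveChanges();
        }

        // 'addMessage' is a hypothetical client-side handler name.
        Clients.All.addMessage(userName, message);
    }
}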
I have mapped classes with custom SQL (insert, delete, update) through stored procedure calls. But I noticed that when my insert procedure fails and raises an exception, the GenericADOException from NHibernate doesn't contain the message raised from the procedure.
All exceptions raised from the delete and update procedures are caught fine; only the insert procedure's exception message is not.
Is this a limitation or a bug of NHibernate 3.2.4 when the "native" generator for ids is combined with custom SQL?
I'm also looking for ways to get some out parameters from those procedures, like a timestamp for each event (insert, delete and update); the timestamp is generated inside the procedures.
EDIT: OUT PARAMS - I found the "generated" option among the property mapping options, with which we can ask NHibernate to read values back from the procedures. This means that these properties have generated values. So I tried generated="always" and it works for insert, update and delete operations. Example: <property name="MyProp" generated="always"/>
I found that the SQL Server driver doesn't put the messages raised by stored procedures into the SqlException when you run those stored procedures with ExecuteReader(). On the other hand, NHibernate executes the custom sql-insert with ExecuteReader() (I debugged its source code), and I guess that's right and necessary to get the generated key back when the id is mapped with native (or identity), as in my case.
Well, and now what to do? I also found (it was hard to find) that SqlConnection has an event called "InfoMessage" through which you can receive (catch) all messages sent from your stored procedures (raiserror). So it is possible to "catch" these messages, but how do we make them cross the NHibernate core and reach our application when we insert something with session.Save()?
Although we have access to the session, and thus to the connection (SqlConnection), the messages are already lost by then, because they are only received by a delegate that was assigned to the SqlConnection.InfoMessage event before they occurred.
To solve this, I tried two approaches:
In the first, I devised a way to register the delegate inside DriverConnectionProvider.GetConnection(); this delegate would store the messages in the thread context, associated with the connection, so the messages could be retrieved later.
In the second, the one I chose, I implemented IDbConnection and IDbCommand, wrapping the SqlConnection and SqlCommand inside them (but I think NHibernate has a bug, because in some places it references DbConnection instead of IDbConnection - like in ManagedProviderConnectionHelper - so I had to extend DbConnection and DbCommand instead).
Inside my CustomSqlConnection I register the delegate and store the messages for later use.
This is working! It works both as a standalone ADO connection and as an NHibernate driver.
The idea is:
public class CustomSqlConnection : DbConnection, IDbConnection {

    private SqlConnection con;
    private StringBuilder str = new StringBuilder(0);

    public CustomSqlConnection() {
        con = new SqlConnection();
        con.InfoMessage += OnInfoMessage;
    }

    // Accumulates every message raised by the stored procedures (raiserror/print).
    private void OnInfoMessage(object sender, SqlInfoMessageEventArgs e) {
        if (str.Length > 0) {
            str.Append("\n");
        }
        str.Append(e.Message);
    }

    // Returns the accumulated messages and clears the buffer.
    public string FetchMessage() {
        var msg = str.ToString();
        str.Clear();
        return msg;
    }

    ...
    ...
}
EDIT: The tedious step is to implement all operations of DbConnection and DbCommand, forwarding each call to the wrapped SqlConnection/SqlCommand instance (see the con field above), e.g.:
...
public override void Open() {
con.Open();
}
...
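To round this out, a rough sketch of how such a wrapper might be handed to NHibernate through a custom driver. This is my assumption rather than part of the original answer; it presumes NHibernate 3.x, where IDriver.CreateConnection() still returns IDbConnection.

// Sketch only: a custom driver so NHibernate creates the wrapper connection itself,
// capturing InfoMessage output on every connection it opens.
public class CustomSqlClientDriver : NHibernate.Driver.SqlClientDriver
{
    public override IDbConnection CreateConnection()
    {
        return new CustomSqlConnection();
    }
}

The driver would then be referenced in the NHibernate configuration (for example via the connection.driver_class property) so every session uses it.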