How to retrieve auto-incremented Id in ServiceStack OrmLite? - sql-server

For a table that has an identity:
[AutoIncrement]
public int Id { get; set;}
When inserting a new row into the database, what is the best way to retrieve the Id of the object?
For example:
db.Insert(new User());
The value of the Id is 0 after the insert, but in the database this obviously is not the case. The only possibility I can see is the following:
Id = (int)db.GetLastInsertId();
However, I don't believe this would be a safe call to make. If hundreds of inserts are happening at the same time, the Id of another insert may be returned. In EF, when you do an insert, the Id is set for you.
Does anyone know the best way to go about this?

In ServiceStack.OrmLite v4, which defaults to using parameterized queries, there are a couple of options. db.Save() automatically populates the AutoIncrement Id, e.g.:
db.Save(item);
item.Id //populated with the auto-incremented id
Otherwise you can select the last insert id using:
var itemId = db.Insert(item, selectIdentity:true);
Here are more examples showcasing OrmLite's new APIs.
For OrmLite v3
The correct call is db.GetLastInsertId(), which for SQL Server, for example, calls SELECT SCOPE_IDENTITY() under the hood, returning the last inserted id for that scope.
This is safe because any other concurrent inserts that might be happening are using different DB connections. For anyone else to use the same connection, it first needs to be disposed of and released back into the pool.
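As a sketch of what this amounts to at the SQL level (the table and column names here are illustrative, not OrmLite's actual generated SQL):

```sql
-- Hypothetical table with an IDENTITY primary key
CREATE TABLE Users (Id INT IDENTITY(1,1) PRIMARY KEY, Name NVARCHAR(100));

INSERT INTO Users (Name) VALUES ('Alice');

-- SCOPE_IDENTITY() returns the identity generated in the current session
-- and scope, so concurrent inserts on other connections cannot leak in
SELECT SCOPE_IDENTITY() AS LastInsertedId;
```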

You should definitely use the Unit of Work pattern, particularly in scenarios like this: wrap the db-related code in a transaction scope.
In OrmLite, you can implement this via IDbCommand and IDbTransaction (see example here http://code.google.com/p/servicestack/source/browse/trunk/Common/ServiceStack.OrmLite/ServiceStack.OrmLite.Tests/ShippersExample.cs)
Looking at the code, you'll notice it's less magical and more manual coding, but it's one way.

Update: As seen here, if you are using ServiceStack/OrmLite v4, you need to use the parameterized query to get the inserted ID. For example:
var UserId = db.Insert<User>(new User(), selectIdentity: true);

Related

My own database autoincrement - Avoid duplicated keys

I have a Java EE Web Application and a SQL Server Database.
I intend to cluster my database later.
Now, I have two tables:
- Users
- Places
But I don't want to use auto id of SQL Server.
I want to generate my own id because of the cluster.
So, I've created a new table, Parameter, with two columns: TableName and LastId. It stores the last id used for each table. When I add a new user, my addUser method does this:
Query the last id from the parameter table and increment it by 1;
Insert the new User;
Update the last id to the incremented value.
It's working. But it's a web application, so what about 1000 people using it simultaneously? Some of them may get the same last id. How can I solve this? I've tried synchronized, but it's not working.
What do you suggest? Yes, I have to avoid auto-increment.
I know that the user has to wait.
Automatic ID may work better in a cluster, but if you want to be database-portable or implement the allocator yourself, the basic approach is to work in an optimistic loop.
I prefer 'Next ID', since it makes the logic cleaner, so I'm going to use that in this example.
SELECT the NextID from your allocator table.
UPDATE NextID SET NextID = NextID + Increment WHERE NextID = the value you read.
Loop while RowsAffected != 1.
Of course, you'll also use the TableName condition when selecting/updating to pick the appropriate allocator row.
You should also look at allocating in blocks -- Increment=200, say -- and caching them in the appserver. This will give better concurrency and be a lot faster than hitting the DB each time.
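The steps above can be sketched in T-SQL like this (the Allocator table and column names are illustrative):

```sql
-- Optimistic allocator loop: read, conditionally bump, retry on a lost race
DECLARE @NextId INT;
DECLARE @Increment INT = 1;   -- use e.g. 200 to allocate a cached block

retry:
    SELECT @NextId = NextID FROM Allocator WHERE TableName = 'Users';

    UPDATE Allocator
    SET NextID = NextID + @Increment
    WHERE TableName = 'Users'
      AND NextID = @NextId;   -- optimistic check: 0 rows if someone raced us

IF @@ROWCOUNT <> 1 GOTO retry;

-- Ids @NextId .. @NextId + @Increment - 1 now belong to this caller
```

No locks are held between the read and the update; a concurrent caller simply causes the `@@ROWCOUNT` check to fail and the loop to re-read.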

Trying to query data from an enormous SQL Server table using EF 5

I am working with a SQL Server table that contains 80 million (80,000,000) rows. Data space = 198,000 MB. Not surprisingly, queries against this table often churn or timeout. To add to the issues, the table rows get updated fairly frequently and new rows also get added on a regular basis. It thus continues to grow like a viral outbreak.
My issue is that I would like to write Entity Framework 5 LINQ to Entities queries to grab rows from this monster table. As I've tried, timeouts have become outright epidemic. A few more things: the table's primary key is indexed and it has non-clustered indexes on 4 of its 19 columns.
So far, I am writing simple LINQ queries that use Transaction Scope and Read Uncommitted Isolation Level. I have tried increasing both the command timeout and the connection timeout. I have written queries that return FirstOrDefault() or a collection, such as the following, which attempts to grab a single ID (an int) from seven days before the current date:
public int GetIDForSevenDaysAgo(DateTime sevenDaysAgo)
{
    using (var txn = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
    {
        var GetId = from te in _repo.GetTEvents()
                    where te.cr_date > sevenDaysAgo
                    orderby te.cr_date
                    select te.id;
        return GetId.FirstOrDefault();
    }
}
and
public IEnumerable<int> GetIDForSevenDaysAgo(DateTime sevenDaysAgo)
{
    using (var txn = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
    {
        var GetId = from te in _repo.GetTEvents()
                    where te.cr_date > sevenDaysAgo
                    orderby te.cr_date
                    select te.id;
        return GetId.Take(1);
    }
}
Each query times out repeatedly regardless of the timeout settings. I'm using the repository pattern with Unity DI and fetching the table with IQueryable<> calls. I'm also limiting the repository call to eight days from the current date (hoping to only grab the needed subset of this mammoth table). I'm using Visual Studio 2013 with Update 5 targeting .NET v4.5 and SQL Server 2008 R2.
I generated the SQL statement that EF generates and it didn't look incredibly more complicated than the LINQ statements above. And my brain hurts.
So, have I reached some sort of tolerance limit for EF? Is the table simply too big? Should I revert to Stored Procedures/domain methods when querying this table? Are there other options I should explore? There was some discussion around removing some of the table's rows, but that probably won't happen anytime soon. I did read a little about paging, but I'm not sure if that would help or not. Any thoughts or ideas would be appreciated! Thank you!
As far as I can see, you are only selecting data and not changing it, so why do you need TransactionScope? You only need it when you have 2 or more SaveChanges() calls in your code and you want them in one transaction. So get rid of it.
Another thing I would do in your case is disable change tracking and automatic change detection on your context. But be careful if you don't recreate your context on each request: it can keep serving stale data.
With an EF DbContext, you would put these lines near your context initialization:
context.Configuration.AutoDetectChangesEnabled = false;
context.Configuration.LazyLoadingEnabled = false;
(or apply .AsNoTracking() to the individual queries).
The other thing you should think about is pagination and caching. But as I can see in your example, you are trying to get only one row, so I can't say anything in particular.
I recommend you read this article for further optimisation.
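Since pagination comes up: SQL Server 2008 R2 (the asker's version) has no OFFSET/FETCH yet, so a paging query can be sketched with ROW_NUMBER(). The TEvents, cr_date, and id names follow the question's example and are assumptions:

```sql
-- Paging on SQL Server 2008 via ROW_NUMBER(); names are illustrative
DECLARE @PageSize INT = 50, @PageNumber INT = 1;

;WITH Paged AS (
    SELECT id, cr_date,
           ROW_NUMBER() OVER (ORDER BY cr_date) AS rn
    FROM TEvents
)
SELECT id, cr_date
FROM Paged
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize;
```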
It's not easy to say whether you have to go with stored procedures or EF, since we're talking about a monster. :-)
The first thing I would do is to run the query in SSMS displaying the Actual Execution Plan. Sometimes it provides information about indexes missing that might increase performance.
From your example, I'm pretty sure you need an index on that date column.
In other words -- if you have access -- make sure the table design is optimal for that amount of data.
My thought is: if a simple query hangs, what more can EF do?
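For instance, assuming the underlying table is called TEvents (the name implied by _repo.GetTEvents() in the question), a covering index on the date column might look like:

```sql
-- Hypothetical index: supports the WHERE/ORDER BY on cr_date and
-- covers the SELECT of id without touching the base table rows
CREATE NONCLUSTERED INDEX IX_TEvents_cr_date
ON TEvents (cr_date)
INCLUDE (id);
```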

How to detect if an entity is outdated and needs to be reloaded

Let's say I do a query against my context to retrieve a particular entity.
Now I'd like to find the best way to know if the corresponding row in the database (let's stay simple with a 1-1 mapping between entity and SQL Table) has changed since the creation of my entity.
I've thought about using a TimeStamp column and executing a simple query each time I want to know if the entity is outdated, like:
var uptodate = (from e in context.mySet
                where e.TimeStamp == entityTimeStamp
                select e).Any();
By indexing the TimeStamp column, I think this would be a fast way to go, but unfortunately I haven't found any confirmation of that around the internet...
If you're looking for a way to do this without modifying your table, you could use the CHECKSUM command and expose an additional computed column through your mapping (via a view, presumably). This way you don't have to worry about adding a new column to the database.
Something like this should work:
select *,
BINARY_CHECKSUM(*) as CheckSumValue
from test WITH (NOLOCK);
Here is some sample fiddle.
With that said, sometimes it's preferable to have a ModifiedDate field in your table. If that is the case, then that would surely be your best way of checking for changes.
Good luck.
If optimistic locking is OK, how about adding the [ConcurrencyCheck] attribute to the field in your data class? This causes the update to fail if the record has been modified.
The attribute is from System.ComponentModel.DataAnnotations.
After further tests and investigation, using the TimeStamp plus a query to check whether it's still the same (more precisely, whether the given value is still in the table) seems the best way to go.
Such a mechanism needs an index on the TimeStamp column to ensure the best performance (roughly O(log n) instead of O(n)).
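A sketch of that setup, assuming a rowversion column named TimeStamp (the table and column names here are illustrative):

```sql
-- A rowversion column is updated automatically by SQL Server on every write
ALTER TABLE MyEntities ADD [TimeStamp] ROWVERSION;

CREATE NONCLUSTERED INDEX IX_MyEntities_TimeStamp
ON MyEntities ([TimeStamp]);

-- "Is my in-memory copy still current?" then becomes a single index seek
DECLARE @entityTimeStamp BINARY(8);  -- the value captured when the entity was loaded
SELECT COUNT(*) AS UpToDate
FROM MyEntities
WHERE [TimeStamp] = @entityTimeStamp;
```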

How to properly implement "per field history" through triggers in SQL Server (2008)

So, I'm facing the challenge of having to log the data being changed for each field in a table. Now, I can obviously do that with triggers (which I've never used before, but I imagine they're not that difficult), but I also need to be able to link the log to who performed the change, which is where the problem lies. The trigger wouldn't be aware of who is performing the change, and I can't pass in a user id either.
So, how can I do what I need to do? If it helps say I have these tables:
Employees {
EmployeeId
}
Jobs {
JobId
}
Cookies {
CookieId
EmployeeId -> Employees.EmployeeId
}
So, as you can see I have a Cookies table which the application uses to verify sessions, and I can infer the user out of it, but again, I can't make the trigger be aware of it if I want to make changes to the Jobs table.
Help would be highly appreciated!
We use context_info to set the user making the calls to the DB. Then our application-level security can be enforced all the way into DB code. It might seem like overhead, but really there is no performance issue for us.
make_db_call() {
    SET CONTEXT_INFO <some data representing the user>
    ... do the SQL incantation ...
}
and in the DB:
SELECT @user = dbo.ParseContextInfo()
... audit/log/security etc. can determine who ...
To get the previous values inside the trigger you select from the 'deleted' pseudo table, and to get the values you are putting in you select from the 'inserted' pseudo table.
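Putting the pieces together, a hedged sketch of such an audit trigger (the JobAudit table, SomeField column, and the 4-byte user-id encoding are assumptions; CONTEXT_INFO itself is a real SQL Server feature):

```sql
-- Application side, once per connection: stash the user id in context_info
DECLARE @ctx VARBINARY(128) = CAST(CAST(42 AS INT) AS BINARY(4));
SET CONTEXT_INFO @ctx;
GO

-- Illustrative per-field audit trigger on the Jobs table
CREATE TRIGGER trg_Jobs_Audit ON Jobs
AFTER UPDATE
AS
BEGIN
    -- Recover the user id the application placed in context_info
    DECLARE @UserId INT = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS INT);

    -- Log old vs. new value for rows where the field actually changed
    INSERT INTO JobAudit (JobId, ChangedBy, OldValue, NewValue, ChangedAt)
    SELECT d.JobId, @UserId, d.SomeField, i.SomeField, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.JobId = d.JobId
    WHERE ISNULL(d.SomeField, '') <> ISNULL(i.SomeField, '');
END
```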
Before you issue the linq2sql query, issue a command like this:
context.ExecuteCommand("exec some_sp_to_set_context " + userId);
Or, preferably, use an overloaded DataContext where the above is executed before each query. See here for an example.
We don't use multiple SQL logins, as we rely on connection pooling and also lock down the db caller to a limited user.

NHibernate - Must Change the Save Order to Satisfy Database Constraints?

Someone on our data team added a database constraint and, while it's perfectly valid and desirable, it creates great problems for NHibernate because there doesn't seem to be a way to override NHibernate's save order.
Given a (silly example) class like this:
public class Person
{
    public virtual string FirstName { get; set; }
    public virtual bool IsCurrent { get; set; }
}
and a constraint that only one record in the backing table can be IsCurrent=true at the same time . . .
If I try to "deprecate" an existing record by setting IsCurrent=false and replace it with a new record with IsCurrent=true, I get an ADO exception on Save, because NHibernate tries to perform the Insert first, violating the SQL Server constraint that only one record can be IsCurrent=true at once.
I see two options:
Can SQL Server be configured to check constraints only at the end of a transaction? The following statement (the "update" of the old row to IsCurrent=false) would then un-break the constraint.
Can NHibernate's save order (for instances of the same type) be modified or "hinted" in any way?
Thanks!
Jeff
Either approach is possible; I would lean toward #2. If you call:
session.saveOrUpdate(person1);
session.flush();
session.saveOrUpdate(person2);
The flush will push the pending SQL statements to the database in order. I believe this will fix your problem. (The above is Java Hibernate code; your syntax may vary slightly.)
The problem here is that NHibernate is not aware of all the data integrity checks in the database layer.
Your option 1 is possible if you hack SQL Server and disable constraints for a (short) period while you manipulate the data. But it is a dirty solution, since constraints are disabled for all transactions being processed at that time.
In this particular case I would use another approach:
Drop the integrity check and base data integrity on a trigger that fires on insert or update. The trigger is responsible for setting IsCurrent to false for all relevant records except the record currently being inserted or updated. Of course, you have to deal with recursive trigger firing, since within the trigger you are modifying records in the same table that fired it.
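A sketch of that trigger (the People table and its columns are an assumed schema, standing in for whatever backs the Person class). Recursion is handled by only demoting rows other than the ones just written, so the re-fired trigger's guard fails:

```sql
-- Illustrative replacement for the unique-IsCurrent constraint
CREATE TRIGGER trg_People_SingleCurrent ON People
AFTER INSERT, UPDATE
AS
BEGIN
    -- Only act when this write actually made some row current
    IF EXISTS (SELECT 1 FROM inserted WHERE IsCurrent = 1)
    BEGIN
        -- Demote every other current row; when this UPDATE re-fires the
        -- trigger, its inserted rows all have IsCurrent = 0, so the guard
        -- above stops the recursion
        UPDATE p
        SET p.IsCurrent = 0
        FROM People p
        WHERE p.IsCurrent = 1
          AND p.Id NOT IN (SELECT Id FROM inserted WHERE IsCurrent = 1);
    END
END
```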
