LINQ: How to run queries in chunks - sql-server

I have a bundle of delete queries like the following:
DELETE FROM [Entry]
WHERE CompanyId = 1
AND EmployeeId IN (3, 4, 6, 7, 14, 17, 20, 21, 22,....100 more)
AND Entry_Date = '2016-12-01'
AND Entry_Method = 'I'
So in my code, I run this list of queries as below:
using (var ctx = new ApplicationDbContext(schemaName))
{
    foreach (var item in queries)
    {
        ctx.Database.ExecuteSqlCommand(item);
    }
}
But due to the large number of queries being executed this creates a lock on SQL Server, so I decided to execute the queries in chunks, and I found the code below:
SET ROWCOUNT 500
delete_more:
DELETE FROM [Entry]
WHERE CompanyId = 1
AND EmployeeId IN (3, 4, 6, 7, 14, 17, 20, 21, 22,....100 more)
AND Entry_Date = '2016-12-01'
AND Entry_Method = 'I'
IF @@ROWCOUNT > 0 GOTO delete_more
SET ROWCOUNT 0
Now the problem is: how do I run this as I was previously running it, through ctx.Database.ExecuteSqlCommand?
What is the way I can run this chunked query code in LINQ?

I would create a SQL Server stored procedure that gets the employee ids as a parameter. Let's call it 'sp_deleteEmployees' with the param @ids.
Then in C# you create a string of the ids:
string idsList = "3, 4, 6, 7, 14, 17, 20, 21, 22";
context.Database.ExecuteSqlCommand("sp_deleteEmployees @ids = {0}", idsList);
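If the ids start out as a collection, string.Join produces the same comma-separated value; a small sketch (the ids array here is just the question's example list):
int[] ids = { 3, 4, 6, 7, 14, 17, 20, 21, 22 };
string idsList = string.Join(", ", ids);
context.Database.ExecuteSqlCommand("sp_deleteEmployees @ids = {0}", idsList);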
EDIT
Sorry, I guess I didn't understand the problem. If you need to delete the employees in chunks you can split the list of employees with this:
// Extension method (it must live in a static class) that splits a sequence
// into consecutive chunks of at most 'length' items.
public static List<IEnumerable<T>> Partition<T>(this IEnumerable<T> source, int length)
{
    var count = source.Count();
    var numberOfPartitions = count / length + (count % length > 0 ? 1 : 0);
    List<IEnumerable<T>> result = new List<IEnumerable<T>>();
    for (int i = 0; i < numberOfPartitions; i++)
    {
        result.Add(source.Skip(length * i).Take(length));
    }
    return result;
}
You can use this method to split the list into small chunks and delete them one chunk at a time, as in the sketch below.
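For illustration, here is a minimal sketch of how the chunked delete could look when combined with the Partition method above. The employeeIds list, the chunk size of 500 and the ApplicationDbContext usage are assumptions taken from the question, not part of the original answer.
// Hypothetical usage: delete the entries of at most 500 employees per statement.
var employeeIds = new List<int> { 3, 4, 6, 7, 14, 17, 20, 21, 22 /* ...100 more */ };
using (var ctx = new ApplicationDbContext(schemaName))
{
    foreach (var chunk in employeeIds.Partition(500))
    {
        // Build one IN (...) list per chunk; the ids are integers here, so string
        // concatenation is acceptable, but parameters are preferable in general.
        var inList = string.Join(", ", chunk);
        var sql = "DELETE FROM [Entry] " +
                  "WHERE CompanyId = 1 " +
                  "AND EmployeeId IN (" + inList + ") " +
                  "AND Entry_Date = '2016-12-01' " +
                  "AND Entry_Method = 'I'";
        ctx.Database.ExecuteSqlCommand(sql);
    }
}
This keeps each DELETE small so the locks it takes are short-lived, which is the same goal the SET ROWCOUNT loop tries to achieve.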

Related

How to update multiple rows of same column in one query using FoxPro database?

I want to update multiple balances according to their invoice numbers in just one query, but FoxPro does not seem to accept the code below.
PS: I'm only using Visual FoxPro 6 as a database on my Classic ASP website.
UPDATE accounts_receivables SET balance =
CASE invoice_no
WHEN 3 THEN 6
WHEN 4 THEN 8
ELSE balance
END
WHERE invoice_no IN (3,4)
You must use the ICASE() function instead, like this:
UPDATE accounts_receivables SET balance =
    ICASE(
        invoice_no = 3, 6,
        invoice_no = 4, 8,
        balance
    )
WHERE invoice_no IN (3,4)
This would work with VFP6:
UPDATE accounts_receivables SET balance = iif(invoice_no=3, 6, iif(invoice_no=4, 8, balance)) WHERE invoice_no IN (3,4)
That is also logically the same as writing:
UPDATE accounts_receivables SET balance = iif(invoice_no=3, 6, 8) WHERE invoice_no IN (3,4)
as long as you are only filtering on 3 and 4.
However, you could also do this via VFPOLEDB (even from within VFP6), and then you could write it as a VFP9-supported query (since you say "ASP", you are likely already accessing the data via a VFP driver instead of direct access, so you might simply use VFPOLEDB and utilize VFP9-supported SQL).
Sample in VFP (almost the same code from within ASP or VBA):
Local oCon, oCmd
oCon = Createobject("ADODB.Connection")
oCmd = Createobject("ADODB.Command")
oCmd.CommandType = 1
oCmd.CommandText = "UPDATE accounts_receivables"+;
" SET balance = icase(invoice_no=3, 6, invoice_no=4, 8, balance)"+;
" WHERE invoice_no IN (3,4)"
oCon.ConnectionString = "Provider=VFPOLEDB;Data Source=c:\MyDataFolder"
oCon.Open()
oCmd.ActiveConnection = oCon
oCmd.Execute
And speaking of ASP, if you really meant ASP.NET at this time rather than the old classic ASP, then you could do it even more simply, like:
string sql = @"UPDATE accounts_receivables
    SET balance = icase(invoice_no=3, 6, invoice_no=4, 8, balance)
    WHERE invoice_no IN (3,4)";

using (OleDbConnection con = new OleDbConnection(@"Provider=VFPOLEDB;Data Source=c:\MyDataFolder"))
using (OleDbCommand cmd = new OleDbCommand(sql, con))
{
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
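As a side note, if the invoice numbers and balances come from user input, a parameterized command avoids building SQL strings. This is only a sketch under the assumption that one UPDATE per invoice (instead of a single statement) is acceptable; OleDb parameters are positional (?), and the invoice/balance pairs below are hypothetical.
// Requires using System.Collections.Generic; and System.Data.OleDb;
// Hypothetical invoice -> new balance pairs (e.g. gathered from user input).
var newBalances = new Dictionary<int, decimal> { { 3, 6m }, { 4, 8m } };

using (OleDbConnection con = new OleDbConnection(@"Provider=VFPOLEDB;Data Source=c:\MyDataFolder"))
using (OleDbCommand cmd = new OleDbCommand(
    "UPDATE accounts_receivables SET balance = ? WHERE invoice_no = ?", con))
{
    con.Open();
    foreach (var pair in newBalances)
    {
        // OleDb binds parameters strictly by position; the names are just labels.
        cmd.Parameters.Clear();
        cmd.Parameters.AddWithValue("balance", pair.Value);
        cmd.Parameters.AddWithValue("invoice_no", pair.Key);
        cmd.ExecuteNonQuery();
    }
}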

PostgreSQL 8.3.11 locked; orphaned pg_toast database object recovery

Howdy Slack Overflowvians.
So I came across this PostgreSQL server running 8.3.11 (yeah I know), that was in a locked state with:
ERROR: database is not accepting commands to avoid wraparound data loss in database "postgres"
HINT: Stop the postmaster and use a standalone backend to vacuum that database.
Normally the autovacuum daemon (autovacuum=on) would handle this, but the XID of this database was never reset because of the following four corrupt TOAST database objects (TOAST allows storage of large field values in 8 kB slices, like bread).
Below is a snippet of the output when running the server in single-user mode with the admin user:
SELECT oid, relname, age(relfrozenxid) FROM pg_class WHERE relkind = 't' ORDER BY age(relfrozenxid) DESC LIMIT 4;
----
1: oid = "2421459" (typeid = 26, len = 4, typmod = -1, byval = t)
2: relname = "pg_toast_2421456" (typeid = 19, len = 64, typmod = -1, byval = f)
3: age = "2146484084" (typeid = 23, len = 4, typmod = -1, byval = t)
----
1: oid = "2421450" (typeid = 26, len = 4, typmod = -1, byval = t)
2: relname = "pg_toast_2421447" (typeid = 19, len = 64, typmod = -1, byval = f)
3: age = "2146484084" (typeid = 23, len = 4, typmod = -1, byval = t)
----
1: oid = "2421435" (typeid = 26, len = 4, typmod = -1, byval = t)
2: relname = "pg_toast_2421432" (typeid = 19, len = 64, typmod = -1, byval = f)
3: age = "2146484084" (typeid = 23, len = 4, typmod = -1, byval = t)
----
1: oid = "2421426" (typeid = 26, len = 4, typmod = -1, byval = t)
2: relname = "pg_toast_2421423" (typeid = 19, len = 64, typmod = -1, byval = f)
3: age = "2146484084" (typeid = 23, len = 4, typmod = -1, byval = t)
Notice the age is well above the vacuum_freeze_min_age (the value set after a successful VACUUM) on this server, which is why it was issuing the original errors above. The above was AFTER running a VACUUM FULL; all other tables were fine.
SELECT relfilenode FROM pg_class WHERE oid=2421459;
So when we looked on disk (using the pg_class.relfilenode value for each table above), the toast table's file was missing:
$ find /var/lib/pgsql/data/ -type f -name '2421426' | wc -l # Bad toast
0
and when we looked on disk at the index of the toast table:
SELECT relfilenode FROM pg_class WHERE oid = (SELECT reltoastidxid FROM pg_class WHERE oid=2421459);
$ find /var/lib/pgsql/data/ -type f -name '2421459' | wc -l # Bad toast's index
0
We then tried to find the table that the bad toast record is related to with:
SELECT * FROM pg_class WHERE reltoastrelid=2421459;
and got 0 results for each table above! So there are no parent tables through which the VACUUM command could reset the XID of these relations.
We then checked the pg_depend table and found that these TOAST tables have NO references:
SELECT * FROM pg_depend WHERE refobjid IN(2421459,2421450,2421435,2421426)
Question
1. Can you delete the bad TOAST table and TOAST table indexes from the pg_class table (e.g. DELETE FROM pg_class WHERE oid=2421459)?
2. Are there any other tables from which we also need to remove the relation?
3. Could we just create a temp table and link it to the TOAST's index's oid?
Example for #3 above:
CREATE TABLE adoptedparent (colnameblah char(1));
UPDATE pg_class SET reltoastrelid=2421459 WHERE relname='adoptedparent';
VACUUM FULL VERBOSE adoptedparent
EDIT:
select txid_current() is 3094769499 so these tables were corrupted a long time ago. We don't need to recover the data. We are running ext4 file system on Linux 2.6.18-238.el5. We checked the relevant lost+found/ directories and the files were not there.
Just for the home audience, in this particular case the resolution was to edit pg_class directly. And update the server to a supported version of Postgres, of course!
Specific answers:
1. Yes you can, although in most cases it's better to create an empty table, attach the toast relation to that table, add the pg_depend entries, and drop the table. In this case, that didn't make sense because there were truly no other objects depending on those toast tables.
2. Usually toast tables also have an index in pg_index, and entries in pg_depend. These did not.
3. See above.

Does Dapper use numbered parameters such as in Massive

Does Dapper use numbered parameters as in Massive (@0, @1, ...), as opposed to named ones (@a, @b, ...)?
I need to create a query like this:
//select @0 as val union select @1 union select @2 union select @3 union select @4
//union select @5 union select @6 union select @7 union select @8 union select @9
var sb = new StringBuilder("select @0 as val");
for (int i = 1; i < 10; i++)
{
    sb.AppendFormat(" union select @{0}", i);
}
var query = sb.ToString();
//---Dapper = fail
var db = Connection;
var list = db.Query(query, param: new object[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
//---Massive = ok
var db2 = new Massive.DynamicModel(coins);
var list2 = db2.Query(query, args: new object[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
What are the solutions to the problem for Dapper?
Massive uses positional arguments in its queries but Dapper uses named ones. So in Massive you can pass in parameter arrays like new int[] {1, 2, 3}, while in Dapper you need to pass in parameter objects like new { a = 1, b = 2 }.
To achieve a similar solution with Dapper you can create a DynamicParameters object from a parameter dictionary, where the key is the name of the parameter and the value is the value of the parameter, something like {"0", 0}, {"1", 1}, etc.
You can easily turn an array into a dictionary with LINQ:
var dictionary = new object[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
.Select((item, ind) => new {ind = ind.ToString(), item})
.ToDictionary(item => item.ind, item => item.item);
DynamicParameters p = new DynamicParameters(dictionary);
var list = db.Query(query, param: p);
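As a variation on the same idea (just a sketch, not the only way), you can add the named parameters while building the query, which avoids the intermediate dictionary; the p0, p1, ... names here are an arbitrary choice:
var p = new DynamicParameters();
var sb = new StringBuilder("select @p0 as val");
p.Add("p0", 0);
for (int i = 1; i < 10; i++)
{
    sb.AppendFormat(" union select @p{0}", i);
    p.Add("p" + i, i); // named parameter matching the placeholder just appended
}
var list = db.Query(sb.ToString(), param: p);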

SQL Server GUID sort algorithm. Why?

Problem with UniqueIdentifiers
We have an existing database which uses uniqueidentifiers extensively (unfortunately!), both as primary keys and in some nullable columns of some tables. We came across a situation where some reports that run on these tables sort on these uniqueidentifiers because there is no other column in the table that would give a meaningful sort (isn't that ironic!). The intent was to sort so that the items show up in the order they were inserted, but they were not inserted using NewSequentialId() - hence a waste of time.
Fact about the Sort Algorithm
Anyway, considering SQL Server sorts uniqueidentifiers based on byte groups starting from the ending 5th byte group (6 bytes) and moving towards the 1st byte group (4 bytes) reversing the order on the 3rd byte-group (2 bytes) from right-left to left-right,
My Question
I was curious to know if there is any real-life situation where this kind of sort helps at all.
How does SQL Server store the uniqueidentifier internally, which might provide insight on why it has this whacky sort algorithm?
Reference:
Alberto Ferrari's discovery of the SQL Server GUID sort
Example
Uniqueidentifiers are sorted as shown below when you use an ORDER BY on a uniqueidentifier column containing the data below.
Please note that the data below is sorted in ascending order, and the highest sort precedence runs from the 5th byte group towards the 1st byte group (backwards).
-- 1st byte group of 4 bytes sorted in the reverse (left-to-right) order below --
01000000-0000-0000-0000-000000000000
10000000-0000-0000-0000-000000000000
00010000-0000-0000-0000-000000000000
00100000-0000-0000-0000-000000000000
00000100-0000-0000-0000-000000000000
00001000-0000-0000-0000-000000000000
00000001-0000-0000-0000-000000000000
00000010-0000-0000-0000-000000000000
-- 2nd byte group of 2 bytes sorted in the reverse (left-to-right) order below --
00000000-0100-0000-0000-000000000000
00000000-1000-0000-0000-000000000000
00000000-0001-0000-0000-000000000000
00000000-0010-0000-0000-000000000000
-- 3rd byte group of 2 bytes sorted in the reverse (left-to-right) order below --
00000000-0000-0100-0000-000000000000
00000000-0000-1000-0000-000000000000
00000000-0000-0001-0000-000000000000
00000000-0000-0010-0000-000000000000
-- 4th byte group of 2 bytes sorted in the straight (right-to-left) order below --
00000000-0000-0000-0001-000000000000
00000000-0000-0000-0010-000000000000
00000000-0000-0000-0100-000000000000
00000000-0000-0000-1000-000000000000
-- 5th byte group of 6 bytes sorted in the straight (right-to-left) order below --
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000010
00000000-0000-0000-0000-000000000100
00000000-0000-0000-0000-000000001000
00000000-0000-0000-0000-000000010000
00000000-0000-0000-0000-000000100000
00000000-0000-0000-0000-000001000000
00000000-0000-0000-0000-000010000000
00000000-0000-0000-0000-000100000000
00000000-0000-0000-0000-001000000000
00000000-0000-0000-0000-010000000000
00000000-0000-0000-0000-100000000000
Code:
Alberto's code extended to denote that sorting is on the bytes and not on the individual bits.
With Test_UIDs As (-- 0 1 2 3 4 5 6 7 8 9 A B C D E F
Select ID = 1, UID = cast ('00000000-0000-0000-0000-100000000000' as uniqueidentifier)
Union Select ID = 2, UID = cast ('00000000-0000-0000-0000-010000000000' as uniqueidentifier)
Union Select ID = 3, UID = cast ('00000000-0000-0000-0000-001000000000' as uniqueidentifier)
Union Select ID = 4, UID = cast ('00000000-0000-0000-0000-000100000000' as uniqueidentifier)
Union Select ID = 5, UID = cast ('00000000-0000-0000-0000-000010000000' as uniqueidentifier)
Union Select ID = 6, UID = cast ('00000000-0000-0000-0000-000001000000' as uniqueidentifier)
Union Select ID = 7, UID = cast ('00000000-0000-0000-0000-000000100000' as uniqueidentifier)
Union Select ID = 8, UID = cast ('00000000-0000-0000-0000-000000010000' as uniqueidentifier)
Union Select ID = 9, UID = cast ('00000000-0000-0000-0000-000000001000' as uniqueidentifier)
Union Select ID = 10, UID = cast ('00000000-0000-0000-0000-000000000100' as uniqueidentifier)
Union Select ID = 11, UID = cast ('00000000-0000-0000-0000-000000000010' as uniqueidentifier)
Union Select ID = 12, UID = cast ('00000000-0000-0000-0000-000000000001' as uniqueidentifier)
Union Select ID = 13, UID = cast ('00000000-0000-0000-0001-000000000000' as uniqueidentifier)
Union Select ID = 14, UID = cast ('00000000-0000-0000-0010-000000000000' as uniqueidentifier)
Union Select ID = 15, UID = cast ('00000000-0000-0000-0100-000000000000' as uniqueidentifier)
Union Select ID = 16, UID = cast ('00000000-0000-0000-1000-000000000000' as uniqueidentifier)
Union Select ID = 17, UID = cast ('00000000-0000-0001-0000-000000000000' as uniqueidentifier)
Union Select ID = 18, UID = cast ('00000000-0000-0010-0000-000000000000' as uniqueidentifier)
Union Select ID = 19, UID = cast ('00000000-0000-0100-0000-000000000000' as uniqueidentifier)
Union Select ID = 20, UID = cast ('00000000-0000-1000-0000-000000000000' as uniqueidentifier)
Union Select ID = 21, UID = cast ('00000000-0001-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 22, UID = cast ('00000000-0010-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 23, UID = cast ('00000000-0100-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 24, UID = cast ('00000000-1000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 25, UID = cast ('00000001-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 26, UID = cast ('00000010-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 27, UID = cast ('00000100-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 28, UID = cast ('00001000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 29, UID = cast ('00010000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 30, UID = cast ('00100000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 31, UID = cast ('01000000-0000-0000-0000-000000000000' as uniqueidentifier)
Union Select ID = 32, UID = cast ('10000000-0000-0000-0000-000000000000' as uniqueidentifier)
)
Select * From Test_UIDs Order By UID, ID
The algorithm is documented by the SQL Server guys here: How are GUIDs compared in SQL Server 2005? I quote it here (since it's an old article that may be gone forever in a few years):
In general, equality comparisons make a lot of sense with
uniqueidentifier values. However, if you find yourself needing general
ordering, then you might be looking at the wrong data type and should
consider various integer types instead.
If, after careful thought, you decide to order on a uniqueidentifier
column, you might be surprised by what you get back.
Given these two uniqueidentifier values:
@g1 = '55666BEE-B3A0-4BF5-81A7-86FF976E763F'
@g2 = '8DD5BCA5-6ABE-4F73-B4B7-393AE6BBB849'
Many people think that @g1 is less than @g2, since '55666BEE' is
certainly smaller than '8DD5BCA5'. However, this is not how SQL Server
2005 compares uniqueidentifier values.
The comparison is made by looking at byte "groups" right-to-left, and
left-to-right within a byte "group". A byte group is what is delimited
by the '-' character. More technically, we look at bytes {10 to 15}
first, then {8-9}, then {6-7}, then {4-5}, and lastly {0 to 3}.
In this specific example, we would start by comparing '86FF976E763F' with '393AE6BBB849'. Immediately we see that @g1 is in fact greater than @g2.
Note that in .NET languages, Guid values have a different default sort
order than in SQL Server. If you find the need to order an array or
list of Guid using SQL Server comparison semantics, you can use an
array or list of SqlGuid instead, which implements IComparable in a
way which is consistent with SQL Server semantics.
Plus, the sort follows byte-group endianness (see here: Globally unique identifier). The groups 10-15 and 8-9 are stored as big endian (corresponding to Data4 in the Wikipedia article), so they are compared as big endian. The other groups are compared using little endian.
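To see this from .NET without a round trip to the database, the System.Data.SqlTypes.SqlGuid type mentioned in the quote compares with SQL Server semantics while System.Guid does not. A minimal sketch using the two values from the quoted article (expected results based on the rules above):
using System;
using System.Data.SqlTypes;

class GuidCompareDemo
{
    static void Main()
    {
        var g1 = new Guid("55666BEE-B3A0-4BF5-81A7-86FF976E763F");
        var g2 = new Guid("8DD5BCA5-6ABE-4F73-B4B7-393AE6BBB849");

        // System.Guid compares the leading bytes first, so g1 sorts before g2.
        Console.WriteLine(g1.CompareTo(g2)); // negative

        // SqlGuid uses SQL Server's byte-group order (last 6 bytes first),
        // so g1 sorts after g2.
        Console.WriteLine(new SqlGuid(g1).CompareTo(new SqlGuid(g2))); // positive
    }
}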
A special service for those that find the accepted answer a bit vague. The code speaks for itself; the magical parts are:
System.Guid g
g.ToByteArray();
int[] m_byteOrder = new int[16] // 16 Bytes = 128 Bit
{10, 11, 12, 13, 14, 15, 8, 9, 6, 7, 4, 5, 0, 1, 2, 3};
public int Compare(Guid x, Guid y)
{
byte byte1, byte2;
//Swap to the correct order to be compared
for (int i = 0; i < NUM_BYTES_IN_GUID; i++)
{
byte1 = x.ToByteArray()[m_byteOrder[i]];
byte2 = y.ToByteArray()[m_byteOrder[i]];
if (byte1 != byte2)
return (byte1 < byte2) ? (int)EComparison.LT : (int)EComparison.GT;
} // Next i
return (int)EComparison.EQ;
}
Full code:
namespace BlueMine.Data
{
public class SqlGuid
: System.IComparable
, System.IComparable<SqlGuid>
, System.Collections.Generic.IComparer<SqlGuid>
, System.IEquatable<SqlGuid>
{
private const int NUM_BYTES_IN_GUID = 16;
// Comparison orders.
private static readonly int[] m_byteOrder = new int[16] // 16 Bytes = 128 Bit
{10, 11, 12, 13, 14, 15, 8, 9, 6, 7, 4, 5, 0, 1, 2, 3};
private byte[] m_bytes; // the SqlGuid is null if m_value is null
public SqlGuid(byte[] guidBytes)
{
if (guidBytes == null || guidBytes.Length != NUM_BYTES_IN_GUID)
throw new System.ArgumentException("Invalid array size");
m_bytes = new byte[NUM_BYTES_IN_GUID];
guidBytes.CopyTo(m_bytes, 0);
}
public SqlGuid(System.Guid g)
{
m_bytes = g.ToByteArray();
}
public byte[] ToByteArray()
{
byte[] ret = new byte[NUM_BYTES_IN_GUID];
m_bytes.CopyTo(ret, 0);
return ret;
}
int CompareTo(object obj)
{
if (obj == null)
return 1; // https://msdn.microsoft.com/en-us/library/system.icomparable.compareto(v=vs.110).aspx
System.Type t = obj.GetType();
if (object.ReferenceEquals(t, typeof(System.DBNull)))
return 1;
if (object.ReferenceEquals(t, typeof(SqlGuid)))
{
SqlGuid ui = (SqlGuid)obj;
return this.Compare(this, ui);
} // End if (object.ReferenceEquals(t, typeof(UInt128)))
return 1;
} // End Function CompareTo(object obj)
int System.IComparable.CompareTo(object obj)
{
return this.CompareTo(obj);
}
int CompareTo(SqlGuid other)
{
return this.Compare(this, other);
}
int System.IComparable<SqlGuid>.CompareTo(SqlGuid other)
{
return this.Compare(this, other);
}
enum EComparison : int
{
LT = -1, // itemA precedes itemB in the sort order.
EQ = 0, // itemA occurs in the same position as itemB in the sort order.
GT = 1 // itemA follows itemB in the sort order.
}
public int Compare(SqlGuid x, SqlGuid y)
{
byte byte1, byte2;
//Swap to the correct order to be compared
for (int i = 0; i < NUM_BYTES_IN_GUID; i++)
{
byte1 = x.m_bytes[m_byteOrder[i]];
byte2 = y.m_bytes[m_byteOrder[i]];
if (byte1 != byte2)
return (byte1 < byte2) ? (int)EComparison.LT : (int)EComparison.GT;
} // Next i
return (int)EComparison.EQ;
}
int System.Collections.Generic.IComparer<SqlGuid>.Compare(SqlGuid x, SqlGuid y)
{
return this.Compare(x, y);
}
public bool Equals(SqlGuid other)
{
return Compare(this, other) == 0;
}
bool System.IEquatable<SqlGuid>.Equals(SqlGuid other)
{
return this.Equals(other);
}
}
}
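For completeness, a small usage sketch (assuming the BlueMine.Data.SqlGuid class above is compiled into the project): wrap System.Guid values to sort them with SQL Server semantics, then unwrap.
using System;
using System.Collections.Generic;
using System.Linq;

class SortDemo
{
    static void Main()
    {
        var guids = new List<Guid>
        {
            new Guid("8DD5BCA5-6ABE-4F73-B4B7-393AE6BBB849"),
            new Guid("55666BEE-B3A0-4BF5-81A7-86FF976E763F")
        };

        // OrderBy falls back to the IComparable<SqlGuid> implementation above,
        // so the result matches SQL Server's ORDER BY on a uniqueidentifier column.
        var sqlServerOrder = guids
            .Select(g => new BlueMine.Data.SqlGuid(g))
            .OrderBy(g => g)
            .Select(g => new Guid(g.ToByteArray()))
            .ToList();
    }
}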
Here's a different approach. The GUID string is simply shuffled around, ready for a normal string comparison that matches how it occurs in SQL Server. This is JavaScript but it is very easy to convert to any language.
function guidForComparison(guid) {
/*
character positions:
11111111112222222222333333
012345678901234567890123456789012345
00000000-0000-0000-0000-000000000000
byte positions:
111111111111
00112233 4455 6677 8899 001122334455
*/
return guid.substr(24, 12) +
guid.substr(19, 4) +
guid.substr(16, 2) +
guid.substr(14, 2) +
guid.substr(11, 2) +
guid.substr(9, 2) +
guid.substr(6, 2) +
guid.substr(4, 2) +
guid.substr(2, 2) +
guid.substr(0, 2);
};

Optimize delete query generated by Castle ActiveRecord

Let's say I have an Id (primary key) list that I want to delete (e.g. 1, 2, 3, 4).
Using this query:
Console.WriteLine ("DELETE DATA :");
ActiveRecordMediator<PostgrePerson>.DeleteAll ("Id IN (1, 2, 3, 4)");
I expect the console output to be:
DELETE DATA :
NHibernate: DELETE FROM Person WHERE Id IN (1, 2, 3, 4)
but the actual console output is (I use the showsql option):
DELETE DATA :
NHibernate: select postgreper0_.Id as Id5_, postgreper0_.Name as Name5_, postgreper0_.Age as Age5_, postgreper0_.Address as Address5_ from Person postgreper0_ where postgreper0_.Id in (1 , 2 , 3 , 4)
NHibernate: DELETE FROM Person WHERE Id = :p0;:p0 = 1
NHibernate: DELETE FROM Person WHERE Id = :p0;:p0 = 2
NHibernate: DELETE FROM Person WHERE Id = :p0;:p0 = 3
NHibernate: DELETE FROM Person WHERE Id = :p0;:p0 = 4
What should I do to make Castle ActiveRecord generate the expected (optimized) query?
Update
This is my implementation based on the accepted answer:
int[] idList = GetIdList ();
ActiveRecordMediator<PostgrePerson>.Execute ((session, obj) => {
string hql = "DELETE PostgrePerson WHERE Id IN (:idList)";
return session.CreateQuery (hql)
.SetParameterList ("idList", idList)
.ExecuteUpdate ();
}, null);
Use the Execute callback method and run a DML-style HQL DELETE on the NHibernate ISession.
