Add AttachToTransaction to action in FastCrud - Dapper

I'm trying to implement a UnitOfWork/Repository pattern using FastCrud.
I have created a generic repository:
public interface IRepository<T> where T : BaseEntity
{
IDbTransaction Transaction { get; set; }
T Get(T entityKeys, Action<ISelectSqlSqlStatementOptionsBuilder<T>> statementOptions = null);
IEnumerable<T> Find(Action<IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<T>> statementOptions = null);
int Count(Action<IConditionalSqlStatementOptionsBuilder<T>> statementOptions = null);
bool Delete(T entityToDelete, Action<IStandardSqlStatementOptionsBuilder<T>> statementOptions = null);
}
From the service I call
var repo = UnitOfWork.GetRepository<MyTable>();
var myList = repo.Find(statement => statement
.AttachToTransaction(repo.Transaction)
.OrderBy($"{nameof(MyTable.Name):C}")
);
This works, but I don't want the service to handle the AttachToTransaction call; instead I would like to move it into my repository:
public IEnumerable<T> Find(Action<IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<T>> statementOptions = null)
{
return Connection.Find<T>(statementOptions);
}
But here the statementOptions parameter is a delegate (an Action), so I can't simply call
statementOption.AttachToTransaction(this.Transaction)
My UnitOfWork always creates a transaction, so if I skip attaching to it I get an exception:
An unhandled exception occurred while processing the request.
InvalidOperationException: ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.

You can do it like this:
public IEnumerable<T> Find(Action<IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<T>> statementOptions = null)
{
statementOptions += s => s.AttachToTransaction(this.Transaction);
return Connection.Find<T>(statementOptions);
}
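The += appends the attach call after any options the caller supplied, so the service no longer needs to know about the transaction. A sketch of what the call from the question would then look like (same MyTable entity as above):
var repo = UnitOfWork.GetRepository<MyTable>();
// no AttachToTransaction needed here any more; the repository adds it
var myList = repo.Find(statement => statement
    .OrderBy($"{nameof(MyTable.Name):C}")
);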

I had the same issue. I used this extension method to resolve it:
internal static IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<TEntity> AttachToTransaction<TEntity>(
this IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<TEntity> statement,
Action<IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<TEntity>> originalStatementOptionsBuilder,
IDbTransaction transaction)
{
if (originalStatementOptionsBuilder == null)
{
statement.AttachToTransaction(transaction);
}
else
{
originalStatementOptionsBuilder(statement);
statement.AttachToTransaction(transaction);
}
return statement;
}
Now your repository's Find method changes like this:
public IEnumerable<T> Find(Action<IRangedBatchSelectSqlSqlStatementOptionsOptionsBuilder<T>> statementOptions = null)
{
return Connection.Find<T>(s => s.AttachToTransaction(statementOptions, this.Transaction));
}

Related

EF Core 6 "normal" update method doesn't respect expected RowVersion behavior?

I have a .NET 6 API project that allows users to fetch resources from a database (SQL Server), update them in a web client, and submit the updated resource back to be saved to the database. I need to notify users if another user has already updated the same resource during editing. I tried using the EF IsRowVersion property for this concurrency check.
I noticed that the "normal" update procedure (just getting the entity, changing properties and saving) does not respect the expected RowVersion behavior. But if I get the entity using AsNoTracking and use the db.Update method, the concurrency check works as expected. What could be the reason, and is db.Update the only way to force the RowVersion check? That method has the downside that it tries to update every property, not just those that have changed. A simplified, runnable console app example is below:
using Microsoft.EntityFrameworkCore;
Guid guid;
using (PeopleContext db = new())
{
Person p = new() { Name = "EF", Age = 30 };
db.Database.EnsureDeleted();
db.Database.EnsureCreated();
db.People.Add(p);
await db.SaveChangesAsync();
guid = p.Id;
}
using (PeopleContext db = new())
{
Person p = await db.People.FirstAsync(x => x.Id == guid);
p.Name = "FE";
p.RowVersion = Convert.FromBase64String("AAAAAADDC9I=");
await db.SaveChangesAsync(); // Does not throw even though RowVersion is incorrect
}
using (PeopleContext db = new())
{
Person p = await db.People.AsNoTracking().FirstAsync(x => x.Id == guid);
p.Name = "EFFE";
p.RowVersion = Convert.FromBase64String("AAAAAAGGC9I=");
db.People.Update(p);
await db.SaveChangesAsync(); // Throws DbUpdateConcurrencyException as expected, but updates all properties
}
public class Person
{
public Guid Id { get; set; }
public string Name { get; set; } = string.Empty;
public int Age { get; set; }
public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}
public class PeopleContext : DbContext
{
public PeopleContext(){}
public DbSet<Person> People => Set<Person>();
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseSqlServer(@"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=EFRowVersionDb;Integrated Security=True;");
optionsBuilder.LogTo(Console.WriteLine, Microsoft.Extensions.Logging.LogLevel.Information);
optionsBuilder.EnableSensitiveDataLogging();
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Person>(entity =>
{
entity.Property(e => e.RowVersion)
.IsRequired()
.IsRowVersion();
});
}
}
I solved the problem by overriding the SaveChangesAsync method like this:
public override Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default)
{
foreach (var item in ChangeTracker.Entries().Where(x=>x.State == EntityState.Modified))
{
item.OriginalValues["RowVersion"] = item.CurrentValues["RowVersion"];
}
return base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
I override that particular overload because the parameterless one calls it internally. The same applies to the sync version.
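For completeness, a minimal sketch of the matching sync override, following the same pattern (same PeopleContext assumed):
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    // copy the client-supplied RowVersion into OriginalValues so EF uses it in the UPDATE's WHERE clause
    foreach (var item in ChangeTracker.Entries().Where(x => x.State == EntityState.Modified))
    {
        item.OriginalValues["RowVersion"] = item.CurrentValues["RowVersion"];
    }
    return base.SaveChanges(acceptAllChangesOnSuccess);
}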

Spring: get database table values on server startup

We are creating a Spring and Hibernate application that uses a legacy database.
Our requirement is to get values from a few database tables on server startup.
We are planning to put these values in properties files so that we don't need to query the DB for them again and again.
We have used an ApplicationListener to hook into startup, following this Stack Overflow question:
Listener for server starup and all spring bean loaded completely
The code being used is as below:
@Component
public class SpringContextListener implements ApplicationListener<ContextRefreshedEvent> {
private List<Yosemitecompany> companyList = new ArrayList<Yosemitecompany>();
private YosemitecompanyRI iYosemitecompanyBO;
public SpringContextListener(){
}
public SpringContextListener(YosemitecompanyRI iYosemitecompanyBO) {
this.iYosemitecompanyBO = iYosemitecompanyBO;
}
public void onApplicationEvent(final ContextRefreshedEvent event) {
System.out.println("ApplicationListener Started"+iYosemitecompanyBO);
if(companyList == null || (companyList != null && companyList.size() <= 0) && iYosemitecompanyBO != null)
{
companyList = iYosemitecompanyBO.getCompanyDetailsWithStatus();
}
}
public List<Yosemitecompany> getCompanyList()
{
return companyList;
}
}
and this is the repository class
@Repository
@Transactional
public class YosemitecompanyRI implements IYosemitecompanyR{
static final Logger log = Logger.getLogger("YosemitecompanyDAOI");
@Autowired
private SessionFactory sessionFactory;
protected Session getSession() {
log.info(sessionFactory);
if (sessionFactory != null)
return sessionFactory.getCurrentSession();
else
return null;
}
@Override
public List<Yosemitecompany> getCompanyDetailsWithStatus()
{
List<Yosemitecompany> results = new ArrayList<Yosemitecompany>();
log.info("reached "+getSession());
if(getSession() != null)
{
log.info("executing query");
Criteria cr = getSession().createCriteria(Yosemitecompany.class);
cr.add(Restrictions.eq("cmpstatus",new BigDecimal(1)));
results = (List<Yosemitecompany>)cr.list();
}
return results;
}
}
Now on server startup I always get sessionFactory as null, so my code for getting the list never gets executed.
I am new to Spring and Hibernate. If this approach is fine, please help me figure out what I am doing wrong; if there is a better approach, please suggest that too.
Thanks in advance.

Where does Breeze fit into an n-tier architecture?

I am trying to fit BreezeJS into my existing architecture. I have a structure like:
HTML/JS/Angular :: the view, built using HotTowel Angular.
Web API controllers :: what the view calls.
Services layer :: called from the Web API; any business logic goes here.
Unit of Work :: if business logic needs to talk to the database for CRUD, it calls the UOW.
Repository pattern :: the UOW actually wraps repositories, and the repositories in turn talk to DbContexts.
Up till now I was able to convert the normal repository implementation into one using
public EFContextProvider<MyContext> DbContext { get; set; }
instead of just DbContext. I am also exposing metadata using a string property within the UOW, and IQueryables are returned using DbContext.Context.SomeEntity.
Question 1: Am I on the right track?
Question 2: Most of the Breeze examples suggest one SaveChanges method that receives all the changed entities and persists them at once. What if I want to trigger some business logic before Add, Update and Delete? I want to call my AddSomething service method, have a particular type of entity sent to AddSomething, and run some business logic before persistence. How can I put this together?
My code looks like:
[BreezeController]//This is the controller
public class BreezeController : ApiController
{
private readonly ISomeService someService;
public BreezeController(ISomeService someService)
{
this.someService = someService;
}
// ~/breeze/todos/Metadata
[HttpGet]
public string Metadata()
{
return someService.MetaData();
}
// ~/breeze/todos/Todos
// ~/breeze/todos/Todos?$filter=IsArchived eq false&$orderby=CreatedAt
[HttpGet]
public IQueryable<Node> Nodes()
{
return nodesService.GetAllNodes().AsQueryable();
}
// ~/breeze/todos/SaveChanges
//[HttpPost]
//public SaveResult SaveChanges(JObject saveBundle)
//{
// return _contextProvider.SaveChanges(saveBundle);
//}
Below is the service
public class SomeService : BaseService, ISomeService
{
private readonly IUow Uow;
public SomeService(IUow Uow)
: base(Uow)
{
this.Uow = Uow;
}
public IEnumerable<Something> GetAllNodes()
{
return Uow.Somethings.GetAll();
}
}
Every service can expose one property through the base class; that is actually the metadata:
public class BaseService : IBaseService
{
private readonly IUow Uow;
public BaseService(IUow Uow)
{
this.Uow = Uow;
}
public string MetaData()
{
return Uow.MetaData;
}
}
And my UOW looks like:
public class VNUow : IUow, IDisposable
{
public VNUow(IRepositoryProvider repositoryProvider)
{
CreateDbContext();
repositoryProvider.DbContext = DbContext;
RepositoryProvider = repositoryProvider;
}
// Code Camper repositories
public IRepository<Something> NodeGroup { get { return GetStandardRepo<Something>(); } }
public IRepository<Node> Nodes { get { return GetStandardRepo<Node>(); } }
/// <summary>
/// Save pending changes to the database
/// </summary>
public void Commit()
{
//System.Diagnostics.Debug.WriteLine("Committed");
DbContext.Context.SaveChanges();
}
public string MetaData // the Name property
{
get
{
return DbContext.Metadata();
}
}
protected void CreateDbContext()
{
// DbContext = new VNContext();
DbContext = new EFContextProvider<VNContext>();
// Load navigation properties always if it is true
DbContext.Context.Configuration.LazyLoadingEnabled = false;
// Do NOT enable proxied entities, else serialization fails
DbContext.Context.Configuration.ProxyCreationEnabled = true;
// Because Web API will perform validation, we don't need/want EF to do so
DbContext.Context.Configuration.ValidateOnSaveEnabled = false;
//DbContext.Configuration.AutoDetectChangesEnabled = false;
// We won't use this performance tweak because we don't need
// the extra performance and, when autodetect is false,
// we'd have to be careful. We're not being that careful.
}
protected IRepositoryProvider RepositoryProvider { get; set; }
private IRepository<T> GetStandardRepo<T>() where T : class
{
return RepositoryProvider.GetRepositoryForEntityType<T>();
}
private T GetRepo<T>() where T : class
{
return RepositoryProvider.GetRepository<T>();
}
private EFContextProvider<VNContext> DbContext { get; set; }
#region IDisposable
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
if (DbContext != null)
{
DbContext.Context.Dispose();
}
}
}
#endregion
}
In the end, the repository implementation looks like:
public class EFRepository<T> : IRepository<T> where T : class
{
public EFRepository(EFContextProvider<VNContext> dbContext)
{
if (dbContext == null)
throw new ArgumentNullException("dbContext");
DbContext = dbContext;
DbSet = DbContext.Context.Set<T>();
}
protected EFContextProvider<VNContext> DbContext { get; set; }
protected DbSet<T> DbSet { get; set; }
public virtual IQueryable<T> GetAll()
{
return DbSet;
}
public virtual IQueryable<T> GetAllEagerLoad(params Expression<Func<T, object>>[] children)
{
children.ToList().ForEach(x => DbSet.Include(x).Load());
return DbSet;
}
public virtual IQueryable<T> GetAllEagerLoadSelective(string[] children)
{
foreach (var item in children)
{
DbSet.Include(item);
}
return DbSet;
}
public virtual IQueryable<T> GetAllLazyLoad()
{
return DbSet;
}
public virtual T GetById(int id)
{
//return DbSet.FirstOrDefault(PredicateBuilder.GetByIdPredicate<T>(id));
return DbSet.Find(id);
}
public virtual T GetByIdLazyLoad(int id, params Expression<Func<T, object>>[] children)
{
children.ToList().ForEach(x => DbSet.Include(x).Load());
return DbSet.Find(id);
}
public virtual void Add(T entity)
{
DbEntityEntry dbEntityEntry = DbContext.Context.Entry(entity);
if (dbEntityEntry.State != EntityState.Detached)
{
dbEntityEntry.State = EntityState.Added;
}
else
{
DbSet.Add(entity);
}
}
public virtual void Update(T entity)
{
DbEntityEntry dbEntityEntry = DbContext.Context.Entry(entity);
if (dbEntityEntry.State == EntityState.Detached)
{
DbSet.Attach(entity);
}
dbEntityEntry.State = EntityState.Modified;
}
public virtual void Delete(T entity)
{
DbEntityEntry dbEntityEntry = DbContext.Context.Entry(entity);
if (dbEntityEntry.State != EntityState.Deleted)
{
dbEntityEntry.State = EntityState.Deleted;
}
else
{
DbSet.Attach(entity);
DbSet.Remove(entity);
}
}
public virtual void Delete(int id)
{
var entity = GetById(id);
if (entity == null) return; // not found; assume already deleted.
Delete(entity);
}
}
Much of this question is broad and the answers will be primarily opinion based... that said, here's my two cents: keep it simple. Carefully consider whether you truly need layers 3, 4 and 5, and especially whether you need to implement UoW or the Repository pattern yourself. The EF DbContext already implements both; you could use it in your controllers directly if you wanted.
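For comparison, the standard Breeze samples skip those extra layers and use an EFContextProvider straight from the controller. A rough sketch along those lines (it assumes your VNContext and a Nodes DbSet on it; adjust names to your model):
[BreezeController]
public class BreezeController : ApiController
{
    // the provider wraps the DbContext and already gives you repository/unit-of-work semantics
    private readonly EFContextProvider<VNContext> _contextProvider = new EFContextProvider<VNContext>();

    [HttpGet]
    public string Metadata()
    {
        return _contextProvider.Metadata();
    }

    [HttpGet]
    public IQueryable<Node> Nodes()
    {
        return _contextProvider.Context.Nodes;
    }

    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle)
    {
        return _contextProvider.SaveChanges(saveBundle);
    }
}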
If you have custom logic that needs to execute prior to SaveChanges, use one of the interceptor methods: BeforeSaveEntity or BeforeSaveEntities. Here's the documentation for those methods:
http://www.getbreezenow.com/documentation/contextprovider#BeforeSaveEntity
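For the single-entity variant, a rough sketch of what overriding BeforeSaveEntity on a ContextProvider subclass could look like (VNContextProvider, Something and its Name property are placeholders based on the question's types):
public class VNContextProvider : EFContextProvider<VNContext>
{
    // called once per entity before the save; return false to silently exclude the entity,
    // or throw to reject the whole save
    protected override bool BeforeSaveEntity(EntityInfo entityInfo)
    {
        var something = entityInfo.Entity as Something;
        if (something != null)
        {
            // per-entity business rules go here (Name is a hypothetical property)
            return !string.IsNullOrWhiteSpace(something.Name);
        }
        return true;
    }
}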
Breeze also supports "named saves", where you specify the name of a specific server endpoint (i.e. your service method) on a per-save basis. See:
http://www.getbreezenow.com/documentation/saving-changes
This would look something like this on your client.
var saveOptions = new SaveOptions({ resourceName: "CustomSave1" });
em.saveChanges(entitiesToSave, saveOptions).then(function (saveResult) {
// .. do something interesting.
});
and on your server
[HttpPost]
public SaveResult CustomSave1(JObject saveBundle) {
ContextProvider.BeforeSaveEntitiesDelegate = CustomSave1Interceptor;
return ContextProvider.SaveChanges(saveBundle);
}
private Dictionary<Type, List<EntityInfo>> CustomSave1Interceptor(Dictionary<Type, List<EntityInfo>> saveMap) {
// In this method you can
// 1) validate entities in the saveMap and optionally throw an exception
// 2) update any of the entities in the saveMap
// 3) add new entities to the saveMap
// 4) delete entities from the save map.
// For example
List<EntityInfo> fooInfos;
if (saveMap.TryGetValue(typeof(Foo), out fooInfos)) {
// modify or delete any of the fooInfos,
// or add new EntityInfo instances to the fooInfos list.
}
return saveMap;
}

Easy way to dynamically invoke web services (without JDK or proxy classes)

In Python I can consume a web service so easily:
from suds.client import Client
client = Client('http://www.example.org/MyService/wsdl/myservice.wsdl') #create client
result = client.service.myWSMethod("Bubi", 15) #invoke method
print result #print the result returned by the WS method
I'd like to reach such a simple usage with Java.
With Axis or CXF you have to create a web service client, i.e. a package which reproduces all the web service methods so that we can invoke them as if they were normal methods. Let's call them proxy classes; usually they are generated by the wsdl2java tool.
Useful and user-friendly. But any time I add or modify a web service method and want to use it in a client program, I need to regenerate the proxy classes.
So I found CXF DynamicClientFactory, this technique avoids the use of proxy classes:
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.endpoint.dynamic.DynamicClientFactory;
//...
//create client
DynamicClientFactory dcf = DynamicClientFactory.newInstance();
Client client = dcf.createClient("http://www.example.org/MyService/wsdl/myservice.wsdl");
//invoke method
Object[] res = client.invoke("myWSMethod", "Bubi");
//print the result
System.out.println("Response:\n" + res[0]);
But unfortunately it creates and compiles the proxy classes at runtime, and hence requires a JDK on the production machine. I have to avoid this, or at least I can't rely on it.
My question:
Is there another way to dynamically invoke any method of a web service in Java, without having a JDK at runtime and without generating "static" proxy classes? Maybe with a different library? Thanks!
I know this is a really old question, but if you are still interested you could use the soap-ws GitHub project: https://github.com/reficio/soap-ws
Here is a really simple sample usage:
Wsdl wsdl = Wsdl.parse("http://www.webservicex.net/CurrencyConvertor.asmx?WSDL");
SoapBuilder builder = wsdl.binding()
.localPart("CurrencyConvertorSoap")
.find();
SoapOperation operation = builder.operation()
.soapAction("http://www.webserviceX.NET/ConversionRate")
.find();
Request request = builder.buildInputMessage(operation);
SoapClient client = SoapClient.builder()
.endpointUrl("http://www.webservicex.net/CurrencyConvertor.asmx")
.build();
String response = client.post(request);
As you can see it is really simple.
With CXF 3.x this is possible with a StaxDataBinding. Follow the steps below to get the basics; of course, this could be enhanced to suit your needs.
Create a StaxDataBinding something like the one below. Note that the code below can be refined further.
class StaxDataBinding extends AbstractInterceptorProvidingDataBinding {
private XMLStreamDataReader xsrReader;
private XMLStreamDataWriter xswWriter;
public StaxDataBinding() {
super();
this.xsrReader = new XMLStreamDataReader();
this.xswWriter = new XMLStreamDataWriter();
inInterceptors.add(new StaxInEndingInterceptor(Phase.POST_INVOKE));
inFaultInterceptors.add(new StaxInEndingInterceptor(Phase.POST_INVOKE));
inInterceptors.add(RemoveStaxInEndingInterceptor.INSTANCE);
inFaultInterceptors.add(RemoveStaxInEndingInterceptor.INSTANCE);
}
static class RemoveStaxInEndingInterceptor
extends AbstractPhaseInterceptor<Message> {
static final RemoveStaxInEndingInterceptor INSTANCE = new RemoveStaxInEndingInterceptor();
public RemoveStaxInEndingInterceptor() {
super(Phase.PRE_INVOKE);
addBefore(StaxInEndingInterceptor.class.getName());
}
public void handleMessage(Message message) throws Fault {
message.getInterceptorChain().remove(StaxInEndingInterceptor.INSTANCE);
}
}
public void initialize(Service service) {
for (ServiceInfo serviceInfo : service.getServiceInfos()) {
SchemaCollection schemaCollection = serviceInfo.getXmlSchemaCollection();
if (schemaCollection.getXmlSchemas().length > 1) {
// Schemas are already populated.
continue;
}
new ServiceModelVisitor(serviceInfo) {
public void begin(MessagePartInfo part) {
if (part.getTypeQName() != null
|| part.getElementQName() != null) {
return;
}
part.setTypeQName(Constants.XSD_ANYTYPE);
}
}.walk();
}
}
@SuppressWarnings("unchecked")
public <T> DataReader<T> createReader(Class<T> cls) {
if (cls == XMLStreamReader.class) {
return (DataReader<T>) xsrReader;
}
else {
throw new UnsupportedOperationException(
"The type " + cls.getName() + " is not supported.");
}
}
public Class<?>[] getSupportedReaderFormats() {
return new Class[] { XMLStreamReader.class };
}
@SuppressWarnings("unchecked")
public <T> DataWriter<T> createWriter(Class<T> cls) {
if (cls == XMLStreamWriter.class) {
return (DataWriter<T>) xswWriter;
}
else {
throw new UnsupportedOperationException(
"The type " + cls.getName() + " is not supported.");
}
}
public Class<?>[] getSupportedWriterFormats() {
return new Class[] { XMLStreamWriter.class, Node.class };
}
public static class XMLStreamDataReader implements DataReader<XMLStreamReader> {
public Object read(MessagePartInfo part, XMLStreamReader input) {
return read(null, input, part.getTypeClass());
}
public Object read(QName name, XMLStreamReader input, Class<?> type) {
return input;
}
public Object read(XMLStreamReader reader) {
return reader;
}
public void setSchema(Schema s) {
}
public void setAttachments(Collection<Attachment> attachments) {
}
public void setProperty(String prop, Object value) {
}
}
public static class XMLStreamDataWriter implements DataWriter<XMLStreamWriter> {
private static final Logger LOG = LogUtils
.getL7dLogger(XMLStreamDataWriter.class);
public void write(Object obj, MessagePartInfo part, XMLStreamWriter writer) {
try {
if (!doWrite(obj, writer)) {
// WRITE YOUR LOGIC HOW you WANT TO HANDLE THE INPUT DATA
//BELOW CODE JUST CALLS toString() METHOD
if (part.isElement()) {
QName element = part.getElementQName();
writer.writeStartElement(element.getNamespaceURI(),
element.getLocalPart());
if (obj != null) {
writer.writeCharacters(obj.toString());
}
writer.writeEndElement();
}
}
}
catch (XMLStreamException e) {
throw new Fault("COULD_NOT_READ_XML_STREAM", LOG, e);
}
}
public void write(Object obj, XMLStreamWriter writer) {
try {
if (!doWrite(obj, writer)) {
throw new UnsupportedOperationException("Data types of "
+ obj.getClass() + " are not supported.");
}
}
catch (XMLStreamException e) {
throw new Fault("COULD_NOT_READ_XML_STREAM", LOG, e);
}
}
private boolean doWrite(Object obj, XMLStreamWriter writer)
throws XMLStreamException {
if (obj instanceof XMLStreamReader) {
XMLStreamReader xmlStreamReader = (XMLStreamReader) obj;
StaxUtils.copy(xmlStreamReader, writer);
xmlStreamReader.close();
return true;
}
else if (obj instanceof XMLStreamWriterCallback) {
((XMLStreamWriterCallback) obj).write(writer);
return true;
}
return false;
}
public void setSchema(Schema s) {
}
public void setAttachments(Collection<Attachment> attachments) {
}
public void setProperty(String key, Object value) {
}
}
}
Prepare your input to match the expected input, something like below
private Object[] prepareInput(BindingOperationInfo operInfo, String[] paramNames,
String[] paramValues) {
List<Object> inputs = new ArrayList<Object>();
List<MessagePartInfo> parts = operInfo.getInput().getMessageParts();
if (parts != null && parts.size() > 0) {
for (MessagePartInfo partInfo : parts) {
QName element = partInfo.getElementQName();
String localPart = element.getLocalPart();
// whatever your input data you need to match data value for given element
// below code assumes names are paramNames variable and value in paramValues
for (int i = 0; i < paramNames.length; i++) {
if (paramNames[i].equals(localPart)) {
inputs.add(findParamValue(paramNames, paramValues, localPart));
}
}
}
}
return inputs.toArray();
}
Now set the proper data binding and pass the data
Bus bus = CXFBusFactory.getThreadDefaultBus();
WSDLServiceFactory sf = new WSDLServiceFactory(bus, wsdl);
sf.setAllowElementRefs(false);
Service svc = sf.create();
Client client = new ClientImpl(bus, svc, null,
SimpleEndpointImplFactory.getSingleton());
StaxDataBinding databinding = new StaxDataBinding();
svc.setDataBinding(databinding);
bus.getFeatures().add(new StaxDataBindingFeature());
BindingOperationInfo operInfo = ...//find the operation you need (see below)
Object[] inputs = prepareInput(operInfo, paramNames, paramValues);
client.invoke("operationname", inputs);
If needed, you can match the operation name with something like below:
private BindingOperationInfo findBindingOperation(Service service,
String operationName) {
for (ServiceInfo serviceInfo : service.getServiceInfos()) {
Collection<BindingInfo> bindingInfos = serviceInfo.getBindings();
for (BindingInfo bindingInfo : bindingInfos) {
Collection<BindingOperationInfo> operInfos = bindingInfo.getOperations();
for (BindingOperationInfo operInfo : operInfos) {
if (operInfo.getName().getLocalPart().equals(operationName)) {
if (operInfo.isUnwrappedCapable()) {
return operInfo.getUnwrappedOperation();
}
return operInfo;
}
}
}
}
return null;
}

Generating transaction id for in-memory databases

At the time of this writing, TRANSACTION_ID() does not support in-memory databases. I can generate my own IDs using a sequence table but it's not clear how to communicate existing IDs to triggers. The first trigger should generate a new ID. Subsequent triggers (in the same transaction) should share the existing ID.
I could use thread-local variables to share the existing ID but that seems fragile. Is there a better way to do this?
What about using sequences instead of transaction ids?
CREATE SEQUENCE SEQ;
The first operation in the transaction sets a session variable as follows:
SET @TID = SEQ.NEXTVAL;
The other operations within this transaction use the session variable:
CALL @TID;
I found a (very hacky) workaround:
/**
* Invoked when a transaction completes.
*/
public abstract class TransactionListener extends Value
{
private boolean invoked;
/**
* Invoked when the transaction completes.
*/
protected abstract void onCompleted();
@Override
public String getSQL()
{
return null;
}
@Override
public int getType()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public long getPrecision()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public int getDisplaySize()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public String getString()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public Object getObject()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public void set(PreparedStatement prep, int parameterIndex) throws SQLException
{
throw new AssertionError("Unexpected method invocation");
}
@Override
protected int compareSecure(Value v, CompareMode mode)
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public int hashCode()
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public boolean equals(Object other)
{
throw new AssertionError("Unexpected method invocation");
}
@Override
public boolean isLinked()
{
return !invoked;
}
@Override
public void close()
{
invoked = true;
onCompleted();
}
}
// -------------TRIGGER BELOW-----------
public void fire(final Connection connection, ResultSet oldRow, ResultSet newRow)
throws SQLException
{
Statement statement = connection.createStatement();
long transactionId;
ResultSet rs = statement.executeQuery("SELECT @TRANSACTION_ID");
try
{
rs.next();
transactionId = rs.getLong(1);
if (transactionId == 0)
{
// Generate a new transaction id
rs.close();
JdbcConnection jdbcConnection = (JdbcConnection) connection;
final Session session = (Session) jdbcConnection.getSession();
session.unlinkAtCommit(new TransactionListener()
{
@Override
protected void onCompleted()
{
boolean oldAutoCommit = session.getAutoCommit();
session.setAutoCommit(false);
try
{
Statement statement = connection.createStatement();
statement.executeQuery("SELECT SET(@TRANSACTION_ID, NULL)");
statement.close();
}
catch (SQLException e)
{
throw new AssertionError(e);
}
finally
{
session.setAutoCommit(oldAutoCommit);
}
}
});
rs = statement.executeQuery("SELECT SET(@TRANSACTION_ID, "
+ "audit_transaction_sequence.NEXTVAL)");
rs.next();
transactionId = rs.getLong(1);
}
}
finally
{
rs.close();
}
assert (transactionId != 0);
// ...
}
Here's how it works:
We use Session.unlinkAtCommit() to listen for transaction commits (I assume this hooks rollbacks too but I haven't verified this yet)
Since we cannot predict the number and order of trigger invocations, we must do the following check in every single trigger:
If @TRANSACTION_ID is null, register a new event listener and increment the sequence.
If @TRANSACTION_ID is not null, grab the current transaction id from it.
The two major problems with this workaround are:
It is extremely fragile. If Session.unlinkAtCommit() changes in the future it will likely break the event listener.
We must repeat a lot of boilerplate code at the top of each trigger just to retrieve the transaction id.
It would be far easier to implement this as a built-in function TRANSACTION_LOCAL_ID(). This function would return a database-instance-specific transaction id, similar to HSQLDB.
