How to save data retrieved from a message-driven bean to the database?

I want to use an n-tier architecture for an application so that the client tier, web tier, business tier and data tier are separate. I'm wondering how a message-driven bean that receives a message can save it to the database without changing the architecture. (With a normal session bean I retrieved data entered through a JSP page in a servlet, and from the servlet called the bean class that performed the database operations; it is not possible to do the same with a message-driven bean since it already has the overridden onMessage method.)
So far I can pass the values from the servlet directly to the message bean, but I want to change this to a 4-tier architecture where the database operations are not in the servlet.
My servlet code:
@Resource(mappedName = "jms/dest")
private Queue dest;

@Resource(mappedName = "jms/queue")
private ConnectionFactory queue;

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    String str = request.getParameter("message");
    try {
        sendJMSMessageToDest(str);
    } catch (JMSException ex) {
        // at minimum log the failure instead of swallowing it silently
    }
}

private Message createJMSMessageForjmsDest(Session session, Object messageData) throws JMSException {
    TextMessage tm = session.createTextMessage();
    tm.setText(messageData.toString());
    return tm;
}

private void sendJMSMessageToDest(Object messageData) throws JMSException {
    Connection connection = null;
    Session session = null;
    try {
        connection = queue.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer messageProducer = session.createProducer(dest);
        messageProducer.send(createJMSMessageForjmsDest(session, messageData));
    } finally {
        // close the JMS resources so they are not leaked
        if (session != null) {
            session.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}

You must think in terms of two possible workflows:
Synchronous interaction.
Asynchronous interaction.
Below is a possible architecture which covers both workflows. The components are:
DAO: Data Access Object layer. Responsible for persisting to and querying the database, without business logic. Implemented with stateless session beans.
BL: Business Logic layer. Responsible for processing business logic; it does not know where the data is persisted or queried, it just invokes the DAO layer. It is also independent of the view layer (web, web service, REST, etc.).
Servlet: Represents the view layer, i.e. the web interaction with the user; it calls the BL directly to process the retrieved data.
MDB: Handles asynchronous events; it dequeues messages from a queue (or topic) and then calls the BL layer to process the retrieved data.
This architecture enables code reuse and separation of responsibilities.
The diagram below shows the two workflows.
Synchronous workflow:
i. Servlet calls BL.
ii. BL calls DAO.
iii. DAO interacts with the database.
Asynchronous workflow:
i. Servlet enqueues messages A, B, C.
ii. MDB dequeues A.
iii. MDB calls BL.
iv. BL calls DAO.
v. DAO interacts with the database.
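For illustration, a minimal sketch of the asynchronous path under this architecture; the servlet from the question stays unchanged and only enqueues the message. The OrderBL, OrderDao and MessageEntity names are hypothetical, while jms/dest matches the queue used by the servlet above:

// MDB layer: dequeues the message and delegates to the business layer.
@MessageDriven(mappedName = "jms/dest")
public class SaveMessageMDB implements MessageListener {

    @EJB
    private OrderBL orderBL; // hypothetical business-logic bean

    @Override
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            orderBL.process(payload);            // MDB -> BL
        } catch (JMSException e) {
            throw new EJBException(e);
        }
    }
}

// BL layer: business rules only, no persistence details.
@Stateless
public class OrderBL {

    @EJB
    private OrderDao orderDao; // hypothetical DAO bean

    public void process(String payload) {
        // apply business rules here, then delegate persistence
        orderDao.save(payload);                  // BL -> DAO
    }
}

// DAO layer: persistence only.
@Stateless
public class OrderDao {

    @PersistenceContext
    private EntityManager em;

    public void save(String payload) {
        em.persist(new MessageEntity(payload));  // MessageEntity is a hypothetical JPA entity
    }
}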

Related

Quarkus server-side http-cache

I tried to find out how to configure a server-side REST client (i.e. microservice A calls another microservice B using REST) to use an HTTP cache.
The background is that the binary entities transferred over the wire can be quite large. Overall performance can benefit from a cache on the microservice A side which honours the HTTP caching headers and ETags provided by microservice B.
I found a solution that seems to work, but I'm not sure whether it is a proper solution that copes with the concurrent requests that can occur on microservice A at any time.
@Inject
/* package private */ ManagedExecutor executor;

//
// Instead of using a declarative rest client we create it ourselves, because we can then supply a server-side cache: see ctor()
//
private ServiceBApi serviceClientB;

@ConfigProperty(name = "serviceB.url")
/* package private */ String serviceBUrl;

@ConfigProperty(name = "cache-entries")
/* package private */ int cacheEntries;

@ConfigProperty(name = "cache-entrysize")
/* package private */ int cacheEntrySize;

@PostConstruct
public void ctor()
{
    // Create the proxy ourselves, because we can then supply a server-side cache
    final CacheConfig cc = CacheConfig.custom()
            .setMaxCacheEntries(cacheEntries)
            .setMaxObjectSize(cacheEntrySize)
            .build();
    final CloseableHttpClient httpClient = CachingHttpClientBuilder.create()
            .setCacheConfig(cc)
            .build();
    final ResteasyClient client = new ResteasyClientBuilderImpl()
            .httpEngine(new ApacheHttpClient43Engine(httpClient))
            .executorService(executor)
            .build();
    final ResteasyWebTarget target = (ResteasyWebTarget) client.target(serviceBUrl);
    this.serviceClientB = target.proxy(ServiceBApi.class);
}

@Override
public byte[] getDoc(final String id)
{
    try (final Response response = serviceClientB.getDoc(id)) {
        [...]
        // Use normally; no need to handle conditional GETs, caching headers and other HTTP protocol details here, because the underlying implementation does that.
        [...]
    }
}
My questions are:
Is my solution OK as a server-side solution, i.e. can it handle concurrent requests?
Is there a declarative (Quarkus) way (@RegisterRestClient etc.) to achieve the same?
--
Edit
To make things clear: I want service B to be able to control the caching based on the HTTP GET request and the specific resource. Additionally, I want to avoid unnecessary transmission of the large documents service B provides.
--
Mik
Assuming that you have worked with the declarative way of using Quarkus' REST Client before, you would just inject the client into your serviceB-consuming class. The method that will invoke service B should be annotated with @CacheResult. This will cache results depending on the incoming id. See also the Quarkus Cache Guide.
Please note: as Quarkus and Vert.x are all about non-blocking operations, you should use the async support of the REST Client.
@Inject
@RestClient
ServiceBApi serviceB;

...

@Override
@CacheResult(cacheName = "service-b-cache")
public Uni<byte[]> getDoc(final String id) {
    return serviceB.getDoc(id).map(...);
}

...
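For reference, the declarative client that the snippet above injects could be declared roughly like this. The /docs path, media type and configKey are assumptions for illustration, not taken from the question, and the Uni return type requires the reactive REST client:

// Hypothetical declarative REST client for service B.
// The base URL would be set via quarkus.rest-client.service-b.url in application.properties.
@Path("/docs")
@RegisterRestClient(configKey = "service-b")
public interface ServiceBApi {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    Uni<byte[]> getDoc(@PathParam("id") String id);
}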

Spring MongoTemplate not a part of ongoing transaction

I am attempting to transition to MongoDB transactions via Spring Data Mongo, now that MongoDB 4.0 supports transactions and Spring Data Mongo 2.1.5.RELEASE supports them as well.
According to the Spring Data Mongo Documentation, you should be able to use the Spring MongoTransactionManager and have the MongoTemplate recognize and participate in ongoing transactions: https://docs.spring.io/spring-data/mongodb/docs/2.1.5.RELEASE/reference/html/#_transactions_with_mongotransactionmanager
However, this following test fails:
@Autowired
private TestEntityRepository testEntityRepository;

@Autowired
private MongoTemplate mongoTemplate;

@BeforeTransaction
public void beforeTransaction() {
    cleanAndInitDatabase();
}

@Test
@Transactional
public void transactionViaAnnotation() {
    TestEntityA entity1 = new TestEntityA();
    entity1.setValueA("a");
    TestEntityA entity2 = new TestEntityA();
    entity2.setValueA("b");
    testEntityRepository.save(entity1);
    testEntityRepository.save(entity2);
    // throw new RuntimeException("prevent commit");
    List<TestEntityA> entities = testEntityRepository.findAll(Example.of(entity1));
    Assertions.assertEquals(1, entities.size()); // SUCCEEDS
    entities = testEntityRepository.findAll(Example.of(entity2));
    Assertions.assertEquals(1, entities.size()); // SUCCEEDS
    entities = mongoTemplate.findAll(TestEntityA.class);
    Assertions.assertEquals(2, entities.size()); // FAILS - expected: <2> but was: <0>
}
It appears that the testEntityRepository works fine with the transaction. The asserts succeed, and if I uncomment the exception line, neither of the records are persisted to the database.
However, trying to use the mongoTemplate directly for a query doesn't work, as it appears not to participate in the transaction.
The documentation I have linked shows using the template directly within a @Transactional method like I am attempting. However, the text says
MongoTemplate can also participate in other, ongoing transactions.
which could be interpreted to mean the template can be used with different transactions, and not necessarily the implicit transaction. But that is not what the example would indicate.
Any ideas what is happening or how to get the template to participate in the same implicit transaction?
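For completeness, the transaction setup from the linked documentation, which the failing test presumably already has in place, looks roughly like this (a sketch; the configuration class name is arbitrary):

@Configuration
class MongoTxConfig {

    // Registering a MongoTransactionManager is what lets @Transactional
    // start a MongoDB session/transaction that session-bound operations join.
    @Bean
    MongoTransactionManager transactionManager(MongoDbFactory dbFactory) {
        return new MongoTransactionManager(dbFactory);
    }
}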

Java-EE database connection pool runs out of max

I have a default standalone.xml configuration where a maximum of 20 connections can be active at the same time in the pool of connections to the database. With good reason, I guess. We run an Oracle database.
There's a reasonable amount of database traffic as well as third-party API traffic, e.g. SOAP and HTTP calls, in the enterprise application I'm developing.
We often do something like the following:
@PersistenceContext(unitName = "some-pu")
private EntityManager em;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
    em.update(someEntity);
    cdiEvent.fire(finishedBusinessEvent);
}
However, in this case the database connection is acquired when the entity is fetched and released after the update (actually when the entire transaction is done). Regarding transactions, everything is container managed, with no additional annotations. I know that you shouldn't "hold" the database connection longer than necessary, and this is exactly what I'm trying to solve. For one, I wouldn't know how to programmatically release the connection, nor do I think it would be a good idea, because you still want to be able to roll back the entire transaction.
So how to attack this problem? These are the options I tried:
Option 1, using ManagedExecutorService:
@Resource
private ManagedExecutorService mes;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.mes.submit(() -> {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    });
}
Option 2, using @Asynchronous:
@Inject
private AsyncBean asyncBean;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.asyncBean.process(someEntity);
}

public class AsyncBean {
    @Asynchronous
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    }
}
This in fact solved the database connection pooling issue, i.e. the connection is released as soon as soap.callEndPoint is reached. But it did not feel really stable (I can't pinpoint the problems here). And of course the transaction is finished once you enter the async processing, so whenever something went wrong during the SOAP call nothing was rolled back.
Wrapping up...
I'm about to move the long-running IO tasks (SOAP and HTTP calls) to a separate part of the application, offloaded via queues, and feed the result back into the application via queues once again. In that case everything is done via transactions and no connections are held up. But this is a lot of overhead, so before doing so I'd like to hear your opinion / best practices on how to solve this problem!
Your queue solution is viable, but perhaps not necessary. If you only perform read operations before your calls, you could split the transaction into two transactions (as you would also do with the queue) by using a DAO pattern.
Example:
@EJB
private DaoBean dao; // DaoBean is a @Stateless bean; each of its methods runs in its own transaction

@TransactionAttribute(TransactionAttributeType.NEVER)
public void someBusinessMethod() {
    Entity e = dao.getEntity();                   // creates and discards TX
    e = soap.callEndPoint(e.getSomeProperty());   // long call, no TX, no connection held
    dao.update(e);                                // creates TX 2 and commits
}
This solution has a few caveats.
The business method above cannot be called while a transaction is already active, because that would negate the purpose of the DAO (one TX suspended with NOT_SUPPORTED).
You will have to handle or ignore the possible changes that could have occurred on the entity during the SOAP call (@Version ...).
The entity will be detached in the business method, so you will have to eagerly load everything you need for the SOAP call.
I can't tell you whether this would work for you, as it depends on what is done before the business call. While still complex, it would be easier than a queue. A sketch of the DAO bean this relies on follows below.
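For illustration, a minimal sketch of such a DAO bean. The DaoBean and Entity names mirror the snippet above; REQUIRES_NEW is one way to guarantee each call runs in its own short transaction:

@Stateless
public class DaoBean {

    @PersistenceContext(unitName = "some-pu")
    private EntityManager em;

    // Short TX 1: read and return a detached entity.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Entity getEntity() {
        return em.find(Entity.class, 1L); // hypothetical lookup by id
    }

    // Short TX 2: merge the (possibly modified) detached entity back.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Entity update(Entity e) {
        return em.merge(e);
    }
}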
You were kind of heading down the right track with Option 2; it just needs a little more decomposition to get the transaction management happening in a way that keeps the transactions very short.
Since you have a potentially long-running web service call, you're definitely going to need to split the work so that the database operations happen in two separate short transactions:
short find operation
long web service call (no transaction)
short update operation
This can be accomplished by introducing a third EJB as follows:
Entry point
@Stateless
public class MyService {

    @Inject
    private AsyncService asyncService;

    @PersistenceContext
    private EntityManager em;

    /*
     * Short lived method call, returns promptly
     * (unless you need a fancy multi join query).
     * It will execute in a short REQUIRED transaction by default.
     */
    public void someBusinessMethod(long entityId) {
        SomeEntity someEntity = em.find(SomeEntity.class, entityId);
        asyncService.process(someEntity);
    }
}
Process web service call
@Stateless
public class AsyncService {

    @Inject
    private BusinessCompletionService businessCompletionService;

    @Inject
    private SomeSoapService soap;

    /*
     * Long lived method call with no transaction.
     *
     * Asynchronous methods are effectively run as REQUIRES_NEW
     * unless it is disabled.
     * This should avoid transaction timeout problems.
     */
    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        businessCompletionService.handleBusinessProcessCompletion(someEntity);
    }
}
Finish up
@Stateless
public class BusinessCompletionService {

    @PersistenceContext
    private EntityManager em;

    @Inject
    @Any
    private Event<BusinessFinished> businessFinishedEvent;

    /*
     * Short lived method call, returns promptly.
     * It defaults to REQUIRED, but will in effect get a new transaction
     * for this scenario.
     */
    public void handleBusinessProcessCompletion(SomeEntity someEntity) {
        someEntity.setSomething(SOMETHING);
        someEntity = em.merge(someEntity);
        // you may have to deal with optimistic locking exceptions...
        businessFinishedEvent.fire(new BusinessFinished(someEntity));
    }
}
I suspect that you may still need some connection pool tuning to cope effectively with your peak load. Monitoring should clear that up.

Where to implement Entity Framework database transactions in modern web applications?

Let’s assume that the primary components in your application are an Angular client, which calls an ASP.NET Web API, which uses Entity Framework to perform CRUD operations on your database. So, for example, in your API controllers, the Post (Add) method adds a new entity to the database context and then commits it to the database by calling the Entity Framework SaveChanges method.
This works fine when only one record needs to be added to the database at a time.
But, what if, for example, you want to add several records of different entity types to your database in one transaction? Where do you implement the Database.BeginTransaction and Database.CommitTransaction/RollbackTransaction? If you add a service layer to accomplish this, then what does the Angular client call?
PLEASE SEE BELOW FOR FURTHER DETAIL AND QUESTIONS.
I want to provide more detail about my current approach to solving this problem and ask the following questions:
(1) Is this a good approach, or is there a better way?
(2) My approach does not port to .NET Core, since .NET Core does not support OData yet (see https://github.com/OData/WebApi/issues/229). Any thoughts or ideas about this?
I have stated the problems that I faced and the solutions that I chose below. I will use a simple scenario where a customer is placing an order for several items – so, there is one Order record with several OrderDetail records. The Order record and associated OrderDetail records must be committed to the database in a single transaction.
Problem #1: What is the best way to send the Order and OrderDetail records from the Angular client to the ASP.NET Web API?
Solution #1: I decided to use OData batching, so that I could send all the records in one POST. I am using the datajs library to perform the batching (https://www.nuget.org/packages/datajs).
Problem #2: How do I wrap a single transaction around the Order and OrderDetail records?
Solution #2: I set up an OData batch endpoint in my Web API, which involved the following:
(1) In the Web API configuration, map a batch request route.
// Configure the batch request route.
config.Routes.MapODataServiceRoute(
    routeName: "batch",
    routePrefix: "batch",
    model: builder.GetEdmModel(),
    pathHandler: new DefaultODataPathHandler(),
    routingConventions: conventions,
    batchHandler: new TransactionalBatchHandler(GlobalConfiguration.DefaultServer));
(2) In the Web API, implement a custom batch handler, which wraps a database transaction around the given OData batch. The batch handler starts the transaction, calls the appropriate ODataController to perform the CRUD operation, and then commits/rolls back the transaction, depending on the results.
/// <summary>
/// Custom batch handler specialized to execute batch changesets in OData $batch requests with transactions.
/// The requests will be executed in the order they arrive, which means that the client is responsible for
/// correctly ordering the operations to satisfy referential constraints.
/// </summary>
public class TransactionalBatchHandler : DefaultODataBatchHandler
{
    public TransactionalBatchHandler(HttpServer httpServer)
        : base(httpServer)
    {
    }

    /// <summary>
    /// Executes the batch request and wraps the execution of the whole changeset within a transaction.
    /// </summary>
    /// <param name="requests">The <see cref="ODataBatchRequestItem"/> instances of this batch request.</param>
    /// <param name="cancellation">The <see cref="CancellationToken"/> associated with the request.</param>
    /// <returns>The list of responses associated with the batch request.</returns>
    public async override Task<IList<ODataBatchResponseItem>> ExecuteRequestMessagesAsync(
        IEnumerable<ODataBatchRequestItem> requests,
        CancellationToken cancellation)
    {
        if (requests == null)
        {
            throw new ArgumentNullException("requests");
        }

        IList<ODataBatchResponseItem> responses = new List<ODataBatchResponseItem>();
        try
        {
            foreach (ODataBatchRequestItem request in requests)
            {
                OperationRequestItem operation = request as OperationRequestItem;
                if (operation != null)
                {
                    responses.Add(await request.SendRequestAsync(Invoker, cancellation));
                }
                else
                {
                    await ExecuteChangeSet((ChangeSetRequestItem)request, responses, cancellation);
                }
            }
        }
        catch
        {
            foreach (ODataBatchResponseItem response in responses)
            {
                if (response != null)
                {
                    response.Dispose();
                }
            }
            throw;
        }
        return responses;
    }

    private async Task ExecuteChangeSet(
        ChangeSetRequestItem changeSet,
        IList<ODataBatchResponseItem> responses,
        CancellationToken cancellation)
    {
        ChangeSetResponseItem changeSetResponse;

        // Since IUnitOfWorkAsync is a singleton (Unity PerRequestLifetimeManager) used by all our ODataControllers,
        // we simply need to get a reference to it and use it for managing transactions. The ODataControllers
        // will perform IUnitOfWorkAsync.SaveChanges(), but the changes won't get committed to the DB until
        // IUnitOfWorkAsync.Commit() is performed (in the code directly below).
        var unitOfWorkAsync = GlobalConfiguration.Configuration.DependencyResolver.GetService(typeof(IUnitOfWorkAsync)) as IUnitOfWorkAsync;
        unitOfWorkAsync.BeginTransaction();

        // This sends each request in the changeSet to the appropriate ODataController.
        changeSetResponse = (ChangeSetResponseItem)await changeSet.SendRequestAsync(Invoker, cancellation);
        responses.Add(changeSetResponse);

        if (changeSetResponse.Responses.All(r => r.IsSuccessStatusCode))
        {
            unitOfWorkAsync.Commit();
        }
        else
        {
            unitOfWorkAsync.Rollback();
        }
    }
}
You do not need to call Database.BeginTransaction and Database.CommitTransaction/RollbackTransaction yourself if you are using Entity Framework. Entity Framework implements the Unit of Work pattern. The only thing you should care about is to work with a different instance of DbContext for every web request, exactly one instance per request, and to call SaveChanges only once, after you have made all the changes you need.
In case of any exception during SaveChanges, all the changes will be rolled back.
The Angular client should not care about this; it only sends the data and checks whether everything was fine.
This is very easy to do if you use an IoC framework like Unity and have your DbContext injected into your Controller or Service.
In this case you should use the following registration (if you use Unity):
container.RegisterType<DbContext, YourDbContext>(new PerRequestLifetimeManager(), ...);
Then you can do this if you want to use it in a Controller:
public class YourController : Controller
{
    private readonly DbContext _db;

    public YourController(DbContext context)
    {
        _db = context;
    }
    ...
No need to over-complicate things. Add the code to the WebApi project. Pass around your Transaction object and re-use it. See https://msdn.microsoft.com/en-us/library/dn456843(v=vs.113).aspx for an example.

Creating durable subscriber using Polling Consumer in camel

I am trying to create a durable subscriber using a polling consumer.
The URI is correct, as the same URI works when used in a Camel route and the durable subscriber is created correctly.
For some reason the PollingConsumer is not able to create a durable subscriber and instead creates a normal subscriber.
Is it not possible to create durable subscribers using a polling consumer?
public class OutQWaitingProcesser implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        Endpoint endpoint = exchange.getContext().getEndpoint("jms:topic:response?clientId=5&durableSubscriptionName=abcd");
        PollingConsumer consumer = endpoint.createPollingConsumer();
        consumer.start();
        Exchange exchange2 = consumer.receive();
        String body = exchange2.getIn().getBody(String.class);
        exchange.getIn().setBody(body);
        consumer.stop();
    }
}
Camel's JmsPollingConsumer is based on Spring's JmsTemplate, which does not support setting the durable subscription options.
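As the question notes, the same URI does create a durable subscriber when used in a route, so a possible workaround (a sketch, not part of the original answer) is to let a route own the durable subscription and buffer the messages on an internal endpoint that the processor can poll instead; the seda:responseBuffer endpoint name is hypothetical:

public class DurableSubscriberRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // The route, not the PollingConsumer, owns the durable topic subscription.
        from("jms:topic:response?clientId=5&durableSubscriptionName=abcd")
            .to("seda:responseBuffer"); // the processor would then poll seda:responseBuffer
    }
}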
