I am using the Camel JDBC component to insert a record into an Oracle table. The insert uses a sequence to populate the primary key ID column:
INSERT INTO my_table (id, data) VALUES (my_seq.nextval, 'some data')
The relevant part of the route looks like this:
from("some end point here")
.process(preInsertProcessor)
.to("jdbc:myDataSource")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
LOGGER.info("Extracting database generated id");
// This list is null
List<Integer> ids = exchange.getIn().getHeader(
JdbcConstants.JDBC_GENERATED_KEYS_DATA, List.class);
});
Inside the preInsertProcessor I set the message body to my insert statement and also set two header values to instruct Camel that I want the generated ID back:
message.setBody("INSERT INTO my_table (id, data) VALUES (my_seq.nextval, :?data)");
message.setHeader("data", "some data");
message.setHeader(JDBC_RETRIEVE_GENERATED_KEYS, true);
message.setHeader(JDBC_GENERATED_COLUMNS, new String[]{"ID"});
And if I look at the logs, I can see:
[DEBUG] org.apache.camel.component.bean.MethodInfo - Setting bean invocation result on the OUT message: [Message: INSERT INTO my_table(id, data)VALUES (my_seq.nextval, :?data]
[DEBUG] org.apache.camel.spring.spi.TransactionErrorHandler - Transaction begin (0x1de4bee0) redelivered(false) for (MessageId: ID-MELW1TYGC2S-62650-1438583607644-0-8 on ExchangeId: ID-MELW1TYGC2S-62650-1438583607644-0-9))
[INFO ] au.com.nab.cls.router.non.repudiation.GeneratedIdExtractor - Extracting database generated id
[DEBUG] org.apache.camel.processor.MulticastProcessor - Done sequential processing 1 exchanges
[DEBUG] org.apache.camel.spring.spi.TransactionErrorHandler - Transaction commit (0x1de4bee0) redelivered(false) for (MessageId: ID:414d5120445041594855423120202020027844552045b302 on ExchangeId: ID-MELW1TYGC2S-62650-1438583607644-0-7))
If I read the logs correctly, the insert is executed and my final processor should be able to get the generated ID. In reality, no record gets inserted and no ID is present in the message header. Without the final processor everything works fine.
Obviously I am doing something wrong here, but I cannot figure out what. I am aware I could use a message enricher to get the ID before the insert (as sketched below), but I would prefer to avoid an extra database trip.
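For reference, the enricher-style alternative with the extra round trip would look something like this (a sketch only; the endpoint names are illustrative, and camel-jdbc returns SELECT results as a list of maps):

// Sketch of the two-trip alternative: fetch the sequence value first, then
// use it in the INSERT. Endpoint names are illustrative.
from("direct:withExtraTrip")
    .setBody(constant("SELECT my_seq.NEXTVAL AS ID FROM dual"))
    .to("jdbc:myDataSource")
    .process(exchange -> {
        // camel-jdbc returns SELECT results as List<Map<String, Object>>
        List<Map<String, Object>> rows = exchange.getIn().getBody(List.class);
        exchange.getIn().setHeader("generatedId", rows.get(0).get("ID"));
    })
    .setBody(simple("INSERT INTO my_table (id, data) VALUES (${header.generatedId}, 'some data')"))
    .to("jdbc:myDataSource");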
Thank you in advance for your inputs.
UPDATE
I put a breakpoint in org.apache.camel.component.jdbc.JdbcProducer and found out why the INSERT was not executed and, consequently, why I was not getting the ID back.
// JdbcProducer code; creating the prepared statement
if (shouldRetrieveGeneratedKeys) {
    ...
} else if (expectedGeneratedColumns instanceof String[]) {
    // Execution gets here
    ps = conn.prepareStatement(preparedQuery, (String[]) expectedGeneratedColumns);
    ...
}

// Expected count returned here is 2
int expectedCount = ps.getParameterMetaData().getParameterCount();
if (expectedCount > 0) {
    ...
    // And here I get the crash:
    // java.sql.SQLException: Number of parameters mismatch. Expected: 2, was: 1
    getEndpoint().getPrepareStatementStrategy().populateStatement(ps, it, expectedCount);
}
This is where my research stopped, as digging too deeply into the various third-party code is not really easy. I suspect one of the following two options is the cause:
I am still not doing it the right way
A Camel bug where things do not work as expected when the headers contain both named parameters and the retrieve-generated-keys settings
Please advise on any fix or workaround.
Thanks again
I also ran into this. My workaround:
Use camel-sql instead of camel-jdbc. Add the parametersCount option to the endpoint URL, in addition to setHeader(SqlConstants.SQL_RETRIEVE_GENERATED_KEYS, constant(true)) and setHeader(SqlConstants.SQL_GENERATED_COLUMNS, constant(new String[] {"ID_COLUMN_NAME"})).
Update: works with the 11.2.0.4 JDBC driver (does not work with the 12.2.0.1 JDBC driver).
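A route using that workaround might look roughly like this (a sketch only; the endpoint, datasource, table, and column names are made up, and parametersCount=1 matches the single named parameter in the query):

import java.util.List;
import java.util.Map;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.sql.SqlConstants;

// Sketch of the camel-sql workaround; endpoint and datasource names are illustrative.
public class GeneratedKeysRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:insert")
            .setHeader(SqlConstants.SQL_RETRIEVE_GENERATED_KEYS, constant(true))
            .setHeader(SqlConstants.SQL_GENERATED_COLUMNS, constant(new String[]{"ID"}))
            .setHeader("data", constant("some data"))
            // parametersCount=1 tells camel-sql how many parameters to bind,
            // skipping the driver's parameter metadata lookup
            .to("sql:INSERT INTO my_table (id, data) VALUES (my_seq.nextval, :#data)"
                + "?parametersCount=1&dataSource=#myDataSource")
            .process(exchange -> {
                // camel-sql returns generated keys as a list of maps, one per row
                List<Map<String, Object>> keys = exchange.getIn().getHeader(
                        SqlConstants.SQL_GENERATED_KEYS_DATA, List.class);
                exchange.getIn().setHeader("generatedId", keys.get(0).get("ID"));
            })
            .log("Generated id: ${header.generatedId}");
    }
}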
Related
from("{{my.app.source}}")
.unmarshal()
.bindy(BindyType.Csv, EmployeeCsvRecord.class)
.split(body())
.streaming()
.bean("employeeService", "getMap")
.aggregate(constant(true), new EmployeeAggregationStrategy())
.completionSize(500)
.log("data ready to insert into database")
.to("{{sql.insertEmployee}}")
.log("data inserted into database");
"sql:insert = into employee (employeeName, employeeAge, employeeGender, employeeDepartment, employeeSalary) values (:#employeeName, :#employeeAge, :#employeeGender, :#employeeDepartment, :#employeeSalary);batch=true"
Cannot find key [employeeName] in message body or headers to use when setting named parameter in query
When I try with batch=false, it works fine.
I have to do this with the SQL component only; otherwise there are multiple other ways available.
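For what it's worth, as far as I understand the SQL component, with batch=true it expects the message body to be an Iterable whose elements each carry the named parameters, e.g. one Map per row. So the aggregation strategy would need to build something like a List of Maps; a rough sketch (the class name comes from the route above, everything else is an assumption):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy; // org.apache.camel.AggregationStrategy on Camel 3.x

// Rough sketch: collect the Map produced by employeeService.getMap for each
// CSV record into a List, so batch=true can bind the :#name parameters per row.
public class EmployeeAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        @SuppressWarnings("unchecked")
        Map<String, Object> row = newExchange.getIn().getBody(Map.class);
        if (oldExchange == null) {
            // first exchange in the batch: start the list
            List<Map<String, Object>> rows = new ArrayList<>();
            rows.add(row);
            newExchange.getIn().setBody(rows);
            return newExchange;
        }
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> rows = oldExchange.getIn().getBody(List.class);
        rows.add(row);
        return oldExchange;
    }
}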
I'm running a big dependency scan on a legacy DB and see that some objects have obsolete references. If you run the code below in SSMS for a view that points to a non-existent table (as in my case), you get your output on the Results tab AND error info in Messages.
I tried checking all the environment things I know and the output of this stored procedure, but didn't see any indication of the error.
How can I capture this event? I'm running this in a looped dynamic SQL script and capturing the output into my table for further processing.
Updated:
It is just text in the Messages box; on error, you still have output on the Results tab.
This is a stored procedure; it loops through an object list I took from sys.objects and runs this string (my sample below) to get all dependencies, loading everything into a table. This call to sys.dm_sql_referenced_entities is the only way to get inter-database dependencies at column level, so I need to stick with it 100%.
--
Select *
From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
--
----update------
This behavior was fixed in SQL Server 2014 SP3 and SQL Server 2016 SP2:
Starting from Microsoft SQL Server 2012, errors raised by sys.dm_sql_referenced_entities (such as when an object has undergone a schema change) cannot be caught in a TRY...CATCH Transact-SQL block. While this behavior is expected in SQL Server 2012 and above, this improvement introduces a new column that's called is_incomplete to the Dynamic Management View (DMV).
KB4038418 - Update adds a new column to DMV sys.dm_sql_referenced_entities in SQL Server 2014 and 2016
----update-------
The tl;dr is that you can't capture these errors on the server side; you must use a client program in C#, PowerShell, or some other client that can process info messages.
That DMV is doing something strange that I don't fully understand. It's generating errors (which a normal UDF is not allowed to do), and those errors do not trigger a TRY/CATCH block or set @@ERROR. E.g.:
create table tempdb.dbo.foo(id int)
go
create view dbo.v_View_Obs_Table
as
select * from tempdb.dbo.foo
go
drop table tempdb.dbo.foo
go
begin try
    Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
end try
begin catch
    select ERROR_MESSAGE(); --<-- not hit
end catch
However these are real errors, as you can see running this from client code:
using System;
using System.Data.SqlClient;

namespace ConsoleApp6
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new SqlConnection("Server=.;database=AdventureWorks;integrated security=true"))
            {
                con.Open();
                con.FireInfoMessageEventOnUserErrors = true;
                con.InfoMessage += (s, a) =>
                {
                    Console.WriteLine($"{a.Message}");
                    foreach (SqlError e in a.Errors)
                    {
                        Console.WriteLine($"{e.Message} Number:{e.Number} Class:{e.Class} State:{e.State} at {e.Procedure}:{e.LineNumber}");
                    }
                };

                var cmd = con.CreateCommand();
                cmd.CommandText = "Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')";
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read() || (rdr.NextResult() && rdr.Read()))
                    {
                        Console.WriteLine(rdr[0]);
                    }
                }
                Console.ReadKey();
            }
        }
    }
}
This outputs:
Invalid object name 'tempdb.dbo.foo'.
Invalid object name 'tempdb.dbo.foo'. Number:208 Class:16 State:3 at v_View_Obs_Table:4
0
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist.
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist. Number:2020 Class:16 State:1 at :1
The scenario is the following -
OrderTable with Columns "OrderId" and "OrderType"
OrderRelationTable with Columns "OrderId" and "CustId"
OrderProcessTable with Columns "OrderId", "OrderType", "CustId", and "ProcessFlag"
The flow goes like this:
Application1 creates the record in OrderTable, then passes it to Application2 over MQ. Application2 inserts the record into OrderRelationTable. Then a trigger fires in the Oracle DB to create the record in OrderProcessTable.
Problem
Sometimes the record is not inserted into the third table, OrderProcessTable. I'm not sure whether it is caused by timing or by something incorrect in the trigger.
Application1 Code
boolean updated = false;
/** JDBC prepared statement execution: insert into OrderTable in Java **/
int rowCount = ps.executeUpdate();
if (rowCount > 0) {
    updated = true;
}
log.log("updated flag:" + updated);
/** I am able to see in the log that the flag is true, and the record is inserted into OrderTable **/
Application2 Code
This doesn't really matter much; assume it is some Java JDBC code that does the insert into OrderRelationTable and succeeds.
The Trigger
Assuming the syntax is correct.
CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
    v_order_type OrderTable.OrderType%TYPE := NULL;
BEGIN
    SELECT OrderType INTO v_order_type FROM OrderTable
    WHERE OrderId = :new.OrderId
    AND OrderType IS NOT NULL
    AND rownum = 1;
    IF v_order_type IS NOT NULL THEN
        INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
    END IF;
END;
Questions -
After the Application1 code is executed, is it guaranteed that the DB will have the OrderTable record available for a SELECT statement? (Assume that the updated flag is true.)
Is there a timing issue between the app code and the trigger, for example when the trigger runs the SELECT statement against OrderTable? (Of course, the order id matches between OrderRelationTable and OrderTable.)
Basically, right now my problem is that sometimes (rarely) the record is not inserted into OrderProcessTable via the trigger even though it should be (OrderType is not null). Any ideas?
There's no timing issue, as far as I can tell.
As for the trigger code: what is the purpose of the and rownum = 1 condition? I'm not saying that it is wrong, I'm just asking. Do you expect several rows to be returned by that query? If so, is that a legal situation? Wouldn't you rather handle it with the WHEN TOO_MANY_ROWS exception handler (i.e. instead of using the ROWNUM condition)?
What happens if the SELECT returns nothing? It then raises the NO_DATA_FOUND exception, the trigger fails, and it certainly doesn't insert anything. Is that propagated so that someone (a human being) or something (an error logging procedure) sees/catches it, so that you'd know something went wrong?
And, of course, there's the fact that if V_ORDER_TYPE remains NULL, the INSERT never runs (as P. Salmon already suggested).
I'm inserting data into multiple tables, and I use the MyBatis component to do that. I also need to create a temporary table before I can insert the data. The high-level overview is:
1. Get data to insert
2. Create temp table
3. Insert data into temp table
4. Insert into table1 select x from temp table
5. Insert into table2 select y from temp table
Steps 2 to 5 should run in their own single transaction, in case something fails. I currently have this:
from(initialEndpoint)
    .routeId("database-appender")
    .aggregate().expression(constant(true)).completionSize(100).aggregationStrategy(new LinkListAggregator())
        .transacted()
        .bean(CreateTmpLinksTable.class)
        .to("mybatis:prepareLinks?executorType=reuse&statementType=InsertList")
        .to("mybatis:insertLinks?executorType=reuse&statementType=InsertList")
        .to("mybatis:insertLinkSources?executorType=reuse&statementType=InsertList")
    .end()
    .log("Wrote at most ${body.size} links to the database");
The CreateTmpLinksTable bean needs access to the current connection, so that the creation of the temporary table does not happen in a different transaction (I'm targeting PostgreSQL, if it matters).
I have this currently:
public class CreateTmpLinksTable {
    public void createImportTable(Exchange exchange) throws SQLException {
        final Connection conn = exchange.getIn().getHeader("TransactionConnection", Connection.class);
        try (final Statement stat = conn.createStatement()) {
            stat.execute("CREATE TEMPORARY TABLE tmp_links(" +
                    "url text, hostname text, service media, service_id bigint, user_id bigint, screen_name text, harvested_at timestamp with time zone, body text" +
                    ") ON COMMIT DROP");
        }
    }
}
I also haven't set up my transaction manager. My suspicion is that I have to get hold of the transaction manager in order to participate correctly in the transaction.
Questions:
How do I get the transaction manager from a regular bean? Is it just a matter of getting the context, then from the context getting the manager through the registry?
Is there a better way to do what I need? I can see at least one: move all responsibilities into a single bean and do the work there. Any other ways?
NOTE: I'm learning Camel, and I like to do things using only code. Once I know how everything is wired up, then I can transfer that knowledge to Spring.
Q1: If you can pass the bean instance to the Camel route, you can set up the transaction manager yourself; otherwise you have to use the registry to look up the transaction manager instance.
Q2: You can wrap the DB update work in a single bean, and use the transacted DSL in Camel if you have other resources that need to be managed.
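To make Q1 concrete, here is one possible shape for the bean. This is a sketch, not a verified solution: it assumes a Spring transaction manager is configured and that the DataSource is registered in the Camel registry under the made-up name "myDataSource"; Spring's DataSourceUtils then hands back the connection bound to the current transaction, which is what the temporary table needs.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.apache.camel.Exchange;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class CreateTmpLinksTable {
    public void createImportTable(Exchange exchange) throws SQLException {
        // look up the DataSource in the Camel registry; "myDataSource" is illustrative
        DataSource dataSource = exchange.getContext().getRegistry()
                .lookupByNameAndType("myDataSource", DataSource.class);
        // returns the connection bound to the current transaction, if one is active
        Connection conn = DataSourceUtils.getConnection(dataSource);
        try (Statement stat = conn.createStatement()) {
            stat.execute("CREATE TEMPORARY TABLE tmp_links(url text, body text) ON COMMIT DROP"); // columns shortened
        } finally {
            // a no-op for transaction-bound connections; the transaction manager closes them
            DataSourceUtils.releaseConnection(conn, dataSource);
        }
    }
}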
We're having a strange problem in Oracle. I'll sketch some (simplified) context first:
Consider this mapping to an Entity:
public EntityMap()
{
    Table("EntityTable");
    Id(x => x.Id)
        .Column("entityID")
        .GeneratedBy.Native("ENTITYID").UnsavedValue(0);
    Map(x => x.SomeBoolean).Column("SomeBoolean");
}
and this code:
var entity = new Entity();
using (var transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Session.Save(entity);
    transaction.Complete();
}

//A lot of code

if (someCondition)
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required))
    {
        entity.SomeBoolean = true;
        Session.Update(entity);
        transaction.Complete();
    }
}
This code is called a few times. The first time it generates the following queries:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1216)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1216
The second time it is called, these queries are generated (someCondition is false):
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1217)
And now the trouble begins. From now on, each insert will use the correct autoincremented value, but the update will always use 1217:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1218)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1217
And of course, this is not what we want to happen. If I inspect the value of the Id while debugging, it contains the correct autoincremented value. Somehow, deep in the bowels of NHibernate, the incorrect id is assigned to the WHERE clause...
The strange part is that this only happens on Oracle. If I switch NHibernate to MsSql, everything works like a charm.
So I found out what happened. NHibernate changed its default connection release mode between versions 1.x and 2.x. Instead of closing the connection when the session is Disposed, the connection is now closed after each transaction. However, we were manually coordinating our transactions, which apparently caused trouble in Oracle.
This question has some extra information, and this entry in the NHibernate documentation also clarifies how the connections are handled:
As of NHibernate, if your application manages transactions through .NET APIs such as System.Transactions library, ConnectionReleaseMode.AfterTransaction may cause NHibernate to open and close several connections during one transaction, leading to unnecessary overhead and transaction promotion from local to distributed. Specifying ConnectionReleaseMode.OnClose will revert to the legacy behavior and prevent this problem from occurring.
This blog post is what got me looking in the right direction.