Flink Queryable State Error - apache-flink

I am trying to use queryable state on Flink (version 1.4.2) but unfortunately I keep getting the following error:
INFO my.test.flink.QueryableState - Params are a96438fa12879b7598c9cf32684e2669, kafka-cluster_jobmanager_1, 6123
INFO my.test.flink.QueryableState - Before the call java.util.concurrent.CompletableFuture@26aa12dd[Not completed]
java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at my.test.flink.QueryableState.main(QueryableState.java:67)
Caused by: java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1166)
at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.readInt(AbstractByteBuf.java:619)
at org.apache.flink.queryablestate.network.messages.MessageSerializer.deserializeHeader(MessageSerializer.java:231)
at org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:76)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
On the client side I am using flink-queryable-state-client-java_2.11.jar, and the relevant part of the code for the queryable state client is:
QueryableStateClient client = new QueryableStateClient(jobManagerHost, jobManagerPort);

TypeInformation<MyEvent> typeInformation = TypeInformation.of(new TypeHint<MyEvent>() {});
ListStateDescriptor<MyEvent> descriptor = new ListStateDescriptor<MyEvent>("myEvents",
        typeInformation.createSerializer(new ExecutionConfig()));

CompletableFuture<ListState<MyEvent>> resultFuture =
        client.getKvState(JobID.fromHexString(jobIdParam), "myEvents", "1",
                BasicTypeInfo.STRING_TYPE_INFO, descriptor);

logger.info("Before the call " + resultFuture);
try {
    logger.info("Finished" + resultFuture.get());
} catch (Exception ex) {
    ex.printStackTrace();
}
Finally, the job running on Flink has a ListState configured as can be seen below. Note that data is keyed on the ListState by a String key.
TypeInformation<MyEvent> typeInformation = TypeInformation.of(new TypeHint<MyEvent>() {});
ListStateDescriptor<MyEvent> eventState =
new ListStateDescriptor<MyEvent>("myEvents",typeInformation);
eventState.setQueryable("myEvents");
eventListState = getRuntimeContext().getListState(eventState);
It seems to me like a serialization error, but I do not know what I need to do to fix it. Does anybody have an idea what might be wrong with the code above? Am I missing something?

I ran into that exact same problem when updating this queryable state demo for Flink 1.4. If I recall correctly, the important part is dealing with the CompletableFuture correctly -- you can't just call get() straightaway.
See the code for a working example, the key part of which looks something like this:
try {
    CompletableFuture<FoldingState<BumpEvent, Long>> resultFuture =
            client.getKvState(jobId, EventCountJob.ITEM_COUNTS, key,
                    BasicTypeInfo.STRING_TYPE_INFO, countingState);

    resultFuture.thenAccept(response -> {
        try {
            Long count = response.get();
            // now we could do something with the value
        } catch (Exception e) {
            e.printStackTrace();
        }
    });

    resultFuture.get(5, TimeUnit.SECONDS);
} catch (Exception e) {
    e.printStackTrace();
}
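Adapting that pattern to the ListState query from the question, a rough sketch (it reuses the client, descriptor, and jobIdParam variables from the code above, so those names are assumptions taken from that snippet) would be:

// Sketch only: reuses client, descriptor and jobIdParam from the question above.
CompletableFuture<ListState<MyEvent>> resultFuture =
        client.getKvState(JobID.fromHexString(jobIdParam), "myEvents", "1",
                BasicTypeInfo.STRING_TYPE_INFO, descriptor);

// React to the response asynchronously instead of blocking on get() right away.
resultFuture.thenAccept(listState -> {
    try {
        for (MyEvent event : listState.get()) {
            // do something with each stored event
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
});

// Bound the wait so the client does not hang forever if the state is unreachable.
resultFuture.get(5, TimeUnit.SECONDS);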

Related

How to resolve error "Rowtime timestamp is null" in flink server

I am trying to run the following code on a Flink server but am unable to get it working. Below is my code:
DataStream<UserInfo> keyedStream = executionEnvironment
        .addSource(new UserDataSource());
keyedStream.assignTimestampsAndWatermarks(new MessageWaterEmitter());

tableEnv.registerDataStream("test", keyedStream, "userId,ticks,startime.rowtime");

Table table = tableEnv
        .sqlQuery(
                "SELECT userId,COUNT(userId) as ticks,TUMBLE_END(startime,INTERVAL '5' SECOND) as startime FROM test "
                        + "GROUP BY TUMBLE(startime,INTERVAL '5' SECOND),userId");

DataStream<Row> userInfoDataStream = tableEnv.toRetractStream(table, Row.class)
        .filter(new FilterFunction<Tuple2<Boolean, Row>>() {
            @Override
            public boolean filter(Tuple2<Boolean, Row> booleanUserInfoTuple2) throws Exception {
                return booleanUserInfoTuple2.f0;
            }
        }).map(new MapFunction<Tuple2<Boolean, Row>, Row>() {
            @Override
            public Row map(Tuple2<Boolean, Row> booleanUserInfoTuple2) throws Exception {
                return booleanUserInfoTuple2.f1;
            }
        });

JdbcSink sink = new JdbcSink();
userInfoDataStream.addSink(sink);
Below is the error I am getting:
java.lang.RuntimeException: Rowtime timestamp is null. Please make sure that a proper TimestampAssigner is defined and the stream environment uses the EventTime time characteristic.
at DataStreamSourceConversion$651.processElement(Unknown Source)
at org.apache.flink.table.runtime.CRowOutputProcessRunner.processElement(CRowOutputProcessRunner.scala:70)
at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:637)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:707)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:660)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:305)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:394)
at flink.source.UserDataSource.run(UserDataSource.java:20)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
Can anybody help me with this problem? Thank you in advance.
The root cause of this problem is that while you are calling assignTimestampsAndWatermarks on keyedStream, you aren't doing anything with the result of this call. If you rework the code like this, it will work:
DataStream<UserInfo> keyedStream = executionEnvironment
        .addSource(new UserDataSource())
        .assignTimestampsAndWatermarks(new MessageWaterEmitter());

tableEnv.registerDataStream("test", keyedStream, "userId,ticks,startime.rowtime");
Calling assignTimestampsAndWatermarks on a stream doesn't modify that stream, but instead returns a new stream that has timestamps and watermarks.
This could also be fixed like this, which might be a bit clearer as to what is going on:
DataStream<UserInfo> streamWithTSandWMs = keyedStream
.assignTimestampsAndWatermarks(new MessageWaterEmitter());
tableEnv.registerDataStream("test", streamWithTSandWMs, "userId,ticks,startime.rowtime");

RestEasy client does not use correct encoding

I'm using RestEasy as a client to read news from a service.
ResteasyClient listClient = new ResteasyClientBuilder().build();
ResteasyWebTarget listTarget = listClient.target("https://someservice.com/file.xml");
Response r = listTarget.request().get();
final HexMl feedList = r.readEntity(HexMl.class);
The service does not return an encoding or media type in the response header, only an encoding declaration in the XML itself:
<?xml version="1.0" encoding="windows-1252"?>
RestEasy does not seem to evaluate this, so I get an exception:
javax.ws.rs.ProcessingException: org.jboss.resteasy.plugins.providers.jaxb.JAXBUnmarshalException: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; lineNumber: 116; columnNumber: 30; Invalid byte 2 of 3-byte UTF-8 sequence.]
at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readFrom(ClientResponse.java:300)
at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readEntity(ClientResponse.java:196)
at org.jboss.resteasy.specimpl.BuiltResponse.readEntity(BuiltResponse.java:218)
at com.roche.services.NewsImportService.importFeed(NewsImportService.java:72)
at com.roche.commands.NewsImportCommand.execute(NewsImportCommand.java:26)
at com.roche.commands.NewsImportCommand$execute.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:110)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:122)
at Script1.run(Script1.groovy:4)
at info.magnolia.module.groovy.console.MgnlGroovyConsole$1.call(MgnlGroovyConsole.java:154)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.jboss.resteasy.plugins.providers.jaxb.JAXBUnmarshalException: javax.xml.bind.UnmarshalException
Is there a way to override the encoding RestEasy uses, or to intercept the response before the entity is read?
I tried
Response r = listTarget.request().accept(APPLICATION_XML + ";charset=windows-1252").get();
and
Response r = listTarget.request(APPLICATION_XML + ";charset=windows-1252").get();
and
@Consumes(APPLICATION_XML + ";charset=windows-1252")
public class HexMl { ... }
without success. The XML itself seems to be correctly encoded in windows-1252.
For now I'm using a ReaderInterceptor, but this doesn't seem right. So I'd still be glad about better suggestions.
ResteasyClientBuilder clientBuilder = new ResteasyClientBuilder();

ResteasyProviderFactory providerFactory = new ResteasyProviderFactory();
RegisterBuiltin.register(providerFactory);
providerFactory.getClientReaderInterceptorRegistry().registerSingleton(new ReaderInterceptor() {
    @Override
    public Object aroundReadFrom(ReaderInterceptorContext context) throws IOException, WebApplicationException {
        // Read the raw bytes as windows-1252, then hand the re-encoded body to the unmarshaller.
        InputStream is = context.getInputStream();
        String responseBody = IOUtils.toString(is, "windows-1252");
        LOGGER.debug("received response:\n{}\n\n", responseBody);
        context.setInputStream(new ByteArrayInputStream(responseBody.getBytes()));
        return context.proceed();
    }
});
clientBuilder.providerFactory(providerFactory);
Then I use this clientBuilder to create my client.
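An alternative that might be cleaner is a JAX-RS ClientResponseFilter that rewrites the Content-Type header to carry the charset before the entity is read. This is only a sketch under the assumption that the service always uses windows-1252; the filter class name is made up here:

import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientResponseContext;
import javax.ws.rs.client.ClientResponseFilter;

// Hypothetical filter: forces the charset the service omits from its response header.
public class CharsetResponseFilter implements ClientResponseFilter {
    @Override
    public void filter(ClientRequestContext requestContext, ClientResponseContext responseContext) {
        responseContext.getHeaders().putSingle("Content-Type", "application/xml;charset=windows-1252");
    }
}

Registering it on the client with listClient.register(new CharsetResponseFilter()) before making the request should let the JAXB reader pick up the declared charset.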

How to handle exception while parsing JSON in Flink

I am reading data from Kafka using Flink 1.4.2 and parsing it to ObjectNode using JSONDeserializationSchema. If an incoming record is not valid JSON, my Flink job fails. I would like to skip the broken record instead of failing the job.
FlinkKafkaConsumer010<ObjectNode> kafkaConsumer =
new FlinkKafkaConsumer010<>(TOPIC, new JSONDeserializationSchema(), consumerProperties);
DataStream<ObjectNode> messageStream = env.addSource(kafkaConsumer);
messageStream.print();
I am getting the following exception if the data in Kafka is not valid JSON:
Job execution switched to status FAILING.
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'This': was expecting ('true', 'false' or 'null')
at [Source: [B@4f522623; line: 1, column: 6]
Job execution switched to status FAILED.
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
The easiest solution is to implement your own DeserializationSchema and wrap JSONDeserializationSchema. You can then catch the exception and either ignore it or perform a custom action.
As suggested by @twalthr, I implemented my own DeserializationSchema by copying JSONDeserializationSchema and adding exception handling.
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;

public class CustomJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {
    private ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        ObjectNode objectNode;
        try {
            objectNode = mapper.readValue(message, ObjectNode.class);
        } catch (Exception e) {
            // Mark the broken record instead of failing the job.
            ObjectMapper errorMapper = new ObjectMapper();
            ObjectNode errorObjectNode = errorMapper.createObjectNode();
            errorObjectNode.put("jsonParseError", new String(message));
            objectNode = errorObjectNode;
        }
        return objectNode;
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }
}
In my streaming job:
messageStream
        .filter((event) -> {
            if (event.has("jsonParseError")) {
                LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
                return false;
            }
            return true;
        }).print();
Flink has improved null record handling for FlinkKafkaConsumer.
There are two possible design choices when the DeserializationSchema encounters a corrupted message: it can either throw an IOException, which causes the pipeline to be restarted, or it can return null, in which case the Flink Kafka consumer will silently skip the corrupted message.
For more details, you can see this link.
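As a sketch of that null-returning variant (reusing the lazily created mapper field from the schema above), the deserialize method could look like this:

@Override
public ObjectNode deserialize(byte[] message) throws IOException {
    if (mapper == null) {
        mapper = new ObjectMapper();
    }
    try {
        return mapper.readValue(message, ObjectNode.class);
    } catch (Exception e) {
        // Returning null lets the Flink Kafka consumer silently skip the corrupted record.
        return null;
    }
}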

Lucene indexWriter update does not affect Solr search

I implemented a small piece of code to extract some keywords out of a Lucene index. I did that using a search component. My problem is that when I update the index through the Lucene IndexWriter, the Solr index that sits on top of it is not affected. As you can see, I did include the commit part.
BooleanQuery query = new BooleanQuery();
for (String fieldName : keywordSourceFields) {
    TermQuery termQuery = new TermQuery(new Term(fieldName, "N/A"));
    query.add(termQuery, Occur.MUST_NOT);
}
TermQuery termQuery = new TermQuery(new Term(keywordField, "N/A"));
query.add(termQuery, Occur.MUST);

try {
    //Query q = new QueryParser(keywordField, new StandardAnalyzer()).parse(query.toString());
    TopDocs results = searcher.search(query, maxNumDocs);
    ScoreDoc[] hits = results.scoreDocs;

    IndexWriter writer = getLuceneIndexWriter(searcher.getPath());

    for (int i = 0; i < hits.length; i++) {
        Document document = searcher.doc(hits[i].doc);
        List<String> keywords = keyword.getKeywords(hits[i].doc);
        if (keywords.size() > 0) document.removeFields(keywordField);
        for (String word : keywords) {
            document.add(new StringField(keywordField, word, Field.Store.YES));
        }
        String uniqueKey = searcher.getSchema().getUniqueKeyField().getName();
        writer.updateDocument(new Term(uniqueKey, document.get(uniqueKey)), document);
    }
    writer.commit();
    writer.forceMerge(1);
    writer.close();
} catch (IOException | SyntaxError e) {
    throw new RuntimeException();
}

private IndexWriter getLuceneIndexWriter(String indexPath) throws IOException {
    FSDirectory directory = FSDirectory.open(new File(indexPath).toPath());
    Analyzer analyzer = new StandardAnalyzer();
    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    return new IndexWriter(directory, iwc);
}
Please help me solve this problem.
Update:
I did some investigation and found out that retrieving documents works fine as long as Solr has not been restarted, but searching for documents does not work. After I restarted Solr, it seems the core was corrupted and failed to start! Here is the corresponding log:
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:896)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:662)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:513)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:278)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:272)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1604)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1716)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:868)
... 9 more
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in NRTCachingDirectory(MMapDirectory@C:\Users\Ali\workspace\lucene_solr_5_0_0\solr\server\solr\document\data\index lockFactory=org.apache.lucene.store.SimpleFSLockFactory@3bf76891; maxCacheMB=48.0 maxMergeSizeMB=4.0): files: [_2_Lucene50_0.doc, write.lock, _2_Lucene50_0.pos, _2.nvd, _2.fdt, _2_Lucene50_0.tim]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:821)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:78)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:65)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:272)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:115)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1573)
... 11 more
4/7/2015, 6:53:26 PM ERROR SolrIndexWriter - SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
4/7/2015, 6:53:26 PM ERROR SolrIndexWriter - Error closing IndexWriter
java.lang.NullPointerException
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:2959)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:2927)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:965)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1010)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:130)
at org.apache.solr.update.SolrIndexWriter.finalize(SolrIndexWriter.java:183)
at java.lang.ref.Finalizer.invokeFinalizeMethod(Native Method)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:101)
at java.lang.ref.Finalizer.access$100(Finalizer.java:32)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:190)
Therefore my guess would be a problem with indexing the keywordField, and also a problem related to closing the IndexWriter.

Would a singleton database connection affect performance in a WebLogic clustered environment?

I have a Java EE Struts web application using a singleton database connection. In the past there was only one WebLogic server, but now there are two WebLogic servers in a cluster.
Session replication has been tested and is working in this cluster. The web application consists of a few links that open up different forms for the user to fill in. Each form has a dynamic dropdown list that is populated with some values depending on which form is clicked. These dropdown list values are retrieved from the Oracle database.
One curious issue is that the first form clicked might take around 2-5 seconds to load, while the second form clicked can take forever, or more than 5 minutes. I have checked the code and found that the issue lies in the attempt to obtain the single instance of the DB connection. Could this be a deadlock?
public static synchronized DataSingleton getDataSingleton()
        throws ApplicationException {
    if (myDataSingleton == null) {
        myDataSingleton = new DataSingleton();
    }
    return myDataSingleton;
}
Any help in explaining such a scenario would be appreciated.
Thank you
A sample read operation calling the singleton:
String sql = "...";
DataSingleton myDataSingleton = DataSingleton.getDataSingleton();
conn = myDataSingleton.getConnection();
try {
    PreparedStatement pstmt = conn.prepareStatement(sql);
    try {
        pstmt.setString(1, userId);
        ResultSet rs = pstmt.executeQuery();
        try {
            while (rs.next()) {
                String group = rs.getString("mygroup");
            }
        } catch (SQLException rsEx) {
            throw rsEx;
        } finally {
            rs.close();
        }
    } catch (SQLException psEx) {
        throw psEx;
    } finally {
        pstmt.close();
    }
} catch (SQLException connEx) {
    throw connEx;
} finally {
    conn.close();
}
The Singleton class
/**
 * Private constructor looking up the server's DataSource through JNDI.
 */
private DataSingleton() throws ApplicationException {
    try {
        Context ctx = new InitialContext();
        SystemConstant mySystemConstant = SystemConstant.getSystemConstant();
        String fullJndiPath = mySystemConstant.getFullJndiPath();
        ds = (DataSource) ctx.lookup(fullJndiPath);
    } catch (NamingException ne) {
        throw new ApplicationException(ne);
    }
}

/**
 * Singleton: to obtain only 1 instance throughout the system.
 *
 * @return DataSingleton
 */
public static synchronized DataSingleton getDataSingleton()
        throws ApplicationException {
    if (myDataSingleton == null) {
        myDataSingleton = new DataSingleton();
    }
    return myDataSingleton;
}

/**
 * Fetching a SQL Connection through the DataSource.
 */
public Connection getConnection() throws ApplicationException {
    Connection conn = null;
    try {
        if (ds == null) {
        }
        conn = ds.getConnection();
    } catch (SQLException sqlE) {
        throw new ApplicationException(sqlE);
    }
    return conn;
}
It sounds like you may not be committing the transaction at the end of your use of the connection.
What's in DataSingleton - is it a database connection? Allowing multiple threads to access the same database connection is not going to work, for example once you have more than one user. Why don't you use a database connection pool, for example a DataSource?
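If the aim is simply to reuse WebLogic's connection pool, a rough sketch of the usual pattern (the JNDI name is a placeholder, sql and userId are the variables from the read operation above, and try-with-resources assumes Java 7+) is to look up the container-managed DataSource and fetch a fresh connection per call, closing it promptly so it returns to the pool:

// Sketch only: "jdbc/myAppDataSource" is a placeholder JNDI name.
DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myAppDataSource");
try (Connection conn = ds.getConnection();
     PreparedStatement pstmt = conn.prepareStatement(sql)) {
    pstmt.setString(1, userId);
    try (ResultSet rs = pstmt.executeQuery()) {
        while (rs.next()) {
            String group = rs.getString("mygroup");
        }
    }
}
// The connection goes back to WebLogic's pool as soon as the try block exits.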