My H2/C3P0/Hibernate setup does not seem to be reusing prepared statements - database

I am finding that my database is the bottleneck in my application, and as part of this it looks like prepared statements are not being reused.
For example, here is a method I use:
public static CoverImage findCoverImageBySource(Session session, String src)
{
    try
    {
        Query q = session.createQuery("from CoverImage t1 where t1.source=:source");
        q.setParameter("source", src, StandardBasicTypes.STRING);
        CoverImage result = (CoverImage) q.setMaxResults(1).uniqueResult();
        return result;
    }
    catch (Exception ex)
    {
        MainWindow.logger.log(Level.SEVERE, ex.getMessage(), ex);
    }
    return null;
}
But the YourKit profiler reports:
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery() Count 511
com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement() Count 511
and I assume the count for the prepareStatement() call should be lower; as it is, it looks like we create a new prepared statement every time instead of reusing one.
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
I am using c3p0 connection pooling, which complicates things a little, but as I understand it I have it configured correctly:
public static Configuration getInitializedConfiguration()
{
    //See https://www.mchange.com/projects/c3p0/#hibernate-specific
    Configuration config = new Configuration();
    config.setProperty(Environment.DRIVER,"org.h2.Driver");
    config.setProperty(Environment.URL,"jdbc:h2:"+Db.DBFOLDER+"/"+Db.DBNAME+";FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000");
    config.setProperty(Environment.DIALECT,"org.hibernate.dialect.H2Dialect");
    System.setProperty("h2.bindAddress", InetAddress.getLoopbackAddress().getHostAddress());
    config.setProperty("hibernate.connection.username","jaikoz");
    config.setProperty("hibernate.connection.password","jaikoz");
    config.setProperty("hibernate.c3p0.numHelperThreads","10");
    config.setProperty("hibernate.c3p0.min_size","1");
    //Consider that if we have lots of busy threads waiting on next stages we could possibly have a lot of active
    //connections.
    config.setProperty("hibernate.c3p0.max_size","200");
    config.setProperty("hibernate.c3p0.max_statements","5000");
    config.setProperty("hibernate.c3p0.timeout","2000");
    config.setProperty("hibernate.c3p0.maxStatementsPerConnection","50");
    config.setProperty("hibernate.c3p0.idle_test_period","3000");
    config.setProperty("hibernate.c3p0.acquireRetryAttempts","10");
    //Cancel any connection that is more than 30 minutes old.
    //config.setProperty("hibernate.c3p0.unreturnedConnectionTimeout","3000");
    //config.setProperty("hibernate.show_sql","true");
    //config.setProperty("org.hibernate.envers.audit_strategy", "org.hibernate.envers.strategy.ValidityAuditStrategy");
    //config.setProperty("hibernate.format_sql","true");
    config.setProperty("hibernate.generate_statistics","true");
    //config.setProperty("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
    //config.setProperty("hibernate.cache.use_second_level_cache", "true");
    //config.setProperty("hibernate.cache.use_query_cache", "true");
    addEntitiesToConfig(config);
    return config;
}
Using H2 1.3.172, Hibernate 4.3.11 and the corresponding c3p0 for that Hibernate version.
With a reproducible test case we have:
HibernateStats
HibernateStatistics.getQueryExecutionCount() 28
HibernateStatistics.getEntityInsertCount() 119
HibernateStatistics.getEntityUpdateCount() 39
HibernateStatistics.getPrepareStatementCount() 189
Profiler, method counts
GooGooStatementCache.acquireStatement() 35
GooGooStatementCache.checkinStatement() 189
GooGooStatementCache.checkoutStatement() 189
NewProxyPreparedStatement.init() 189
I don't know what I should be counting as creation of a prepared statement rather than reuse of an existing prepared statement.
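One way I could sanity-check reuse directly (a rough sketch of my own, assuming c3p0's PooledDataSource exposes its statement-cache counters through the getStatementCacheNum... getters and that C3P0Registry can look the pool up; adjust the names if your c3p0 version differs) is to dump the cache counters at runtime and compare them with the profiler counts:

import java.sql.SQLException;
import com.mchange.v2.c3p0.C3P0Registry;
import com.mchange.v2.c3p0.PooledDataSource;

public class StatementCacheStats
{
    //Print c3p0's statement-cache counters for every pooled data source it knows about
    public static void dump() throws SQLException
    {
        for (Object o : C3P0Registry.getPooledDataSources())
        {
            PooledDataSource pds = (PooledDataSource) o;
            //Total statements currently held in the cache across all connections
            System.out.println("cached statements   : " + pds.getStatementCacheNumStatementsAllUsers());
            //Statements currently checked out (in use)
            System.out.println("checked out         : " + pds.getStatementCacheNumCheckedOutStatementsAllUsers());
            //Connections holding at least one cached statement
            System.out.println("connections caching : " + pds.getStatementCacheNumConnectionsWithCachedStatementsAllUsers());
        }
    }
}

If statements were being reused I would expect the cached-statements total to level off while the executeQuery() count keeps climbing.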
I also tried enabling c3p0 logging by adding a c3p0 logger and making it use the same log file in my LogProperties, but it had no effect.
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
MainWindow.logger.addHandler(fe);
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.addHandler(fe);

Now that I have eventually got c3p0-based logging working, I can confirm that the suggestion of @SteveWaldman is correct.
If you enable
public static Logger c3p0ConnectionLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0ConnectionLogger.setLevel(Level.FINEST);
c3p0ConnectionLogger.setUseParentHandlers(false);
Then you get log output of the form
24/08/2019 10.20.12:BST:FINEST: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache ----> CACHE HIT
24/08/2019 10.20.12:BST:FINEST: checkoutStatement: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 1; num connections: 13; num keys: 347
24/08/2019 10.20.12:BST:FINEST: checkinStatement(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 0; num connections: 13; num keys: 347
making it clear when you get a cache hit. When there is no cache hit you don't get the first line, but you still get the other two lines.
This is using c3p0 0.9.2.1.
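For completeness, a consolidated sketch of the logging wiring (my assumption of how the pieces fit together, reusing the FileHandler setup from the earlier snippet; because parent handlers are disabled, the handler has to be attached to the stmt logger directly or nothing is written):

String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);

Logger c3p0StmtLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0StmtLogger.setLevel(Level.FINEST);
//Parent handlers are disabled, so the FileHandler must be added explicitly
c3p0StmtLogger.setUseParentHandlers(false);
c3p0StmtLogger.addHandler(fe);

With that in place the CACHE HIT lines shown above end up in the same debug log.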

Related

Laravel: Maximum execution time of 60 seconds exceeded

Hello guys, I am doing a Laravel project (new to Laravel).
I am supposed to do calculations from other tables and save the results in another OUTPUT table.
I have 8 calculations in total on each line, and up to 3k lines to fill.
The problem is that I get the max execution time error (60 sec) even if I change it in Laravel and in php.ini.
Each function called just runs a select with a where and a sum; I've split them up for better organization.
My question is: is there a better way to process the data and minimize the execution time?
public function calcul($week)
{
    $test = Output::where('week', $week)->limit(1);
    if ($test->first()) {
        return self::afficher($week);
    } else {
        DB::table('article')->orderBy('material')->chunk(100, function ($stocks) {
            foreach ($stocks as $stock) {
                $id = $stock->material;
                $safe_stock = self::safe_stock($id);
                $past_need = self::PassedNeeds($id) - self::NeedsInTwoWeeks($id);
                $two_week_need = self::NeedsInTwoWeeks($id);
                $stock_ = self::stock($id);
                $bdl = self::bdl($id);
                $sm = self::sm($id);
                $package = self::package($id);
                $store_1 = self::store_1($id);
                $store_2 = self::store_2($id);
                $store_3 = self::store_3($id);
                Output::create([
                    'material' => $id, 'safe_stock' => $safe_stock, 'past_need' => $past_need, 'two_week_need' => $two_week_need,
                    'stock' => $stock_, 'bdl' => $bdl, 'sm' => $sm, 'package' => $package,
                    'store_1' => $store_1, 'store_2' => $store_2, 'store_3' => $store_3
                ]);
            }
        });
        return self::index();
    }
}

Unable to extract/list all event logs on Watson Assistant workspace

Please help. I was trying to call the Watson Assistant endpoint
https://gateway.watsonplatform.net/assistant/api/v1/workspaces/myworkspace/logs?version=2018-09-20 to get the list of all events
and filter by date range using these params:
var param =
{ workspace_id: '{myworkspace}',
page_limit: 100000,
filter: 'response_timestamp%3C2018-17-12,response_timestamp%3E2019-01-01'}
Apparently I got the empty response below.
{
"logs": [],
"pagination": {}
}
A couple of things to check:
1. You have 2018-17-12. Dates are in YYYY-MM-DD format, so this translates to "the 12th day of the 17th month of 2018", which is not a valid date.
2. Assuming the date was meant to be valid, your search says "documents that are before 17th Dec 2018 and after 1st Jan 2019", which would return no documents.
3. Logs are only generated when you call the message() method through the API. So check your logging page in the tooling to see if you even have logs.
4. If you have a Lite account, logs are only stored for 7 days and then deleted. To keep logs longer you need to upgrade to a Standard account.
Although not directly related to your issue, be aware that page_limit has an upper hard-coded limit (IIRC 200-300?). So you may ask for 100,000 records, but it won't give them to you.
This is sample Python code (unsupported) that uses pagination to read the logs:
from watson_developer_cloud import AssistantV1
# needed for parsing the pagination cursor out of next_url (Python 2: from urlparse import urlparse, parse_qs)
from urllib.parse import urlparse, parse_qs

username = '...'
password = '...'
workspace_id = '....'
url = '...'
version = '2018-09-20'

c = AssistantV1(url=url, version=version, username=username, password=password)

totalpages = 999
pagelimit = 200

logs = []
page_count = 1
cursor = None
count = 0
x = {'pagination': 'DUMMY'}
while x['pagination']:
    if page_count > totalpages:
        break
    print('Reading page {}. '.format(page_count), end='')
    x = c.list_logs(workspace_id=workspace_id, cursor=cursor, page_limit=pagelimit)
    if x is None:
        break
    print('Status: {}'.format(x.get_status_code()))
    x = x.get_result()
    logs.append(x['logs'])
    count = count + len(x['logs'])
    page_count = page_count + 1
    if 'pagination' in x and 'next_url' in x['pagination']:
        p = x['pagination']['next_url']
        u = urlparse(p)
        query = parse_qs(u.query)
        cursor = query['cursor'][0]
Your logs object should contain the logs.
I believe the limit is 500, and then we return a pagination URL so you can get the next 500. I don't think this is the issue, but once you start getting logs back it's good to know.

Flink Error - Key group is not in KeyGroupRange

I am running a Flink graph using RocksDB as my state backend. For one of the join operators in my graph, I get the below exception (the actual group numbers vary, of course, from run to run).
java.lang.IllegalArgumentException: Key group 45 is not in
KeyGroupRange{startKeyGroup=0, endKeyGroup=42}.
My operator topology is not too complex and is as follows:
Source1 --+--> Map1A ---> KeyBy ---\
          \--> Map1B ---> KeyBy ----+--> Join1AB ----\
                                                      +--> Join2,1AB ---->
Source2 ----------------> KeyBy ---------------------/
The error is thrown in the Join2,1AB operator, which joins the result of Join1AB with the (keyed) Source2.
Any ideas what could be causing this? I have the full stack trace below, and I understand this is still very vague, but any pointers in the right direction are much appreciated.
Caused by: java.lang.IllegalArgumentException: Key group 45 is not in KeyGroupRange{startKeyGroup=0, endKeyGroup=42}.
at org.apache.flink.runtime.state.KeyGroupRangeOffsets.computeKeyGroupIndex(KeyGroupRangeOffsets.java:142)
at org.apache.flink.runtime.state.KeyGroupRangeOffsets.setKeyGroupOffset(KeyGroupRangeOffsets.java:104)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullSnapshotOperation.writeKVStateData(RocksDBKeyedStateBackend.java:664)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullSnapshotOperation.writeDBSnapshot(RocksDBKeyedStateBackend.java:521)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.performOperation(RocksDBKeyedStateBackend.java:417)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.performOperation(RocksDBKeyedStateBackend.java:399)
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:897)
... 5 more
[CIRCULAR REFERENCE:java.lang.IllegalArgumentException: Key group 45 is not in KeyGroupRange{startKeyGroup=0, endKeyGroup=42}.]
EDIT: If I change my state backend to the file system (i.e. FsStateBackend), then I get the following stack trace. Something is off with the key group indexing.
java.lang.IllegalArgumentException: Key group index out of range of key group range [43, 86).
at org.apache.flink.runtime.state.heap.NestedMapsStateTable.setMapForKeyGroup(NestedMapsStateTable.java:104)
at org.apache.flink.runtime.state.heap.NestedMapsStateTable.putAndGetOld(NestedMapsStateTable.java:218)
at org.apache.flink.runtime.state.heap.NestedMapsStateTable.put(NestedMapsStateTable.java:207)
at org.apache.flink.runtime.state.heap.NestedMapsStateTable.put(NestedMapsStateTable.java:145)
at org.apache.flink.runtime.state.heap.HeapValueState.update(HeapValueState.java:72)
<snip user code stack trace>
org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator.processElement1(KeyedCoProcessOperator.java:77)
at org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:242)
at org.apache.flink.streaming.runtime.tasks.TwoInputStreamTask.run(TwoInputStreamTask.java:91)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
The problem was that my data objects (POJOs) had a mutable hash code; specifically, the hash code was based on enums. For example, suppose I have a stream of Cars where the hashCode is composed of the car year and the car type (an enum), as below.
class Car {
    private final CarType carType;
    private final int carYear;

    Car(CarType carType, int carYear) {
        this.carType = carType;
        this.carYear = carYear;
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + carYear;
        result = 31 * result + carType.hashCode(); // <---- This is mutable!
        return result;
    }
}
The enum's hashCode is essentially Object.hashCode() (which is memory-address dependent). Consequently, the hashCode on one machine (or process) will not be the same as on another machine (or process). This also explains why I only ran into this problem when running in a distributed environment, as opposed to running locally.
To resolve this, I changed my hashCode() to be immutable. String.hashCode() is poor for performance, so I may need to optimize that, but the definition of Car below fixes the problem.
class Car {
    private final CarType carType;
    private final int carYear;

    Car(CarType carType, int carYear) {
        this.carType = carType;
        this.carYear = carYear;
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + carYear;
        result = 31 * result + carType.name().hashCode(); // <---- This is IMMUTABLE!
        return result;
    }
}
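As a possible follow-up to the String.hashCode() performance concern (my own sketch, not part of the original fix): hashing the enum's ordinal() avoids the string hash entirely. The trade-off is that ordinal() is only stable while the declaration order of the CarType constants never changes, whereas name() also survives reordering. A drop-in replacement for the hashCode() above:

@Override
public int hashCode() {
    int result = 17;
    result = 31 * result + carYear;
    // ordinal() is a plain int and is deterministic across JVMs,
    // provided the order of the CarType constants is never changed
    result = 31 * result + carType.ordinal();
    return result;
}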

Solr 6.0.0 - SolrCloud Java example

I have Solr installed on my localhost.
I started the standard SolrCloud example with an embedded ZooKeeper.
collection: gettingstarted
shards: 2
replication : 2
Processing 500 records/docs took 115 seconds [localhost testing].
Why is it taking this much time to process just 500 records?
Is there a way to improve this to some milliseconds/nanoseconds?
NOTE:
I have tested the same against a remote Solr instance (localhost indexing data on the remote Solr; see the commented-out lines in the Java code).
I started my Solr myCloudData collection as an ensemble with a single ZooKeeper:
2 solr nodes,
1 Ensemble zookeeper standalone
collection: myCloudData,
shards: 2,
replication : 2
SolrCloud Java code:
package com.test.solr.basic;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjPopulatorCloudClient2 {

    public static void main(String[] args) throws IOException, SolrServerException {
        //String zkHosts = "64.101.49.57:2181/solr";
        String zkHosts = "localhost:9983";
        CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
        //solrCloudClient.setDefaultCollection("myCloudData");
        solrCloudClient.setDefaultCollection("gettingstarted");
        /*
        // Thread Safe
        solrClient = new ConcurrentUpdateSolrClient(urlString, queueSize, threadCount);
        */
        // Deprecated - client
        //HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        long start = System.nanoTime();
        for (int i = 0; i < 500; ++i) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("cat", "book");
            doc.addField("id", "book-" + i);
            doc.addField("name", "The Legend of the Hobbit part " + i);
            solrCloudClient.add(doc);
            if (i % 100 == 0)
                System.out.println(" Every 100 records flush it");
            solrCloudClient.commit(); // periodically flush
        }
        solrCloudClient.commit();
        solrCloudClient.close();
        long end = System.nanoTime();
        long seconds = TimeUnit.NANOSECONDS.toSeconds(end - start);
        System.out.println(" All records are indexed, took " + seconds + " seconds");
    }
}
You are committing after every new document, which is not necessary. It will run a lot faster if you change the if (i % 100 == 0) block to read:
if (i % 100 == 0) {
    System.out.println(" Every 100 records flush it");
    solrCloudClient.commit(); // periodically flush
}
On my machine, this indexes your 500 records in 14 seconds. If I remove the commit() call from the for loop, it indexes in 7 seconds.
Alternatively, you can add a commitWithinMs parameter to the solrCloudClient.add() call:
solrCloudClient.add(doc, 15000);
This will guarantee your records are committed within 15 seconds, and also increase your indexing speed.
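Putting both suggestions together, a minimal sketch of the revised indexing loop (assuming the same solrCloudClient and document fields as the code above):

long start = System.nanoTime();
for (int i = 0; i < 500; ++i) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("cat", "book");
    doc.addField("id", "book-" + i);
    doc.addField("name", "The Legend of the Hobbit part " + i);
    // commitWithin of 15 seconds lets Solr batch the flushes instead of forcing a hard commit per document
    solrCloudClient.add(doc, 15000);
    if (i % 100 == 0) {
        System.out.println(" Added " + i + " records");
    }
}
// one explicit commit at the end makes everything visible immediately
solrCloudClient.commit();
long seconds = TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start);
System.out.println(" All records are indexed, took " + seconds + " seconds");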

Opa database: how to know if a value exists in the database?

I'd like to know if a record is present in a database, using a field different from the key field.
I've tried the following code:
function start()
{
jlog("start db query")
myType d1 = {A:"rabbit", B:"poney"};
/myDataBase/data[A == d1.A] = d1
jlog("db write done")
option opt = ?/myDataBase/data[B == "rabit"]
jlog("db query done")
match(opt)
{
case {none} : <>Nothing in db</>
case {some:data} : <>{data} in database</>
}
}
Server.start(
{port:8092, netmask:0.0.0.0, encryption: {no_encryption}, name:"test"},
[
{page: start, title: "test" }
]
)
But the server hangs up, and never gets to the line jlog("db query done"). I misspelled "rabit" deliberately. What should I have done?
Thanks
Indeed it fails with your example; I get a "Match failure 8859742" exception. Don't you see it?
But do not use ?/myDataBase/data[B == "rabit"] (which should have been rejected at compile time; bug report sent). Use /myDataBase/data[B == "rabit"] instead, which is a DbSet. The reason is that when you don't query by the primary key you can get more than one value in return, i.e. a set of values.
You can convert a DbSet to an iterator with DbSet.iterator, then manipulate it with the Iter module: http://doc.opalang.org/module/stdlib.core.iter/Iter
