BlackBerry downloading a file

I have a BlackBerry application that downloads a file from the web. Sometimes the download succeeds and other times it fails somewhere in the middle. It only seems to be a problem on the Curve 9360 device. When it fails, the device closes my app and shows a pop-up that says
"Uncaught exception Application [MyApp] is not responding; process
terminated"
This is the while loop it is in when it fails:
byte[] data = new byte[1024];
try {
    while ((count = is.read(data)) != -1) {
        total += count;
        progress = (int) (total * 100 / lengthOfWebFile);
        if (model.getValue() < progress) {
            UiApplication.getUiApplication().invokeLater(new Runnable() {
                public void run() {
                    EmbeddedMediaScreen.this.model.setValue(progress);
                }
            });
        }
        // write this chunk
        os.write(data, 0, count);
    }
} catch (Exception e) {
    e.printStackTrace();
}
I don't get any kind of stack trace in the console when this happens. I get the following:
[710.328] Application BBCurve9360DemoLoop(314) is not responding; process terminated
[710.328]
[710.429] [0 0]
[710.429] 0 2
[710.429] 0 2
[710.429] 0 2
... [lots more "0 2" lines] ...
[710.429] 2 203
[710.429] 0 2
[710.429] 0 2
... [lots more "0 2" lines] ...
Has anyone run across anything like this while programmatically downloading a file on a BlackBerry device?
Can anyone see anything in my I/O loop that would cause this type of crash?
And lastly, does anyone know if there is some way I can get an actual stack trace of whatever exception is being thrown?

To print a stack trace, you have to catch Throwable instead of Exception. My conclusion is based on the RIM note given below.
RIM Implementation Note
Only uncaught exceptions have stack traces. The VM checks the current catch stack and, if it finds anything that will catch the exception, it eliminates the stack trace to save time and memory. Any code in the current stack such as catch (Exception e) eliminates the stack trace.
If the exception is never caught, then the stack trace is generated. The stack trace is also generated if there is code such as catch (Throwable t).
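For example, here is a minimal, self-contained sketch of the idea (not the poster's exact code; the in-memory streams are stand-ins for the real connection):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class CatchThrowableDemo {
    public static void main(String[] args) {
        InputStream is = new ByteArrayInputStream(new byte[4096]);
        OutputStream os = new ByteArrayOutputStream();
        byte[] data = new byte[1024];
        int count;
        try {
            while ((count = is.read(data)) != -1) {
                os.write(data, 0, count);
            }
        } catch (Throwable t) {
            // catch (Throwable t) instead of catch (Exception e): per the RIM
            // note above, this is what lets the VM generate a stack trace.
            t.printStackTrace();
        }
    }
}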
Also
You get a "process terminated" message if the while loop doesn't end and the loop is continually putting Runnable objects on the event thread. Try to get rid of the invokeLater() and see if your process still gets terminated.

Try adding Thread.yield() after os.write(). This will give other threads the opportunity to run, and should stop your app from being killed by the JVM.
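As a sketch, the same copy loop with the yield added (assuming is and os are the open streams from the question; the progress reporting is omitted for brevity):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class DownloadLoop {
    // Copies is to os in 1 KB chunks, yielding after each write so other
    // threads (including the event thread) get a chance to run and the
    // watchdog does not decide the app is unresponsive.
    static void copy(InputStream is, OutputStream os) throws IOException {
        byte[] data = new byte[1024];
        int count;
        while ((count = is.read(data)) != -1) {
            os.write(data, 0, count);
            Thread.yield(); // give other threads the opportunity to run
        }
    }
}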

Related

Vulkan swapchain creation causes crash with no debug information

I'm following vulkan-tutorial's procedure to understand Vulkan, and I'm now at swapchain creation. Before that I also created the instance and a debug/validation layer, but when I try to create the swapchain, the app crashes.
void createSwapChain() {
    VkSurfaceFormatKHR surfaceFormat;
    surfaceFormat.format = VK_FORMAT_B8G8R8A8_SRGB;
    surfaceFormat.colorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
    VkPresentModeKHR presentMode = VK_PRESENT_MODE_FIFO_KHR;
    int width, height;
    glfwGetFramebufferSize(window, &width, &height);
    VkExtent2D extent = {width, height};
    uint32_t imageCount = 2; // minimum cap + 1
    VkSurfaceCapabilitiesKHR capabilities;
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &capabilities);
    VkSwapchainCreateInfoKHR createInfo = {0};
    createInfo.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    createInfo.surface = surface;
    createInfo.minImageCount = imageCount;
    createInfo.imageFormat = surfaceFormat.format;
    createInfo.imageColorSpace = surfaceFormat.colorSpace;
    createInfo.imageExtent = extent;
    createInfo.imageArrayLayers = 1;
    createInfo.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    createInfo.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
    // createInfo.queueFamilyIndexCount = 1;
    // createInfo.pQueueFamilyIndices = NULL;
    createInfo.preTransform = capabilities.currentTransform; /* alternatively VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR */
    createInfo.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    createInfo.presentMode = presentMode;
    createInfo.clipped = VK_TRUE; // means obscured images have unimportant color!!
    createInfo.oldSwapchain = VK_NULL_HANDLE;
    if (vkCreateSwapchainKHR(device, &createInfo, NULL, &swapChain) != VK_SUCCESS)
        printf("failed to create swap chain!");
}
Any ideas? The host is Eclipse with MinGW-w64 (linking -lglfw3 and -lvulkan-1) on Windows 10 64-bit.
And here is the graphics card information:
Available Layers:
VK_LAYER_AMD_switchable_graphics
VK_LAYER_VALVE_steam_overlay
VK_LAYER_VALVE_steam_fossilize
VK_LAYER_KHRONOS_validation
Available extensions:
VK_KHR_device_group_creation
VK_KHR_external_fence_capabilities
VK_KHR_external_memory_capabilities
VK_KHR_external_semaphore_capabilities
VK_KHR_get_physical_device_properties2
VK_KHR_get_surface_capabilities2
VK_KHR_surface
VK_KHR_win32_surface
VK_EXT_debug_report
VK_EXT_debug_utils
VK_EXT_swapchain_colorspace
Instance extension:
VK_KHR_surface
VK_KHR_win32_surface
VK_EXT_debug_utils
Availible devices:
AMD Radeon(TM) Graphics Type: 1 GeometryShader: Yes
FamilyIndex: 0 AvailibleQueueCount: 1 Surface Support: Yes Graphics | Compute | Transfer | Binding
FamilyIndex: 1 AvailibleQueueCount: 2 Surface Support: Yes | Compute | Transfer | Binding
FamilyIndex: 2 AvailibleQueueCount: 1 Surface Support: Yes | | Transfer | Binding
Availible device extensions: (The required is VK_KHR_swapchain)
VK_KHR_16bit_storage VK_KHR_8bit_storage VK_KHR_bind_memory2
VK_KHR_buffer_device_address VK_KHR_copy_commands2 VK_KHR_create_renderpass2
VK_KHR_dedicated_allocation VK_KHR_depth_stencil_resolve VK_KHR_descriptor_update_template
VK_KHR_device_group VK_KHR_draw_indirect_count VK_KHR_driver_properties
VK_KHR_dynamic_rendering VK_KHR_external_fence VK_KHR_external_fence_win32
VK_KHR_external_memory VK_KHR_external_memory_win32 VK_KHR_external_semaphore
VK_KHR_external_semaphore_win32 VK_KHR_format_feature_flags2 VK_KHR_get_memory_requirements2
VK_KHR_global_priority VK_KHR_imageless_framebuffer VK_KHR_image_format_list
VK_KHR_maintenance1 VK_KHR_maintenance2 VK_KHR_maintenance3
VK_KHR_maintenance4 VK_KHR_multiview VK_KHR_pipeline_executable_properties
VK_KHR_pipeline_library VK_KHR_push_descriptor VK_KHR_relaxed_block_layout
VK_KHR_sampler_mirror_clamp_to_edge VK_KHR_sampler_ycbcr_conversion VK_KHR_separate_depth_stencil_layouts
VK_KHR_shader_atomic_int64 VK_KHR_shader_clock VK_KHR_shader_draw_parameters
VK_KHR_shader_float16_int8 VK_KHR_shader_float_controls VK_KHR_shader_integer_dot_product
VK_KHR_shader_non_semantic_info VK_KHR_shader_subgroup_extended_types VK_KHR_shader_subgroup_uniform_control_flow
VK_KHR_shader_terminate_invocation VK_KHR_spirv_1_4 VK_KHR_storage_buffer_storage_class
Swap chain details:
Min/max images in swap chain; 1, 16
Min/max width and height 800 : 600 | 800 : 600
Pixel format codes(at least we need one no matter what!): 44
50 58 97 44 50 58 97 58 97 58
97 97 2 3 4 5 8 37 38 43
45 51 52 57 64 91 92 122
color space codes(at least we need one no matter what!): 0
0 0 0 1000104006 1000104006 1000104006 1000104006 1000104008 1000104008 1000104007
1000104007 1000104002 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
Available presentation modes:
0 (Immediate)
2 3 ( FIFO, FIFO_relaxed)
A few things:
There isn't really enough information in the problem description to allow others to locate the problem. For example, we can't tell how surface got created and we can't tell how the Vulkan device got created. It is better to post a complete reproducible example someplace and point to it.
There are a lot of hits when searching for "vulkan swapchain exception" like this one and this one. Something there may give a clue.
Based on what is provided and the links mentioned above, my best guess is that you're not enabling the swapchain extension when creating the device. Perhaps you could check that?
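To illustrate, here is a hypothetical fragment of logical-device creation with VK_KHR_swapchain enabled (variable names match the question; queue family 0 is an assumption based on the dump above, which shows it supporting both graphics and present):

const char *deviceExtensions[] = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };
float queuePriority = 1.0f;

VkDeviceQueueCreateInfo queueInfo = {0};
queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
queueInfo.queueFamilyIndex = 0; // graphics + present per the dump above
queueInfo.queueCount = 1;
queueInfo.pQueuePriorities = &queuePriority;

VkDeviceCreateInfo deviceInfo = {0};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.queueCreateInfoCount = 1;
deviceInfo.pQueueCreateInfos = &queueInfo;
// Without these two lines, vkCreateSwapchainKHR is called on a device that
// never enabled VK_KHR_swapchain, which can crash rather than fail cleanly.
deviceInfo.enabledExtensionCount = 1;
deviceInfo.ppEnabledExtensionNames = deviceExtensions;

if (vkCreateDevice(physicalDevice, &deviceInfo, NULL, &device) != VK_SUCCESS)
    printf("failed to create logical device!");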
The validation layer should be helpful, and you can find more information about it here. You might try making a vk_layer_settings.txt file and using it to set the message severity to "info" so that you can verify that the layer is active (it will print some sort of startup message at this level). Your IDE might be swallowing the text output coming from the validation layer, and the point of doing this is to verify that you can see any validation output. It may be easier to provide a log filename in the same settings file and send the validation output to that file instead of figuring out what the IDE is doing. You can also use the vkconfig tool to apply these sorts of settings to the validation layer.
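For instance, a vk_layer_settings.txt along these lines, placed in the working directory (key names are taken from the Khronos validation layer documentation and are worth double-checking against your SDK version):

# Log validation messages to a file instead of relying on the IDE console
khronos_validation.debug_action = VK_DBG_LAYER_ACTION_LOG_MSG
khronos_validation.report_flags = error,warn,info
khronos_validation.log_filename = vk_validation_log.txt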

Capture stdout within a TCL catch command

In my main Tcl script, I am calling a Tcl proc wrapped in a catch command.
This proc in turn calls 10 more procs.
When there is an error in execution in any of those 10 procs, Tcl still continues execution of my main script as expected, and I am just able to view the error message that I captured. This error message may or may not be conclusive enough to determine which of the 10 procs errored out during execution.
Is there a way to keep capturing all the stdout up to the point of error?
I know it can be done by writing all messages (puts statements) in those 10 procs to another log file, but I'm interested in knowing if there is any other way.
The catch command doesn't intercept I/O at all. To intercept output, the simplest and most reliable method is to put a channel transform on that channel with chan push.
oo::class create Capture {
    variable contents encoding

    # Implement the channel interception protocol
    method initialize {handle mode} {
        set contents {}
        return {initialize finalize write}
    }
    method finalize handle {
        # We do nothing here
    }
    method write {handle buffer} {
        append contents $buffer
        return $buffer
    }

    # Methods for ordinary people!
    method capture {channel body} {
        set encoding [chan configure $channel -encoding]
        chan push $channel [self]
        try {
            uplevel 1 $body
        } finally {
            chan pop $channel
        }
    }
    method contents {} {
        # Careful; need the encoding as channels work with binary data
        return [encoding convertfrom $encoding $contents]
    }
}
How to use this class:
set capt [Capture new]
$capt capture stdout {
puts "Hello world!"
}
puts "Captured [string length [$capt contents]] characters"
puts [lmap c [split [$capt contents] ""] {scan $c "%c"}]
Output (I assume you recognise ASCII codes; the 13 10 at the end is a carriage-return/new-line sequence):
Hello world!
Captured 14 characters
72 101 108 108 111 32 119 111 114 108 100 33 13 10
catch won't capture the stdout of a Tcl procedure; it captures the return value.
A demo in an interactive tclsh:
% proc p {} {puts "some stdout"; return "return value"}
% catch p result
some stdout
0
% set result
return value

Looping in Pyspark causes sparkException

In Zeppelin with pyspark.
Before I found the correct way of doing things (last over a Window), I had a loop that extended the value of a previous row to itself, one row at a time (I know loops are bad practice). However, after running a couple hundred times it fails with a NullPointerException before reaching the best-case condition=0.
To get around the error (before I discovered the last command), I had the loop run a few hundred times to a midpoint condition=1000 and dump the results, then run it again with condition=500, and rinse and repeat until I hit condition=0.
def extendTarget(myDF, loop, lessThan):
    i = myDF.filter(col("target") == "unknown").count()
    while (i > lessThan):
        cc = loop
        while (cc > 0):
            myDF = myDF.withColumn("targetPrev", lag("target", 1).over(Window.partitionBy("id").orderBy("myTime")))
            myDF = myDF.withColumn("targetNew", when(col("target") == "unknown", col("targetPrev")).otherwise(col("target")))
            myDF = myDF.select(
                "id",
                "myTime",
                col("targetNew").alias("target"))
            cc = cc - 1
        i = myDF.filter(col("target") == "unknown").count()
        print i
    return myDF

myData = spark.read.load(myPath)
myData = extendTarget(myData, 20, 0)
myData.write.parquet(myPathPart1)
I expect it to take a stupidly long time (since I'm doing it wrong), but I do not expect it to error out with the exception below.
Output (given inputs (myData, 20, 0)):
38160
22130
11375
6625
5085
4522
4216
3936
3662
3419
3202
Error
Py4JJavaError: An error occurred while calling o26814.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 32 in stage 1539.0 failed 4 times, most recent failure: Lost task 32.3 in stage 1539.0 (TID XXXX, ip-XXXX, executor 17): ExecutorLostFailure (executor 17 exited caused by one of the running tasks) Reason: Container from a bad node: container_XXXX_0001_01_000033 on host: ip-XXXX. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_XXXX_0001_01_000033
Exit code: 50
Stack trace: ExitCodeException exitCode=50:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
at org.apache.hadoop.util.Shell.run(Shell.java:869)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 50
.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2041)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2029)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2028)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2028)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:966)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2262)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2211)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2200)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:777)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2830)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2829)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2829)
at sun.reflect.GeneratedMethodAccessor388.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling o26814.count.\n', JavaObject id=o26815), <traceback object at 0x7efc521b11b8>)
I can only guess that this has something to do with memory or cache, even though I am reusing all variable names. If it is a memory problem, is there a garbage-collect or clear-cache/memory command that I can put beside the print statement to allow it to loop forever?
Again, I know it's bad practice to use loops, especially ones that go on for what seems like forever, but sometimes the better/smarter code doesn't present itself when I need it, so in the interim I hack it however I can.
Answered in a comment by Cronoik
Could you please try the following (this will keep the execution plan small): myDF = myDF.select("id", "myTime", col("targetNew").alias("target")).checkpoint() (replace the corresponding line above, and set the checkpoint directory beforehand via spark.setCheckpointDir). – cronoik Sep 9 at 4:05
I tested it out and it worked! The first few times it failed, though, mostly due to my misunderstanding of how checkpoint works. Would you like to post your comment as an answer? Some things I had to figure out that would be nice as part of the answer: spark.sparkContext.setCheckpointDir("path/") sets the checkpoint path; myDF.checkpoint() alone does not work, you have to assign the checkpoint back with myDF = myDF.checkpoint(). Also, I found the NullPointerException: if I tell the loop to run 200 times before counting, it throws a NullPointerException and the SparkContext ends (and must be restarted). – Ranald Fong Sep 10 at 2:44
Also, having the checkpoint inside each loop caused it to run slowly, as it was probably checkpointing on every loop run. What really sped it up was putting the checkpoint right before the print, allowing it to loop in memory x number of times (I used 100) before checkpointing. Thanks cronoik; if you post your answer as an answer I'll mark it as accepted! – Ranald Fong Sep 10 at 2:53
Edit: Commenter wanted details
The point is to extend a value from one row forward, replacing all unknowns until the next known value:
| id | time | target |
| a | 1:00 | 1 |
| a | 1:01 | unknown|
| a | . | . |
| a | 5:00 | unknown|
| a | 5:01 | 2 |
to
| id | time | target |
| a | 1:00 | 1 |
| a | 1:01 | 1 |
| a | . | 1 |
| a | 5:00 | 1 |
| a | 5:01 | 2 |
The code was changed to use checkpoints:
spark.sparkContext.setCheckpointDir(".../myCheckpointsPath/")

def extendTarget(myDF, loop, lessThan):
    i = myDF.filter(col("target") == "unknown").count()
    while (i > lessThan):
        cc = loop
        while (cc > 0):
            myDF = myDF.withColumn("targetPrev", lag("target", 1).over(Window.partitionBy("id").orderBy("myTime")))
            myDF = myDF.withColumn("targetNew", when(col("target") == "unknown", col("targetPrev")).otherwise(col("target")))
            myDF = myDF.select(
                "id",
                "myTime",
                col("targetNew").alias("target"))
            cc = cc - 1
        i = myDF.filter(col("target") == "unknown").count()
        print i
        myDF = myDF.checkpoint()
    return myDF

myData = spark.read.load(myPath)
myData = extendTarget(myData, 20, 0)
myData.write.parquet(myPathPart1)
At the expense of HDD space for the checkpoints, this allows the loop to go on forever! But in general, do not use loops (doubly so for near-infinite ones); use first and last with ignorenulls instead.
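For reference, a minimal sketch of that window-based approach (illustrative data matching the example above; note it converts the "unknown" marker to real nulls first, since ignorenulls only skips nulls):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, last, when
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", "1:00", "1"), ("a", "1:01", "unknown"),
     ("a", "5:00", "unknown"), ("a", "5:01", "2")],
    ["id", "myTime", "target"])

# Turn the "unknown" marker into real nulls so ignorenulls can skip them.
df = df.withColumn(
    "target", when(col("target") == "unknown", None).otherwise(col("target")))

# last(..., ignorenulls=True) over a running window carries the most recent
# known value forward, replacing the whole loop with a single pass.
w = (Window.partitionBy("id").orderBy("myTime")
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))
filled = df.withColumn("target", last("target", ignorenulls=True).over(w))
filled.show()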

My H2/C3P0/Hibernate setup does not seem to be preserving prepared statements?

I am finding that my database is the bottleneck in my application; as part of this, it looks like prepared statements are not being reused.
For example, here is a method I use:
public static CoverImage findCoverImageBySource(Session session, String src)
{
    try
    {
        Query q = session.createQuery("from CoverImage t1 where t1.source=:source");
        q.setParameter("source", src, StandardBasicTypes.STRING);
        CoverImage result = (CoverImage)q.setMaxResults(1).uniqueResult();
        return result;
    }
    catch (Exception ex)
    {
        MainWindow.logger.log(Level.SEVERE, ex.getMessage(), ex);
    }
    return null;
}
But the YourKit profiler says
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery() Count 511
com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement() Count 511
and I assume that the count for the prepareStatement() call should be lower; as it is, it looks like we create a new prepared statement every time instead of reusing them.
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
I am using c3p0 connection pooling, which complicates things a little, but as I understand it I have it configured correctly:
public static Configuration getInitializedConfiguration()
{
    //See https://www.mchange.com/projects/c3p0/#hibernate-specific
    Configuration config = new Configuration();
    config.setProperty(Environment.DRIVER,"org.h2.Driver");
    config.setProperty(Environment.URL,"jdbc:h2:"+Db.DBFOLDER+"/"+Db.DBNAME+";FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000");
    config.setProperty(Environment.DIALECT,"org.hibernate.dialect.H2Dialect");
    System.setProperty("h2.bindAddress", InetAddress.getLoopbackAddress().getHostAddress());
    config.setProperty("hibernate.connection.username","jaikoz");
    config.setProperty("hibernate.connection.password","jaikoz");
    config.setProperty("hibernate.c3p0.numHelperThreads","10");
    config.setProperty("hibernate.c3p0.min_size","1");
    //Consider that if we have lots of busy threads waiting on next stages we could possibly have a lot of active
    //connections.
    config.setProperty("hibernate.c3p0.max_size","200");
    config.setProperty("hibernate.c3p0.max_statements","5000");
    config.setProperty("hibernate.c3p0.timeout","2000");
    config.setProperty("hibernate.c3p0.maxStatementsPerConnection","50");
    config.setProperty("hibernate.c3p0.idle_test_period","3000");
    config.setProperty("hibernate.c3p0.acquireRetryAttempts","10");
    //Cancel any connection that is more than 30 minutes old.
    //config.setProperty("hibernate.c3p0.unreturnedConnectionTimeout","3000");
    //config.setProperty("hibernate.show_sql","true");
    //config.setProperty("org.hibernate.envers.audit_strategy", "org.hibernate.envers.strategy.ValidityAuditStrategy");
    //config.setProperty("hibernate.format_sql","true");
    config.setProperty("hibernate.generate_statistics","true");
    //config.setProperty("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
    //config.setProperty("hibernate.cache.use_second_level_cache", "true");
    //config.setProperty("hibernate.cache.use_query_cache", "true");
    addEntitiesToConfig(config);
    return config;
}
Using H2 1.3.172, Hibernate 4.3.11, and the corresponding c3p0 for that Hibernate version.
With a reproducible test case we have:
HibernateStats
HibernateStatistics.getQueryExecutionCount() 28
HibernateStatistics.getEntityInsertCount() 119
HibernateStatistics.getEntityUpdateCount() 39
HibernateStatistics.getPrepareStatementCount() 189
Profiler, method counts
GooGooStatementCache.acquireStatement() 35
GooGooStatementCache.checkinStatement() 189
GooGooStatementCache.checkoutStatement() 189
NewProxyPreparedStatement.init() 189
I don't know what I should be counting as creation of a prepared statement rather than reuse of an existing prepared statement.
I also tried enabling c3p0 logging by adding a c3p0 logger and making it use the same log file in my LogProperties, but it had no effect:
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
MainWindow.logger.addHandler(fe);
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.addHandler(fe);
Now that I have eventually got c3p0-based logging working, I can confirm that the suggestion of @Stevewaldman is correct.
If you enable
public static Logger c3p0ConnectionLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0ConnectionLogger.setLevel(Level.FINEST);
c3p0ConnectionLogger.setUseParentHandlers(false);
Then you get log output of the form
24/08/2019 10.20.12:BST:FINEST: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache ----> CACHE HIT
24/08/2019 10.20.12:BST:FINEST: checkoutStatement: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 1; num connections: 13; num keys: 347
24/08/2019 10.20.12:BST:FINEST: checkinStatement(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 0; num connections: 13; num keys: 347
making it clear when you get a cache hit. When there is no cache hit you don't get the first line, but you do get the other two lines.
This is using c3p0 9.2.1.

AHK IF statement

ArrayCount = 0
Loop, Read, Times.txt ; This loop retrieves each line from the file.
{
    ArrayCount += 1 ; Keep track of how many items are in the array.
    ArrayTime%ArrayCount% := A_LoopReadLine
}
WinGetTitle, Title, A
Loop %ArrayCount%
{
    element := ArrayTime%A_Index%
    Time = %A_WDay%%A_Hour%%A_Min%
    msgbox %Time% , %element%
    if (Time=%element%)
    {
        IfWinExist, Test.txt
        {
            WinActivate
            Sleep 500
            Send Hi{enter}
            msgbox %Time% , %element%
            Sleep 500
            WinActivate, %Title%
        }
    }
}
Ok so the main issue is with this part:
if (Time=%element%)
I have also tried
if (%Time%=%element%)
if (A_WDay . A_Hour . A_Min=%element%)
And I think some other similar variations. The problem I'm getting is that it's either always true or always false, depending on how I have it written.
Inside the text file is a list like this:
10000
10700
11400
20400
21100
I add an extra line with the current time for testing, and I added the msgbox to compare, and I can clearly see they're both the same when it doesn't work, or that they're different when it does. Sorry for such a basic question, but I feel like I've been trying for a long time and have read everything I can on variables and IF statements. Thanks for any help.
Also, the point of it is that I need it to go off every 7 hours starting at midnight on Sunday. This is what I came up with; if there's a completely better way in general, I'd be happy to hear that too.
Try this:
if % Time = element
{
MsgBox, Equal!
}
As for the scheduling part, try running your script through Windows Task Scheduler (hit Windows+R, type taskschd.msc and press Enter). There are tutorials on the Internet explaining how to create new tasks.
With regard to timers, have a look at this as an example.
SetTimer, AlertType1, 60000
ToAlertType1 := 1
ToAlertType2 := 1

AlertType1:
;If A_WDay between 2 and 7 ; is day monday - sunday?
;{
If (A_Hour = 7 or A_Hour = 13)
{
    If (ToAlertType1)
    {
        SoundBeep, 500, 500
        ToAlertType2 := 1
        ToAlertType1 := 0
        MsgBox, 4096, %AppName%, Msg1.
        Return
    }
}
Else if (A_Hour = 21)
{
    If (ToAlertType2)
    {
        SoundBeep, 500, 500
        ToAlertType2 := 0
        ToAlertType1 := 1
        MsgBox, 4096, %AppName%, Msg2.
        Return
    }
}
;}
Return
