What's the difference between SqlMapClient and SqlMapSession in iBATIS?

While reading ibatis-sqlmap-2.3.4, I found that they both implement SqlMapExecutor.
SqlMapClientImpl performs inserts through its localSqlMapSession, which provides thread safety.
But in Spring 2.5.6, the execute method of SqlMapClientTemplate uses SqlMapClientImpl like this:
SqlMapSession session = this.sqlMapClient.openSession();
...
return action.doInSqlMapClient(session);
The openSession method returns a new SqlMapSessionImpl each time.
My questions are:
Why does SqlMapClientTemplate use SqlMapSession instead of SqlMapClient?
Why is the localSqlMapSession of SqlMapClient not used in SqlMapClientTemplate, for example like this:
return action.doInSqlMapClient(this.sqlMapClient);
What's the difference between SqlMapClient and SqlMapSession?

For your first question, spring-orm explains it in a comment:
// We always need to use a SqlMapSession, as we need to pass a Spring-managed
// Connection (potentially transactional) in. This shouldn't be necessary if
// we run against a TransactionAwareDataSourceProxy underneath, but unfortunately
// we still need it to make iBATIS batch execution work properly: If iBATIS
// doesn't recognize an existing transaction, it automatically executes the
// batch for every single statement...
The difference between iBATIS' SqlMapClient and SqlMapSession is explained in the SqlMapClient interface's Javadoc:
/**
* Returns a single threaded SqlMapSession implementation for use by
* one user. Remember though, that SqlMapClient itself is a thread safe SqlMapSession
* implementation, so you can also just work directly with it. If you do get a session
* explicitly using this method <b>be sure to close it!</b> You can close a session using
* the sqlMapSession.close() method.
* <p/>
*
* @return An SqlMapSession instance.
*/
public SqlMapSession openSession();
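In short, SqlMapClient is the thread-safe facade that can be shared across the whole application, while a SqlMapSession obtained from openSession() is a single-threaded handle that you must close yourself. A minimal sketch of both usage styles against the iBATIS 2.x API (the "insertJob" statement id and the job object are placeholders):
import com.ibatis.sqlmap.client.SqlMapClient;
import com.ibatis.sqlmap.client.SqlMapSession;

public class SqlMapUsageSketch {
    public void demo(SqlMapClient client, Object job) throws java.sql.SQLException {
        // Style 1: use the thread-safe SqlMapClient directly; it manages a
        // session per thread internally (the localSqlMapSession).
        client.insert("insertJob", job);

        // Style 2: open an explicit single-threaded session, e.g. to bind it to a
        // specific connection or transaction, and be sure to close it.
        SqlMapSession session = client.openSession();
        try {
            session.insert("insertJob", job);
        } finally {
            session.close();
        }
    }
}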

Related

How to benchmark DB operations using JMH?

Sometimes we have to perform the same DB operation multiple times within a loop. How can I compute the execution time of each operation using JMH?
public void applyAll(ArrayList<parameter_type> lists) {
    for (parameter_type param : lists) {
        saveToDB(param);
    }
}
How can I compute the execution time of saveToDB(param) for each call?
DB operations are really nothing to microbenchmark. Their timing will depend on multiple factors that are quite impossible to isolate.
As for using parameters, have a look at this answer, which explains the use of the @Param annotation.
As @RafaelWinterhalter said, this type of call is prone to give misleading results in benchmarks. But if you still want to try, then:
Serialize and save a reference list of calls.
Then, in the benchmark, use a @State(Scope.Thread) object to restore this list into an array and keep a loop counter variable there.
Then @Benchmark public int test1_saveToDB(MyState state) { saveToDB(state.params[state.i]); return state.i++; } (see the sketch below).
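A minimal sketch of that shape, where Parameter stands in for the question's parameter_type and saveToDB is left as a stub (all of these names are placeholders, not JMH requirements):
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

public class SaveToDbBenchmark {

    // Thread-local state: the restored reference list plus a loop counter.
    @State(Scope.Thread)
    public static class SaveState {
        Parameter[] params;
        int i;

        @Setup(Level.Iteration)
        public void restore() {
            // In practice, deserialize the previously saved reference list here.
            params = new Parameter[] { new Parameter() };
            i = 0;
        }
    }

    @Benchmark
    public int saveOne(SaveState state) {
        // Each benchmark invocation measures exactly one saveToDB call.
        saveToDB(state.params[state.i % state.params.length]);
        return state.i++; // return something so the JIT cannot discard the work
    }

    private void saveToDB(Parameter p) {
        // The DB write under test goes here.
    }

    // Stand-in for the question's parameter_type.
    public static class Parameter { }
}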

How to define a thread-safe array?

How can I define a thread safe global array with minimal modifications?
I want every access to it to go through a mutex / synchronized block.
Something like this, where 'T' is some type (note that a 'sync' keyword does not currently exist, AFAIK):
sync Array!(T) syncvar;
And every access to it would be similar to this:
Mutex __syncvar_mutex;
//some func scope....
synchronized(__syncvar_mutex) { /* edits 'syncvar' safely */ }
My naive attempt was to do something like this:
import std.typecons : Proxy;

synchronized class Array(T)
{
    static import std.array;
    private std.array.Array!T data;
    mixin Proxy!data;
}
Sadly, it doesn't work because of https://issues.dlang.org/show_bug.cgi?id=14509
Can't say I am very surprised, though, as automagical handling of multi-threading via hidden mutexes is very unidiomatic in modern D, and the very concept of synchronized classes is mostly a relic from D1 times.
You can implement the same solution manually, of course, by defining your own SharedArray class with all the necessary methods and adding locks inside those methods before calling the internal private plain Array methods. But I presume you want something that works more out of the box.
Can't invent anything better right here and now (will think about it more), but it is worth noting that, in general, D encourages creating data structures designed explicitly for shared access, instead of just protecting normal data structures with mutexes. And, of course, the most encouraged approach is to not share data at all, using message passing instead.
I will update the answer if anything better comes to my mind.
It is fairly easy to make a wrapper around array that will make it thread-safe. However, it is extremely difficult to make a thread-safe array that is not a concurrency bottleneck.
The closest thing that comes to mind is Java's CopyOnWriteArrayList class, but even that is not ideal...
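For context, this is roughly how CopyOnWriteArrayList behaves: reads and iteration work on an immutable snapshot without locking, while every write copies the whole backing array, which is exactly why it stops being attractive once writes dominate. A small Java illustration:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        List<Integer> shared = new CopyOnWriteArrayList<>();
        shared.add(1);              // every mutation copies the backing array
        shared.add(2);
        for (int v : shared) {      // the iterator sees an immutable snapshot
            shared.add(v * 10);     // safe, but invisible to this iteration
        }
        System.out.println(shared); // [1, 2, 10, 20]
    }
}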
You can wrap the array inside a struct that locks access to the array from the moment a thread acquires a token until it releases it.
The wrapper/locker:
acquire(): is called in a loop by a thread. Since it returns a pointer, the thread knows it holds the token when the method returns a non-null value.
release(): is called by a thread after processing the data whose access was previously acquired.
shared struct Locker(T)
{
private:
    T t;
    size_t token;
public:
    shared(T)* acquire()
    {
        import core.atomic;
        // atomically claim the token so only one thread can flip it from 0 to 1
        if (cas(&token, cast(size_t)0, cast(size_t)1))
            return &t;
        return null;
    }
    void release()
    {
        import core.atomic;
        // hand the token back for the waiting threads
        atomicOp!"-="(token, 1);
    }
}
and a quick test:
alias LockedIntArray = Locker!(size_t[]);
shared LockedIntArray intArr;

void arrayTask(size_t cnt)
{
    import core.thread, std.random;
    // ensure the desynchronization of this job.
    Thread.sleep(dur!"msecs"(uniform(4, 20)));
    shared(size_t[])* arr = null;
    // wait for the token
    while (arr == null) { arr = intArr.acquire; }
    *arr ~= cnt;
    import std.stdio;
    writeln(*arr);
    // release the token for the waiting threads
    intArr.release;
}

void main(string[] args)
{
    import std.parallelism;
    foreach (immutable i; 0..16)
    {
        auto job = task(&arrayTask, i);
        job.executeInNewThread();
    }
}
The downside is that each block of operations on the array must be surrounded by an acquire/release pair.
You have the right idea. As an array, you need to be able to both edit and retrieve information. I suggest you take a look at the read-write mutex and atomic utilities provided by Phobos. A read operation is fairly simple:
synchronize on mutex.readLock
load (with atomicLoad)
copy the item out of the synchronize block
return the copied item
Writing should be almost exactly the same. Just synchronize on mutex.writeLock and do a cas or atomicOp operation.
Note that this will only work if you copy the elements in the array during a read. If you want to get a reference, you need to do additional synchronization on the element every time you access or modify it.
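To illustrate the pattern described above (take the read lock, copy the element out, unlock; take the write lock for mutation), here is the same idea sketched in Java with a ReentrantReadWriteLock. The answer refers to the equivalent facilities in D's Phobos; this is only an illustrative analogue:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SyncArray<T> {
    private final Object[] items;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public SyncArray(int size) {
        items = new Object[size];
    }

    @SuppressWarnings("unchecked")
    public T get(int i) {
        lock.readLock().lock();        // many readers may hold the read lock at once
        try {
            return (T) items[i];       // copy the element out before unlocking
        } finally {
            lock.readLock().unlock();
        }
    }

    public void set(int i, T value) {
        lock.writeLock().lock();       // writers get exclusive access
        try {
            items[i] = value;
        } finally {
            lock.writeLock().unlock();
        }
    }
}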

How to perform uvm_do_on without randomization?

I have a virtual sequencer from which I execute three transactions in parallel, each one on its corresponding sequencer. So I have something like this:
class top_vseqr extends uvm_sequencer;
  type_a_seqr seqr_a;
  type_b_seqr seqr_b;
  type_c_seqr seqr_c;
  ...
endclass: top_vseqr

class simple_vseq extends uvm_sequence;
  `uvm_declare_p_sequencer(top_vseqr)
  type_a_seq seq_a;
  type_b_seq seq_b;
  type_c_seq seq_c;
  ...
  virtual task body();
    fork
      `uvm_do_on(seq_a, p_sequencer.seqr_a)
      `uvm_do_on(seq_b, p_sequencer.seqr_b)
      `uvm_do_on(seq_c, p_sequencer.seqr_c)
    join
  endtask: body
endclass: simple_vseq
But now I want to be able to drive specific transactions into the virtual sequencer, depending on the test I am running. To do so, I have a class with an analysis import that is updated every time the monitor sees a transaction in the interface, and a function that returns the next transaction to be driven. So now I want to do something like the following:
class test extends uvm_test;
  model model_a;
  simple_vseq seq;
  top_vseqr virt_seqr;
  ...
  task run_phase(uvm_phase phase);
    ...
    seq = simple_vseq::type_id::create("seq", this);
    seq.seq_a = model_a.get_sequence();
    seq.start(virt_seqr);
    ...
  endtask: run_phase
Digging through the UVM documentation I have seen that there is a `uvm_send macro, but it doesn't allow you to select the sequencer to run the sequence on (i.e. I haven't seen a `uvm_send_on or anything like that). What can I do?
Thanks!
You can implement the contents of the `uvm_do_on macro yourself, minus the call to randomize() (like you showed in the second snippet), without any worries. This is anyway the practice suggested by some experts, because the sequencer/driver handshake mechanism is pretty simple. The `uvm_do* macros are not the norm; they're just there to help you out in the beginning.
I don't think there is a `uvm_send_on macro, but there is a `uvm_create_on(SEQ_OR_ITEM, SEQR) macro which you can use. From the UVM documentation, it is the same as `uvm_create except that it also sets the parent sequence to the sequence in which the macro is invoked, and it sets the sequencer to the specified SEQR argument. In fact, the `uvm_create macro calls `uvm_create_on internally, passing m_sequencer by default; you can override that by calling `uvm_create_on directly.
Alternatively, you could also call set_sequencer on your sequence_item object so that it sets the m_sequencer variable.
Hope this helps.
`uvm_do_on_with may satisfy your requirement; you can also remove rand from the fields in your packet to disable randomization, or add constraints.

Using Active Record pattern in CakePHP, and avoiding passing arrays around

As my CakePHP 2.4 app gets bigger, I'm noticing I'm passing a lot of arrays around in the model layer. Cake has kind of led me down this path because it returns arrays, not objects, from its find calls. But more and more, it feels like terrible practice.
For example, in my Job model, I've got a method like this:
public function durationInSeconds($job) {
    return $job['Job']['estimated_hours'] * 3600; // convert to seconds
}
Whereas I imagine that using the active record pattern, it should look more like this:
public function durationInSeconds() {
    return $this->data['Job']['estimated_hours'] * 3600; // convert to seconds
}
(ie, take no parameter, and assume the current instance represents the Job you want to work with)
Is that second way better?
And if so, how do I use it when, for example, I'm looping through the results of a find('all') call? Cake returns an array - do I loop through that array and do a read for every single row? (seems a waste to re-fetch the info from the database)
Or should I implement a kind of setActiveRecord method that emulates read, like this:
function setActiveRecord($row) {
    $this->id = $row['Job']['id'];
    $this->data = $row;
}
Or is there a better way?
EDIT: The durationInSeconds method was just the simplest possible example. I know that for that particular case I could use virtual fields, but in other cases I have methods that are somewhat complex, where virtual fields won't do.
The best solution depends on the issue you need to solve. But if you have to call a function for each result row, it may be better to redesign the query so that it fetches all the necessary data.
In the case you have shown, you can simply use a virtual field on the Job model:
$this->virtualFields = array(
    'duration_in_seconds' => 'Job.estimated_hours * 3600',
);
...and/or you can use a method like this:
public function durationInSeconds($id = null) {
    if (!empty($id)) {
        $this->id = $id;
    }
    return $this->field('estimated_hours') * 3600; // convert to seconds
}

CakePHP code completion in NetBeans with "$this->" before a Helper

Code completion works only if I use:
/* @var $html HtmlHelper */
$html->link...
but I want it to work while using
$this->Html->...
any idea?
This won't be possible with the current setup. If you want code completion like that, you could put the following at the top of your class (in the constructor maybe).
if (false) {
    $this->Html = new HtmlHelper();
}
This will give you autocompletion, and since the IF condition never evaluates to TRUE it won't mess up your code.
