Hello, I am working on a Laravel project (new to Laravel).
I am supposed to compute values from other tables and save the results in an OUTPUT table.
I have 8 calculations in total per line and up to 3k lines to fill.
The problem is that I hit the max execution time error (60 seconds), even after changing the limit in Laravel and in php.ini.
Each function called just runs a SELECT with a WHERE and a SUM; I split them into separate functions for better organization.
My question is: is there a better way to process the data and minimize the execution time?
public function calcul($week)
{
    // If this week's output has already been generated, just display it
    $test = Output::where('week', $week)->limit(1);
    if ($test->first()) {
        return self::afficher($week);
    } else {
        // Otherwise compute the figures for every article, 100 rows at a time
        DB::table('article')->orderBy('material')->chunk(100, function ($stocks) {
            foreach ($stocks as $stock) {
                $id = $stock->material;

                // Each helper runs its own SELECT ... WHERE ... SUM query
                $safe_stock    = self::safe_stock($id);
                $past_need     = self::PassedNeeds($id) - self::NeedsInTwoWeeks($id);
                $two_week_need = self::NeedsInTwoWeeks($id);
                $stock_        = self::stock($id);
                $bdl           = self::bdl($id);
                $sm            = self::sm($id);
                $package       = self::package($id);
                $store_1       = self::store_1($id);
                $store_2       = self::store_2($id);
                $store_3       = self::store_3($id);

                Output::create([
                    'material'      => $id,
                    'safe_stock'    => $safe_stock,
                    'past_need'     => $past_need,
                    'two_week_need' => $two_week_need,
                    'stock'         => $stock_,
                    'bdl'           => $bdl,
                    'sm'            => $sm,
                    'package'       => $package,
                    'store_1'       => $store_1,
                    'store_2'       => $store_2,
                    'store_3'       => $store_3,
                ]);
            }
        });

        return self::index();
    }
}
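Not part of the original post, but one direction that usually helps, sketched under the assumption that each helper boils down to a SUM over some table keyed by material: pre-compute each aggregate for all materials at once and batch the writes, instead of running roughly eight queries plus one INSERT for every one of the 3k rows. The needs table and quantity column below are made-up names standing in for whatever tables the real helpers query.
// Sketch only: pre-compute one aggregate for ALL materials with a single grouped
// query, then bulk-insert each chunk instead of calling Output::create() per row.
// 'needs' and 'quantity' are placeholder names; each real helper (safe_stock(),
// stock(), bdl(), ...) would get its own grouped lookup like this one.
$pastNeeds = DB::table('needs')
    ->select('material', DB::raw('SUM(quantity) AS total'))
    ->groupBy('material')
    ->get()
    ->pluck('total', 'material'); // collection keyed by material, built from one query

DB::table('article')->orderBy('material')->chunk(500, function ($stocks) use ($pastNeeds) {
    $rows = [];
    foreach ($stocks as $stock) {
        $rows[] = [
            'material'  => $stock->material,
            'past_need' => $pastNeeds[$stock->material] ?? 0,
            // ...fill the other columns from their own pre-computed lookups
        ];
    }
    Output::insert($rows); // one INSERT per chunk; note insert() skips timestamps/model events
});
Collapsing the work into a handful of grouped SELECTs plus a few bulk INSERTs usually brings this kind of job well under the 60-second limit.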
In Gatling, how can I achieve the following?
sample_testdata.csv
orderId
101112
111213
121314
131415
A sample test runs with 4 users and multiple iterations:
user1 should use orderId 101112 for all iterations,
user2 should use orderId 111213 for all iterations,
and so on...
I am not able to find a "unique once" strategy for the feeder.
code:
scenario("Get Art")
.during(test_duration minutes) {
feed(fdr_arts)
.exec(_.set("hToken",hToken))
.exec(_.set("hTimeStamp",hTimeStamp))
.exec(_.set("gToken", gToken))
.exec(actionBuilder = http("Get Arts")
.post(getArtUrl)
}
Your scenario includes .during - which is a looping construct - and inside that you're calling feed. So each user is going to keep on looping for test_duration and on each loop it's going to pull the next value from the feeder.
To get the behaviour you want, you need to put the feeder before the loop...
scenario("Get Art")
.feed(fdr_arts)
.during(test_duration minutes) {
.exec(_.set("hToken",hToken))
.exec(_.set("hTimeStamp",hTimeStamp))
.exec(_.set("gToken", gToken))
.exec(actionBuilder = http("Get Arts")
.post(getArtUrl)
}
val txn_getArt = exec(_.set("hToken", hToken))
  .exec(_.set("hTimeStamp", hTimeStamp))
  .exec(_.set("gToken", gToken))
  .exec(http("Get Arts")
    .post(getArtUrl))

// Chaining it after the feeder does the trick in the scenario
scenario("Get Art").repeat(5000) { feed(fdr_arts).exec(txn_getArt) }
I am finding that my database is the bottleneck in my application, and as part of this it looks like prepared statements are not being reused.
For example, here is a method I use:
public static CoverImage findCoverImageBySource(Session session, String src)
{
    try
    {
        Query q = session.createQuery("from CoverImage t1 where t1.source=:source");
        q.setParameter("source", src, StandardBasicTypes.STRING);
        CoverImage result = (CoverImage) q.setMaxResults(1).uniqueResult();
        return result;
    }
    catch (Exception ex)
    {
        MainWindow.logger.log(Level.SEVERE, ex.getMessage(), ex);
    }
    return null;
}
But using the YourKit profiler it says
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery() Count 511
com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement() Count 511
and I assume that the count for the prepareStatement() call should be lower, as it looks like we create a new prepared statement every time instead of reusing one.
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
I am using c3p0 connection pooling, which complicates things a little, but as I understand it I have it configured correctly:
public static Configuration getInitializedConfiguration()
{
    //See https://www.mchange.com/projects/c3p0/#hibernate-specific
    Configuration config = new Configuration();
    config.setProperty(Environment.DRIVER,"org.h2.Driver");
    config.setProperty(Environment.URL,"jdbc:h2:"+Db.DBFOLDER+"/"+Db.DBNAME+";FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000");
    config.setProperty(Environment.DIALECT,"org.hibernate.dialect.H2Dialect");
    System.setProperty("h2.bindAddress", InetAddress.getLoopbackAddress().getHostAddress());
    config.setProperty("hibernate.connection.username","jaikoz");
    config.setProperty("hibernate.connection.password","jaikoz");
    config.setProperty("hibernate.c3p0.numHelperThreads","10");
    config.setProperty("hibernate.c3p0.min_size","1");
    //Consider that if we have lots of busy threads waiting on next stages we could possibly have a lot of active
    //connections.
    config.setProperty("hibernate.c3p0.max_size","200");
    config.setProperty("hibernate.c3p0.max_statements","5000");
    config.setProperty("hibernate.c3p0.timeout","2000");
    config.setProperty("hibernate.c3p0.maxStatementsPerConnection","50");
    config.setProperty("hibernate.c3p0.idle_test_period","3000");
    config.setProperty("hibernate.c3p0.acquireRetryAttempts","10");
    //Cancel any connection that is more than 30 minutes old.
    //config.setProperty("hibernate.c3p0.unreturnedConnectionTimeout","3000");
    //config.setProperty("hibernate.show_sql","true");
    //config.setProperty("org.hibernate.envers.audit_strategy", "org.hibernate.envers.strategy.ValidityAuditStrategy");
    //config.setProperty("hibernate.format_sql","true");
    config.setProperty("hibernate.generate_statistics","true");
    //config.setProperty("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
    //config.setProperty("hibernate.cache.use_second_level_cache", "true");
    //config.setProperty("hibernate.cache.use_query_cache", "true");
    addEntitiesToConfig(config);
    return config;
}
Using H2 1.3.172, Hibernate 4.3.11 and the corresponding c3p0 for that Hibernate version.
With a reproducible test case we have:
HibernateStats
HibernateStatistics.getQueryExecutionCount() 28
HibernateStatistics.getEntityInsertCount() 119
HibernateStatistics.getEntityUpdateCount() 39
HibernateStatistics.getPrepareStatementCount() 189
Profiler, method counts
GooGooStatementCache.acquireStatement() 35
GooGooStatementCache.checkinStatement() 189
GooGooStatementCache.checkoutStatement() 189
NewProxyPreparedStatement.init() 189
I don't know what I should be counting as the creation of a prepared statement rather than the reuse of an existing prepared statement.
I also tried enabling c3p0 logging by adding a c3p0 logger and making it use the same log file in my LogProperties, but it had no effect.
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
MainWindow.logger.addHandler(fe);
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.addHandler(fe);
Now that I have eventually got c3p0-based logging working, I can confirm that the suggestion from @Stevewaldman is correct.
If you enable
public static Logger c3p0ConnectionLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0ConnectionLogger.setLevel(Level.FINEST);
c3p0ConnectionLogger.setUseParentHandlers(false);
Then you get log output of the form
24/08/2019 10.20.12:BST:FINEST: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache ----> CACHE HIT
24/08/2019 10.20.12:BST:FINEST: checkoutStatement: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 1; num connections: 13; num keys: 347
24/08/2019 10.20.12:BST:FINEST: checkinStatement(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 0; num connections: 13; num keys: 347
making it clear when you get a cache hit. When there is no cache hit you don't get the first line, but you still get the other two lines.
This is using c3p0 9.2.1.
I have the following scenario, which has two requests (RequestOne and RequestTwo). It is set up to run with 3 users and 1 repetition. The simulation should have taken at least 20 seconds to finish, as I am using 20 seconds as the pacing. However, every time I run it, it finishes in less than 20 seconds. I tried different values for pacing as well.
val Workload = scenario("Load Test")
  .repeat(1, "repetition") {
    pace(20 seconds)
      .exitBlockOnFail {
        feed(requestIdFeeder)
          .group("Load Test") {
            exec(session => {
              session.set("url", spURL)
            })
              .group("RequestOne") { exec(requestOne) }
              .feed(requestIdFeeder)
              .group("RequestTwo") { exec(requestTwo) }
          }
      }
  }
setUp(Workload.inject(atOnceUsers(3))).protocols(httpProtocol)
output
Simulation com.performance.LoadTest completed in 11 seconds
Found the problem. I used only 1 repetition, so the scenario didn't need to wait for the 20-second pacing to complete and it exited early. Setting the repetition count to > 1 helped achieve the desired rate.
val Workload = scenario("Load Test")
  .repeat(10, "repetition") {
    pace(20 seconds)
      .exitBlockOnFail {
So, if you want to achieve a fixed number of transactions in your simulation, use repeat; otherwise use forever, as mentioned in the Gatling documentation, to achieve a consistent rate.
val Workload = scenario("Load Test")
  .forever (
    pace(20 seconds)
      .exitBlockOnFail {
I would like to sort my XML response.
This is my code:
// Make some cURL
// Create a simple XML element
$xml = new SimpleXMLElement($resp, LIBXML_NOWARNING, false);

// Output
foreach ($xml->Departure as $departure) {
    // DEFINE VARIABLES BASED ON XML RESPONSE
    $name          = $departure['name'];
    $rtDate        = $departure['rtDate'];
    $rtTime        = $departure['rtTime'];
    $direction     = $departure['direction'];
    $trainCategory = $departure['trainCategory'];

    // CALCULATE DURATION UNTIL NEXT DEPARTURE
    $prognosedTime = new DateTime($rtTime);
    $currentTime   = new DateTime($time);
    $interval      = $currentTime->diff($prognosedTime);

    // OUTPUT FOR BROWSERS
    echo $interval->format('%i') . ' Min: ' . $name . ' > ' . $direction;
    echo $trainCategory;
    echo "<hr/>";
}
?>
Result:
7 Min: Bus 240 > S Ostbahnhof
Bus
-------------------------------------
8 Min: Tram M10 > S+U Warschauer Str.
MetroTram
-------------------------------------
2 Min: U1 > Uhlandstr.
U-Bahn
-------------------------------------
0 Min: Tram M10 > S+U Hauptbahnhof
MetroTram
Problem:
My result should be sorted by $interval.
I have read PHP sorting issue with simpleXML several times, but I don't get it. So I wanted a shorter solution (for absolute beginners) and found something nice in Sort Foreach Loop after ID. But that requires arrays. Another solution is very close to that and shows how to define the arrays: ASC sort foreach. But I have no idea how to put all my data into an array, as I never know how many rows the response will have. I believe I am very close to a solution but haven't cracked it in two days.
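Not an official answer, just a minimal sketch of the array approach, reusing the variable names ($xml, $time) from the code above and assuming PHP 7+ for the <=> operator: collect each departure into a plain array while looping, then sort with usort() before echoing.
$departures = [];
foreach ($xml->Departure as $departure) {
    $prognosedTime = new DateTime((string) $departure['rtTime']);
    $currentTime   = new DateTime($time);
    $interval      = $currentTime->diff($prognosedTime);

    // Keep everything needed for the output, plus the minutes to sort by
    $departures[] = [
        'minutes'   => (int) $interval->format('%i'),
        'name'      => (string) $departure['name'],
        'direction' => (string) $departure['direction'],
        'category'  => (string) $departure['trainCategory'],
    ];
}

// Sort ascending by minutes until departure
usort($departures, function ($a, $b) {
    return $a['minutes'] <=> $b['minutes'];
});

foreach ($departures as $d) {
    echo $d['minutes'] . ' Min: ' . $d['name'] . ' > ' . $d['direction'];
    echo $d['category'];
    echo "<hr/>";
}
The array grows with each $departures[] = ... append, so it does not matter how many rows the response contains.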
How can I get relative time instead of absolute time by using CakeTime::timeAgoInWords?
For example:
about one week ago
instead of
1 week, 2 days ago
Create your own method to handle that based on what you want, and implement a helper function int_to_words() to spell out the number.
Something like:
function relativeTime($time) {
    $words = explode(" ", $time); // split the timeAgoInWords() output into words
    if ($words[1] == "week,") {
        if ($words[2] > 5) {
            return "Almost two weeks ago";
        }
        return "About one week ago";
    } else if ($words[1] == "weeks") {
        $number = int_to_words($words[0]); // e.g. 3 -> "three"
        return "About $number weeks ago";
    }
    // Read more about the possible CakeTime::timeAgoInWords results
    // and continue to implement the other possibilities here...
    return $time; // fall back to the original string
}
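For context, a hypothetical call site (the $timestamp variable here is an assumption, not from the original post) would simply wrap the CakeTime output:
// Hypothetical usage: post-process CakeTime's string with the helper above
echo relativeTime(CakeTime::timeAgoInWords($timestamp));
// e.g. "1 week, 2 days ago" becomes "About one week ago"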