I noticed that my app was pretty slow when Guard reran the current test I was working on. I quickly profiled it and found that Mongoid was taking 0.4 to 0.5 seconds to require.
Is that normal for Mongoid? I'm using Mongoid 4.0.0 and Ruby 2.1.2.
Is there any way to speed that up?
After a dbt upgrade on our project (dbt-core==1.0.4 -> dbt-core==1.3.2), I observed a huge performance decrease when models are built. Basically, our 84 models used to run in 1h20min with version 1.0.4, but the run never finishes with version 1.3.2.
I have checked for any breaking changes between the versions, but haven't found a clue as to why the performance issue occurs. I would greatly appreciate any advice on what to check or change.
Some information:
We have 258 models, 3,104 tests, 7 snapshots, 0 analyses, 16 seed files, 190 sources, 0 exposures, and 0 metrics. The model-building step (dbt run) is the main issue, as it never completes because of the performance decrease.
It seems that each successive model takes longer to process. The first ~10 models run in the same time as under version 1.0.4, but the longer the process runs, the worse performance gets. A simple model built as model #33 takes 10 seconds with version 1.0.4 but over 3,000 seconds with version 1.3.2.
We use the SQL Server database engine with the dbt-sqlserver==1.3.0 adapter for version 1.3.2.
The performance decrease is observed both on our servers and in my local testing environment, which suggests something is wrong with our dbt project rather than the machine itself.
We use 2 threads and the "ODBC Driver 18 for SQL Server" ODBC driver.
I compared the SQL scripts generated by dbt under versions 1.0.4 and 1.3.2, and they were identical.
I have studied the release notes for the dbt-core and dbt-sqlserver releases but haven't found any change that could explain the performance decrease. I also tried intermediate versions (1.2.x), but they showed exactly the same huge slowdown.
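One way to quantify the "each model gets slower" pattern is to pull the per-model timings out of dbt's own console log and check whether they grow with model index. A minimal sketch, assuming log lines end with the usual `[OK in N.NNs]` marker; the exact line format varies by dbt version, so adjust the pattern to your actual output:

```shell
# Write two sample lines in the shape of dbt's console output, then
# extract the per-model timings from them with grep/awk.
printf '%s\n' \
  '1 of 84 OK created table model dbo.stg_orders ........ [OK in 9.87s]' \
  '33 of 84 OK created table model dbo.fct_sales ........ [OK in 3012.40s]' \
  > sample_run.log
grep -Eo 'in [0-9.]+s' sample_run.log | awk '{print $2}'
# prints:
# 9.87s
# 3012.40s
```

Plotting those values against model index makes it obvious whether the slowdown is linear growth per model (suggesting accumulating state, e.g. connection or cache handling in the adapter) or a few pathological models.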
I'm using Homebrew's php-fpm#7.4-debug on macOS with Xdebug profiling to profile a web application's performance.
I'm running into an issue where the Xdebug profiler reports a page load time of only 1.5 seconds, but when I run an Apache Bench test with ab -n 9 -c 3 [my_web_app]/, the average page load time is over 4 seconds; it also takes 4 seconds to load in the browser.
What are some reasons the profiler would not be picking up these missing 2.5 seconds of page load time?
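For reference, the profiler setup is nothing unusual; roughly the standard Xdebug 2.x profiler settings under php-fpm (the values below are a sketch with placeholder paths, not special tuning). Note that the profiler only measures PHP execution inside the request, so time spent before or after the script runs, such as fpm worker queueing or network transfer, would not appear in the cachegrind file:

```ini
; php.ini / xdebug.ini -- Xdebug 2.x profiler settings (paths are placeholders)
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/xdebug
xdebug.profiler_output_name = cachegrind.out.%t.%p
```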
When starting my React app in development, it takes 3 to 5 minutes to load. The time varies; once it has started, it refreshes quickly enough on changes.
A production build also takes a long time.
What could cause this?
There are some huge components: some are around 500 lines, and a few have ~2,000 lines.
Could huge components cause this?
First update: I tried to start the React app in development 4 times today.
First start in development took 4 min 10 s.
Second start in development took 3 min 12 s.
Third start in development took 2 min 30 s.
Fourth start in development took 2 min 35 s.
I also timed exactly how long the production build takes.
First production build: 7 min 27 s.
Second production build: 3 min 41 s.
Third production build (after removing all build files): 3 min 34 s.
Also, those big components with 500 to 2,000 lines of code are not my design choice; the project was already like that when I joined. Now I'm just trying to figure out whether it's normal for the React development server to take this long to start.
Output message when the build finished:
File sizes after gzip:
2.37 MB build/static/js/2.6d79667f.chunk.js
72.79 KB build/static/css/2.37cd983e.chunk.css
52.85 KB build/static/js/main.375a34e0.chunk.js
2.17 KB build/static/css/main.5827d774.chunk.css
796 B build/static/js/runtime-main.a4023761.js
The bundle size is significantly larger than recommended.
Consider reducing it with code splitting: https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#code-splitting
You can also analyze the project dependencies: https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#analyzing-the-bundle-size
The project was built assuming it is hosted at http://localhost:5050/.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
Find out more about deployment here:
https://github.com/facebook/create-react-app/blob/master/packages/cra-template/template/README.md
This happens when your app has many components along with static content. Building a React production bundle also takes time because it produces an optimized build. You can follow the link below for optimization tips:
https://medium.com/front-end-weekly/optimizing-loading-time-for-big-react-apps-cf13bbf63c57
Not exactly. I think the dev server is fast, but a production build will take a few minutes depending on how much code you have. If it's a hello-world app and it still doesn't load, then I think something is wrong.
Hey guys, I'm playing with Sonata and trying to create a simple backend for a few related models, but my pages take about a second to generate even for quite a simple page.
http://i.imgur.com/VPLII4B.png
Most of the time is spent processing Twig templates. But even on simple non-Sonata pages (in this project), it takes quite a while to generate a page: http://i.imgur.com/Se376oi.png
I wrote a simple Twig extension to measure time in the prod environment, and it doesn't actually differ much from dev.
I have quite a powerful laptop (i7, 8 cores, 8 GB RAM, non-SSD 7,200 rpm drive), and I even tried deploying the project to a powerful server; the times didn't differ much either (about 10-15%).
Am I doing something wrong? I use PHP 5.6 on my local machine with OPcache enabled:
opcache.enable=1
opcache.enable_cli=0
opcache.memory_consumption=400
opcache.max_accelerated_files=10000
What are page generation times like for your real projects?
I have recently migrated from Solr 3.6 to Solr 4.0. The documents in my core are constantly being updated, so I issue a commit from code after every 10 thousand docs. However, since moving from 3.6 to 4.0, I have noticed that for the same core size a commit takes about twice as long in Solr 4.0 as it did in Solr 3.6.
Is there any workaround by which I can reduce this time? Any help would be highly appreciated.
Solr 4 has transaction logging enabled by default. If you don't need it, you can disable that option. I would provide a link, but the Solr Wiki is currently down for maintenance.
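A minimal sketch of what disabling it looks like in solrconfig.xml: the transaction log is configured by the updateLog element inside the update handler, and commenting it out turns it off (verify the element names against your own Solr 4.0 config before changing anything):

```xml
<!-- solrconfig.xml: commenting out the updateLog element disables
     the transaction log (tlog) in Solr 4. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!--
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  -->
</updateHandler>
```

Be aware that the transaction log is what backs features such as real-time get and durability of uncommitted updates, so only disable it if you don't rely on those.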