I installed MongoDB 4.4.3 on my Ubuntu server (4 cores, 8 GB of RAM, SSD). I have a collection of icons that contains more than 1M documents.
On the backend I'm using Node.js and Mongoose. The problem is that when I search the icons collection, a single request drives the CPU to 100%, and the weird thing is that MongoDB keeps using 100% of the CPU for about 10 seconds or so after delivering the response. I don't understand why that happens.
How can I fix this?
Here is my code:
const icons = await Icon.find({ name: 'car' }).limit(100).lean();
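A likely cause is that the exact-match query on name has no index, so every request scans the whole ~1M-document collection. Below is a minimal sketch of declaring one with Mongoose; the schema fields are hypothetical and should be adjusted to the real Icon model.

const mongoose = require('mongoose');

// Hypothetical, simplified schema -- replace with your actual Icon schema.
const iconSchema = new mongoose.Schema({
  name: String,
  tags: [String],
});

// Index on `name` so Icon.find({ name: 'car' }) can use an index scan (IXSCAN)
// instead of a full collection scan (COLLSCAN).
iconSchema.index({ name: 1 });

const Icon = mongoose.model('Icon', iconSchema);

// Optional check: the winning plan should mention IXSCAN, not COLLSCAN.
async function checkPlan() {
  const plan = await Icon.find({ name: 'car' })
    .limit(100)
    .explain('executionStats');
  console.log(JSON.stringify(plan, null, 2));
}

Alternatively, the index can be created once from the mongo shell with db.icons.createIndex({ name: 1 }) (assuming the default icons collection name) instead of relying on Mongoose's automatic index builds.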
I have tried multiple times to upload a big data set into Solr, and I get this error. Does anyone know what I can do?
https://i.stack.imgur.com/DUIKC.png
I am able to do this in Solr 7.x (tried 7.2 as well as 7.6). See if you can use that version for your project.
In the newer version, solrconfig.xml probably needs some changes depending on which configset you picked when creating your collection, such as _default or sample_techproducts_configs.
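For reference, a hedged sketch of what picking a configset at collection-creation time looks like against the Collections API; the host, collection name, and shard/replica counts below are placeholders, not values from the question.

// Node 18+ (global fetch); all names and the Solr host are placeholders.
async function createCollection() {
  const params = new URLSearchParams({
    action: 'CREATE',
    name: 'big_dataset',
    numShards: '1',
    replicationFactor: '1',
    'collection.configName': '_default', // or 'sample_techproducts_configs'
    wt: 'json',
  });
  const res = await fetch(`http://localhost:8983/solr/admin/collections?${params}`);
  console.log(await res.json());
}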
My problem is the following: even though I have configured and am running Solr in cloud mode, and I am able to list the collections (not only the cores) using the SolrJ package, in Hue I am not able to get the collections.
The Hue search dashboard only shows the cores (shards/replicas) of each collection. Because of this, reports created against a single core will have only partial data.
How can I configure hue.ini, or use any other way, so that I can list the collections rather than the cores?
https://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Connecting-to-Solrcloud-from-Hue/td-p/32110
Is there any way this feature will be available in Hue?
The related tracking JIRA is https://issues.cloudera.org/browse/HUE-5304. It should come up in Hue pretty soon.
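Outside of Hue, the SolrCloud Collections API can list the collections themselves (as opposed to the cores); a minimal sketch, with the Solr host as a placeholder:

// Node 18+ (global fetch); the Solr host below is a placeholder.
async function listCollections() {
  const res = await fetch('http://localhost:8983/solr/admin/collections?action=LIST&wt=json');
  const body = await res.json();
  console.log(body.collections); // e.g. [ 'collection1', 'collection2' ]
}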
I am using the collective.solr 4.1.0 search on our Plone 4.2.6 system.
I am running a Solr core on my server that is currently being used by our Plone live system's search. Now I want to build a new index, but without shutting down the live system's search for 10 or more hours (the time reindexing takes). Doing that on the same core is only available in collective.solr 5.0 and higher; see the collective.solr changelog.
Is there a way for me to build a new index on another core while still being able to use the search on the currently used core? I was thinking of it like this: the live system uses core_1 for queries while a new index is built on core_2. Once the index is built, the two cores are switched so that the live system now uses core_2 for its search.
I know there is a way to load an already built Solr index into a Solr core, but I can't figure out how to accomplish the switcheroo I'm thinking of.
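For what it's worth, the switch described above maps onto Solr's CoreAdmin SWAP action; a hedged sketch, using the core names from the question and a placeholder Solr host. Whether collective.solr 4.1.0 handles the swap transparently is a separate question, and the answer below suggests the replication route instead.

// Node 18+ (global fetch); the Solr host is a placeholder, core names as in the question.
async function swapCores() {
  const params = new URLSearchParams({ action: 'SWAP', core: 'core_1', other: 'core_2', wt: 'json' });
  const res = await fetch(`http://localhost:8983/solr/admin/cores?${params}`);
  console.log(await res.json());
}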
Kindly check the master-slave architecture; that might help here.
Check the following link: https://cwiki.apache.org/confluence/display/solr/Index+Replication
Hey guys, I'm playing with Sonata and trying to create a simple backend for a few related models, but my pages take about a second to generate, even for quite a simple page.
http://i.imgur.com/VPLII4B.png
Most of the time is spent in Twig template processing. But even on simple non-Sonata pages (in this project) it takes quite a while to generate a page: http://i.imgur.com/Se376oi.png
I wrote a simple Twig extension to measure the time in the prod environment, and it doesn't actually differ much from dev.
I have quite a powerful laptop (i7, 8 cores, 8 GB RAM, non-SSD 7200 rpm hard drive), and I even tried deploying the project to a powerful server; the time didn't differ much there either (about 10-15%).
Am I doing something wrong? I use PHP 5.6 on my local machine with OPcache enabled:
opcache.enable=1
opcache.enable_cli=0
opcache.memory_consumption=400
opcache.max_accelerated_files=10000
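Not part of the original settings, but for a prod box two related knobs are commonly checked alongside the above (assumption: the PHP process is restarted on deploys, so timestamp validation can be switched off safely):
opcache.validate_timestamps=0
; Symfony resolves a lot of file paths per request, so the realpath cache matters too
realpath_cache_size=4096k
realpath_cache_ttl=600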
What is the typical page generation time for your real projects?
I have installed Liferay-Tomcat 6.0.6 on one of my Linux machines (4 GB of RAM), and it uses MySQL installed on a different machine.
Liferay runs really slowly even with only 10 concurrent users.
I have attached a screenshot taken from AppDynamics which shows that both Ehcache and C3P0 respond slowly at times.
Is there any special configuration required for Ehcache or C3P0?
I am currently running with default configurations.
Can you tell me the methodology for deriving those stats? In most cases this behavior comes down to either a poor code implementation in a custom portlet or a JVM configuration issue, and more likely the latter. Take a look at the Ehcache garbage collection tuning documentation. When you run jstat, how do the garbage collection times look?
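As a rough illustration of that jstat check (the Tomcat PID and interval are placeholders), something like
jstat -gcutil <tomcat_pid> 5000
prints heap-occupancy percentages plus the GC counts and accumulated GC times every five seconds; steadily climbing FGC/FGCT columns would point at the JVM configuration rather than Ehcache or C3P0 themselves.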