Any idea about the error below?
Could not find a component named 'MainLayout'. Did you forget
to add it to App_Start\ReactConfig.cs?
Description: An unhandled exception occurred.
Exception Details: React.Exceptions.ReactInvalidComponentException: Could not find a component named 'MainLayout'. Did you
forget to add it to App_Start\ReactConfig.cs?
Source Error:
Line 119: if (ReactSettingsProvider.Current.EnableClientSide &&
!isEditingOverrideEnabled)
Line 120: {
Line 121: writer.WriteLine(reactComponent.RenderHtml());
Line 122:
Line 123: var tagBuilder = new TagBuilder("script")
Source File: C:\Project\HenryFord\Sitecore\src\Foundation\React\code\Mvc\JsxView.cs Line: 121
[ReactInvalidComponentException: Could not find a component named
'MainLayout'. Did you forget to add it to App_Start\ReactConfig.cs?]
React.ReactComponent.EnsureComponentExists() in
D:\a\1\s\src\React.Core\ReactComponent.cs:279
Server.min.js does contain the Mainlayout component code, yet on the backend we still get the error
“could not find a component named Mainlayout”.
Please let me know if more information is required. Thanks.
I have created some TypeScript types to help me during the development of a React app. Used on their own they work well: they are fast and do not use any additional memory, as expected. The problem starts when I try to use them together with other types; in my React app it then fails with "JavaScript heap out of memory".
I have created this Playground, with a simple example, to explain the problem.
On the playground, pay attention to these types:
// from line #129
type UseQueryOptions<T extends Base, K extends AllKeys<T, 4> > = Expand<T, K>
type UseQueryOptions2<T , K > = Expand_<T, K> // line #129
type UseQueryOptions3<T , K > = Expand_<T, K> extends infer O ? O : never
type ExpandResult<T,K> = Expand_<T, K> extends infer O ? O : never
type UseQueryOptions4<T , K > = ExpandResult<T,K>
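For readers without the playground open, these aliases build on a common "expand" idiom. The sketch below is a simplified stand-in for the playground's actual Expand_/Expand definitions, not their real implementation:

```typescript
// Simplified stand-in for the playground's Expand_: forces TypeScript to
// resolve an intersection/mapped type into a flat object type on hover.
type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;

type A = { a: number } & { b: string };
type B = Expand<A>; // hovers as { a: number; b: string }

// Wrapping it in a further alias, as the UseQueryOptions variants do, is
// where the slowdown appears in the playground:
type UseQueryOptionsLike<T> = Expand<T> extends infer O ? O : never;

// The types erase at runtime; this only demonstrates that the expanded
// type is assignable as expected.
const value: UseQueryOptionsLike<A> = { a: 1, b: "s" };
console.log(value.a);
```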
As you can see, I tried multiple solutions to use Expand_ with another Type.
Then, if you try to examine what Expand_<T, K> is on line #129 (that is, on the playground, hover the mouse over Expand_<T, K>), the popup is shown and you can see how the type resolves. The memory used by the TypeScript worker in Chrome during this action stays stable (20-22 MB), and the info in the popup appears correctly and quickly, as expected.
But if you examine any of the UseQueryOptions[N] types (lines #129/#131/#132/#135), memory starts growing to 1.5 GB (at which point, I think, Chrome kills the worker) and the popup shows no information.
To be clear, here is an image of what I mean regarding memory consumption:
In a normal situation the TypeScript worker uses less than 30 MB, for example if you examine the type Expand on line #108 or let y1 on lines #113/#114:
But this is the memory used by the worker when I examine the UseQueryOptions types on lines #129/#131/#132/#135:
I am explaining all this because I am facing the same anomaly in my React app.
When I run react start after adding types similar to those mentioned above, the console blocks at "Files successfully emitted, waiting for typecheck results...", and the Node process running react start grows until it reaches the memory limit set for Node (it can be 2 GB or 8 GB, but it is only a matter of time before the error appears), and then shows:
<--- Last few GCs --->
[16365:0x1046ca000] 448881 ms: Mark-sweep 2028.9 (2059.0) -> 2020.9 (2059.3) MB, 766.2 / 0.0 ms (average mu = 0.132, current mu = 0.008) allocation failure scavenge might not succeed
[16365:0x1046ca000] 449641 ms: Mark-sweep 2028.9 (2059.3) -> 2020.9 (2059.3) MB, 754.6 / 0.0 ms (average mu = 0.073, current mu = 0.007) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x1012e4da5 node::Abort() (.cold.1) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
2: 0x1000a6239 node::Abort() [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
3: 0x1000a639f node::OnFatalError(char const*, char const*) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
4: 0x1001e9007 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
5: 0x1001e8fa3 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
6: 0x100397e95 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
7: 0x10039995a v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
8: 0x100395029 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
9: 0x1003928c1 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
10: 0x1003a115a v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
11: 0x1003a11e1 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
12: 0x10036eb87 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
13: 0x1006ed8d8 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
14: 0x100a7a239 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/Users/jure.prah/.nvm/versions/node/v14.16.0/bin/node]
If I remove those lines from my project, it runs as expected, without any error, and Node's memory stays around 200 MB.
I hope this is enough and that you have all the information needed to solve my problem.
Otherwise, I will be here to provide further information.
I haven't looked deeply at your setup or at whether your local memory usage is unusually high, but with that many dependencies it is definitely possible that you're simply running out of memory. When I run my React applications locally I increase the amount of memory available to the Node process. Note that the script has to invoke the underlying start command directly (react-scripts here, assuming Create React App; substitute your own), because having "start" call yarn start again would recurse:
"scripts": {
  "start": "NODE_OPTIONS='--max-old-space-size=8192' react-scripts start"
}
With this I have experienced far fewer crashes due to JavaScript heap out of memory.
We have been hit by a Solr behavior in production that we are unable to debug. To start with, here are the configurations for Solr:
Solr version: 6.5, a master with one slave of the same configuration as mentioned below.
JVM Config:
-Xms2048m
-Xmx4096m
-XX:+ParallelRefProcEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50
All other settings are at their default values.
Solr Config:
<autoCommit>
<!-- Auto hard commit in 5 minutes -->
<maxTime>{solr.autoCommit.maxTime:300000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
<!-- Auto soft commit in 15 minutes -->
<maxTime>{solr.autoSoftCommit.maxTime:900000}</maxTime>
</autoSoftCommit>
</updateHandler>
<query>
<maxBooleanClauses>1024</maxBooleanClauses>
<filterCache class="solr.FastLRUCache" size="8192" initialSize="8192" autowarmCount="0" />
<queryResultCache class="solr.LRUCache" size="8192" initialSize="4096" autowarmCount="0" />
<documentCache class="solr.LRUCache" size="12288" initialSize="12288" autowarmCount="0" />
<cache name="perSegFilter" class="solr.search.LRUCache" size="10" initialSize="0" autowarmCount="10" regenerator="solr.NoOpRegenerator" />
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<queryResultWindowSize>20</queryResultWindowSize>
<queryResultMaxDocsCached>${solr.query.max.docs:40}
</queryResultMaxDocsCached>
<useColdSearcher>false</useColdSearcher>
<maxWarmingSearchers>2</maxWarmingSearchers>
</query>
The Host (AWS) configurations are:
RAM: 7.65GB
Cores: 4
Now, our Solr works perfectly fine for hours and sometimes for days, but occasionally memory suddenly jumps up and the GC kicks in, causing long pauses with little memory recovered.
We see this happening most often when one or more segments are added or deleted after a hard commit. It doesn't matter how many documents were indexed. The attached images show that indexing just one document, which added a single segment, messed everything up until we restarted Solr.
Here are the images from New Relic and Sematext (kindly click on the links to view):
JVM Heap Memory Image
1 Document and 1 Segment addition Image
Update: here is the jmap output from when Solr last died; we have since increased the JVM memory to an -Xmx of 12 GB:
num #instances #bytes class name
----------------------------------------------
1: 11210921 1076248416 org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat$IntBlockTermState
2: 10623486 934866768 [Lorg.apache.lucene.index.TermState;
3: 15567646 475873992 [B
4: 10623485 424939400 org.apache.lucene.search.spans.SpanTermQuery$SpanTermWeight
5: 15508972 372215328 org.apache.lucene.util.BytesRef
6: 15485834 371660016 org.apache.lucene.index.Term
7: 15477679 371464296 org.apache.lucene.search.spans.SpanTermQuery
8: 10623486 339951552 org.apache.lucene.index.TermContext
9: 1516724 150564320 [Ljava.lang.Object;
10: 724486 50948800 [C
11: 1528110 36674640 java.util.ArrayList
12: 849884 27196288 org.apache.lucene.search.spans.SpanNearQuery
13: 582008 23280320 org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight
14: 481601 23116848 org.apache.lucene.document.FieldType
15: 623073 19938336 org.apache.lucene.document.StoredField
16: 721649 17319576 java.lang.String
17: 32729 7329640 [J
18: 14643 5788376 [F
19: 137126 4388032 java.util.HashMap$Node
20: 52990 3391360 java.nio.DirectByteBufferR
21: 131072 3145728 org.apache.solr.update.VersionBucket
22: 20535 2891536 [I
23: 99073 2377752 shaded.javassist.bytecode.Utf8Info
24: 47788 1911520 java.util.TreeMap$Entry
25: 34118 1910608 org.apache.lucene.index.FieldInfo
26: 26511 1696704 org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl
27: 17470 1677120 org.apache.lucene.codecs.lucene54.Lucene54DocValuesProducer$NumericEntry
28: 13762 1526984 java.lang.Class
29: 7323 1507408 [Ljava.util.HashMap$Node;
30: 2331 1230768 [Lshaded.javassist.bytecode.ConstInfo;
31: 18929 1211456 com.newrelic.agent.deps.org.objectweb.asm.Label
32: 25360 1014400 java.util.LinkedHashMap$Entry
33: 41388 993312 java.lang.Long
The load on Solr is not high: at most it reaches about 2,000 requests per minute. The indexing load can sometimes come in bursts, but most of the time it is pretty low. As mentioned above, though, sometimes indexing even a single document can throw Solr into a tizzy, and sometimes it just works like a charm.
Any pointers on where we are possibly going wrong would be great.
I was facing the same problem, but after investigating I found some spots where Solr's heap consumption suddenly increases.
I used to delta-update Solr on every record update in my DB, which works fine as long as the documents are small.
But as my document size grew, Solr stopped working 5-8 times a day.
The reason I found is that whenever you delta-update a record, Solr applies the update immediately, but afterwards it has to adjust all the document indexes again. If another delta request arrives while that adjustment is still in progress, Solr starts another one on top of it, and so on, steadily increasing heap consumption until Solr stops responding.
I still haven't found a proper solution to this problem, but I implemented a workaround: I stopped delta-updating documents and instead re-index the whole core frequently (2-3 times a day).
I am using the miniupnp software package on my router. To list all available devices/services on my LAN, I used the 'listdevice' application, which queries miniupnpc to discover all devices/services and then prints them out.
Can someone please explain how I can tell which service belongs to which device?
See the example table below:
1: urn:schemas-upnp-org:service:Layer3Forwarding:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1cf::urn:schemas-upnp-org:service:Layer3Forwarding:1
2: uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1cf
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1cf
3: uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0
4: uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1
5: urn:schemas-upnp-org:device:WANConnectionDevice:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:device:WANConnectionDevice:1
6: urn:schemas-upnp-org:device:WANDevice:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0::urn:schemas-upnp-org:device:WANDevice:1
7: urn:schemas-upnp-org:service:WANIPConnection:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:service:WANIPConnection:1
8: urn:schemas-upnp-org:service:WANCommonInterfaceConfig:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0::urn:schemas-upnp-org:service:WANCommonInterfaceConfig:1
9: urn:schemas-upnp-org:service:WANPPPConnection:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:service:WANPPPConnection:1
10: upnp:rootdevice
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1cf::upnp:rootdevice
11: urn:schemas-upnp-org:device:InternetGatewayDevice:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1cf::urn:schemas-upnp-org:device:InternetGatewayDevice:1
12: urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c::urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1
13: urn:schemas-upnp-org:service:ConnectionManager:1
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c::urn:schemas-upnp-org:service:ConnectionManager:1
14: urn:schemas-upnp-org:service:ContentDirectory:1
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c::urn:schemas-upnp-org:service:ContentDirectory:1
15: urn:schemas-upnp-org:device:MediaServer:1
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c::urn:schemas-upnp-org:device:MediaServer:1
16: upnp:rootdevice
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c::upnp:rootdevice
17: uuid:4d696e69-444c-164e-9d41-7e1fa325930c
http://192.168.1.1:8200/rootDesc.xml
uuid:4d696e69-444c-164e-9d41-7e1fa325930c
In this case, the response itself actually does the trick for you.
Each device is always followed by its services, so as can be seen in the table above, a device might offer no services, for example:
5: urn:schemas-upnp-org:device:WANConnectionDevice:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:device:WANConnectionDevice:1
Or it can offer several services, for example:
6: urn:schemas-upnp-org:device:WANDevice:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0::urn:schemas-upnp-org:device:WANDevice:1
7: urn:schemas-upnp-org:service:WANIPConnection:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:service:WANIPConnection:1
8: urn:schemas-upnp-org:service:WANCommonInterfaceConfig:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c0::urn:schemas-upnp-org:service:WANCommonInterfaceConfig:1
9: urn:schemas-upnp-org:service:WANPPPConnection:1
http://192.168.1.1:5000/rootDesc.xml
uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:service:WANPPPConnection:1
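Independently of the output ordering, the pairing can also be recovered mechanically from the third column of each entry (the USN): everything before the optional "::" separator is the uuid of the owning device. A small sketch of that grouping, with illustrative field names of my own rather than miniupnpc's actual API:

```typescript
// One row of the discovery table: search target, description URL, and USN.
interface Entry {
  st: string;       // e.g. a device or service URN
  location: string; // rootDesc.xml URL
  usn: string;      // "uuid:<device-uuid>" optionally followed by "::<urn>"
}

// Group entries by the device uuid embedded at the start of the USN.
function groupByDevice(entries: Entry[]): Map<string, Entry[]> {
  const byDevice = new Map<string, Entry[]>();
  for (const e of entries) {
    const uuid = e.usn.split("::")[0]; // uuid precedes the "::" separator
    const bucket = byDevice.get(uuid) ?? [];
    bucket.push(e);
    byDevice.set(uuid, bucket);
  }
  return byDevice;
}

// Entries 5 and 7 from the table share the uuid ending in ...dc1c1, so the
// WANIPConnection service groups under the WANConnectionDevice.
const sample: Entry[] = [
  { st: "urn:schemas-upnp-org:device:WANConnectionDevice:1",
    location: "http://192.168.1.1:5000/rootDesc.xml",
    usn: "uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:device:WANConnectionDevice:1" },
  { st: "urn:schemas-upnp-org:service:WANIPConnection:1",
    location: "http://192.168.1.1:5000/rootDesc.xml",
    usn: "uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1::urn:schemas-upnp-org:service:WANIPConnection:1" },
];
const grouped = groupByDevice(sample);
console.log(grouped.get("uuid:63ce4f39-1485-4bd6-ba33-bb1ec09dc1c1")!.length); // 2
```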
I'm trying to switch over to using app.yaml instead of web.xml and appengine-web.xml. I've attempted to follow the documentation faithfully, but I'm getting an error from appcfg.cmd update <my-war-directory> that says:
Reading application configuration data...
Bad configuration: Line 18, column 13: Error setting property 'handlers' on class: com.google.apphosting.utils.config.AppYaml
Caused by: Line 18, column 13: Error setting property 'handlers' on class: com.google.apphosting.utils.config.AppYaml
Please see the logs [C:\Users\<blah blah>\appcfg3710135744474388957.log] for further information.
In the indicated log file, I see a stack dump with the line:
com.google.appengine.repackaged.net.sourceforge.yamlbeans.tokenizer.Tokenizer$TokenizerException: Line 18, column 13: Found a mapping value where it is not allowed.
Here's my file (with line numbers manually added):
1 application: my-app
2 version: 1
3 runtime: java
4 threadsafe: true
5
6 public_root: /static
7
8 static_files:
9 - include: /**
10
11 welcome_files:
12 - index.html
13
14 system_properties:
15 java.util.logging.config.file: WEB-INF/logging.properties
16
17 handlers:
18 - url: /user/*
19 servlet: org.restlet.ext.servlet.ServerServlet
20 name: user
21 init_params:
22 org.restlet.application: com.my-app.server.resource.user.UserApplication
23 org.restlet.clients: HTTP HTTPS
After experimenting with some YAML validators on the web, I actually think it is complaining about line 19, where column 13 points to the ":" character after "servlet". But this usage looks totally consistent with the documentation at https://developers.google.com/appengine/docs/java/configyaml/appconfig_yaml#Required_Elements
I'm sure I'm doing something stupid, but I'm stumped.
Thank you for your great input, AndyD! The documentation has been updated to fix the problematic sample code.
For me, it is also very convenient to use a YAML validator to check the yaml files. For example:
http://data-lint.herokuapp.com/
Thanks.
As I noted above, the culprit was the indentation of lines 19-21: they need to line up under the "u" in "url" on line 18.
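For reference, the handlers block from the question with the corrected indentation (servlet, name, and init_params aligned under the "u" of url; the rest of the file is unchanged) would look like:

```yaml
handlers:
- url: /user/*
  servlet: org.restlet.ext.servlet.ServerServlet
  name: user
  init_params:
    org.restlet.application: com.my-app.server.resource.user.UserApplication
    org.restlet.clients: HTTP HTTPS
```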