An application with multiple content clusters, each holding multiple document types, throws "ClusterSearcher should have a top level dispatch." when deployed on a multi-node cluster with multiple content nodes. The same application works on a single-node cluster with a single content node.
Using Vespa version 7.51.13.
com.yahoo.container.di.componentgraph.core.ComponentNode$ComponentConstructorException: Error constructing 'com.yahoo.prelude.cluster.ClusterSearcher in acme'
Caused by: java.lang.IllegalStateException: ClusterSearcher should have a top level dispatch.
================= services.xml ==============
<?xml version="1.0" encoding="utf-8" ?>
<services version="1.0">
<admin version="2.0">
<adminserver hostalias="admin0"/>
<configservers>
<configserver hostalias="admin0"/>
</configservers>
</admin>
<container id="container" version="1.0">
<config name="search.config.qr-start">
<jvm>
<heapSizeAsPercentageOfPhysicalMemory>50</heapSizeAsPercentageOfPhysicalMemory>
</jvm>
</config>
<document-api />
<http>
<server id="stateless1" port="8080">
</server>
</http>
<search>
<chain id="default" inherits="vespa">
<searcher id="com.acme.search.CatalogSearcher" bundle="sth-search">
<config name="com.acme.search.Searcher">
<redirectsFile>redirects.txt</redirectsFile>
<autoCorrectAPI>https://apps02.acme.com:9815/search/</autoCorrectAPI>
<connectionTimeout>250</connectionTimeout>
<connectionRequestTimeout>250</connectionRequestTimeout>
<socketTimeout>250</socketTimeout>
</config>
</searcher>
</chain>
</search>
<nodes jvmargs="-verbose:gc">
<node hostalias="stateless0"/>
<node hostalias="stateless1"/>
</nodes>
</container>
<content id="sth" version="1.0">
<redundancy>2</redundancy>
<documents>
<document type="sth" mode="index" />
</documents>
<nodes>
<node hostalias="content0" distribution-key="0"/>
<node hostalias="content1" distribution-key="1"/>
</nodes>
</content>
<content id="thesaurus" version="1.0">
<redundancy>2</redundancy>
<documents>
<document type="thesaurus" mode="index"/>
</documents>
<nodes>
<node hostalias="content0" distribution-key="0"/>
<node hostalias="content1" distribution-key="1"/>
</nodes>
</content>
<content id="acme" version="1.0">
<redundancy>2</redundancy>
<documents>
<document type="vc_products" mode="index" />
<document type="vc_thesaurus" mode="index"/>
</documents>
<nodes>
<node hostalias="content0" distribution-key="0"/>
<node hostalias="content1" distribution-key="1"/>
</nodes>
</content>
</services>
From the comments, this happens not when you actually deploy on multiple nodes, but when you instantiate a mock Application instance in a unit test, as in Application.fromApplicationPackage(...).
The reason is that Application cannot mock a full application, only the stateless container parts. The ClusterSearcher instantiated in this setup complains that it cannot see any real content clusters downstream (which is correct); it does not know it has been created in a mocked setup.
For this reason you need to create a separate services.xml for unit tests; in general, the one you use in production will give problems like this. Using Application works well for testing specific functionality of a set of components, but not for unit testing your true production application.
We would like to improve this by mocking content clusters inside Application, but nobody is working on it at the moment. If you want to have a go at it, the code is in
https://github.com/vespa-engine/vespa/tree/master/application
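As a rough illustration of that test-only approach (my sketch, not from the original answer): it assumes com.yahoo.application.Application.fromServicesXml(String, Networking) and JUnit are available, reuses the searcher from the services.xml above, and any config the searcher needs (such as com.acme.search.Searcher) would also have to be declared in the test XML.
import com.yahoo.application.Application;
import com.yahoo.application.Networking;
import org.junit.Test;

public class CatalogSearcherTest {

    // Container-only services.xml: no <content> clusters are declared, so no
    // ClusterSearcher is constructed and the "top level dispatch" check never runs.
    // The bundle attribute is omitted because the searcher class is on the test classpath.
    private static final String TEST_SERVICES =
            "<container version=\"1.0\">" +
            "  <search>" +
            "    <chain id=\"default\" inherits=\"vespa\">" +
            "      <searcher id=\"com.acme.search.CatalogSearcher\"/>" +
            "    </chain>" +
            "  </search>" +
            "</container>";

    @Test
    public void searcherChainStartsWithoutContentClusters() {
        // Spins up only the stateless container parts; no networking is needed here.
        try (Application app = Application.fromServicesXml(TEST_SERVICES, Networking.disable)) {
            // Exercise the search chain here through the returned Application instance.
        }
    }
}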
Set the Url value by applying a filter based on the location_type value from the given sample.
<page id="{Page_ID}" name="" type="dat" size="0" sequence="26" originalpagenumber="26" location_type="3" location="">
<content description="" content_format="26" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
The XML you've shared is invalid, so I've taken the liberty of making it valid in an effort to show you how I might approach this.
I've made the assumption you're looking to update values in a table somewhere in your environment. You can run this example in SSMS.
THIS IS MEANT AS AN EXAMPLE ONLY. DO NOT BLINDLY RUN THIS AGAINST PRODUCTION DATA.
First, I created a mock-up table that holds some page XML data. I assumed that a single XML row could contain multiple page nodes with duplicate location_type values.
DECLARE @Pages TABLE ( page_xml XML );
INSERT INTO @Pages ( page_xml ) VALUES (
'<pages>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="26" originalpagenumber="26" location_type="3" location="">
<content description="" content_format="26" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="27" originalpagenumber="27" location_type="3" location="">
<content description="" content_format="27" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="28" originalpagenumber="28" location_type="8" location="">
<content description="" content_format="28" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
</pages>' );
Selecting the current results from @Pages shows the following XML:
<pages>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="26" originalpagenumber="26" location_type="3" location="">
<content description="" content_format="26" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="27" originalpagenumber="27" location_type="3" location="">
<content description="" content_format="27" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="28" originalpagenumber="28" location_type="8" location="">
<content description="" content_format="28" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
</pages>
There are two pages with a location_type of 3, and one with a location_type of 8.
Next, I declared a few variables which I then used to modify the XML.
DECLARE @type INT = 3, @url VARCHAR(255) = 'http://www.google.com';
/* Update each Url text for the specified location_type */
WHILE EXISTS ( SELECT * FROM @Pages WHERE page_xml.exist( '//pages/page[@location_type=sql:variable("@type")]/AdditionalInfo/Url[text()!=sql:variable("@url")]' ) = 1 )
UPDATE @Pages
SET
page_xml.modify( '
replace value of (//pages/page[@location_type=sql:variable("@type")]/AdditionalInfo/Url/text()[.!=sql:variable("@url")])[1]
with sql:variable("@url")
' );
After running the update, the XML now contains:
<pages>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="26" originalpagenumber="26" location_type="3" location="">
<content description="" content_format="26" />
<AdditionalInfo>
<Url>http://www.google.com</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="27" originalpagenumber="27" location_type="3" location="">
<content description="" content_format="27" />
<AdditionalInfo>
<Url>http://www.google.com</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
<page id="{Page_ID}" name="" type="dat" size="0" sequence="28" originalpagenumber="28" location_type="8" location="">
<content description="" content_format="28" />
<AdditionalInfo>
<Url>https://s3-eu-west-1/2019/may//001063/gvb140/683c82a3-b3f5-49ee-a34e-01859e8e2228.mp3</Url>
<Name />
<Encoding>audio</Encoding>
<SecurityType>IBCloud</SecurityType>
</AdditionalInfo>
</page>
</pages>
Using the WHILE EXISTS (...) loop ensures that all Url nodes for the requested location_type are updated. In this example, the two pages with a location_type of 3 are updated, while location_type 8 is left alone.
Basically, this updates the Url of any page with the requested location_type to the new @url value.
There are two key pieces here, the first being:
.exist( '//pages/page[@location_type=sql:variable("@type")]/AdditionalInfo/Url[text()!=sql:variable("@url")]' )
Which looks for pages with the requested location_type whose Url does not yet have the new @url value.
And the second:
page_xml.modify( '
replace value of (//pages/page[@location_type=sql:variable("@type")]/AdditionalInfo/Url/text()[.!=sql:variable("@url")])[1]
with sql:variable("@url")
' );
Which updates the Url for a page of that location_type that has not yet been updated. Because replace value of changes only a single node per statement (hence the [1] predicate), the two "conditions" let the WHILE loop repeat until they are no longer met, ensuring that all page nodes for the requested location_type are updated.
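Not part of the original answer, but as a quick sanity check you can shred the XML back into rows to see which Url values were actually changed, using the same @Pages table variable:
SELECT
    pg.value( '@location_type', 'INT' ) AS location_type,
    pg.value( '(AdditionalInfo/Url/text())[1]', 'VARCHAR(255)' ) AS url
FROM @Pages
CROSS APPLY page_xml.nodes( '/pages/page' ) AS p ( pg );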
I'm configuring Active Directory login for Sitecore 9.0.0, and I have issues with the IsAdministrator role. I used the following map, but it didn't work.
Any idea about how to configure it?
<map name="Administrator Claim" type="Sitecore.Owin.Authentication.Services.DefaultClaimToPropertyMapper, Sitecore.Owin.Authentication">
<data hint="raw:AddData">
<source name="Administrator" />
<target name="IsAdministrator" value="true" />
</data>
</map>
You have to map the http://www.sitecore.net/identity/claims/isAdmin claim in the Sitecore.Owin.Authentication.IdentityServer.config file, in the propertyInitializer section, as follows:
<propertyInitializer>
<maps>
<map name="set IsAdministrator" type="Sitecore.Owin.Authentication.Services.DefaultClaimToPropertyMapper, Sitecore.Owin.Authentication">
<data hint="raw:AddData">
<source name="http://www.sitecore.net/identity/claims/isAdmin" value="true" />
<target name="IsAdministrator" value="true" />
</data>
</map>
</maps>
</propertyInitializer>
I have been trying to understand the config of the log4net library and I think I have it except for some unexpected behavior.
I have a root logger that has a level set to INFO and another logger for a specific class that has level set to ERROR.
What I expected was that the class logger would only log at ERROR and ignore the root's level, since I had additivity set to false for the class logger. Here is the log4net.config I have at the moment:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<log4net>
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
</layout>
</appender>
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="log.txt" />
<appendToFile value="true" />
<rollingStyle value="Size" />
<maxSizeRollBackups value="10" />
<maximumFileSize value="100KB" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
</layout>
</appender>
<logger name="EveStatic.Config.ViewModel" additivity="false">
<level value="error"/>
<appender-ref ref="ConsoleAppender"/>
<appender-ref ref="RollingFileAppender"/>
</logger>
<root>
<level value="debug" />
<appender-ref ref="ConsoleAppender" />
<appender-ref ref="RollingFileAppender" />
</root>
</log4net>
</configuration>
In my AssemblyInfo.cs:
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]
And in the class that loads the configuration:
log4net.Config.XmlConfigurator.Configure(new FileInfo("log4net.config"));
These two seem redundant, but the configuration won't load unless I have the code part. Everything here is in a class library being used by another project.
Is this behavior expected and I don't understand the configuration and level overrides or am I forgetting something?
EDIT:
Here is how I instantiate and call the ILog. The full class name is the logger name in the config plus ConfigInfoViewModel:
private static readonly ILog LOG = LogManager.GetLogger(typeof(ConfigInfoViewModel));
...
LOG.Debug("Something buggy");
Also note that when testing the logging levels I had a log statement for each level in a series.
Your problem lies here:
LogManager.GetLogger(typeof(ConfigInfoViewModel));
Internally this gets resolved to:
LogManager.GetLogger(typeof(ConfigInfoViewModel).FullName);
Now log4net is looking for a logger named "EveStatic.Config.ConfigInfoViewModel" (the result of typeof(ConfigInfoViewModel).FullName).
Because no logger with that name is configured, a new one that inherits the root settings is used.
Also note that level specifies a threshold, not a single level.
Example: level=warn means log warn and all levels above (error and fatal).
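One way to fix it, sketched against the config in the question (the EveStatic.Config name is my assumption, taken from the class names shown): log4net matches loggers to classes by dotted-name prefix, so a logger named after an ancestor namespace of ConfigInfoViewModel applies its ERROR threshold to that class:
<logger name="EveStatic.Config" additivity="false">
<level value="ERROR"/>
<appender-ref ref="ConsoleAppender"/>
<appender-ref ref="RollingFileAppender"/>
</logger>
Alternatively, keep the config as it is and request the configured logger by its exact name with LogManager.GetLogger("EveStatic.Config.ViewModel").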
I have my custom Searcher and my custom DocumentProcessor in my Vespa app. My services.xml is given below:
<services version="1.0">
<container id="default" version="1.0">
<document-api/>
<search>
<chain id="default" inherits="vespa">
<searcher id="com.example.test.CustomSearcher" bundle="example-vespa-app"/>
</chain>
</search>
<nodes>
<node hostalias="node1" />
</nodes>
<document-processing>
<chain id="default" inherits="vespa">
<documentprocessor id="com.example.test.CustomDocumentProcessor"/>
</chain>
</document-processing>
</container>
<content id="test_user" version="1.0">
<redundancy>1</redundancy>
<documents>
.....
</documents>
<nodes>
<node hostalias="node1" distribution-key="0" />
</nodes>
</content>
</services>
My CustomDocumentProcessor is given below:
public class CustomDocumentProcessor extends DocumentProcessor {
@Override
public Progress process(Processing processing) {
for (DocumentOperation op : processing.getDocumentOperations()) {
if (op instanceof DocumentPut) {
DocumentPut put = (DocumentPut) op;
Document document = put.getDocument();
document.setFieldValue("documentType",
String.valueOf(document.getDataType()));
}
}
return Progress.DONE;
}
}
When I remove the CustomDocumentProcessor from services.xml, my app works. When I add it, it gives an error:
Request failed. HTTP status code: 400
Invalid application package: default.default: Error loading model: Missing chain 'vespa'.
Why is that? Please help.
Remove "inherits=vespa" from the document-processing chain.
There is no "vespa document processing chain like there is for search chains.
This question already has an answer here: WPF-Log4Net used inside VSIX extension did not output log when installed at target VS (1 answer). Closed 4 years ago.
I have a WPF solution. I downloaded the log4net DLL, added log4net.config, and set its "Copy to Output Directory" value to "Copy always".
log4net.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<log4net>
<root>
<level value="ALL" />
<appender-ref ref="LogFileAppender" />
</root>
<appender name="LogFileAppender" type="log4net.Appender.FileAppender">
<file value="myapp.txt" />
<appendToFile value="true" />
<rollingStyle value="Size" />
<maxSizeRollBackups value="5" />
<maximumFileSize value="10MB" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %level %logger - %message%newline" />
</layout>
</appender>
</log4net>
</configuration>
And I added the below line in AssemblyInfo.cs:
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]
Then I have the below code in my TestWindowControl.xaml.cs:
public partial class TestWindowControl : UserControl
{
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
public TestWindowControl()
{
XmlConfigurator.Configure(new System.IO.FileInfo("log4net.config"));
log.Info("info testing");
log.Debug("debug testing");
log.Error("error testing");
log.Fatal("fatal testing");
log.Warn("warn testing");
}
}
But the logs are not being written to the file. It works in a console application but not in WPF. Am I missing something?
Try setting a filter inside the appender:
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="WARN" />
<levelMax value="ERROR" />
</filter>
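If the filter alone doesn't solve it, a further check (my suggestion, not from the original answer, assuming the host application has an App.config) is to enable log4net's internal debugging; it writes to the console and trace output and usually shows whether log4net.config was found and whether an appender failed to open its file:
<appSettings>
<add key="log4net.Internal.Debug" value="true" />
</appSettings>
Since a WPF host (and especially a VSIX host, per the linked duplicate) often runs with a working directory different from the build output, passing an absolute path built from the assembly location to XmlConfigurator.Configure is also worth trying.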