I have two instances where I have to deploy Vespa in Docker containers. One container will act as the config cluster, container cluster, and content cluster, while the other will act as a container cluster and content cluster.
The hosts.xml file for the application looks like:
<hosts>
  <host name="vespa-master">
    <alias>admin0</alias>
  </host>
  <host name="vespa-searcher">
    <alias>searcher1</alias>
  </host>
</hosts>
The services.xml for the application looks like:
<services version="1.0">
  <admin version="2.0">
    <adminserver hostalias="admin0"/>
    <configservers>
      <configserver hostalias="admin0"/>
    </configservers>
  </admin>
  <container id="container" version="1.0">
    <document-api />
    <search/>
    <nodes>
      <node hostalias="admin0"/>
      <node hostalias="searcher1"/>
    </nodes>
  </container>
  <content id="content" version="1.0">
    <documents>
      <!--version 1 docs starts-->
      <document type="document_name" mode="index" />
      <!--version 1 docs ends-->
    </documents>
    <redundancy>2</redundancy>
    <engine>
      <proton>
        <searchable-copies>1</searchable-copies>
      </proton>
    </engine>
    <group name="top-group">
      <distribution partitions="*"/>
      <group name="group0" distribution-key="0">
        <node hostalias="admin0" distribution-key="0"/>
        <node hostalias="searcher1" distribution-key="1"/>
      </group>
    </group>
  </content>
</services>
I am using Docker Swarm to create an overlay network connection between the two instances. The command looks something like this:
docker network create --driver=overlay --subnet=<IP>/24 vespa_conn --attachable
The command I used to create a container on the first instance is:
docker run --detach --hostname vespa-master --network=vespa_conn <other arguments> --env VESPA_CONFIGSERVERS=vespa-master vespaengine/vespa
and the command to create a container on the second instance is:
docker run --detach --hostname vespa-searcher --network=vespa_conn <other arguments> --env VESPA_CONFIGSERVERS=vespa-master vespaengine/vespa
These commands are based on this page.
And after creating the containers and deploying my application, the node on the second container shows as down:
vespa-get-cluster-state
Cluster content:
content/distributor/0: up
content/distributor/1: down
content/storage/0: up
content/storage/1: down
The errors that I found were:
content/distributor/0: Failed to fetch json: Connection error: socket write error
admin/cluster-controllers/0: Failed to fetch json: Connection error: socket write error
admin/slobrok.0: Failed to fetch json: Connection error: socket write error
admin/metrics/vespa-master: Failed to fetch json: Connection error: socket write error
hosts/vespa-master/sentinel: Failed to fetch json: Connection error: socket write error
hosts/vespa-master/logd: Failed to fetch json: Connection error: socket write error
[generation not up-to-date ignored]
container/container.1: Failed to fetch json: Connection error: socket write error
hosts/vespa-searcher/logd: Failed to fetch json: Connection error: socket write error
[generation not up-to-date ignored]
After some tries, I fixed the problem by adding 'override VESPA_CONFIGSERVERS vespa-master' to the /opt/vespa/conf/vespa/default-env.txt file in the second container and then restarting the services.
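For reference, the same workaround can be scripted from the Docker host; a sketch, assuming the Vespa tools are on PATH inside the container and using the container's name as shown by docker ps (my run command sets only the hostname, not the name):

# append the override and restart the Vespa services in one step
docker exec <searcher-container> bash -c \
  "echo 'override VESPA_CONFIGSERVERS vespa-master' >> /opt/vespa/conf/vespa/default-env.txt \
   && vespa-stop-services && vespa-start-services"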
Is there any better way to do this, so that I don't have to manually update the default-env.txt file?
Also, when I was adding the 'configserver' or 'services' argument at the end of the docker run command, as specified on the page, I was getting this error:
[2020-10-15 11:36:13.782540] 1935/8285 (vespa-model-inspect.config.frt.frtconnection) warning: Connection to tcp/localhost:19090 failed or timed out
[2020-10-15 11:36:13.782631] 1935/8285 (vespa-model-inspect.config.frt.frtconnection) warning: FRT Connection tcp/localhost:19090 suspended until 2020-10-15 11:36:23 GMT
[2020-10-15 11:36:13.782647] 1935/8285 (vespa-model-inspect.config.frt.frtconfigagent) info: Error response or no response from config server (key: name=model,namespace=cloud.config,configId=admin/model) (errcode=104, validresponse:0), trying again in 6000 milliseconds
What could be the reason for this error? Am I doing something wrong here?
To get this working, you should avoid underscores in the network name, use the fully qualified name for the config server, and name the containers so that DNS resolution works.
Create the network on a swarm manager host:
docker network create --driver=overlay --attachable vespa-net
Start a Vespa container running both the config server and the services (no argument to the entrypoint):
docker run --detach --name vespa-master --hostname vespa-master.vespa-net --network=vespa-net --env VESPA_CONFIGSERVERS=vespa-master.vespa-net vespaengine/vespa
Start a Vespa container running only the services (services argument to entrypoint):
docker run --detach --name vespa-searcher --hostname vespa-searcher.vespa-net --network=vespa-net --env VESPA_CONFIGSERVERS=vespa-master.vespa-net vespaengine/vespa services
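Before deploying, you can sanity-check that the overlay DNS resolves the fully qualified names; a quick check, assuming getent is available in the image:

docker exec vespa-searcher getent hosts vespa-master.vespa-net
docker exec vespa-master getent hosts vespa-searcher.vespa-net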
Then use the fully qualified names in the hosts.xml:
<hosts>
  <host name="vespa-master.vespa-net">
    <alias>admin0</alias>
  </host>
  <host name="vespa-searcher.vespa-net">
    <alias>searcher1</alias>
  </host>
</hosts>
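The package can then be deployed from inside the config server container; a minimal sketch using the vespa-deploy tool bundled with the image (the application path is a placeholder):

docker exec vespa-master bash -c \
  'vespa-deploy prepare /path/to/application && vespa-deploy activate'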
By deploying your unmodified services.xml I get the following state:
[root@vespa-master /]# vespa-get-cluster-state
Cluster content:
content/distributor/0: up
content/distributor/1: up
content/storage/0: up
content/storage/1: up
Related
We installed the latest version (4.2.3) of Openfire on localhost to test it before running it in production in our local domain.
When we try to connect with Spark 2.7.7, Spark 2.8.3, or even another client like Jitsi 2.10.5550, it responds "wrong username or password".
Server is up and running.
Administration interface available on port 9090
It is correctly linked to our Active Directory database.
Firewall is disabled on the local machine.
Tried to connect both from localhost and from another computer on the same LAN.
Raw sent packets:
<stream:stream to="demo-300" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
<starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>
<stream:stream to="demo-300" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
<iq id="nmonD-0" type="get"><query xmlns="jabber:iq:auth"><username>cba</username></query></iq>
<iq id="nmonD-1" type="get"><ping xmlns='urn:xmpp:ping' /></iq>
Raw received packets:
<?xml version='1.0' encoding='UTF-8'?>
<stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="server.domain.local" id="p2cgyfth7" xml:lang="en" version="1.0">
<stream:features>
<starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"></starttls>
<mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl">
<mechanism>GSSAPI</mechanism>
</mechanisms>
<compression xmlns="http://jabber.org/features/compress">
<method>zlib</method>
</compression>
<ver xmlns="urn:xmpp:features:rosterver"/>
</stream:features>
<proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>
<?xml version='1.0' encoding='UTF-8'?>
<stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="server.domain.local" id="p2cgyfth7" xml:lang="en" version="1.0">
<stream:features><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl">
<mechanism>GSSAPI</mechanism>
</mechanisms>
<compression xmlns="http://jabber.org/features/compress">
<method>zlib</method>
</compression>
<ver xmlns="urn:xmpp:features:rosterver"/></stream:features>
and, every minute:
<iq type="error" id="1rCcI-3" to="server.domain.local/p2cgyfth7">
<ping xmlns="urn:xmpp:ping"></ping>
<error code="401" type="auth">
<not-authorized xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/>
</error>
</iq>
Server configuration: (screenshot omitted)
Can anyone help me?
The solution is to change the sasl.mechs setting to PLAIN instead of GSSAPI.
But I don't really know why, because on the client side it uses the GSSAPI property to connect to the server!
I have two Docker containers sitting on two different machines, both running Vespa. When I submit an application that has two nodes, vespa1 and vespa2 (resolved in /etc/hosts), I get the following error.
Uploading application '/vespa-eval/src/main/application/' using http://localhost:19071/application/v2/tenant/default/session?name=application
Session 6 for tenant 'default' created.
Preparing session 6 using
http://localhost:19071/application/v2/tenant/default/session/6/prepared
Request failed. HTTP status code: 400
Invalid application package: default.default: Error loading model:
Could not find host in the application's host system: 'vespa-container'. Hostsystem=host 'vespa1',host 'vespa2'
I do not have a problem when using only localhost.
hosts.xml
<?xml version="1.0" encoding="utf-8" ?>
<hosts>
  <host name="vespa1">
    <alias>node0</alias>
  </host>
  <host name="vespa2">
    <alias>node1</alias>
  </host>
</hosts>
services.xml
<?xml version="1.0" encoding="utf-8" ?>
<services version="1.0">
  <admin version="2.0">
    <adminserver hostalias="node0"/>
    <configservers>
      <configserver hostalias="node0"/>
    </configservers>
  </admin>
  <container id="container" version="1.0">
    <document-api />
    <search />
    <nodes>
      <node hostalias="node0" />
      <node hostalias="node1" />
    </nodes>
  </container>
  <content id="product" version="1.0">
    <redundancy>1</redundancy>
    <documents>
      <document type="product" mode="index" />
    </documents>
    <nodes>
      <node hostalias="node0" distribution-key="0" />
      <node hostalias="node1" distribution-key="1" />
    </nodes>
  </content>
</services>
Looks like a host named vespa-container is already deployed but is not in the new application package. To debug, try
vespa-model-inspect hosts
on the config server and see if it lists the host. It may be a good idea to start from scratch; I don't see anything wrong with the enclosed files. To clean the config server, search for
vespa-configserver-remove-state
in the documentation.
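A sketch of that sequence, with the container name as a placeholder (vespa-configserver-remove-state may prompt for confirmation):

# list the hosts the config server currently knows about
docker exec <configserver-container> vespa-model-inspect hosts

# if a stale host shows up: stop everything, wipe the config server state, restart
docker exec <configserver-container> bash -c \
  'vespa-stop-services && vespa-stop-configserver \
   && vespa-configserver-remove-state \
   && vespa-start-configserver && vespa-start-services'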
I came across the same issue and fixed the error by replacing 'vespa-container' (in the command below) with the hostname of the physical box. However, this caused a couple of other errors in the RPC connection. Did you fix the problem yet? @aman.gupta
docker run --detach --name vespa --hostname vespa-container --privileged \
--volume $VESPA_SAMPLE_APPS:/vespa-sample-apps --publish 8080:8080 vespaengine/vespa
I have an SFTP inbound channel set up to poll a remote SFTP server and copy files to a local directory. When it runs, it gives me a 'Permission denied' error, but the log correctly mentions the file name. So it appears to be able to list the contents of the remote path, but is unable to read the files.
I haven't been able to figure out what the access issue is exactly. When I fiddled with it on a test server, I could reproduce the same issue if the SFTP user had at least r-x access on the remote directory but no access on the files themselves. However, on the live server where I get the issue, the user does have the required level of access.
Running the sftp command copies the files without any issues:
/usr/bin/sftp -2 -i KEYFILE USER@SERVER:REMOTEDIR/FILEPATTERN* LOCALDIR
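Since the plain sftp copy succeeds while the adapter's read fails, one way to narrow it down is to list the permission bits and retry a single read with exactly the same key; a sketch, where REMOTEDIR and FILENAME stand in for the real values:

# batch-mode sftp: show permissions, then actually open one file
/usr/bin/sftp -2 -i KEYFILE USER@SERVER <<'EOF'
ls -l REMOTEDIR
get REMOTEDIR/FILENAME /tmp/FILENAME
EOF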
Here is how I have the SFTP channel in my Spring Integration config:
<int:poller default="true" fixed-rate="${fixed.rate}" />

<bean id="sftpClientFactory"
    class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
  <property name="host" value="${sftp.inbound.channel.host}" />
  <property name="port" value="${sftp.inbound.channel.availableServerPort}" />
  <property name="user" value="${sftp.inbound.channel.userid}" />
  <property name="password" value="${sftp.inbound.channel.password}" />
  <property name="privateKey" value="file:///${sftp.inbound.channel.server.key}" />
</bean>

<int-sftp:inbound-channel-adapter id="sftpInbound"
    channel="sftpChannel" session-factory="sftpClientFactory"
    filename-pattern="${input.file.format}" auto-create-local-directory="true"
    delete-remote-files="false" remote-directory="${sftp.inbound.channel.remote.directory}"
    local-directory="${sftp.inbound.channel.local.directory}" />

<int:channel id="sftpChannel">
  <int:queue />
</int:channel>
The project is using Spring Integration version 4.0.4.RELEASE.
This is the full exception trace; the file name gets correctly logged at the placeholder <FILENAME>:
ERROR 9860 --- [ask-scheduler-2] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:209)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.receive(AbstractInboundFileSynchronizingMessageSource.java:167)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:124)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:192)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:149)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:146)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:298)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:292)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying from remote to local directory
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:238)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer$1.doInSession(AbstractInboundFileSynchronizer.java:177)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer$1.doInSession(AbstractInboundFileSynchronizer.java:167)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:302)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:167)
... 20 more
Caused by: org.springframework.core.NestedIOException: failed to read file <FILENAME>; nested exception is 3: Permission denied
at org.springframework.integration.sftp.session.SftpSession.read(SftpSession.java:132)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:231)
... 24 more
Caused by: 3: Permission denied
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1313)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1266)
at org.springframework.integration.sftp.session.SftpSession.read(SftpSession.java:128)
... 25 more
I'd appreciate it if anyone can help me figure out what I may be missing.
My Tomcat deployment is failing every time with the following error:
403 Access Denied
You are not authorized to view this page.
This is my tomcat-users.xml:
<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="bhaskar" password="bhaskar007" roles="manager-gui"/>
</tomcat-users>
I don't know what I am doing wrong in this case. Tomcat Manager allows me to log in but not to deploy. Please help, as I am pretty sure that my deployment path is correct.
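For reference, Tomcat separates the HTML manager GUI (manager-gui) from the text/script interface used by automated deployment tooling (manager-script); a tomcat-users.xml granting both might look like the sketch below. Whether a missing role is the cause of this particular 403 depends on how the deployment is performed:

<tomcat-users>
  <role rolename="manager-gui"/>
  <role rolename="manager-script"/>
  <user username="bhaskar" password="bhaskar007" roles="manager-gui,manager-script"/>
</tomcat-users>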
When I run Solr with RunJetty in Eclipse, I get an IllegalStateException saying port 8080 is already in use.
Can anyone help me with this?
In your Solr installation folder, search for "example/etc/jetty.xml".
This is the relevant part you're looking for:
<Set name="port">
  <SystemProperty name="jetty.port" default="8080"/>
</Set>
Change the "default" value to an unused port of your choice, or launch Jetty adding
-Djetty.port=11111
from the command line (11111 is just a random number; choose the one you need).
This means that there is a Java service already using the port. Go to Task Manager, open the Processes tab, and kill any javaw process that is currently running.
Hope that might fix it.
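Rather than killing every javaw blindly, it may be safer to first find the specific process holding port 8080 (the PID is whatever the tool reports):

# Windows (the PID matches the PID column in Task Manager)
netstat -ano | findstr :8080
taskkill /PID <pid> /F

# Linux/macOS equivalent
lsof -i :8080
kill <pid>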
Change your Apache Tomcat port to some other port and try again.
Please set the port value in the server.xml file. Its path is:
Tomcat-installation-dir\conf\server.xml
Search for a tag like this:
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true" />
Change the port number to any other number, for example 9090; after the change it should look like this:
<!-- Define a non-SSL HTTP/1.1 Connector on port 9090 -->
<Connector port="9090" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true" />
Don't change anything other than the Connector port; change only the value of port, save the file, and then hit the URL:
localhost:9090
Or use the port number that you set in the server.xml file:
localhost:port_in_server.xml
If everything goes fine, it should open the Tomcat home page. This confirms that you have changed Tomcat's default port (8080) to 9090.
Shut down the Tomcat server at this point.
Now stop and restart the Jetty server, and it will work for you with no issues.
Hope this helps.