why is "arp -a" showing inconsistent outputs that are vastly different? - arp

I have newly set up a computer with Ubuntu 18.04 and connected it to my home Wi-Fi. When I run the arp -a command to scan for other devices connected to the same network, I see some very weird output.
First, the connection is fine, as confirmed by ifconfig:
john@home:~$ ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 246054 bytes 21958490 (21.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 246054 bytes 21958490 (21.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlo1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.14 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::e3f:e0a5:2438:a96a prefixlen 64 scopeid 0x20<link>
inet6 240d:1a:6a5:c900:1420:b3cc:994b:1b7b prefixlen 64 scopeid 0x0<global>
inet6 240d:1a:6a5:c900:74f4:f504:3a41:bc12 prefixlen 64 scopeid 0x0<global>
ether 04:33:c2:c4:02:a2 txqueuelen 1000 (Ethernet)
RX packets 2452125 bytes 3302288691 (3.3 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 964749 bytes 117659686 (117.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
It has the IP address 192.168.1.14.
Then I tried arp -a:
john@home:~$ arp -a
? (192.168.1.19) at 92:c4:78:3c:46:16 [ether] on wlo1
_gateway (192.168.1.1) at e4:7e:66:1f:bf:4c [ether] on wlo1
? (192.168.1.5) at 26:36:46:f9:69:83 [ether] on wlo1
which makes sense, because I can confirm that my iPad has the IP address 192.168.1.19 and my phone has 192.168.1.5.
However, after a while I executed arp -a again, and the output blew my mind:
john@home:~$ arp -a
? (192.168.1.206) at <incomplete> on wlo1
? (192.168.1.183) at <incomplete> on wlo1
? (192.168.1.107) at <incomplete> on wlo1
? (192.168.1.8) at <incomplete> on wlo1
? (192.168.1.18) at <incomplete> on wlo1
? (192.168.1.165) at <incomplete> on wlo1
? (192.168.1.186) at <incomplete> on wlo1
? (192.168.1.110) at <incomplete> on wlo1
? (192.168.1.77) at <incomplete> on wlo1
? (192.168.1.33) at <incomplete> on wlo1
? (192.168.1.178) at <incomplete> on wlo1
? (192.168.1.123) at <incomplete> on wlo1
? (192.168.1.112) at <incomplete> on wlo1
? (192.168.1.96) at <incomplete> on wlo1
? (192.168.1.117) at <incomplete> on wlo1
? (192.168.1.74) at <incomplete> on wlo1
? (192.168.1.95) at <incomplete> on wlo1
? (192.168.1.84) at <incomplete> on wlo1
? (192.168.1.41) at <incomplete> on wlo1
? (192.168.1.62) at <incomplete> on wlo1
? (192.168.1.51) at <incomplete> on wlo1
? (192.168.1.8) at <incomplete> on wlo1
? (192.168.1.29) at <incomplete> on wlo1
? (192.168.1.18) at <incomplete> on wlo1
? (192.168.1.231) at <incomplete> on wlo1
? (192.168.1.252) at <incomplete> on wlo1
? (192.168.1.241) at <incomplete> on wlo1
? (192.168.1.198) at <incomplete> on wlo1
? (192.168.1.219) at <incomplete> on wlo1
? (192.168.1.208) at <incomplete> on wlo1
? (192.168.1.165) at <incomplete> on wlo1
? (192.168.1.186) at <incomplete> on wlo1
? (192.168.1.143) at <incomplete> on wlo1
? (192.168.1.132) at <incomplete> on wlo1
? (192.168.1.153) at <incomplete> on wlo1
? (192.168.1.110) at <incomplete> on wlo1
? (192.168.1.99) at <incomplete> on wlo1
? (192.168.1.120) at <incomplete> on wlo1
? (192.168.1.77) at <incomplete> on wlo1
? (192.168.1.66) at <incomplete> on wlo1
? (192.168.1.87) at <incomplete> on wlo1
? (192.168.1.44) at <incomplete> on wlo1
? (192.168.1.33) at <incomplete> on wlo1
? (192.168.1.54) at <incomplete> on wlo1
? (192.168.1.11) at <incomplete> on wlo1
? (192.168.1.21) at <incomplete> on wlo1
? (192.168.1.234) at <incomplete> on wlo1
? (192.168.1.244) at <incomplete> on wlo1
? (192.168.1.201) at <incomplete> on wlo1
? (192.168.1.222) at <incomplete> on wlo1
? (192.168.1.211) at <incomplete> on wlo1
? (192.168.1.168) at <incomplete> on wlo1
? (192.168.1.189) at <incomplete> on wlo1
? (192.168.1.178) at <incomplete> on wlo1
? (192.168.1.135) at <incomplete> on wlo1
? (192.168.1.156) at <incomplete> on wlo1
? (192.168.1.145) at <incomplete> on wlo1
? (192.168.1.102) at <incomplete> on wlo1
? (192.168.1.123) at <incomplete> on wlo1
? (192.168.1.112) at <incomplete> on wlo1
? (192.168.1.69) at <incomplete> on wlo1
? (192.168.1.90) at <incomplete> on wlo1
? (192.168.1.47) at <incomplete> on wlo1
? (192.168.1.36) at <incomplete> on wlo1
? (192.168.1.57) at <incomplete> on wlo1
? (192.168.1.3) at <incomplete> on wlo1
? (192.168.1.24) at <incomplete> on wlo1
? (192.168.1.237) at <incomplete> on wlo1
? (192.168.1.226) at <incomplete> on wlo1
? (192.168.1.247) at <incomplete> on wlo1
? (192.168.1.204) at <incomplete> on wlo1
? (192.168.1.193) at <incomplete> on wlo1
? (192.168.1.214) at <incomplete> on wlo1
? (192.168.1.171) at <incomplete> on wlo1
? (192.168.1.160) at <incomplete> on wlo1
? (192.168.1.181) at <incomplete> on wlo1
? (192.168.1.138) at <incomplete> on wlo1
? (192.168.1.159) at <incomplete> on wlo1
? (192.168.1.148) at <incomplete> on wlo1
? (192.168.1.105) at <incomplete> on wlo1
? (192.168.1.126) at <incomplete> on wlo1
? (192.168.1.115) at <incomplete> on wlo1
? (192.168.1.72) at <incomplete> on wlo1
? (192.168.1.93) at <incomplete> on wlo1
? (192.168.1.82) at <incomplete> on wlo1
? (192.168.1.39) at <incomplete> on wlo1
? (192.168.1.60) at <incomplete> on wlo1
? (192.168.1.49) at <incomplete> on wlo1
? (192.168.1.6) at <incomplete> on wlo1
? (192.168.1.27) at <incomplete> on wlo1
? (192.168.1.16) at <incomplete> on wlo1
? (192.168.1.229) at <incomplete> on wlo1
? (192.168.1.250) at <incomplete> on wlo1
? (192.168.1.207) at <incomplete> on wlo1
? (192.168.1.196) at <incomplete> on wlo1
? (192.168.1.217) at <incomplete> on wlo1
? (192.168.1.174) at <incomplete> on wlo1
? (192.168.1.163) at <incomplete> on wlo1
? (192.168.1.184) at <incomplete> on wlo1
? (192.168.1.141) at <incomplete> on wlo1
? (192.168.1.130) at <incomplete> on wlo1
? (192.168.1.151) at <incomplete> on wlo1
? (192.168.1.108) at <incomplete> on wlo1
? (192.168.1.97) at <incomplete> on wlo1
? (192.168.1.118) at <incomplete> on wlo1
? (192.168.1.75) at <incomplete> on wlo1
? (192.168.1.64) at <incomplete> on wlo1
? (192.168.1.85) at <incomplete> on wlo1
? (192.168.1.42) at <incomplete> on wlo1
? (192.168.1.63) at <incomplete> on wlo1
? (192.168.1.52) at <incomplete> on wlo1
? (192.168.1.9) at <incomplete> on wlo1
? (192.168.1.30) at <incomplete> on wlo1
? (192.168.1.19) at <incomplete> on wlo1
? (192.168.1.232) at <incomplete> on wlo1
? (192.168.1.253) at <incomplete> on wlo1
? (192.168.1.242) at <incomplete> on wlo1
? (192.168.1.199) at <incomplete> on wlo1
? (192.168.1.220) at <incomplete> on wlo1
? (192.168.1.209) at <incomplete> on wlo1
? (192.168.1.166) at <incomplete> on wlo1
? (192.168.1.187) at <incomplete> on wlo1
? (192.168.1.176) at <incomplete> on wlo1
? (192.168.1.133) at <incomplete> on wlo1
? (192.168.1.154) at <incomplete> on wlo1
? (192.168.1.111) at <incomplete> on wlo1
? (192.168.1.100) at <incomplete> on wlo1
? (192.168.1.121) at <incomplete> on wlo1
? (192.168.1.78) at <incomplete> on wlo1
? (192.168.1.67) at <incomplete> on wlo1
? (192.168.1.88) at <incomplete> on wlo1
? (192.168.1.45) at <incomplete> on wlo1
? (192.168.1.34) at <incomplete> on wlo1
? (192.168.1.55) at <incomplete> on wlo1
? (192.168.1.12) at <incomplete> on wlo1
_gateway (192.168.1.1) at e4:7e:66:1f:bf:4c [ether] on wlo1
? (192.168.1.22) at <incomplete> on wlo1
? (192.168.1.235) at <incomplete> on wlo1
? (192.168.1.224) at <incomplete> on wlo1
? (192.168.1.245) at <incomplete> on wlo1
? (192.168.1.202) at <incomplete> on wlo1
? (192.168.1.223) at <incomplete> on wlo1
? (192.168.1.212) at <incomplete> on wlo1
? (192.168.1.169) at <incomplete> on wlo1
? (192.168.1.190) at <incomplete> on wlo1
? (192.168.1.179) at <incomplete> on wlo1
? (192.168.1.136) at <incomplete> on wlo1
? (192.168.1.157) at <incomplete> on wlo1
? (192.168.1.146) at <incomplete> on wlo1
? (192.168.1.103) at <incomplete> on wlo1
? (192.168.1.124) at <incomplete> on wlo1
? (192.168.1.113) at <incomplete> on wlo1
? (192.168.1.70) at <incomplete> on wlo1
? (192.168.1.91) at <incomplete> on wlo1
? (192.168.1.80) at <incomplete> on wlo1
? (192.168.1.37) at <incomplete> on wlo1
? (192.168.1.58) at <incomplete> on wlo1
? (192.168.1.15) at <incomplete> on wlo1
? (192.168.1.4) at <incomplete> on wlo1
? (192.168.1.25) at <incomplete> on wlo1
? (192.168.1.238) at <incomplete> on wlo1
? (192.168.1.227) at <incomplete> on wlo1
? (192.168.1.248) at <incomplete> on wlo1
? (192.168.1.205) at <incomplete> on wlo1
? (192.168.1.194) at <incomplete> on wlo1
? (192.168.1.215) at <incomplete> on wlo1
? (192.168.1.172) at <incomplete> on wlo1
? (192.168.1.161) at <incomplete> on wlo1
? (192.168.1.182) at <incomplete> on wlo1
? (192.168.1.139) at <incomplete> on wlo1
? (192.168.1.128) at <incomplete> on wlo1
? (192.168.1.149) at <incomplete> on wlo1
? (192.168.1.106) at <incomplete> on wlo1
? (192.168.1.127) at <incomplete> on wlo1
? (192.168.1.116) at <incomplete> on wlo1
? (192.168.1.73) at <incomplete> on wlo1
? (192.168.1.94) at <incomplete> on wlo1
? (192.168.1.83) at <incomplete> on wlo1
? (192.168.1.40) at <incomplete> on wlo1
? (192.168.1.61) at <incomplete> on wlo1
? (192.168.1.50) at <incomplete> on wlo1
? (192.168.1.7) at <incomplete> on wlo1
? (192.168.1.28) at <incomplete> on wlo1
? (192.168.1.17) at <incomplete> on wlo1
? (192.168.1.230) at <incomplete> on wlo1
? (192.168.1.251) at <incomplete> on wlo1
? (192.168.1.240) at <incomplete> on wlo1
? (192.168.1.197) at <incomplete> on wlo1
? (192.168.1.218) at <incomplete> on wlo1
? (192.168.1.175) at <incomplete> on wlo1
? (192.168.1.164) at <incomplete> on wlo1
? (192.168.1.185) at <incomplete> on wlo1
? (192.168.1.142) at <incomplete> on wlo1
? (192.168.1.131) at <incomplete> on wlo1
? (192.168.1.152) at <incomplete> on wlo1
? (192.168.1.109) at <incomplete> on wlo1
? (192.168.1.98) at <incomplete> on wlo1
? (192.168.1.119) at <incomplete> on wlo1
? (192.168.1.76) at <incomplete> on wlo1
? (192.168.1.65) at <incomplete> on wlo1
? (192.168.1.86) at <incomplete> on wlo1
? (192.168.1.43) at <incomplete> on wlo1
? (192.168.1.32) at <incomplete> on wlo1
? (192.168.1.53) at <incomplete> on wlo1
? (192.168.1.10) at <incomplete> on wlo1
? (192.168.1.31) at <incomplete> on wlo1
? (192.168.1.20) at <incomplete> on wlo1
? (192.168.1.233) at <incomplete> on wlo1
? (192.168.1.254) at <incomplete> on wlo1
? (192.168.1.243) at <incomplete> on wlo1
? (192.168.1.200) at <incomplete> on wlo1
? (192.168.1.221) at <incomplete> on wlo1
john@home:~$
What is this? What just happened?
I thought arp -a was supposed to scan and list the other devices on the same network. What are these results?

arp -a does not actively scan; it merely shows what the ARP table contains at that moment.
The ARP table holds the known IP-to-MAC-address translations.
An entry appears the first time communication with an IP address requires it to be translated to a MAC address: the address is resolved via the ARP protocol and the result is stored in the ARP table.
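For example, merely pinging a host is enough to create an entry (a minimal illustration on Linux, reusing the addresses and interface from your output):
ping -c 1 192.168.1.19    # forces an ARP resolution for that host first
arp -a                    # the freshly learned entry now shows up here
ip neigh show dev wlo1    # the modern iproute2 view of the same cache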
What you see in the ARP table therefore means there has been some communication attempt with each of those IP addresses. However, an <incomplete> entry means the IP was not successfully resolved to a MAC address (nothing answered the ARP request).
It suggests something was scanning IP addresses in your range, but was mostly not successful.
See more here: https://www.dummies.com/programming/networking/network-administration-arp-command/
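If you actually want to scan the network (which is what arp -a is often mistaken for), use a dedicated tool instead; for instance, assuming nmap or arp-scan is installed:
sudo nmap -sn 192.168.1.0/24               # host-discovery sweep of the whole /24, no port scan
sudo arp-scan --interface=wlo1 --localnet  # ARP-level sweep of the local segment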
However, the fact that something is scanning may be a sign of malicious code running on your computer.
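One way to check whether the scan is coming from your own machine is to watch the ARP traffic it generates and see which local processes are opening connections, for example:
sudo tcpdump -ni wlo1 arp    # show ARP requests/replies on the wireless interface in real time
sudo ss -tnap                # list sockets together with the owning processes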

Related

KC-SERVICES0089: Failed to run scheduled task ClearExpiredClientInitialAccessTokens

I use Keycloak 10.0.2 with SQL Server as the database. Sometimes I get an error in Keycloak which prevents the application from running properly, and the only solution is to restart the application.
It seems that the connection gets closed somehow and Keycloak is not able to open it again. What could be the reason?
2020-06-22 09:09:28,649 ERROR [org.keycloak.services] (Timer-2) KC-SERVICES0089: Failed to run scheduled task ClearExpiredClientInitialAccessTokens: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not prepare statement
at org.hibernate@5.3.15.Final//org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
at org.hibernate@5.3.15.Final//org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:181)
at org.hibernate@5.3.15.Final//org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
at org.keycloak.keycloak-model-jpa@10.0.2//org.keycloak.models.jpa.JpaRealmProvider.removeExpiredClientInitialAccess(JpaRealmProvider.java:826)
at org.keycloak.keycloak-model-infinispan@10.0.2//org.keycloak.models.cache.infinispan.RealmCacheSession.removeExpiredClientInitialAccess(RealmCacheSession.java:1208)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.services.scheduled.ClearExpiredClientInitialAccessTokens.run(ClearExpiredClientInitialAccessTokens.java:30)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.services.scheduled.ClusterAwareScheduledTaskRunner$1.call(ClusterAwareScheduledTaskRunner.java:56)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.services.scheduled.ClusterAwareScheduledTaskRunner$1.call(ClusterAwareScheduledTaskRunner.java:52)
at org.keycloak.keycloak-model-infinispan@10.0.2//org.keycloak.cluster.infinispan.InfinispanClusterProvider.executeIfNotExecuted(InfinispanClusterProvider.java:78)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.services.scheduled.ClusterAwareScheduledTaskRunner.runTask(ClusterAwareScheduledTaskRunner.java:52)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.services.scheduled.ScheduledTaskRunner.run(ScheduledTaskRunner.java:45)
at org.keycloak.keycloak-services@10.0.2//org.keycloak.timer.basic.BasicTimerProvider$1.run(BasicTimerProvider.java:51)
at java.base/java.util.TimerThread.mainLoop(Timer.java:556)
at java.base/java.util.TimerThread.run(Timer.java:506)
Caused by: org.hibernate.exception.GenericJDBCException: could not prepare statement
at org.hibernate@5.3.15.Final//org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:47)
at org.hibernate@5.3.15.Final//org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate@5.3.15.Final//org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:186)
at org.hibernate@5.3.15.Final//org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareStatement(StatementPreparerImpl.java:81)
at org.hibernate@5.3.15.Final//org.hibernate.hql.internal.ast.exec.BasicExecutor.doExecute(BasicExecutor.java:87)
at org.hibernate@5.3.15.Final//org.hibernate.hql.internal.ast.exec.BasicExecutor.execute(BasicExecutor.java:59)
at org.hibernate@5.3.15.Final//org.hibernate.hql.internal.ast.exec.DeleteExecutor.execute(DeleteExecutor.java:109)
at org.hibernate@5.3.15.Final//org.hibernate.hql.internal.ast.QueryTranslatorImpl.executeUpdate(QueryTranslatorImpl.java:453)
at org.hibernate@5.3.15.Final//org.hibernate.engine.query.spi.HQLQueryPlan.performExecuteUpdate(HQLQueryPlan.java:378)
at org.hibernate@5.3.15.Final//org.hibernate.internal.SessionImpl.executeUpdate(SessionImpl.java:1550)
at org.hibernate@5.3.15.Final//org.hibernate.query.internal.AbstractProducedQuery.doExecuteUpdate(AbstractProducedQuery.java:1603)
at org.hibernate@5.3.15.Final//org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1585)
... 11 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft//com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft//com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1088)
at com.microsoft//com.microsoft.sqlserver.jdbc.SQLServerConnection.prepareStatement(SQLServerConnection.java:3409)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.doPrepareStatement(BaseWrapperManagedConnection.java:761)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.prepareStatement(BaseWrapperManagedConnection.java:747)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:478)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:476)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.SecurityActions.executeInTccl(SecurityActions.java:97)
at org.jboss.ironjacamar.jdbcadapters@1.4.20.Final//org.jboss.jca.adapters.jdbc.WrappedConnection.prepareStatement(WrappedConnection.java:476)
at org.hibernate@5.3.15.Final//org.hibernate.engine.jdbc.internal.StatementPreparerImpl$1.doPrepare(StatementPreparerImpl.java:90)
at org.hibernate@5.3.15.Final//org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:176)
... 20 more
EDITED:
Here is the datasource configuration part within standalone-ha.xml
<subsystem xmlns="urn:jboss:domain:datasources:5.0">
<datasources>
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true" statistics-enabled="${wildfly.datasources.statistics-enabled:${wildfly.statistics-enabled:false}}">
<connection-url>jdbc:sqlserver://sqlserver\sqlinstance:1431;DatabaseName=keycloak;</connection-url>
<driver>sqlserver</driver>
<security>
<user-name>keycloak</user-name>
<password>keycloak</password>
</security>
</datasource>
<drivers>
<driver name="h2" module="com.h2database.h2">
<xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
</driver>
<driver name="sqlserver" module="com.microsoft">
<driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
<xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class>
</driver>
</drivers>
</datasources>
</subsystem>

Error getting file length for [segments_XX] in Solr 6.1

We are getting the error below when a Sitecore 8 content management website is using Solr 6.1. Please advise.
WARN true
LukeRequestHandler
Error getting file length for [segments_7c]
java.nio.file.NoSuchFileException: G:\Solr\Solr61\server\solr\itembucket_prod\data\index\segments_7c
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.size(Files.java:2332)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
at org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
at org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
at org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:968)
at org.apache.solr.handler.admin.CoreAdminOperation$4.call(CoreAdminOperation.java:170)
at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:367)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:663)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)

Solr 4.10.2 - DataImportHandler - Question

I have a question on loading data into Solr using the DataImportHandler.
I have an XML file which I need to load into Solr using the DataImportHandler via the XPath transformer.
The XML file structure is:
<record>
<item>
<tags>
<tag1>
<tag>
<id>01</id>
<value>value01</value>
</tag>
<tag>
<id>02</id>
<value>value02</value>
</tag>
</tag1>
<tag2>
<tag>
<id>03</id>
<value>value03</value>
</tag>
</tag2>
<tag3>
<tag>
<id>04</id>
<value>value04</value>
</tag>
<tag>
<id>05</id>
<value>value05</value>
</tag>
<tag>
<id>06</id>
<value>value06</value>
</tag>
</tag3>
</tags>
</item>
</record>
I can use /record/item/tags//tag/id to load the data into a multivalued field, but I need to maintain the relationship of id with value; that is, I need a way to retrieve, at query time, the value related to a given id.
Please advise. Thanks in advance.

Routing queues with ServiceMix

I'm trying to configure Apache ServiceMix to implement the network topology below:
I'm very new to ServiceMix, Camel, ActiveMQ, etc., and the main issue I'm trying to solve is to route messages from the Output queue of DC1.ActiveMqBroker to the Input queue of DC2.ActiveMqBroker.
I'm sure this should be easy. Could someone point me to a good article or write a rough snippet of configuration (in Spring/Blueprint, it doesn't matter)?
UPDATE:
Sorry for the long text, but I don't see any other way to describe my problem.
My sample configuration:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="dc2" dataDirectory="${karaf.data}/activemq/dc2" useShutdownHook="false">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue="input" producerFlowControl="true" memoryLimit="1mb"/>
<policyEntry queue="output" producerFlowControl="true" memoryLimit="1mb"/>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${karaf.data}/activemq/dc2/kahadb"/>
</persistenceAdapter>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://localhost:61619"/>
</transportConnectors>
</broker>
<bean id="dc1activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://localhost:61619" />
</bean>
<bean id="dc2activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://localhost:61618" />
</bean>
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="dc1activemq:queue:output"/>
<log message="Took message from dc1 to dc2"/>
<to uri="dc2activemq:queue:input"/>
</route>
</camelContext>
And I constantly get the following error:
08:06:40,739 | INFO | rint Extender: 3 | Activator | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Found 1 @Converter classes to load
08:06:40,740 | INFO | rint Extender: 3 | Activator | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Found 1 @Converter classes to load
08:06:40,741 | INFO | rint Extender: 3 | Activator | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Found 1 @Converter classes to load
08:06:40,741 | INFO | rint Extender: 3 | Activator | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Found 2 @Converter classes to load
08:06:40,749 | INFO | rint Extender: 3 | Activator | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Found 13 @Converter classes to load
08:06:40,754 | INFO | rint Extender: 3 | BlueprintCamelContext | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | JMX enabled. Using ManagedManagementStrategy.
08:06:40,758 | INFO | rint Extender: 3 | BlueprintCamelContext | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Apache Camel 2.6.0 (CamelContext: 211-camel-165) is starting
08:06:42,364 | INFO | rint Extender: 3 | BlueprintCamelContext | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Route: route55 started and consuming from: Endpoint[dc1activemq://queue:output]
08:06:42,364 | INFO | rint Extender: 3 | BlueprintCamelContext | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Total 1 routes, of which 1 is started.
08:06:42,365 | INFO | rint Extender: 3 | BlueprintCamelContext | ? ? | 68 - org.apache.camel.camel-core - 2.6.0 | Apache Camel 2.6.0 (CamelContext: 211-camel-165) started in 1.606 seconds
08:06:48,379 | WARN | tenerContainer-1 | DefaultMessageListenerContainer | ? ? | 77 - org.springframework.jms - 3.0.5.RELEASE | Could not refresh JMS Connection for destination 'output' - retrying in 5000 ms. Cause: Could not connect to broker URL: tcp://localhost:61619. Reason: java.net.ConnectException: Connection refused: connect
08:06:54,381 | WARN | tenerContainer-1 | DefaultMessageListenerContainer | ? ? | 77 - org.springframework.jms - 3.0.5.RELEASE | Could not refresh JMS Connection for destination 'output' - retrying in 5000 ms. Cause: Could not connect to broker URL: tcp://localhost:61619. Reason: java.net.ConnectException: Connection refused: connect
I'm assuming that you've got a handle on how Camel is started up in ServiceMix and have a few (at least) proof-of-concept routes going; if not, start there. I'm also assuming that you know the URLs of the two message brokers, and that they're already set up.
Given that, one solution is to register two ActiveMQComponents:
<bean id="dc1activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://DC1BrokerLocation:12345" />
</bean>
<bean id="dc2activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://DC2BrokerLocation:12345" />
</bean>
And then in your camel context:
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="dc1activemq:queue:whateverTheOutQueueIsCalled"/>
<log message="Took message from dc1 to dc2"/>
<to uri="dc2activemq:queue:whateverTheInQueueIsCalled"/>
</route>
<route>
<from uri="dc2activemq:queue:whateverTheOutQueueIsCalled"/>
<log message="Took message from dc2 to dc1"/>
<to uri="dc1activemq:queue:whateverTheInQueueIsCalled"/>
</route>
</camelContext>

Solr: NullPointerException when using spellcheck.q

I've already posted about a problem which brought me to this error, but here is something more specific about the error itself:
When I use spellcheck.q in my query to define what will be "spellchecked", I always get this error, with every configuration I try:
java.lang.NullPointerException
at org.apache.solr.handler.component.SpellCheckComponent.getTokens(SpellCheckComponent.java:476)
at org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:131)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:202)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
All my other functionality works great; this is the only thing that doesn't work at all, and only when I add "&spellcheck.q=my%20sentence" to the query...
Has anyone already solved this problem?
It is fixed in trunk.
So get the latest code from SVN or a nightly build.
