Setting up JanusGraph, I noticed the following in the console:
09:04:12,175 INFO ReflectiveConfigOptionLoader:173 - Loaded and initialized config classes: 10 OK out of 12 attempts in PT0.023S
09:04:12,230 INFO Reflections:224 - Reflections took 28 ms to scan 1 urls, producing 2 keys and 2 values
09:04:12,291 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.index-name=entity (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,294 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.backend=solr (Type: GLOBAL_OFFLINE) is overridden by globally managed value (elasticsearch). Use the ManagementSystem interface instead of the local configuration to control this setting.
09:04:12,300 INFO CassandraThriftStoreManager:628 - Closed Thrift connection pooler.
and then I see the following:
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
How do I stop using Elasticsearch and switch to Solr?
My properties file is as follows:
index.search.backend=solr
index.search.directory=/path/to/directory/for/solr/index/something
index.search.index-name=something
index.search.solr.mode=http
index.search.solr.http-urls=http://127.0.0.1:8983/solr
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
The answer to this is basically the same as this one for Titan, since JanusGraph was forked from Titan.
You are probably trying to connect to an existing graph that was previously configured to use Elasticsearch. By default, the keyspace is named janusgraph.
1) You could connect to a different keyspace by updating conf/janusgraph-cassandra.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.cassandra.keyspace=mygraph
2) You could drop the existing keyspace. If you used bin/janusgraph.sh start from the quick start directions (which starts a single-node Cassandra and a single-node Elasticsearch), run:
bin/janusgraph.sh clean
Or if you have a standalone Cassandra installation:
$CASSANDRA_HOME/bin/cqlsh -e 'drop keyspace if exists janusgraph'
Then you would be able to connect with the default conf/janusgraph-cassandra.properties.
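If you would rather keep the existing keyspace and switch it over to Solr, the warning itself names the mechanism: GLOBAL_OFFLINE settings are stored in the graph's global configuration and must be changed through the ManagementSystem, not the local properties file. Below is a rough Gremlin Console sketch under two assumptions: the graph can still be opened (i.e. the globally configured Elasticsearch is reachable) and no other JanusGraph instances are open; any data already written to mixed indexes would still need to be reindexed.
graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')
mgmt = graph.openManagement()
// GLOBAL_OFFLINE options can only be changed while this is the only open instance
mgmt.set('index.search.backend', 'solr')
mgmt.set('index.search.index-name', 'something')
mgmt.commit()
// close and reopen the graph for the new values to take effect
graph.close()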
I am new to the OpenStack environment and started getting into it with a small DevStack setup. I worked through the following instructions on an Ubuntu 18.04 machine and everything worked fine. In order to play with some DNS zones, I started researching Designate. After adapting the following instructions to my setup, I got some errors.
Executing stack.sh produces the following error:
++/opt/stack/designate/devstack/plugin.sh:source:5 set +o xtrace
2021-01-12 21:44:39.009 | Initializing Designate
DROP DATABASE
Could not load 'database': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'pool': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'tlds': type object 'deprecated' has no attribute 'WALLABY'
usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH] [--log-date-format DATE_FORMAT]
[--log-dir LOG_DIR] [--log-file PATH] [--nodebug]
[--nouse-journal] [--nouse-json] [--nouse-syslog]
[--nowatch-log-file]
[--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal]
[--use-json] [--use-syslog] [--watch-log-file]
{} ...
designate: error: argument category: invalid choice: 'database' (choose from )
Error on exit
World dumping... see /opt/stack/logs/worlddump-2021-01-12-214442.txt for details
nova-compute: no process found
neutron-dhcp-agent: no process found
neutron-l3-agent: no process found
neutron-metadata-agent: no process found
neutron-openvswitch-agent: no process found
I was not sure if my setup was valid, so I tried the example config from the Designate tutorial, but the same problem occurred.
My current local.conf:
[[local|localrc]]
USE_PYTHON3=True
ADMIN_PASSWORD=***
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DEST=/opt/stack
SERVICE_HOST=192.168.1.***
HOST_IP=$SERVICE_HOST
disable_service mysql
enable_service postgresql
enable_plugin designate https://opendev.org/openstack/designate
enable_service tempest
Checking plugin.sh, it looks like the error comes from this function:
function init_designate {
    # (Re)create designate database
    recreate_database designate utf8

    # Init and migrate designate database
    $DESIGNATE_BIN_DIR/designate-manage database sync

    init_designate_backend
}
I hope somebody can give me a hint on running DevStack with Designate.
Thanks in advance.
The issue you are having is a version mismatch between the cloud install and the Designate plugin. Designate is expecting a newer version of the oslo_log package.
Check that the "devstack" version you have checked out is on the master branch.
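For example, a quick way to verify that (the checkout path is an assumption; use wherever you cloned DevStack):
cd ~/devstack
git rev-parse --abbrev-ref HEAD   # prints the checked-out branch, e.g. master or stable/victoria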
The line:
enable_plugin designate https://opendev.org/openstack/designate
is pulling the master branch of Designate for the DevStack plugin.
If you are trying to install a stable branch version of OpenStack, you will need to specify a branch reference for the DevStack plugin as well (for example, stable/victoria):
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
As mentioned above, you will also need to enable the designate services:
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns
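After adjusting local.conf, a clean re-run from the DevStack checkout directory usually gets past the failed run (standard DevStack scripts; the path is again an assumption):
cd ~/devstack
./unstack.sh   # stop the half-finished stack
./clean.sh     # clean up what the previous run installed
./stack.sh     # rebuild with the pinned designate plugin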
I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log, and it seems the TaskManager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user-configured, which is why it isn't documented.
The Windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
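For example, from a WSL or Cygwin shell (a sketch; the unpack location is an assumption):
cd ~/flink-1.10.0
./bin/start-cluster.sh    # starts a local JobManager and TaskManager
# the dashboard at http://localhost:8081 should then list one task manager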
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
I am facing an issue with the Ninject IoC container.
I am using Sitecore 8.2 update 5 and switching from Lucene to the Solr search engine using the steps mentioned in https://sitecorerockz.wordpress.com/2018/08/01/lucene-to-solr/
I am using Solr 6.6.3. This project was originally on Sitecore 6.x and has been upgraded over time; it is now on Sitecore 8.2 update 5.
The same Solr setup works fine with a fresh Sitecore 8.2 update 5 installation.
I created a Solr diagnostic page and placed it in the /sitecore/admin folder to check the error details. I am getting the error below for all of the indexes:
Solr Indexes Microsoft.Practices.ServiceLocation.ActivationException: Activation error occured while trying to get instance of type ISolrOperations`1, key "sitecore_analytics_index" ---> Ninject.ActivationException: Error activating ISolrOperations{Dictionary{string, Object}} No matching bindings are available, and the type is not self-bindable. Activation path: 1) Request for ISolrOperations{Dictionary{string, Object}} Suggestions: 1) Ensure that you have defined a binding for ISolrOperations{Dictionary{string, Object}}. 2) If the binding was defined in a module, ensure that the module has been loaded into the kernel. 3) Ensure you have not accidentally created more than one kernel. 4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name. 5) If you are using automatic module loading, ensure the search path and filters are correct. at Ninject.KernelBase.Resolve(IRequest request) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 376 at Ninject.ResolutionExtensions.Get(IResolutionRoot root, Type service, String name, IParameter[] parameters) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 164 at MyLibrary.test.Infrastructure.NinjectServiceLocator.DoGetInstance(Type serviceType, String key) in C:\test_Git\Sitecore\src\test\Infrastructure\NinjectServiceLocator.cs:line 15 at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 49 --- End of inner exception stack trace --- at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53 at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103 at Sitecore.ContentSearch.SolrProvider.SolrSearchIndex.Initialize() at ASP._Page_sitecore_admin_solr_diagnostic_cshtml.Execute() in c:\test_Git\Sitecore\build\25Sep2019\Website\sitecore\admin\solr-diagnostic.cshtml:line 29
What am I missing? Could you please advise?
The SetLocatorProvider was being initialized twice. To resolve the issue, I:
1) Modified the custom code related to the Ninject IoC container.
2) Upgraded Ninject.dll from version 3.0.0.0 to 3.2.2.0 under the bin\Social folder.
3) Replaced all the Solr DLL files with the ones from a fresh Sitecore 8.2 update 5 installation.
I have successfully installed the SortableBehaviorBundle in Sonata Admin. I am not using only the default DB connection, but different connections for different bundles.
In config.yml:
doctrine:
    dbal:
        default_connection: mysql
        connections:
            mysql:
                driver: "%mysql_database_driver%"
                ...
            geobundle:
                driver: "%geo_database_driver%"
                ...
In vendor/pixassociates/sortable-behavior-bundle/Pix/SortableBehaviorBundle/Resources/config/services.yml there is this definition:
...
arguments:
    - "@doctrine.orm.entity_manager"
If I change this to
    - "@doctrine.orm.geobundle_entity_manager"
I get the sorted data from my geobundle DB. If I try to override the setting in my own app/config/services.yml, I get:
Attempted to call an undefined method named "get" of class
"Macrocom\GeoAdminBundle\Entity\AppAccessServiceprovider". Did you
mean to call e.g. "getAppAccess", "getConfig", "getCreatedAt",
"getId", "getIsActive", "getPosition", "getServiceprovider" or
"getUpdatedAt"?
but when I leave it on the original setting, I get:
An exception has been thrown during the rendering of a template ("The
class 'ACME\GeoAdminBundle\Entity\AppAccessServiceprovider' was
not found in the chain configured namespaces
ACME\ApiBundle\Entity, ACME\CrmSyncBundle\Entity").
which seems to try to access the mysql DB and its configured namespaces.
As I don't want to change the SortableBehaviorBundle every time I deploy or update the project on the server, is there a way to override the default DB for this bundle only?
Best regards
RJ
I'm trying to connect my LDAP with the GeoNetwork database, but every time I log in it doesn't show the administrator button. When I check the database, it is empty. I am using geOrchestra 13.09 in a localhost environment; GeoServer and Mapfishapp are running well and log in without a problem.
My config-security.properties is:
# Core security properties
logout.success.url=/index.html
passwordSalt=secret-hash-salt=
# LDAP Connection Settings
ldap.base.provider.url=ldap://localhost:389
ldap.base.dn=dc=geobolivia,dc=gob,dc=bo
ldap.security.principal=cn=admin,dc=geobolivia,dc=gob,dc=bo
ldap.security.credentials=geobolivia
ldap.base.search.base=ou=users
ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
#ldap.base.dn.pattern=mail={0},${ldap.base.search.base}
# Define if groups and profile information are imported from LDAP. If not, local database is used.
# When a new user connect first, the default profile is assigned. A user administrator can update
# privilege information.
ldap.privilege.import=true
ldap.privilege.export=true
ldap.privilege.create.nonexisting.groups=false
# Define the way to extract profiles and privileges from the LDAP
# 1. Define one attribute for the profile and one for groups in config-security-overrides.properties
# 2. Define one attribute for the privilege and define a custom pattern (use LDAPUserDetailsContextMapperWithPa$
ldap.privilege.pattern=
#ldap.privilege.pattern=CAT_(.*)_(.*)
ldap.privilege.pattern.idx.group=1
ldap.privilege.pattern.idx.profil=2
# 3. Define custom location for extracting group and role (no support for group/role combination) (use LDAPUser$
#ldap.privilege.search.group.attribute=cn
#ldap.privilege.search.group.object=ou=groups
#ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={0})(cn=EL_*))
#ldap.privilege.search.group.pattern=EL_(.*)
#ldap.privilege.search.privilege.attribute=cn
#ldap.privilege.search.privilege.object=ou=groups
#ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={0})(cn=SV_*))
#ldap.privilege.search.privilege.pattern=SV_(.*)
ldap.privilege.search.group.attribute=cn
ldap.privilege.search.group.object=ou=groups
ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={1})(cn=EL_*))
ldap.privilege.search.group.pattern=EL_(.*)
ldap.privilege.search.privilege.attribute=cn
ldap.privilege.search.privilege.object=ou=groups
ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={1})(cn=SV_ADMIN))
ldap.privilege.search.privilege.pattern=SV_(.*)
# Run LDAP sync every day at 23:30
#ldap.sync.cron=0 30 23 * * ?
ldap.sync.cron=0 * * * * ?
#ldap.sync.cron=0 0/1 * 1/1 * ? *
ldap.sync.startDelay=60000
ldap.sync.user.search.base=${ldap.base.search.base}
ldap.sync.user.search.filter=(&(objectClass=*)(mail=*#*)(givenName=*))
ldap.sync.user.search.attribute=uid
ldap.sync.group.search.base=ou=groups
ldap.sync.group.search.filter=(&(objectClass=posixGroup)(cn=EL_*))
ldap.sync.group.search.attribute=cn
ldap.sync.group.search.pattern=EL_(.*)
# CAS properties
cas.baseURL=https://localhost:8443/cas
cas.ticket.validator.url=${cas.baseURL}
cas.login.url=${cas.baseURL}/login
cas.logout.url=${cas.baseURL}/logout?url=${geonetwork.https.url}/
<import resource="config-security-cas.xml"/>
<import resource="config-security-cas-ldap.xml"/>
# either the hardcoded url to the server
# or if has the form it will be replaced with
# the server details from the server configuration
geonetwork.https.url=https://localhost/geonetwork-private/
#geonetwork.https.url=https://geobolivia.gob.bo:443
#geonetwork.https.url=https://localhost:443
The geonetwork.log shows these results:
2014-03-11 13:41:00,004 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob starting ...
2014-03-11 13:41:00,006 DEBUG [org.springframework.ldap.core.support.AbstractContextSource] - Got Ldap context on server 'ldap://localhost:389/dc=geobolivia,dc=gob,dc=bo'
2014-03-11 13:41:00,008 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Returning cached instance of singleton bean 'resourceManager'
2014-03-11 13:41:00,026 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob done.
2014-03-11 13:41:26,429 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
2014-03-11 13:41:56,430 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
and the error that appears in geonetwork.log is:
2014-03-11 13:44:06,426 INFO [jeeves.service] - Dispatching : xml.search.keywords
2014-03-11 13:44:06,427 ERROR [jeeves.service] - Exception when executing service
2014-03-11 13:44:06,427 ERROR [jeeves.service] - (C) Exc : java.lang.IllegalArgumentException: The thesaurus external.theme.inspire-service-taxonomy does not exist, there for the query cannot be excuted: 'Query [query=SELECT DISTINCT id,uppc,lowc,broader,spa_prefLabel,spa_note FROM {id} rdf:type {skos:Concept},[{id} gml:BoundedBy {} gml:upperCorner {uppc}],[{id} gml:BoundedBy {} gml:lowerCorner {lowc}],[{id} skos:broader {broader}],[{id} skos:prefLabel {spa_prefLabel} WHERE lang(spa_prefLabel) LIKE "es" IGNORE CASE],[{id} skos:scopeNote {spa_note} WHERE lang(spa_note) LIKE "es" IGNORE CASE] WHERE (spa_prefLabel LIKE "***" IGNORE CASE OR id LIKE "*") LIMIT 35 USING NAMESPACE skos=<http://www.w3.org/2004/02/skos/core#>,gml=<http://www.opengis.net/gml#>, interpreter=KeywordResultInterpreter]'
The version of GeoNetwork currently used in geOrchestra does not show the "administration" button on its first page. You have to fire a search, then in the "other actions" menu at the top right you should be able to get to the administration interface. We know that it is not very intuitive, but it should change in the coming months (we recently planned an upgrade of GeoNetwork before the end of the year).
Did you solve it? I think that in your config-security.properties, at this place:
ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
you need to replace {0} with the username typed in the GeoNetwork sign-in screen.
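For context, {0} is the placeholder that Spring Security substitutes the typed username into at login time. With the values from the question's config-security.properties, a (hypothetical) login of jsmith would resolve to a bind DN like:
uid=jsmith,ou=users,dc=geobolivia,dc=gob,dc=bo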