Error in Using YCSB with Gemfire - benchmarking

Hi, I am using YCSB to benchmark Pivotal GemFire. My GemFire server is running properly, and I am running the benchmarking test with the following command:
bin/ycsb load gemfire -P workloads/workloada -p gemfire.serverhost=x.x.x.x -P gemfire-binding/conf/cache.xml -p gemfire.serverport=40404 -s > load.txt
Loading workload...
Starting test.
0 sec: 0 operations;
Exception in thread "Thread-1" java.lang.IllegalStateException: You must use client-cache in the cache.xml when ClientCacheFactory is used.
at com.gemstone.gemfire.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:316)
at com.gemstone.gemfire.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:274)
at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:3495)
at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:926)
at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.init(GemFireCacheImpl.java:708)
at com.gemstone.gemfire.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:533)
at com.gemstone.gemfire.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:207)
at com.gemstone.gemfire.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:161)
at com.yahoo.ycsb.db.GemFireClient.init(GemFireClient.java:125)
at com.yahoo.ycsb.DBWrapper.init(DBWrapper.java:63)
at com.yahoo.ycsb.ClientThread.run(Client.java:189)
0 sec: 0 operations;
Can anybody tell me where I am going wrong?
Thanks in advance.

In your GemFireClient.java you are setting up a client cache with the ClientCacheFactory class, but the cache.xml that you are giving it for configuration specifies a cache element instead of a client-cache element. Try changing your cache.xml to use a client-cache, similar to the example below. Note that when configuring a client-cache in your cache.xml you will need a different DTD than the one used above.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE client-cache PUBLIC
  "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
  "http://www.gemstone.com/dtd/cache7_0.dtd">
<client-cache copy-on-read="false">
  <region name="usertable" refid="PARTITION"/>
</client-cache>
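For context, here is a rough, hypothetical sketch of how a ClientCacheFactory ends up loading a cache XML file (this is not the actual YCSB binding code, and the file path and host values are placeholders). If the XML referenced by cache-xml-file declares a cache element rather than client-cache, the create() call fails with the IllegalStateException shown above.

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;

public class ClientCacheExample {
    public static void main(String[] args) {
        // Hypothetical values; in YCSB these come from gemfire.serverhost,
        // gemfire.serverport and the cache.xml supplied on the command line.
        ClientCache cache = new ClientCacheFactory()
                .set("cache-xml-file", "conf/client-cache.xml") // must declare <client-cache>
                .addPoolServer("x.x.x.x", 40404)
                .create(); // throws IllegalStateException if the XML declares <cache> instead
        System.out.println("Connected regions: " + cache.rootRegions());
    }
}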

You do not need to specify a cache.xml when running the YCSB client; the cache.xml in the gemfire-binding/conf folder is meant to be supplied to the GemFire server.
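In that case the load command is simply the one from the question minus the client-side cache.xml, something like (a sketch, keeping the original placeholders):

bin/ycsb load gemfire -P workloads/workloada -p gemfire.serverhost=x.x.x.x -p gemfire.serverport=40404 -s > load.txt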

Related

Unable to install devstack with designate

I am new to the OpenStack environment and started to get into it with a small DevStack setup. I worked through the following instructions on an Ubuntu 18.04 machine and everything worked fine. In order to play with some DNS zones, I started to research Designate. After adapting the following instructions to my setup, I got some errors.
Executing stack.sh produces the following error:
++/opt/stack/designate/devstack/plugin.sh:source:5 set +o xtrace
2021-01-12 21:44:39.009 | Initializing Designate
DROP DATABASE
Could not load 'database': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'pool': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'tlds': type object 'deprecated' has no attribute 'WALLABY'
usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH] [--log-date-format DATE_FORMAT]
[--log-dir LOG_DIR] [--log-file PATH] [--nodebug]
[--nouse-journal] [--nouse-json] [--nouse-syslog]
[--nowatch-log-file]
[--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal]
[--use-json] [--use-syslog] [--watch-log-file]
{} ...
designate: error: argument category: invalid choice: 'database' (choose from )
Error on exit
World dumping... see /opt/stack/logs/worlddump-2021-01-12-214442.txt for details
nova-compute: no process found
neutron-dhcp-agent: no process found
neutron-l3-agent: no process found
neutron-metadata-agent: no process found
neutron-openvswitch-agent: no process found
I was not sure whether my setup was valid, so I tried the example config from the Designate tutorial, but the same problem occurred.
My actual local.conf:
[[local|localrc]]
USE_PYTHON3=True
ADMIN_PASSWORD=***
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DEST=/opt/stack
SERVICE_HOST=192.168.1.***
HOST_IP=$SERVICE_HOST
disable_service mysql
enable_service postgresql
enable_plugin designate https://opendev.org/openstack/designate
enable_service tempest
Checking plugin.sh, it looks like the error comes from this function:
function init_designate {
    # (Re)create designate database
    recreate_database designate utf8
    # Init and migrate designate database
    $DESIGNATE_BIN_DIR/designate-manage database sync
    init_designate_backend
}
I hope somebody can give me a hint on how to run DevStack with Designate.
Thanks in advance.
The issue you are having is a version mismatch between the cloud install and the Designate plugin: Designate is expecting a newer version of the oslo_log package.
Check that the "devstack" version you have checked out is on the master branch.
The line:
enable_plugin designate https://opendev.org/openstack/designate
is pulling the master branch of Designate for the DevStack plugin.
If you are trying to install a stable-branch version of OpenStack, you will need to specify a reference for the DevStack plugin as well (for example, stable/victoria):
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
You will also need to enable the Designate services:
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns
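To check which branch your devstack checkout is on before deciding whether to pin the plugin, something like the following works (the clone path is illustrative; use wherever you cloned devstack):

cd ~/devstack                    # or wherever your devstack checkout lives
git branch --show-current        # should print master, or e.g. stable/victoria
git checkout stable/victoria     # only if you intend to run the stable branch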

[flink]Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log and it seems the task manager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like a required option, 'taskmanager.cpu.cores', is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user configured, which is why it isn't documented.
The Windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
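A minimal sketch of that workaround, assuming a bash-capable shell on Windows (for example Git Bash, Cygwin, or WSL) and the unpacked Flink 1.10.0 distribution:

cd flink-1.10.0              # path of your unpacked Flink distribution
./bin/start-cluster.sh       # start JobManager and TaskManager via the bash script

The dashboard at http://localhost:8081 should then report one task manager.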

How to retry failed scenario in Behave using python

Can someone please tell me how I can run a failed test again in Behave using Python?
I want to re-run the failed test case automatically if it fails.
The behave library actually has a RerunFormatter which can help you rerun the failing scenarios of your previous test run. It creates a text file listing all your failing scenarios, like:
# -- file:rerun.features
# RERUN: Failing scenarios during last test run.
features/auth.feature:10
features/auth.feature:42
features/notifications.feature:67
To use the RerunFormatter all you need to do is put it in your behave configuration file (behave.ini):
# -- file:behave.ini
[behave]
format = rerun
outfiles = rerun_failing.features
To rerun the failing scenarios, use this command:
behave @rerun_failing.features
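In other words, the workflow is two runs (a sketch; the file name must match the outfiles setting above):

# first run records any failing scenarios in rerun_failing.features
behave
# second run executes only the scenarios listed in that file
behave @rerun_failing.features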
I know this is a late answer, but it could help others.
There is another approach that could also help: implement the retry in the environment.py file, triggered by a specific tag.
Provides support functionality to retry scenarios a number of times
before their failure is accepted. This functionality can be helpful
when you use behave tests in an unreliable server/network
infrastructure.
For example, I run the tag "@smoke_test" on CI, so I chose this tag to patch with the retry condition.
First, in your environment.py, import the following:
# -- file: environment.py
from behave.contrib.scenario_autoretry import patch_scenario_with_autoretry
Then add the method:
# -- file: environment.py
def before_feature(context, feature):
    for scenario in feature.scenarios:
        if "smoke_test" in scenario.effective_tags:
            patch_scenario_with_autoretry(scenario, max_attempts=3)
Note: max_attempts defaults to 3; I set it explicitly above just to show that you can control how many retries you want.
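For completeness, a hypothetical feature file showing how a scenario gets the tag that the hook above looks for:

# -- file: features/login.feature (hypothetical example)
Feature: Login

  @smoke_test
  Scenario: Successful login
    Given a registered user
    When the user logs in with valid credentials
    Then the dashboard is shown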

Disable scalatest logging statements when running tests from maven

What is the way to disable the log4j log messages that appear during ScalaTest runs?
The log4j.properties is as follows:
log4j.rootLogger=INFO,CA,FA
#Console Appender
log4j.appender.CA=org.apache.log4j.ConsoleAppender
log4j.appender.CA.layout=org.apache.log4j.PatternLayout
log4j.appender.CA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c: %m%n
log4j.appender.CA.Threshold = INFO
#File Appender
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.append=false
log4j.appender.FA.file=target/unit-tests.log
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c{1}: %m%n
log4j.appender.FA.Threshold = INFO
..
log4j.logger.org.scalatest=WARN
However, we are still seeing INFO-level log4j messages during the ScalaTest run:
2014-11-30 14:25:57,263 INFO [ScalaTest-run-running-DiscoverySuite] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-30 14:25:57,493 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:startMiniCluster(840)) - Starting up minicluster with 1 master(s) and 2 regionserver(s) and 2 datanode(s)
2014-11-30 14:25:57,499 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:setupClusterTestDir(390)) - Created new mini-cluster data directory: /shared/hwspark/target/
One option is to throw this bit of code anywhere in one of your tests,
org.slf4j.LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME)
  .asInstanceOf[ch.qos.logback.classic.Logger]
  .setLevel(ch.qos.logback.classic.Level.WARN)
which will set all logging to the WARN level.
Those log messages are not actually being printed by ScalaTest, but by something you are using from your ScalaTest tests. The reason "ScalaTest" shows up in them is that ScalaTest changes the names of threads while suites and tests are executed, so that if a suite hangs forever and someone takes a thread dump to investigate, it is more obvious which test and suite are causing the run to hang. Log4j prints the thread name in square brackets, so that can give you a hint as to where these log messages are coming from.
In my case it was slick.relational. I looked at the class reported in the ScalaTest-run-running-... thread name, found the package it belongs to, and added that specific package to logback.xml as
<logger name="slick.relational" level="INFO"/>
In your case, search for HBaseTestingUtility or the other classes reported there to find which jar contains them, and work out your logger name from the package prefix.
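Since the question configures log4j.properties rather than logback, the equivalent change there would look something like this (a sketch; the package names are inferred from the log lines shown above, which come from Hadoop and HBase classes, not from org.scalatest):

# quiet the packages that actually emit the INFO lines shown above
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.apache.hadoop.hbase=WARN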

committed before 500 null error in solr 3.6.1

In Solr 3.6.1, at some point I am getting the following errors when concurrent requests (a concurrent load test) are performed against the Solr server:
org.apache.solr.common.SolrException log
SEVERE: org.mortbay.jetty.EofException
Caused by: java.net.SocketException: Broken pipe
and
Committed before 500 null
org.mortbay.jetty.EofException
    at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
    at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
    at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
    at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
    at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:353)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
    at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
    at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:161)
    at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
    ... 25 more
Can anyone suggest how to resolve this error in Solr?
I don't think it's your Solr; the broken pipe happens (it happened to me, at least) because of a timeout problem on the client side.
Check your client's timeout value (e.g. the curl timeout) and try to explicitly configure keep-alive / idle timeouts on the server so you can avoid this situation again.
Quick update (just to give a hint; the configuration may vary):
In your Jetty folder, look for a directory named WEB-INF containing a file named jetty-web.xml (or web-jetty.xml), and add these lines:
<session-config>
  <session-timeout>720</session-timeout>
</session-config>
This should help you (change 720 to whatever value you prefer).
There is also the option
<Set name="maxIdleTime">300000</Set>
which may do the trick. You'll have to dig into Jetty's documentation to figure out the right value for your case.
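For reference, in the Jetty that ships with the Solr 3.x example, maxIdleTime is set on the connector in etc/jetty.xml; a rough sketch (the connector class matches the org.mortbay.jetty.bio.SocketConnector seen in the stack trace, but your file and port may differ):

<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.bio.SocketConnector">
      <Set name="port">8983</Set>
      <!-- how long (in ms) a connection may sit idle before Jetty closes it -->
      <Set name="maxIdleTime">300000</Set>
    </New>
  </Arg>
</Call>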
