Cannot execute Solr queries on non-Search nodes - solr

I'm using DataStax Enterprise, and I'm getting the following exception:
com.datastax.driver.core.exceptions.InvalidQueryException: Cannot execute Solr queries on non-Search nodes.
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
at com.example.cassandra.simple_client.App.main(App.java:98)
When I try to run the following:
Cluster cluster = Cluster.builder()
        .addContactPoint("104.236.160.56")
        .addContactPoint("104.236.160.96")
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .withLoadBalancingPolicy(new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
        .build();
Metadata metadata = cluster.getMetadata();
System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());
for (Host host : metadata.getAllHosts())
    System.out.printf("Datacenter: %s; Host: %s; Rack: %s; Dse version: %s; Cassandra version: %s\n",
            host.getDatacenter(), host.getAddress(), host.getRack(), host.getDseVersion(), host.getCassandraVersion());
try {
    Session session = cluster.connect("rombie");
    ResultSet result = session.execute("SELECT * FROM rombie.force WHERE solr_query='{\"q\":\"point:[ 30 TO *]\", \"sort\":\"point desc\"}' LIMIT 50 ALLOW FILTERING");
    List<Row> list = result.all();
    for (Row row : list)
        System.out.println(row.getString("force_tag"));
} catch (Exception e) {
    e.printStackTrace();
}
There are 2 nodes in the same datacenter, and they can see each other:
104.236.160.56: Cassandra
104.236.160.96: Solr
Note: it works if I comment out the Cassandra contact point (104.236.160.56), or if I use a normal CQL query instead of a Solr query.
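That note suggests the load balancing policy is routing the solr_query statement to the plain Cassandra node, which then rejects it. One workaround is to give the session used for Solr queries a policy that only ever picks the Search node as coordinator, for example a WhiteListPolicy. This is a minimal sketch, not from the original post; the 9042 native-transport port is an assumption:
import java.net.InetSocketAddress;
import java.util.Collections;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class SearchNodeClient {
    public static void main(String[] args) {
        // Only the Search node is eligible as coordinator, so solr_query
        // statements are never sent to the plain Cassandra node.
        Cluster searchCluster = Cluster.builder()
                .addContactPoint("104.236.160.96")
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        DCAwareRoundRobinPolicy.builder().build(),
                        Collections.singletonList(
                                new InetSocketAddress("104.236.160.96", 9042)))) // default native port, assumed
                .build();
        try {
            Session session = searchCluster.connect("rombie");
            ResultSet rs = session.execute(
                    "SELECT * FROM rombie.force WHERE solr_query='{\"q\":\"point:[ 30 TO *]\", \"sort\":\"point desc\"}' LIMIT 50 ALLOW FILTERING");
            for (Row row : rs) {
                System.out.println(row.getString("force_tag"));
            }
        } finally {
            searchCluster.close();
        }
    }
}
Another option would be to enable DSE Search on both nodes so any coordinator can serve solr_query.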

Related

What protocol does SnowFlake JDBC driver use?

I'm trying to find out what protocol the SnowFlake JDBC library uses to communicate with SnowFlake. I see hints here and there that it seems to be using HTTPS as the protocol. Is this true?
To my knowledge, other JDBC libraries, for example those for Oracle or PostgreSQL, use a lower-level TCP protocol to communicate with their database servers, not the application-level HTTP(S) protocol, so I'm confused.
My organization only supports securely routing HTTP(S)-based communication. Can I use the Snowflake JDBC library then?
I have browsed all documentation that I could find, but wasn't able to answer this question.
My issue on GitHub didn't get an answer either.
Edit: Yes, I've seen this question, but I don't feel that it answers mine. SSL/TLS provides encryption, but it doesn't specify the application-level data format.
It looks like the JDBC driver uses an HTTP client internally; the session open path calls HttpUtil.initHttpClient(httpClientSettingsKey, null), as you can see here.
The HTTP utility class is available here.
I'm putting an excerpt of the session open() method here in case the link goes dead.
/**
 * Open a new database session
 *
 * @throws SFException this is a runtime exception
 * @throws SnowflakeSQLException exception raised from Snowflake components
 */
public synchronized void open() throws SFException, SnowflakeSQLException {
performSanityCheckOnProperties();
Map<SFSessionProperty, Object> connectionPropertiesMap = getConnectionPropertiesMap();
logger.debug(
"input: server={}, account={}, user={}, password={}, role={}, database={}, schema={},"
+ " warehouse={}, validate_default_parameters={}, authenticator={}, ocsp_mode={},"
+ " passcode_in_password={}, passcode={}, private_key={}, disable_socks_proxy={},"
+ " application={}, app_id={}, app_version={}, login_timeout={}, network_timeout={},"
+ " query_timeout={}, tracing={}, private_key_file={}, private_key_file_pwd={}."
+ " session_parameters: client_store_temporary_credential={}",
connectionPropertiesMap.get(SFSessionProperty.SERVER_URL),
connectionPropertiesMap.get(SFSessionProperty.ACCOUNT),
connectionPropertiesMap.get(SFSessionProperty.USER),
!Strings.isNullOrEmpty((String) connectionPropertiesMap.get(SFSessionProperty.PASSWORD))
? "***"
: "(empty)",
connectionPropertiesMap.get(SFSessionProperty.ROLE),
connectionPropertiesMap.get(SFSessionProperty.DATABASE),
connectionPropertiesMap.get(SFSessionProperty.SCHEMA),
connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE),
connectionPropertiesMap.get(SFSessionProperty.VALIDATE_DEFAULT_PARAMETERS),
connectionPropertiesMap.get(SFSessionProperty.AUTHENTICATOR),
getOCSPMode().name(),
connectionPropertiesMap.get(SFSessionProperty.PASSCODE_IN_PASSWORD),
!Strings.isNullOrEmpty((String) connectionPropertiesMap.get(SFSessionProperty.PASSCODE))
? "***"
: "(empty)",
connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY) != null
? "(not null)"
: "(null)",
connectionPropertiesMap.get(SFSessionProperty.DISABLE_SOCKS_PROXY),
connectionPropertiesMap.get(SFSessionProperty.APPLICATION),
connectionPropertiesMap.get(SFSessionProperty.APP_ID),
connectionPropertiesMap.get(SFSessionProperty.APP_VERSION),
connectionPropertiesMap.get(SFSessionProperty.LOGIN_TIMEOUT),
connectionPropertiesMap.get(SFSessionProperty.NETWORK_TIMEOUT),
connectionPropertiesMap.get(SFSessionProperty.QUERY_TIMEOUT),
connectionPropertiesMap.get(SFSessionProperty.TRACING),
connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE),
!Strings.isNullOrEmpty(
(String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE_PWD))
? "***"
: "(empty)",
sessionParametersMap.get(CLIENT_STORE_TEMPORARY_CREDENTIAL));
HttpClientSettingsKey httpClientSettingsKey = getHttpClientKey();
logger.debug(
"connection proxy parameters: use_proxy={}, proxy_host={}, proxy_port={}, proxy_user={},"
+ " proxy_password={}, non_proxy_hosts={}, proxy_protocol={}",
httpClientSettingsKey.usesProxy(),
httpClientSettingsKey.getProxyHost(),
httpClientSettingsKey.getProxyPort(),
httpClientSettingsKey.getProxyUser(),
!Strings.isNullOrEmpty(httpClientSettingsKey.getProxyPassword()) ? "***" : "(empty)",
httpClientSettingsKey.getNonProxyHosts(),
httpClientSettingsKey.getProxyProtocol());
// TODO: temporarily hardcode sessionParameter debug info. will be changed in the future
SFLoginInput loginInput = new SFLoginInput();
loginInput
.setServerUrl((String) connectionPropertiesMap.get(SFSessionProperty.SERVER_URL))
.setDatabaseName((String) connectionPropertiesMap.get(SFSessionProperty.DATABASE))
.setSchemaName((String) connectionPropertiesMap.get(SFSessionProperty.SCHEMA))
.setWarehouse((String) connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE))
.setRole((String) connectionPropertiesMap.get(SFSessionProperty.ROLE))
.setValidateDefaultParameters(
connectionPropertiesMap.get(SFSessionProperty.VALIDATE_DEFAULT_PARAMETERS))
.setAuthenticator((String) connectionPropertiesMap.get(SFSessionProperty.AUTHENTICATOR))
.setOKTAUserName((String) connectionPropertiesMap.get(SFSessionProperty.OKTA_USERNAME))
.setAccountName((String) connectionPropertiesMap.get(SFSessionProperty.ACCOUNT))
.setLoginTimeout(loginTimeout)
.setAuthTimeout(authTimeout)
.setUserName((String) connectionPropertiesMap.get(SFSessionProperty.USER))
.setPassword((String) connectionPropertiesMap.get(SFSessionProperty.PASSWORD))
.setToken((String) connectionPropertiesMap.get(SFSessionProperty.TOKEN))
.setPasscodeInPassword(passcodeInPassword)
.setPasscode((String) connectionPropertiesMap.get(SFSessionProperty.PASSCODE))
.setConnectionTimeout(httpClientConnectionTimeout)
.setSocketTimeout(httpClientSocketTimeout)
.setAppId((String) connectionPropertiesMap.get(SFSessionProperty.APP_ID))
.setAppVersion((String) connectionPropertiesMap.get(SFSessionProperty.APP_VERSION))
.setSessionParameters(sessionParametersMap)
.setPrivateKey((PrivateKey) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY))
.setPrivateKeyFile((String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE))
.setPrivateKeyFilePwd(
(String) connectionPropertiesMap.get(SFSessionProperty.PRIVATE_KEY_FILE_PWD))
.setApplication((String) connectionPropertiesMap.get(SFSessionProperty.APPLICATION))
.setServiceName(getServiceName())
.setOCSPMode(getOCSPMode())
.setHttpClientSettingsKey(httpClientSettingsKey);
// propagate OCSP mode to SFTrustManager. Note OCSP setting is global on JVM.
HttpUtil.initHttpClient(httpClientSettingsKey, null);
SFLoginOutput loginOutput =
SessionUtil.openSession(loginInput, connectionPropertiesMap, tracingLevel.toString());
isClosed = false;
authTimeout = loginInput.getAuthTimeout();
sessionToken = loginOutput.getSessionToken();
masterToken = loginOutput.getMasterToken();
idToken = loginOutput.getIdToken();
mfaToken = loginOutput.getMfaToken();
setDatabaseVersion(loginOutput.getDatabaseVersion());
setDatabaseMajorVersion(loginOutput.getDatabaseMajorVersion());
setDatabaseMinorVersion(loginOutput.getDatabaseMinorVersion());
httpClientSocketTimeout = loginOutput.getHttpClientSocketTimeout();
masterTokenValidityInSeconds = loginOutput.getMasterTokenValidityInSeconds();
setDatabase(loginOutput.getSessionDatabase());
setSchema(loginOutput.getSessionSchema());
setRole(loginOutput.getSessionRole());
setWarehouse(loginOutput.getSessionWarehouse());
setSessionId(loginOutput.getSessionId());
setAutoCommit(loginOutput.getAutoCommit());
// Update common parameter values for this session
SessionUtil.updateSfDriverParamValues(loginOutput.getCommonParams(), this);
String loginDatabaseName = (String) connectionPropertiesMap.get(SFSessionProperty.DATABASE);
String loginSchemaName = (String) connectionPropertiesMap.get(SFSessionProperty.SCHEMA);
String loginRole = (String) connectionPropertiesMap.get(SFSessionProperty.ROLE);
String loginWarehouse = (String) connectionPropertiesMap.get(SFSessionProperty.WAREHOUSE);
if (loginDatabaseName != null && !loginDatabaseName.equalsIgnoreCase(getDatabase())) {
sqlWarnings.add(
new SFException(
ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
"Database",
loginDatabaseName,
getDatabase()));
}
if (loginSchemaName != null && !loginSchemaName.equalsIgnoreCase(getSchema())) {
sqlWarnings.add(
new SFException(
ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
"Schema",
loginSchemaName,
getSchema()));
}
if (loginRole != null && !loginRole.equalsIgnoreCase(getRole())) {
sqlWarnings.add(
new SFException(
ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP, "Role", loginRole, getRole()));
}
if (loginWarehouse != null && !loginWarehouse.equalsIgnoreCase(getWarehouse())) {
sqlWarnings.add(
new SFException(
ErrorCode.CONNECTION_ESTABLISHED_WITH_DIFFERENT_PROP,
"Warehouse",
loginWarehouse,
getWarehouse()));
}
// start heartbeat for this session so that the master token will not expire
startHeartbeatForThisSession();
}
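So the driver appears to talk to Snowflake over HTTPS via an Apache HTTP client, which also means it can be routed through a corporate HTTP(S) proxy. A rough sketch of a connection that does that, assuming a hypothetical account URL and the proxy connection properties corresponding to the proxy parameters logged in the excerpt above (verify the exact property names against the Snowflake JDBC documentation):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SnowflakeOverHttpsProxy {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "MY_USER");          // placeholder credentials
        props.put("password", "MY_PASSWORD");
        // Proxy settings: the driver's HTTP client tunnels its HTTPS calls
        // through the corporate proxy. Property names are assumptions taken
        // from the proxy parameters logged above; check the Snowflake docs.
        props.put("useProxy", "true");
        props.put("proxyHost", "proxy.example.com");
        props.put("proxyPort", "8080");

        // Hypothetical account URL.
        String url = "jdbc:snowflake://myaccount.snowflakecomputing.com";
        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT_VERSION()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
If the connection succeeds through a proxy that only forwards HTTP(S), that is itself a practical confirmation that no other protocol is involved.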

My H2/C3P0/Hibernate setup does not seem to be preserving prepared statements?

I am finding that my database is the bottleneck in my application, and as part of this it looks like prepared statements are not being reused.
For example, here is a method I use:
public static CoverImage findCoverImageBySource(Session session, String src)
{
    try
    {
        Query q = session.createQuery("from CoverImage t1 where t1.source=:source");
        q.setParameter("source", src, StandardBasicTypes.STRING);
        CoverImage result = (CoverImage) q.setMaxResults(1).uniqueResult();
        return result;
    }
    catch (Exception ex)
    {
        MainWindow.logger.log(Level.SEVERE, ex.getMessage(), ex);
    }
    return null;
}
But the YourKit profiler says:
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery() Count 511
com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement() Count 511
and I assume the count of prepareStatement() calls should be lower, as it looks like we create a new prepared statement every time instead of reusing one.
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
I am using c3p0 connection pooling, which complicates things a little, but as I understand it I have it configured correctly:
public static Configuration getInitializedConfiguration()
{
    //See https://www.mchange.com/projects/c3p0/#hibernate-specific
    Configuration config = new Configuration();
    config.setProperty(Environment.DRIVER,"org.h2.Driver");
    config.setProperty(Environment.URL,"jdbc:h2:"+Db.DBFOLDER+"/"+Db.DBNAME+";FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000");
    config.setProperty(Environment.DIALECT,"org.hibernate.dialect.H2Dialect");
    System.setProperty("h2.bindAddress", InetAddress.getLoopbackAddress().getHostAddress());
    config.setProperty("hibernate.connection.username","jaikoz");
    config.setProperty("hibernate.connection.password","jaikoz");
    config.setProperty("hibernate.c3p0.numHelperThreads","10");
    config.setProperty("hibernate.c3p0.min_size","1");
    //Consider that if we have lots of busy threads waiting on next stages we could possibly have a lot of active
    //connections.
    config.setProperty("hibernate.c3p0.max_size","200");
    config.setProperty("hibernate.c3p0.max_statements","5000");
    config.setProperty("hibernate.c3p0.timeout","2000");
    config.setProperty("hibernate.c3p0.maxStatementsPerConnection","50");
    config.setProperty("hibernate.c3p0.idle_test_period","3000");
    config.setProperty("hibernate.c3p0.acquireRetryAttempts","10");
    //Cancel any connection that is more than 30 minutes old.
    //config.setProperty("hibernate.c3p0.unreturnedConnectionTimeout","3000");
    //config.setProperty("hibernate.show_sql","true");
    //config.setProperty("org.hibernate.envers.audit_strategy", "org.hibernate.envers.strategy.ValidityAuditStrategy");
    //config.setProperty("hibernate.format_sql","true");
    config.setProperty("hibernate.generate_statistics","true");
    //config.setProperty("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
    //config.setProperty("hibernate.cache.use_second_level_cache", "true");
    //config.setProperty("hibernate.cache.use_query_cache", "true");
    addEntitiesToConfig(config);
    return config;
}
Using H2 1.3.172, Hibernate 4.3.11, and the corresponding c3p0 for that Hibernate version.
With a reproducible test case we have:
HibernateStats
HibernateStatistics.getQueryExecutionCount() 28
HibernateStatistics.getEntityInsertCount() 119
HibernateStatistics.getEntityUpdateCount() 39
HibernateStatistics.getPrepareStatementCount() 189
Profiler, method counts
GooGooStatementCache.acquireStatement() 35
GooGooStatementCache.checkInStatement() 189
GooGooStatementCache.checkOutStatement() 189
NewProxyPreparedStatement.init() 189
I don't know what I should be counting as creation of a prepared statement rather than reuse of an existing prepared statement.
I also tried enabling c3p0 logging by adding a c3p0 logger and making it use the same log file in my LogProperties, but it had no effect:
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
MainWindow.logger.addHandler(fe);
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.addHandler(fe);
Now that I have eventually got c3p0-based logging working, I can confirm that the suggestion from @SteveWaldman is correct.
If you enable
public static Logger c3p0ConnectionLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0ConnectionLogger.setLevel(Level.FINEST);
c3p0ConnectionLogger.setUseParentHandlers(false);
Then you get log output of the form
24/08/2019 10.20.12:BST:FINEST: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache ----> CACHE HIT
24/08/2019 10.20.12:BST:FINEST: checkoutStatement: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 1; num connections: 13; num keys: 347
24/08/2019 10.20.12:BST:FINEST: checkinStatement(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 0; num connections: 13; num keys: 347
making it clear when you get a cache hit. When there is no cache hit you don't get the first line, but you do get the other two lines.
This is using C3p0 9.2.1
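If you'd rather not parse the log, you can also ask the pool itself for its statement cache statistics. A rough sketch (the C3P0Registry/PooledDataSource method names are as I recall them from the c3p0 documentation; verify them against your c3p0 version):
import java.sql.SQLException;

import com.mchange.v2.c3p0.C3P0Registry;
import com.mchange.v2.c3p0.PooledDataSource;

public class StatementCacheStats {
    // Prints statement-cache statistics for every c3p0 pool in the JVM.
    // Call this periodically (e.g. after a batch of queries) to see whether
    // the cache is filling up and being reused rather than churned.
    public static void dump() throws SQLException {
        for (Object o : C3P0Registry.getPooledDataSources()) {
            PooledDataSource pds = (PooledDataSource) o;
            System.out.println("pool: " + pds.getDataSourceName());
            System.out.println("  cached statements:      " + pds.getStatementCacheNumStatementsAllUsers());
            System.out.println("  checked-out statements: " + pds.getStatementCacheNumCheckedOutStatementsAllUsers());
            System.out.println("  connections with cache: " + pds.getStatementCacheNumConnectionsWithCachedStatementsAllUsers());
        }
    }
}
A steadily growing cached-statement count with a stable number of connections is what you'd expect when reuse is working.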

Creating SQL server RDS instance using Terraform

I want to create a SQL Server database in RDS using Terraform. My Terraform file looks like this:
### RDS ###

# Subnet Group
resource "aws_db_subnet_group" "private" {
  name        = "db_arcgis-${var.env_name}-dbsubnet"
  description = "Subnet Group for Arcgis ${var.env_tag} DB"
  subnet_ids  = ["${aws_subnet.public1.id}", "${aws_subnet.public2.id}"]
  tags {
    Env = "${var.env_tag}"
  }
}

# RDS DB parameter group
# Must enable triggers to allow Multi-AZ
resource "aws_db_parameter_group" "allow_triggers" {
  name        = "arcgis-${var.env_name}-allow-triggers"
  family      = "sqlserver-se-12.0"
  description = "Parameter Group for Arcgis ${var.env_tag} to allow triggers"
  parameter {
    name  = "log_bin_trust_function_creators"
    value = "1"
  }
  tags {
    Env = "${var.env_tag}"
  }
}

# RDS
resource "aws_db_instance" "main" {
  allocated_storage       = "${var.db_size}"
  engine                  = "${var.db_engine}"
  engine_version          = "${var.db_version}"
  instance_class          = "${var.db_instance}"
  identifier              = "arcgis-${var.env_name}-db"
  name                    = "${var.db_name}"
  username                = "${var.db_username}"
  password                = "${var.db_password}"
  db_subnet_group_name    = "${aws_db_subnet_group.private.id}"
  parameter_group_name    = "${aws_db_parameter_group.allow_triggers.id}"
  multi_az                = "${var.db_multiaz}"
  vpc_security_group_ids  = ["${aws_security_group.private_rds.id}"]
  #availability_zone      = "${var.vpc_az1}"
  publicly_accessible     = "true"
  backup_retention_period = "2"
  apply_immediately       = "true"
  tags {
    Env = "${var.env_tag}"
  }
}
I get this error when applying the Terraform files:
Error applying plan:
1 error(s) occurred:
* aws_db_parameter_group.allow_triggers: Error modifying DB Parameter Group: InvalidParameterValue: Could not find parameter with name: log_bin_trust_function_creators
status code: 400, request id: d298ab14-8b94-11e6-a088-31e21873c378
The obvious issue here is that log_bin_trust_function_creators isn't an available parameter for the sqlserver-se-12.0 parameter group family, as you can see when listing all the parameters in a parameter group based on sqlserver-se-12.0:
$ aws rds describe-db-parameters --db-parameter-group-name test-sqlserver-se-12-0 --query 'Parameters[*].ParameterName'
[
"1204",
"1211",
"1222",
"1224",
"2528",
"3205",
"3226",
"3625",
"4199",
"4616",
"6527",
"7806",
"access check cache bucket count",
"access check cache quota",
"ad hoc distributed queries",
"affinity i/o mask",
"affinity mask",
"agent xps",
"allow updates",
"backup compression default",
"blocked process threshold (s)",
"c2 audit mode",
"clr enabled",
"contained database authentication",
"cost threshold for parallelism",
"cross db ownership chaining",
"cursor threshold",
"database mail xps",
"default full-text language",
"default language",
"default trace enabled",
"disallow results from triggers",
"filestream access level",
"fill factor (%)",
"ft crawl bandwidth (max)",
"ft crawl bandwidth (min)",
"ft notify bandwidth (max)",
"ft notify bandwidth (min)",
"in-doubt xact resolution",
"index create memory (kb)",
"lightweight pooling",
"locks",
"max degree of parallelism",
"max full-text crawl range",
"max server memory (mb)",
"max text repl size (b)",
"max worker threads",
"media retention",
"min memory per query (kb)",
"min server memory (mb)",
"nested triggers",
"network packet size (b)",
"ole automation procedures",
"open objects",
"optimize for ad hoc workloads",
"ph timeout (s)",
"priority boost",
"query governor cost limit",
"query wait (s)",
"recovery interval (min)",
"remote access",
"remote admin connections",
"remote login timeout (s)",
"remote proc trans",
"remote query timeout (s)",
"replication xps",
"scan for startup procs",
"server trigger recursion",
"set working set size",
"show advanced options",
"smo and dmo xps",
"transform noise words",
"two digit year cutoff",
"user connections",
"user options",
"xp_cmdshell"
]
Instead, that parameter is only available in the MySQL flavours:
$ aws rds describe-db-parameters --db-parameter-group-name default.mysql5.6 --query 'Parameters[*].ParameterName'
[
"allow-suspicious-udfs",
"auto_increment_increment",
"auto_increment_offset",
"autocommit",
"automatic_sp_privileges",
"back_log",
"basedir",
"binlog_cache_size",
"binlog_checksum",
"binlog_error_action",
"binlog_format",
"binlog_max_flush_queue_time",
"binlog_order_commits",
"binlog_row_image",
"binlog_rows_query_log_events",
"binlog_stmt_cache_size",
"binlogging_impossible_mode",
"bulk_insert_buffer_size",
"character-set-client-handshake",
"character_set_client",
"character_set_connection",
"character_set_database",
"character_set_filesystem",
"character_set_results",
"character_set_server",
"collation_connection",
"collation_server",
"completion_type",
"concurrent_insert",
"connect_timeout",
"core-file",
"datadir",
"default_storage_engine",
"default_time_zone",
"default_tmp_storage_engine",
"default_week_format",
"delay_key_write",
"delayed_insert_limit",
"delayed_insert_timeout",
"delayed_queue_size",
"div_precision_increment",
"end_markers_in_json",
"enforce_gtid_consistency",
"eq_range_index_dive_limit",
"event_scheduler",
"explicit_defaults_for_timestamp",
"flush",
"flush_time",
"ft_boolean_syntax",
"ft_max_word_len",
"ft_min_word_len",
"ft_query_expansion_limit",
"ft_stopword_file",
"general_log",
"general_log_file",
"group_concat_max_len",
"gtid-mode",
"host_cache_size",
"init_connect",
"innodb_adaptive_flushing",
"innodb_adaptive_flushing_lwm",
"innodb_adaptive_hash_index",
"innodb_adaptive_max_sleep_delay",
"innodb_autoextend_increment",
"innodb_autoinc_lock_mode",
"innodb_buffer_pool_dump_at_shutdown",
"innodb_buffer_pool_dump_now",
"innodb_buffer_pool_filename",
"innodb_buffer_pool_instances",
"innodb_buffer_pool_load_abort",
"innodb_buffer_pool_load_at_startup",
"innodb_buffer_pool_load_now",
"innodb_buffer_pool_size",
"innodb_change_buffer_max_size",
"innodb_change_buffering",
"innodb_checksum_algorithm",
"innodb_cmp_per_index_enabled",
"innodb_commit_concurrency",
"innodb_compression_failure_threshold_pct",
"innodb_compression_level",
"innodb_compression_pad_pct_max",
"innodb_concurrency_tickets",
"innodb_data_home_dir",
"innodb_fast_shutdown",
"innodb_file_format",
"innodb_file_per_table",
"innodb_flush_log_at_timeout",
"innodb_flush_log_at_trx_commit",
"innodb_flush_method",
"innodb_flush_neighbors",
"innodb_flushing_avg_loops",
"innodb_force_load_corrupted",
"innodb_ft_aux_table",
"innodb_ft_cache_size",
"innodb_ft_enable_stopword",
"innodb_ft_max_token_size",
"innodb_ft_min_token_size",
"innodb_ft_num_word_optimize",
"innodb_ft_result_cache_limit",
"innodb_ft_server_stopword_table",
"innodb_ft_sort_pll_degree",
"innodb_ft_user_stopword_table",
"innodb_io_capacity",
"innodb_io_capacity_max",
"innodb_large_prefix",
"innodb_lock_wait_timeout",
"innodb_log_buffer_size",
"innodb_log_compressed_pages",
"innodb_log_file_size",
"innodb_log_group_home_dir",
"innodb_lru_scan_depth",
"innodb_max_dirty_pages_pct",
"innodb_max_purge_lag",
"innodb_max_purge_lag_delay",
"innodb_monitor_disable",
"innodb_monitor_enable",
"innodb_monitor_reset",
"innodb_monitor_reset_all",
"innodb_old_blocks_pct",
"innodb_old_blocks_time",
"innodb_online_alter_log_max_size",
"innodb_open_files",
"innodb_optimize_fulltext_only",
"innodb_page_size",
"innodb_print_all_deadlocks",
"innodb_purge_batch_size",
"innodb_purge_threads",
"innodb_random_read_ahead",
"innodb_read_ahead_threshold",
"innodb_read_io_threads",
"innodb_read_only",
"innodb_replication_delay",
"innodb_rollback_on_timeout",
"innodb_rollback_segments",
"innodb_sort_buffer_size",
"innodb_spin_wait_delay",
"innodb_stats_auto_recalc",
"innodb_stats_method",
"innodb_stats_on_metadata",
"innodb_stats_persistent",
"innodb_stats_persistent_sample_pages",
"innodb_stats_transient_sample_pages",
"innodb_strict_mode",
"innodb_support_xa",
"innodb_sync_array_size",
"innodb_sync_spin_loops",
"innodb_table_locks",
"innodb_thread_concurrency",
"innodb_thread_sleep_delay",
"innodb_undo_directory",
"innodb_undo_logs",
"innodb_undo_tablespaces",
"innodb_use_native_aio",
"innodb_write_io_threads",
"interactive_timeout",
"join_buffer_size",
"keep_files_on_create",
"key_buffer_size",
"key_cache_age_threshold",
"key_cache_block_size",
"key_cache_division_limit",
"lc_time_names",
"local_infile",
"lock_wait_timeout",
"log-bin",
"log_bin_trust_function_creators",
"log_bin_use_v1_row_events",
"log_error",
"log_output",
"log_queries_not_using_indexes",
"log_slave_updates",
"log_slow_admin_statements",
"log_slow_slave_statements",
"log_throttle_queries_not_using_indexes",
"log_warnings",
"long_query_time",
"low_priority_updates",
"lower_case_table_names",
"master-info-repository",
"master_verify_checksum",
"max_allowed_packet",
"max_binlog_cache_size",
"max_binlog_size",
"max_binlog_stmt_cache_size",
"max_connect_errors",
"max_connections",
"max_delayed_threads",
"max_error_count",
"max_heap_table_size",
"max_insert_delayed_threads",
"max_join_size",
"max_length_for_sort_data",
"max_prepared_stmt_count",
"max_seeks_for_key",
"max_sort_length",
"max_sp_recursion_depth",
"max_tmp_tables",
"max_user_connections",
"max_write_lock_count",
"metadata_locks_cache_size",
"min_examined_row_limit",
"myisam_data_pointer_size",
"myisam_max_sort_file_size",
"myisam_mmap_size",
"myisam_sort_buffer_size",
"myisam_stats_method",
"myisam_use_mmap",
"net_buffer_length",
"net_read_timeout",
"net_retry_count",
"net_write_timeout",
"old-style-user-limits",
"old_passwords",
"optimizer_prune_level",
"optimizer_search_depth",
"optimizer_switch",
"optimizer_trace",
"optimizer_trace_features",
"optimizer_trace_limit",
"optimizer_trace_max_mem_size",
"optimizer_trace_offset",
"performance_schema",
"performance_schema_accounts_size",
"performance_schema_digests_size",
"performance_schema_events_stages_history_long_size",
"performance_schema_events_stages_history_size",
"performance_schema_events_statements_history_long_size",
"performance_schema_events_statements_history_size",
"performance_schema_events_waits_history_long_size",
"performance_schema_events_waits_history_size",
"performance_schema_hosts_size",
"performance_schema_max_cond_classes",
"performance_schema_max_cond_instances",
"performance_schema_max_file_classes",
"performance_schema_max_file_handles",
"performance_schema_max_file_instances",
"performance_schema_max_mutex_classes",
"performance_schema_max_mutex_instances",
"performance_schema_max_rwlock_classes",
"performance_schema_max_rwlock_instances",
"performance_schema_max_socket_classes",
"performance_schema_max_socket_instances",
"performance_schema_max_stage_classes",
"performance_schema_max_statement_classes",
"performance_schema_max_table_handles",
"performance_schema_max_table_instances",
"performance_schema_max_thread_classes",
"performance_schema_max_thread_instances",
"performance_schema_session_connect_attrs_size",
"performance_schema_setup_actors_size",
"performance_schema_setup_objects_size",
"performance_schema_users_size",
"pid_file",
"plugin_dir",
"port",
"preload_buffer_size",
"profiling_history_size",
"query_alloc_block_size",
"query_cache_limit",
"query_cache_min_res_unit",
"query_cache_size",
"query_cache_type",
"query_cache_wlock_invalidate",
"query_prealloc_size",
"range_alloc_block_size",
"read_buffer_size",
"read_only",
"read_rnd_buffer_size",
"relay-log",
"relay_log_info_repository",
"relay_log_recovery",
"safe-user-create",
"secure_auth",
"secure_file_priv",
"server_id",
"simplified_binlog_gtid_recovery",
"skip-character-set-client-handshake",
"skip-slave-start",
"skip_external_locking",
"skip_name_resolve",
"skip_show_database",
"slave_checkpoint_group",
"slave_checkpoint_period",
"slave_parallel_workers",
"slave_pending_jobs_size_max",
"slave_sql_verify_checksum",
"slave_type_conversions",
"slow_launch_time",
"slow_query_log",
"slow_query_log_file",
"socket",
"sort_buffer_size",
"sql_mode",
"sql_select_limit",
"stored_program_cache",
"sync_binlog",
"sync_frm",
"sync_master_info",
"sync_relay_log",
"sync_relay_log_info",
"sysdate-is-now",
"table_definition_cache",
"table_open_cache",
"table_open_cache_instances",
"temp-pool",
"thread_cache_size",
"thread_stack",
"time_zone",
"timed_mutexes",
"tmp_table_size",
"tmpdir",
"transaction_alloc_block_size",
"transaction_prealloc_size",
"tx_isolation",
"updatable_views_with_limit",
"validate-password",
"validate_password_dictionary_file",
"validate_password_length",
"validate_password_mixed_case_count",
"validate_password_number_count",
"validate_password_policy",
"validate_password_special_char_count",
"wait_timeout"
]

Solr 6.0.0 - SolrCloud java example

I have Solr installed on my localhost.
I started the standard SolrCloud example with embedded ZooKeeper.
collection: gettingstarted
shards: 2
replication: 2
Processing 500 records/docs took 115 seconds [localhost testing].
Why is it taking this much time to process just 500 records? Is there a way to improve this to some milliseconds/nanoseconds?
NOTE:
I have tested the same against a remote Solr instance as well, with localhost indexing data on the remote Solr [commented out inside the Java code].
I started my myCloudData collection as an ensemble with a single ZooKeeper:
2 Solr nodes,
1 standalone ZooKeeper ensemble
collection: myCloudData,
shards: 2,
replication: 2
SolrCloud Java code:
package com.test.solr.basic;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjPopulatorCloudClient2 {

    public static void main(String[] args) throws IOException, SolrServerException {
        //String zkHosts = "64.101.49.57:2181/solr";
        String zkHosts = "localhost:9983";
        CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
        //solrCloudClient.setDefaultCollection("myCloudData");
        solrCloudClient.setDefaultCollection("gettingstarted");
        /*
        // Thread Safe
        solrClient = new ConcurrentUpdateSolrClient(urlString, queueSize, threadCount);
        */
        // Deprecated - client
        //HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        long start = System.nanoTime();
        for (int i = 0; i < 500; ++i) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("cat", "book");
            doc.addField("id", "book-" + i);
            doc.addField("name", "The Legend of the Hobbit part " + i);
            solrCloudClient.add(doc);
            if (i % 100 == 0)
                System.out.println(" Every 100 records flush it");
            solrCloudClient.commit(); // periodically flush
        }
        solrCloudClient.commit();
        solrCloudClient.close();
        long end = System.nanoTime();
        long seconds = TimeUnit.NANOSECONDS.toSeconds(end - start);
        System.out.println(" All records are indexed, took " + seconds + " seconds");
    }
}
You are committing every new document, which is not necessary. It will run a lot faster if you change the if (i % 100 == 0) block to read
if (i % 100 == 0) {
System.out.println(" Every 100 records flush it");
solrCloudClient.commit(); // periodically flush
}
On my machine, this indexes your 500 records in 14 seconds. If I remove the commit() call from the for loop, it indexes in 7 seconds.
Alternatively, you can add a commitWithinMs parameter to the solrCloudClient.add() call:
solrCloudClient.add(doc, 15000);
This will guarantee your records are committed within 15 seconds, and also increase your indexing speed.
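Batching documents also helps: every solrCloudClient.add(doc) call above is its own HTTP request, so sending the documents in groups cuts the number of round trips. A sketch along those lines, reusing the same (deprecated) CloudSolrClient constructor from the question and the commitWithin idea from the answer:
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchedIndexer {
    public static void main(String[] args) throws Exception {
        CloudSolrClient client = new CloudSolrClient("localhost:9983", true);
        client.setDefaultCollection("gettingstarted");

        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 500; ++i) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("cat", "book");
            doc.addField("id", "book-" + i);
            doc.addField("name", "The Legend of the Hobbit part " + i);
            batch.add(doc);
            if (batch.size() == 100) {
                // One request per 100 documents, committed within 15 seconds.
                client.add(batch, 15000);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            client.add(batch, 15000);
        }
        client.commit();
        client.close();
    }
}
The exact batch size is a tuning knob; the point is to avoid a request and a commit per document.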

How to know which server is online or offline in an OAM cluster

Hello, I have a problem when I try to monitor which of the OAM servers in a cluster is online or offline. I use the getServerDiagnosticInfo() method of the AccessClient class from the ASDK, but the Hashtable it returns only contains keys (name and port of each server) whose values are another Hashtable (ObKeyMapVal, a subtype of Hashtable). I think that object should contain the health, server port, server name, and number of connections, as mentioned in the API doc, but when I print its size and contents it only prints "0" and [] (it's empty).
snippet:
try {
    AccessClient ac = AccessClient.createDefaultInstance("/dir", AccessClient.CompatibilityMode.OAM_10G);
    Hashtable info = ac.getServerDiagnosticInfo();
    Set<?> servers = info.keySet();
    Collection<?> serverInfo = info.values();
    System.out.println("Num of servers: " + servers.size());
    Iterator it = servers.iterator();
    Object servidor = null;
    Object dato = null;
    while (it.hasNext()) {
        servidor = it.next();
        System.out.println("Server: " + servidor);
        dato = info.get(servidor);
        System.out.println("Data: " + dato);
        ObKeyValMap ob = (ObKeyValMap) dato;
        System.out.println("Size: " + ob.keySet().size());
        System.out.println("Is Empty: " + ob.keySet().isEmpty());
        System.out.println("Properties: " + ob.keySet());
    }
    ac.shutdown();
} catch (oracle.security.am.asdk.AccessException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
And I got the following output:
Num of servers: 2
Server: myserver1.com5575
Data: {}
Size: 0
Is Empty: true
Properties: []
Server: myserver2.com5575
Data: {}
Size: 0
Is Empty: true
Properties: []
Thanks for your help !!!
Once you get the OAM server host and port using getServerDiagnosticInfo(), try to telnet to it (I am not a Java expert; the following link may help: How to connect/telnet to SPOP3 server using java (Sockets)?). If the server is up, the telnet session will be established.
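In Java, that telnet-style check is just a TCP connect with a timeout. A minimal sketch (the host names and the 5575 port are taken from the diagnostic output above; adjust them and the timeout for your environment):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class OamServerCheck {
    // Returns true if a TCP connection to host:port can be established
    // within the given timeout, i.e. something is listening on that port.
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("myserver1.com up: " + isReachable("myserver1.com", 5575, 3000));
        System.out.println("myserver2.com up: " + isReachable("myserver2.com", 5575, 3000));
    }
}
Keep in mind a successful connect only proves the port is listening; it does not by itself prove the OAM server is healthy.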
