I am getting this error:
org.apache.solr.common.SolrException: Could not load config file
C:\nemoCode\sceneric-hybris\hybris\config\solr\embedded\solrconfig.xml
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:530)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer.create(CoreContainer.java:597)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:251)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:243)
INFO | jvm 1 | main | 2017/05/23 11:54:01.550 | at java.util.concurrent.FutureTask.run(FutureTask.java:262)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.FutureTask.run(FutureTask.java:262)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | at java.lang.Thread.run(Thread.java:745)
INFO | jvm 1 | main | 2017/05/23 11:54:01.551 | Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'C:\nemoCode\sceneric-hybris\hybris\config\solr\embedded\conf'
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:342)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:288)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.Config.<init>(Config.java:116)
INFO | jvm 1 | main | 2017/05/23 11:54:01.552 | at org.apache.solr.core.Config.<init>(Config.java:86)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:139)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:527)
INFO | jvm 1 | main | 2017/05/23 11:54:01.553 | ... 9 more
But I don't have that path on my C: drive. Where is it configured that Solr should search that particular file path?
The path is most likely configured inside your solr.xml, which you will find at ${HYBRIS_CONFIG_DIR}/solr/embedded/solr.xml.
The solr.xml file specifies configuration options for each Solr core, including configuration options for multiple cores. The file also contains mappings for request URLs, and indicates which cores to load when the server starts.
So check the instanceDir and dataDir of each of your cores.
An example of a core definition inside solr.xml:
<core name="master_apparel-de_Product"
instanceDir="A:\source\hybris.5.2.0\hybris\config/solr/embedded"
dataDir="A:\source\hybris.5.2.0\hybris\data\solrfacetsearch\MASTER\apparel-de_Product_1"/>
This is not a usual location for solrconfig.xml. Usually the location is:
[solr.home]/[corename]/conf/solrconfig.xml
It is possible to deviate from that by changing the config property in the core.properties file that sits directly in the [corename] directory. That location can be relative, which may be what is causing you issues.
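For illustration, such a core.properties override might look like this (the property names follow Solr's core-discovery style; the core name and paths below are hypothetical):

```properties
# Hypothetical [corename]/core.properties
name=master_apparel-de_Product
# Resolved relative to the core's instanceDir; a relative path here
# is a common source of "can't find solrconfig.xml" surprises
config=conf/solrconfig.xml
dataDir=data
```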
Do we have any mechanism in Snowflake to alert users who run a query against large tables? That way, users would know that Snowflake will consume many warehouse credits if they run the query against a large dataset.
There is no alert mechanism for this, but users may run the EXPLAIN command before running the actual query to estimate the bytes/partitions that would be read:
explain select c_name from "SAMPLE_DATA"."TPCH_SF10000"."CUSTOMER";
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| step        | id | parent | operation | objects                           | alias | expressions     | partitionsTotal | partitionsAssigned | bytesAssigned |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| GlobalStats |    |        |           |                                   |       |                 |            6585 |               6585 |  109081790976 |
| 1           | 0  |        | Result    |                                   |       | CUSTOMER.C_NAME |                 |                    |               |
| 1           | 1  | 0      | TableScan | SAMPLE_DATA.TPCH_SF10000.CUSTOMER |       | C_NAME          |            6585 |               6585 |  109081790976 |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
https://docs.snowflake.com/en/sql-reference/sql/explain.html
You can also assign users to specific warehouses, and use resource monitors to limit credits on those warehouses.
https://docs.snowflake.com/en/user-guide/resource-monitors.html#assignment-of-resource-monitors
As a third alternative, you may set STATEMENT_TIMEOUT_IN_SECONDS to prevent long-running queries.
https://docs.snowflake.com/en/sql-reference/parameters.html#statement-timeout-in-seconds
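As a sketch, the last two approaches could look like this (the warehouse and monitor names are placeholders; quota, thresholds, and timeout are illustrative values):

```sql
-- Hypothetical: cap credits on a warehouse with a resource monitor
CREATE RESOURCE MONITOR analyst_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE analyst_wh SET RESOURCE_MONITOR = analyst_monitor;

-- Hypothetical: cancel queries that run longer than one hour
ALTER WAREHOUSE analyst_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;
```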
In Vertica, is there any way to access admintools without admin rights? How can a normal user find the nodes and cluster configuration details?
You may have already tried this. To run admintools, you must have access at the Linux shell level on one of the Vertica nodes, as a user that is allowed to write to /opt/vertica/log/adminTools.log. By default, that is dbadmin. I would regard tampering with the permissions around that as quite a security risk, really.
A good start would be:
$ vsql -U <user> -w <pwd> -h 127.127.127.128 -d dbnm -x -c "select * from host_resources"
-[ RECORD 1 ]------------------+-----------------------------------------
host_name | 127.127.127.128
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description          | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 10
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1034915840
total_buffer_memory_bytes | 523386880
total_memory_cache_bytes | 5516861440
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 425185
disk_space_used_mb | 76682
disk_space_total_mb | 501867
system_open_files | 1440
system_max_files | 798044
-[ RECORD 2 ]------------------+-----------------------------------------
host_name | 127.127.127.129
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description          | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1836150784
total_buffer_memory_bytes | 487129088
total_memory_cache_bytes | 4774060032
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 447084
disk_space_used_mb | 54783
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
-[ RECORD 3 ]------------------+-----------------------------------------
host_name | 127.127.127.130
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description          | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1747091456
total_buffer_memory_bytes | 531447808
total_memory_cache_bytes | 4813959168
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 451444
disk_space_used_mb | 50423
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
In general, check the Vertica documentation for the system tables and the information available through them.
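For the node and cluster details specifically, a regular user can often query the nodes system table directly (subject to whatever privileges the DBA has granted); a sketch:

```sql
-- Sketch: cluster topology visible to a non-admin user via system tables
SELECT node_name, node_state, node_address FROM nodes;
```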
How can I map the values I got from the name column into an array that I can later use in my bash script?
+------------------------------+----------+-----------+---------+
| name | status | update | version |
+------------------------------+----------+-----------+---------+
| enable-jquery-migrate-helper | inactive | none | 1.0.1 |
| gravityforms | inactive | none | 2.4.17 |
| gutenberg | inactive | none | 8.8.0 |
| redirection | inactive | none | 4.8 |
| regenerate-thumbnails | inactive | none | 3.1.3 |
| safe-svg | inactive | none | 1.9.9 |
| weglot | inactive | none | 3.1.9 |
| wordpress-seo | inactive | available | 14.8 |
+------------------------------+----------+-----------+---------+
I already tried the following, but this only saves the names of the table headers:
IFS=$'\n' read -r -d '' -a my_array < <( wp plugin list --status=inactive --skip-plugins && printf '\0' )
echo $my_array
name status update version
After I have retrieved the values, I want to loop over them and add them to an array.
Better to use the CSV output format rather than the default table format if your intent is to process the result with a shell or awk script:
wp plugin list --status=inactive --skip-plugins --format=csv
which would output this:
name,status,update,version
enable-jquery-migrate-helper,inactive,none,1.0.1
gravityforms,inactive,none,2.4.17
gutenberg,inactive,none,8.8.0
redirection,inactive,none,4.8
regenerate-thumbnails,inactive,none,3.1.3
safe-svg,inactive,none,1.9.9
weglot,inactive,none,3.1.9
wordpress-seo,inactive,available,14.8
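From there, one way to map the name column into a bash array is mapfile plus cut. A minimal sketch (the CSV is inlined here so the snippet is self-contained; in practice it would come from the wp plugin list command above):

```shell
#!/usr/bin/env bash
# Inlined sample of what `wp plugin list --format=csv` emits
csv='name,status,update,version
enable-jquery-migrate-helper,inactive,none,1.0.1
gutenberg,inactive,none,8.8.0
wordpress-seo,inactive,available,14.8'

# Skip the header row, keep only the first (name) column,
# and read each remaining line into one array element
mapfile -t plugins < <(printf '%s\n' "$csv" | tail -n +2 | cut -d, -f1)

# Loop over the array
for p in "${plugins[@]}"; do
  echo "plugin: $p"
done
```

With the real command, the same pattern should be (untested against wp-cli here): `mapfile -t plugins < <(wp plugin list --status=inactive --skip-plugins --format=csv --fields=name | tail -n +2)`.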
I am getting this error.
Stacktrace:] with root cause
INFO | jvm 1 | 2019/06/06 08:49:59 | java.lang.AbstractMethodError
INFO | jvm 1 | 2019/06/06 08:49:59 | at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp.getDBConnection(startpage_jsp.java:60)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp.getUserInfo(startpage_jsp.java:474)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jsp.startpage_jsp._jspService(startpage_jsp.java:732)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
INFO | jvm 1 | 2019/06/06 08:49:59 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:476)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:386)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:330)
INFO | jvm 1 | 2019/06/06 08:49:59 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.ForgotPasswordFilter.doFilter(ForgotPasswordFilter.java:78)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.webrequestsigner.WebRequestSignerFilter.doFilter(WebRequestSignerFilter.java:67)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:610)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.authenticator.SingleSignOn.invoke(SingleSignOn.java:240)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.owasp.valve.WhitelistHTTPMethodsValve.invoke(WhitelistHTTPMethodsValve.java:72)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.owasp.valve.XSSProtectionHeaderValve.invoke(XSSProtectionHeaderValve.java:175)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.AddHeaderValve.invoke(AddHeaderValve.java:117)
INFO | jvm 1 | 2019/06/06 08:49:59 | at nl.planon.tomcat.ClickjackHostValve.invoke(ClickjackHostValve.java:107)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
INFO | jvm 1 | 2019/06/06 08:49:59 | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
INFO | jvm 1 | 2019/06/06 08:49:59 | at java.lang.Thread.run(Thread.java:748)
INFO | jvm 1 | 2019/06/06 08:49:59 |
I found some solutions that suggest adding validationQuery="select 1" to context.xml, but my context.xml does not contain a Resource element.
<Context>
<!-- Authenticate against PlanonRealmLogin (JAAS) -->
<!-- allRolesMode=authOnly" means that no role is needed for '*' requirement -->
<Realm appName="PlanonRealmLogin"
className="org.apache.catalina.realm.JAASRealm"
userClassNames="nl.planon.cerebeus.PnUser"
roleClassNames="nl.planon.cerebeus.PnRole"
allRolesMode="authOnly"/>
<!--Valve className="nl.planon.tomcat.AccessKeyValve" throttle="5000"/-->
<!--Valve className="nl.planon.tomcat.ForgotPasswordLoginValve"/-->
<!-- Will force authentication attempts to be parsed as UTF-8. The Landingpage will prevent HTTP 408 messages
because now, even without a stored original location, Tomcat knows where to forward to. -->
<!-- exceptionAttributeName="PnLoginException"-->
<Valve className="nl.planon.tomcat.PnMessageFormAuthenticator" landingPage="/" characterEncoding="utf-8"/>
<!-- This valve excludes valid webdav users with role webdav_readwrite to enter the web application(s) -->
<!--Valve className="nl.planon.tomcat.ExcludingRoleValve"/-->
<!--Parameter name="trustedServiceKeystore" value="${catalina.home}/webclientKeystore.jks" />
<Parameter name="trustedServiceName" value="webclient" /-->
<Manager pathname="" />
<ResourceLink
name="jdbc/PlanonDS"
global="jdbc/PlanonDS"
type="javax.sql.DataSource" />
<!-- Whitelist the minimal set of HTTP Methods that Web Bootstrap needs -->
<!--Valve className="nl.planon.owasp.valve.WhitelistHTTPMethodsValve" methods="GET, OPTIONS, HEAD, POST, PUT, DELETE" /-->
In my server.xml I have defined the resource:
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource auth="Container" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" name="UserDatabase" pathname="conf/tomcat-users.xml" type="org.apache.catalina.UserDatabase"/>
<Resource name="jdbc/PlanonDS"
auth="Container"
type="javax.sql.DataSource"
username=""
password=""
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://SZH1DB;instance=planon"
validationQuery="select 1"
maxActive="8"
maxIdle="4"/>
Here I have used validationQuery="select 1", but I still get the same error. Can someone help me with this?
Sometimes the GAE App Engine instance fails to respond successfully to requests that apparently do not cause exceptions in the Django app.
Then I check the process list in the MySQL instance and see many unnecessary processes opened by localhost; the server app is probably trying to open a new connection and hitting the process limit.
Why does the server create new processes but fail to close the connections at the end? How can I close these connections programmatically?
If I restart the App engine instance the 500 errors (and mysql threads) disappear.
| 7422 | root | localhost | prova2 | Sleep | 1278 | | NULL
| 7436 | root | localhost | prova2 | Sleep | 703 | | NULL
| 7440 | root | localhost | prova2 | Sleep | 699 | | NULL
| 7442 | root | localhost | prova2 | Sleep | 697 | | NULL
| 7446 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7448 | root | localhost | prova2 | Sleep | 694 | | NULL
| 7450 | root | localhost | prova2 | Sleep | 693 | | NULL
Actually, the problematic code was middleware that records the queries and produces summary data for requests. The problem of sleeping connections disappeared when I removed this section from appengine_config.py:
def webapp_add_wsgi_middleware(app):
from google.appengine.ext.appstats import recording
app = recording.appstats_wsgi_middleware(app)
return app