I am running a SolrCloud cluster consisting of five servers. I start a dataimport from the Solr Admin UI. When I refresh the status I do not get a consistent result: one time I might get the status of the currently running import, another time "No information available (idle)", and another time the status of a previous attempt that succeeded or failed. Am I correct in assuming that this happens because the import is handled by a single server, while the status request may be answered by any of the five servers, so 80% of the time it is answered by a server that is not running the import?
Is there a way that I can just get the status of the running import? I tried sending the status request to the server that is running the import and I still get different messages each time I make the request.
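For reference, the status request I am sending looks roughly like the Python sketch below (host and collection name are placeholders, and I am assuming the default /dataimport handler path):

import requests  # assumes the 'requests' package is available

# Placeholder host and collection name.
url = "http://solr-node-1:8983/solr/mycollection/dataimport"

# command=status asks the DataImportHandler for the state of the current
# (or most recent) import on whichever core ends up answering the request.
resp = requests.get(url, params={"command": "status", "wt": "json"})
data = resp.json()
print(data.get("status"))          # e.g. "busy" or "idle"
print(data.get("statusMessages"))  # progress details, if any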
Let's say I run a recorded script for the 'New User Registration' function of a web site to evaluate the response time for the entire scenario. When I run the recorded script from JMeter, is a new user record created in the application database for each run of the registration script?
Yes, if you record the registration and correlate it (meaning you create a valid unique name for every request), you will create a real user in your environment (a small sketch of this idea follows below).
JMeter simulates a real scenario, which affects your environment.
That is part of the reason JMeter tests are usually executed in an environment other than production (such as staging).
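To illustrate the correlation point, here is a minimal Python sketch of what "a valid unique name for every request" means; in JMeter itself you would typically achieve this with a function such as __UUID() or a CSV Data Set Config, and the endpoint and field names below are made up:

import uuid
import requests  # assumes the 'requests' package is available

REGISTER_URL = "https://staging.example.com/register"  # hypothetical endpoint

def register_new_user():
    # A unique name per request avoids "username already exists" failures,
    # and every successful run really does add a row to the database.
    username = "loadtest_" + uuid.uuid4().hex[:12]
    payload = {
        "username": username,
        "email": username + "@example.com",
        "password": "S3cret!123",
    }
    return requests.post(REGISTER_URL, data=payload)

print(register_new_user().status_code)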
A well-behaved JMeter script must represent a real user using a real browser as closely as possible.
Browsers execute HTTP requests and render the responses.
JMeter executes the same HTTP requests but doesn't render the responses; instead it records performance metrics like response time, connect time, latency, throughput, etc.
HTTP is a stateless protocol, so the server handles an identical request the same way regardless of who sends it. Therefore, if there are no mistakes in your script, it will either create a new user or fail with a non-unique username error.
Yes, if your script accurately represents the full set of data flows associated with the "New User Registration" business process, then the end state of that process should be identical to the end state produced by the real user behavior being modeled.
A record will be created in the database. If not, then your simulated user is not accurate in its behavior.
I am suddenly getting a 403 error when I try to POST an update to the Retrieve and Rank service. This code is under development but it has been working up until yesterday. The failure occurs only when doing a POST to /v1/solr_clusters/{solr_cluster_id}/solr/{collection_name}/update, and it fails the same way whether I do it via my program, the Swagger API documentation, or cURL. All other operations to this service that I've tried work fine when using the same credentials that I'm using with this POST. The error message I'm getting back is
"Error: WRRCSH004: Service [1d111267-76b7-417a-98bd-4e9a58072ef9] is not authorized for cluster [sc262b05e8_dcf5_40b4_b662_ae85058ff07f]!" I don't know where the identifier (1d111267-76b7-417a-98bd-4e9a58072ef9) is coming from; that's not the userid I'm sending in.
Looking into your issue, it appears your Bluemix organization has multiple service instances. The 403 you are seeing occurs because you are trying to access a Solr cluster using credentials from one of your instances against a cluster that belongs to the other instance. The identifier 1d111267-76b7-417a-98bd-4e9a58072ef9 represents one of these service instances, but the cluster you are trying to access is not part of that instance. A good way to test this is to use the same credentials that generate the 403 and simply list the Solr clusters you have created by doing a GET against https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/.
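For example, a check along those lines might look like the following Python sketch (assuming HTTP basic auth with the service credentials; the username and password are placeholders for the credentials of the instance you intend to use):

import requests  # assumes the 'requests' package is available

username = "your-retrieve-and-rank-username"  # placeholder credentials
password = "your-retrieve-and-rank-password"

resp = requests.get(
    "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters",
    auth=(username, password),
)
print(resp.status_code)
print(resp.json())  # the cluster you POST updates to should appear in this list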
As for the 500 issue, I wasn't able to see anything on our end. If you're still experiencing that I would suggest posting another question and we can look into things again.
Thanks,
-Scott
I am using Solr 4.6 to fetch records from SQL Server using the Data Import Handler. While fetching I am getting an error, and the reason is that one of my fields is of LatLong type. When my SQL latlong field contains a wrong value, e.g. 23.454,545454 (as you can see, the longitude value 545454 is invalid), the Solr DIH gives an error. I want to know where Solr keeps these error logs. I am using the Jetty container for Solr.
JAVA_HOME=/usr/java/default                                   # JVM to use
JAVA_OPTIONS="-Dsolr.solr.home=/opt/solr/solr $JAVA_OPTIONS"  # points Jetty's JVM at the Solr home directory
JETTY_HOME=/opt/solr                                          # Jetty installation directory
JETTY_USER=solr                                               # user the Jetty service runs as
JETTY_LOGS=/opt/solr/logs                                     # directory Jetty writes its logs to
All of these settings are important. In particular, not setting JETTY_LOGS would lead jetty to attempt (and fail) to place request logs in /home/solr/logs.
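With a setup like the one above, the DIH stack traces should end up in the files under JETTY_LOGS (here /opt/solr/logs). A quick way to pull the import errors out of them, sketched in Python (the exact file names depend on your logging configuration, so treat them as assumptions):

import glob

LOG_DIR = "/opt/solr/logs"  # the JETTY_LOGS directory from the settings above

# Scan every log file for lines that look like DIH errors.
for path in sorted(glob.glob(LOG_DIR + "/*")):
    with open(path, errors="replace") as f:
        for line in f:
            if "DataImportHandler" in line or "ERROR" in line or "SEVERE" in line:
                print(path + ": " + line.rstrip())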
For more details, go through the SolrJetty wiki page: https://wiki.apache.org/solr/SolrJetty
Some time ago I had Composite C1 installed on a public URL to test it (for example http://c1.mydomain.com), but I removed it some time ago.
I checked my firewall logs recently and discovered requests for http://c1.mydomain.com/Composite/top.aspx every single night from the IP address 109.238.52.32. (Composite.net's IP address is 109.238.52.24, which is almost the same, so I assume the requests are coming from Composite.net.)
So the question is: Why is Composite.net requesting my admin page every single day?
What you are seeing is a process that checks long-term usage of the software in order to generate statistics. The daily URL request looks at the HTTP status code (HTTP 200 vs. other) to determine whether your website is still online and using the CMS. Since the URL is unique and unlikely to be used by any other system, it is a good indication.
That said, it is probably silly to keep requesting the same URL once the check starts returning 404 or similar.
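In other words, the daily probe presumably amounts to something like the following Python sketch (the URL and the exact behaviour are assumptions based on the description above):

import requests  # assumes the 'requests' package is available

ADMIN_URL = "http://c1.mydomain.com/Composite/top.aspx"  # the URL from the firewall logs

def site_still_runs_cms(url):
    # HTTP 200 on the admin URL is taken as "installation still online";
    # anything else (404, timeout, ...) means it is gone.
    try:
        return requests.get(url, timeout=10).status_code == 200
    except requests.RequestException:
        return False

print(site_still_runs_cms(ADMIN_URL))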
I'm trying to set up the email receiver found here, to process incoming emails and send them out as POST data to a script on my server to be handled further from there. The issue I'm having is that when I send one test email to foo@[myappname].appspotmail.com, the App Engine logs show that the email is continually "received" over and over again every couple of minutes, even though I only sent it once. Then, after several minutes of this, when I go into settings and disable the app, I get at least one "Delivery to the following recipient failed permanently" message to the email account I was sending the emails from (which makes sense, since the app is now disabled and not accepting any incoming mail).
What I'm having trouble understanding is why the application is behaving like it's getting multiple emails sent to it when it's only one. Do I need to modify the Python script to do something to delete or halt the email once it's been processed the first time? If so, does anyone have any suggestions as to how to do that? The Python script that I'm using is found here.
User voscausa answered my question: the email requests kept retrying because the script was erroring. Thanks!
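For anyone hitting the same symptom: App Engine retries inbound-mail HTTP requests that do not return a success status, so an exception in the handler makes the same message reappear over and over. Below is a minimal Python sketch of a handler that logs failures instead of crashing; the handler class and route follow the standard Python inbound-mail pattern, and the processing step is just a placeholder for whatever the linked script does.

import logging
import webapp2
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler

class ReceiveEmail(InboundMailHandler):
    def receive(self, mail_message):
        try:
            # Placeholder processing: in the real script this is where the
            # message would be turned into a POST to your own server.
            logging.info("Received mail from %s", mail_message.sender)
        except Exception:
            # An unhandled exception here would mean a non-200 response,
            # and App Engine would keep redelivering the same message.
            logging.exception("Failed to process incoming mail")

app = webapp2.WSGIApplication([("/_ah/mail/.+", ReceiveEmail)], debug=True)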