Why am I getting an HTTP connectivity error on SolrCloud? - solr

I created a collection (say test_collection) on a SolrCloud instance using the Solr admin tool with a custom configset.
Write operations on this collection are carried out via the following service:
https://servername:port/solr/test_collection/update?commit=true
This service is used by a middleware (Mulesoft) to write records to Solr in batches of 50 records.
However, every time a write is performed, all records are loaded into Solr except for one batch of 50 records, which fails with the following exception:
"errortype": "HTTP:CONNECTIVITY",
"errormessage": "HTTP POST on resource 'https://servername:port/solr/test_collection/update' failed:
Remotely closed."
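For context, here is a minimal sketch of the kind of batched call described above, assuming Python's requests library, a Solr version whose /update handler accepts a JSON array of documents, and placeholder field names ("id", "title"):

import requests

# Endpoint from the question; servername and port are placeholders.
SOLR_UPDATE_URL = "https://servername:port/solr/test_collection/update?commit=true"

# One batch of 50 documents; the field names are placeholders.
batch = [{"id": str(i), "title": f"document {i}"} for i in range(50)]

# The timeout keeps the call from blocking indefinitely; a connection that is
# closed remotely raises a requests.exceptions.ConnectionError.
resp = requests.post(SOLR_UPDATE_URL, json=batch, timeout=60)
resp.raise_for_status()
print(resp.status_code, resp.json())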

Related

Snowflake JDBC ResultSet with more than 1000 rows not reaching Client

Our application fetches data from a Snowflake PrivateLink account using JDBC queries. The app runs behind a restricted firewall and proxy. When we run SnowCD it shows that many URLs are blocked, but if we pass the proxy information to SnowCD it successfully passes all tests.
Now when we run our app to connect to Snowflake and execute queries, the queries that return small result sets execute, but those that return large result sets (3000+ rows) hang, and after a long wait a timeout error appears.
The same queries work when the data is small.
net.snowflake.client.jdbc.SnowflakeChunkDownloader : Timeout waiting for the download of #chunk0
From this Stack Overflow discussion I learned that when the Snowflake JDBC driver executes a query with a small result set, the response comes back directly; for a large result set, a separate request goes to the internal stage (AWS S3), whose URL is different from the Snowflake account URL, and if a proxy is in the path this can cause problems. The PrivateLink endpoint does not need proxy parameters, but the stage URLs do.
But when I set proxy properties in the JDBC URL, and at the Tomcat level as well, there is no difference; it still does not work.
I have not found any proper Snowflake documentation that explains this large result set vs. small result set behavior.
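For reference, the Snowflake JDBC driver accepts proxy settings as connection parameters (useProxy, proxyHost, proxyPort, nonProxyHosts); a hypothetical URL sketch, with the proxy host, port, and bypass list as placeholders:
jdbc:snowflake://<account>.privatelink.snowflakecomputing.com/?useProxy=true&proxyHost=proxy.example.com&proxyPort=8080&nonProxyHosts=*.privatelink.snowflakecomputing.com
The intent, following the reasoning above, is to bypass the proxy for the PrivateLink host while still routing the S3 stage downloads through it; whether this resolves the timeout depends on the specific network setup.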

Snowflake task failure notification

I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify users of the failure.
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
How can notifications be set up for a Snowflake task failure? The design I have in mind is to build a Python application that runs every 30 minutes and looks for any errors in the TASK_HISTORY table. Please advise if there are better approaches to handle failure notifications.
I think a Python script is currently the best way to address this.
You can use this SQL to query the last runs, read the results into a data frame, and filter out the errors (a sketch of such a script follows the query below):
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30,current_timestamp())))
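A minimal sketch of such a script, assuming the snowflake-connector-python package with the pandas extra and placeholder connection parameters:

import snowflake.connector

# Connection parameters are placeholders; adjust them for your account.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="my_database",  # TASK_HISTORY is read through the database's INFORMATION_SCHEMA
)

QUERY = """
select *
from table(information_schema.task_history(
    scheduled_time_range_start => dateadd(minutes, -30, current_timestamp())))
"""

cur = conn.cursor()
df = cur.execute(QUERY).fetch_pandas_all()  # read the last runs into a data frame
failed = df[df["STATE"] == "FAILED"]        # keep only the failed task runs
for _, row in failed.iterrows():
    # Replace print with whatever alerting channel you use (email, Slack, etc.).
    print(f"Task {row['NAME']} failed at {row['SCHEDULED_TIME']}: {row['ERROR_MESSAGE']}")
cur.close()
conn.close()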
It is possible to create a notification integration and send a message when an error occurs. As of May 2022 this feature is in preview, supported by accounts on Amazon Web Services.
Enabling Error Notifications for Tasks
This topic provides instructions for configuring error notification support for tasks using cloud messaging. This feature triggers a notification describing the errors encountered when a task executes SQL code.
Currently, error notifications rely on cloud messaging provided by Amazon Simple Notification Service (SNS); support for Google Cloud Pub/Sub queues and Microsoft Azure Event Grid is planned.
New Tasks
Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
CREATE TASK <name>
[...]
ERROR_INTEGRATION = <integration_name>
AS <sql>
Existing tasks:
ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
A new Snowflake feature was announced for task error notifications on AWS via SNS. This doc walks through how to set this up for task failures:
https://docs.snowflake.com/en/user-guide/tasks-errors.html

SFDC Bulk API option in Informatica session not working

When I load data from an Oracle database to Salesforce from Informatica with the SFDC Bulk API option checked, no data gets inserted into Salesforce. Workflow Monitor shows the records as successful, but when I check in Salesforce nothing has been inserted. How do I bulk load to Salesforce?
There are probably errors in your data; that is why you are not able to see any data in your Salesforce target.
When the SFDC Bulk API option is used, rejected data is not written to any reject file. To find out whether there are errors in your data, implement the steps below in order.
In the target session properties, enable the following options:
Use SFDC Error File
Monitor Bulk jobs Until all Batches Processed.
Set the location of the BULK error files (you should provide a path)
After making the above changes, run the workflow. If there are any errors in the data, the rejected rows will be written to the reject file along with the error message, saved in the location you provided in step 3.

Analytics are not displayed in real time - WSO2 Analytics

I have configured a distributed setup of API Manager 2.1.0 and configured Analytics 2.1.0 as well. It takes too long to display the analytics after an API invocation.
Server 1 (1 Publisher instance,1 Store instance, 1 Analytics instance, 1 Traffic Manager instance)
Server 2 (1 Key Manager Instance, 1 Gateway Instance)
Server 3 (1 Key Manager Instance, 1 Gateway Instance)
It seems the batch scripts run only once per day even though the cron expression is set to "0 0/5 * 1/1 * ? *" (every 5 minutes) in a few scripts such as APIM_STAT_SCRIPT, APIM_STAT_SCRIPT_THROTTLE, and APIM_LAST_ACCESS_TIME_SCRIPT.
But when I try to execute those scripts manually, I get the following warning:
"Scheduled task for the script : APIM_LAST_ACCESS_TIME_SCRIPT is already running. Please try again after the scheduled task is completed.".
But the data is not populated in the summary tables until the next day.
I want these scripts to be executed every 15 minutes.
When I configured a single API Manager 2.1.0 instance with Analytics 2.1.0 on the same server, it worked as expected.
How can I resolve this?
Since the number of analytics records increases day by day, the batch scripts take too long to execute. That is why the analytics take so long to display.
To improve performance, we can remove historical data using the Data Purging option in APIM Analytics.
I was able to resolve the above issue after purging the historical data.
For more information, refer to https://docs.wso2.com/display/AM210/Purging+Analytics+Data

Solr Write-lock issue

Environment: Solr 1.4 on Windows/MS SQL Server
A write lock is created whenever I try to do a full import of documents using DIH. The logs say "Creating a connection with the database....." and the process does not go forward (no database connection is obtained), so the indexes are not created. Note that no other process is accessing the index, and I even restarted my MS SQL Server service. However, I still see a write.lock file in my index directory.
What could be the reason for this? Even though I have set the unlockOnStartup flag in solrconfig to true, indexing still does not happen.
The problem was resolved. There was an issue with the Java update and the Microsoft JDBC driver.
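For reference, the unlockOnStartup flag mentioned in the question is normally set in the mainIndex section of solrconfig.xml on Solr 1.4; a sketch (placement may differ in your configuration):

<mainIndex>
  <!-- remove a stale write.lock left behind by an unclean shutdown when the core starts -->
  <unlockOnStartup>true</unlockOnStartup>
</mainIndex>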
