My Kibana instance randomly takes a long time to load data into the Kibana index - elk

I am using a JSON script to fetch Okta user data and save it in .json files via a cron job; up to this point the process works fine.
After this I try to load the JSON file data into a Kibana index to create visualizations, using config files. At the start the data loads fine, but after a certain period of time the loading process randomly becomes very slow.
Based on my observation, only 3k to 4k records are loaded daily, i.e. in 24 hours.
Can anyone please suggest a solution for this?
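The question does not show how the config files actually push the data, so here is only a minimal sketch, assuming the official Elasticsearch Python client and a hypothetical okta-users index: sending the files through the bulk helper instead of indexing documents one at a time is usually the first thing to try when throughput drops to a few thousand documents per day.

```python
# Hypothetical sketch: bulk-loading the Okta .json files into Elasticsearch
# instead of indexing documents one by one. The index name "okta-users" and
# the file layout (a list of user records per file) are assumptions.
import glob
import json

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def generate_actions():
    # Each .json file is assumed to contain a list of user records.
    for path in glob.glob("/data/okta/*.json"):
        with open(path) as f:
            for record in json.load(f):
                yield {"_index": "okta-users", "_source": record}

# The bulk helper batches index requests, which is typically much faster
# than one HTTP request per document.
success, errors = bulk(es, generate_actions(), chunk_size=1000, raise_on_error=False)
print(f"indexed {success} documents, {len(errors)} errors")
```

If throughput still degrades over time, it is worth checking cluster health, index refresh interval, and disk usage rather than the loading script alone.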

Related

Processing Json file to post in batches

I'm working on a WordPress project that takes entries from a 3rd-party API and adds them as individual posts on the site, with the data saved as metadata. Up until now it was going well, but with the number of entries on the API increasing I'm starting to run into issues with the server timing out while making the API request.
What I have done now is write the response from the API, that is, all the entries in JSON format, to a file, and then create the posts on the site from the file. But I am still running into timeout issues.
That brings me to my question.
I want to break the data (from the JSON file) up into smaller, manageable requests to the server, only processing, let's say, 20 entries at a time out of the 1000+. But I need to find out how to select the entry number in the JSON file: for example, after processing the first 20 entries I then want the function to go back to the JSON file but this time start at the 21st entry, if at all possible. Or is there a better method of programmatically creating posts from a JSON file with a large number of entries?
// Just some more info.
I'm using wp_remote_post() with blocking set to false to run the function that creates the posts in batches in the background.
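Not a WordPress-specific answer, but the offset idea itself is small enough to sketch. Below is an illustrative Python version (array_slice() would play the same role in PHP); create_post() is a hypothetical stand-in for the real post-creation code.

```python
# Minimal sketch of offset-based batching over a JSON file.
# create_post() is a placeholder for whatever inserts the post + metadata.
import json

BATCH_SIZE = 20

def create_post(entry):
    # Placeholder for the real post-creation logic (wp_insert_post in PHP).
    print("creating post for", entry.get("id"))

def process_batch(json_path, offset):
    """Process one batch of entries starting at `offset`; return the next
    offset, or None when the file is exhausted."""
    with open(json_path) as f:
        entries = json.load(f)

    batch = entries[offset:offset + BATCH_SIZE]
    for entry in batch:
        create_post(entry)

    next_offset = offset + len(batch)
    return next_offset if next_offset < len(entries) else None
```

In WordPress you would persist the offset between runs (for example in an option or transient) and keep re-scheduling the batch function until it reports that the file is exhausted.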

Getting large images from MuleSoft into Salesforce

So currently I am doing a synchronous call to MuleSoft which returns a raw image (no encoding is done), and then storing the image in a document. Whenever we get bigger images, more than 6 MB, it hits the governor limit for maximum size. So I wanted to know if there is a way to get a reduced or compressed image.
I have no idea if Mule has anything to preprocess images, compress...
In Apex you could try to make the operation asynchronous to benefit from the 22 MB limit. But there will be no UI element for it anymore; your component / user would have to periodically check if the file got saved, or something like that.
You could always change the direction: make Mule push to Salesforce over the standard API instead of Apex code pulling from Mule. From what I remember, the standard Files API is good for up to 2 GB.
Maybe send some notification to Mule that you want file XYZ attached to account 123, and Mule would insert the ContentVersion and ContentDocumentLink? And have Apex periodically check.
And when the file is no longer needed - a nightly job to delete files created by "Mr Mule" over a week ago?
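To make the "Mule pushes over the standard API" direction concrete, here is a rough sketch of inserting a ContentVersion and linking it to a record through the Salesforce REST API, written in Python purely for illustration; the instance URL, token, API version, and record id are all assumptions.

```python
# Illustrative sketch: push a file into Salesforce via the standard REST API
# (ContentVersion + ContentDocumentLink). Credentials and ids are placeholders.
import base64
import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"   # assumption
TOKEN = "00D...access_token"                          # assumption
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
API = f"{INSTANCE_URL}/services/data/v58.0"

def attach_file(record_id, filename, raw_bytes):
    # 1. Insert a ContentVersion with the file body base64-encoded.
    cv = requests.post(f"{API}/sobjects/ContentVersion", headers=HEADERS, json={
        "Title": filename,
        "PathOnClient": filename,
        "VersionData": base64.b64encode(raw_bytes).decode(),
    }).json()

    # 2. Look up the ContentDocumentId that Salesforce generated for it.
    q = f"SELECT ContentDocumentId FROM ContentVersion WHERE Id = '{cv['id']}'"
    doc_id = requests.get(f"{API}/query", headers=HEADERS,
                          params={"q": q}).json()["records"][0]["ContentDocumentId"]

    # 3. Link the document to the account (or any other record).
    requests.post(f"{API}/sobjects/ContentDocumentLink", headers=HEADERS, json={
        "ContentDocumentId": doc_id,
        "LinkedEntityId": record_id,
        "ShareType": "V",
    })
```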

Azure Logic App to grab email attachment slow to trigger

I have an Azure Logic App that monitors my emails and, when a target is found, drops the attachment into Blob Storage. The plan is a Consumption plan.
The issue is that sometimes it takes up to 50 minutes for the email to be grabbed and dropped. I know there is a startup time when things go idle, but I was reading seconds/minutes, not close to an hour. Does anyone know how I can troubleshoot this?
sometimes it takes up to 50 minutes to grab and drop the email
Based on this doc,
the reason for the delay is:
When the trigger encounters a new file, it will try to ensure that the new file is completely written. For instance, it is possible that the file is being written or modified, and updates are being made at the time the trigger polled the file server. To avoid returning a file with partial content, the trigger will take note of the timestamp of files which were modified recently, but will not immediately return those files. Those files will be returned only when the trigger polls again. Sometimes, this may lead to a delay of up to twice the trigger polling interval. This also means that the trigger does not guarantee to return all files in a single run when the "Split On" option is disabled.
For more information you can refer to:
Automate tasks to process emails by using Azure Logic Apps | MS DOC
How to send an email with one or more attachments after getting the content from Blob storage? | SO Thread, and a Logic App created to add email attachments to Blob storage.

Full & incremental data load - API

I am using the Application Insights API to get event data into a database. However, I see that there is a limit of 500 rows.
My use case is: dump all the historic data coming from the API into the database, and then run a job every hour so that only new data is dumped into the database.
How do I achieve this?
Currently - the code is consuming the API and storing the data in the database (only the 500 rows that the API gives out).
Problem -
The 500-row limit in the Application Insights API
Unable to get all the historic data from the API
The mechanism to set up an incremental load is not known
Any idea on this would be very helpful.
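One common pattern is a watermark-based incremental load: keep the timestamp of the last ingested row and repeatedly query only for rows newer than it, which also walks past the per-request row limit. Below is only a hedged sketch against the Application Insights REST query endpoint; APP_ID, API_KEY, the customEvents table, and the save_rows() helper are assumptions/placeholders.

```python
# Hedged sketch of a watermark-based incremental load from the Application
# Insights REST query API. Credentials, table name and save_rows() are placeholders.
from datetime import datetime
import requests

APP_ID = "your-app-id"     # assumption
API_KEY = "your-api-key"   # assumption
URL = f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query"
PAGE_SIZE = 500

def save_rows(columns, rows):
    # Placeholder: insert the rows into your database here.
    print(f"saving {len(rows)} rows")

def load_since(watermark: datetime) -> datetime:
    """Pull everything newer than `watermark`, one page at a time,
    and return the new watermark (timestamp of the last row seen)."""
    while True:
        query = (
            f"customEvents "
            f"| where timestamp > datetime({watermark.strftime('%Y-%m-%dT%H:%M:%S.%f')}Z) "
            f"| order by timestamp asc "
            f"| take {PAGE_SIZE}"
        )
        resp = requests.get(URL, headers={"x-api-key": API_KEY}, params={"query": query})
        resp.raise_for_status()
        table = resp.json()["tables"][0]
        rows = table["rows"]
        if not rows:
            return watermark
        save_rows(table["columns"], rows)
        # Advance the watermark to the timestamp of the last row in this page.
        ts_index = next(i for i, c in enumerate(table["columns"]) if c["name"] == "timestamp")
        watermark = datetime.fromisoformat(rows[-1][ts_index].replace("Z", "+00:00"))
```

The historic (full) load is just load_since() started from a very old watermark; the hourly job then reuses the stored watermark so only new events are pulled.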

Node.JS & Socket.IO - Prioritise / Pause & Resume

I am building a real-time application using AngularJS, NodeJS & Socket.IO. The application contains 3-4 large tables which are populated from a MongoDB database and, after the initial load, only get updated.
Each table stores almost 15-30 MB of data and it takes about 30 seconds to populate them using Socket.IO. Now, if a user navigates to a different page before all the data is downloaded, the data required by the new page stays in the queue and is received only after the table from the first page is populated in full. Is there a way to pause or cancel the request when navigating to a different page?
Thanks in advance for your help.
--- Edit ---
To make the question more clear, here is the process that a user might follow:
The user opens /index and Grid_1 starts loading with data from the server using Socket.IO. The data comes in chunks of 50 records at a time. Grid_1 will eventually get populated with 15,000 records and it will take 30 seconds to download all the data.
After the user waits at /index for 10 seconds, he decides to visit /mySecondPage, where there is Grid_2 which, similar to Grid_1, populates from the database with 15,000 records and takes 30 seconds to download - again using Socket.IO. Since the user switched from /index to /mySecondPage before the data was populated in Grid_1, Grid_2 is not populated with data before Grid_1's data is fully downloaded. Hope that made it more clear?
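Socket.IO does not pause emits that are already queued, so the usual approach is to stream in small chunks and let the client ask the server to stop when the route changes. Here is only an illustrative sketch of that pattern, written with python-socketio for brevity (the same flag-checking idea maps directly onto a Node server); the event names and the fetch_records() helper are made up for the example.

```python
# Illustrative sketch: stream a grid in chunks and stop when the client cancels.
import eventlet
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)

cancelled = set()   # session ids that asked to stop the current stream

@sio.on("start_grid")
def start_grid(sid, grid_name):
    cancelled.discard(sid)
    for chunk in fetch_records(grid_name, chunk_size=50):   # placeholder generator
        if sid in cancelled:
            return            # stop streaming as soon as the client navigated away
        sio.emit("grid_chunk", {"grid": grid_name, "rows": chunk}, to=sid)
        sio.sleep(0)          # yield so the cancel event can be processed

@sio.on("cancel_grid")
def cancel_grid(sid):
    cancelled.add(sid)

def fetch_records(grid_name, chunk_size):
    # Placeholder: would read from MongoDB in chunks of `chunk_size`.
    yield from ()

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```

On the Angular side the route-change handler would emit "cancel_grid" before requesting data for the new page, so the new grid's chunks are no longer stuck behind the old ones.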
