Hasura GraphQL 504 error (Gateway Time-out) while running a query in the GraphQL console - API explorer tool - database

I have a table with 2.3 million rows in a Postgres DB. When I search for a user's email ID using '_ilike' in the Hasura GraphQL console (API explorer tool), it takes a very long time to load and after 30 seconds it throws a 504 error (Gateway Time-out). This happens 3 out of 4 times. If I run the same query through the API in a React application, it shows the correct search output within 30 seconds.
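The query I'm running is roughly the one below, here sent straight to the GraphQL endpoint with fetch instead of through the console; the table and column names, endpoint URL and admin secret are placeholders for the real ones:

```typescript
// Rough sketch of the _ilike email search, run against the GraphQL endpoint
// directly instead of the console. The table ("users"), column ("email"),
// endpoint URL and admin secret below are placeholders.
const query = `
  query SearchUsers($pattern: String!) {
    users(where: { email: { _ilike: $pattern } }, limit: 20) {
      id
      email
    }
  }
`;

async function searchUsers(pattern: string): Promise<void> {
  const started = Date.now();
  const res = await fetch("https://my-hasura.example.com/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": "<admin-secret>", // placeholder
    },
    body: JSON.stringify({ query, variables: { pattern: `%${pattern}%` } }),
  });
  const body = await res.json();
  console.log(`HTTP ${res.status} in ${Date.now() - started}ms`, body);
}

searchUsers("someone@example.com");
```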

Related

SOLR 9 HTTP response time is very slow while the query itself is fast

While trying to set up SOLR 9 in Docker, I noticed it's taking too long to send any response back; even the Admin UI is loading extremely slowly. However, the QTime in the response is low, so Solr's internal queries are fast: as you can see from the screenshot below, QTime is just 18ms, yet the request took ~25s to finish.
I tried searching for this issue but couldn’t find anything related to this.
Similarly, it's taking too long to load the static files for the Admin panel.
PS: Internet speed/connectivity at the client is not an issue here.
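A minimal script to reproduce the gap between QTime and the wall-clock time of the full HTTP round trip looks roughly like this (the host, port and core name are placeholders):

```typescript
// Compare Solr's internal QTime with the wall-clock time of the full HTTP
// round trip. Host, port and core name ("mycore") are placeholders.
async function timeSolrQuery(): Promise<void> {
  const url = "http://localhost:8983/solr/mycore/select?q=*:*&rows=1&wt=json";
  const started = Date.now();
  const res = await fetch(url);
  const body = await res.json();
  const wallMs = Date.now() - started;
  // responseHeader.QTime is Solr's own query-execution time in milliseconds;
  // a large wallMs with a small QTime points at something outside the query
  // itself (networking, container setup, response writing, etc.).
  console.log(`QTime: ${body.responseHeader?.QTime}ms, wall clock: ${wallMs}ms`);
}

timeSolrQuery();
```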

NextJS - Incremental Static Regeneration on Vercel constantly times out (on a pro plan - 60s time-out limit)

I'm having issues with a Next.js project hosted on a Vercel Pro plan. Routes that are set up with ISR just keep timing out. I currently have revalidate set to 120s, but I've also tried 60s, 30s, and 6000s just in case I was calling it too often.
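For context, the dynamic routes are wired up essentially like the sketch below (the Strapi URL, route name and response fields are placeholders rather than my actual code):

```typescript
// pages/articles/[slug].tsx - minimal ISR sketch. The Strapi URL, the
// "articles" route and the response fields are placeholders, not my real code.
import type { GetStaticPaths, GetStaticProps } from "next";

const CMS = "https://my-strapi.example.com";

export const getStaticPaths: GetStaticPaths = async () => {
  const res = await fetch(`${CMS}/api/articles`);
  const { data } = await res.json();
  return {
    paths: data.map((a: { attributes: { slug: string } }) => ({
      params: { slug: a.attributes.slug },
    })),
    fallback: "blocking",
  };
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const res = await fetch(`${CMS}/api/articles?filters[slug][$eq]=${params?.slug}`);
  const { data } = await res.json();
  return {
    props: { article: data[0] ?? null },
    revalidate: 120, // regenerate at most once every 120s, per page
  };
};

export default function Article({ article }: { article: unknown }) {
  return <pre>{JSON.stringify(article, null, 2)}</pre>;
}
```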
I'm using Strapi as the headless CMS that serves the API Next.js builds pages from, and it is deployed in Render's German region. The database Strapi uses is a MongoDB database hosted on MongoDB Atlas and deployed in MongoDB's Ireland AWS region. (I've also set the Serverless Functions on Vercel to be served from London, UK, but I'm not sure if that affects ISR?)
There are 3 different dynamic routes with about 20 pages each, and at build time they average 6999ms, 6508ms and 6174ms respectively. Yet at run time, if I update some content in the Strapi CMS and wait the 120s that I've set for revalidate, the page hardly ever gets rebuilt. If I look at the Vercel dashboard's "Functions" tab, which shows realtime logs, I see that many of the pages have attempted to rebuild all at the same time and they are all hitting the 60s time-out limit.
I also have the Vercel logs sent to LogTail, and if I filter the logs for the name of the page I've edited, I can see that it returns a 304 status code before 120s has passed, as expected, but then after 120s it tries to fetch and build the new page and nearly always returns the time-out error.
So my first question is: why are so many of them trying to rebuild at the same time when nothing has changed in the CMS for any of those pages except the one I've deliberately changed myself? And secondly, why does a page take only about 6000ms on average to build at build time, yet hit the 60s time-out limit during ISR?
Could it be that so many rebuilds are being triggered that they all end up causing each other to time out? If so, how do I tackle that first issue?
Here is a screenshot of my Vercel realtime logs. As you can see, many of the pages are all trying to rebuild at once, but I'm not sure why; I've only changed the data for one page in this instance.
To try to debug the issue, I created a Postman Flow for building one of the dynamic routes and added up the time for each API call needed to build the page; I get 7621ms on average after a few tests. Here is a screenshot of the Postman console:
I'm not that experienced with Next.js ISR, so I'm hoping I'm just doing something wrong or have a setting misconfigured on Vercel, etc., but after looking on Stack Overflow and other websites, I believe I'm using ISR as expected. If anybody has any ideas or advice about what might be going on, I'd very much appreciate it.

Salesforce HTTP query retrieves only a limited number of records in the JSON response

I am a Java developer building an SAP HANA adapter in Java. I need to retrieve records from a Salesforce application to populate HANA tables, so I connect to the Salesforce application through HTTP GET with an Authorization header, and the query looks like https://<salesforceInstance>/services/data/v42.0/query/?q=<GET query>. It seems to work fine, but my JSON response has only 500 records, whereas the Salesforce object has over 35000 records. Is there a way I can retrieve all the records?
In the response there should be a special link to fetch the next chunk of data. It's a bit like a cursor in a normal database. See "nextRecordsUrl" in https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query.htm
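Roughly, you keep following nextRecordsUrl until done is true. A sketch of that loop (in TypeScript for brevity; the same pattern applies from Java, and the instance URL, token and SOQL query are placeholders):

```typescript
// Follow nextRecordsUrl until done === true to page through all records.
// The instance host, access token and SOQL query are placeholders.
interface QueryResponse {
  totalSize: number;
  done: boolean;
  nextRecordsUrl?: string;
  records: Record<string, unknown>[];
}

async function queryAll(instance: string, token: string, soql: string) {
  const all: Record<string, unknown>[] = [];
  let url = `${instance}/services/data/v42.0/query/?q=${encodeURIComponent(soql)}`;

  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${token}` },
    });
    const page: QueryResponse = await res.json();
    all.push(...page.records);
    // nextRecordsUrl is a relative path (a query locator), so prepend the
    // instance URL when requesting the next chunk.
    url = page.done ? "" : `${instance}${page.nextRecordsUrl}`;
  }
  return all;
}

// e.g. queryAll("https://<salesforceInstance>", "<access token>",
//               "SELECT Id, Name FROM Account");
```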

Full & incremental data load - API

I am using the Application Insights API to get events data into a database. However, I see that there is a limit of 500 records.
My use case is: dump all the historic data coming from the API into the database, and then run a job every hour so that only new data is dumped into the database.
How do I achieve this?
Currently, the code consumes the API and stores the data in the database (only the 500 rows that the API returns).
Problem:
A 500-record limit in the Application Insights API
Unable to get all the historic data from the API
Mechanism to set up an incremental load - not known
Any ideas on this would be very helpful. Would something like the time-based watermark sketched below be the right direction?
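The idea of the sketch: keep the latest event timestamp already stored as a watermark and, on each hourly run, request only events newer than it. The app id, API key and the customEvents table below are placeholders, and the exact endpoint and row limits should be checked against the Application Insights REST API docs:

```typescript
// Incremental pull: only fetch rows newer than the last timestamp already
// stored. The app id, API key and "customEvents" table are placeholders;
// verify the endpoint and row limits against the Application Insights
// REST API docs before relying on this.
const APP_ID = "<app-id>";
const API_KEY = "<api-key>";

async function fetchEventsSince(lastTimestamp: string) {
  const kql = `customEvents
    | where timestamp > datetime(${lastTimestamp})
    | order by timestamp asc`;
  const url =
    `https://api.applicationinsights.io/v1/apps/${APP_ID}/query` +
    `?query=${encodeURIComponent(kql)}`;
  const res = await fetch(url, { headers: { "x-api-key": API_KEY } });
  const body = await res.json();
  // body.tables[0].rows holds the result rows; persist them, then save the
  // largest timestamp seen as the watermark for the next hourly run.
  return body.tables?.[0]?.rows ?? [];
}

// First run: backfill by calling fetchEventsSince repeatedly, advancing the
// watermark each time, until a call returns no new rows.
```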

Redash not caching the queries

I am not able to see queries being cached in Redash. I used the AMI provided on their website for self-hosting, updated it to the latest version, and fired a few queries against Athena, but every time it queries Athena again and fetches the result.
All the configuration is at its defaults, and I can't find anything in the logs either.
Please help
It caches results if you add an Athena visualization to a dashboard.
If the Athena query takes 1 minute to run and you schedule a 10-minute refresh, it will cache the results for 10 minutes -- but the results are only cached when viewing the dashboard. So if you share the dashboard link, viewers don't have to wait 1 minute for it to run the query live.
