Redash not caching the queries

I am not able to see queries being cached in Redash. I used the AMI provided on their website for self-hosting, updated it to the latest version, and fired a few queries against Athena, but every time it queries Athena and fetches the results afresh.
All the configuration is default, and I can't find anything relevant in the logs.
Please help.

Redash caches results if you add an Athena visualization to a dashboard.
If the Athena query takes 1 minute to run and you schedule a 10-minute refresh, it will cache the results for 10 minutes -- but the cache is only used when you're viewing the dashboard. So if you share the dashboard link, viewers don't have to wait 1 minute for the query to run live.
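The same cache is also reachable through Redash's REST API -- a minimal sketch, assuming a hypothetical query ID of 42 and host redash.example.com (max_age asks Redash to return a cached result if one is newer than the given number of seconds, and only run the query otherwise):
curl -X POST 'https://redash.example.com/api/queries/42/results' \
  -H 'Authorization: Key YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"max_age": 600}'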

Related

Solr 9 HTTP response time is very slow while the query itself is fast

While trying to set up Solr 9 in Docker, I noticed it takes too long to send any response back; even the Admin UI loads extremely slowly. However, the QTime in the response is small, so Solr's internal query handling is fast: as you can see from the screenshot below, QTime is just 18ms, yet the request took ~25s to finish.
I tried searching for this issue but couldn't find anything related.
Similarly, it takes too long to load the static files for the Admin panel.
PS: Internet speed/connectivity at the client is not an issue here.

NextJS - Incremental Static Regeneration on Vercel constantly times out (on a pro plan - 60s time-out limit)

I'm having issues with a NextJS project hosted on a Vercel pro plan. Routes that are set up with ISR just keep timing out. I have revalidate set to 120s currently, but I've tried 60s, 30s, and 6000s too, just in case I was calling it too often.
I'm using Strapi as the headless CMS that serves the API NextJS builds pages from; Strapi is deployed in Render's German region. The database Strapi uses is a MongoDB database hosted on MongoDB Atlas, deployed in MongoDB's Ireland AWS region. (I've also set the Serverless Functions on Vercel to be served from London, UK, but I'm not sure if that affects ISR?)
There are 3 different dynamic routes with about 20 pages each, and at build time they average 6999ms, 6508ms, and 6174ms respectively. Yet at run time, if I update some content in the Strapi CMS and wait the 120s that I've set for revalidate, the page hardly ever gets rebuilt. If I look at the Vercel dashboard's "Functions" tab, which shows realtime logs, I see that many of the pages have attempted to rebuild all at the same time and they are all hitting the 60s time-out limit.
I also have the Vercel logs sent to LogTail, and if I filter the logs for the name of the page I've edited, I can see that it returns a 304 status code before 120s has passed, as expected, but after 120s it tries to fetch and build the new page and nearly always returns the time-out error.
So my first question is: why are so many of them trying to rebuild at the same time if nothing has changed in the CMS for any of those pages except the one I've deliberately changed myself? And secondly, why does it only take an average of 6000ms to build a page at build time, yet during ISR they hit the 60s time-out limit?
Could it be that so many rebuilds are being triggered that they all end up causing each other to time out? If so, how do I tackle that first issue?
Here is a screenshot of my Vercel realtime logs. As you can see, many of the pages are trying to rebuild at once, but I'm not sure why; I've only changed the data for one page in this instance.
To try and debug the issue, I created a Postman Flow for building one of the dynamic routes and added up the time for each API call needed to build the page: I get 7621ms on average after a few tests. Here is a screenshot of the Postman console:
I'm not that experienced with NextJS ISR, so I'm hoping I'm just doing something wrong or have a setting misconfigured on Vercel, etc., but after looking on Stack Overflow and other websites, I believe I'm using ISR as expected. If anybody has any ideas or advice about what might be going on, I'd very much appreciate it.
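One way to see what ISR is actually doing for a given page is to inspect the cache header Vercel attaches to every response -- a small sketch, assuming a hypothetical deployment URL (x-vercel-cache should read HIT while the cached page is served, STALE when a stale copy is served and a background regeneration is triggered, and MISS when nothing was cached):
curl -sI 'https://your-site.vercel.app/blog/some-page' | grep -i x-vercel-cache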

How to make config changes take effect in Solr 7.3

We are using solr.SynonymFilterFactory with synonyms.txt in Solr at query time. I realized there was an error in synonyms.txt, corrected it, and uploaded the new file. I can see the modified synonyms.txt from the Admin UI, but it looks like queries are still using the old synonyms.txt: executing test queries from the Admin UI with debugQuery=true, I can see the old synonyms still being applied. How can this be fixed? It is a production environment with 3 nodes using ZooKeeper for management.
You'll need to reload your core for the changes to take effect.
In a single-node Solr you can do that from the Admin page: go to Core Admin, select your core, and hit Reload. This will slow down some queries but it shouldn't drop queries or connections.
You can also reload the core via the API:
curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=your-core'
I am not sure how this works in an environment with 3 nodes, though.
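With 3 nodes coordinated by ZooKeeper this is presumably a SolrCloud setup, in which case the Collections API should reload the configuration on every replica at once -- a sketch, assuming a collection named your-collection:
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=your-collection'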

How to fetch Google App Engine logs in minute chunks via appcfg.py?

My latest project expects 100,000 unique visitors a day, so I started using Google App Engine to be able to scale the infrastructure according to the load.
I would like to fetch log lines with appcfg.py request_logs every minute to get the latest data and add them to my monitoring dashboard utilizing LogStash/ElasticSearch/Kibana.
Is there a way to tell appcfg.py request_logs to load the log lines from an explicit time range, such as the minute from 2014-10-10T10:00:00 to 2014-10-10T10:01:00?
I am a bit afraid that I have too many log lines and won't be able to retrieve all of them because of a limit on the number of lines appcfg.py request_logs can retrieve, that I may incur too much cost if I retrieve all log lines every minute, and so on.
You can only specify the range in days:
-n NUM_DAYS, --num_days=NUM_DAYS
Number of days worth of log data to get. The cut-off
point is midnight US/Pacific. Use 0 to get all
available logs. Default is 1, unless --append is also
given; then the default is 0.
I haven't tried this myself, but you could use the Logs API to generate a webpage within your app that exposes the log lines, and have a local script read that page. Make sure to set the page's login to admin-only to prevent others from reading it.
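There is no minute-level flag, but the --append mode mentioned in the help text can approximate minute-by-minute polling -- a sketch, assuming the app lives in myapp/ and assuming --append only downloads lines newer than the last entry already in the output file:
# Poll once a minute; with --append, num_days defaults to 0 (all available
# logs) and only new lines should be appended to request.log.
while true; do
  appcfg.py request_logs --append myapp/ request.log
  sleep 60
done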

How to see fresh logs in appengine dashboard?

I'm using App Engine for a small web application. However, the logs I see in the App Engine dashboard are always 30 minutes to a few hours old. Is there any way to see "fresher" logs? Thanks.
I have an App Engine app too, and there the latest log is currently 33 minutes old. I suspect it takes a while for events to get there, and it appears there is no way to change this. But first make sure that you choose the correct time zone from the dropdown menu; I noticed it affects the timestamp of the latest log.
This might be a bug in Google App Engine.
Here is a workaround taken from this response in the previous link:
I found that changing the timezone to PST shows the freshest logs.
