Set timeout for get request with HTTP.jl - request

I need to scan IP ranges and want to reduce the time spent waiting on timeouts.
How do I specify the request timeout with Julia's HTTP.jl package?
I found the readtimeout option in the docs for v0.6.15:
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
But in the current stable version v0.8.6, readtimeout seems to only appear on the server side.
Test code with readtimeout=2 and v0.8.6:
@time begin
    try
        HTTP.get("http://2.160.0.0:80/"; readtimeout=2)
    catch e
        @info e
    end
end
Output:
113.642150 seconds (6.69 k allocations: 141.328 KiB)
┌ Info: IOError(Base.IOError("connect: connection timed out (ETIMEDOUT)", -4039) during request(http://2.160.0.0:80/))
└ # Main In[28]:5
So the request took about 114 seconds, hence I think that this option is currently not honored.
Edit
I checked the source code (HTTP.jl) of the stable release:
Timeout options
- `readtimeout = 60`, close the connection if no data is received for this many
seconds. Use `readtimeout = 0` to disable.
with these examples given:
HTTP.request("GET", "http://httpbin.org/ip"; retries=4, cookies=true)
HTTP.get("http://s3.us-east-1.amazonaws.com/"; aws_authorization=true)
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
HTTP.put("http://httpbin.org/put", [], "Hello"; conf...)
So it should be working...

It definitely does not do what is expected.
There are multiple things going on here:
First off, the default for idempotent requests in HTTP.jl is to retry 4 times, so an HTTP.get will only fail after 5 * readtimeout. You can change this by passing retry = false in the arguments.
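Putting that together, a minimal sketch of a scan request that fails fast (untested here; the target IP is just the example from the question):

```julia
using HTTP

# Disable retries so one timed-out connection fails once, not 5 times.
# Note: whether the readtimeout value itself is honored depends on the
# HTTP.jl version, as discussed below for v0.8.6.
try
    HTTP.get("http://2.160.0.0:80/"; readtimeout = 10, retry = false)
catch e
    @info e
end
```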
The second thing I noticed is that the check for timed-out connections runs on a very long interval (see the TimeoutRequest layer): it only checks every 8 to 12 seconds for a timeout, so a timeout below 8 seconds does nothing. (I suspect these should be 8-12 milliseconds, not seconds as implemented.)
And finally the docs for 0.8.6 are missing for HTTP.request. I already made a PR to fix this.

How to test a React Snackbar did not appear with Cypress

My React web application reports errors to users via the Snackbar component. By default, Snackbars don't auto-hide, for accessibility reasons; but what if we do want to hide Snackbars automatically, using the autoHideDuration parameter? In my case, I'm using 6000 milliseconds (i.e. 6 seconds).
How can I use Cypress to verify that no error message appeared on the screen?
I tried to detect that no Snackbar appeared with the following logic:
function errorSnackbarDoesNotExist(errorMessagePrefix) {
  cy.get(".MuiSnackbar-root").should("not.exist");
  cy.get("[id=idErrorSnackbar]").should("not.exist");
  cy.get("[id=idErrorAlert]").should("not.exist");
  cy.contains(errorMessagePrefix).should("not.exist");
}
However, when I forced an error to ensure that this function would detect an actual error, it did not work: none of the assertions in errorSnackbarDoesNotExist() failed as I wanted them to.
I could not find a Cypress recipe for testing a Snackbar/Toast which is asynchronous.
I did try adding a { timeout: 10000 } to the cy.get() statements, but that didn't work. I thought this was supposed to wait for 10 seconds (which is longer than the 6 seconds of my autoHideDuration). It seems like the timeout was not working, as reported also as a Cypress issue Timeout option not respected for .should('not.exist') #7957.
Someone asked a similar question but they wanted to know how to manipulate the system internally (e.g. by a stub) to cause an error. In my case, I'm not asking about how to cause the error, I'm asking about how to detect that NO error was reported to the end user.
I got it to work by adding a short timeout instead of a long one, as follows:
function errorSnackbarDoesNotExist(errorMessagePrefix) {
  cy.get(".MuiSnackbar-root", { timeout: 1 }).should("not.exist");
  cy.get("[id=idErrorSnackbar]", { timeout: 1 }).should("not.exist");
  cy.get("[id=idErrorAlert]", { timeout: 1 }).should("not.exist");
  cy.contains(errorMessagePrefix, { timeout: 1 }).should("not.exist");
}
An article about the Cypress "should" assertion helped me understand that a short timeout, not a long timeout was what's needed. With the long timeout, Cypress may have detected the Snackbar but since it waited long enough for the Snackbar to disappear, maybe it only paid attention to the final state of the screen at the end of the timeout period.
I'll provide a deeper analysis as to why cy.get(".MuiSnackbar-root", { timeout: 10000 }).should("not.exist"); may not have been doing what I intended. Here's what I think happened, second by second:
Second | Activity
------ | --------
0      | Application threw an error and displayed the Snackbar. Snackbar detected, so should("not.exist") was false, but the timeout was 10 seconds, so it tried again.
1      | Snackbar still detected, but there are 9 seconds left to wait for cy.get(".MuiSnackbar-root", { timeout: 10000 }).should("not.exist") to become true.
2      | Snackbar still detected, but there are 8 seconds left to wait for cy.get(".MuiSnackbar-root", { timeout: 10000 }).should("not.exist") to become true.
...    | ...
6      | Snackbar hidden, and since cy.get(".MuiSnackbar-root", { timeout: 10000 }).should("not.exist") is now true, it stops waiting for the timeout and Cypress considers the validation passed.
So I think the solution is to change the Cypress timeout to a duration shorter than the autoHideDuration instead of longer. I found that a timeout of 1 millisecond was enough to make Cypress detect the unwanted Snackbar. I'm not sure if that is long enough. Maybe it needs to be something squarely in the middle of the 6 seconds.
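The second-by-second reasoning above can be sketched as a plain timeline loop (no Cypress involved; hideAt stands in for autoHideDuration, both in seconds, and this is only an illustration of the retry behavior, not Cypress's actual scheduler):

```javascript
// Simulate Cypress-style retrying of should("not.exist") against a
// snackbar that is visible from t=0 until t=hideAt (the auto-hide time).
// Returns true if the assertion eventually passes within `timeout`.
function retriesUntilNotExist(hideAt, timeout, step = 1) {
  for (let t = 0; t <= timeout; t += step) {
    const snackbarExists = t < hideAt;
    if (!snackbarExists) return true; // assertion passes, retrying stops
  }
  return false; // timed out while the snackbar was still visible
}

// A 10s timeout outlives the 6s auto-hide, so the test wrongly passes:
console.log(retriesUntilNotExist(6, 10)); // → true
// A 1s timeout only checks while the snackbar is still up, so it fails:
console.log(retriesUntilNotExist(6, 1)); // → false
```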
Just a note about { timeout: 10000 }: it's not a wait, it's the period of time that the command will retry if it fails.
On that basis, it's possible that if the snackbar is hidden when you trigger the error, the test code runs before the snackbar is displayed.
I'm assuming that even with the auto-hide feature, the snackbar will always appear on an error message and then disappear after 6 seconds.
In which case you need an on-off type test
function errorSnackbarDoesNotExist(errorMessagePrefix) {
  cy.get(".MuiSnackbar-root").should("be.visible"); // existence is implied
  cy.get(".MuiSnackbar-root", { timeout: 6100 }).should("not.exist");
  ...
}
Or if you don't like your test waiting the 6 seconds, try adding cy.clock()
function errorSnackbarDoesNotExist(errorMessagePrefix) {
  cy.get(".MuiSnackbar-root").should("be.visible"); // existence is implied
  cy.clock()
  cy.tick(6000) // move app timers +6s
  cy.get(".MuiSnackbar-root").should("not.exist");
  ...
  cy.clock().then(clock => clock.restore()) // unfreeze timers
}

Save Google App Script state while parsing an object array and continue where left off later on

I am using this simple Google Apps Script to parse through all available Google Sites and dump the HTML content of the individual pages. There are quite a few pages, so the script will eventually run into the 6 minute time limit.
Is it possible to somehow use the PropertiesService to save the current progress (especially in the array loops) and continue where left off later on?
var sites = SitesApp.getAllSites("somedomain.com");
var exportFolder = DriveApp.getFolderById("a4342asd1242424folderid-");
// Cycle through all sites
for (var i in sites) {
  var SiteName = sites[i].getName();
  var pages = sites[i].getAllDescendants();
  // Create folder in Drive for each site name
  var siteFolder = exportFolder.createFolder(SiteName);
  for (var p in pages) {
    // Get page name and url
    var PageUrl = pages[p].getUrl();
    // Dump the raw html content in the text file
    var htmlDump = pages[p].getHtmlContent();
    siteFolder.createFile(PageUrl + ".html", htmlDump);
  }
}
I can imagine how one could use the Properties Service to store the current line number in the spreadsheet and continue where it left off. But how can this be done with arrays containing objects like Sites or Pages?
Using Objects with Properties Service
According to the quotas, the maximum size of a value you can store in the Properties Service is 9 KB, with a total of 500 KB. So if your object is less than this size, it should be no problem. That said, you will need to convert the object to a string with JSON.stringify(), and when you retrieve it, use JSON.parse().
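A sketch of that round trip in plain JavaScript (the progress object here is hypothetical; note that Apps Script service objects such as a Site or Page won't serialize usefully with JSON.stringify, so store plain identifiers like names or URLs that let you fetch the objects again):

```javascript
// Stand-in for the state you'd save between runs: plain data only.
const progress = { siteIndex: 3, pageIndex: 17, siteName: "intranet" };

// What you would pass to setProperty('PROGRESS', ...):
const serialized = JSON.stringify(progress);

// What you would rebuild from getProperty('PROGRESS') on the next run:
const restored = JSON.parse(serialized);

console.log(restored.siteName); // → "intranet"
```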
Working around the run time limit
What is commonly done to work around the limit is to structure a process around the properties service and triggers. Essentially you make the script keep track of time, and if it starts to take a long time, you get it to save its position and then create a trigger so that the script runs again in 10 seconds (or however long you want), for example:
function mainJob(x) {
  let timeStart = new Date()
  console.log("Starting at ", timeStart)
  for (let i = x; i < 500000000; i++) { // NOTE THE i = x
    // MAIN JOB INSTRUCTIONS
    let j = i
    // ...
    // Check Time
    let timeCheck = new Date()
    if (timeCheck.getTime() - timeStart.getTime() > 30000) {
      console.log("Time limit reached, i = ", i)
      // Store iteration number
      PropertiesService
        .getScriptProperties()
        .setProperty('PROGRESS', i)
      console.log("stored value of i")
      // Create trigger to run in 10 seconds.
      ScriptApp.newTrigger("jobContinue")
        .timeBased()
        .after(10000)
        .create()
      console.log("Trigger created for 10 seconds from now")
      return 0
    }
  }
  // Reset progress counter
  PropertiesService
    .getScriptProperties()
    .setProperty('PROGRESS', 0)
  console.log("job complete")
}
function jobContinue() {
  console.log("Restarting job")
  const previousTrigger = ScriptApp.getProjectTriggers()[0]
  ScriptApp.deleteTrigger(previousTrigger)
  console.log("Previous trigger deleted")
  const triggersRemain = ScriptApp.getProjectTriggers()
  console.log("project triggers", triggersRemain)
  let progress = PropertiesService
    .getScriptProperties()
    .getProperty('PROGRESS')
  console.log("about to start main job again at i = ", progress)
  mainJob(Number(progress)) // getProperty returns a string, so convert back to a number
}

function startJob() {
  mainJob(0)
}
Explanation
This script only has a for loop with 500 million iterations in which it assigns i to j; it is just an example of a long job that potentially goes over the run time limit.
The script is started by calling function startJob which calls mainJob(0).
Within mainJob
It starts by creating a Date object to get the start time of the mainJob.
It takes the argument x (0 on the first run) and uses it to initialize the for loop, as you would normally initialize a for loop.
At the end of every iteration, it creates a new Date object to compare with the one created at the beginning of mainJob. In the example, it is set to see if the script has been running for 30 seconds, this can obviously be extended but keep it well below the limit.
If it has taken more than 30 seconds, it stores the value of i in the properties service and then creates a trigger to run jobContinue in 10 seconds.
After 10 seconds, the function jobContinue calls the properties service for the value for i, and calls mainJob with the value returned from the properties service.
jobContinue also deletes the trigger it just created to keep things clean.
This script should run as-is in a new project, try it out! When I run it, it takes around 80 seconds, so it runs the first time, creates a trigger, runs again, creates a trigger, runs again and then finally finishes the for loop.
References
quotas
JSON.stringify()
JSON.parse()
ScriptApp
Triggers
If you are able to process all pages of one site in under 6 minutes, then you could try saving the site names first in a sheet or in script properties (again, depending on the number), and keep processing n sites per run. You can also try SitesApp.getAllSites(domain, start, max) and save the start value in properties after incrementing it.
You can do something similar for pages if you cannot process them in under 6 minutes.
SitesApp.getAllDescendants(options)
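The start/max batching idea can be sketched with plain stand-ins (allSites replaces the SitesApp.getAllSites(domain, start, max) call and store replaces the script properties; both are hypothetical placeholders for the real services):

```javascript
// Process `max` sites per run and persist the next start index, mimicking
// SitesApp.getAllSites(domain, start, max) plus PropertiesService.
const allSites = ["a", "b", "c", "d", "e"]; // stand-in for the domain's sites
const store = {};                           // stand-in for script properties

function runBatch(max) {
  const start = Number(store.START || 0);
  const batch = allSites.slice(start, start + max);
  batch.forEach(function (site) {
    // export this site here (createFolder / createFile in the real script)
  });
  store.START = start + batch.length; // resume point for the next trigger run
  return batch;
}

console.log(runBatch(2)); // → ["a", "b"]
console.log(runBatch(2)); // → ["c", "d"]
console.log(runBatch(2)); // → ["e"]
```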

React + redux warning logic running after 60 seconds

The error is: "warning: logic (L(COUNTDOWN_ADD)-1) is still running after 60s, forget to call done()? For non-ending logic, set warnTimeout: 0"
I am building a countdown in which someone can input a start time and an end time. This may be running for an hour. Any idea why this console message is appearing?
This is intended behavior for v0.12. See changelog.
To fix it, just do what the error says: set warnTimeout to 0.
Here is the example.
const fooLogic = createLogic({
  ...
  warnTimeout: 0, // default: 60000 (one minute)
})

404 after 43 seconds TTFB

I have a script which uses simple_html_dom to parse data from different sites. It looks through my table of users, grabs the various sites needed, and then parses the data and stores it in my db.
The problem is that when I iterate through more than 3 users, I get a 404 error. After a lot of debugging (much of which I'm learning as I go), it looks like as soon as my TTFB hits 40 seconds I get a 404 Not Found error. Anything under that and the page returns fine.
I included the following in my php file to extend the time but this problem seems to ignore these statements.
// It may take a while to crawl a site ...
ini_set("memory_limit", "-1");
ini_set('max_execution_time', 300); // 300 seconds = 5 minutes
ini_set('max_input_time', -1); // -1 = no limit
set_time_limit(0);
But I've never had this problem before, where I get a 404 for a page that exists. I'm somewhat new to simple_html_dom and crawling through different pages, but is the problem that the wait time is too long? If so, how can I fix that? Thanks
So it did not have to do with execution time or any setting I could change in the PHP script. For anyone having the same issue, this was fixed by changing the way simple_html_dom loads the page, from:
$html = new simple_html_dom();
$html->load_file($url_link);
To:
$html = @file_get_contents($url_link);
$html = str_get_html($html);
Hope this helps someone else!

Indexeddb: Differences between onsuccess and oncomplete?

I use two different events for the callback to respond when the IndexedDB transaction finishes or is successful:
Let's say... db : IDBDatabase object, tr : IDBTransaction object, os : IDBObjectStore object
tr = db.transaction(os_name, 'readwrite');
os = tr.objectStore(os_name);
case 1:
r = os.openCursor();
r.onsuccess = function() {
  if (r.result) {
    callback_for_result_fetched();
    r.result.continue();
  } else callback_for_transaction_finish();
}
case 2:
tr.oncomplete = callback_for_transaction_finish;
It is a waste if both of them work similarly. So can you tell me, is there any difference between them?
Sorry for raising up quite an old thread, but its question is a good starting point...
I've looked for a similar question in a slightly different use case and found no good answers, or even misleading ones.
Think of a use case where you need to make several writes into one object store, or even into several. You definitely don't want to manage each single write with its own success and error events. That is the meaning of a transaction, and this is the (proper) implementation of it for IndexedDB:
var trx = dbInstance.transaction([storeIdA, storeIdB], 'readwrite'),
    storeA = trx.objectStore(storeIdA),
    storeB = trx.objectStore(storeIdB);
trx.oncomplete = function(event) {
  // this code will run only when ALL of the following requests succeed
  // and only AFTER ALL of them were processed
};
trx.onerror = function(error) {
  // this code will run if ANY of the following requests fails
  // and only AFTER ALL of them were processed
};
storeA.put({ key: keyA, value: valueA });
storeA.put({ key: keyB, value: valueB });
storeB.put({ key: keyA, value: valueA });
storeB.put({ key: keyB, value: valueB });
The clue to this understanding is found in the following statement of the W3C spec:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
While it's true these callbacks function similarly, they are not the same: the difference between onsuccess and oncomplete is that transactions complete, while requests, which are made on those transactions, succeed.
oncomplete is only defined in the spec as related to a transaction. A transaction doesn't have an onsuccess callback.
I would only caution that there is no guarantee that getting a successful trx.oncomplete means the data was written to the disk/database:
We are seeing a problem with trx.oncomplete where the data is not being written to the db on disk. Firefox has an explanation of what they did that is causing this problem here: https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction/oncomplete
It seems that Windows/Edge is also having the same issue. Basically, there is no guarantee that your app will have data written to the database if/when the user decides to kill or power down the device. We've even tried waiting up to 15 minutes before shutting down in some cases and haven't seen the data written. For me, I'd always want to ensure that a data write completes and is committed.
Are there other solutions for a real persistent database, or enhancements to the IndexedDB beyond FF experimental add...