Inserting an idle time when looping - database

I have written VBA code to generate database entries from an Excel list: due to the way the database is set up, I first need to generate a URL and then open it to create the online DB entry.
I could loop through the entire list, but I am afraid that launching 50 or more consecutive instances of IE in this loop will hang the computer. It's worth mentioning that when a single URL is opened manually, the DB takes a few seconds to display the proper page.
Is there a way to define an idle time at the end of each iteration?

What about Application.Wait? For example, this counts from 0 to 10, pausing 5 seconds after each iteration:
Sub main()
    Dim i As Integer
    For i = 0 To 10
        MsgBox i
        Application.Wait (Now() + TimeValue("0:00:05"))
    Next i
End Sub

Related

Generate Sequential Application number

I have a requirement where I need to generate a unique application number in a sequential format.
Sample application number: APP-Date-0001. The 0001 part keeps increasing throughout the day, and the counter should reset the next day, starting again from 0001 with the current date.
The problem occurs when two users create an application at the same time.
Keep the counter and the last date it was used in a custom setting or similar object.
But access that custom setting with a normal SOQL query, not via the special custom setting methods like getInstance().
Finally, use FOR UPDATE in that SOQL query: https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_for_update.htm
If two operations start at the same time, one will be held until the other finishes or a timeout occurs.
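The date-reset arithmetic itself is simple; here is a minimal Python sketch of it (the real implementation would be Apex, with the FOR UPDATE query providing the lock; next_app_number and the YYYYMMDD date format are illustrative assumptions):

```python
from datetime import date

def next_app_number(last_date, counter, today=None):
    """Return (new_date, new_counter, formatted_number), resetting the
    counter to 1 on the first request of a new day."""
    today = today or date.today()
    if last_date != today:
        counter = 0  # first request of a new day: start over
    counter += 1
    return today, counter, "APP-%s-%04d" % (today.strftime("%Y%m%d"), counter)
```

Under the FOR UPDATE lock, two simultaneous requests serialize, so each one sees the counter value the other committed.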

Show second to last item in Influx query (or ignore last)

I am using Grafana to show the number of entries added to the database every minute, and I would like to display the most recent fully counted value.
If I give the following command:
SELECT count("value") FROM "numSv" GROUP BY time(1m)
1615904700000000000 60
1615904760000000000 60
1615904820000000000 60
1615904880000000000 60
1615904940000000000 36
Grafana displays the last entry, which is still in the process of being counted. How can I display the n[-1] entry, which has been fully counted?
Alternatively, how do I ask Influx to return the same results excluding the last data point?
P.S.: Using WHERE time > now() - 60s, etc... doesn't work.
Use "magic" Grafana time range math and set the dashboard time range from now-1m/m to now-1m/m. That generates an absolute time range referring to the last fully counted minute. The query is then standard, using the $timeFilter Grafana macro:
SELECT count("value") FROM "numSv" WHERE $timeFilter
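If you query Influx directly (outside Grafana), another option is to drop the last bucket client-side. A minimal sketch, assuming the rows arrive as (timestamp, count) tuples sorted ascending by time:

```python
def drop_incomplete_bucket(rows):
    """Keep only fully counted buckets by discarding the newest one.

    rows: (timestamp, count) tuples as returned by the GROUP BY time(1m)
    query, sorted ascending; the last bucket is still accumulating.
    """
    return rows[:-1] if rows else []

rows = [
    (1615904700000000000, 60),
    (1615904760000000000, 60),
    (1615904820000000000, 36),  # still counting
]
print(drop_incomplete_bucket(rows)[-1][1])  # prints 60
```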

How to run a cron command every hour taking script execution time in account?

I have a bunch of data to monitor. My statistics can only be retrieved every hour but can change every second, and I want to store as many values as I can for each data set in a database.
I've thought about several approaches to this problem, and I finally chose to refresh and read all statistics at once instead of reading them independently.
So I came up with a command, mycommand, which reads all my statistics at the cost of several minutes (let's say 30) of execution. Now I would like to run this script every hour, taking the script's execution time into account.
I actually run
* */1 * * * mycommand.sh
and receive many annoying error emails (one every hour), and I effectively retrieve my statistics only every 2 hours.
An hour and a half is half of three hours. So you could have two entries in crontab(5) running the same /home/gogaz/mycommand.sh script: one running it at 1, 4, 7, ... o'clock (every 3 hours from 1am) and another running it at 2:30, 5:30, 8:30, ... (every 3 hours from 2:30am).
Writing these entries is left as an exercise for the reader.
See also anacrontab(5) and at(1). For example, you might run your script once using batch, but terminate your script with an at command rescheduling that same script (the drawback is the handling of unexpected errors).
If you redirect your stdout and stderr in your crontab entry, you won't get any emails.
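For reference, the two entries sketched above could look like this (a sketch only; minute and hour fields use crontab(5) range/step syntax, and the path is the one from the answer):

```
# every 3 hours starting at 01:00 -> 1, 4, 7, 10, 13, 16, 19, 22
0 1-22/3 * * * /home/gogaz/mycommand.sh
# every 3 hours starting at 02:30 -> 2:30, 5:30, 8:30, ...
30 2-23/3 * * * /home/gogaz/mycommand.sh
```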

Load time variance with .CacheSize/.PageSize in ADODB.Recordset

I am working on a project for a client using a classic ASP application I am very familiar with, but in his environment it is performing more slowly than I have ever seen in a wide variety of other environments. I'm working on it with many solutions; however, the sluggishness got me looking at something I've never had to examine before -- it's more of an "academic" question.
I am curious to understand why a category page with, say, 1800 product records takes ~3 times as long to load as a category page with, say, 54, when both are set to display 50 products per page. That is, when the number of items to loop through is the same, why does the total number of records make a difference in the time to display a constant number of products?
Here are the methods used, abstracted to the essential aspects:
SELECT {tableA.fields} FROM tableA, tableB WHERE tableA.key = tableB.key AND {other refining criteria};
set rs = Server.CreateObject("ADODB.Recordset")
rs.CacheSize = iPageSize
rs.PageSize = iPageSize
pcv_strPageSize = iPageSize
rs.Open query, connObj, adOpenStatic, adLockReadOnly, adCmdText
dim iPageCount, pcv_intProductCount
iPageCount = rs.PageCount
If CInt(iPageCurrent) > CInt(iPageCount) Then iPageCurrent = CInt(iPageCount)
If CInt(iPageCurrent) < 1 Then iPageCurrent = 1
If Not rs.EOF Then
    rs.AbsolutePage = CInt(iPageCurrent)
    pcArray_Products = rs.GetRows()
    pcv_intProductCount = UBound(pcArray_Products, 2) + 1
End If
set rs = nothing
tCnt = CInt(0)
count = CInt(0)
Do While (tCnt < pcv_intProductCount) And (count < pcv_strPageSize)
    {display stuff}
    tCnt = tCnt + 1
    count = count + 1
Loop
The record set is converted to an array via GetRows() and then destroyed; the number of records displayed will always be iPageSize or fewer.
Here's the big question:
Why, on the initial page load, does it take significantly longer to loop through one page (say 50 records) for the larger record set (~1800 records) than for the smaller one (~54 records)? It runs from 0 to 49 either way, but takes much longer the larger the initial record set/GetRows() array is, even though it loops through the same number of rows before exiting.
Running MS SQL Server 2008 R2 Web edition
You are not actually limiting the number of records returned, and it simply takes longer to load 36 times as many records. You should change your query to limit the records directly, rather than retrieving all of them and terminating your loop after the first 50.
Try this:
SELECT *
FROM
    (SELECT {tableA.fields}, ROW_NUMBER() OVER (ORDER BY tableA.key) AS RowNum
     FROM tableA
     INNER JOIN tableB
         ON tableA.key = tableB.key
     WHERE {other refining criteria}) AS ResultSet
WHERE RowNum BETWEEN 1 AND 50
Also make sure the columns you are using to join are indexed.
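The BETWEEN 1 AND 50 bounds above are for the first page; for an arbitrary page, the bounds follow from the page number and page size. A quick sketch of that arithmetic (page_bounds is an illustrative name):

```python
def page_bounds(page, page_size):
    """1-based page number -> inclusive (first, last) row numbers
    for the ROW_NUMBER() BETWEEN clause."""
    first = (page - 1) * page_size + 1
    return first, first + page_size - 1

print(page_bounds(1, 50))  # (1, 50)
print(page_bounds(3, 50))  # (101, 150)
```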

GAE Task Queues how to make the delay?

In Task Queues, code is executed to connect to the server side through URL Fetch.
My queue.yaml file:
queue:
- name: default
  rate: 10/m
  bucket_size: 1
With these settings, the tasks are all executed at once, simultaneously.
The specific requirement is that there must be a delay of at least 5 seconds between requests: tasks must run one after another, more than 5 seconds apart (not in parallel).
What values should be set in queue.yaml?
You can't currently specify minimum delays between tasks in queue.yaml; you should do it (partly) in your own code. For example, if you specify a bucket size of 1 (so that more than one task never executes at once) and make sure each task runs for at least 5 seconds (record start = time.time() at the beginning and call time.sleep(max(0, start + 5 - time.time())) at the end), this should work. If it doesn't, have each task record in the datastore the timestamp at which it finished, and when a task starts, check whether the last task ended less than 5 seconds ago; in that case, terminate immediately.
The other way would be to store the task data in a table. In your task queue, add an id parameter. Fetch the first task from the table and pass its id to the task-queue processing servlet. In the servlet, at the end, delay for 5 seconds, fetch the next task, pass its id, and so on.
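The pad-to-minimum-duration idea from the first answer can be sketched as follows (run_task, do_work, and min_seconds are illustrative names; in the real handler, the work is the URL Fetch call):

```python
import time

def run_task(do_work, min_seconds=5):
    """Run do_work, then sleep so the task lasts at least min_seconds.

    With bucket_size: 1, at most one task executes at a time, so padding
    each task to min_seconds spaces consecutive tasks that far apart.
    """
    start = time.time()
    do_work()
    remaining = min_seconds - (time.time() - start)
    if remaining > 0:
        time.sleep(remaining)
```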
