App Engine not iterating over a cursor?

I run an App Engine query and get back an iterable result (wrec). The code reports the number of records correctly by iterating over it, but then the "for rec in wrec" loop never runs (no logging.info output appears from inside it).
There is also a GQL SELECT on the same table, with another cursor (wikiCursor), which jinja2 renders properly. Here is the part that doesn't work:
wrec = Wiki.all().ancestor(wiki_key()).filter('pagename >=', findPage).filter('pagename <', findPage + u'\ufffd').run()
foundRecs = sum(1 for _ in wrec)
logging.info("Class WikiPage: foundRecs is %s", foundRecs)
aFoundRecs = []
if foundRecs > 0:
    for rec in wrec:
        logging.info("Class WikiPage: value is %s", rec.pagename)
        aFoundRecs.append(rec.pagename)
    self.render("permalink.html", userRec=self.userRec, wikipage=pagename,
                wikiCursor=wikiCursor, wrec=wrec, foundRecs=foundRecs)
else:
    errorText = "Could not find anything for your entry."
    self.render("permalink.html", userRec=self.userRec, wikipage=pagename,
                wikiCursor=wikiCursor, error=errorText)
Here is a portion of the log, showing the first logging.info statement, but not the second:
INFO 2014-09-15 19:35:29,525 main.py:410] Class WikiPage: foundRecs is 3
INFO 2014-09-15 19:35:29,581 module.py:652] default: "POST / HTTP/1.1" 200 3058
Why is the wrec for loop not running?

When you use sum to count the results, the query iterator is consumed all the way to the end. That is expected behavior: iterating over it again yields nothing, because the iterator is already exhausted.
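A minimal sketch of one way around this, assuming the same Wiki model and handler as in the question: materialize the results into a list once, then both the count and the loop work from that list.

query = Wiki.all().ancestor(wiki_key()) \
            .filter('pagename >=', findPage) \
            .filter('pagename <', findPage + u'\ufffd')

# Materialize the results once; unlike the iterator returned by run(),
# a list can be counted and iterated over as many times as needed.
recs = list(query.run())

foundRecs = len(recs)
logging.info("Class WikiPage: foundRecs is %s", foundRecs)

aFoundRecs = [rec.pagename for rec in recs]

Alternatively, query.fetch(some_limit) returns a list directly.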

Related

lapply calling .csv for changes in a parameter

Good afternoon
I am currently trying to pull some data from Pushshift, but I am maxing out at 100 posts. Below is the code for pulling one day, which works great.
testdata1<-getPushshiftData(postType = "submission", size = 1000, before = "1546300800", after= "1546200800", subreddit = "mysubreddit", nest_level = 1)
I have a list of Unix timestamps (UTCs) for the beginning and end of each day for a month. What I would like is the syntax to substitute each day's values for "after" and "before", and to append each day's results to the end of the pulled data. Even if it placed the data in a bunch of separate smaller datasets, I could work with that.
Here is my (feeble) attempt. "links" is the data frame with the UTCs
mydata<- lapply(1:30, function(x) getPushshiftData(postType = "submission", size = 1000, after= links$utcstart[,x],before = links$utcendstart[,x], subreddit = "mysubreddit", nest_level = 1))
Here is the error message I get: Error in links$utcstart[, x] : incorrect number of dimensions
I've also tried without the "function (x)" argument and get the following message:
Error in ifelse(is.null(after), "", sprintf("&after=%s", after)) :
object 'x' not found
Can anyone help with this?

How to use a Gatling feeder so that each user uses the same data for the entire duration?

In Gatling, how do I achieve the following?
sample_testdata.csv
orderId
101112
111213
121314
131415
Sample test running with 4 users and multiple iterations:
user1 should use orderId 101112 for all the iterations
user2 should use orderId 111213 for all the iterations
and so on ...
I am not able to find a "once per user" strategy in the feeder.
code:
scenario("Get Art")
  .during(test_duration minutes) {
    feed(fdr_arts)
      .exec(_.set("hToken", hToken))
      .exec(_.set("hTimeStamp", hTimeStamp))
      .exec(_.set("gToken", gToken))
      .exec(http("Get Arts")
        .post(getArtUrl))
  }
Your scenario includes .during - which is a looping construct - and inside that you're calling feed. So each user is going to keep on looping for test_duration and on each loop it's going to pull the next value from the feeder.
To get the behaviour you want, you need to put the feeder before the loop...
scenario("Get Art")
.feed(fdr_arts)
.during(test_duration minutes) {
.exec(_.set("hToken",hToken))
.exec(_.set("hTimeStamp",hTimeStamp))
.exec(_.set("gToken", gToken))
.exec(actionBuilder = http("Get Arts")
.post(getArtUrl)
}
val txn_getArt = exec(_.set("hToken", hToken))
  .exec(_.set("hTimeStamp", hTimeStamp))
  .exec(_.set("gToken", gToken))
  .exec(http("Get Arts")
    .post(getArtUrl))

// Chaining it after the feeder does the trick in the scenario
scenario("Get Art").repeat(5000) { feed(fdr_arts).exec(txn_getArt) }

Unable to extract/list all event logs on a Watson Assistant workspace

Please help. I was trying to call the Watson Assistant endpoint
https://gateway.watsonplatform.net/assistant/api/v1/workspaces/myworkspace/logs?version=2018-09-20 to get the list of all events,
filtered by date range using these params:
var param =
  { workspace_id: '{myworkspace}',
    page_limit: 100000,
    filter: 'response_timestamp%3C2018-17-12,response_timestamp%3E2019-01-01' }
Apparently, I get the empty response below:
{
  "logs": [],
  "pagination": {}
}
A couple of things to check:
1. You have 2018-17-12, which read as an ISO date (YYYY-MM-DD) means "the 12th day of the 17th month of 2018" - not a valid date.
2. Even assuming the date were valid, your search asks for "documents that are before 17th Dec 2018 AND after 1st Jan 2019", which can never match, so it would return no documents.
3. Logs are only generated when you call the message() method through the API. So check the logging page in the tooling to see whether you even have logs.
4. If you have a Lite account, logs are only stored for 7 days and then deleted. To keep logs longer, you need to upgrade to a Standard account.
Although not directly related to your issue, be aware that page_limit has a hard-coded upper limit (IIRC 200-300?). So you may ask for 100,000 records, but the API won't give them to you.
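For illustration, a corrected version of the params might look like this (a sketch only: it assumes the intended range was 17th Dec 2018 to 1st Jan 2019, keeps the URL-encoded %3E/%3C comparison syntax from the question, and uses a hypothetical page_limit below the server cap):

param = {
    'workspace_id': '{myworkspace}',  # placeholder from the question
    'page_limit': 100,                # stay under the server-side cap
    # Valid ISO dates, with the comparisons the right way round:
    # after 17th Dec 2018 AND before 1st Jan 2019.
    'filter': 'response_timestamp%3E2018-12-17,response_timestamp%3C2019-01-01'
}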
This is sample Python code (unsupported) that uses pagination to read the logs:
from urllib.parse import urlparse, parse_qs
from watson_developer_cloud import AssistantV1

username = '...'
password = '...'
workspace_id = '....'
url = '...'
version = '2018-09-20'

c = AssistantV1(url=url, version=version, username=username, password=password)

totalpages = 999
pagelimit = 200

logs = []
page_count = 1
cursor = None
count = 0
x = { 'pagination': 'DUMMY' }

# Keep fetching pages until the service stops returning pagination info.
while x['pagination']:
    if page_count > totalpages:
        break
    print('Reading page {}. '.format(page_count), end='')
    x = c.list_logs(workspace_id=workspace_id, cursor=cursor, page_limit=pagelimit)
    if x is None:
        break
    print('Status: {}'.format(x.get_status_code()))
    x = x.get_result()
    logs.append(x['logs'])
    count = count + len(x['logs'])
    page_count = page_count + 1
    # Pull the cursor for the next page out of the pagination URL.
    if 'pagination' in x and 'next_url' in x['pagination']:
        p = x['pagination']['next_url']
        u = urlparse(p)
        query = parse_qs(u.query)
        cursor = query['cursor'][0]
Your logs object should contain the logs.
I believe the limit is 500, after which we return a pagination URL so you can get the next 500. I don't think this is the issue, but it's good to know once you start getting logs back.

Retry loop until condition met

I am trying to move my mouse onto an object, but I want a condition that checks whether "surowiec" is still on the screen; if not, I want to skip this loop and go on to another one, and after that one finishes, come back to the first and repeat.
[error] script [ Documents ] stopped with error in line 12 [error] FindFailed ( can not find surowiec.png in R[0,0 1920x1080]#S(0) )
w_lewo = Location(345,400)
w_prawo = Location(1570,400)
w_gore = Location(345,400)
w_dol = Location(345,400)
surowiec = "surowiec.png"

while surowiec:
    if surowiec == surowiec:
        exists("surowiec.png")
        if exists != None:
            click("surowiec.png")
            wait(3)
            exists("surowiec.png")
        elif exists == None:
            surowiec = None
            click(w_prawo)
            wait(8)
            surowiec = surowiec
How about a small example:
while True:
    if exists(surowiec):
        print('A')
        click(surowiec)
    else:
        print('B')
        break
A while True loop will keep running until it meets a break that exits the loop. Also have a look at the functions that are available in Sikuli; it can sometimes be hard to discover what is available. Here are a few nice ones:
Link: Link 1 and Pushing keys and Regions
The commands that I found very useful are exists (and its negation, "not exists") and find, which lets you locate an image on the screen. Then you don't have to search for the image over and over again if it stays in the same location: image1 = find(surowiec)
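Putting those pieces together, a minimal sketch of the loop the question describes (assuming Sikuli's exists/click/wait functions, and the image name and w_prawo location from the question):

surowiec = "surowiec.png"
w_prawo = Location(1570, 400)

while True:
    if exists(surowiec):
        # The target is on screen: click it and give the game time to react.
        click(surowiec)
        wait(3)
    else:
        # Target not found: move the view to the right, wait, and retry.
        click(w_prawo)
        wait(8)

Because exists returns None instead of raising FindFailed when the image is missing, the script no longer stops with the error from the question.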

Check if there are list elements different from character(0)

I have a function to update the database for a list of IDs (max 1,000 elements at a time, because of Oracle's IN-list limit):
## Split the data frame into chunks of 1,000 elements max.
chunk <- split(df, ceiling(seq_along(df$id) / 1000))
res <- lapply(chunk,
              function (x) sqlQuery(database,
                  paste0(
                      "UPDATE sometable
                       SET flag = '", batch.id, "'\n ",
                      "WHERE id IN (", paste0("'", x$id, "'", collapse=", "), ")")))
I'd like to check that my updates went well, but I did not find a way to get the count of updated rows via RODBC calls.
Thus, as a (satisfying!?) workaround, I'm looking at the text results of sqlQuery, and understand that when everything goes well, I get a list of character(0) results.
Now, how to assert that all the UPDATE queries were correctly done?
I tried using any or all calls, with no success:
> all(res == character(0))
[1] TRUE
> all(res != character(0))
[1] TRUE
> any(res != character(0))
[1] FALSE
> any(res == character(0))
[1] FALSE
I don't understand why, whatever my condition, the result is always TRUE (for all) or FALSE (for any).
I tried all.equal with no more success:
> all.equal(character(0), res)
[1] "Modes: character, list" "Lengths: 0, 1" "names for current but not for target" "target is character, current is list"
You have to lapply over res, as in lapply(res, identical, character(0L)). (Comparing against character(0) with == yields a zero-length logical vector, and all(logical(0)) is TRUE while any(logical(0)) is FALSE, which is why every variant you tried looked degenerate.)
You can try:
all(vapply(res, length, 1L) == 0)
This evaluates the length of each component of res and returns TRUE if all the elements have zero length (as character(0) does).
