How to apply an IF...ELSE condition in MAF Cucumber - database

Currently I have two cases, detailed as follows.
I run an API with a file upload. After the API runs successfully, one of two statuses can be displayed:
status = waiting
status = ready_to_process (this is the correct status if the system does not have a problem)
NOTE: The status varies because the system sometimes can't change it at the moment the file is uploaded, but the upload is still recorded to the DB and completes successfully.
Correspondingly, the data change in the database is also different:
Changed to the role: unknow
Changed to the role: pass
How can I write if...else steps in Cucumber using the MAF framework, like:
If the API run returns status 1 => verify result 1 in the DB
Else
verify result 2 in the DB

You can't write conditional code in Cucumber feature files. What you should be doing instead is writing a scenario for each condition. So you should write something like:
Scenario: Run ends with waiting status
...
Scenario: Run ends with ready to process status
...
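Spelled out as a sketch, the two scenarios could look like this. The step wording is assumed, since it depends on the step definitions MAF provides for your project; the status and role values are taken from the question:

```gherkin
Feature: Verify DB state after file upload

  Scenario: Run ends with waiting status
    When the upload API responds with status "waiting"
    Then the role in the database should be "unknow"

  Scenario: Run ends with ready to process status
    When the upload API responds with status "ready_to_process"
    Then the role in the database should be "pass"
```

Each run of the suite exercises both branches deterministically, which is exactly what an if/else inside one scenario cannot guarantee.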

Related

Getting an issue while re-running the request - JMeter

Scenario:
After logging into the application, the system checks the number of pending files, and if any pending files are present they get assigned to the user. The maximum number of files that can be assigned to a user is two.
Once the user processes the first file, the system again checks for pending files, and if any are present they get assigned to the user.
For this I used a Loop Controller, but it is not working.
Thread Group
  HTTP Request - Login
  JDBC Connection Configuration
  JDBC Request
    JSR223 PostProcessor
      List<String> fileIDListresultSet = vars.getObject("File_ID")
      vars.put("fileIDListSize", String.valueOf(fileIDListresultSet.size()))
  Loop Controller (loop count: ${fileIDListSize})
    HTTP Request - 1 Lock File
      JSR223 PreProcessor
        def counterVal = vars.get("fileIDCounter") as int
        def fileIDListresultSet = vars.getObject("File_ID").get(counterVal).get("FileId")
        vars.put("fileId", fileIDListresultSet.toString())
    HTTP Request - 2 Process File
    JDBC Request
      JSR223 PostProcessor
        List<String> fileIDListresultSet = vars.getObject("File_ID")
        vars.put("fileIDListSize", String.valueOf(fileIDListresultSet.size()))
  Counter
At first glance your Counter placement is incorrect: it needs to be a child of the Loop Controller. See the JMeter Scoping Rules - The Ultimate Guide article for more details.
You don't even need the Counter, as the Loop Controller exposes a special variable holding the current iteration number: ${__jm__Loop Controller__idx}
In any case, we have no way of helping you unless you show the values of File_ID, fileIDListSize, fileIDCounter, FileId and fileId for each iteration (these can be obtained using the Debug Sampler), along with the jmeter.log file contents.
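Assuming the JDBC Request stores its rows in File_ID as a list of row maps (which matches the .get(counterVal).get("FileId") access already in the question), the JSR223 PreProcessor could use the Loop Controller's built-in zero-based index instead of the Counter - a sketch:

```groovy
// Sketch of the JSR223 PreProcessor: use the Loop Controller's
// built-in zero-based iteration index instead of a Counter element.
def idx = vars.get('__jm__Loop Controller__idx') as int
def rows = vars.getObject('File_ID')   // rows stored by the JDBC Request
vars.put('fileId', rows.get(idx).get('FileId').toString())
```

This removes the need to keep a separate fileIDCounter variable in sync with the loop.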

Validate Stored Procedure Success in PowerApps

I am fairly new to PowerApps, but it sounds like there is a major limitation on being able to return values from a SQL Server stored procedure.
I have an app where pushing a button pulls data from various controls on screen and submits it to a stored procedure. This is done by invoking a flow. The code is basically:
EditPuddles.Run(ActionDrop.Selected.Value, PuddlesText.Text,
ClassicDrop.Selected.Value, ServiceRates.Text, User().FullName)
The code works and does what it is supposed to. However, what I need now more than anything is for it to tell me when it fails or succeeds.
Ideally I would have it return a value that I could use to determine whether to display a success or failure message. I get that I cannot return a data set, but it must at least be able to tell whether there was an error.
I'm also new to PowerApps and Power Automate, but I figured out a way to show results on success and an error message on failure after a flow is executed.
PowerApps
For your example, the code for the "OnSelect" property of the button should be:
UpdateContext({ PuddlesResult: EditPuddles.Run("a", "b", "c") });
If(
    Not IsBlank(PuddlesResult.errormessage),
    // on failure:
    Notify(PuddlesResult.errormessage, NotificationType.Error, 5000),
    // on success: (use PuddlesResult.ResultSets.Table1)
    Notify("All good", NotificationType.Information, 5000)
)
Power Automate
Your flow in Power Automate should look like this:
Capture the error message
On error, the action "Execute stored procedure (V2)" returns this output:
To return the message in the output, update the action "Respond to a PowerApp or flow":
Text: ErrorMessage
Expression: outputs('Execute_stored_procedure_(V2)')?['body']?['message']
Return the error message only on failure/timeouts
Select "Configure run after" on the action "Respond to a PowerApp or flow":
And set it to run this action only on failures and timeouts:
For returning normal results you can use the action "Response" and set its "Configure run after" settings to "Is successful".
I hope this helps you and others, as it took me a long time to figure this out too. It will allow you to handle success and failure appropriately in your PowerApp.

Gatling: Cannot print a response from WebSocket server

I'm using the following code in Gatling:
.exec(ws("Open WS connection")
.open("/${session_id}/socket?device=other"))
.pause(2)
.exec(ws("Get client browser id")
.sendText("[]")
.check(wsListen.within(10).until(1).jsonPath("$.[2]").saveAs("clientID")))
It does not report any failure, so I assume the JSON value was stored in the clientID variable successfully.
When I add
.exec { session =>
    println("clientID: " + session("clientID").as[String])
    session
}
I get error
[ERROR] i.g.c.a.b.SessionHookBuilder$$anon$1 - 'hook-1' crashed with 'java.util.NoSuchElementException: key not found: clientID', forwarding to the next one
This call works in JMeter.
Please help.
I guess you have to reconciliate ws branch and main branch:
https://gatling.io/docs/2.3/http/websocket/#reconciliate
As stated in the ref doc:
As a consequence, main HTTP branch and a WebSocket branch can exist in a Gatling scenario in a dissociated way, in parallel. When doing so, each flow branch has it’s own state, so a user might have to reconcile them, for example when capturing data from a WebSocket check and wanting this data to be available to the HTTP branch.

App Engine generating infinite retries

I have a backend that is normally invoked by a cron to run a few times every day. Yesterday I noticed it was restarting without stopping. I don't see a place in my code where that invocation happens; rather, the task queue seems to indicate it is running due to retries caused by errors. One error is that status is saved to BigQuery, and that is failing because a quota is exceeded. But this seems to generate an infinite loop. Is this a bug in App Engine, or am I doing something wrong? Is there a way to indicate that a task should not be restarted if it fails? My other App Engine tasks that terminate without a 200 status don't do that...
Here is a trace of the queue from which the restarts keep happening:
Here is the logging showing continuous running
And here is the http header inside the logging
UPDATE1
Here is the cron:
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/uploadToBigQueryStatus</url>
    <description>Check fileNameSaved Status</description>
    <schedule>every 15 minutes from 02:30 to 03:30</schedule>
    <timezone>US/Pacific</timezone>
    <target>checkuploadstatus-backend</target>
  </cron>
</cronentries>
UPDATE 2
As for the comment about catching the error: the error, I believe, is that the BigQuery job fails because a quota has been hit. The strange thing is that it happened yesterday, and the quota should have been reset since, so the error should have gone away for at least a while. I don't understand why the task retries; I never selected that option, as far as I am aware.
I killed the servlet and emptied the task queue, so at least it is stopped. But I don't know the root cause. If the BQ table quota was the reason, that shouldn't cause an infinite retry!
UPDATE 3
I have not trapped the servlet call that produced the error that led to the infinite retry. But I checked this cron-activated servlet today and found another non-200 result. The return value this time was 500, and it was caused by a Datastore timeout exception.
Here is the screen shot of the return that show 500 return code.
Here is the exception info page 1
And the following data
The offending line of code is the for loop iterating over the Datastore query results:
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    for (Entity result : pq.asIterable()) {
I will add a try-catch around this for loop, as it is crashing in this iteration:
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    try {
        for (Entity result : pq.asIterable()) {
            // ... existing result handling ...
        }
    } catch (DatastoreTimeoutException e) {
        // Datastore timed out; record the failure instead of letting the servlet crash
    }
}
Hopefully the Datastore read will no longer crash the servlet, though it will still register as a failure. At least the cron will run again and pick up the other non-handled results.
By the way, is this a Java error or an App Engine one? I see a lot of these Datastore timeouts, and I will add a try-catch around all the result loops. Still, it should not cause the infinite retry that I experienced. I will see if I can find the actual crash... The problem is that it overloaded my logging... More later.
UPDATE 4
I went back to the logs to see when the infinite loop began. In the logs below, I opened the run at the head of the continuous running. You can see that it fails with 500 every 5th time. It was not the cron that invoked it; it was me calling the servlet to check BigQuery upload status (I write the job info to the Datastore, then read it back in the servlet, write the job status to BigQuery and, if done, erase the Datastore entry). I cannot explain the steady 500 errors every 5th call, but it is always the Datastore timeout exception.
UPDATE 5
Can the infinite retries be happening because of the queue configuration?
CheckUploadStatus
20/s
10
100
10
200
2
I just noticed another task queue had a 500 return code and was continuously retrying. I did some searching and found that some people have tried to configure the queues for no retry. They said that didn't work.
See this link:
Google App Engine: task_retry_limit doesn't work?
But is one retry possible? That would be far better than infinite.
It is contradictory that Google enforces quotas but seems to prefer infinite retries. I would much prefer blocking retries by default on a non-200 return code, and then having NO QUOTAS!!!
According to Retrying cron jobs that fail:
If a cron job's request handler returns a status code that is not in
the range 200–299 (inclusive) App Engine considers the job to have
failed. By default, failed jobs are not retried.
To set failed jobs to be retried:
Include a retry-parameters block in your cron.xml file.
Choose and set the retry parameters in the retry-parameters block.
Your cron config doesn't specify the necessary retry parameters, so the jobs returning the 500 code should, indeed, not be retried, as you expect.
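For reference, such a retry-parameters block in cron.xml would look like the sketch below, based on the App Engine cron reference (the element values here are illustrative; your config correctly contains none of this):

```xml
<cron>
  <url>/uploadToBigQueryStatus</url>
  <schedule>every 15 minutes from 02:30 to 03:30</schedule>
  <target>checkuploadstatus-backend</target>
  <retry-parameters>
    <job-retry-limit>1</job-retry-limit>
    <min-backoff-seconds>2.5</min-backoff-seconds>
  </retry-parameters>
</cron>
```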
So this looks like a bug, possibly a variant of the (older) known issue 10075 - the 503 code mentioned there might have changed in the meantime - and that one is also a quota-related failure.
The suggestion from GAEfan's comment is likely a good workaround:
You will need to catch the error, and send a 200 response to stop the
task queue from retrying. – GAEfan

How can I prevent accidentally overwriting an already existing database?

I'm adding BaseX to an existing web application and am currently writing code to import data into it. The documentation is crystal clear that
An existing database will be overwritten.
Finding this behavior mind-bogglingly dangerous, I tried it, hoping that the documentation was wrong, but unfortunately my test confirmed it. For instance, using basexclient I can do this:
> create db test
Database 'test' created in 12.03 ms.
> create db test
Database 'test' created in 32.43 ms.
>
I can also replicate this behavior with the Python client, which is what I'm actually using for my application. Reducing my code to the essentials:
session = BaseXClient.Session("127.0.0.1", 1984, "admin", "admin")
session.create("test", "")
It does not matter whether test exists or not; the whole thing is overwritten if it does.
How can I work around this dangerous default behavior? I would like to prevent the possibility of missteps in production.
You can issue a list command before you create your database. For instance, with the command-line client, if the database does not exist:
> list foo
Database 'foo' was not found.
Whereas if the database exists:
> list test
Input Path Type Content-Type Size
------------------------------------
This is a database that is empty, so the listing does not show any contents, but at least you do not get the error message. When you use a client library you have to check whether the command errors out or not. With the Python client you could do:
def exists(session, db):
    try:
        session.execute("list " + db)
    except IOError as ex:
        # str(ex) rather than ex.message, so this also works on Python 3
        if str(ex) == "Database '{0}' was not found.".format(db):
            return False
        raise
    return True
The client raises IOError whenever the server reports an error, which is a very generic way to signal a problem, so you have to test the error message to figure out what is going on. We re-raise if the error message is not the one pertaining to our test; this way we don't swallow exceptions caused by unrelated issues.
With that function you could do:
session = BaseXClient.Session("127.0.0.1", 1984, "admin", "admin")
if exists(session, "test"):
    raise SomeRelevantException("Oi! You are about to overwrite your database!")
session.create("test", "")
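To see the whole guard in action without a live server, here is a self-contained sketch in which FakeSession is a stand-in (invented here) for BaseXClient.Session that mimics only the error behavior of the list command:

```python
class FakeSession:
    """Stand-in for BaseXClient.Session; only mimics the 'list' command."""

    def __init__(self, existing):
        self.existing = set(existing)

    def execute(self, command):
        _, db = command.split(" ", 1)
        if db not in self.existing:
            # The real server makes the client raise IOError with this message.
            raise IOError("Database '{0}' was not found.".format(db))
        return ""


def exists(session, db):
    try:
        session.execute("list " + db)
    except IOError as ex:
        if str(ex) == "Database '{0}' was not found.".format(db):
            return False
        raise  # unrelated problem: do not swallow it
    return True


session = FakeSession(existing=["test"])
print(exists(session, "test"))   # prints True
print(exists(session, "other"))  # prints False
```

Swapping FakeSession for a real BaseXClient.Session leaves exists() unchanged, which also makes the guard easy to unit-test.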
