I just started using SpecFlow along with Selenium and NUnit.
I have a basic question, maybe I know the answer to it, but want to get it confirmed.
Consider two features - Register and Add new transaction. Two separate features with their respective step definitions. How do I share the IWebDriver instance across both features?
I do not want to launch a new browser again and add a transaction. I want to execute both as a flow.
My feeling is that SpecFlow does not allow this, since running two features in the same session violates the basic idea of feature-based testing. Will context injection help in this case?
What you want to do is a bad idea. You should start a new browser session for each feature, IMHO.
There is no guarantee about the order in which your tests will execute; this is decided by the test runner, so you might get Feature2 running before Feature1.
In fact your scenarios should be independent as well.
You can share the WebDriver instance between steps as in this answer, but you should use SpecFlow's other features, like the scenario Background, to do setup, or define steps which do your common setup.
EDIT
We have a similar issue with some of our tests and this is what we do:
We create a scenario for the first step:
Feature: thing 1 is done
Scenario: Do step 1
Given we set things for step 1 up
When we execute step 1
Then some result of step one should be verified
Then we do one for step 2 (which, let's assume, relies on step 1):
Feature: thing 2 is processed
Scenario: Do step 2
Given we have done step 1
And we set things for step 2 up
When we execute step 2
Then some result of step 2 should be verified
The first step, Given we have done step 1, is a step that calls all the steps of feature 1 (the step definition class needs to inherit from SpecFlow's Steps base class so that the Given/When/Then methods are available):
[Given("we have done step 1")]
public void GivenWeHaveDoneStep1()
{
Given("we set things for step 1 up");
When("we execute step 1");
Then("some result of step one should be verified");
}
Then if we have step 3 we do this:
Feature: thing 3 happens
Scenario: Do step 3
Given we have done step 2
And we set things for step 3 up
When we execute step 3
Then some result of step 3 should be verified
Again, Given we have done step 2 is a composite step that calls all the steps in the scenario for step 2 (and hence all the steps for step 1):
[Given("we have done step 2")]
public void GivenWeHaveDoneStep2()
{
Given("we have done step 1");
Given("we set things for step 2 up");
When("we execute step 2");
Then("some result of step 2 should be verified");
}
We repeat this process so that by the time we get to step 5, it runs all the steps in the correct order. Sometimes, once we get to step 5, we @ignore the previous 4 scenarios, as they will all be called by step 5 anyway.
I currently have:
/exercises, which lists all exercises
And /exercises/1, which shows exercise with id 1
One exercise can have zero or many runs:
/runs Shows all runs across all exercises
/runs/1 Shows run with id 1
So how can I have a page that allows you to create a run for an exercise?
Options I thought of are:
/exercises/1/run-wizard
/runs/wizard?exerciseId=1
What do you think?
It's really up to you. One way to go would be:
/exercises/:id1/run/add <-- ADD RUN TO EXERCISE id1
/exercises/:id1/run/:id2 <-- ACCESS RUN id2 ON EXERCISE id1
We are importing data into Salesforce through Talend, and we have multiple items with the same external id.
Such an import fails with the error "Duplicate external id specified" because of how upsert works in Salesforce. For the moment, we have worked around that by setting the commit size of tSalesforceOutput to 1, but that only works for small amounts of data; otherwise it would exhaust the Salesforce API limits.
Is there a known approach to this in Talend? For example, to ensure that items with the same external ID end up in different "commits" of tSalesforceOutput?
Here is the design for the solution I wish to propose:
tSetGlobalVar is here to initialize the variable "finish" to false.
tLoop starts a while loop with (Boolean)globalMap.get("finish") == false as an end condition.
tFileCopy is used to copy the initial file (A for example) to a new one (B).
tFileInputDelimited reads file B.
tUniqRow eliminates duplicates. Unique records go to tLogRow, which you should replace with tSalesforceOutput. Duplicate records, if any, go to a tFileOutputDelimited called A (the same name as the original file) with the option "Throw an error if the file already exists" unchecked.
An OnComponent OK trigger after tUniqRow activates the tJava, which sets the new value of the global "finish" variable with the following code:
if (((Integer)globalMap.get("tUniqRow_1_NB_DUPLICATES")) == 0) globalMap.put("finish", true);
Explanation with the following sample data:
line 1
line 2
line 3
line 2
line 4
line 2
line 5
line 3
On the 1st iteration, 5 unique records are pushed into tLogRow, 3 duplicates are pushed into file A, and "finish" is not changed, as there are duplicates.
On the 2nd iteration, the operations are repeated for 2 unique records and 1 duplicate.
On the 3rd iteration, the operations are repeated for 1 unique record; as there are no more duplicates, "finish" is set to true and the loop finishes automatically.
Here is the final result:
You can also decide to use another global variable to set the Salesforce commit level (using the syntax (Integer)globalMap.get("commitLevel")). This variable would be set to 200 by default and to 1 in the tJava if there are any duplicates. At the same time, set "finish" to true (without testing the number of duplicates) and you'll have a commit level of 200 for the 1st iteration and of 1 for the 2nd (and no more than 2 iterations are needed).
You can decide which is the better choice depending on the number of potential duplicates, but note that you can do it without any change to the job design.
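Outside Talend, the loop's pass-until-no-duplicates behavior can be checked in isolation. Here is a minimal Python sketch of the same idea (function and variable names are my own, and a plain list stands in for the file and for tSalesforceOutput): each pass keeps the first occurrence of each key, pushes those as one batch, and feeds the duplicates back into the next pass.

```python
def dedup_pass(records):
    """One tUniqRow-style pass: the first occurrence of each value is
    unique; later occurrences are duplicates kept for the next pass."""
    seen, uniques, duplicates = set(), [], []
    for rec in records:
        if rec in seen:
            duplicates.append(rec)
        else:
            seen.add(rec)
            uniques.append(rec)
    return uniques, duplicates

def iterative_upsert(records, push):
    """Repeat passes until no duplicates remain, pushing each batch of
    unique records (stand-in for one tSalesforceOutput commit)."""
    finish = False
    while not finish:
        uniques, records = dedup_pass(records)
        push(uniques)                 # one "commit" with no duplicate ids
        finish = len(records) == 0    # same test as the tJava condition

batches = []
data = ["line 1", "line 2", "line 3", "line 2",
        "line 4", "line 2", "line 5", "line 3"]
iterative_upsert(data, batches.append)
```

With the sample data above, this pushes batches of 5, 2 and 1 records, matching the three iterations described.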
I think it should solve your problem. Let me know.
Regards,
TRF
Do you mean you have the same record (the same account, for example) twice or more in the input? If so, can't you try to eliminate the duplicates and keep only the record you need to push to Salesforce? Otherwise, if each record holds specific information (so you need all the input records to build a complete one in Salesforce), consider merging the records before pushing the result into Salesforce.
And finally, if you can't do that, push the duplicates to a temporary space, push all the records except the duplicates into Salesforce, and iterate over this process until there are no more duplicates. Personally, if you can't just eliminate the duplicates, I prefer the 2nd approach, as it's the solution with fewer Salesforce API calls.
Hope this helps.
TRF
I have a Zeppelin notebook 'Test'. This notebook has 2 paragraphs, as below:
1.
%spark
import statements;
val df=sqlContext.read.format....cassandra..table
df.registerTempTable("users")
2.
%spark.sql
select date,count(users) from users
I am scheduling this notebook to run every 5 minutes. On the first run, I get an error from the second paragraph saying that the 'users' table is not found.
I need to add a dependency to the 2nd paragraph such that it runs only when the first one completes. How can I achieve this in Zeppelin 0.6.0?
Kick off paragraph 2 from paragraph 1 with z.run
z.run("paragraphID")
You can run paragraphs sequentially in Zeppelin 0.8.
I have a data flow inside a dtsx package that processes all of the data I need to process. At the end, I need to perform some cleanup tasks. For example, assume the following structure:
If a record is true for all 3 cases then I want to run all three OLE DB Commands. If the record is true for only case 1 then it should only run case 1.
I could do this with a multicast and 3 separate conditional splits (as per below), but I was hoping for a cleaner way. Any ideas?
Using a multicast and three conditional splits is the easiest to implement, and is also probably the most easily understood.
A Script Component set up as a Transformation with three outputs is probably the next easiest to implement - but it will involve a bunch of setup and some level of coding. The data flow will look relatively pretty:
For each output, make sure to set the Synchronous Input ID to None (so that you can control when rows are created); then you'll need to duplicate each of the input columns in each output. The script code itself will look like this:
public override void IncomingRows_ProcessInputRow(IncomingRowsBuffer Row)
{
if (Case1Logic(Row))
{
Case1OutputBuffer.AddRow();
Case1OutputBuffer.ProductId = Row.ProductID;
Case1OutputBuffer.Name = Row.Name;
// etc. for all columns
}
if (Case2Logic(Row))
{
Case2OutputBuffer.AddRow();
Case2OutputBuffer.ProductId = Row.ProductID;
Case2OutputBuffer.Name = Row.Name;
// etc. for all columns
}
if (Case3Logic(Row))
{
Case3OutputBuffer.AddRow();
Case3OutputBuffer.ProductId = Row.ProductID;
Case3OutputBuffer.Name = Row.Name;
// etc. for all columns
}
}
private bool Case1Logic(IncomingRowsBuffer Row)
{
// Whatever the Case 1 logic involves
}
private bool Case2Logic(IncomingRowsBuffer Row)
{
// Whatever the Case 2 logic involves
}
private bool Case3Logic(IncomingRowsBuffer Row)
{
// Whatever the Case 3 logic involves
}
Have fun keeping this up to date when people decide they want to change the columns!
If that's not hairy enough for you, you could write your own custom transformation. The gory details on how to do that are in MSDN; suffice it to say that there will be a lot more code involved. You'll also learn more than you maybe ever wanted to about how SSIS handles buffer management, which may in turn explain why the out-of-the-box Conditional Split doesn't let you send the same row to more than one output.
Finally, if you want a truly ugly-looking solution that will also be a maintenance nightmare, try building a Conditional Split with one output for each combination of cases. Put a Union All transformation in front of each of your OLE DB Commands. Direct the Cases 1, 2 and 3 output to a three-way Multicast, with one multicast output going to each of the three Union All transformations. The Cases 1 and 2, Cases 2 and 3, and Cases 1 and 3 outputs of the Conditional Split would each go to two-way Multicast transformations (which would in turn feed the appropriate Union All), while the Case 1, Case 2 and Case 3 outputs would go directly to the appropriate Union All. It would look something like this:
In summary, I think your original idea is the simplest and probably best.
Depending on how comfortable you are with C# or VB, you could write your own transform that is essentially the multicast and conditional split in one. Add a script component onto the data flow and it'll ask you whether it's a source, transform, or destination. Choose transform, add your input and three outputs and go from there. Good luck!
I'm running load tests in Visual Studio 2010 Ultimate, and I'm trying to build some custom reporting tools. In the LoadTestTestResults table, there's a column labeled Outcome. I've seen it have the values 0, 1, 3, and (mostly) 10. But I can't find anything that explains what the different values mean.
I think that 10 is a success outcome, according to a comment in Prc_GetUserTestDetail. No clue on the others -- they don't seem to match up with any numbers in the VS summary.
What do these outcome codes mean?
I contacted a Microsoft developer from the MSDN blog on VS load testing and asked about this. Here's the information I got back, in case anybody else needs it:
The Outcome field is an enum that stores the status of an individual test case within a load test run. It can have values from 0 - 13.
0 - Error: There was a system error while we were trying to execute a test.
1 - Failed: Test was executed, but there were issues. Issues may involve exceptions or failed assertions.
2 - Timeout: The test timed out.
3 - Aborted: Test was aborted. This was not caused by a user gesture, but rather by a framework decision.
4 - Inconclusive: Test has completed, but we can't say if it passed or failed. May be used for aborted tests...
5 - PassedButRunAborted: Test was executed w/o any issues, but run was aborted.
6 - NotRunnable: Test had its chance to be executed but was not, as ITestElement.IsRunnable == false.
7 - NotExecuted: Test was not executed. This was caused by a user gesture - e.g. user hit stop button.
8 - Disconnected: Test run was disconnected before it finished running.
9 - Warning: To be used by Run level results. This is not a failure.
10 - Passed: Test was executed w/o any issues.
11 - Completed: Test has completed, but there is no qualitative measure of completeness.
12 - InProgress: Test is currently executing.
13 - Pending: Test is in the execution queue, was not started yet.
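Since the goal is custom reporting, the list above can be captured as a lookup table. Here is a minimal Python sketch for a reporting script (the TestOutcome name is my own label, not an official type; the codes are exactly those listed above):

```python
from enum import IntEnum

class TestOutcome(IntEnum):
    """Outcome codes from the LoadTestTestResults table, per the list above."""
    Error = 0
    Failed = 1
    Timeout = 2
    Aborted = 3
    Inconclusive = 4
    PassedButRunAborted = 5
    NotRunnable = 6
    NotExecuted = 7
    Disconnected = 8
    Warning = 9
    Passed = 10
    Completed = 11
    InProgress = 12
    Pending = 13

def outcome_name(code):
    """Translate a raw Outcome column value into a readable label."""
    return TestOutcome(code).name
```

For example, the values 0, 1, 3 and 10 seen in the question translate to Error, Failed, Aborted and Passed.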