I am following an example here: https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/StatefulSequenceSource.java
I am trying to build a source using a JDBC connection which extends RichParallelSourceFunction and implements CheckpointedFunction, as I would like to be able to save my watermark from my source tables in case of a restart.
When testing locally with Docker, I can call my run() method just fine and read data from my source database, but I am not sure where the snapshotState and initializeState methods actually get called. I have logic in those methods that should set the value of my watermark depending on whether this is a first startup or a recovery; I just never see it being executed, and I am not sure whether I should be calling the methods externally.
Thanks for the help in advance!
These methods are called by the Flink framework itself, not by your code: initializeState is invoked when each parallel instance of the operator is created (both on a fresh start and when recovering from a checkpoint or savepoint), and snapshotState is invoked every time a checkpoint or savepoint is taken. Note that snapshotState will never fire unless checkpointing is enabled on your execution environment (env.enableCheckpointing(...)); when testing locally without it, you will only ever see initializeState.
See https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/fault-tolerance/state/#using-operator-state and https://nightlies.apache.org/flink/flink-docs-stable/api/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.html
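As a sketch of how the pieces fit together (class, field, and variable names here are illustrative, not taken from your code), the state handling in such a source typically looks like this:

```java
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

// Illustrative sketch: Flink calls initializeState when the operator starts
// (fresh start or recovery) and snapshotState on every checkpoint/savepoint.
public class JdbcWatermarkSource extends RichParallelSourceFunction<String>
        implements CheckpointedFunction {

    private transient ListState<Long> watermarkState;
    private volatile long currentWatermark;
    private volatile boolean running = true;

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        watermarkState = context.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("watermark", Long.class));
        if (context.isRestored()) {
            // recovery path: restore the watermark saved by the last checkpoint
            for (Long w : watermarkState.get()) {
                currentWatermark = Math.max(currentWatermark, w);
            }
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // called by the framework on each checkpoint; persist the current watermark
        watermarkState.clear();
        watermarkState.add(currentWatermark);
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            // read rows newer than currentWatermark over JDBC, emit them,
            // then advance currentWatermark
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
```

Remember that snapshotState only fires if checkpointing is enabled, e.g. env.enableCheckpointing(60_000); without that, only initializeState will ever run.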
I am learning Cucumber through some tutorials, and there is something I don't know how to do. I need a scenario that depends on another scenario (for example, for the log-out scenario I have to be logged in first), so what should I do? Should I repeat the login steps in the log-out scenario's feature file, or is there a way to call the whole login scenario from within the log-out scenario?
Also, should I set up the driver before each scenario and quit it after each scenario?
There is no support in Cucumber-JVM for making one scenario depend on another. I believe it is still supported in the Ruby implementation of Cucumber, but it is a dangerous practice, and calling one scenario from another will not be supported in future versions of Cucumber.
That said, how do you solve the problem of reusing functionality? You mention logout: how do you handle the fact that many scenarios require the user to already be logged in?
The solution is to implement the functionality in a helper method or helper class, which every step that requires a logged-in user calls.
This allows each scenario to be independent of all the others, which in turn lets you run the scenarios in any order. I don't think the execution order of scenarios is guaranteed, and there has been discussion of having the JUnit runner execute scenarios in random order precisely to enforce the habit of not letting scenarios depend on each other.
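As an illustration of that helper approach (all names are hypothetical, and the Selenium calls are reduced to comments), the log-out scenario's steps would call ensureLoggedIn first instead of depending on a login scenario:

```java
// Hypothetical helper shared by all step definitions, so that no scenario
// has to depend on another scenario having run first. In a real suite the
// method bodies would drive Selenium; here they only track session state.
class SessionHelper {
    private boolean loggedIn = false;

    // Called by any step that needs a logged-in user; logs in only if needed.
    void ensureLoggedIn(String user, String password) {
        if (!loggedIn) {
            // driver.findElement(By.id("username")).sendKeys(user); etc.
            loggedIn = true;
        }
    }

    // Called by the log-out scenario's steps.
    void logOut() {
        // driver.findElement(By.id("logout")).click();
        loggedIn = false;
    }

    boolean isLoggedIn() {
        return loggedIn;
    }
}
```

Each step definition that needs a logged-in user calls ensureLoggedIn rather than invoking another scenario, so scenarios stay independent.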
Your other question, how to set up WebDriver before a scenario and tear it down afterwards, is solved using Cucumber's Before and After hooks. When using them, be careful not to import the JUnit versions of Before and After by mistake.
Take a look into Cucumber hooks; these allow you to set up global 'before' and 'after' steps, which run for every scenario without having to specify them in your feature files.
Because they run for every scenario, they're ideal for something like initialising the driver at the start of each test. They may also suit running your logon, but if there's a chance you'll have a scenario that doesn't involve logging on, they wouldn't be the way to go (an alternative is further down). The same applies to the after-scenario hook, which is where you could perform the log off and shut down your driver. As an example:
// In recent Cucumber versions these come from io.cucumber.java;
// in older ones, from cucumber.api.java. Either way, note that
// these are Cucumber's annotations, not the JUnit ones.
import io.cucumber.java.After;
import io.cucumber.java.Before;

/**
 * Before each scenario, initialise the WebDriver.
 */
@Before
public void beforeScenario() {
    this.application.initialiseWebDriver();
}

/**
 * After each scenario, quit the web driver.
 */
@After
public void afterScenario() {
    this.log.trace("afterScenario");
    this.application.quitBrowser();
}
In my example I'm simply starting the driver in the before hook and closing it in the after hook, but in theory these methods could contain anything; you just need to put them in your step definitions class and annotate them with '@Before' and '@After' as shown.
As well as these global hooks, you can also have multiple before and after hooks that run only for scenarios carrying a matching tag. As an example:
/**
 * Something to do after certain scenarios.
 */
@After("@doAfterMethod")
public void afterMethod() {
    this.application.afterThing();
}
You can set up something like this in your step defs, and by default it won't run. However, if you tag a scenario with '@doAfterMethod', the hook will run for that scenario, which makes this good for a common step that you need at the end of some tests, but not all of them. The same works for methods that run before a scenario; just change the '@After' to '@Before'.
Bear in mind that if you do use these, the global Before and After (so in this example the driver initialisation and quitting) will always be the first and last things to run, with any other before/afters in between them and the scenario.
Further Reading:
https://github.com/cucumber/cucumber/wiki/Hooks
https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/
You can set up test dependencies with QAF BDD. Use dependsOnMethods or dependsOnGroups in the scenario meta-data to declare a dependency, just as in TestNG, because QAF BDD is a TestNG-based BDD implementation.
I have inherited a Web API 2 project written in C#/.NET that uses ADO.NET to access an SQL Server database.
The data access layer of the project contains many methods which look similar to this:
public class DataAccessLayer
{
    private SqlConnection _DBConn;

    public DataAccessLayer()
    {
        _DBConn = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
    }

    public string getAllProductsAsJSON()
    {
        DataTable dt = new DataTable();
        using (SqlConnection con = _DBConn)
        {
            using (SqlCommand cmd = new SqlCommand("SELECT productId, productName FROM product ORDER BY addedOn DESC", con))
            {
                cmd.CommandType = CommandType.Text;
                // add parameters to the command here, if required.
                con.Open();
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                da.Fill(dt);
                return JsonConvert.SerializeObject(dt);
            }
        }
    }

    // ... more methods here, but all basically following the above style of
    // opening a new connection each time a method is called.
}
Now, I want to write some unit tests for this project. I have studied the idea of using SQL transactions to allow for insertion of mock data into the database, testing against the mock data, and then rolling back the transaction in order to allow for testing against a "live" (development) database, so you can have access to the SQL Server functionality without mocking it out completely (e.g. you can make sure your views/functions are returning valid data AND that the API is properly processing the data all at once). Some of the methods in the data access layer add data to the database, so the idea is that I would want to start a transaction, call a set of DAL methods to insert mock data, call other methods to test the results with assertions, and then roll back the entire test so that no mock data gets committed.
The problem I am having is that, as you can see, this class has been designed to create a new database connection every single time a query is made. If I try to think like the original developer, I can see how this makes at least some sense: these classes are used by a web API, so a persistent database connection would be impractical, especially if a call involves transactions, because you then do need a separate connection per request to keep them isolated.
However, because this is happening I don't think I can use the transaction idea to write tests as I described, because uncommitted data would not be accessible across database connections. So if I wrote a test which calls DAL methods (and also business-logic layer methods which in turn call DAL methods), each method will open its own connection to the database, and thus I have no way to wrap all of the method calls in a transaction to begin with.
I could rewrite each method to accept a SqlConnection as one of its parameters, but then I not only have to refactor over 60 methods, I also have to rework every place those methods are called in the Web API controllers. That moves the burden of creating and managing DB connections into the Web API (and away from the DAL, which is where it philosophically should be).
Short of literally rewriting/refactoring 60+ methods and the entire Web API, is there a different approach I can take to writing unit tests for this project?
EDIT: My new idea is to simply remove all calls to con.Open(). Then, in the constructor, not just create the connection but also open it. Finally, I'll add beginTransaction, commitTransaction and rollbackTransaction methods that operate directly upon the connection object. The core API never needs to call these functions, but the unit tests can call them. This means the unit test code can simply create an instance, which will create a connection which persists across the entire lifetime of the class. Then it can use beginTransaction, then do whatever tests it wants, and finally rollbackTransaction. Having a commitTransaction is good for completeness and also exposing this functionality to the business-logic layer has potential use.
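The shape of that EDIT idea, sketched here with plain JDBC for illustration (all names are hypothetical; the ADO.NET version would use SqlConnection.BeginTransaction in the same positions). One caveat to check in the ADO.NET version: once a transaction is open on a connection, each SqlCommand executed on it must have its Transaction property assigned, or ADO.NET will throw.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch only: the connection is opened once in the constructor and kept
// open, so a transaction begun by a test spans every subsequent DAL call
// on this instance, and the test can roll everything back at the end.
class TransactionalDal implements AutoCloseable {
    private final Connection conn;

    TransactionalDal(String connectionString) throws SQLException {
        conn = DriverManager.getConnection(connectionString); // opened once
    }

    void beginTransaction() throws SQLException {
        conn.setAutoCommit(false);      // subsequent statements join one transaction
    }

    void commitTransaction() throws SQLException {
        conn.commit();
        conn.setAutoCommit(true);
    }

    void rollbackTransaction() throws SQLException {
        conn.rollback();                // tests call this to discard mock data
        conn.setAutoCommit(true);
    }

    @Override
    public void close() throws SQLException {
        conn.close();
    }
}
```

A test would then do beginTransaction(), call the DAL methods under test, assert on the results, and finish with rollbackTransaction() so no mock data is committed.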
There are multiple possible answers to this question, depending on what exactly you are trying to accomplish:
Are you primarily interested in unit testing your application logic (e.g., controller methods), rather than the data access layer itself?
Are you looking to unit test the logic inside your data access layer?
Or are you trying to test everything together (i.e., integration or end-to-end testing)?
I am assuming you are interested in the first scenario, testing your application logic. In that case, I would advise against connecting to the database at all (even a development database) in your unit tests. Generally, unit tests should not be interacting with any outside system (e.g., database, filesystem, or network).
I know you mentioned you were interested in testing multiple parts of the functionality all at once:
I have studied the idea of using SQL transactions [...] so you can have access to the SQL Server functionality without mocking it out completely (e.g. you can make sure your views/functions are returning valid data AND that the API is properly processing the data all at once).
However, that rather goes against the philosophy of unit testing. The whole point of a unit test is to test a single unit in isolation. Typically, this unit ("System Under Test", or SUT, in more technical terms) is a single method inside some class (for instance, an action method in one of your controllers). Anything other than the SUT should be stubbed or mocked out.
To accomplish this, broadly speaking, you will need to refactor your code to use dependency injection, and also use a mocking framework in your tests:
Dependency Injection: If you are not using a dependency injection framework already, chances are your controller classes are instantiating your DataAccessLayer class directly. This approach will not work for unit tests - instead, you will want to refactor the controller class to accept its dependencies via the constructor, and then use a dependency injection framework to inject the real DataAccessLayer in your application code, and inject a mock/stub implementation in your tests. Some popular dependency injection frameworks include Autofac, Ninject, and Microsoft Unity. Depending on which framework you choose, this may also require that you refactor DataAccessLayer a bit so it implements an interface (e.g., IDataAccessLayer).
Mocking Framework: In your tests, rather than using the real DataAccessLayer class directly, you will instead create a mock, and set up expectations on that mock. Some popular mocking frameworks for .NET include Moq, RhinoMocks, and NSubstitute.
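To make the shape concrete, here is a minimal constructor-injection sketch (shown in Java for brevity, with hypothetical names; the C# version is structurally identical, with an IDataAccessLayer interface and a Moq mock in place of the hand-rolled stub):

```java
// Hypothetical names throughout. The key move: the controller depends on an
// interface it receives through its constructor, never on `new DataAccessLayer()`.
interface DataAccess {
    String getAllProductsAsJson();
}

class ProductController {
    private final DataAccess dal;

    // the dependency is injected, not constructed inside the class
    ProductController(DataAccess dal) {
        this.dal = dal;
    }

    String listProducts() {
        return dal.getAllProductsAsJson();
    }
}

// In a unit test, a stub stands in for the real, database-backed implementation:
class StubDataAccess implements DataAccess {
    public String getAllProductsAsJson() {
        return "[{\"productId\":1,\"productName\":\"test\"}]";
    }
}
```

In production the DI container wires in the real implementation; in tests you hand the controller a stub or mock, so no database is touched.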
Granted, if the code was not initially written with unit testing in mind (i.e., no dependency injection), this may involve a fair amount of refactoring. This is where alltej's suggestion of creating a wrapper around the legacy (i.e., untested) code comes in.
I strongly recommend you read the book The Art of Unit Testing: With Examples in C# (by Roy Osherove). That will help you understand the ideology behind unit testing a bit better.
If you are actually interested in testing multiple parts of your functionality at once, then what you are describing (as others have pointed out) is integration, or end-to-end testing. The setup for this would be entirely different (and often more challenging), but even then, the recommended approach would be to connect to a separate database (specifically for integration testing, separate even from your development database), rather than rolling back transactions.
When working with legacy system, what I would do is create a wrapper for this DLLs/projects to isolate communication with legacy and to protect integrity of your new subsystem/domain or bounded context. This isolation layer is known as anticorruption layer in DDD terminology. This layer contains interfaces written in terms of your new bounded context. The interface adapts and interacts with your API layer or to other services in the domain. You can then write unit/mock tests with this interfaces. You can also create an integration tests from your anticorruption layer which will eventually call the database via the legacy dlls.
Actually, from what I see in the code, the DAL creates only one connection in the constructor and then it keeps using it to fire commands, one command per method in the DAL. It will only create new connections if you create another instance of the DAL class.
Now, what you are describing covers multiple kinds of testing (integration, end to end), and I am not convinced that the transaction idea, while original, is actually doable.
When writing integration tests, I prefer to create all the data required by the test and then simply remove it at the end; that way nothing is left behind, and you know for sure whether your system works or not.
So imagine you're testing retrieving account data for a user, I would create a user, activate them, attach an account and then test against that real data.
The UI does not need to go all the way through, unless you really want to do end to end tests. If you don't, then you can just mock the data for each scenario you want to test and see how the UI behaves under each scenario.
What I would suggest is that you test your api separately, test each endpoint and make sure it works as expected with integration tests covering all scenarios needed.
If you have time, then write some end to end tests, possibly using a tool like Selenium or whatever else you fancy.
I would also extract an interface from that DAL in preparation of mocking the entire layer when needed. That should give you a good start in what you want to do.
We have an application that essentially implements its own message queue. When a user interacts with the application, it generates an action (a custom action class, not the built-in .NET one) that is handled by our ActionDispatcher.
In the ActionDispatcher class I have a Stack of CustomAction objects. I'd like to run the ActionDispatcher in its own thread, but then you have all the issues with communicating with the main UI thread using Invoke and BeginInvoke.
There are several different methods that the ActionDispatcher may call, and each one would, I believe, require a delegate on the UI side to communicate with the other thread. Is there a simpler way?
The reason for wanting a separate thread is that the ActionDispatcher processes messages that originate from the server as well as the UI. This is a client application, and many actions are generated by the server. The idea is that we have our own queue that both the UI and the server add messages to.
It really depends on the architecture of your application, but the quick and short answer is: no. If you're not on the UI thread, then you have to use the Dispatcher's Invoke or BeginInvoke method to access or execute code back on the UI thread.
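The general shape, sketched here in Java with all names illustrative (a single-threaded executor stands in for the WPF Dispatcher): one dispatcher thread drains the shared queue, and anything that touches the UI is marshalled back through a single choke point, which is the role Dispatcher.BeginInvoke plays. Note that only one delegate shape (Runnable here; a single Action/delegate type in C#) is needed, rather than one delegate per method.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: one worker drains the action queue (fed by both the UI and the
// server); UI updates are handed to a single "UI thread" executor, which
// stands in for Dispatcher.BeginInvoke in WPF.
class ActionDispatcherSketch {
    private final BlockingQueue<Runnable> actions = new LinkedBlockingQueue<>();
    private final ExecutorService uiThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "ui-thread");
        t.setDaemon(true); // so this sketch does not keep the JVM alive
        return t;
    });

    /** UI and server code both enqueue work here. */
    void post(Runnable action) {
        actions.add(action);
    }

    /** The analogue of Dispatcher.BeginInvoke: run uiUpdate on the UI thread. */
    void invokeOnUi(Runnable uiUpdate) {
        uiThread.execute(uiUpdate);
    }

    /** Start the dispatcher thread that processes actions off the UI thread. */
    void start() {
        Thread worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    actions.take().run(); // process the next queued action
                }
            } catch (InterruptedException ignored) {
                // shutting down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```

An action that needs to update the UI calls invokeOnUi from inside its run() body, so the dispatcher never touches UI state directly.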
As a side note, this is a little different than in WinForms, it almost sounds like you're coming from a WinForms perspective, so you might want to look up the WPF Dispatcher.
On the other hand, I would suggest that you look into something like Prism's IEventAggregator. I'm sure there are other similar implementations, but Prism has one nice feature: you can tell it you want to subscribe to an event and have it delivered on the UI thread, and Prism does the rest of the work for you.
Personally, I think using an EventAggregator pattern is better, I'm not sure it's necessarily simpler though.
You need to use Dispatcher.CheckAccess().
http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcher.checkaccess.aspx
Hi, I am evaluating ICUTest for use on a project. My initial view is that it looks like a promising visual testing library. The scenario I have for using ICUTest is to start an application with a specific configuration and expect the main application window to display based on the configuration settings. Each unit test should start the application and then, after completing, shut the application down gracefully.
At the moment I can get individual tests to run, but when I run multiple tests I start running into all types of threading issues. Has anyone had any experience with this?
There are two ways to test your application.
1) The easiest (and most reusable) way is to just test your main app window like any other window. Do your initialization after a window event (like Window.Loaded) or through the constructor (e.g. new MainWindow("myapp.config") ).
2) If initialization must be done before the window is up then you can start the app thread with code similar to the one here.
Note: in WPF, you can only start an Application once, so method (1) is preferable.
Also, make sure you wrap all your GUI related calls in an ICU.Invoke(...) block.