I am using StoredProcedureItemReader in Spring Batch to read items from a DB via a stored procedure (which accepts input parameters).
I have set up the basic configuration for StoredProcedureItemReader, but I cannot figure out how to set the parameter values on it.
StoredProcedureItemReader storedProcItemReader = new StoredProcedureItemReader();
storedProcItemReader.setDataSource(dataSource);
storedProcItemReader.setProcedureName("proc_name");
SqlParameter[] parameter = {
        new SqlParameter(OracleTypes.VARCHAR),
        new SqlParameter(OracleTypes.VARCHAR),
        new SqlParameter(OracleTypes.CURSOR)
};
storedProcItemReader.setParameters(parameter);
storedProcItemReader.setPreparedStatementSetter(??)
I want to set the values for the two input parameters via a PreparedStatementSetter. How do I set it? Do I need to use a PreparedStatement for it, given that I have already supplied the procedure name (which contains the whole query to execute)?
Thanks
You need to use an ItemPreparedStatementSetter:
public class MyItemPreparedStatementSetter implements ItemPreparedStatementSetter<T> {

    @Override
    public void setValues(T item, PreparedStatement ps) throws SQLException {
        // Set your values here, for example:
        ps.setString(1, item.getProperty());
    }
}
Statement parameters are 1-indexed.
Then you can pass it to your reader:
storedProcItemReader.setPreparedStatementSetter(new MyItemPreparedStatementSetter());
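If the setter on your version of StoredProcedureItemReader is typed as Spring JDBC's plain PreparedStatementSetter (which has no item argument), a minimal sketch for binding the two VARCHAR inputs would look like this (the two values are placeholders):
// imports: java.sql.PreparedStatement, java.sql.SQLException, org.springframework.jdbc.core.PreparedStatementSetter
storedProcItemReader.setPreparedStatementSetter(new PreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps) throws SQLException {
        ps.setString(1, "firstInputValue");   // placeholder: first IN parameter
        ps.setString(2, "secondInputValue");  // placeholder: second IN parameter
        // the CURSOR parameter declared via setParameters is not bound here
    }
});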
I am implementing my own Activiti command interceptor like this:
public class ActivitiCommandInterceptor extends AbstractCommandInterceptor {

    private RuntimeService runtimeService;
    private CommandInterceptor delegate;

    public ActivitiCommandInterceptor(RuntimeService runtimeService, CommandInterceptor delegate) {
        this.runtimeService = runtimeService;
        this.delegate = delegate;
    }

    @Override
    public <T> T execute(CommandConfig config, Command<T> command) {
        String myVariable = (String) runtimeService.getVariable(<missingExecutionId>, "myVariableName");
        ...
    }
}
Inside the execute() method I need to retrieve a variable from the execution context related to this command.
To do that, I need to have the executionId, but I can't find a way to retrieve it.
How can I get my variable from this interceptor?
Thanks
You can create a nativeExecutionQuery.
This allows you to run SQL directly against the database.
For your case, just find all the execution IDs that contain your variable, then filter them according to your needs.
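A hedged sketch of what that could look like (table and column names assume the default Activiti schema, and #{...} is the native-query parameter syntax):
// imports: java.util.List, org.activiti.engine.runtime.Execution
List<Execution> executions = runtimeService.createNativeExecutionQuery()
        .sql("SELECT E.* FROM ACT_RU_EXECUTION E "
           + "JOIN ACT_RU_VARIABLE V ON V.EXECUTION_ID_ = E.ID_ "
           + "WHERE V.NAME_ = #{variableName}")
        .parameter("variableName", "myVariableName")
        .list();

for (Execution execution : executions) {
    // each execution.getId() can now be passed to runtimeService.getVariable(...)
    Object value = runtimeService.getVariable(execution.getId(), "myVariableName");
    // filter the executions/values according to your needs
}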
I'm seeing a very bizarre issue with iBatis when trying to read a property from a Java map using isEqual, but not with other iBatis operators. For example, it reads the map properties fine when using isNotNull and iterate. The XML:
<isNotNull property="filterCriteria.account">
    AND id
    <isEqual property="filterCriteria.account.meetsCriteria" compareValue="false">
        NOT
    </isEqual>
    IN
    (SELECT DISTINCT id
       FROM account
      WHERE some other criteria....
    )
</isNotNull>
The two Java classes we're using here:
public class SearchProfile {

    private Map<String, SearchProfileCriteria> filterCriteria;

    public SearchProfile() {
        filterCriteria = new HashMap<>();
    }

    public Map<String, SearchProfileCriteria> getFilterCriteria() {
        return filterCriteria;
    }

    public void setFilterCriteria(Map<String, SearchProfileCriteria> filterCriteria) {
        this.filterCriteria = filterCriteria;
    }
}
Above is the container object that is passed to iBatis for querying, and below is the criteria object that is the value of the map. In this example it is keyed with the String "account".
public class SearchProfileCriteria {

    boolean meetsCriteria;

    public String getCriteriaAsString() {
        return StringUtils.getStringValueFromBoolean(meetsCriteria);
    }

    public boolean isMeetsCriteria() {
        return meetsCriteria;
    }

    public void setMeetsCriteria(boolean meetsCriteria) {
        this.meetsCriteria = meetsCriteria;
    }

    public String getSQLString() {
        return meetsCriteria ? "" : "NOT";
    }
}
And the exception:
Cause: com.ibatis.common.beans.ProbeException: There is no READABLE property named 'account' in class 'java.util.Map'; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
The getSQLString() method was my half-baked attempt at a workaround; the String gets escaped in the query and throws a syntax error.
When I remove the <isEqual> block the query executes fine, which indicates it is able to read the "account" key when checking whether it is null. As I mentioned above, we're also able to use the map keys in <iterate> tags without issue. It seems <isEqual> and <isNotEqual> are the only tags causing issues. Does anyone have experience with this or know what may be going on?
Beware: isNotNull, isEqual, and iterate are iBatis tags; they no longer exist in MyBatis, so referring to MyBatis interchangeably is confusing.
Reference documentation.
For your issue, how does it behave if you replace the Map with a class (so the property is known at compile time)?
Or try using <isPropertyAvailable>.
The workaround could work with the right syntax: $ instead of #, i.e. $filterCriteria.account.SQLString$ instead of #filterCriteria.account.SQLString#; the value is then concatenated into the statement instead of being bound as a parameter.
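Applied to the snippet above, the workaround would look roughly like this (a sketch; note that $-substitution is plain string concatenation, so only use it with values you control):
<isNotNull property="filterCriteria.account">
    AND id $filterCriteria.account.SQLString$ IN
    (SELECT DISTINCT id
       FROM account
      WHERE some other criteria....
    )
</isNotNull>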
I am beginning to use Dapper and love it so far. However, as I venture further into complexity, I have run into a big issue with it. The fact that you can pass an entire custom object as a parameter is great. However, when I add another custom object as a property, it no longer works, as Dapper tries to map the object as a SQL parameter. Is there any way to have it ignore custom objects that are properties of the main object being passed through? Example below:
public class CarMaker
{
    public string Name { get; set; }
    public Car MyCar { get; set; }
}
Property Name maps fine, but property MyCar fails because it is a custom object. I will have to restructure my entire project if Dapper can't handle this, which... well, blows, haha.
Dapper Extensions has a way to create custom maps, which allows you to ignore properties:
public class MyModelMapper : ClassMapper<MyModel>
{
    public MyModelMapper()
    {
        // use a custom schema
        Schema("not_dbo_schema");

        // have a custom primary key
        Map(x => x.ThePrimaryKey).Key(KeyType.Assigned);

        // use a different name for a property than the database column
        Map(x => x.Foo).Column("Bar");

        // ignore this property entirely
        Map(x => x.SecretDataMan).Ignore();

        // optional: map all other columns
        AutoMap();
    }
}
Here is a link
There is a much simpler solution to this problem.
If the property MyCar is not in the database, and it probably is not, then simply remove the { get; set; } and the "property" becomes a field, which DapperExtensions automatically ignores. If you are actually storing this information in the database and it is a multi-valued property that is not serialized into JSON or a similar format, I think you are asking for complexity you don't want. There is no SQL equivalent of the Car object, and the properties in your model must map to something SQL recognizes.
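Applied to the example above, that would be (a sketch, assuming DapperExtensions' default auto-mapping, which maps public properties but not fields):
public class CarMaker
{
    public string Name { get; set; }  // property: auto-mapped to a column

    // no { get; set; }: this is a field, so auto-mapping skips it
    public Car MyCar;
}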
UPDATE:
If "Car" is part of a table in your database, then you can read it into the CarMaker object using Dapper's QueryMultiple.
I use it in this fashion:
dynamic reader = dbConnection.QueryMultiple("Request_s", param: new { id = id }, commandType: CommandType.StoredProcedure);
if (reader != null)
{
    result = reader.Read<Models.Request>()[0] as Models.Request;
    result.reviews = reader.Read<Models.Review>() as IEnumerable<Models.Review>;
}
The Request class has a field as follows:
public IEnumerable<Models.Review> reviews;
The stored procedure looks like this:
ALTER PROCEDURE [dbo].[Request_s]
(
#id int = null
)
AS
BEGIN
SELECT *
FROM [biospecimen].requests as bn
where bn.id=coalesce(#id, bn.id)
order by bn.id desc;
if #id is not null
begin
SELECT
*
FROM [biospecimen].reviews as bn
where bn.request_id = #id;
end
END
In the first read, Dapper ignores the field reviews, and in the second read, Dapper loads the information into the field. If a null set is returned, Dapper will load the field with a null set just like it will load the parent class with null contents.
The second select statement then reads the collection needed to complete the object, and Dapper stores the output as shown.
I have been implementing this in my Repository classes in situations where a target parent class has several child classes that are being displayed at the same time.
This prevents multiple trips to the database.
You can also use this approach when the target class is a child class and you need information about the parent class it is related to.
I'm using DataProvider in TestNG for my Selenium scripts. My requirement is to use a single DataProvider and pass the data to many test methods.
For example: say I have 10 test methods; I need to create a single DataProvider that can pass data to all 10 test methods.
Is it possible to do this? If yes, how do I implement it?
Or is there any alternative to this?
Please help!
If each of your test methods has the @Test annotation, then you can simply add a parameter to it:
@Test(dataProvider = "Name of your DataProvider")
You can do this with all 10 test methods, and they will all get data from your single DataProvider.
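For example, a minimal sketch (class name and data values are illustrative):
// imports: org.testng.annotations.DataProvider, org.testng.annotations.Test
public class SearchTests {

    @DataProvider(name = "search_data")
    public Object[][] searchData() {
        // each row is passed to every test method that references this provider
        return new Object[][] {
            { "client-1", "policy-1" },
            { "client-2", "policy-2" }
        };
    }

    @Test(dataProvider = "search_data")
    public void verifySearchByClientNumber(String clientNumber, String policyNumber) {
        // search by client number using the shared data set
    }

    @Test(dataProvider = "search_data")
    public void verifySearchByPolicyNumber(String clientNumber, String policyNumber) {
        // search by policy number using the same shared data set
    }
}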
I hope it helps... cheers!!
Yes, it is possible.
Your data provider needs to know which method or class it is providing data for. I made the following implementation: you can get the context of the calling method in a data provider and ask it for the name of the declaring class the data has to be provided for. Depending on that, you can either have multiple files from which you read and supply the data, or different rows in the same CSV, differentiated by class name, from which you read the required rows.
@DataProvider(name = "getDataFromFile")
public static Iterator<Object[]> getDataFromFile(Method testMethod) throws Exception
{
    String className = testMethod.getDeclaringClass().getSimpleName();
    Reporter.log("Providing data for class " + className, true);
    List<Map<String, String>> setupData = getTestDataFromCsv(className);
    // provide data here
}
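A test method in any class can then point at this shared provider via the dataProviderClass attribute; a hedged usage sketch (it assumes the provider wraps each CSV row's Map in a one-element Object[] and lives in a class named TestDataProviders, both of which are illustrative):
@Test(dataProvider = "getDataFromFile", dataProviderClass = TestDataProviders.class)
public void verifyClientSearch(Map<String, String> row) {
    String clientNumber = row.get("clientnumber");  // column name is illustrative
    // drive the test with the row values
}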
Update on this:
I was looking for a solution to the same problem. It is not possible to split the data provider, but there is no harm in reusing one data provider for all methods; the disadvantage is that each method must accept the complete list of arguments. All other options are more complex to implement and maintain. For my scenario, this is better than creating and maintaining a separate data provider for each test method.
@BeforeMethod
public void setUp() {
    init();
    login = new LoginPage(myD);
    clientsearch = new ClientSearchPage(myD);
    toppanel = new TopPanelPage(myD);
}
@Test(dataProvider = "search_data")
public void verifySearchByClientNumber(String clientnumber, String policynumber, String policynumberClient, String webreference,
        String webreferenceClient, String surname, String surnameClient, String forename, String forenameClient, String dob, String dobClient) {
    login.Login();
    log.info("Logged in successfully, now in ClientSearch Page..");
    log.info("Entering client number..");
    clientsearch.enterClientNumber(clientnumber);
    log.info("Clicking on the Search button ...");
    clientsearch.clickSearchButton();
    log.info("Verifying Client present in results..");
    boolean res = clientsearch.isClientPresent(clientnumber);
    Assert.assertEquals(res, true, "Assertion failed !!");
    toppanel.clickLogoutButton();
}
@Test(dataProvider = "search_data")
public void verifySearchByPolicyNumber(String clientnumber, String policynumber, String policynumberClient, String webreference,
        String webreferenceClient, String surname, String surnameClient, String forename, String forenameClient, String dob, String dobClient) {
    login.Login();
    log.info("Logged in successfully, now in ClientSearch Page..");
    log.info("Entering Policy number..");
    clientsearch.enterPolicyNumber(policynumber);
    log.info("Clicking on the Search button ...");
    clientsearch.clickSearchButton();
    log.info("Verifying Client present in results..");
    boolean res = clientsearch.isClientPresent(policynumberClient);
    Assert.assertEquals(res, true, "Assertion failed !!");
    toppanel.clickLogoutButton();
}
// More methods here with the same data provider...
@AfterMethod
public void endTest() {
    myD.quit();
}
Is there any way of accessing both a result set and output parameters from a stored procedure added as a function import in an Entity Framework model?
I am finding that if I set the return type to "None", so that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set).
Conversely, if I set the return type in the designer to a collection of complex types, the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T>, but then the Value property of the ObjectParameter instances is null rather than containing the value I can see being passed back in Profiler.
I suspect the second approach is perhaps using a DataReader and not closing it. Is this a known issue? Any workarounds or alternative approaches?
Edit
My code currently looks like this:
public IEnumerable<FooBar> GetFooBars(
int? param1,
string param2,
DateTime from,
DateTime to,
out DateTime? createdDate,
out DateTime? deletedDate)
{
var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));
var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam);
createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ?
null :
createdDateParam.Value);
deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ?
null :
deletedDateParam.Value);
return fooBars;
}
According to this SO post, the sproc doesn't actually execute until you iterate the result set. I simulated your scenario, ran some tests, and confirmed this is the case. As per the code in your edit, try caching the result set in a list (e.g. Context.MyEntities.ToList(), or here, the result of GetFooBars) and then check the value of the ObjectParameter.
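For example, a sketch of the GetFooBars call from your edit with the result set materialized first (the rest of the method stays the same):
// Force execution so the OUT parameters are populated before they are read.
var fooBars = MyContext.GetFooBars(param1, param2, from, to,
                                   createdDateParam, deletedDateParam)
                       .ToList();

// createdDateParam.Value and deletedDateParam.Value should now hold the values
// returned by the sproc (still check for DBNull.Value as in the original code).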