JDBI: Connections being automatically closed after idle

I'm relatively new to connection pooling, but from what I've read it seems ideal to leave some connections idle for faster performance.
I'm currently using JDBI, and after idle periods I'll get a
com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
I assume this is either because my database configuration settings are lacking or because I'm using the framework incorrectly:
config.yml:
database:
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 10s
  # limits for simultaneous connections to the DB
  minSize: 10
  initialSize: 10
  maxSize: 100
DAOs:
public class AccountDAO {

    private final Jdbi jdbi;

    public AccountDAO(Jdbi jdbi) {
        this.jdbi = jdbi;
    }

    public void addAccount(String id) {
        jdbi.useHandle(h ->
            h.createUpdate("INSERT INTO Account(id) values (:id)")
                .bind("id", id)
                .execute());
    }
}
public class RestaurantDAO {

    private final Jdbi jdbi;

    public RestaurantDAO(Jdbi jdbi) {
        this.jdbi = jdbi;
    }

    public Optional<RestaurantDTO> getRestaurantByName(String restName) {
        return jdbi.withHandle(h ->
            h.createQuery("SELECT * FROM Restaurant WHERE restName = :restName")
                .bind("restName", restName)
                .mapToBean(RestaurantDTO.class)
                .findOne());
    }

    public void addRestaurant(String restName) {
        jdbi.useHandle(h ->
            h.createUpdate("INSERT INTO Restaurant(restName) values (:restName)")
                .bind("restName", restName)
                .execute());
    }
}
public class ReviewDAO {

    private final Jdbi jdbi;

    public ReviewDAO(Jdbi jdbi) {
        this.jdbi = jdbi;
    }
    public Optional<ReviewDTO> getReviewByAuthorAndRestaurant(String author, String restName) {
        return jdbi.withHandle(h ->
            h.createQuery("SELECT * FROM Review WHERE author = :author AND restName = :restName")
                .bind("author", author)
                .bind("restName", restName)
                .mapToBean(ReviewDTO.class)
                .findOne());
    }

    public List<ReviewDTO> getReviewsByAuthor(String author) {
        return jdbi.withHandle(h ->
            h.createQuery("SELECT * FROM Review WHERE author = :author ORDER BY created DESC")
                .bind("author", author)
                .mapToBean(ReviewDTO.class)
                .list());
    }

    public List<ReviewDTO> getReviewsByRestaurant(String restName) {
        return jdbi.withHandle(h ->
            h.createQuery("SELECT * FROM Review WHERE restName = :restName ORDER BY created DESC")
                .bind("restName", restName)
                .mapToBean(ReviewDTO.class)
                .list());
    }

    public List<ReviewDTO> getRecentReviews() {
        return jdbi.withHandle(h ->
            h.createQuery("SELECT TOP 5 * FROM Review ORDER BY created DESC")
                .mapToBean(ReviewDTO.class)
                .list());
    }

    public void addReview(String author, String restName, String title, String bodyText, int rating) {
        jdbi.useHandle(h ->
            h.createUpdate("INSERT INTO Review(bodyText, rating, restName, author, title) values (:bodyText, :rating, :restName, :author, :title)")
                .bind("bodyText", bodyText)
                .bind("rating", rating)
                .bind("restName", restName)
                .bind("author", author)
                .bind("title", title)
                .execute());
    }

    public void updateReview(String author, String restName, String title, String bodyText, int rating) {
        jdbi.useHandle(h ->
            h.createUpdate("UPDATE Review SET bodyText = :bodyText, rating = :rating, title = :title WHERE author = :author AND restName = :restName")
                .bind("bodyText", bodyText)
                .bind("rating", rating)
                .bind("title", title)
                .bind("author", author)
                .bind("restName", restName)
                .execute());
    }

    public void deleteReview(String author, String restName) {
        jdbi.useHandle(h ->
            h.createUpdate("DELETE FROM Review WHERE author = :author AND restName = :restName")
                .bind("author", author)
                .bind("restName", restName)
                .execute());
    }
}
Using the setting checkConnectionOnBorrow: true might work, but I would assume the ideal solution is to prevent my initial connections from being closed in the first place?
Any assistance is appreciated

It turns out my DB host, Azure, automatically closes idle connections after 30 minutes. For the time being, I've added aggressive validation settings to my config to renew the pool accordingly. Probably just gonna switch hosts since it doesn't look like you can configure the timeout on Azure's end.
validationQuery: "/* APIService Health Check */ SELECT 1"
validationQueryTimeout: 3s
checkConnectionWhileIdle: true
minIdleTime: 25m
evictionInterval: 5s
validationInterval: 1m
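For reference, here is a sketch of how those settings might sit alongside the original pool settings in config.yml (the key names suggest Dropwizard's DataSourceFactory, which is what this assumes; note that checkConnectionWhileIdle flips to true, and minIdleTime stays under Azure's 30-minute cutoff):

database:
  minSize: 10
  initialSize: 10
  maxSize: 100
  maxWaitForConnection: 10s
  # validate and evict idle connections before Azure's ~30-minute idle kill
  checkConnectionWhileIdle: true
  validationQuery: "/* APIService Health Check */ SELECT 1"
  validationQueryTimeout: 3s
  minIdleTime: 25m
  evictionInterval: 5s
  validationInterval: 1m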

Related

How to achieve the KGroupTable use case in Flink

I am doing a PoC on Flink, but I am not able to find documentation on how to achieve a use case similar to Kafka Streams' KGroupTable, as shown below:
KTable<byte[], Long> aggregatedStream = groupedTable.aggregate(
    () -> 0L,
    (aggKey, newValue, aggValue) -> aggValue + newValue.length(),
    (aggKey, oldValue, aggValue) -> aggValue - oldValue.length(),
    Serdes.Long(),
    "aggregation-table-store");
The use case: I want to aggregate account balances from the transactions I receive. If I get an update for an existing transaction ID, I want to remove the old value and add the new one. For example, if a transaction gets cancelled, I want to remove its old value from the account balance.
e.g.
TransactionId  AccountId  Balance
1              account1   1000    // account1 - 1000
2              account1   2000    // account1 - 3000
3              account2   2000    // account1 - 3000, account2 - 2000
1              account1   500     // account1 - 2500, account2 - 2000
In the example above, the fourth row is an update to existing transaction #1, so it removes the old balance (1000) and adds the new balance (500).
Thanks
Here's a sketch of how you could approach that. I used Tuples because I was lazy; it would be better to use POJOs.
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransactionsWithRetractions {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (transactionId, accountId, amount)
        DataStreamSource<Tuple3<Integer, String, Float>> rawInput = env.fromElements(
                new Tuple3<>(1, "account1", 1000.0F),
                new Tuple3<>(2, "account1", 2000.0F),
                new Tuple3<>(3, "account2", 2000.0F),
                new Tuple3<>(1, "account1", 500.0F)
        );

        rawInput
                .keyBy(t -> t.f1)
                .map(new ManageAccounts())
                .print();

        env.execute();
    }

    public static class ManageAccounts extends RichMapFunction<Tuple3<Integer, String, Float>, Tuple2<String, Float>> {
        MapStateDescriptor<Integer, Float> transactionsDesc;
        ReducingStateDescriptor<Float> balanceDesc;

        @Override
        public void open(Configuration parameters) throws Exception {
            transactionsDesc = new MapStateDescriptor<Integer, Float>("transactions", Integer.class, Float.class);
            balanceDesc = new ReducingStateDescriptor<>("balance", (f, g) -> f + g, Float.class);
        }

        @Override
        public Tuple2<String, Float> map(Tuple3<Integer, String, Float> event) throws Exception {
            MapState<Integer, Float> transactions = getRuntimeContext().getMapState(transactionsDesc);
            ReducingState<Float> balance = getRuntimeContext().getReducingState(balanceDesc);

            // look up the previous amount for this transaction id (0 if first time seen)
            Float currentValue = transactions.get(event.f0);
            if (currentValue == null) {
                currentValue = 0F;
            }

            // record the latest amount and apply only the delta to the account balance
            transactions.put(event.f0, event.f2);
            balance.add(event.f2 - currentValue);

            return new Tuple2<>(event.f1, balance.get());
        }
    }
}
When run, this produces:
1> (account1,1000.0)
8> (account2,2000.0)
1> (account1,3000.0)
1> (account1,2500.0)
Note that this implementation keeps all transactions in state forever, which might become problematic in a real application, though you can scale Flink state to be very large.
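If that growth ever becomes a problem, one option is Flink's state TTL support; here is a minimal sketch (the 90-day retention window is an arbitrary assumption) that would let old transaction entries expire:

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

// In ManageAccounts.open(), after creating transactionsDesc:
StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.days(90)) // assumed retention window
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .build();
transactionsDesc.enableTimeToLive(ttlConfig);

Expired entries then read back as null, which the existing null check in map() already treats as a first-seen transaction.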

Spring boot: Database connection not closing properly

I'm executing queries periodically (via a scheduler) in my Spring Boot application.
application.properties
src_mssqlserver_url=jdbc:sqlserver://192.168.0.1;databaseName=Test;
src_mssqlserver_username=tester
src_mssqlserver_password=tester1
src_mssqlserver_driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
DataSource and JdbcTemplate beans:
@Primary
@Bean(name = "src_mssqlserver")
@ConfigurationProperties(prefix = "spring.ds_mssqlserver")
public DataSource srcDataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName(env.getProperty("src_mssqlserver_driverClassName"));
    dataSource.setUrl(env.getProperty("src_mssqlserver_url"));
    dataSource.setUsername(env.getProperty("src_mssqlserver_username"));
    dataSource.setPassword(env.getProperty("src_mssqlserver_password"));
    return dataSource;
}

@Bean(name = "srcJdbcTemplate")
public JdbcTemplate srcJdbcTemplate(@Qualifier("src_mssqlserver") DataSource dsSrcSqlServer) {
    return new JdbcTemplate(dsSrcSqlServer);
}
Usage: this method is called from a scheduler with a list of items to process (normally 1,000 records); the process runs once an hour.
@Autowired
@Qualifier("srcJdbcTemplate")
private JdbcTemplate srcJdbcTemplate;

public void batchInsertUsers(final List<User> users) {
    String queryInsert = "INSERT INTO [User] ([Name]"
            + " , [Created_Date]"
            + " , [Notes])"
            + " VALUES (?, SYSDATETIMEOFFSET(), ?)";
    srcJdbcTemplate.batchUpdate(queryInsert, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            User user = users.get(i);
            ps.setString(1, user.getName());
            ps.setString(2, user.getNotes());
        }

        @Override
        public int getBatchSize() {
            return users.size();
        }
    });
}
I'm getting warnings from the database administrator that my code is keeping too many connections open. Please share a standard, workable way to handle this situation.
Thanks.
DriverManagerDataSource is NOT meant for production: it opens a new physical connection every time one is requested and does no pooling.
Use a connection pool instead, such as c3p0.
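For illustration, here is a minimal sketch of the same bean backed by HikariCP instead (HikariCP is one common pool choice, not the only one; the pool sizes below are assumptions to tune for your workload):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Primary
@Bean(name = "src_mssqlserver")
public DataSource srcDataSource() {
    HikariConfig config = new HikariConfig();
    config.setDriverClassName(env.getProperty("src_mssqlserver_driverClassName"));
    config.setJdbcUrl(env.getProperty("src_mssqlserver_url"));
    config.setUsername(env.getProperty("src_mssqlserver_username"));
    config.setPassword(env.getProperty("src_mssqlserver_password"));
    config.setMaximumPoolSize(10); // assumed cap; an hourly batch job needs few connections
    config.setMinimumIdle(2);      // assumed; keeps a couple of connections warm
    return new HikariDataSource(config);
}

The JdbcTemplate bean stays exactly as it is; the pool then hands out and reclaims connections instead of opening a fresh one per statement.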

Switching from local database to SQL server

So our current code loads a CSV file into a local database through JdbcTemplate, which I then query. The issue was always performance, and we finally got access to a SQL Server instance that could hold the data. Naturally, the company gets the guy with basically no database skills to set this up :P
@Autowired
DataSource dataSource;

@RequestMapping("/queryService")
public void queryService(@RequestParam("id") String id)
{
    log.info("Creating tables");
    jdbcTemplate.execute("DROP TABLE accounts IF EXISTS");
    jdbcTemplate.execute("CREATE TABLE accounts(id VARCHAR(255), name VARCHAR(255), Organization__c VARCHAR(255))");
    insertBatch(accounts, dataSource);
    ArrayList<Account2> filteredaccs = filterAccount(jdbcTemplate);
.
public void insertBatch(ArrayList<Account2> accs, DataSource dataSource) {
    List<Map<String, Object>> batchValues = new ArrayList<>(accs.size());
    for (Account2 a : accs) {
        Map<String, Object> map = new HashMap<>();
        map.put("id", a.getId());
        map.put("name", a.getName());
        map.put("Organization__c", a.getOrganization__c());
        batchValues.add(map);
    }
    SimpleJdbcInsert simpleJdbcInsert = new SimpleJdbcInsert(dataSource).withTableName("accounts");
    int[] ints = simpleJdbcInsert.executeBatch(batchValues.toArray(new Map[accs.size()]));
}
.
public ArrayList<Account2> filterAccount(JdbcTemplate jdbcTemplate)
{
    String sql = "query string";
    ArrayList<Account2> searchresults = (ArrayList<Account2>) jdbcTemplate.query(sql,
            new RowMapperResultSetExtractor<Account2>(new AccountRowMapper(), 130000));
    return searchresults;
}
.
public class AccountRowMapper implements RowMapper<Account2> {
    public Account2 mapRow(ResultSet rs, int rowNum) throws SQLException {
        Account2 a = new Account2();
        a.setId(rs.getString("id"));
        a.setName(rs.getString("name"));
        a.setOrganization__c(rs.getString("Organization__c"));
        return a;
    }
}
The question here is: what is the quickest way for me to 'switch' over to using SQL Server to pull the data down, with the same tables and rows, without changing too much of my current code?
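A hedged sketch of the minimal change (the connection details below are placeholders, and SimpleDriverDataSource is just one way to wire it; in Spring Boot the same values usually go in application.properties as spring.datasource.* instead): the JdbcTemplate calls can stay as they are, only the DataSource underneath needs to point at SQL Server.

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;

@Bean
public DataSource dataSource() {
    SimpleDriverDataSource ds = new SimpleDriverDataSource();
    ds.setDriverClass(com.microsoft.sqlserver.jdbc.SQLServerDriver.class);
    ds.setUrl("jdbc:sqlserver://yourserver;databaseName=yourdb"); // placeholder
    ds.setUsername("user");     // placeholder
    ds.setPassword("password"); // placeholder
    return ds;
}

One detail that does need changing: DROP TABLE accounts IF EXISTS is embedded-database (HSQLDB-style) syntax; SQL Server 2016+ expects DROP TABLE IF EXISTS accounts.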

stored procedure 'auto_pk_for_table' not found

I don't know why I received this error:
org.apache.cayenne.CayenneRuntimeException: [v.4.0.M5 Feb 24 2017 07:47:55] Commit Exception
[...]
Caused by: java.sql.SQLException: Procédure stockée 'auto_pk_for_table' introuvable. ("Stored procedure 'auto_pk_for_table' not found.")
[...]
I'm using Cayenne:
<dependency>
    <groupId>org.apache.cayenne</groupId>
    <artifactId>cayenne-server</artifactId>
    <version>4.0.M5</version>
</dependency>
and jTDS for SQL Server:
<dependency>
    <groupId>net.sourceforge.jtds</groupId>
    <artifactId>jtds</artifactId>
    <version>1.3.1</version>
</dependency>
The connection is OK:
avr. 10, 2017 2:36:30 PM org.apache.cayenne.datasource.DriverDataSource getConnection
INFOS: +++ Connecting: SUCCESS.
I'm trying to create a new user (I'm starting with the basics!), so my code is:
(I cut it down a bit; it's too long!)
public abstract class _UserInfo extends CayenneDataObject {

    public static final String ADDRESS_PROPERTY = "address";

    public void setAddress(String address) {
        writeProperty(ADDRESS_PROPERTY, address);
    }

    public String getAddress() {
        return (String) readProperty(ADDRESS_PROPERTY);
    }
}
public class UserInfo extends _UserInfo implements Serializable {

    private static final long serialVersionUID = 1L;

    public String address;

    public String getAdress() {
        return address;
    }

    public void setAddress(String address) {
        super.setAddress(address);
    }

    // I have the hashCode and equals too
}
Then I used Vaadin to create my form:
public class UserAddView extends CustomComponent implements View {

    private static final long serialVersionUID = 1L;

    private TextField address;
    private Button save;

    public static final String USERVIEW = "user";

    public boolean checkValidation() {
        if (!checkTextFieldValid(address))
            return false;
        return true;
    }

    public boolean checkTextFieldValid(TextField element) {
        if (element == null || element.isEmpty()) {
            Notification.show(
                    "You should register a " + element.getDescription(),
                    Type.WARNING_MESSAGE);
            return false;
        }
        return true;
    }

    public UserAddView() {
        VerticalLayout mainLayout = new VerticalLayout();
        mainLayout.setSizeFull();
        setCompositionRoot(mainLayout);

        final VerticalLayout vlayout = new VerticalLayout();
        address = new TextField("Address:");
        address.setDescription("Address");
        vlayout.addComponent(address);

        save = new Button("Save");
        vlayout.addComponent(save);

        mainLayout.addComponent(new HeaderMenu());
        mainLayout.addComponent(vlayout);
        addListeners();
    }

    private void addListeners() {
        save.addClickListener(new ClickListener() {
            private static final long serialVersionUID = 1L;

            @Override
            public void buttonClick(ClickEvent event) {
                if (checkValidation() == true) {
                    ServerRuntime cayenneRuntime = ServerRuntime.builder()
                            .addConfig("cayenne-myapplication.xml").build();
                    ObjectContext context = cayenneRuntime.newContext();
                    UserInfo user = context.newObject(UserInfo.class);
                    user.setAddress(address.getValue());
                    user.getObjectContext().commitChanges();
                    Notification.show(
                            "Has been saved. We will send you your password by email. Your user login is: "
                                    + email.getValue(), Type.TRAY_NOTIFICATION);
                    getUI().getNavigator().navigateTo(HomepageView.MAINVIEW);
                }
            }
        });
    }

    @Override
    public void enter(ViewChangeEvent event) {
        // TODO Auto-generated method stub
    }
}
EDIT, additional information: in my user object I have a userid (primary key); in Cayenne I declared it as the primary key too, typed smallint. This error seems to be linked to... https://cayenne.apache.org/docs/3.1/api/org/apache/cayenne/dba/sybase/SybasePkGenerator.html
The error happens when you insert a new object. For each new object Cayenne needs to generate a value for the primary key. There are various strategies for doing this, and the default strategy depends on the DB you are using. For SQL Server (and for Sybase, as you've discovered :)) that strategy is to use a special stored procedure.
To create this stored procedure (and other supporting DB objects), go to CayenneModeler, open your project, and select "Tools > Generate Database Schema". In "SQL Options" tab, uncheck all checkboxes except for "Create Primary Key Support". The SQL you will see in the window below the checkboxes is what you need to run on SQL server. Either do it from Cayenne modeler or copy/paste to your favorite DB management tool.
There's also an alternative that does not require a stored procedure - using DB auto-increment feature. For this you will need to go to each DbEntity in the Modeler and under the "Entity" tab select "Database-Generated" in the "Pk Generation Strategy" dropdown. This of course implies that your PK column is indeed an auto-increment in the DB (meaning you may need to adjust your DB schema accordingly).
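To make the auto-increment route concrete, here is an illustrative DDL sketch for SQL Server (the table and column names are assumed from the question; the Modeler's generated schema is authoritative):

CREATE TABLE UserInfo (
    userid  SMALLINT IDENTITY(1,1) PRIMARY KEY, -- matches "Database-Generated" in the Modeler
    address VARCHAR(255)
);

With the column defined as IDENTITY, Cayenne should read the generated key back from the insert instead of calling auto_pk_for_table.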

Dapper control dates

This question is meant to bring some light to managing control datetimes with Dapper.
These columns are used to audit the information in a data store and to figure out when a particular row was created / updated. I couldn't find any information on the project's GitHub or here on Stack Overflow, so I would like this post to become a central source of truth to help others, or even to turn into a future extension of the library.
Any answer, resource or best practice will be appreciated.
I've run into a case where I was working with a database consumed by both Rails and Dapper. Rails was managing created_at and updated_at, not the database, so in the .NET application I had to implement a solution that managed these and provided the ability to add additional business logic at these layers, such as events.
I've included a basic example of how I handled this with a wrapper around Dapper.SimpleCRUD for inserts and updates. This example does not expose the other critical methods from Dapper and SimpleCRUD, such as Query, Get, Delete, etc. You will need to expose those at your discretion.
For safety, ensure that you decorate your model's created_at property with the [Dapper.IgnoreUpdate] attribute:
[Table("examples")]
public partial class example
{
[Key]
public virtual int id { get; set; }
[Required(AllowEmptyStrings = false)]
[StringLength(36)]
public virtual string name { get; set; }
[Dapper.IgnoreUpdate]
public virtual DateTime created_at { get; set; }
public virtual DateTime updated_at { get; set; }
}
public class ExampleRepository : IExampleRepository
{
    private readonly IYourDapperWrapper db;

    public ExampleRepository(IYourDapperWrapper yourDapperWrapper)
    {
        if (yourDapperWrapper == null) throw new ArgumentNullException(nameof(yourDapperWrapper));
        db = yourDapperWrapper;
    }

    public void Update(example exampleObj)
    {
        db.Update(exampleObj);
    }

    public example Create(example exampleObj)
    {
        var result = db.Insert(exampleObj);
        if (result.HasValue) exampleObj.id = result.Value;
        return exampleObj;
    }
}
public class YourDapperWrapper : IYourDapperWrapper
{
    private readonly IDbConnectionFactory db;

    public YourDapperWrapper(IDbConnectionFactory dbConnectionFactory)
    {
        if (dbConnectionFactory == null) throw new ArgumentNullException(nameof(dbConnectionFactory));
        db = dbConnectionFactory;
    }

    public int? Insert(object model, IDbTransaction transaction = null, int? commandTimeout = null)
    {
        // stamp created_at and updated_at before inserting
        DateUpdate(model, true);
        var results = db.NewConnection().Insert(model, transaction, commandTimeout);
        if (!results.HasValue || results == 0) throw new DataException("Failed to insert object.");
        return results;
    }

    public int Update(object model, IDbTransaction transaction = null, int? commandTimeout = null)
    {
        // stamp updated_at only; created_at is protected by [Dapper.IgnoreUpdate]
        DateUpdate(model, false);
        var results = db.NewConnection().Update(model, transaction, commandTimeout);
        if (results == 0) throw new DataException("Failed to update object.");
        return results;
    }

    private void DateUpdate(object model, bool isInsert)
    {
        // set the audit properties via reflection so any model exposing these columns works
        model.GetType().GetProperty("updated_at")?.SetValue(model, DateTime.UtcNow, null);
        if (isInsert) model.GetType().GetProperty("created_at")?.SetValue(model, DateTime.UtcNow, null);
    }
}
