I am using the commons-dbcp2 library for JDBC connection pooling:
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-dbcp2</artifactId>
    <version>2.1.1</version>
</dependency>
When initializing the connection pool, I bind it to a schema by making the schema name part of the URL:
BasicDataSource ds = new BasicDataSource();
String url = "<url>";
ds.setDriverClassName("<DriverClass>");
ds.setUsername("<userName>");
ds.setPassword("<Password>");
ds.setInitialSize(5);
ds.setMaxTotal(10);
ds.setMaxIdle(5);

String schema = "<mySchema>";
// Bind the pool to a schema by making it part of the JDBC URL
ds.setUrl(url + "?currentschema=" + schema);

// Borrowing a connection initializes the pool
try (Connection conn = ds.getConnection()) {
    // the connection is returned to the pool when the try block exits
} catch (Exception ex) {
    LOG.error("Issue while creating connection pool", ex);
}
Is this the correct way of creating the connection pool (by binding it to a schema name)? What is the impact if I try to run a query [with a connection borrowed from the pool] against another schema?
I would say the schema name shouldn't be part of the URL, since database connections are made to databases and not to schemas per se. Also, the URL parameter syntax is driver-specific; there are databases like DB2 where this ?currentschema= form wouldn't work.
We should keep in mind the basic purpose behind the concept of a schema - organizing tables functionally and by user - and we shouldn't start associating it with connections.
Depending on the schema permissions granted, a user may be able to access only specific schemas, and should qualify every query with the schema name. Queries shouldn't be ambiguous.
For your second question,
What is the impact if i try to run a query [with the connection
borrowed from the pool] on another schema?
I think you can very well test it out; my guess is that it shouldn't work, but the behavior might vary from database to database.
I say that it wouldn't work because of this note on the setUrl method:
Note: this method currently has no effect once the pool has been
initialized. The pool is initialized the first time one of the
following methods is invoked: getConnection, setLogwriter,
setLoginTimeout, getLoginTimeout, getLogWriter.
It shouldn't work because the URL, once supplied, won't automatically change for a second schema. The pooling API simply reuses that original URL for new query executions, and the result might or might not work depending on the underlying driver/database.
The fact that these pooling APIs don't provide a method like setSchema() (at least in the version you're using) may have its own reasons - your code should be as schema-neutral as possible.
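If you do need to query another schema with a borrowed connection, the portable option is to fully qualify table names in the SQL (SCHEMA.TABLE). Where the driver supports it, you can also switch the schema on the connection itself via JDBC 4.1's Connection.setSchema. A minimal sketch of the latter, assuming OTHER_SCHEMA is a hypothetical schema name and that your driver actually honors setSchema (many don't):

import java.sql.Connection;
import javax.sql.DataSource;

public class SchemaSwitch {
    static void queryOtherSchema(DataSource ds) throws Exception {
        try (Connection conn = ds.getConnection()) {
            // Remember the schema the pool configured the connection with
            String original = conn.getSchema();
            // Driver-dependent: may be ignored or throw SQLFeatureNotSupportedException
            conn.setSchema("OTHER_SCHEMA");
            // ... run queries against OTHER_SCHEMA here ...
            // Restore before the connection goes back to the pool; the pool
            // may not reset the schema for the next borrower
            conn.setSchema(original);
        }
    }
}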
Hope it helps!
I have an application that supports two databases, MSSQL and SQLite. I am revamping the underlying data access and models and using RepoDb, with the same model for both SQLite and MSSQL. Depending on the connection string, I create the appropriate connection object (i.e. SQLiteConnection or SqlConnection). I am facing a problem with one of my entities; the problem is with a column type.
public class PLANT
{
    public string OP_ID { get; set; }
}
The OP_ID column maps to a uniqueidentifier in SQL Server, and to an nvarchar in SQLite. When I query it, it works fine with SQLiteConnection. The problem I face is when I use SqlConnection:
var plant = connection.Query<PLANT>(e => e.OP_ID == "3FFA25B5-4DF5-4216-846C-2C9F58B7DD90").FirstOrDefault();
I get the error:
"No coercion operator is defined between types 'System.Guid' and 'System.String'."
I have tried using the IPropertyHandler<Guid, string> on the OP_ID; it works for SqlConnection but fails for SQLiteConnection.
Is there a way that I can use the same model for both connections?
I strongly recommend sharing models between multiple databases only if the PK has the same type in each; otherwise you will end up with coercion problems like this, due to the fact that one DB does not support the target type (i.e. UNIQUEIDENTIFIER).
In any case, a PropertyHandler is not the way to go here, as the input types differ. You can use separate models for your SQLite and SQL Server tables; otherwise, you can explicitly set RepoDb.Converter.ConversionType to Automatic so the coercion is handled automatically.
I don't recommend setting ConversionType to Automatic, as it adds conversion logic on top of the data extraction, but it would fix this.
I have a DB2 table with about 150K records. I have another SQL Server table with the same columns. One of the columns - let's call it code - holds unique values and is indexed. I am using Spring Batch. Periodically, I get a file with a list of codes, for example a file with 5K codes. For each code in the file, I need to read the records from the DB2 table whose code column matches the code in the file, and insert a few columns from those records into the SQL Server table. I want to use SQL and not JPA, and I believe there is a limit (let's say 1000) on how many values can appear in a SQL IN clause. Should this be my chunk size?
How should the Spring Batch application be designed to do this? I have considered the strategies below but need help deciding which one (or any other) is better.
1) Single-step job with a reader reading codes from the file, a processor using a JdbcTemplate to fetch the rows for a chunk of codes, and a writer writing the rows using a JdbcBatchItemWriter - it seems the JdbcTemplate would hold an open DB connection throughout the job execution.
2) JdbcPagingItemReader - the Spring Batch documentation cautions that databases like DB2 have pessimistic locking strategies and suggests using a driving query instead.
3) Driving query - is there an example? How does the processor convert the key to a full object here? How long does the connection stay open?
4) Chaining readers - is this possible? The first reader would read from the file, the second from DB2, followed by the processor and writer.
I would go with your option #1. Your file containing unique codes effectively becomes your driving query.
Your ItemReader will read the file and emit each code to your ItemProcessor.
The ItemProcessor can either use a JdbcTemplate directly, or delegate to a separate data access object (DAO) in your project. Either way, with each invocation of the process method a new record is pulled from your DB2 table. You can do whatever other processing is necessary there before emitting the appropriate object to your ItemWriter, which then inserts or updates the necessary record(s) in your SQL Server table.
Here's an example from a project where I used an ItemReader<Integer> as my driving query to collect the IDs of devices whose configuration data I needed to process. I then passed those IDs on to my ItemProcessor, which dealt with one configuration file at a time:
public class DeviceConfigDataProcessor implements ItemProcessor<Integer, DeviceConfig> {

    @Autowired
    MyJdbcDao myJdbcDao;

    @Override
    public DeviceConfig process(Integer deviceId) throws Exception {
        DeviceConfig deviceConfig = myJdbcDao.getDeviceConfig(deviceId);
        // process deviceConfig as needed
        return deviceConfig;
    }
}
You would swap out deviceId for code, and DeviceConfig for whatever domain object is appropriate to your project.
If you're using Spring Boot, you get a connection pool automatically, and your DAO will pull a single record at a time for processing, so you don't need to worry about long-lived connections, pessimistic locks, etc.
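For the file side of option #1, here is a minimal sketch of the driving reader, assuming a plain-text input with one code per line (the file path and bean name are hypothetical):

import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.FileSystemResource;

@Bean
public FlatFileItemReader<String> codeReader() {
    FlatFileItemReader<String> reader = new FlatFileItemReader<>();
    // Hypothetical path; in practice this would likely come from a job parameter
    reader.setResource(new FileSystemResource("codes.txt"));
    // Each line of the file is a single code; emit it as-is
    reader.setLineMapper((line, lineNumber) -> line.trim());
    return reader;
}

Each code emitted by this reader flows to the processor above, which looks up the matching DB2 rows, and then to a JdbcBatchItemWriter targeting the SQL Server table.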
I'm using DBUnit to populate the database so that its content is known during testing.
The DB schema I'm working on is in an Oracle 11g instance that also hosts other DB schemas. In some of those schemas, a table has been defined that has a public synonym associated with it and on which SELECT rights have been granted.
When I run the XML that defines how the database must be populated, DBUnit throws an AmbiguousTableNameException on that table, even though the XML file doesn't reference it.
I found that there are 3 solutions for this behavior:
1. Use database connection credentials that have access to only one database schema.
2. Specify a schema name in the DatabaseConnection or DatabaseDataSourceConnection constructor.
3. Enable qualified table name support (see the How-to documentation).
In my case, I can only apply solution 1, but even when I adopt it, I get the same exception.
The table that gives me problems is defined in 3 schemas, and I have no way to act on it.
Could someone please help me?
I found the solution: I specified the schema in the table names and set the property http://www.dbunit.org/features/qualifiedTableNames (corresponding to DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES) to true.
This way, my XML code to populate the tables looks like:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <SCHEMA.TABLE ID_FIELD="1" />
</dataset>
where SCHEMA is the schema name and TABLE is the table name.
To set the property, I used the following code:
DatabaseConfig dBConfig = dBConn.getConfig(); // dBConn is a IDatabaseConnection
dBConfig.setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);
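Alternatively (solution 2 above), you can scope DBUnit to a single schema by passing the schema name to the DatabaseConnection constructor. A minimal sketch, assuming a plain JDBC connection and a placeholder schema name:

import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;

Connection jdbcConn = DriverManager.getConnection("<url>", "<user>", "<password>");
// The second argument restricts DBUnit's table lookups to this schema,
// which avoids the AmbiguousTableNameException for tables defined elsewhere
IDatabaseConnection dbUnitConn = new DatabaseConnection(jdbcConn, "MYSCHEMA");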
In my case, I had granted the dba role to the user, which caused DBUnit to throw the AmbiguousTableNameException.
After I revoked the dba role from the user, the problem was solved:
SQL> revoke dba from username;
I had the same AmbiguousTableNameException while running DBUnit tests against an Oracle DB. It had been working fine and started throwing the error one day.
Root cause: a stored procedure call had accidentally been changed to lower case. When it was changed back to upper case, it started working.
I could also solve this by setting the schema name on the IDatabaseTester, like iDatabaseTester.setSchema("SCHEMANAMEINCAPS").
Thanks
Smitha
I was using Spring JDBC along with MySQL Connector (v8.0.17). Following the two steps explained in this answer alone did not help.
First, I had to set the schema on the Spring datasource.
Then I also had to set the property "databaseTerm" to "schema"; by default it is set to "catalog", as explained here.
We must set this property because, in Spring's implementation of javax.sql.DataSource, if it's left at the default ("catalog") then the connection returned by dataSource.getConnection() will not have the schema set on it, even if we set it on the dataSource.
@Bean
public DriverManagerDataSource cloudmcDataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("<driver>");
    dataSource.setUrl("<url>");
    dataSource.setUsername("<uname>");
    dataSource.setPassword("<password>");
    dataSource.setSchema("<schema_name>");

    Properties props = new Properties();
    // the following key-value pair is a constant; it must be set as-is
    props.setProperty("databaseTerm", "schema");
    dataSource.setConnectionProperties(props);

    return dataSource;
}
Don't forget to make the changes explained in the answer here.
I am trying to implement this solution:
NHibernate-20-SQLite-and-In-Memory-Databases
The only problem is that we have hbms like this:
<class name="aTable" table="[dbo].[aTable]" mutable="true" lazy="false">
with [dbo] in the table name, because we are working with MSSQL, and this does not work with SQLite.
I found this posting on the rhino-tools-dev group where they talk about just removing the schema from the mapping, but on NH2 there doesn't seem to be a classMapping.Schema.
There is a classMapping.Table.Schema, but it seems to be read-only. For example, this doesn't work:
foreach (PersistentClass cp in configuration.ClassMappings) {
    // Does not work - throws a
    // System.IndexOutOfRangeException: Index was outside the bounds of the array.
    cp.Table.Schema = "";
}
Is there a way to tell SQLite to ignore the [dbo] (I tried attach database :memory: as dbo, but this didn't seem to help)?
Alternatively, can I programmatically remove it from the class mappings (unfortunately, changing the hbms is not possible right now)?
We had too many problems with SQLite which eventually pushed us to switch to SQL Express.
Problems I remember:
When used in-memory, SQLite discards the database when the Session is closed.
SQLite does not support a number of SQL constructs, from basic ones such as ISNULL to more advanced ones like common table expressions and other features added in SQL Server 2005 and 2008. This becomes important when you start writing complex named queries.
SQLite's datetime has a bigger range of possible values than SQL Server's.
The API NHibernate uses for SQLite behaves differently than ADO.NET for MS SQL Server when used in the scope of a transaction. One example is the hbm-to-ddl tool, whose Execute method does not work inside a transaction with SQL Server but works fine with SQLite.
To summarize, SQLite-based unit testing is very far from being conclusively representative of the issues you'll encounter when using MS SQL Server in production, and it therefore undermines the credibility of unit testing overall.
We are using SQLite to run unit tests with NH 2.0.1. Actually, I didn't run into this problem; I just didn't specify dbo, which I think is the default on SQL Server.
By the way, there is a default_schema parameter in the configuration file. It actually takes the database name, but you can try putting dbo there - only for the SQL Server configuration, of course.
After looking through the source of NH and some experimenting, I think I found a simple workaround:
foreach (PersistentClass cp in configuration.ClassMappings)
{
    // Input : [dbo].[Tablename]  Output : Tablename
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"^\[.*\]\.\[", "");
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"\]$", "");
    // just to be sure
    cp.Table.Schema = null;
}
Note that I could set Table.Schema to null, while an empty string threw an exception...
Thanks for the answers!
Are there any rapid database prototyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities?
For example, assuming an empty database (pseudo code):
user1 = new User() // Creates the user table with a single id column
user1.firstName = "Allain" // alters the table to have a firstName column as varchar(255)
user2 = new User() // Reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw" // Alters the table to have a last name column
There are logical assumptions that can be made when dynamically creating the schema, and you could always override its choices by tweaking the result later with your DB tools.
Also, you could generate your schema by unit testing it this way.
And obviously this is only for prototyping.
Is there anything like this out there?
Google's App Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.
Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain; in this simple case the file is named User.groovy.
class User {
    String userName
    String firstName
    String lastName
    Date dateCreated
    Date lastUpdated

    static constraints = {
        userName(blank: false, unique: true)
        firstName(blank: false)
        lastName(blank: false)
    }

    String toString() { "$lastName, $firstName" }
}
Saving the file alters the schema automatically. Likewise, if you are using scaffolding, it is updated. The prototyping process becomes: run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.
I agree with the NHibernate approach and auto-database-generation. But, if you want to avoid writing a configuration file, and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class with via attributes.
[ActiveRecord]
public class User : ActiveRecordBase<User>
{
    [PrimaryKey]
    public Int32 UserId { get; set; }

    [Property]
    public String FirstName { get; set; }
}
There are a variety of constraints you can apply (validation, bounds, etc) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple.
So you're working with code, declaring usage in code, and when you're done, you let ActiveRecord create the database:
ActiveRecordStarter.Initialize();
ActiveRecordStarter.CreateSchema();
Maybe not exactly answering your general question, but if you use (N)Hibernate you can automatically generate the database schema from your hbm mapping files.
It's not done directly from your code as you seem to want, but Hibernate schema generation has worked well for us.
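For classic Hibernate 3.x, a minimal sketch of exporting the schema from the mappings (assuming a hibernate.cfg.xml that lists your hbm.xml files and connection settings):

import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaExport;

public class GenerateSchema {
    public static void main(String[] args) {
        // Reads hibernate.cfg.xml and the hbm.xml mappings it references
        Configuration cfg = new Configuration().configure();
        // create(script, export): print the DDL to stdout and run it against the DB
        new SchemaExport(cfg).create(true, true);
    }
}

Alternatively, setting the hibernate.hbm2ddl.auto property to update keeps the schema in sync with the mappings on every startup, which is closer to the prototyping workflow the question describes.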
Do you want a schema but have it generated, or do you actually want NO schema?
For the former I'd go with NHibernate, as @tom-carter said. Have it generate your schema for you, and you are all good (at least until you roll your app out; then look at something like Tarantino and RedGate SQL Diff or whatever it's called to generate update scripts).
If you want the latter... Google App Engine does this, as I discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, though it's a bit of upfront work getting it set up. But once you have it, it's a totally, 100% schema-free database. Well, you have an ID and a version, but that's it - the rest is up to you. http://incubator.apache.org/couchdb/
But by the sounds of it, (N)Hibernate would suit you best, though I could be wrong.
You could use an object database.