I'm using DBUnit to populate the database so that its contents are known during testing.
The schema I'm working on lives in an Oracle 11g instance that also hosts other schemas. In some of those schemas the same table has been defined; a public synonym has been associated with it and SELECT rights have been granted on it.
When I run the XML that defines how the database must be populated, DBUnit throws an AmbiguousTableNameException on that table, even though the XML file doesn't reference the table defined in several schemas.
I found that there are three solutions to this problem:
1. Use database connection credentials that have access to only one database schema.
2. Specify a schema name in the DatabaseConnection or DatabaseDataSourceConnection constructor.
3. Enable qualified table name support (see the How-to documentation).
A minimal sketch of solutions 2 and 3 follows this list.
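For reference, solutions 2 and 3 look roughly like this (just a sketch; the JDBC URL, credentials, and MY_SCHEMA are placeholders, and the usual java.sql and org.dbunit.database imports are assumed):

Connection jdbcConnection = DriverManager.getConnection("<jdbc-url>", "<user>", "<password>");

// Solution 2: bind the DBUnit connection to a single schema
IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbcConnection, "MY_SCHEMA");

// Solution 3: enable qualified table names, then use SCHEMA.TABLE names in the dataset
dbUnitConnection.getConfig().setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);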
In my case I can only apply solution 1, but even after adopting it I get the same exception.
The table that gives me problems is defined in three schemas, and I have no way to act on it.
Could someone please help me?
I found the solution: I qualified the table names with the schema name and set the property http://www.dbunit.org/features/qualifiedTableNames (corresponding to org.dbunit.database.DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES) to true.
This way, my XML dataset looks like:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <SCHEMA.TABLE ID_FIELD="1" />
</dataset>
where SCHEMA is the schema name and TABLE is the table name.
To set the property I used the following code:
DatabaseConfig dBConfig = dBConn.getConfig(); // dBConn is an IDatabaseConnection
dBConfig.setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);
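For completeness, loading the qualified dataset then looks roughly like this (a sketch assuming DBUnit 2.4.7+ for FlatXmlDataSetBuilder; the file name is a placeholder):

IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dataset.xml"));
DatabaseOperation.CLEAN_INSERT.execute(dBConn, dataSet);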
In my case, I had granted the DBA role to the user, so DBUnit threw AmbiguousTableNameException (with DBA rights the user sees the table in every schema).
After revoking the DBA role from the user, the problem was solved.
SQL> revoke dba from username;
I had the same AmbiguousTableNameException while executing DBUnit tests against an Oracle DB. It had been working fine and started throwing the error one day.
Root cause: a stored procedure call had been changed to lower case by mistake. When it was changed back to upper case, it started working again.
I could also solve this by setting the schema name on the IDatabaseTester, e.g. iDatabaseTester.setSchema("SCHEMANAMEINCAPS").
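In context that looks roughly like this (a sketch based on the call above; the driver class, URL, and credentials are placeholders, and a JdbcDatabaseTester is assumed):

JdbcDatabaseTester databaseTester = new JdbcDatabaseTester("<driverClass>", "<jdbcUrl>", "<user>", "<password>");
databaseTester.setSchema("SCHEMANAMEINCAPS");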
Thanks
Smitha
I was using Spring JDBC along with MySQL Connector/J (v8.0.17). Following the two steps explained in this answer alone did not help.
First I had to set the schema on the Spring datasource.
Then I also had to set the property "databaseTerm" to "schema"; by default it is set to "catalog", as explained here.
We must set this property because, in Spring's implementation of javax.sql.DataSource, if it is left at the default ("catalog") then the connection returned by dataSource.getConnection() will not have the schema set on it, even if we set it on the dataSource.
@Bean
public DriverManagerDataSource cloudmcDataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("<driver>");
    dataSource.setUrl("<url>");
    dataSource.setUsername("<uname>");
    dataSource.setPassword("<password>");
    dataSource.setSchema("<schema_name>");

    Properties props = new Properties();
    // this key/value pair is required exactly as shown
    props.setProperty("databaseTerm", "schema");
    dataSource.setConnectionProperties(props);

    return dataSource;
}
Don't forget to make the changes explained in the answer here.
I am using the commons-dbcp2 library for JDBC connection pooling:
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-dbcp2</artifactId>
    <version>2.1.1</version>
</dependency>
When I initialize the connection pool, I bind it to a schema name by making the schema part of the URL:
BasicDataSource ds = new BasicDataSource();
String url = "<url>";
ds.setDriverClassName("<DriverClass>");
ds.setUsername("<userName>");
ds.setPassword("<Password>");
ds.setInitialSize(5);
ds.setMaxTotal(10);
ds.setMaxIdle(5);
String schema = "<mySchema>";
ds.setUrl(url + "?currentschema=" + schema);
try (Connection conn = ds.getConnection()) {
    // borrow a connection once to verify the pool works
} catch (Exception ex) {
    LOG.error("Issue while creating connection pool", ex);
}
Is this the correct way of creating the connection pool (binding the connection pool to a schema name)? What is the impact if I try to run a query [with a connection borrowed from the pool] on another schema?
I would say the schema name shouldn't be part of the URL, since DB connections are made to databases and not to a schema per se. Also, there are databases, such as DB2, where currentschema wouldn't work.
We should keep in mind the basic purpose behind the concept of a schema (organizing tables functionally and per user) and shouldn't start associating it with connections.
With schema permissions granted appropriately, a user can only reach the schemas they are allowed to, and the user should always use the schema name in queries; queries shouldn't be ambiguous.
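For example, instead of binding the pool to a schema, each query can name the schema explicitly (a sketch reusing the ds and LOG from your snippet, assuming the usual java.sql imports; other_schema.some_table is a placeholder):

try (Connection conn = ds.getConnection();
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM other_schema.some_table")) {
    while (rs.next()) {
        // process each row
    }
} catch (SQLException ex) {
    LOG.error("Query against other_schema failed", ex);
}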
For your second question,
"What is the impact if I try to run a query [with the connection borrowed from the pool] on another schema?"
you can easily test it out; my guess is it shouldn't work, but the behaviour might vary from database to database.
I say that it wouldn't work because of this note on the setUrl method:
Note: this method currently has no effect once the pool has been initialized. The pool is initialized the first time one of the following methods is invoked: getConnection, setLogwriter, setLoginTimeout, getLoginTimeout, getLogWriter.
It shouldn't work because the URL, once supplied, won't automatically change for a second schema. The pooling API simply reuses that original URL for new query executions, and it might or might not work depending on the underlying driver/DB.
The fact that these APIs don't provide a method like setSchema() may have its own reasons; your code should stay as neutral as possible.
Hope it helps!
I have all my tables in the same schema in a SQL Server database, so I would rather not have to specify the schema in every @Table annotation. I have put the schema name in my connection string:
spring.datasource.url=jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;databaseSchema=TST;databaseName=TestDB;integratedSecurity=true;
When my annotation is @Table(name="MY_TABLE") and Hibernate attempts an insert, I get an "Invalid object name 'MY_TABLE'" error message.
If the annotation is @Table(name="MY_TABLE", schema="TST"), then the insert works as expected.
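For reference, the working variant sits on the entity roughly like this (a sketch; the class name and id field are placeholders, javax.persistence annotations assumed):

@Entity
@Table(name = "MY_TABLE", schema = "TST")
public class MyTable {

    @Id
    private Long id;

    // other mapped fields ...
}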
Does the SQLServer dialect not honor the schema in the connection string?
Here are all the Spring/Hibernate properties:
spring.datasource.url=jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;databaseSchema=EXP;databaseName=YRC_PILOT2;integratedSecurity=true;
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.database-platform = org.hibernate.dialect.SQLServerDialect
spring.jpa.show-sql=true
If I have to specify the schema for every table, so be it. But that seems a bit kludgey if I ever want to switch schema names.
Use spring.jpa.properties.hibernate.default_schema=yourschemaname.
You can use spring.jpa.properties.* to set native JPA properties.
From the Spring Boot docs:
all properties in spring.jpa.properties.* are passed through as normal JPA properties (with the prefix stripped)
See http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-jpa-properties
Thanks to datGnomeLife and Neil Stockton I started looking outside the connection string. That led me to "How to set up default schema name in JPA configuration?", which actually answered my question; "Possible to set default schema from connection string?" did not.
I'm trying to understand how to configure my Hibernate to work properly with my MSSQL DB and its schemas.
The problem is that during validation of tables, it logs (for every table):
org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl
- HHH000262: Table not found SHARED_CONFIGURATION
I debugged Hibernate to find out what causes this and found that it calls something like:
EXECUTE [mydb]..sp_columns N'SHARED_CONFIGURATION',N'',N'mydb'
Notice that the 2nd parameter is the schema name and an empty string is passed for it. When I ran this query against the DB it returned an empty result set, but when I passed 'dbo' as the 2nd parameter the result set was not empty (meaning Hibernate should call it that way instead).
OK, so it seemed I needed to define the schema. But both setting hibernate.default_schema and setting schema in the @Table annotation on my entities threw an exception:
Schema-validation: missing table [SHARED_CONFIGURATION]
So now I'm wondering what the real problem is. I also wanted to set a default schema in my DB but was not allowed (Cannot alter the user 'sa', because it does not exist or you do not have permission.), even when executed as the 'sa' user itself:
ALTER USER sa WITH DEFAULT_SCHEMA = dbo;
Note that this happens with any driver (JTDS, official MS driver..)
Can someone explain what is happening here and how to "correctly" get rid of that warning in the log that says the table does not exist even though it does (and the application runs properly against the database)?
I had the same problem and solved it by setting the property hibernate.hbm2ddl.jdbc_metadata_extraction_strategy to individually.
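If you bootstrap JPA programmatically, the setting can be passed as a property override, roughly like this (a sketch; "my-persistence-unit" is a placeholder, and with Spring Boot the equivalent is a spring.jpa.properties.* entry):

Map<String, Object> jpaProps = new HashMap<>();
jpaProps.put("hibernate.hbm2ddl.jdbc_metadata_extraction_strategy", "individually");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-persistence-unit", jpaProps);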
I have a dataset (defined in XML) and I am using PostgreSQL, POJOs annotated with JPA, and DbUnit with JUnit for tests.
When the test runs, it creates the tables and sequences in the database, but when it starts to read the dataset (XML) with the table definitions and columns, it fails with the following error:
org.dbunit.dataset.NoSuchTableException "nameoftable"
I tried to put the name of the table in all caps and in normal caps, and it didn't work. The table was created in the public schema, and then I tried to define the table in the XML as public."nameoftable", but that also didn't work... any ideas?
I tried to run these tests with DbUnit versions 2.2.2, 2.3.0, and 2.4.5.
Thanks.
With DBUnit you can either test against a specific schema or against a full database (with potentially multiple schemas). If you use the latter, you need to specify the schema in the dataset when importing/exporting, or it can get confused; that's the case in PostgreSQL at least, I've not tried it with anything else.
To enforce this, add code along these lines:
IDatabaseConnection conn;
if (strSchema == null || strSchema.isEmpty()) {
    // no schema supplied: enable qualified table names so the dataset can use SCHEMA.TABLE
    conn = new DatabaseConnection(jdbcConnection);
    conn.getConfig().setProperty(
            "http://www.dbunit.org/features/qualifiedTableNames", true);
} else {
    conn = new DatabaseConnection(jdbcConnection, strSchema);
}
The important bit is setting the property; the rest is just how I make the connection specific to the DB or schema (based on a schema name extracted from the Hibernate config XML).
As the title suggests, I am confused as to why some tables in my database fall over if you do something like:
SELECT * FROM [user].[table]
And yet on other tables it works fine.
I am testing some code that will eventually run on a server that cries if you don't use [user].[table], so I would really like to force this on my machine.
Could someone explain why this happens and possibly show me how to fix it?
More Info
Here is the message I get when I try to run a query using [user].[table] instead of just [table]:
[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'usr.tbl'
The "user" bit is the schema a table belongs to
So you can have dbo.table and user.table in the same database.
By default, SELECT * FROM table will usually look for dbo.table. However, if the login/user has a different default schema, then it will look for <that schema>.table instead.
To fix it:
You can use ALTER SCHEMA .. TRANSFER .. to fix the current setup (for example, ALTER SCHEMA usr TRANSFER dbo.tbl moves dbo.tbl into the usr schema)
Going forward, ensure every table reference has the correct schema in CREATE, ALTER, SELECT, and so on
Also see "User-Schema Separation" on MSDN
What you refer to as [user] is actually something called a schema. Every user has a default schema, which means that when you are logged in as that user you can refer to the tables in the default schema without the schema prefix. One way to solve this would be to make sure that no user has the schema where the tables are located as their default schema. Basically you can just make an empty schema and use that as the default schema for all your users.
Go to YourDatabase -> Security -> Users and open the properties (by right-clicking) to change the default schema for your users.