I have a dataset (defined in XML) and I am using PostgreSQL, POJOs annotated with JPA, and DbUnit with JUnit for tests.
When the test runs, it creates the tables and sequences in the database, but when it starts to read the dataset (XML) with the table definitions and columns, it fires the following error:
org.dbunit.dataset.NoSuchTableException: "nameoftable". I tried putting the table name in all caps and in normal case, and it won't work. The table was created in the public schema, so I also tried defining the table in the XML as public."nameoftable", but that doesn't work either... any ideas?
I tried to run these tests with DbUnit in the following versions: 2.2.2, 2.3.0, and 2.4.5.
Thanks.
With DbUnit you can either test against a specific schema, or against a full database (with potentially multiple schemas). If you use the latter, you need to specify the schema in the dataset when importing/exporting, or DbUnit can get itself confused; that's the behaviour in PostgreSQL at least, I've not tried it with anything else.
To enforce this, add code along the following lines:
IDatabaseConnection conn;
if (strSchema == null || strSchema.isEmpty()) {
    // Whole-database connection: enable qualified table names so the
    // dataset can address tables as schema.table
    conn = new DatabaseConnection(jdbcConnection);
    conn.getConfig().setProperty(
            "http://www.dbunit.org/features/qualifiedTableNames", true);
} else {
    // Single-schema connection: everything is scoped to strSchema
    conn = new DatabaseConnection(jdbcConnection, strSchema);
}
The important bit is setting the property; the rest is something I use to make the connection relevant to the DB or schema (based upon a schema name extracted from the Hibernate config XML).
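For example, once the feature is enabled, exporting the whole database produces schema-qualified element names. A minimal sketch, assuming a PostgreSQL database; the connection details and output file name are made up:

import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;

public class ExportWithQualifiedNames {
    public static void main(String[] args) throws Exception {
        // hypothetical connection details
        Connection jdbc = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "test", "test");
        IDatabaseConnection conn = new DatabaseConnection(jdbc);
        conn.getConfig().setProperty(
                "http://www.dbunit.org/features/qualifiedTableNames", true);
        // with the feature on, the exported XML uses elements such as
        // <public.nameoftable .../> rather than <nameoftable .../>
        IDataSet fullDataSet = conn.createDataSet();
        FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full-dataset.xml"));
    }
}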
I have a database with mixed case, i.e. testDATABASE.
I run (using ODBC) the query use database ""testDATABASE";", then I run the query use schema "PUBLIC", and the schema query fails with the error:
ERROR: SQL compilation error:
Object does not exist, or operation cannot be performed.
Error Code: 2043
Query = use schema "PUBLIC"
When I run it not via ODBC but in the notebook, it works fine.
The same queries work fine with a database whose name does not contain mixed case.
If I run use schema "testDATABASE"."PUBLIC", it runs OK via both ODBC and the notebook.
Is there a known issue about this? How can I run it as 2 queries over ODBC and make it work?
Thanks.
In your question it looks like your use database command had double double quotes, but your schema didn't; perhaps that might be the issue.
Overall suggestions:
When you make object names MiXeD-CaSe it simply makes use of the objects more difficult, so I'd recommend avoiding mixed case if you can. You may not be able to avoid it, and that's OK; it's just a suggestion.
If you can't avoid it, the only time I'd use the double quotes is when the object name (in this case, the database name) has mixed case.
In your case, you should be able to run (you may have to double-double quote it in ODBC):
use database "testDATABASE";
and then this (note that no double quotes are needed because it's not mixed case):
use schema PUBLIC;
This document illustrates how you don't need to prefix the schema with the database:
https://docs.snowflake.com/en/sql-reference/sql/use-schema.html
Something else I recommend to folks getting started: for each user, I like to set all the default context items (role, warehouse, namespace):
ALTER USER rich SET DEFAULT_ROLE = 'RICH_ROLE';
ALTER USER rich SET DEFAULT_WAREHOUSE = 'RICH_WH' ;
ALTER USER rich SET DEFAULT_NAMESPACE = 'RICH_DB.TEST_SCHEMA';
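If it helps to sanity-check the quoting outside ODBC, here is a small JDBC sketch; the account URL and credentials are made up, and the same two statements should behave the same way over ODBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class SnowflakeContextCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "rich");       // hypothetical credentials
        props.put("password", "secret");
        try (Connection con = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com", props);
             Statement stmt = con.createStatement()) {
            // quote the mixed-case database name, exactly one pair of quotes
            stmt.execute("use database \"testDATABASE\"");
            // no quotes needed: the schema name is not mixed case
            stmt.execute("use schema PUBLIC");
        }
    }
}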
I'm starting to use RepoDb, but my SQL Server 2016 database has tables whose names contain a dot, like this: User.Data.
Moving from the full .NET Entity Framework to RepoDb, I'm facing this issue. I'm using the fluent mapping and I wrote something like this:
FluentMapper
.Entity<UserData>()
.Table("[User.Data]")
.Primary(u => u.UserId);
I get the exception MissingFieldsException, which says:
There are no database fields found for table '[User.Data]'. Make sure that the target table '[User.Data]' is present in the database and/or at least a single field is available.
Just out of curiosity, I created a table UserData with the same attributes and primary key, and it worked great (changing the fluent mapper to: .Table("[UserData]")).
Am I missing something?
Thanks for helping me
Support for this is only available in RepoDb.SqlServer version 1.0.13 or above. You can use any of the approaches below. Make sure to specify the quotes if you are using the database and schema.
Via the built-in MapAttribute:
[Map("[User.Data]")]
Via the TableAttribute of the System.ComponentModel.DataAnnotations namespace:
[Table("[User.Data]")]
Via the FluentMapper, as you did:
FluentMapper
.Entity<UserData>()
.Table("[User.Data]");
I am trying to use unit tests along with an H2 database. My application uses an MSSQL database. Below are the 2 tables that I am using in my application:
SchemaA.dbo.Table1
SchemaB.dbo.table2
@Entity
@Table(name = "SchemaB..table")
public class A {
    private Long id;
    ............
}
I am trying to write a unit test for the persistence of the above class, but the H2 database does not recognise this table name syntax:
SchemaB..table
Note: the 2 dots between the schema name and the table name.
Any suggestion would be greatly appreciated.
You may want to use the schema attribute of the @Table JPA annotation.
For example:
@Entity(name = "Foo")
@Table(name = "TABLE_FOO", schema = "bar")
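Applied to the class from the question above, a minimal sketch might look like this, assuming the real table is SchemaB.dbo.table2 so the double-dot shorthand can be replaced by an explicit schema:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// hypothetical mapping: name the schema explicitly instead of relying
// on MSSQL's double-dot shorthand, which H2 does not understand
@Entity
@Table(name = "table2", schema = "SchemaB")
public class A {

    @Id
    private Long id;

    // getters and setters omitted for brevity
}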
If you have a single data source that connects to your H2 with user A, then in order to access schema 'bar' you may want to tell H2 to automatically create the schema on connect.
jdbc:h2:mem:play;MODE=MySQL;INIT=RUNSCRIPT FROM 'test/init.sql'
The final part of the JDBC URL, test/init.sql, points to a SQL file with the following content:
CREATE SCHEMA IF NOT EXISTS bar
H2 will execute the SQL and create the schema on connect.
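Alternatively, if you'd rather not keep a separate script file, the schema can be created inline in the INIT clause. A sketch; the table and credentials are made up:

import java.sql.Connection;
import java.sql.DriverManager;

public class H2SchemaOnConnect {
    public static void main(String[] args) throws Exception {
        // the INIT clause runs on every connect, so schema "bar" exists
        // before JPA validates any @Table(schema = "bar") mapping
        String url = "jdbc:h2:mem:play;MODE=MySQL;"
                + "INIT=CREATE SCHEMA IF NOT EXISTS bar";
        try (Connection con = DriverManager.getConnection(url, "sa", "")) {
            con.createStatement().execute(
                    "create table bar.TABLE_FOO (id bigint primary key)");
        }
    }
}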
I've created a demo project on GitHub.
The project has an init.sql file that creates 2 schemas, foo and bar.
There are 2 model classes, foo.A and bar.B, which use @Table(schema = "foo", name = "A") to specify the schema accordingly; see app/models.
The test case uses the Play framework, so the built-in evolution tool can be applied every time the test cases are executed. But it should be fine to use a setUp method to apply your own SQL script before executing the test cases. Please see the test folder for the sample test case. (It's actually ScalaTest, but it follows basically the same idea as JUnit.)
I'd like to use the SQL OUTPUT clause to keep a history of the records in my database while using Entity Framework. To achieve this, EF would need to generate something like the following example for a DELETE statement:
DELETE FROM table1
OUTPUT deleted.*, 'user name', GETDATE() INTO table1_hist
WHERE field = 1234;
The table table1_hist has the same columns as table1, with the addition of two columns to store the name of the user who performed the action and when it happened. However, EF doesn't seem to support this SQL Server clause, so I'm lost on how to implement it.
I looked at EF's source code, and the DELETE command is created inside an internal static method (GenerateDeleteSql in the System.Data.Entity.SqlServer.SqlGen.DmlSqlGenerator class), so I can't extend the class to add the behavior I want. It looks like I'd have to rewrite the SQL Server provider based on the existing code, but that is something I'd like to avoid...
So, my question is whether there's another option to do this (an extension, for example) or whether I have to rewrite this provider?
Thank you.
Have you considered one of the following:
Using stored procedures to encapsulate your data logic
A DELETE trigger to capture the data
Change Data Capture (Enterprise edition only)
Not actually deleting the data, but merely setting a flag in the row to mark it as deleted
I'm using DbUnit to populate the database so that its content is known during testing.
The schema I'm working on is in an Oracle 11g instance that also hosts other schemas. Some of those schemas define a table with the same name, which has been associated with a public synonym and on which SELECT rights have been granted.
When I run the XML that defines how the database must be populated, DbUnit throws an AmbiguousTableNameException on that table, even though the XML file doesn't reference the table defined in the other schemas.
I found that there are 3 solutions to this behavior:
Use database connection credentials that have access to only one database schema.
Specify a schema name to the DatabaseConnection or DatabaseDataSourceConnection constructor.
Enable the qualified table name support (see the How-to documentation).
In my case, I can only apply solution 1, but even when I adopt it, I get the same exception.
The table that gives me problems is defined in 3 schemas, and I don't have the opportunity to act on them in any way.
Please, could someone help me?
I found the solution: I specified the schema in the names of the tables and set the property http://www.dbunit.org/features/qualifiedTableNames (corresponding to org.dbunit.database.DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES) to true.
This way, my XML code to populate the tables looks like:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
<SCHEMA.TABLE ID_FIELD="1" />
</dataset>
where SCHEMA is the schema name and TABLE is the table name.
To set the property I used the following code:
DatabaseConfig dBConfig = dBConn.getConfig(); // dBConn is an IDatabaseConnection
dBConfig.setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);
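Putting it together, a minimal import sketch; the JDBC connection details and dataset path are hypothetical, and FlatXmlDataSetBuilder requires DbUnit 2.4.7 or newer:

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class QualifiedImportExample {
    public static void main(String[] args) throws Exception {
        // hypothetical Oracle connection details
        Connection jdbc = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "scott", "tiger");
        IDatabaseConnection dBConn = new DatabaseConnection(jdbc);
        DatabaseConfig dBConfig = dBConn.getConfig();
        dBConfig.setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);
        // elements such as <SCHEMA.TABLE .../> now resolve unambiguously
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dataset.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dBConn, dataSet);
    }
}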
In my case, I had granted the dba role to the user, and thus DbUnit threw AmbiguousTableNameException.
After I revoked the dba role from the user, the problem was solved:
SQL> revoke dba from username;
I had the same AmbiguousTableNameException while executing DbUnit tests against an Oracle DB. It had been working fine and started throwing the error one day.
Root cause: a stored procedure call had been modified by mistake to lower case. When it was changed back to upper case, it started working.
I could also solve this by setting the schema name on the IDatabaseTester, like iDatabaseTester.setSchema("SCHEMANAMEINCAPS").
Thanks
Smitha
I was using Spring JDBC along with MySQL Connector/J (v8.0.17). Following the 2 steps explained in this answer alone did not help.
First I had to set the schema on the Spring datasource.
Then I also had to set the property "databaseTerm" to "schema"; by default it is set to "catalog", as explained here.
We must set this property because (with Spring's implementation of javax.sql.DataSource) if it's not set (i.e. it defaults to "catalog"), the connection returned by dataSource.getConnection() will not have the schema set on it, even if we set it on the dataSource.
@Bean
public DriverManagerDataSource cloudmcDataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName("<driver>");
    dataSource.setUrl("<url>");
    dataSource.setUsername("<uname>");
    dataSource.setPassword("<password>");
    dataSource.setSchema("<schema_name>");

    Properties props = new Properties();
    // the following key-value pair is a constant; it must be set as-is
    props.setProperty("databaseTerm", "schema");
    dataSource.setConnectionProperties(props);
    return dataSource;
}
Don't forget to make the changes explained in the answer here.
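For completeness, a hypothetical usage sketch (using org.springframework.jdbc.core.JdbcTemplate; the table name below is made up): with the schema set on the DataSource and databaseTerm=schema, unqualified table names resolve against <schema_name>.

JdbcTemplate jdbc = new JdbcTemplate(cloudmcDataSource());
// resolves to <schema_name>.some_table thanks to the schema on the DataSource
Integer rows = jdbc.queryForObject("select count(*) from some_table", Integer.class);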