I've created a Spring Batch job to query an Azure SQL Server database and write the data into a CSV file. I do not have create permissions for the database. I get the error "Invalid object name 'BATCH_JOB_INSTANCE'" when running the batch. I don't want the Spring Batch metadata tables to be created in the main database; it would also be fine to have them in another local or in-memory database like H2.
I've also already added spring-batch-initialize-schema=never, which is what most answers to similar questions here suggested, but that didn't help.
Edit:
I resolved the "Invalid object name" error by preventing the metadata tables from being created in the main database: I extended the DefaultBatchConfigurer class and overrode the setDataSource method, so the metadata is kept in the in-memory map-based repository instead. Now I want to try two options:
How to have the metadata tables created in a local or in-memory database like H2?
Or, if the metadata tables are already created in the main database in a different schema than the main table I'm fetching from, how do I point my job to those metadata tables in the other schema so that the job and step details are stored there?
@Configuration
public class SpringBatchConfig extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // intentionally left empty: with no DataSource set, Spring Batch
        // falls back to the in-memory map-based job repository
    }
    ...
My application.properties file looks like this:
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring-batch-initialize-schema=never
spring.batch.job.enabled=false
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
I've created a demo with two data sources. The batch metadata is stored in an H2 database and the job data source is Azure SQL.
We need to define a DataSourceConfig class and put the @Primary annotation on the H2 DataSource bean:
@Configuration
public class DataSourceConfig {

    @Bean(name = "mssqlDataSource")
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource appDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "h2DataSource")
    @Primary
    // @ConfigurationProperties(prefix = "spring.datasource.h2")
    public DataSource h2DataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:h2:mem:thing:H2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                .driverClassName("org.h2.Driver")
                .username("sa")
                .password("")
                .build();
    }
}
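Because the H2 DataSource is marked @Primary, Spring Boot's auto-configured batch JobRepository (and its schema initialization) uses H2, so the BATCH_* metadata tables live in memory while the job itself still reads from Azure SQL through the mssqlDataSource qualifier. If you prefer to externalize the H2 settings rather than hard-code them in the builder (i.e., re-enable the commented-out @ConfigurationProperties(prefix = "spring.datasource.h2") line and drop the explicit builder calls), the bound properties could look roughly like this sketch; the jdbc-url key assumes the HikariCP pool that Spring Boot 2 uses by default:

spring.datasource.h2.jdbc-url=jdbc:h2:mem:thing:H2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.h2.driver-class-name=org.h2.Driver
spring.datasource.h2.username=sa
spring.datasource.h2.password=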
In the ItemReaderDbDemo class, we use @Autowired @Qualifier("mssqlDataSource") to specify the dataSource in the Spring Batch task:
@Configuration
public class ItemReaderDbDemo {

    // generates the Job object
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    // generates the Step objects that execute the tasks
    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    @Qualifier("mssqlDataSource")
    private DataSource dataSource;

    @Autowired
    @Qualifier("dbJdbcWriter")
    private ItemWriter<? super Todo> dbJdbcWriter;

    @Bean
    public Job itemReaderDbDemoJob() {
        return jobBuilderFactory.get("itemReaderDbDemoJob").start(itemReaderDbStep()).build();
    }

    @Bean
    public Step itemReaderDbStep() {
        return stepBuilderFactory.get("itemReaderDbStep")
                .<Todo, Todo>chunk(2)
                .reader(dbJdbcReader())
                .writer(dbJdbcWriter)
                .build();
    }

    @Bean
    @StepScope
    public JdbcPagingItemReader<Todo> dbJdbcReader() {
        JdbcPagingItemReader<Todo> reader = new JdbcPagingItemReader<>();
        reader.setDataSource(dataSource);
        reader.setFetchSize(2);
        reader.setRowMapper(new RowMapper<Todo>() {
            @Override
            public Todo mapRow(ResultSet rs, int rowNum) throws SQLException {
                Todo todo = new Todo();
                todo.setId(rs.getLong(1));
                todo.setDescription(rs.getString(2));
                todo.setDetails(rs.getString(3));
                return todo;
            }
        });
        SqlServerPagingQueryProvider provider = new SqlServerPagingQueryProvider();
        provider.setSelectClause("id,description,details");
        provider.setFromClause("from dbo.todo");
        // sort by id, descending
        Map<String, Order> sort = new HashMap<>(1);
        sort.put("id", Order.DESCENDING);
        provider.setSortKeys(sort);
        reader.setQueryProvider(provider);
        return reader;
    }
}
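The dbJdbcWriter bean injected via @Qualifier("dbJdbcWriter") is not shown in the answer. Judging by its name it is presumably a JdbcBatchItemWriter; a minimal sketch might look like the following, where the target table dbo.todo_archive and its columns are assumptions made purely for illustration:

@Bean("dbJdbcWriter")
public ItemWriter<Todo> dbJdbcWriter(@Qualifier("mssqlDataSource") DataSource dataSource) {
    JdbcBatchItemWriter<Todo> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(dataSource);
    // hypothetical target table; replace with your own INSERT statement
    writer.setSql("INSERT INTO dbo.todo_archive (id, description, details) "
            + "VALUES (:id, :description, :details)");
    // maps the Todo getters (id, description, details) onto the named parameters above
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    return writer;
}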
Here is my application.properties:
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.datasource.jdbcUrl=jdbc:sqlserver://josephserver2.database.windows.net:1433;database=<Your-Database-Name>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
spring.datasource.username=<Your-UserName>
spring.datasource.password=<Your-Password>
spring.datasource.initialization-mode=always
It returns the expected result from my Azure SQL database. By the way, my Azure SQL username does not have CREATE permissions on the database.
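Note that the connection URL key is spring.datasource.jdbcUrl rather than spring.datasource.url: because appDataSource() is annotated with @ConfigurationProperties(prefix = "spring.datasource") and built with DataSourceBuilder, these properties bind directly onto the resulting pooled DataSource (HikariCP by default in Spring Boot 2), and Hikari's URL property is named jdbcUrl.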
How to have the metadata tables created in a local or in-memory database like H2?
You can use spring.batch.initialize-schema=embedded for that.
Or, if the metadata tables are already created in the main database in a different schema than the main table I'm fetching from, how do I point my job to those metadata tables in the other schema so that the job and step details are stored there?
Spring Batch works against a DataSource, not a particular schema. If the metadata tables are in a different schema, you need to create a second DataSource pointing to that schema and set it on the job repository.
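As a rough sketch of that wiring (not the answer's actual code): a custom BatchConfigurer can build the JobRepository from a second DataSource and use a schema-qualified table prefix. The bean name metadataDataSource and the schema name OTHER_SCHEMA below are assumptions you would replace with your own.

@Configuration
public class MetadataSchemaBatchConfig extends DefaultBatchConfigurer {

    private final DataSource metadataDataSource;

    public MetadataSchemaBatchConfig(@Qualifier("metadataDataSource") DataSource metadataDataSource) {
        // let the parent build its transaction manager against the metadata DataSource
        super(metadataDataSource);
        this.metadataDataSource = metadataDataSource;
    }

    @Override
    protected JobRepository createJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(metadataDataSource);
        factory.setTransactionManager(getTransactionManager());
        // schema-qualified prefix: the repository then uses OTHER_SCHEMA.BATCH_JOB_INSTANCE, etc.
        factory.setTablePrefix("OTHER_SCHEMA.BATCH_");
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}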
I know this post is a little bit old, but I'd like to give an update.
For newer versions of Spring Boot, spring.batch.initialize-schema is deprecated.
I'm using Spring Boot 2.7.1 and the newer property is spring.batch.jdbc.initialize-schema.
In my case, the error message occurred because the user did not have the CREATE TABLE permission needed to create the corresponding Spring Batch tables.
Adding the permission fixed the issue.
Related
I open one database at startup, then need to open another database based on the user selecting two values. The database selection has to happen at run-time and will change every time.
I have tried to access the connection string using the ConnectionString class and have tried other options like a singleton, which I do not understand. I am running this on a local Windows 10 system running SQL Server Express. I am coding in ASP.NET Core 2.1.
> ASP.Net Core v2.1
Building a multi-tenant, multi-year application.
Every client will have one SQL database per year.
I plan to have a table with the following structure:
COMPANY_CODE VARCHAR(3),
COMPANY_YEAR INT,
COMPANY_DBNAME VARCHAR(5)
Sample Data
COMPANY_CODE: AAD
COMPANY_YEAR: 19
COMPANY_DB: AAD19
COMPANY_CODE: AAD
COMPANY_YEAR: 18
COMPANY_DB: AAD18
COMPANY_CODE: AAD
COMPANY_YEAR: 17
COMPANY_DB: AAD17
So, every company will have multiple rows, one for each financial year.
The COMPANY_DB column will store the DB name to open for that session.
Once the user is authenticated, I want to change the connection string to point to the database in the COMPANY_DB column of the selected row and then let the logged in user perform transactions.
I am unable to figure out how to change the connection string that is embedded in startup.cs.
Any tips on how to achieve this will be most appreciated.
I figured out that you are using one DbContext class for each database (see the docs for more information).
Remove AddDbContext from Startup, remove OnConfiguring from the DbContext, and pass the options to the constructor.
public class BloggingContext : DbContext
{
public BloggingContext(DbContextOptions<BloggingContext> options)
: base(options)
{ }
public DbSet<Blog> Blogs { get; set; }
}
Then, write a service that provides the DbContext:
public interface IBlogContextProvider
{
    BloggingContext GetBlogContext(string connectionString);
}

public class BlogContextProvider : IBlogContextProvider
{
    public BloggingContext GetBlogContext(string connectionString)
    {
        var optionsBuilder = new DbContextOptionsBuilder<BloggingContext>();
        optionsBuilder.UseSqlServer(connectionString);
        return new BloggingContext(optionsBuilder.Options);
    }
}
Add the service in your Startup.cs:
services.AddScoped<IBlogContextProvider, BlogContextProvider>();
Now you can use DI:
public class HomeController : Controller
{
private IBlogContextProvider _provider;
public HomeController(IBlogContextProvider provider)
{
_provider = provider;
}
public ActionResult Index()
{
using (var context = _provider.GetBlogContext(<your connection string>))
{
//your code here
}
return View();
}
}
EDIT: Of course, you can write ContextProvider as generic.
Would I have to make many modifications to move my Laravel project, which uses a single database with many query builder and Eloquent calls, to a project that supports more than one database?
I understand that once a new database connection is added it is necessary to use:
connection('mysql2')
when querying that database. Do we have to change the whole project to use this call, specifying the connection in each place?
You can add a $connection property to your Eloquent models to specify the database connection there. This way you don't need to update your queries.
protected $connection = 'connection-name';
Migration with multiple connections
public function up()
{
Schema::connection('mysql-2')->create('user_details', function (Blueprint $table) {
//........
});
}
public function down()
{
Schema::connection('mysql-2')->dropIfExists('user_details');
}
Handle Relationship with multiple database connections
UserDetail.php //mysql-2 (connection-2)
class UserDetail extends Model
{
protected $connection = 'mysql-2';
public function user()
{
return $this->setConnection('mysql')
->belongsTo(User::class);
}
}
User.php //mysql (connection-1) //default connection
class User extends Model
{
//with default connection
public function detail()
{
return $this->setConnection('mysql-2')
->hasOne(UserDetail::class);
}
}
You don't need to specify a connection in a controller for retrieving/deleting/inserting data.
I understand that you are basically asking how to change the database.
For the whole project: you can edit the MySQL connection details in your .env file.
You can also use two databases in one project; you can learn how to do that from this question, which has already been answered: How to use multiple databases in Laravel
I am sorry if I didn't understand your question.
Let me know if it helps you.
I have one database with 3 schemas (OPS, TEST, TRAIN). All of these schemas have a completely identical table structure. Now let's say I have an endpoint /cars that accepts a query param for the schema/environment. When the user makes a GET request to this endpoint, I need the Spring Boot backend to be able to dynamically access either the OPS, TEST, or TRAIN schema based on the query param specified in the client request.
The idea is something like this where the environment is passed as a request param to the endpoint and then is somehow used in the code to set the schema/datasource that the repository will use.
@Autowired
private CarsRepository carsRepository;

@GetMapping("/cars")
public List<Car> getCars(@RequestParam String env) {
    setSchema(env);
    return carsRepository.findAll();
}

private void setSchema(String env) {
    // Do something here to set the schema that the CarsRepository
    // will use when it runs the .findAll() method.
}
So, if a client made a GET request to the /cars endpoint with the env request param set to "OPS" then the response would be a list of all the cars in the OPS schema. If a client made the same request but with the env request param set to "TEST", then the response would be all the cars in the TEST schema.
An example of my datasource configuration is below. This one is for the OPS schema. The other schemas are done in the same fashion, but without the @Primary annotation above the beans.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
    entityManagerFactoryRef = "opsEntityManagerFactory",
    transactionManagerRef = "opsTransactionManager",
    basePackages = { "com.example.repo" }
)
public class OpsDbConfig {

    @Autowired
    private Environment env;

    @Primary
    @Bean(name = "opsDataSource")
    @ConfigurationProperties(prefix = "db-ops.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder
                .create()
                .url(env.getProperty("db-ops.datasource.url"))
                .driverClassName(env.getProperty("db-ops.database.driverClassName"))
                .username(env.getProperty("db-ops.database.username"))
                .password(env.getProperty("db-ops.database.password"))
                .build();
    }

    @Primary
    @Bean(name = "opsEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean opsEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("opsDataSource") DataSource dataSource
    ) {
        return builder
                .dataSource(dataSource)
                .packages("com.example.domain")
                .persistenceUnit("ops")
                .build();
    }

    @Primary
    @Bean(name = "opsTransactionManager")
    public PlatformTransactionManager opsTransactionManager(
            @Qualifier("opsEntityManagerFactory") EntityManagerFactory opsEntityManagerFactory
    ) {
        return new JpaTransactionManager(opsEntityManagerFactory);
    }
}
Personally, I don't feel it's right to pass the environment as a request param and toggle the repository based on the value passed.
Instead, you can deploy multiple instances of the service, each pointing to a different data source, and have a gatekeeper (router) route to the respective service.
This way, clients are exposed to one gateway service, which in turn routes to the respective service based on the input to the gatekeeper.
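If that gatekeeper were, for example, Spring Cloud Gateway, the routing could be sketched roughly like this; the route IDs, paths and service hostnames below are purely illustrative assumptions:

@Configuration
public class GatewayRoutesConfig {

    @Bean
    public RouteLocator environmentRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // requests to /ops/cars go to the instance wired to the OPS schema
                .route("ops-cars", r -> r.path("/ops/cars/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://cars-service-ops:8080"))
                // requests to /test/cars go to the instance wired to the TEST schema
                .route("test-cars", r -> r.path("/test/cars/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://cars-service-test:8080"))
                .build();
    }
}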
You typically don't want TEST/ACPT instances running on the very same machines, because it gets harder to control the extent to which load on those environments slows down the PROD environment.
You also don't want the setup you envisage, because it makes it nigh impossible to evolve the app and/or its database structure. (You're not going to switch the db schema in PROD at the very same time you're doing this in DEV, are you? Not doing that simultaneous switch is wise, but it breaks your presupposition that "all three databases have exactly the same schema".)
I used this Pluralsight video on MVC code-first migrations to keep my default MVC IdentityDb context and create another context for custom tables. Since then I get an error when connecting to the database online:
CREATE DATABASE permission denied in database 'master'.
.........
It works locally. My connection strings are correct and my context classes point to the right connection string name:
public class IdentityDb : IdentityDbContext<ApplicationUser>
{
public IdentityDb()
: base("DefaultConnection", throwIfV1Schema: false)
{
}
public static IdentityDb Create()
{
return new IdentityDb();
}
}
public class CustomDb : DbContext
{
public CustomDb() : base("DefaultConnection") { }
public DbSet<Inquiry> Inquiry { get; set; }
public DbSet<Product> Product { get; set; }
}
Connection string:
<add name="DefaultConnection" connectionString="server=***.db.1and1.com; initial catalog=***;uid=***;pwd=***" providerName="System.Data.SqlClient" />
I've read that the connection string name should be the same as the context class name, but since I have two contexts I need a common name (DefaultConnection), which I've specified in the contexts.
It works when connecting to my local database but not online, so I wondered whether this relates to the migration history table being up to date online and EF 6 trying to update the database, but the entries in the migrations table match.
Any help appreciated.
* UPDATE *
I tried resetting the EF migrations with this guide, thinking that if the migrations were out of sync with the online DB it could result in EF trying to re-create the database and cause this issue. However, the problem still persists!
I have now added these lines to my context constructors respectively:
Database.SetInitializer<IdentityDb>(null);
Database.SetInitializer<CustomDb>(null);
This has stopped the error but kind of defeated the purpose of EF because I now have to remove it when creating migrations and manually script the changes to the online DB, then put it back in for the site to work online.
I am making a project using JSF and I know how to get data from my view. I also know how to get data with the JDBC connector, and how to put data into the view from objects, but my question is:
How do I put data directly from my database, for example a list of persons, into JSF, for example with the tag <h:outputText value="#{}"/>?
I have found some examples with instantiated objects, but I did not find a real example with data from a DB.
JSF is just an MVC framework to develop web applications in Java. JSF doesn't associate with any data source at all. The only data JSF will use is retrieved from:
Data already stored as attributes in the proper object: HttpServletRequest, HttpSession or ServletContext.
The request/view/session/application context in the form of fields in the managed beans, recognized as classes decorated with @ManagedBean, or @Named if using CDI. The data of these fields will be stored as attributes in the objects mentioned in the section above, depending on the scope of the managed bean.
By knowing this, then the only thing you should worry about is to fill the fields in your managed beans. You can fill them with incoming data from database, from a web service or whatever data source you have in mind.
For example, if you want/need to populate your data to pre-process a request, you can do the following:
@ManagedBean
@ViewScoped
public class SomeBean {

    List<Entity> entityList;

    @PostConstruct
    public void init() {
        SomeService someService = new SomeService();
        entityList = someService.findEntityList();
    }

    //getters and setters for the list...
}
//as you can see, this class is just pure Java
//you may use other frameworks if you want/need
public class SomeService {

    public List<Entity> findEntityList() {
        String sql = "SELECT field1, field2... FROM table";
        List<Entity> entityList = new ArrayList<>();
        try (Connection con = ...; //retrieve your connection somehow
             PreparedStatement pstmt = con.prepareStatement(sql)) {
            ResultSet rs = pstmt.executeQuery();
            while (rs.next()) {
                Entity entity = new Entity();
                entity.setField1(rs.getString("field1"));
                entity.setField2(rs.getString("field2"));
                //...
                entityList.add(entity);
            }
        } catch (Exception e) {
            //handle exception ...
            e.printStackTrace();
        }
        return entityList;
    }
}
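To render the list in the page (the part the question actually asks about), a minimal Facelets sketch could look like the following. It assumes the bean above is registered under its default name someBean and that Entity exposes getField1() and getField2(); adjust the names to your own bean and properties.

<!-- iterates over someBean.entityList, which was filled in @PostConstruct -->
<h:dataTable value="#{someBean.entityList}" var="entity">
    <h:column>
        <h:outputText value="#{entity.field1}"/>
    </h:column>
    <h:column>
        <h:outputText value="#{entity.field2}"/>
    </h:column>
</h:dataTable>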