ABP console background jobs processor single db connection - abp

So I have an ABP console application which processes background jobs using Hangfire + Redis. The jobs access a MySQL database to select / insert records. Inside a job a single record is selected, and after some processing (which can take anywhere from ~1 sec to ~15 mins) it is updated in the database. One job is scheduled more than 300K times and growing. The problem occurs when the job is executed ~7000 times in one go: it causes MySQL connection problems. I want to use a single connection and always keep it open. Any other suggestions are more than welcome!
public class MyJob : AsyncBackgroundJob<MyJobArgs>, ITransientDependency
{
    private readonly IRepository<MyTable, Guid> _repository;

    public MyJob(IRepository<MyTable, Guid> repository)
    {
        _repository = repository;
    }

    public override async Task ExecuteAsync(MyJobArgs args)
    {
        var data = await _repository.GetAsync(args.Id);
        // processing // --- ~1 sec - ~15 mins
        await _repository.UpdateAsync(data);
    }
}
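Holding one connection open for 300K+ jobs is fragile (it serializes every job and breaks on any network hiccup); a more common fix is to cap how many jobs run concurrently and keep the ADO.NET connection pool below MySQL's max_connections. A rough sketch, assuming the plain Hangfire server registration (Hangfire.AspNetCore) rather than ABP's integration, with purely illustrative numbers:
// Limit concurrent Hangfire workers so at most ~20 jobs (and therefore roughly
// 20 pooled MySQL connections) are active at the same time; tune to your server.
services.AddHangfireServer(options =>
{
    options.WorkerCount = 20;
});
// Also keep the pool below MySQL's max_connections via the connection string, e.g.
// "Server=...;Database=...;Uid=...;Pwd=...;Maximum Pool Size=50;"
// (the exact key name depends on your MySQL ADO.NET provider)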

Related

Spring Batch "Invalid object name BATCH_JOB_INSTANCE"

I've created a Spring Batch job to query an Azure SQL Server database and write the data into a CSV file. I do not have create permissions for the database. I get the error Invalid Object name BATCH_JOB_INSTANCE when running the batch. I don't want the Spring Batch meta-data tables to be created in the main database; or it would be helpful if I could have them in another local or in-memory DB like H2.
I've also added spring-batch-initialize-schema=never already, which was the suggestion in most answers to similar questions on here, but that didn't help.
Edit:
I resolved the Invalid Object name error by preventing the meta-data tables from being created in the main database: I extended the DefaultBatchConfigurer class and overrode the setDataSource method, so the tables are created in the in-memory map-based repository. Now I want to try two options:
How to have the meta-data tables created in a local DB or an in-memory DB like H2.
Or, if I already have the meta-data tables created in the main database, in a different schema than the main table I'm fetching from: how to point my job to those meta-data tables in the other schema, so the job and step details are stored there.
@Configuration
public class SpringBatchConfig extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // intentionally empty: no DataSource is set, so the map-based repository is used
    }
    ...
My application.properties file looks like this:
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring-batch-initialize-schema=never
spring.batch.job.enabled=false
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
I've created a demo with two datasources: batch metadata is stored in an H2 DB and the job datasource is Azure SQL.
We need to define a DataSourceConfig class and use the @Primary annotation on the H2 DataSource bean:
@Configuration
public class DataSourceConfig {

    @Bean(name = "mssqlDataSource")
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource appDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "h2DataSource")
    @Primary
    // @ConfigurationProperties(prefix = "spring.datasource.h2")
    public DataSource h2DataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:h2:mem:thing:H2;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                .driverClassName("org.h2.Driver")
                .username("sa")
                .password("")
                .build();
    }
}
In the ItemReaderDbDemo class, we use @Autowired @Qualifier("mssqlDataSource") to specify the DataSource used in the Spring Batch task:
@Configuration
public class ItemReaderDbDemo {

    // generates Job objects
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    // generates Step objects (a Step executes the actual tasks)
    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    @Qualifier("mssqlDataSource")
    private DataSource dataSource;

    @Autowired
    @Qualifier("dbJdbcWriter")
    private ItemWriter<? super Todo> dbJdbcWriter;

    @Bean
    public Job itemReaderDbDemoJob() {
        return jobBuilderFactory.get("itemReaderDbDemoJob").start(itemReaderDbStep()).build();
    }

    @Bean
    public Step itemReaderDbStep() {
        return stepBuilderFactory.get("itemReaderDbStep")
                .<Todo, Todo>chunk(2)
                .reader(dbJdbcReader())
                .writer(dbJdbcWriter)
                .build();
    }

    @Bean
    @StepScope
    public JdbcPagingItemReader<Todo> dbJdbcReader() {
        JdbcPagingItemReader<Todo> reader = new JdbcPagingItemReader<Todo>();
        reader.setDataSource(dataSource);
        reader.setFetchSize(2);
        reader.setRowMapper(new RowMapper<Todo>() {
            @Override
            public Todo mapRow(ResultSet rs, int rowNum) throws SQLException {
                Todo todo = new Todo();
                todo.setId(rs.getLong(1));
                todo.setDescription(rs.getString(2));
                todo.setDetails(rs.getString(3));
                return todo;
            }
        });
        SqlServerPagingQueryProvider provider = new SqlServerPagingQueryProvider();
        provider.setSelectClause("id,description,details");
        provider.setFromClause("from dbo.todo");
        // sort keys for paging
        Map<String, Order> sort = new HashMap<>(1);
        sort.put("id", Order.DESCENDING);
        provider.setSortKeys(sort);
        reader.setQueryProvider(provider);
        return reader;
    }
}
Here is my application.properties:
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.datasource.jdbcUrl=jdbc:sqlserver://josephserver2.database.windows.net:1433;database=<Your-Database-Name>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
spring.datasource.username=<Your-UserName>
spring.datasource.password=<Your-Password>
spring.datasource.initialization-mode=always
It returns the expected result from my Azure SQL database. By the way, my Azure SQL username does not have create permissions for the database.
How to have the meta-data tables created in a local DB or an in-memory DB like H2.
You can use spring.batch.initialize-schema=embedded for that.
Or, if I already have the meta-data tables created in the main database, in a different schema than the main table I'm fetching from: how to point my job to those meta-data tables in the other schema, so the job and step details are stored there.
Spring Batch works against a DataSource, not a particular schema. If the meta-data tables are in a different schema, you need to create a second DataSource pointing to that schema and set it on the job repository.
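For that second option, a rough sketch of wiring the job repository to a separate meta-data schema; it assumes a second DataSource bean (here called metadataDataSource, a hypothetical name) pointing at that schema, and Spring Batch 4.x with the same DefaultBatchConfigurer approach used above:
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetadataBatchConfig extends DefaultBatchConfigurer {

    @Autowired
    @Qualifier("metadataDataSource") // hypothetical bean pointing at the schema holding the BATCH_* tables
    private DataSource metadataDataSource;

    @Override
    protected JobRepository createJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(metadataDataSource);
        factory.setTransactionManager(getTransactionManager());
        // Schema-qualified prefix so the repository reads/writes the BATCH_* tables in that schema.
        factory.setTablePrefix("BATCH_META.BATCH_"); // "BATCH_META" is a placeholder schema name
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}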
I know this post is a little bit old, but I'd like to give an update.
For newer versions of Spring Boot, spring.batch.initialize-schema is deprecated.
I'm using Spring Boot 2.7.1 and the newer property is spring.batch.jdbc.initialize-schema.
In my case, the error occurred because the user did not have the CREATE TABLE permission needed to create the corresponding Spring Batch tables.
Adding the permission fixed the issue.

Change SQL Server Connection String Dynamically inside an ASP.Net Core application

I open one database at the start, then need to open another database based on the user selecting two values. The database selection has to happen at run-time and will change every time.
I have tried to access the connection string using the ConnectionString class, and have tried other options like a singleton, which I do not understand. I am running this on a local Windows 10 system with SQL Server Express, coding in ASP.NET Core 2.1.
> ASP.Net Core v2.1
Building a multi-tenant, multi-year application.
Every client will have one SQL database per year.
I plan to have a table with the following structure:
COMPANY_CODE VARCHAR(3),
COMPANY_YEAR INT,
COMPANY_DBNAME VARCHAR(5)
Sample data:
COMPANY_CODE: AAD, COMPANY_YEAR: 19, COMPANY_DB: AAD19
COMPANY_CODE: AAD, COMPANY_YEAR: 18, COMPANY_DB: AAD18
COMPANY_CODE: AAD, COMPANY_YEAR: 17, COMPANY_DB: AAD17
So every company will have multiple rows, one for each financial year.
The COMPANY_DB column will store the DB name to open for that session.
Once the user is authenticated, I want to change the connection string to point to the database in the COMPANY_DB column of the selected row and then let the logged in user perform transactions.
I am unable to figure out how to change the connection string that is embedded in startup.cs.
Any tips on how to achieve this will be most appreciated.
It appears you are using one DbContext class for each database; see the docs for more information.
Remove AddDbContext from Startup, remove OnConfiguring from the DbContext, and pass the options to the constructor:
public class BloggingContext : DbContext
{
    public BloggingContext(DbContextOptions<BloggingContext> options)
        : base(options)
    { }

    public DbSet<Blog> Blogs { get; set; }
}
Then write a service that provides the DbContext:
public interface IBlogContextProvider
{
    BloggingContext GetBlogContext(string connectionString);
}

public class BlogContextProvider : IBlogContextProvider
{
    public BloggingContext GetBlogContext(string connectionString)
    {
        var optionsBuilder = new DbContextOptionsBuilder<BloggingContext>();
        optionsBuilder.UseSqlServer(connectionString);
        return new BloggingContext(optionsBuilder.Options);
    }
}
Add service in your Startup.cs:
services.AddScoped<IBlogContextProvider, BlogContextProvider>();
Now you can use DI:
public class HomeController : Controller
{
    private IBlogContextProvider _provider;

    public HomeController(IBlogContextProvider provider)
    {
        _provider = provider;
    }

    public ActionResult Index()
    {
        using (var context = _provider.GetBlogContext(<your connection string>))
        {
            // your code here
        }
        return View();
    }
}
EDIT: Of course, you can write ContextProvider as generic.
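For instance, a generic version could look roughly like this (a sketch; it assumes each context exposes a constructor taking DbContextOptions<TContext>, like the BloggingContext above):
public interface IContextProvider<TContext> where TContext : DbContext
{
    TContext GetContext(string connectionString);
}

public class ContextProvider<TContext> : IContextProvider<TContext> where TContext : DbContext
{
    public TContext GetContext(string connectionString)
    {
        var optionsBuilder = new DbContextOptionsBuilder<TContext>();
        optionsBuilder.UseSqlServer(connectionString);
        // Relies on the context having a public constructor that accepts DbContextOptions<TContext>.
        return (TContext)Activator.CreateInstance(typeof(TContext), optionsBuilder.Options);
    }
}

// Registration in Startup.cs, e.g.:
// services.AddScoped<IContextProvider<BloggingContext>, ContextProvider<BloggingContext>>();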

How to profile Entity Framework activity against SQL Server?

It's easy to use SQL Server Profiler to trace stored procedure activity. But how do I trace the SQL queries issued by LINQ via Entity Framework? I need to identify the queries (LINQ code) that consume a lot of time and are called most frequently, and are therefore the first candidates for optimization.
Add this key to your connection string:
Application Name=EntityFramework
And filter on this value in Profiler.
Adding to @ErikEJ's answer: if you are using .NET Core, then you are using EF Core, and there is no Database.Log property. You should use the OnConfiguring override of your DbContext class and then:
optionsBuilder.LogTo(Console.WriteLine);
Sample:
public class AppDbContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.LogTo(Console.WriteLine);
    }
}
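If the console output is too verbose, LogTo can be limited to just the generated SQL commands; a small sketch, assuming EF Core 5 or later:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // Requires: using Microsoft.Extensions.Logging;
    // Logs only the Database.Command category (the actual SQL) at Information level.
    optionsBuilder.LogTo(
        Console.WriteLine,
        new[] { DbLoggerCategory.Database.Command.Name },
        LogLevel.Information);
}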
I've found the DbContext.Database.Log property useful.
MSDN article Logging and Intercepting Database Operations
The DbContext.Database.Log property can be set to a delegate for any method that takes a string. Most commonly it is used with any TextWriter by setting it to the “Write” method of that TextWriter. All SQL generated by the current context will be logged to that writer. For example, the following code will log SQL to the console:
using (var context = new BlogContext())
{
    context.Database.Log = Console.Write;
    // Your code here...
}
What gets logged?
When the Log property is set, all of the following will be logged:
The approximate amount of time it took to execute the command. Note that this is the time from sending the command to getting the result object back. It does not include time to read the results.
SQL for all different kinds of commands. For example:
Queries, including normal LINQ queries, eSQL queries, and raw queries from methods such as SqlQuery
Inserts, updates, and deletes generated as part of SaveChanges
Relationship loading queries such as those generated by lazy loading
Parameters
Whether or not the command is being executed asynchronously
A timestamp indicating when the command started executing
Whether or not the command completed successfully, failed by throwing an exception, or, for async, was canceled
Some indication of the result value

How to automate or create a scheduled Job for the below query in SQL Server

I have the following query to check whether any data is stored in the SQL cache:
select er.*, st.*
from sys.dm_exec_requests er
cross apply sys.dm_exec_sql_text(er.sql_handle) st
where st.text not like '%C7DB%' -- filter self
GO
From the above query I need to pick the plan_handle (if any is present) and run the second query to clear the SQL cache:
DBCC FREEPROCCACHE (0x060007002D2BE10840E13F38030000000000000000000000);
I need to run this manually every 30 mins.
Can you please suggest how to automate this process?
In Java, you can automate this by using the Quartz scheduler.
You write one simple Java program, deploy it on the server, and it runs automatically at the configured interval, executing your SQL statements and completing the task.
Follow this link and you should have the task done within 1-2 hours.
Link - "http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/"
Simple Java program:
i. Write a HelloSchedule class which holds the configuration for the repetition interval, e.g. every 30 minutes. (It needs the Quartz jar file, which you can download from the Quartz website.)
import java.util.Date;
import org.quartz.Scheduler;
import org.quartz.SchedulerFactory;
import org.quartz.SimpleTrigger;
import org.quartz.impl.JobDetailImpl;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.triggers.SimpleTriggerImpl;

public class HelloSchedule {

    public HelloSchedule() throws Exception {
        SchedulerFactory sf = new StdSchedulerFactory();
        Scheduler sched = sf.getScheduler();
        sched.start();
        JobDetailImpl jd = new JobDetailImpl("myjob", Scheduler.DEFAULT_GROUP, HelloJob.class);
        // 60L * 1000L is the repeat interval in milliseconds (every 1 minute);
        // use 30L * 60L * 1000L to run every half hour.
        SimpleTriggerImpl st = new SimpleTriggerImpl("mytrigger", Scheduler.DEFAULT_GROUP, new Date(),
                null, SimpleTrigger.REPEAT_INDEFINITELY, 60L * 1000L);
        sched.scheduleJob(jd, st);
    }

    public static void main(String[] args) {
        try {
            System.out.println("Before Cron execution completion...!!!");
            new HelloSchedule();
            System.out.println("After Cron execution completion...!!!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
ii. Write a HelloJob class.
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class HelloJob implements Job {

    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Put the repeated work here: connect to your database
        // and run your Sql1 and Sql2 statements.
    }
}
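As a rough sketch of what the execute() body could look like: it collects the plan handles with the first query and then clears each one. It assumes the Microsoft JDBC driver is on the classpath, and the connection URL and credentials are placeholders to be replaced:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class HelloJob implements Job {

    // Placeholder connection details; replace with your server, database and credentials.
    private static final String URL =
            "jdbc:sqlserver://<your-server>:1433;databaseName=<your-db>";

    public void execute(JobExecutionContext context) throws JobExecutionException {
        String selectSql = "select er.plan_handle "
                + "from sys.dm_exec_requests er "
                + "cross apply sys.dm_exec_sql_text(er.sql_handle) st "
                + "where st.text not like '%C7DB%'"; // filter self, as in the question
        try (Connection con = DriverManager.getConnection(URL, "<user>", "<password>")) {
            // Sql1: collect the plan handles first.
            List<byte[]> planHandles = new ArrayList<byte[]>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(selectSql)) {
                while (rs.next()) {
                    byte[] handle = rs.getBytes(1);
                    if (handle != null) {
                        planHandles.add(handle);
                    }
                }
            }
            // Sql2: DBCC FREEPROCCACHE expects a constant, so build the 0x... literal.
            for (byte[] handle : planHandles) {
                StringBuilder hex = new StringBuilder("0x");
                for (byte b : handle) {
                    hex.append(String.format("%02X", b));
                }
                try (Statement st = con.createStatement()) {
                    st.execute("DBCC FREEPROCCACHE (" + hex + ")");
                }
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}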

At what point is a transaction committed?

I've read about the entity lifecycle and the locking strategies, and I watched some videos about this, but I'm still not sure I understand. I understand there is also a locking mechanism in the underlying RDBMS (I'm using MySQL).
I would like to know at what point a transaction is committed / an entity is detached, and how that affects other transactions from a locking point of view. At what point does a user have to wait until a transaction finishes? I've made two different scenarios below. For the sake of understanding, assume the table in the scenarios contains a lot of rows and the for loop takes 10 minutes to complete.
Scenario 1:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAll() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        return query.getResultList();
    }

    public void update(Aclass a) {
        em.merge(a);
    }
}
and a calling class:
public class aRandomClass {

    @EJB
    AServiceInterface service;

    public void method() {
        List<Aclass> listAclass = service.getAll();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            service.update(a);
        }
    }
}
Without specifying a locking strategy: if another user wants to make an update to one row in the table and the for loop has already begun but is not finished, do they have to wait until the for loop is completed?
Scenario 2:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAllAndUpdate() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        List<Aclass> listAclass = query.getResultList();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            em.merge(a);
        }
        return listAclass;
    }
}
Same question.
It is important what kind of class your aRandomClass is. If it is also an EJB, you should take a look at transaction propagation. If it is a servlet, then the transaction is closed automatically right after your EJB method exits (no matter which one); that is done using dynamic proxies. So in scenario 1 the EJB container will open and close multiple transactions: one for service.getAll() and one for each service.update(a) call. In scenario 2, if the method getAllAndUpdate() is called only once, a single transaction will be opened and it will be closed on method exit.
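If you want each update from scenario 1 to run in its own short transaction regardless of the caller, transaction attributes give you explicit control; a small sketch, assuming container-managed transactions and the javax.ejb annotations:
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    // REQUIRES_NEW: each call runs in its own transaction, even when the caller
    // already has one, so any locks taken when merge() is flushed are released
    // as soon as this method returns.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void update(Aclass a) {
        em.merge(a);
    }
}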
