I have a Spring Boot 2.2.6 web app; I'm using Java 8, Apache Maven 3.6.3, JPA 2.2, and Hibernate Core 5.4.12.Final.
I have a model similar to the following.
An entity Period that represents a time period by a date:
@Entity
@Table
@Data
public class Period {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private LocalDate date;
}
An entity Monitor; for each Period we can have more than one Monitor (1 -> N):
@Entity
@Table
@Data
public class Monitor {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private String description;
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    @JoinColumn(name = "id_period", referencedColumnName = "id")
    private Period period;
}
And a MonitorInstance entity; for each Monitor we can have multiple MonitorInstances:
@Entity
@Table
@Data
public class MonitorInstance {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private String someData;
    private String someOthers;
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    @JoinColumn(name = "id_monitor", referencedColumnName = "id")
    @JoinColumn(name = "id_period", referencedColumnName = "id_period")
    private Monitor monitor;
}
On MonitorInstance I added a second join column which references not a primary key but a foreign key:
@JoinColumn(name = "id_period", referencedColumnName = "id_period")
This is because, if I have a table Period as follows:
----------------------
-- ID -- DATE       --
-- 1  -- 2022-03-03 --
-- 2  -- 2022-06-03 --
And a table Monitor as follows:
-------------------------------------
-- ID -- DESCRIPTION -- ID_PERIOD --
-- 1  -- tes1        -- 1         --
-- 2  -- tes1        -- 2         --
And try to insert (manually, via batch, via JPA, or in any other way) something like:
insert into
monitor_instance
(id, some_data, some_others, id_monitor, id_period)
values
(1, 'data', 'otherdata', 1, 2);
This should fail with an IntegrityConstraintViolation, because Monitor with id 1 is linked to Period with id 1, not 2.
And it works! The application starts and the DB is created (even via Flyway), but sometimes when I run a clean verify the console logs the following error:
org.hibernate.MappingException: Unable to find column with logical name "id_period" in table "period"
There are a lot of questions about this kind of error, even on this forum. I read them all; most are confusing and poorly explained (at least to me), and in any case I didn't find a working solution or an explanation of why this error happens.
Clearly this is an example situation; the real DB is a bit more complex, but the logic is the same.
Can someone help me figure out what is happening or, at least, how to improve my entities so that the result is the same but the problem no longer occurs?
N.B.: I want to point out that the error is not systematic! The code stays the same, yet sometimes the error shows up and sometimes it doesn't (my GitLab pipeline page looks like a traffic light).
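For what it's worth, one direction sometimes suggested for mappings like this (an untested sketch on my part, not from the question): JPA only guarantees that referencedColumnName resolves when the referenced column(s) form a unique key, so declaring the (id, id_period) pair explicitly unique on Monitor gives Hibernate an unambiguous target for the composite reference. The constraint name below is invented.

```java
import javax.persistence.*;
import lombok.Data;

// Sketch only: make the column pair referenced by MonitorInstance an
// explicit unique key. The constraint name is made up for illustration,
// and this has not been verified against Hibernate 5.4.12.Final.
@Entity
@Table(uniqueConstraints = @UniqueConstraint(
        name = "uk_monitor_id_period",
        columnNames = {"id", "id_period"}))
@Data
public class Monitor {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    private String description;

    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    @JoinColumn(name = "id_period", referencedColumnName = "id")
    private Period period;
}
```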
Creation of column with prefix and sequence using JPA and SQL Server.
I need some columns (not the primary key) with a prefix and a sequence, for example T1, T2, T3.
I've tried this:
CREATE SEQUENCE t_sequence START WITH 1 INCREMENT BY 1;
ALTER TABLE gate ADD T_NUM INT;
ALTER TABLE gate ADD T VARCHAR(30);
...
@Column(name = "T_NUM")
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "t_sequence")
@SequenceGenerator(name = "t_sequence", sequenceName = "mir_sequence", allocationSize = 1, initialValue = 1)
private int tNum;
@Column(name = "T")
private String t;
...
@PrePersist
public void onCreate() {
    super.onCreate();
    t = "T" + this.tNum;
}
As a result I always have T0 in the t column.
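A plain-Java sketch (not the original entity) of why the value ends up as "T0": @GeneratedValue is only specified by JPA to work on @Id fields, so the provider typically never assigns tNum from the sequence before lifecycle callbacks run; at @PrePersist time the field still holds the int default 0.

```java
public class PrefixDemo {
    // Simplified stand-in for the entity: tNum has not been populated
    // by the persistence provider, so it holds the int default 0.
    static class Gate {
        int tNum;
        String t;

        // Mirrors what the @PrePersist callback sees before the insert.
        void onCreate() {
            t = "T" + tNum;
        }
    }

    public static void main(String[] args) {
        Gate gate = new Gate();
        gate.onCreate();
        System.out.println(gate.t); // prints "T0"
    }
}
```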
In the e2e Flink SQL tutorial, the source table is defined as a Kafka-sourced table with a timestamp column on which watermarking is enabled:
CREATE TABLE user_behavior (
user_id BIGINT,
item_id BIGINT,
category_id BIGINT,
behavior STRING,
ts TIMESTAMP(3),
proctime AS PROCTIME(), -- generates processing-time attribute using computed column
WATERMARK FOR ts AS ts - INTERVAL '5' SECOND -- defines watermark on ts column, marks ts as event-time attribute
) WITH (
'connector' = 'kafka', -- using kafka connector
'topic' = 'user_behavior', -- kafka topic
'scan.startup.mode' = 'earliest-offset', -- reading from the beginning
'properties.bootstrap.servers' = 'kafka:9094', -- kafka broker address
'format' = 'json' -- the data format is json
);
As long as the GROUP BY is done with a TUMBLE over the ts field it seems natural (since Flink knows when to trigger/evict the windows), but in the middle of the tutorial we see the following expression:
INSERT INTO cumulative_uv
SELECT date_str, MAX(time_str), COUNT(DISTINCT user_id) as uv
FROM (
SELECT
DATE_FORMAT(ts, 'yyyy-MM-dd') as date_str,
SUBSTR(DATE_FORMAT(ts, 'HH:mm'),1,4) || '0' as time_str,
user_id
FROM user_behavior)
GROUP BY date_str;
Here we see that the GROUP BY is on the derived date_str field, but how does watermarking work here? How does Flink decide when to "close" a date_str bucket? Since date_str is some function of ts, Flink must somehow understand how a watermark update for ts translates into a water level for date_str, which seems unfeasible to me. How does this work internally? Does Flink store all encountered records in its state?
Perhaps you can refer to the link below to learn about the generation and delivery of watermarks, especially "How Operators Process Watermarks".
In this example, the watermark is generated from ts in the source operator, and the downstream operator only processes the watermark; it has nothing to do with the date_str field.
public class TimestampsAndWatermarksOperator<T> extends AbstractStreamOperator<T>
implements OneInputStreamOperator<T, T>, ProcessingTimeCallback {
......
@Override
public void open() throws Exception {
super.open();
timestampAssigner = watermarkStrategy.createTimestampAssigner(this::getMetricGroup);
watermarkGenerator =
emitProgressiveWatermarks
? watermarkStrategy.createWatermarkGenerator(this::getMetricGroup)
: new NoWatermarksGenerator<>();
wmOutput = new WatermarkEmitter(output);
watermarkInterval = getExecutionConfig().getAutoWatermarkInterval();
if (watermarkInterval > 0 && emitProgressiveWatermarks) {
final long now = getProcessingTimeService().getCurrentProcessingTime();
getProcessingTimeService().registerTimer(now + watermarkInterval, this);
}
}
@Override
public void processElement(final StreamRecord<T> element) throws Exception {
final T event = element.getValue();
final long previousTimestamp =
element.hasTimestamp() ? element.getTimestamp() : Long.MIN_VALUE;
final long newTimestamp = timestampAssigner.extractTimestamp(event, previousTimestamp);
element.setTimestamp(newTimestamp);
output.collect(element);
watermarkGenerator.onEvent(event, newTimestamp, wmOutput);
}
......
@Override
public void processWatermark(org.apache.flink.streaming.api.watermark.Watermark mark)
throws Exception {
// if we receive a Long.MAX_VALUE watermark we forward it since it is used
// to signal the end of input and to not block watermark progress downstream
if (mark.getTimestamp() == Long.MAX_VALUE) {
wmOutput.emitWatermark(Watermark.MAX_WATERMARK);
}
}
......
}
https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/event-time/generating_watermarks/
I am not able to map or get the desired results using Spring Data JPA for the setup below.
My Stored Procedure is as follows:
CREATE PROCEDURE [dbo].[sp_name] AS BEGIN
SET NOCOUNT ON;
MERGE Products AS TARGET
USING UpdatedProducts AS SOURCE
ON (TARGET.ProductID = SOURCE.ProductID)
--When records are matched, update the records if there is any change
WHEN MATCHED AND TARGET.ProductName <> SOURCE.ProductName OR TARGET.Rate <> SOURCE.Rate
THEN UPDATE SET TARGET.ProductName = SOURCE.ProductName, TARGET.Rate = SOURCE.Rate
--When no records are matched, insert the incoming records from source table to target table
WHEN NOT MATCHED BY TARGET
THEN INSERT (ProductID, ProductName, Rate) VALUES (SOURCE.ProductID, SOURCE.ProductName, SOURCE.Rate)
--When there is a row that exists in target and same record does not exist in source then delete this record target
WHEN NOT MATCHED BY SOURCE
THEN DELETE
--$action specifies a column of type nvarchar(10) in the OUTPUT clause that returns
--one of three values for each row: 'INSERT', 'UPDATE', or 'DELETE' according to the action that was performed on that row
OUTPUT
DELETED.ProductID AS TargetProductID,
INSERTED.ProductID AS SourceProductID;
END;
GO
My @Repository class looks like:
@Procedure(procedureName = "sp_name")
Map<String, Integer> callingSP();
I get the exception below:
Type cannot be null; nested exception is java.lang.IllegalArgumentException: Type cannot be null
Please help me figure out what went wrong.
For result sets that don't already exist as a table (and hence don't have an @Entity-decorated class definition somewhere), the trick seems to be using an interface for the results of @Query-decorated methods declared in your repositories.
Given the SQL setup for your stored procedure...
use master;
go
create database StackOverflow;
go
use StackOverflow;
go
create table dbo.Products(
ProductID int not null,
ProductName nvarchar(50),
Rate float
);
go
create table dbo.UpdatedProducts(
ProductID int not null,
ProductName nvarchar(50),
Rate float
);
go
insert dbo.Products (ProductID, ProductName, Rate) values
(10, 'Ten', 10.10),
(20, 'Twenty', 20.20);
insert dbo.UpdatedProducts (ProductID, ProductName, Rate) values
(20, 'Twenty', 20),
(30, 'Thirty', 30);
go
select * from dbo.Products;
select * from dbo.UpdatedProducts;
go
Which yields...

dbo.Products:

ProductID | ProductName | Rate
--------- | ----------- | ------------------
10        | Ten         | 10.1
20        | Twenty      | 20.199999999999999

dbo.UpdatedProducts:

ProductID | ProductName | Rate
--------- | ----------- | ----
20        | Twenty      | 20.0
30        | Thirty      | 30.0
Then, in Java we have...
// MergeResult.java
public interface MergeResult {
    Integer getSourceProductID();
    Integer getTargetProductID();
}
// ProductsRepository.java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.stereotype.Repository;
@Repository
public interface ProductsRepository extends JpaRepository<Products, Integer> {
    @Query(nativeQuery = true, value = "EXEC dbo.sp_name")
    List<MergeResult> callingSP();
}
// FooJpaApplication.java
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
public class FooJpaApplication {
private static final Logger log = LoggerFactory.getLogger(FooJpaApplication.class);
public static void main(String[] args) {
SpringApplication.run(FooJpaApplication.class);
}
@Bean
public CommandLineRunner demo(ProductsRepository repository) {
return (args) -> {
log.info("Executing sp_name()");
log.info("-------------------");
List<MergeResult> results = repository.callingSP();
for (MergeResult r : results) {
String line = String.format("TargetProductID=%d; SourceProductID=%d", r.getTargetProductID(),
r.getSourceProductID());
log.info(line);
}
};
}
}
Which yields the log output...
...
2021-02-25 20:30:01.136 INFO 84285 --- [ main] c.e.a.FooJpaApplication : Started FooJpaApplication in 3.076 seconds (JVM running for 3.388)
2021-02-25 20:30:01.138 INFO 84285 --- [ main] c.e.a.FooJpaApplication : Executing sp_name()
2021-02-25 20:30:01.138 INFO 84285 --- [ main] c.e.a.FooJpaApplication : -------------------
2021-02-25 20:30:01.238 INFO 84285 --- [ main] c.e.a.FooJpaApplication : TargetProductID=null; SourceProductID=30
2021-02-25 20:30:01.238 INFO 84285 --- [ main] c.e.a.FooJpaApplication : TargetProductID=10; SourceProductID=null
2021-02-25 20:30:01.238 INFO 84285 --- [ main] c.e.a.FooJpaApplication : TargetProductID=20; SourceProductID=20
2021-02-25 20:30:01.243 INFO 84285 --- [extShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
...
I have some questions about pessimistic locking in SQL Server. Here are my classes and test scenario.
Entity class:
@Data
@Entity(name = "mapping")
@Table(
    uniqueConstraints =
        @UniqueConstraint(
            name = "UQ_MappingEntity",
            columnNames = {
                Constants.DATA_TYPE_VALUE,
                Constants.DATA_TYPE_NAMESPACE_INDEX,
                Constants.TENANT_ID,
                Constants.ASSET_TYPE_NAME
            }))
public class MappingEntity {
    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;
    @Column(name = Constants.DATA_TYPE_VALUE)
    private long dataTypeValue;
    @Column(name = Constants.DATA_TYPE_NAMESPACE_INDEX)
    private int dataTypeNamespaceIndex;
    @Column(name = Constants.ASSET_TYPE_NAME)
    private String assetTypeName;
    @Column(name = Constants.TENANT_ID)
    private String tenantId;
}
Repository class:
public interface MappingRepository extends JpaRepository<MappingEntity, String> {
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    MappingEntity findMappingEntityWithLockByTenantIdAndAssetTypeName(
        String tenantId, String assetTypeName);
}
Service code block:
@Transactional
public void deleteAspectType(String tenantId, String assetTypeName) {
    MappingEntity mappingEntity = mappingRepository.findMappingEntityWithLockByTenantIdAndAssetTypeName(tenantId, assetTypeName);
    mappingRepository.delete(mappingEntity);
}
When I enable the Hibernate logs, I see the select query below.
select
mappingent0_.id as id1_1_,
mappingent0_.asset_type_name as asset_ty2_1_,
mappingent0_.data_type_namespace_index as data_typ3_1_,
mappingent0_.data_type_value as data_typ4_1_,
mappingent0_.tenant_id as tenant_i5_1_
from
mapping mappingent0_ with (updlock,
holdlock,
rowlock)
where
mappingent0_.tenant_id=?
and mappingent0_.asset_type_name=?
I sent two delete requests at the same time with the same tenant_id but different asset_type_name:
Transaction-1: tenant_id = "testtenant", asset_type_name = "testname1"
Transaction-2: tenant_id = "testtenant", asset_type_name = "testname2"
Transaction-1 runs the select query and gets its result. When Transaction-2 runs the select query, it blocks. After Transaction-1 deletes and finishes its transaction, Transaction-2 gets its result and deletes.
I have two questions:
What are (updlock, holdlock, rowlock) used for? When I use all three at the same time, how do they affect my query and transaction?
Why did Transaction-2 block when it ran the query, given that the two transactions selected different rows?
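One factor that may be relevant here (an assumption on my part, not stated in the post): holdlock makes the select run with serializable semantics, so SQL Server takes key-range locks over everything it scans. If no index covers tenant_id and asset_type_name, the scan locks ranges that span both rows, and the two transactions block each other even though they target different rows. A hedged sketch of declaring such an index in JPA (the index name is invented):

```java
import javax.persistence.*;

// Sketch only: an index on the filter columns can let the serializable
// select take narrow key-range locks instead of locking the whole
// scanned range. The index name is made up for illustration.
@Entity(name = "mapping")
@Table(indexes = @Index(
        name = "ix_mapping_tenant_asset",
        columnList = "tenant_id, asset_type_name"))
public class MappingEntity {
    @Id
    private String id;

    @Column(name = "tenant_id")
    private String tenantId;

    @Column(name = "asset_type_name")
    private String assetTypeName;
}
```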
My entity class:
@Entity
class MasterStccycode {
    private static final long serialVersionUID = 1L;
    @Id
    @Basic(optional = false)
    @NotNull
    @Size(min = 1, max = 3)
    @Column(name = "CODE")
    private String code;
    @Size(max = 100)
    @Column(name = "DESC")
    private String desc;
}
My JPA query:
SELECT t.code, t.desc FROM MasterStccycode t
Then I get the following exception:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'DESC'.
Error Code: 156
Call: SELECT CODE, DESC FROM master_stccycode
Query: ReportQuery(referenceClass=MasterStccycode sql="SELECT CODE, DESC FROM master_stccycode")
I know the solution is to wrap the DESC keyword in brackets as [DESC], but how can I do this in JPA QL?
DESC is a reserved word on most databases. You should rename the field.
You could also quote the field, but just renaming it would be best:
@Column(name = "\"DESC\"")
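The renaming option could look like this (the replacement column name is purely illustrative):

```java
// Sketch: mapping the Java field to a non-reserved column name avoids
// the need for quoting altogether.
@Column(name = "DESCRIPTION")
private String desc;
```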