stored procedure 'auto_pk_for_table' not found - sql-server

I don't know why I received this error:
org.apache.cayenne.CayenneRuntimeException: [v.4.0.M5 Feb 24 2017 07:47:55] Commit Exception
[...]
Caused by: java.sql.SQLException: Stored procedure 'auto_pk_for_table' not found.
[...]
I'm using Cayenne:
<dependency>
<groupId>org.apache.cayenne</groupId>
<artifactId>cayenne-server</artifactId>
<version>4.0.M5</version>
</dependency>
and jTDS for SQL Server:
<dependency>
<groupId>net.sourceforge.jtds</groupId>
<artifactId>jtds</artifactId>
<version>1.3.1</version>
</dependency>
The connection is OK:
avr. 10, 2017 2:36:30 PM org.apache.cayenne.datasource.DriverDataSource getConnection
INFOS: +++ Connecting: SUCCESS.
I'm trying to create a new user (I'm starting with the basics!), so my code is as follows
(trimmed a bit, it's too long!):
public abstract class _UserInfo extends CayenneDataObject {
public static final String ADDRESS_PROPERTY = "address";
public void setAddress(String address) {
writeProperty(ADDRESS_PROPERTY, address);
}
public String getAddress() {
return (String)readProperty(ADDRESS_PROPERTY);
}
}
public class UserInfo extends _UserInfo implements Serializable {
private static final long serialVersionUID = 1L;
public String getAddress() {
return super.getAddress();
}
public void setAddress(String address) {
super.setAddress(address);
}
//I have the hashcode and equals too
}
Then I used Vaadin to create my form:
public class UserAddView extends CustomComponent implements View {
private static final long serialVersionUID = 1L;
private TextField address;
private Button save;
public static final String USERVIEW = "user";
public boolean checkValidation() {
if (!checkTextFieldValid(address))
return false;
return true;
}
public boolean checkTextFieldValid(TextField element) {
if (element == null || element.isEmpty()) {
Notification.show(
"You should register a " + element.getDescription(),
Type.WARNING_MESSAGE);
return false;
}
return true;
}
public UserAddView() {
VerticalLayout mainLayout = new VerticalLayout();
mainLayout.setSizeFull();
setCompositionRoot(mainLayout);
final VerticalLayout vlayout = new VerticalLayout();
address = new TextField("Address:");
address.setDescription("Address");
vlayout.addComponent(address);
save = new Button("Save");
vlayout.addComponent(save);
mainLayout.addComponent(new HeaderMenu());
mainLayout.addComponent(vlayout);
addListeners();
}
private void addListeners() {
save.addClickListener(new ClickListener() {
private static final long serialVersionUID = 1L;
@Override
public void buttonClick(ClickEvent event) {
if (checkValidation()) {
ServerRuntime cayenneRuntime = ServerRuntime.builder()
.addConfig("cayenne-myapplication.xml").build();
ObjectContext context = cayenneRuntime.newContext();
UserInfo user = context.newObject(UserInfo.class);
user.setAddress(address.getValue());
user.getObjectContext().commitChanges();
Notification.show(
"Has been saved, We will send you your password by email. Your user login is: "
+ email.getValue(), Type.TRAY_NOTIFICATION);
getUI().getNavigator().navigateTo(HomepageView.MAINVIEW);
}
}
});
}
@Override
public void enter(ViewChangeEvent event) {
// TODO Auto-generated method stub
}
}
EDIT, additional information: in my user object I have a userid (primary key); in Cayenne I declared it as the primary key too, typed as smallint. This error seems to be linked to this: https://cayenne.apache.org/docs/3.1/api/org/apache/cayenne/dba/sybase/SybasePkGenerator.html

The error happens when you insert a new object. For each new object Cayenne needs to generate a primary key value. There are various strategies for doing this, and the default strategy depends on the DB that you are using. For SQL Server (and for Sybase, as you've discovered :)) that strategy is to use a special stored procedure.
To create this stored procedure (and other supporting DB objects), open your project in CayenneModeler and select "Tools > Generate Database Schema". In the "SQL Options" tab, uncheck all checkboxes except "Create Primary Key Support". The SQL shown in the window below the checkboxes is what you need to run on SQL Server; either run it from CayenneModeler or copy/paste it into your favorite DB management tool.
There's also an alternative that does not require a stored procedure: using the DB auto-increment feature. For this you will need to go to each DbEntity in the Modeler and, under the "Entity" tab, select "Database-Generated" in the "Pk Generation Strategy" dropdown. This of course implies that your PK column is indeed an auto-increment column in the DB (meaning you may need to adjust your DB schema accordingly).

Related

On JpaRepository.save(Entity e), e has a negative value as primary key in MS SQL Server database

When I do JpaRepository.save(Entity e), the primary key generated with the help of the Hibernate sequence is saved as a seemingly random value, generally starting from -43 or -42.
I am using a Spring Boot project with JPA.
Below is my property file:
hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
hibernate.hbm2ddl.auto=validate
hibernate.ejb.naming_strategy=org.hibernate.cfg.ImprovedNamingStrategy
hibernate.show_sql=false
hibernate.format_sql=true
This is the entity I am calling save on; the sequence CPU_Responses_Seq already exists in the DB:
@Entity
@Table(name="CPU_Responses")
public class CPUResponses extends BaseEnity{
/**
*
*/
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(generator="CPUResponseSeq",strategy=GenerationType.SEQUENCE)
@SequenceGenerator(name="CPUResponseSeq",sequenceName="CPU_Responses_Seq")
@Column(name = "Response_ID", nullable=false,updatable=false)
private long responseId;
This is my persistence config class:
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages= {"package path"})
@PropertySource("classpath:application.properties")
public class PersistanceConfiguration {
@Autowired
private Environment env;
public Environment getEnv() {
return env;
}
public void setEnv(Environment env) {
this.env = env;
}
@Bean
LocalContainerEntityManagerFactoryBean entityManagerFactory() throws NamingException {
LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
entityManagerFactoryBean.setDataSource(dataSource());
entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
entityManagerFactoryBean.setPackagesToScan("entity path");
Properties jpaProperties = new Properties();
//Configures the used database dialect. This allows Hibernate to create SQL
//that is optimized for the used database.
jpaProperties.put("hibernate.dialect", env.getRequiredProperty("hibernate.dialect"));
entityManagerFactoryBean.setJpaProperties(jpaProperties);
return entityManagerFactoryBean;
}
@Bean
public DataSource dataSource() throws NamingException {
JndiObjectFactoryBean bean = new JndiObjectFactoryBean();
bean.setJndiName("java:comp/env/jdbc/CPUDB");
bean.setProxyInterface(DataSource.class);
bean.setLookupOnStartup(false);
bean.afterPropertiesSet();
return (DataSource) bean.getObject();
}
@Bean
JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
JpaTransactionManager transactionManager = new JpaTransactionManager();
transactionManager.setEntityManagerFactory(entityManagerFactory);
return transactionManager;
}
}
I don't know what is going wrong. Data is getting saved in the DB, but with a negative primary key. The sequence in the DB has a min value of zero, so the sequence itself is correct.
Kindly help.
I think this is related to the changes Hibernate introduced in their sequence generator; try adding
hibernate.id.new_generator_mappings=false
or
spring.jpa.properties.hibernate.id.new_generator_mappings=false
Note that the "new generator" is not compatible with the previous version, so start from a clean database to avoid issues.
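If you want to keep the new generator mappings instead, another common fix for this exact symptom (a hypothetical sketch, assuming CPU_Responses_Seq increments by 1 in the database) is to align allocationSize with the sequence increment; the default allocationSize of 50 combined with an increment of 1 is what makes Hibernate's pooled optimizer compute ids below zero:
import javax.persistence.*;

@Entity
@Table(name = "CPU_Responses")
public class CPUResponses {

    // allocationSize must match the database sequence's INCREMENT BY value;
    // with the default of 50 the pooled optimizer subtracts the batch size
    // from the value the sequence returned, which yields the negative keys.
    @Id
    @GeneratedValue(generator = "CPUResponseSeq", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "CPUResponseSeq", sequenceName = "CPU_Responses_Seq", allocationSize = 1)
    @Column(name = "Response_ID", nullable = false, updatable = false)
    private long responseId;
}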

Switching from local database to SQL server

So our current code loads a CSV file into a local database via JdbcTemplate, which I then query. The issue was always performance, and we finally got access to a SQL Server instance that could hold the data. Naturally the company gets the guy with basically no database skills to set this up :P
@Autowired
DataSource dataSource;
@RequestMapping("/queryService")
public void queryService(@RequestParam("id")String id)
{
log.info("Creating tables");
jdbcTemplate.execute("DROP TABLE accounts IF EXISTS");
jdbcTemplate.execute("CREATE TABLE accounts(id VARCHAR(255), name VARCHAR(255), Organization__c VARCHAR(255)";
insertBatch(accounts,dataSource);
ArrayList<Account2> filteredaccs = filterAccount(jdbcTemplate);
.
public void insertBatch(ArrayList<Account2> accs, DataSource dataSource) {
List<Map<String, Object>> batchValues = new ArrayList<>(accs.size());
for (Account2 a : accs) {
Map<String, Object> map = new HashMap<>();
map.put("id", a.getId());
map.put("name", a.getName());
map.put("Organization__c", a.getOrganization__c());
batchValues.add(map);
}
SimpleJdbcInsert simpleJdbcInsert = new SimpleJdbcInsert(dataSource).withTableName("accounts");
int[] ints = simpleJdbcInsert.executeBatch(batchValues.toArray(new Map[accs.size()]));
}
.
public ArrayList<Account2> filterAccount(JdbcTemplate jdbcTemplate)
{
String sql= "query string";
ArrayList<Account2> searchresults = (ArrayList<Account2>) jdbcTemplate.query(sql,
new RowMapperResultSetExtractor<Account2>(new AccountRowMapper(), 130000));
return searchresults;
}
.
public class AccountRowMapper implements RowMapper<Account2> {
public Account2 mapRow(ResultSet rs, int rowNum) throws SQLException {
Account2 a = new Account2();
a.setId(rs.getString("id"));
a.setName(rs.getString("name"));
a.setOrganization__c(rs.getString("Organization__c"));
return a;
}
}
The question here is: what is the quickest way for me to 'switch over' to using SQL Server to pull the data down, with the same tables and rows, without changing too much of my current code?
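One minimal way to do the switch, sketched under assumptions the question doesn't state (Spring with spring-jdbc plus the Microsoft JDBC driver on the classpath; host, database and credentials below are placeholders): redefine the DataSource to point at SQL Server and keep the JdbcTemplate code unchanged.
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class SqlServerConfig {

    // Placeholder connection details; use a pooled DataSource in production.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl("jdbc:sqlserver://your-host:1433;databaseName=your_db");
        ds.setUsername("your_user");
        ds.setPassword("your_password");
        return ds;
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}
Note that DROP TABLE accounts IF EXISTS is embedded-database syntax; on SQL Server you would use IF OBJECT_ID('accounts', 'U') IS NOT NULL DROP TABLE accounts (or DROP TABLE IF EXISTS accounts on SQL Server 2016+).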

Dapper control dates

This question is meant to shed some light on handling control date-times using Dapper.
These controls are used to audit the information in a data store and figure out when a particular row was created / updated. I couldn't find any information in the project's GitHub repository or here on Stack Overflow, so I would like this post to become a central source of truth to help others, or even to turn into a future extension of the library.
Any answer, resource or best practice will be appreciated.
I've run into a case where I was working with a database that was consumed by both Rails and Dapper. Rails was managing created_at and updated_at, not the database. So in the .NET application I had to implement a solution that managed these and provided the ability to add additional business logic at these layers, such as events.
I've included a basic example of how I handled this with a wrapper around Dapper.SimpleCRUD for inserts and updates. This example does not expose the other critical methods from Dapper and SimpleCRUD such as Query, Get, Delete, etc.; you will need to expose those at your discretion.
For safety, ensure that you decorate your model's created_at property with the attribute [Dapper.IgnoreUpdate]:
[Table("examples")]
public partial class example
{
[Key]
public virtual int id { get; set; }
[Required(AllowEmptyStrings = false)]
[StringLength(36)]
public virtual string name { get; set; }
[Dapper.IgnoreUpdate]
public virtual DateTime created_at { get; set; }
public virtual DateTime updated_at { get; set; }
}
public class ExampleRepository : IExampleRepository
{
private readonly IYourDapperWrapper db;
public ExampleRepository(IYourDapperWrapper yourDapperWrapper){
if (yourDapperWrapper == null) throw new ArgumentNullException(nameof(yourDapperWrapper));
db = yourDapperWrapper;
}
public void Update(example exampleObj)
{
db.Update(exampleObj);
}
public example Create(example exampleObj)
{
var result = db.Insert(exampleObj);
if (result.HasValue) exampleObj.id = result.Value;
return exampleObj;
}
}
public class YourDapperWrapper : IYourDapperWrapper
{
private IDbConnectionFactory db;
public YourDapperWrapper(IDbConnectionFactory dbConnectionFactory){
if (dbConnectionFactory == null) throw new ArgumentNullException(nameof(dbConnectionFactory));
db = dbConnectionFactory;
}
public int? Insert(object model, IDbTransaction transaction = null, int? commandTimeout = null)
{
DateUpdate(model, true);
var results = db.NewConnection().Insert(model, transaction, commandTimeout);
if (!results.HasValue || results == 0) throw new DataException("Failed to insert object.");
return results;
}
public int Update(object model, IDbTransaction transaction = null, int? commandTimeout = null)
{
DateUpdate(model, false);
var results = db.NewConnection().Update(model, transaction, commandTimeout);
if (results == 0) throw new DataException("Failed to update object.");
return results;
}
private void DateUpdate(object model, bool isInsert)
{
model.GetType().GetProperty("updated_at")?.SetValue(model, DateTime.UtcNow, null);
if (isInsert) model.GetType().GetProperty("created_at")?.SetValue(model, DateTime.UtcNow, null);
}
}

JPA2 CriteriaBuilder: Using LOB property for greaterThan comparison

My application is using SQL Server and JPA 2 in the backend. The app makes use of a timestamp column (in the SQL Server sense, which is equivalent to rowversion; see here) per entity to keep track of freshly modified entities. NB: SQL Server stores this column as binary(8).
Each entity has a corresponding timestamp property, mapped as @Lob, which is the way to go for binary columns:
@Lob
@Column(columnDefinition="timestamp", insertable=false, updatable=false)
public byte[] getTimestamp() {
...
The server sends incremental updates to mobile clients along with the latest database timestamp. The mobile client will then pass the old timestamp back to the server on the next refresh request so that the server knows to return only fresh data. Here's what a typical query (in JPQL) looks like:
select v from Visit v where v.timestamp > :oldTimestamp
Please note that I'm using a byte array as a query parameter and it works fine when implemented in JPQL this way.
My problems begin when trying to do the same using the Criteria API:
private void getFreshVisits(byte[] oldVersion) {
EntityManager em = getEntityManager();
CriteriaQuery<Visit> cq = cb.createQuery(Visit.class);
Root<Visit> root = cq.from(Visit.class);
Predicate tsPred = cb.gt(root.get("timestamp").as(byte[].class), oldVersion); // compiler error
cq.where(tsPred);
...
}
The above results in a compiler error because the gt method works strictly with Number. One could instead use the greaterThan method, which merely requires the parameters to be Comparable, but that results in yet another compiler error.
So to sum it up, my question is: how can I use the criteria api to add a greaterThan predicate for a byte[] property? Any help will be greatly appreciated.
PS. As to why I'm not using a regular DateTime last_modified column: because of concurrency and the way synchronization is implemented, this approach could result in lost updates. Microsoft's Sync Framework documentation recommends the former approach as well.
I know this was asked a couple of years back, but just in case anyone else stumbles upon this: in order to use a SQL Server rowver column within JPA you need to do a couple of things.
Create a type that will wrap the rowver/timestamp:
import com.fasterxml.jackson.annotation.JsonIgnore;
import javax.xml.bind.annotation.XmlTransient;
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Arrays;
/**
* A RowVersion object
*/
public class RowVersion implements Serializable, Comparable<RowVersion> {
@XmlTransient
@JsonIgnore
private byte[] rowver;
public RowVersion() {
}
public RowVersion(byte[] internal) {
this.rowver = internal;
}
@XmlTransient
@JsonIgnore
public byte[] getRowver() {
return rowver;
}
public void setRowver(byte[] rowver) {
this.rowver = rowver;
}
@Override
public int compareTo(RowVersion o) {
return new BigInteger(1, rowver).compareTo(new BigInteger(1, o.getRowver()));
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RowVersion that = (RowVersion) o;
return Arrays.equals(rowver, that.rowver);
}
@Override
public int hashCode() {
return Arrays.hashCode(rowver);
}
}
The key here is that it implements Comparable, which you need if you want to use it in comparisons (and you definitely do).
Next, create an AttributeConverter that will move from a byte[] to the class you just made:
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
/**
* JPA converter for the RowVersion type
*/
@Converter
public class RowVersionTypeConverter implements AttributeConverter<RowVersion, byte[]> {
@Override
public byte[] convertToDatabaseColumn(RowVersion attribute) {
return attribute != null ? attribute.getRowver() : null;
}
@Override
public RowVersion convertToEntityAttribute(byte[] dbData) {
return new RowVersion(dbData);
}
}
Now let's apply this RowVersion attribute/type to a real world scenario. Let's say you wanted to find all Programs that have changed on or before some point in time.
One straightforward way to solve this would be to use a DateTime field in the object and timestamp column within db. Then you would use 'where lastUpdatedDate <= :date'.
Suppose that you don't have that timestamp column or there's no guarantee that it will be updated properly when changes are made; or let's say your shop loves SQLServer and wants to use rowver instead.
What to do? There are two issues to solve.. one how to generate a rowver and two is how to use the generated rowver to find Programs.
Since the database generates the rowver, you can either ask the db for the 'current max rowver' (a SQL Server-specific query) or you can simply save an object that has a RowVersion attribute and then use that object's generated RowVersion as the boundary for the query that finds the Programs changed after that point. The latter solution is more portable and is what is shown below.
The SyncPoint class snippet below is the object that is used as a 'point in time' kind of deal. So once a SyncPoint is saved, the RowVersion attached to it is the db version at the time it was saved.
Here is the SyncPoint snippet. Notice the annotation that specifies the custom converter (don't forget to make the column insertable = false, updatable = false):
/**
* A sample super class that uses RowVersion
*/
@MappedSuperclass
public abstract class SyncPoint {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
// type is rowver for SQLServer, blob(8) for postgresql and h2
#Column(name = "current_database_version", insertable = false, updatable = false)
#Convert(converter = RowVersionTypeConverter.class)
private RowVersion currentDatabaseVersion;
#Column(name = "created_date_utc", columnDefinition = "timestamp", nullable = false)
private DateTime createdDate;
...
Also (for this example) here is the Program object we want to find:
@Entity
@Table(name = "program_table")
public class Program {
@Id
private Integer id;
private boolean active;
// type is rowver for SQLServer, blob(8) for postgresql and h2
#Column(name = "rowver", insertable = false, updatable = false)
#Convert(converter = RowVersionTypeConverter.class)
private RowVersion currentDatabaseVersion;
#Column(name = "last_chng_dt")
private DateTime lastUpdatedDate;
...
Now you can use these fields within your JPA criteria queries just like anything else.. here is a snippet that we used inside a spring-data Specifications class:
/**
* Find Programs changed after a synchronization point
*
* @param filter that has the changedAfter sync point
* @return a specification or null
*/
public Specification<Program> changedBeforeOrEqualTo(final ProgramSearchFilter filter) {
return new Specification<Program>() {
@Override
public Predicate toPredicate(Root<Program> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
if (filter != null && filter.changedAfter() != null) {
// load the SyncPoint from the db to get the rowver column populated
SyncPoint fromDb = synchronizationPersistence.reload(filter.changedBeforeOrEqualTo());
if (fromDb != null) {
// real sync point made by database
if (fromDb.getCurrentDatabaseVersion() != null) {
// use binary version
return cb.lessThanOrEqualTo(root.get(Program_.currentDatabaseVersion),
fromDb.getCurrentDatabaseVersion());
} else if (fromDb.getCreatedDate() != null) {
// use timestamp instead of binary version cause db doesn't make one
return cb.lessThanOrEqualTo(root.get(Program_.lastUpdatedDate),
fromDb.getCreatedDate());
}
}
}
return null;
}
};
}
The specification above works with either the binary current database version or a timestamp; this way I could test my code and all the upstream code on a database other than SQL Server.
That's it really: a) type to wrap the byte[] b) JPA converter c) use attribute in query.
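For completeness, a hypothetical usage sketch (the repository and variable names are assumptions, not part of the original answer): with spring-data the specification plugs into any repository that also extends JpaSpecificationExecutor<Program>.
// programRepository extends JpaRepository<Program, Integer> and JpaSpecificationExecutor<Program>;
// programSpecifications is an instance of the Specifications class quoted above.
List<Program> changed = programRepository.findAll(programSpecifications.changedBeforeOrEqualTo(filter));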

Map null column as 0 in a legacy database (JPA)

Using the Play! framework and its JPASupport class, I have run into a problem with a legacy database.
I have the following class:
@Entity
@Table(name="product_catalog")
public class ProductCatalog extends JPASupport {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
public Integer product_catalog;
@OneToOne
@JoinColumn(name="upper_catalog")
public ProductCatalog upper_catalog;
public String name;
}
Some product catalogs don't have an upper catalog, and this is represented as 0 in the legacy database. If I supply the upper_catalog as NULL, then as expected JPA inserts a NULL value into that database column.
How could I force the null values to be 0 when writing to the database and the other way around when reading from the database?
I don't see any easy way of achieving what you want with JPA directly (and there is a great chance that even if you find a way that works for basic operations like save or load, it will not work for more complex use cases: complex criteria / HQL, non-standard fetching modes, etc.).
So I would do this:
@Entity
@Table(name="product_catalog")
public class ProductCatalog extends JPASupport {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
public Integer product_catalog;
#Column(name="upper_catalog")
public Long upper_catalog_id;
public String name;
public ProductCatalog getUpperCatalog() {
if (upper_catalog_id == 0)
return null;
return ProductCatalog.findById(upper_catalog_id);
}
public void setUpperCatalog(ProductCatalog pc) {
if (pc == null) {
upper_catalog_id = 0;
}
else {
if (pc.id == null) {
// option 1: behave a bit like a cascade and save the transient entity
pc.save();
// option 2: if you consider passing a transient entity invalid, throw instead
// throw new RuntimeException("transient entity " + pc.toString());
}
upper_catalog_id = pc.id;
}
}
}
I see two options:
1. Use a primitive data type as the id (i.e. int instead of Integer).
2. If you are using Hibernate as your JPA provider, use a CustomType to do the conversion (a plain-JPA sketch of the same idea follows below).
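Not part of the original answer, but for illustration: with plain JPA 2.1+ an AttributeConverter is the closest portable analogue of a Hibernate CustomType for this case. A minimal sketch (the converter name is made up), applied to the raw foreign-key column rather than to the @OneToOne association itself:
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Hypothetical converter: writes 0 to the legacy column when the Java value is null,
// and reads the legacy 0 back as null. Apply it with @Convert on a Long upper_catalog_id field.
@Converter
public class ZeroMeansNullConverter implements AttributeConverter<Long, Long> {

    @Override
    public Long convertToDatabaseColumn(Long attribute) {
        return attribute == null ? Long.valueOf(0L) : attribute;
    }

    @Override
    public Long convertToEntityAttribute(Long dbData) {
        return (dbData == null || dbData.longValue() == 0L) ? null : dbData;
    }
}
Like the getter/setter approach above, this works on the plain numeric column, not on the association mapping itself.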
