I am using EclipseLink JPA 2.0 with a HANA database. I created views and generated the Java entity files using Eclipse. The problem is that after starting the Java web application on Tomcat 7, if any table data is modified, the view does not return the updated values. Even when I run the view with a native query, it returns the old values. Please let me know what changes are needed at the configuration level or the entity level. (I even added @Cacheable(false).)
Assuming that by view you mean a database view, the following answered question might help (though it talks about Oracle instead):
Materialized View - Oracle / Data is not updating
You might be using some sort of materialized view that the database can be configured to refresh on certain events. In that case, the problem does not lie in EclipseLink's caching mechanisms but in your database, since you mentioned that even native queries returned stale data.
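That said, if you want to rule out EclipseLink's shared (second-level) cache entirely while debugging, a minimal sketch like the one below disables it for the whole persistence unit and forces one query to refresh from the database. The unit name "my-unit" and the entity MyViewEntity are placeholders, not names from your project:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CacheCheck {
    public static void main(String[] args) {
        // Turn off the shared (second-level) cache for every entity in the unit.
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("eclipselink.cache.shared.default", "false");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("my-unit", props);
        EntityManager em = emf.createEntityManager();

        // Force this particular query to refresh its results from the database.
        List<?> rows = em.createQuery("SELECT v FROM MyViewEntity v")
                         .setHint("eclipselink.refresh", "true")
                         .getResultList();
        System.out.println("rows: " + rows.size());

        em.close();
        emf.close();
    }
}

If the results are still stale with the shared cache off, and as you saw even with a native query, the staleness is happening on the database side rather than in JPA.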
At a conference yesterday, I learned about the importance of putting your database in source control. They showed us how to make a new Database project and import the database.
What I was wondering about is how I would change an existing project running on Entity Framework to utilize the database project's power?
Schema updates have always been done by using Entity Framework Migrations. I get that the Database project will be able to deploy database updates for me and save those update scripts to source control, but I would like to keep Entity Framework for querying my data (if that makes any sense at all).
Is it possible (or even: recommended) to use Entity Framework to access the database but manage the database using a Database project in Visual Studio? How do you go about this?
I've tried searching for similar questions and using Google to find if anyone else is having the same problem, but no dice so far.
I should also state that I am considering using this in databases that also have stored procedures in them. These are not controlled through Entity Framework at all, and therefore are not in source control yet.
Thank you for your time.
What I was wondering about is how I would change an existing project running on Entity Framework to utilize the database project's power?
Answer: I suggest you look at this Pluralsight course: https://www.pluralsight.com/courses/code-first-entity-framework-legacy-databases
Is it possible (or even: recommended) to use Entity Framework to access the database but manage the database using a Database project in Visual Studio? How do you go about this?
Answer: Yes, it is possible and recommended. Your database project becomes the source of truth about the structure of your database. This is very powerful for keeping control of all the changes and the state of your database in one place (Visual Studio). The course linked above will teach you how.
I should also state that I am considering using this in databases that also have stored procedures in them. These are not controlled through Entity Framework at all, and therefore are not in source control yet.
Answer: I don't see any problem with using stored procedures. The tool from the Pluralsight course will put the procedure into your source control, and the reverse engineering will create a class/method for easy use of the proc.
I just came across the alternative below, though I haven't tested it:
Generate Entity Framework Core classes from a SQL Server database project - .dacpac file
I believe it is worth considering.
I developed an application like that, with two projects: the application itself and the SSDT project for the database. The database changes were deployed via change scripts, and EF migrations were disabled in the application.
Everything worked fine, although it did bring a bit of overhead. For example, it was a bit of a hassle to introduce major database updates or refactorings into the EF layer. For some reason I was unable to reverse engineer database changes directly into the app, so I had to do it half-manually: create a new project, generate an EF context for the entire database, and then copy the new and changed files into the main application.
(Then again, it was almost 5 years ago. With luck, EF scaffolding has improved since then.)
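For what it's worth, the reverse-engineering step described above can now be done with EF Core's scaffolding command; a typical invocation looks something like this (the connection string and output folder are examples):

dotnet ef dbcontext scaffold "Server=.;Database=MyDb;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models --force

The --force switch overwrites the previously generated classes, which avoids the copy-files-between-projects dance described above.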
Spring Boot is a good framework for developing applications quickly. However, when creating an application bound to a database, it seems some of the work must be done twice (I'm using Flyway):
write the SQL scripts that create the tables
create the Spring entities with the corresponding annotations
run the application: the Flyway scripts generate the tables
Writing both scripts AND entities can be time-consuming, with no added value. Is it possible to do it only once?
Thanks
Just set these properties in your configuration file:
spring.jpa.properties.javax.persistence.schema-generation.create-source=metadata
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=create.sql
The schema file will be generated automatically in the project root. Hope it helps.
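For illustration, here is a minimal entity sketch that the settings above would pick up; the class, table, and column names are invented for the example:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity; schema generation reads this metadata.
@Entity
@Table(name = "customer")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // nullable/length constraints end up in the generated DDL
    @Column(nullable = false, length = 100)
    private String name;

    // getters and setters omitted for brevity
}

On startup the metadata is written to create.sql (a CREATE TABLE customer ... statement in this case), which you can then adapt into a Flyway migration such as V1__init.sql instead of writing it by hand.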
You can also use the JPA Buddy plugin. It has a "Show DDL" menu where you can visualize the SQL script for a selected entity. Really useful when you want to avoid creating everything manually.
I am working on Adobe CQ. I created 2-3 versions (1.2, 1.2, 1.3) of a particular page in my author instance. I then packaged my content page and installed it on another instance, but I could not see the page's versions on that instance.
Can anyone help me out with this? I want to migrate my content pages along with their versions from one CQ instance to another.
We are in the same situation. You can extract prior version details using the packaging approach, but you will be precluded from reloading them due to the new Oak security model. The next issue is that you would need to extract and transform the data, and then reinsert it, because the node IDs can differ, especially if you are extracting partial data sets.
Where we have gotten to, and are proving now, is using the new migration tool to move content from instance to instance; it purportedly has a version extraction capability. I will update the details here when we get our results back.
UPDATE:
We have tested the CRX2OAK migration tool, and it indeed does move versions across. Using the tool, you can specify filters to only migrate a subset of content, which will then drag the version details across as well.
This approach seems to work quite well for both single-tenancy and multi-tenancy setups, much as the package-based approach did for content.
Unfortunately, it can't be used as a portable backup system, as it is an instance-to-instance solution. It does, however, work well for blue/green deployment strategies.
Versions are stored under '/jcr:system/jcr:versionStorage' in AEM.
To transfer pages with their versions, create a package whose filters cover both the content you want to move and the version storage path, then download the package and install it on the other AEM instance, as sketched below.
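A minimal FileVault filter.xml sketch for such a package might look like the following; the content path is an example and should be adjusted to your pages:

<workspaceFilter version="1.0">
    <!-- the pages you want to migrate (example path) -->
    <filter root="/content/mysite/en/mypage"/>
    <!-- their version histories -->
    <filter root="/jcr:system/jcr:versionStorage"/>
</workspaceFilter>

Note that /jcr:system/jcr:versionStorage holds the version histories for the entire repository, so a package built with this filter can become large.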
If anyone comes across this question like me, here is the summarised answer:
You can use the crx2oak utility, available from the link below, to migrate pages and page versions across instances:
https://repo.adobe.com/nexus/content/groups/public/com/adobe/granite/crx2oak/
This is a powerful utility with multiple uses (especially in upgrades), as documented in the links below:
https://docs.adobe.com/docs/en/aem/6-2/deploy/upgrade/using-crx2oak.html
https://jackrabbit.apache.org/oak/docs/migration.html
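For reference, an invocation might look something like the line below; the paths are examples, and the options follow the oak-upgrade flags in the Jackrabbit documentation linked above, so check the --help output of your crx2oak version for the exact spelling:

java -Xmx4096m -jar crx2oak.jar --copy-versions=true --include-paths=/content/mysite /path/to/source/crx-quickstart/repository /path/to/target/crx-quickstart/repository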
The source and destination repositories need to be offline while running this utility, so it is best to plan ahead for this type of migration.
HTH
Background:
I am using GitHub to store a ZF2 application.
The database schema and the actual data stored inside it are not under version control. At the moment I am in development mode, so I have some database dump scripts that I load into the database when I need to. I also tweak entries in the database via phpMyAdmin when I need ongoing granular control for immediate testing purposes. I am also looking into using Doctrine ORM, so my schema will be part of my code via annotations and will be checked into GitHub. Doctrine ORM will generate the actual schema for me, although that is still a separate step in the deployment process. The actual data, however, still lives outside the application and outside the repository; it currently has to be dealt with separately and is not automated.
Goal:
I want to be able to deploy the ZF2 application, the database schema, and the data onto Zend Server and have it "just work" in the most automated, least manual way possible.
Question:
What is a recommended, best-practice way to deploy every aspect of a ZF2 application in the most automated, least manual way possible and have it "just work"? Let's focus on development and testing here, as in production it may be good to have separate deployment steps to protect against accidental live-data overwrites.
You can try Phing (http://www.phing.info/) for deploying your PHP application, adjusting directory permissions, running database migrations, running unit tests, and so on. I have used Phing in a couple of my projects with great success.
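As a starting point, a stripped-down build.xml for such a Phing setup could look like the following; the target names, paths, and commands are examples to adapt, not a drop-in file:

<?xml version="1.0" encoding="UTF-8"?>
<project name="zf2app" default="deploy">

    <!-- regenerate the schema from the Doctrine annotations -->
    <target name="schema">
        <exec command="php vendor/bin/doctrine orm:schema-tool:update --force"
              checkreturn="true" passthru="true"/>
    </target>

    <!-- load the development seed data (example dump file) -->
    <target name="seed" depends="schema">
        <exec command="mysql -u dev -pdevpass zf2app &lt; data/dev-seed.sql"
              checkreturn="true"/>
    </target>

    <!-- run the test suite before anything goes out -->
    <target name="test" depends="seed">
        <exec command="php vendor/bin/phpunit" checkreturn="true" passthru="true"/>
    </target>

    <target name="deploy" depends="test">
        <echo msg="Build verified; hand off to your Zend Server deployment step."/>
    </target>
</project>

Running "phing seed" then puts the database into a known state with one command, which covers the development and testing scenario in the question.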
I'm currently working on a Grails project which has a static production database with a lot of data in it. I would like to test my application using the production data, but instead of having to clone the production database I'd like to setup a proxy database to the production database.
Essentially, reads would go all the way through to the production database while writes would stop at a proxy database (preferably an H2 database). If a row that came from the production database was updated, the updated row would be saved to the proxy database and returned on subsequent queries instead of production's row.
I'd like to do all of this as transparently to the application as possible. My current line of thinking is that I'd need to fork the Hibernate GORM implementation and make it support this use case. Has this been done before? Is there a better way?
Forking the Hibernate GORM implementation may not be a good idea. You would be stuck on your fork and would have to keep it up to date with the original plugin somehow (e.g. bug fixes, new features).
Maybe a custom TestMixin that lets you override all registered domain classes with new implementations of save(), get(), find(), etc. could be an option. You can work with the metaClass to override these static methods, and the overrides will be triggered only in tests annotated with the mixin.
With this you can use multiple datasources in the test environment and decide which one each operation should use; a rough sketch follows.
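This is a rough, untested Groovy sketch of that idea; the ProxyStore class and every name in it are hypothetical, introduced only to illustrate the read-through/write-local behaviour:

// Minimal in-memory stand-in for the H2-backed proxy database.
class ProxyStore {
    private static final Map store = [:].withDefault { [:] }
    static Object find(Class type, Serializable id) { store[type][id] }
    static void put(Class type, Object instance) { store[type][instance.id] = instance }
}

class ProxyDatasource {
    // Call once per domain class from the test mixin's setup.
    static void intercept(Class domainClass) {
        // Capture the original GORM get() before overriding the dispatch.
        def originalGet = domainClass.metaClass.getStaticMetaMethod('get', [Serializable] as Class[])

        // Reads: prefer a locally overridden row, fall back to production.
        domainClass.metaClass.static.get = { Serializable id ->
            ProxyStore.find(domainClass, id) ?: originalGet.invoke(domainClass, [id] as Object[])
        }

        // Writes: never reach production; park them in the proxy store.
        domainClass.metaClass.save = { Map args = [:] ->
            ProxyStore.put(domainClass, delegate)
            delegate
        }
    }
}

Dynamic finders and criteria queries would bypass these overrides and need the same treatment, which is probably the strongest argument for keeping this at the mixin layer rather than forking GORM.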