Unfortunately, the CakePHP documentation is horrible. I can't seem to figure out if it's possible to cache the results of these DESCRIBE queries in my live environment. All I can find on caching for CakePHP is caching for data within controllers...
Set your debug to 0 in your core.php file. With debugging > 0, Cake automatically checks for schema changes.
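For reference, in the CakePHP 2.x layout this is a one-line change in app/Config/core.php (in 1.x the path is app/config/core.php instead); a minimal sketch:

```php
// app/Config/core.php
// With debug at 0, Cake reuses the cached model schema instead of
// re-running DESCRIBE queries against the live database on each request.
Configure::write('debug', 0);
```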
I am using EclipseLink JPA 2.0 with a HANA database. I created views and generated Java entity classes for them using Eclipse. The problem is that after starting the Java web application on Tomcat 7, if any table data is modified, the view does not return the updated values. Even when I run the view with a native query it returns the old values. Please let me know what changes I need to make at the configuration or entity level. (I have even added @Cacheable(false).)
Assuming that by view you mean a database view, the following previously answered question might help (though it talks about Oracle instead):
Materialized View - Oracle / Data is not updating
You might be using some sort of materialized view in the database that is configured to refresh only on certain events. In that case the problem does not lie in EclipseLink's caching mechanisms but in your database (as you mentioned that even native queries return stale data).
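If you want to rule out EclipseLink's shared (L2) cache entirely while diagnosing this, it can also be disabled for the whole persistence unit instead of per entity. A sketch of the persistence.xml fragment, with a placeholder unit name:

```xml
<persistence-unit name="myUnit">
  <!-- JPA 2.0 standard element: disable the shared (L2) cache entirely -->
  <shared-cache-mode>NONE</shared-cache-mode>
  <properties>
    <!-- EclipseLink-specific equivalent -->
    <property name="eclipselink.cache.shared.default" value="false"/>
  </properties>
</persistence-unit>
```

If stale data still comes back with the shared cache off, that confirms the problem is on the database side.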
I'm struggling with a problem that I can't find a way to fix. I'm currently running Joomla 3.4.5 with a Gantry-based theme. I have tried minifying CSS, JS, and HTML, and also optimized the images with the Google PageSpeed Insights tool.
I ran the debug system and it shows that
Application: beforeRenderModule mod_rocknavmenu
takes 21.7 seconds. I think that is the issue. How can I solve it?
The site is this
Thank you for your support.
I would guess the site is slow due to your hosting.
Also news01.png and news02.png are taking several seconds each to come through.
Update all of your extensions; out-of-date extensions can impact performance.
Check your slow query log, CPU, and memory usage on your server. Those will tell you more about potential issues.
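If the slow query log isn't enabled yet, it can be switched on in the MySQL server configuration; a minimal sketch (the file path and threshold are example values, adjust for your server):

```ini
# my.cnf - log queries that take longer than 1 second
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
```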
Found a really easy solution: just disable the System - Model plugin.
While analyzing Appstats traces for datastore_v3.Get calls, it would be very helpful to know which entities are being retrieved from the datastore. Is there a hidden configuration flag that enables this?
I have tried setting appstats_DATASTORE_DETAILS to True in appengine_config.py, but it doesn't seem to make any difference.
As far as I know there is no hidden configuration flag to do that. The sources of the Appstats module are open.
You can view them at https://code.google.com/p/googleappengine/source/browse#svn%2Ftrunk%2Fpython%2Fgoogle%2Fappengine%2Fext%2Fappstats (python version) and https://code.google.com/p/googleappengine/source/browse#svn%2Ftrunk%2Fjava%2Fsrc%2Fmain%2Fcom%2Fgoogle%2Fappengine%2Ftools%2Fappstats (java version).
You can search there to see if something like that already exists, or you can copy the code and make the changes yourself. You can even send back a patch so the changes can be integrated into the SDK.
I am using DN3 and GAE 1.7.4.
I use JPA 2, which according to the documentation has the Level 2 cache enabled by default.
Here is my question:
If I run a query that returns some objects, will these objects be put in the cache automatically, keyed by their IDs?
If I run em.find() with the ID of an object that has already been loaded by another query (createQuery().getResultList()), will it be available in the cache?
Do I need to run my em.find() or query in a transaction for the cache to kick in?
I need some clarification on how this cache works and how I could do my queries/finds/persists in order to make the best use of the cache.
Thanks
From Google App Engine: Using JPA with App Engine
Level2 Caching is enabled by default. To get the previous default behavior, set the persistence property datanucleus.cache.level2.type to none. (Alternatively, include the datanucleus-cache plugin in the classpath, and set the persistence property datanucleus.cache.level2.type to javax.cache to use Memcache for L2 caching.)
As for your doubts, this depends on your query as well as on DataNucleus and GAE Datastore adapter implementation specifics. As Carol McDonald suggested, I believe the best way to answer your questions is the JPA 2 Cache interface, more specifically its contains method.
Run your query, get access to the Cache interface through the EntityManagerFactory and see if the Level 2 cache contains the desired entity.
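A sketch of that check, assuming a hypothetical Book entity and a persistence unit named transactions-optional (both placeholders); this won't run outside a configured GAE/DataNucleus project:

```java
import javax.persistence.Cache;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Book {
    @Id Long id;
    String title;
}

public class CacheCheck {
    public static void main(String[] args) {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("transactions-optional");
        EntityManager em = emf.createEntityManager();

        // Load the entity by ID (or via a query)
        Book b = em.find(Book.class, 1L);

        // Ask the JPA 2 Cache interface whether the L2 cache now holds it
        Cache cache = emf.getCache();
        System.out.println(cache.contains(Book.class, 1L));

        em.close();
        emf.close();
    }
}
```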
Enabling DataNucleus logs will also give you good hints about what is happening behind the scenes.
After debugging in the local GAE development mode I verified that the Level 2 cache works. No transaction begin/commit is needed. The results of my simple query on primary keys, as well as em.find(), were put in the cache keyed by primary key.
However, the default cache timeout in the local development server is only a few seconds, so I had to add this:
<property name="datanucleus.cache.level2.timeout" value="3600000" />
to persistence.xml.
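For context, that property goes inside the <properties> element of the persistence unit in persistence.xml; a sketch with a placeholder unit name (the DataNucleus timeout value is in milliseconds, so 3600000 is one hour):

```xml
<persistence-unit name="transactions-optional">
  <properties>
    <!-- keep L2-cached entries for one hour instead of the default -->
    <property name="datanucleus.cache.level2.timeout" value="3600000"/>
  </properties>
</persistence-unit>
```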
There are at least two Grails plugins that emulate the database migration functionality of Rails:
Autobase
Liquibase
Is there a consensus about which of these is best, or is there another plugin that does database migration better than either of the above?
There is now a standard Grails database migration plugin available. According to this blog post, the Liquibase plugin will therefore not be maintained beyond the Liquibase 1.9 release.
The new database migration plugin has built-in functionality to execute changelogs on startup and supports the definition of changes in Groovy DSL, so it's probably what you are looking for.
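For illustration, a changelog in the plugin's Groovy DSL looks roughly like this (the file location, table, and column names are made up for the example):

```groovy
// grails-app/migrations/changelog.groovy
databaseChangeLog = {
    changeSet(author: "example", id: "add-author-to-book") {
        addColumn(tableName: "book") {
            column(name: "author", type: "varchar(255)")
        }
    }
}
```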
I use Autobase (which is built on top of Liquibase) as it (last time I checked) allows you to automatically check/apply your migrations when the app starts. With the Liquibase plugin I have to do this myself in servlet init code. This allows you to set your datasource to dbCreate = none and let Autobase handle getting the DB into shape.
It does mean you need to write a migration each time you add a property to a domain class, but I think this is a good thing as it makes you think about what the underlying field should actually be instead of just letting Hibernate take a guess at it.
I think some of the Autobase plugin (e.g. the Groovy DSL) is being migrated back into the Liquibase plugin, but you'd need to check up on that.
The only downside to Autobase is the lack of good documentation. There is some, but it's not complete. Luckily, the DSL mirrors the Liquibase XML tags, so you can work most of it out.
I use Liquibase. I'm not sure that Robert is still actively maintaining Autobase, and the XML that Liquibase provides is actually pretty DSL-like. I think it also gives a little bit of separation to your database commands and doesn't ingrain them into the start-up process (some people might prefer the reverse).
At least as of Grails 2.0, the database migration plugin is the de facto way to handle non-trivial database changes. The plugin is built on Liquibase and is authored by the SpringSource folks, always a mark of quality. I wrote an introduction to the database migration plugin which might be of use to someone reading this.
I have heard that Autobase is still maintained, but consider that the Grails Database Migration Plugin is written by the core team and is likely going to be the officially supported one.
In other words, I would encourage you to wait for Grails 1.4 (see the roadmap) before choosing either of the plugins above.
Yes, I also see the migration plugin. This is helpful:
http://grails-plugins.github.io/grails-database-migration/