I’m working on a project that uses Oracle ADF Faces. ADF introduces some additional scopes (pageFlowScope, viewScope and backingBeanScope) on top of the standard JSF ones. Our use of one of the ADF scopes, viewScope, appears to be causing our session size to bloat over time.
Objects that are view scoped (e.g. our backing beans) are managed by ADF and appear to be put into the session inside org.apache.myfaces.trinidadinternal.application.StateManagerImpl$PageState objects. The number of these objects in the session is equal to the value of the org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS setting in our web.xml configuration file.
Once all of the tokens are ‘used up’ by navigating around the application, the oldest of these objects is removed from the session and (should be) garbage collected. However, the space is not actually reclaimed until much later, after the session has expired. Because of this, when load testing the application we see heap usage gradually increasing until the JVM eventually crashes.
We monitor the creation and destruction of our objects by adding log statements in the default constructor and in the finalize method (which overrides finalize on Object). The log statements on object creation are seen when we would expect them, but the log statements from the finalize method are only seen after session expiry. When a garbage collection is triggered using Oracle JRockit Mission Control we see the heap usage drop significantly, but we don't observe any logging from the finalize method calls.
Does anyone have any thoughts on why the garbage collector might not be able to reclaim view scoped objects after they are removed from the session?
Thanks in advance.
You might want to switch the bean to requestScope. This would mean the bean is only instantiated for the duration of the request-response cycle and discarded afterwards. Also, the ADF scopes are built on top of the usual JSF scopes.
For a project I need to have a unique ID generator. So I thought about a Singleton with synchronized methods.
Since a Singleton following the traditional Singleton pattern (private static instance) is shared across sessions, I'm wondering whether the @Singleton annotation works the same way?
The documentation says: Identifies a type that the injector only instantiates once.
Does that mean a @Singleton will be independent per user session (which is bad for an ID generator)? Should I prefer an old-school Singleton with Class.getInstance() over injection of a @Singleton bean?
Or should I use neither and provide the service within an @ApplicationScoped bean?
It must be guaranteed that only ONE thread, independent of the user session, can access the method that generates the next ID. (It's not solvable with auto-increment database IDs.)
Edit: I'm talking about JSF 2.2, CDI and javax.inject.* :)
All those kinds of singletons (static, @javax.inject.Singleton, @javax.ejb.Singleton and @javax.enterprise.context.ApplicationScoped) are created once per JVM.
An object that is created once per user session must be annotated with @javax.enterprise.context.SessionScoped, so no, singletons will not be instantiated per user session.
Notice that there are two @Singleton annotations, one in javax.inject and the other in the javax.ejb package. I'm referring to them by their fully-qualified names to avoid confusion.
The differences between all those singletons are subtle and I'm not sure I know all the implications, but a few come to mind:
@javax.ejb.Singleton is managed by the EJB container and so it can handle transactions (@javax.ejb.TransactionAttribute), read/write locking and time-outs (@javax.ejb.Lock, @javax.ejb.AccessTimeout), application startup (@javax.ejb.Startup, @javax.ejb.DependsOn) and so on.
@javax.enterprise.context.ApplicationScoped is managed by the CDI container, so you won't have the transaction and locking features that EJB has (unless you use a post-1.0 CDI that has added transactions), but you still have lots of nice things such as @javax.enterprise.inject.Produces, @javax.annotation.PostConstruct, @javax.inject.Named and @javax.enterprise.inject.Disposes (though many of these features are available to EJBs too).
@javax.inject.Singleton is similar to @ApplicationScoped, except that there is no proxy object (clients will hold a reference to the object directly). There is less indirection to reach the real object, but this may cause some issues related to serialization (see this: http://docs.jboss.org/weld/reference/latest-2.2/en-US/html_single/#_the_singleton_pseudo_scope)
A plain static field is simple and just works, but it's controlled by the class loader, so in order to understand how and when it is instantiated and garbage collected (if ever), you will need to understand how class loaders work and how your application server manages its class loaders (when restarting, redeploying, etc.). See this question for more details.
javax.inject.Singleton - When used on your bean, you have to implement writeReplace() and readResolve() to avoid serialization issues. Use it judiciously based on what your bean actually holds.
javax.enterprise.context.ApplicationScoped - Allows the container to proxy the bean and take care of the serialization process automatically. This is recommended to avoid unforeseen issues.
For more information, refer to page 45 here.
Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list but it never actually completes. We only have a total of 43MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we see two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message, nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problems down to a specific entity kind for a particular namespace, but we can't figure out why that entity kind is failing whereas the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The particular namespace in which the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that's preventing the backup. Is there something we need to set up or enable through settings/configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate/fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that from their logs they could see that the MapReduce failed because of the large number of properties that our entity kind had. The specific entity kind that was causing the failure had a large number of variable properties that were generating errors when MapReduce tried to write out a schema. They indicated that the solution on their end was to ignore entities like this during the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since the embedded annotation breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to make it use the new serialized property. This made the backup/restore work again.
You can read more about the differences between embedded and serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can further debug this specific scenario.
Thanks!
On GAE my handler calls a function that does all the heavy lifting. All the objects are created within the function. However, after the function exits (it returns a string for response.out.write), the memory usage does not go down. The first HTTP call to GAE works, but memory stays at about 100 MB afterwards. The second access attempt fails because the private memory limit is reached.
I have cleared all the class-level static objects that I wrote and called the close and clear functions of the third-party library, to no avail. How does one cleanly release memory? I'd rather force a restart than track down memory leaks. Performance is not an issue here.
I know that it is not due to GC. GAE reports that memory stays at a high level for a long period of time; the two HTTP calls above were separated by minutes or longer.
I've tried importing my function inside the handler's get method. After serving the page I tried to del all imported third-party modules and then my own module. In theory each call should now get a fresh import of all suspected modules, but the memory problem still persists. The only (intended) modules left between calls should be standard library modules (including lxml, xml etc.).
EDIT:
I now use the task queue to schedule the heavy-duty part on a backend instance and use db.Blob to pass around the results. Getting backends to work solves the memory issue. The GAE documentation on backends is complete but confusing. The key is that one needs to follow the instructions on 1) editing backends.yaml and 2) using appcfg to update (deploying from the launcher is not enough). Afterwards, check in the admin console that the backend is up. Also, the task queue target= parameter breaks on the development server, so one needs to work around it there.
This is (probably) due to the fact that there is no guarantee that the garbage collector (which is in charge of freeing unused memory) will kick in immediately when your function returns.
You could manually force it to kick in via a few hacks, but that will not solve anything if two HTTP requests happen at approximately the same time.
Instead I recommend you look at solutions which don't require you to do the heavy lifting on each request.
If the data generated is unique for each request, see if you can do the computations outside of your (limited) private memory pool.
How do I manually start the garbage collector?
When your heavyweight variables have gone out of scope, invoke the GC using the method below.
import gc
...
gc.collect()
I'm working on a 2-tier WPF/EF Code First application. I did a lot of googling but couldn't find a sample implementation of what I was looking for... I was hoping that someone on this forum could help me out. Here are the requirements:
On application startup
Open a DbContext
Cache the reference data in various maps/lists when the application starts
Close the context.
When the user opens a form
Open a DbContext (I'm using the Unit of Work pattern here)
Fetch a fresh copy of the entity from the context for editing.
Call SaveChanges() when the Save button is hit.
Close the context.
The problem manifests when I use an object from the cache to change a navigation property,
e.g. using a drop-down (backed by the cache, which was created using a different DbContext) to set the Department navigation property.
The unit of work either throws an exception saying the entity was loaded in another DbContext (when Department is a lazy-loaded dynamic proxy) or inserts a new row in the Department table.
I couldn't find even a single example where reference data was being cached... I can't believe that no one has come across this issue. Either I'm not looking in the right place or I'm not using the right keywords.
I hope this is doable using EF. I'd appreciate if you can share your experiences or post some references.
I'm kinda new to this, so I would like to avoid using too many frameworks and just stick to POCOs with the WPF/EF stack.
Try attaching your cached item (you would probably make a clone before attaching):
var existingUnicorn = GetMyExistingUnicorn();
using (var context = new UnicornsContext())
{
    // Attach the cached (detached) entity to the new context so it is
    // tracked as Unchanged rather than re-inserted.
    context.Unicorns.Attach(existingUnicorn);
    context.SaveChanges();
}
Refer to the Using DbContext... article.
You mention you are using WPF for this; in that case you don't necessarily have to open a new DbContext every time you want to interact with the domain layer. (Apologies if this goes against the UoW pattern that you are keen on using.)
Personally I have been using code-first development for a desktop application, and I have found that pooling the contexts (and therefore the connection) prevents this problem, and hasn't led to any problems thus far.
In principle, as soon as the application is launched, a main context object is opened for the main UI thread and stays open for the duration of the application lifetime. It is stored statically and is retrieved by any repository class when needed.
For multi-threading scenarios, any background threads are free to open up additional contexts and use them in Repositories to prevent any race conditions.
If you were to adopt this approach, you would find that as all repositories share the same context, there are no issues arising from object context tracking.
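A minimal sketch of this shared-context approach, assuming EF 5/6 Code First (the ShopContext, Customer and CustomerRepository names are hypothetical):

using System;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical Code First context shared by the main UI thread.
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class ContextProvider
{
    // Created lazily at first use and kept open for the application lifetime;
    // background threads should create their own contexts instead.
    private static readonly Lazy<ShopContext> _main =
        new Lazy<ShopContext>(() => new ShopContext());

    public static ShopContext Main
    {
        get { return _main.Value; }
    }
}

public class CustomerRepository
{
    public Customer Find(int id)
    {
        // Every repository uses the same context, so all entities are tracked
        // by a single change tracker and cross-context attach problems disappear.
        return ContextProvider.Main.Customers.Find(id);
    }
}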
I ended up defining an int foreign key property in addition to the navigation property.
In my application I only modify the int property and use the navigation property for displaying the details (read-only controls).
While this works, it makes the application a little fragile and sometimes inconsistent.
Although this blog claims that the FK and navigation properties are kept in sync by EF, I couldn't get it to work:
http://coding.abel.nu/2012/03/ef-code-first-navigation-properties-and-foreign-keys
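For illustration, a minimal sketch of this foreign-key-property approach (the Employee and Department types are hypothetical):

public class Department
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Scalar FK property that the UI edits (e.g. set from the drop-down's selected value).
    public int DepartmentId { get; set; }

    // Navigation property used only for read-only display.
    public virtual Department Department { get; set; }
}

// When saving, only the scalar FK is changed, so the cached Department
// instance (loaded by another context) is never attached or re-inserted:
// employee.DepartmentId = selectedDepartmentId;
// context.SaveChanges();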
Can anyone give a high-level description of what is going on in the Composite C1 core? In particular I am interested in how the plugin architecture works and what the core components of the system are, i.e. what happens in the architecture when a request arrives. The description doesn't have to be too verbose, just a list of steps and the classes involved.
Hopefully someone on the core development team will enlighten me... and maybe publish some more API documentation (hint hint, more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see:
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule intercepts all incoming requests and figures out if the requested path corresponds to a valid C1 page. If it does, the URL is rewritten to the C1 page handler ~/Renderers/Page.aspx.
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to build the page to be returned to the client. The actual code for the IHttpHandler is easy to inspect since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page ID and which language were requested, and whether we're in preview mode, which data scope we're in, etc.
OnInit
Now we're fetching the content from each content placeholder of our page and executing any functions it may contain. This is done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it calls the method ExecuteFunctions, which runs through the content and recursively resolves C1 function elements (<f:function />), executes them and replaces each element with the function's output. This is repeated until there are no more function elements in the content, in case functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControl etc., this is necessary for them to work correctly and to trigger the WebForms event lifecycle.
And that's basically it. Rendering of a requested page is very simple and very extensible. For instance, there is an extension that enables the use of MasterPages which simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return an MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the core APIs there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones are there to get the job done.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things working with the raw data instead of going through facades all the time. This is possible because most of C1's configuration is stored using its own data layer.
The Composite C1 core group has yet to validate, refactor and document all the APIs in the system, and hence operates with the concept of 'a public API' and what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and are a technique for abstracting logic from execution. Basically everything that either performs an action or returns some data/string/values is a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with it. Out of the box C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use the Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
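As a rough sketch of that flow (the function name below is made up, and the exact Execute overloads may vary between C1 versions):

using Composite.Functions;

// Look up a registered function by its full name (hypothetical name).
IFunction function = FunctionFacade.GetFunction("MySite.Text.HelloWorld");

// Execute it and get the return value as a plain .NET object;
// parameters, if any, are passed to Execute as .NET objects as well.
object result = FunctionFacade.Execute<object>(function);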
Read more about functions here: http://users.composite.net/C1/Functions.aspx and about using e.g. Razor to write functions here: http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system: querying, adding and removing them. A locale can be any CultureInfo object known to your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that match the CultureInfo your request is running in. Use this instead of hardcoding strings on your pages or in your templates.
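For example, a small hedged sketch (the provider name and string key are made up):

using Composite.Core.ResourceSystem;

// Resolve a localized string for the culture of the current request
// instead of hardcoding it in the template.
string welcome = StringResourceSystemFacade.GetString("MySite.FrontPage", "WelcomeText");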
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know about if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut your threads down properly. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static events on Composite.Data.DataEvents<T>. To attach to these events when the system starts up, use the ApplicationStartupAttribute attribute, as in the sketch below.
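A hedged sketch of both pieces together, assuming the conventional OnBeforeInitialize/OnInitialized startup methods (the handler class and the choice of IPage are just examples, and the exact shape of the event args may differ between versions):

using Composite.Core.Application;
using Composite.Data;
using Composite.Data.Types;

[ApplicationStartup]
public class DataEventRegistrator
{
    public static void OnBeforeInitialize()
    {
        // Nothing to do before the system initializes in this example.
    }

    public static void OnInitialized()
    {
        // Called by the C1 core once the system is up; wire data events here.
        DataEvents<IPage>.OnBeforeAdd += (sender, args) =>
        {
            // Inspect or adjust the data item before it is persisted
            // (the Data property name is assumed here).
            var page = (IPage)args.Data;
        };
    }
}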
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the data layer outside of a normal C1 page request. This could be a custom handler that just has to serve the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap the code that accesses data like this:
using (Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
    // Code that works with the C1 data layer goes here
}
For accessing and manipulating data it's recommended NOT to use the DataFacade class directly, but to wrap all code that gets, updates, deletes or adds data like this:
using (var data = new DataConnection())
{
    // Do things with data
}
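For instance, a small sketch of reading pages through the data layer (IPage is C1's built-in page data type; the query itself is just an example):

using System.Linq;
using Composite.Data;
using Composite.Data.Types;

using (var data = new DataConnection())
{
    // Query the C1 data layer with LINQ; Get<T>() returns an IQueryable.
    var pageTitles = data.Get<IPage>()
                         .Select(p => p.Title)
                         .ToList();
}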
IO
When working with files and directories it's important to use the C1 equivalents of .NET's File and Directory classes: Composite.Core.IO.C1File and Composite.Core.IO.C1Directory. This is because C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as you do on a normal Windows server. By using C1's File and Directory wrappers you can be sure that the code you write will also run on Azure.
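A small sketch, assuming the wrappers mirror the familiar static methods of System.IO.File and System.IO.Directory (the folder path below is a placeholder):

using System;
using System.IO;
using Composite.Core.IO;

// A physical folder under the website root (placeholder path).
string folder = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data", "MyModule");

// Use the C1 wrappers instead of System.IO.File / System.IO.Directory
// so the same code also works when the site is hosted on Azure.
if (!C1Directory.Exists(folder))
{
    C1Directory.CreateDirectory(folder);
}
C1File.WriteAllText(Path.Combine(folder, "status.txt"), "ok");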
C1 Console
The Console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade, or using Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use the Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refresh a tree, set focus on a specific element, open a new tab, etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a very fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation so that e.g. a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you are familiar with this, you should feel right at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions for the Console. Much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the Console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localization strings for use with Resources, users and permissions, URL formatters etc. are providers as well.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade (Get, Update, Delete, Add etc.) are registered. Every provider informs the system which interfaces it can interact with, and C1 makes sure to route all requests for specific interfaces to their respective data providers.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console for the first time are configured here; no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menu items to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content, like versioning, extranet security, custom cut/paste, and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here.
And then we have a few sections for pure configuration, like caching or what to parallelize, but they are not as interesting as the providers.
Defining and using sections
Sections in composite.config and other related .config files are completely standard .NET configuration and obey the rules thereof. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Library for handling most of these common things like configuration, logging and validation, and therefore all of Composite's sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically wire things up for us.
If you want to access the configuration for a particular section, you call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type, and you're good to go.
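A hedged sketch of that call (the section name is hypothetical; in real code you would cast to the ConfigurationSection-derived class registered for the section instead of the base class used here):

using System.Configuration;
using Composite.Core.Configuration;

// Read a named section from composite.config.
ConfigurationSection section = ConfigurationServices
    .ConfigurationSource
    .GetSection("MyModule.Configuration");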
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It's a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler implementation as its argument; that implementation is responsible for getting these custom attributes and values from the element in the .config file and emitting a usable object from them. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the Console, while others show how to make function providers and trees, and how to hook commands onto existing elements. There are even examples of how to manipulate the SOAP web service communication going on between client and server, so you can make it do things the way you want. And the list goes on.