JSF application : should I use micro-services and how? - angularjs

I have a web application developed with JSF 2 and PrimeFaces. The project has been frozen for months, but it's quite advanced, and the whole application runs inside the same container under GlassFish, so it's a monolith.
My application has a user interface and its purpose is to let users organize URLs to tutorials (of any kind) as cards, with tags for classification, into folders. Each user has their own tree; they can search inside other users' trees, create a link to a file in their own tree, copy an entire folder, reorganize it, and so on.
Nowadays we hear a lot about microservices, Spring Boot, AngularJS, React, etc. I like developing with JSF, it's a great framework, but I'm asking myself whether I should refactor my application, at least the necessary parts, into microservices, and whether JSF is appropriate for that or I should use other tools.
What I like with JSF, for example, is how easy it is to create views, its component approach, and how it handles the full cycle of a request.
For example, with a simple folder-creation form:
I have to choose the parent folder, so I bind a search component to a backing bean that searches my DB indirectly through a DAO (in my app, an EJB using JPA). That happens at the "Invoke Application" phase and refreshes my form list with Ajax at the end. When I submit the form, I can also bind a converter to the search component to retrieve a Folder object directly; the converter also uses a DAO to retrieve the object that I need at the "Invoke Application" phase to finish the job.
I also use validators to control different attributes of a new folder; usually I declare them inside my entity classes (Folder, User, ...) with annotations like @NotNull. Before I save the folder in my DB, I also check the user's rights to see whether they can write inside the parent folder, and so on. I do that inside the backing bean, so at the "Invoke Application" phase, and return a faces message if anything goes wrong.
When I read about microservices, I see that you can call them directly from a form using JSON for communication, so it seems quite different. For example, if I have a microservice for the CRUD operations on my folders, are the validators and the converters part of that service, or are they stand-alone services? And what about the security checks? That kind of architecture is quite mysterious to me.
PS: English is not my mother tongue, so please be indulgent :)

AngularJS is pretty ancient, man :)
You have to look at the pain points to identify ways to tear down your monolith. Monolith pains are usually a slow, painful dev cycle and difficult manual test phases. If you did the entire Arquillian thing and have full continuous integration with single-button deployments, you've slain the beast the hard way; not many have braved that route. But if you're looking at mounting feature creep with code freezes and manual test cycles, then yeah, you kind of want to pull some of those features out into a service you can redeploy very quickly.

Related

Custom Angular setup for MIT library/project

Hi, I need your help trying to figure something out.
First, a little background: I used to code in Django. It was fast to code and good, but times change and Node is taking over most new apps, so I moved to Express a couple of years ago. However, I still miss a lot of the Django functionality, so I started coding a little library to help me with common tasks. It kept growing until the current point, where you have a core library and pluggable "apps" for recurring tasks like notifications, auth and more, or at least that's the plan.
Right now an app works something like this:
./controllers (All renders)
./endpoints (RESTful API endpoints)
./models (Static and DB models)
./public (Public files)
./style (SCSS styles with Bootstrap injected)
./views (Default templates, distributed with the app as examples, loaded by default)
../../views (Custom views that override the default ones from the app)
On start, JSloth compiles everything for you and runs the server (hot reload included).
Now, that works fine in a static multipage environment, but I would love to use Angular for it. Changes will be needed, but I want you guys to lead me in the right direction.
So far I have 2 ideas:
1- Split routes/HTML apps and RESTful endpoints, then basically use a standard setup for that kind of app with webpack and Angular SSR.
The big downside is that you can't provide an out-of-the-box frontend implementation for apps.
2- Figure out a way to provide an Angular app for each JSloth app; that way, if you install/copy the auth app, you get user lists, interfaces to update your profile, etc.
I'm thinking that may be a problem performance-wise, because that way you will have a lot of Angular apps. Am I wrong?
I need an easy way to let the final user share at least footers and headers, maybe even styles or SCSS variables for colors, so that it doesn't look like a huge change.
Do you have any other option? Any better idea?
Thank you so much for the help, this is the repository: https://github.com/chrissmejia/JSloth
Edit 1: Forgot to add the models folder

Where to put code for model entity classes in GWT using GAE

Could someone explain how best to organize the model entity classes in GWT for use on App Engine?
I have been using this ebook as an example to follow: http://code.google.com/p/gwt-gae-book/wiki/StoringData, but I am unsure where to add this code. I do not need help writing the classes; I just want to know whether this code goes in the client or the server.
In my application I have one module that handles the UI and that is it so far. My next step is implementing the data functionality features.
I also plan on using twig and appwrench, if possible, in developing my model if that helps.
Thanks in advance for any help in getting this setup.
I'm assuming you are new to programming, hence a detailed explanation:
Everything except the UI and the RPC calls to the server will reside in the "server" package. Within the server-side code, you would further want to create modules that interact with each other: the layer that receives calls from the client and processes them, another layer that contains the core business logic, and the next layer that interacts with the DB, where your entities/model will reside.
You can look at an example for your current problem that separates the various layers of code. The only difference is that the code uses JSP for its UI.

Prism 4 WPF App - Solution Architecture with MVVM, Repositories and Unit of Work patterns implemented

I'm new to .Net and trying to learn things. I'm trying to develop a Prism 4 WPF app with:
Visual Studio CSharp 2010 Express Edition,
Prism v4,
Unity as IoC,
SQL Server CE as data store.
I've studied a lot(?) and been influenced by this and this among others, and decided to implement the MVVM, Repository and Unit of Work patterns. This application will be a desktop application with a single user (me :-)
So, I've created a solution with the following projects:
Shell (Application Layout and startup logic)
Common (Application Infrastructure and Workflow Logic)
BusinessModuleA (Views and ViewModels)
BusinessModuleA.Model (Business Entities - POCO)
BusinessModuleA.Data (Repositories, Data Access (EF?) )
BusinessModuleB (Views and ViewModels)
BusinessModuleB.Model (Business Entities - POCO)
BusinessModuleB.Data (Repositories, Data Access (EF?) )
My questions are:
1. Which projects should reference which projects?
2. If I implement Repositories in 'BusinessModuleX.Data', which is obvious, where should I define the IRepositories?
3. Where should I define IUnitOfWork and where should I implement UnitOfWork?
4. Is it OK if I consume UnitOfWork and Repositories in my ViewModels? Instinct says it is bad design.
5. If (4) above is bad, then the ViewModel should get data via a Service Layer (another project?). Then how can we track changes to the entities so as to call the relevant CRUD methods on those objects at the Service Layer?
6. Is any of this making any sense, or am I missing the big picture?
OK, maybe I did not make myself clear about what exactly I wanted in my first post; there are not many answers coming up. I'm still looking for answers because, while what @Rachel suggested may be effective for the immediate requirements, I want to be careful not to paint myself into a corner. I have an Access DB that I developed for my personal use at the office, which became kind of a success and is now being used by 50+ users and growing. Maintaining and modifying the Access code base was fairly simple at the beginning, but as the app evolved it began to fall apart. That's why I have chosen to rewrite everything in .Net/WPF/Prism and want to make sure that I get the basic design right.
Please discuss.
Meanwhile, I came up with this...
First off, I would simplify your project list a bit to just Shell, Common, ModuleA, and ModuleB. Inside each project I'd have sub-folders to organize everything; for example, ModuleA might be separated into folders for Views, ViewModels, and Models.
I would put all interfaces and global shared objects such as IUnitOfWork in your Common project, since it will be used by all modules.
How you implement IUnitOfWork and your Repositories probably depends on what your Modules are.
If your entire application links to one database, or shares database objects, then I would probably create two more projects for the DataAccessLayer. One would contain public interfaces/classes that can be used by your modules, and the other would contain the actual implementation of the Data Access Layer, such as Entity Framework.
If each Module has its own database, or its own set of objects in the database (i.e. Customer objects don't exist unless you have the Customer Module installed), then I would implement IUnitOfWork in the modules and have them handle their own data access. I would probably still have some generic interfaces in the Common library for the modules to build from, though.
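To make that concrete, here is a minimal C# sketch of the kind of shared contracts that could live in Common (the interface members are illustrative, not a prescribed API):

using System;
using System.Collections.Generic;

namespace Common
{
    // Generic repository contract shared by all modules; the concrete
    // implementations live in the BusinessModuleX.Data projects.
    public interface IRepository<TEntity> where TEntity : class
    {
        TEntity GetById(object id);
        IEnumerable<TEntity> GetAll();
        void Add(TEntity entity);
        void Remove(TEntity entity);
    }

    // Unit-of-work contract; an EF-based implementation would commit all pending changes at once.
    public interface IUnitOfWork : IDisposable
    {
        void Commit();
    }
}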
Ideally, all your modules and your Shell can access the Common library. Modules should not access each other unless they build on them. For example, a Customer Statistics module that builds on the base Customer module should access the Customer module.
As for whether your ViewModels should consume a UnitOfWork or a Repository, I would have them use a Repository only. Ideally your Repository should be like a black box: ViewModels can get/save data using the Repository, but should have no idea how it's implemented. Repositories can get the data from a service, Entity Framework, direct data access, or wherever, and the ViewModel won't care.
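As a sketch of that black-box idea, building on the hypothetical IRepository<T> above (Customer is a stand-in POCO from one of the Model projects):

using System.Collections.Generic;
using Common;

namespace BusinessModuleA
{
    // Stand-in entity; in the real solution this would live in BusinessModuleA.Model.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerListViewModel
    {
        private readonly IRepository<Customer> _customers;

        // The concrete repository is resolved by Unity; the ViewModel never sees EF or SQL.
        public CustomerListViewModel(IRepository<Customer> customers)
        {
            _customers = customers;
        }

        public IEnumerable<Customer> LoadCustomers()
        {
            return _customers.GetAll();
        }
    }
}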
I'm no expert on design architecture, however that's how I'd build it :)
I would highly recommend getting the Introduction to PRISM and the Repository pattern videos inside the Design Patterns Library training. They are great ones. Hope it helps.

How does the Composite C1 architecture work?

Can anyone give a high level description of what is going on in the Composite C1 core? In particular I am interested in knowing how the plugin architecture works and what the core components are of the system i.e. when a request arrives what is happening in the architecture. The description doesn't have to be too verbose just a list of steps and the classes involved.
Hopefully one of the core development team would enlighten me... and maybe publish some more API (hint hint more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see:
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule will intercept all incoming requests and figure out if the requested path corresponds to a valid C1 page. If it does, the URL will be rewritten to the C1 page handler ~/Renderers/Page.aspx.
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to make up the page to be returned to the client. The actual code of the IHttpHandler is easy to inspect, since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page ID and which language were requested, and looking at whether we're in preview mode or not, which data scope we're in, etc.
OnInit
Now we're fetching the content from each Content Placeholder of our page and executing the functions it may contain. This is done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it calls the method ExecuteFunctions, which runs through the content and recursively resolves C1 function elements (<f:function />), executes them and replaces each element with the function's output. This is repeated until there are no more function elements in the content, in case functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControls etc., this is necessary for them to work correctly and to trigger the WebForms event lifecycle.
And that's basically it. Rendering of a requested page is very simple and very extensible. For instance, there is an extension that enables the use of MasterPages which simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return an MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the more core APIs there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones to get the job done are there.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things by working with the raw data instead of going through facades all the time. This is possible because most of C1's configuration is stored using its own data layer.
The Composite C1 core group has yet to validate/refactor and document all the APIs in the system, and hence operates with the concept of a 'public API' and of what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and are a technique for abstracting logic from execution. Basically everything that either performs an action or returns some data/string/value can be a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with them. Out of the box, C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or for having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
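As a small sketch of that flow (the function name is a placeholder and the exact Execute overloads may differ between C1 versions, so treat this as illustrative rather than authoritative):

using System;
using Composite.Functions;

public static class FunctionExample
{
    public static DateTime GetServerTime()
    {
        // Look up a registered function by its full name (placeholder name here).
        IFunction function = FunctionFacade.GetFunction("Composite.Utils.Date.Now");

        // Execute it and cast the return value; any parameters are passed as real .NET objects.
        return FunctionFacade.Execute<DateTime>(function);
    }
}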
Read more about functions here: http://users.composite.net/C1/Functions.aspx, and about using e.g. Razor to make functions here: http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system: querying, adding and removing them. Locales can be any CultureInfo object known by your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that match the CultureInfo your request is running in. Use it instead of hardcoding strings on your pages or in your templates.
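Something along these lines, where the provider and key names are made up and the exact GetString overload may differ by version:

using Composite.Core.ResourceSystem;

public static class TextExample
{
    public static string WelcomeText()
    {
        // Returns the string registered for the culture the current request is running in.
        return StringResourceSystemFacade.GetString("MyModule.Texts", "Welcome");
    }
}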
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know about if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut your threads down in a proper manner. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static events on Composite.Data.DataEvents<T>. To attach to these events when the system starts up, use the ApplicationStartupAttribute attribute.
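A hedged sketch of wiring that up at startup; the attribute and event follow the pattern described here, but the exact event-args members may vary between versions:

using Composite.Core.Application;
using Composite.Data;
using Composite.Data.Types;

[ApplicationStartup]
public class MyDataEventRegistrator
{
    public static void OnBeforeInitialize()
    {
    }

    public static void OnInitialized()
    {
        // React every time a page has been added through the data layer.
        DataEvents<IPage>.OnAfterAdd += (sender, args) =>
        {
            var page = (IPage)args.Data;
            // e.g. invalidate a custom cache or write an audit entry here.
        };
    }
}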
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the Data Layer outside of a corresponding C1 page request. This could be a custom handler that just has to serve the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap the code that accesses the data like this:
using(Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
// Code that works with C1 data layer goes here
}
For accessing and manipulating data it's recommended NOT to use the DataFacade class, but to wrap all code that gets, updates, deletes or adds data like this:
using(var data = new DataConnection())
{
// Do things with data
}
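For example, a minimal read against the built-in IPage type could look like this (illustrative only):

using System.Collections.Generic;
using System.Linq;
using Composite.Data;
using Composite.Data.Types;

public static class PageTitles
{
    public static List<string> GetAll()
    {
        using (var data = new DataConnection())
        {
            // Get<T>() exposes the data type as an IQueryable you can filter with LINQ.
            return data.Get<IPage>().Select(p => p.Title).ToList();
        }
    }
}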
IO
When working with files and directories it's important to use the C1 equivalents of .NET's File and Directory classes, Composite.Core.IO.C1File and Composite.Core.IO.C1Directory. This is because C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as on a normal Windows Server. By using C1's File and Directory wrappers you can be sure that the code you write will also be able to run on Azure.
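The wrappers are meant as drop-in replacements, so usage looks just like System.IO; the paths below are only illustrative:

using Composite.Core.IO;

public static class IoExample
{
    public static void WriteLog(string folder, string message)
    {
        // Same calls you would make on System.IO.Directory/File, just via the C1 wrappers.
        if (!C1Directory.Exists(folder))
        {
            C1Directory.CreateDirectory(folder);
        }

        C1File.WriteAllText(folder + "\\log.txt", message);
    }
}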
C1 Console
The Console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade or Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refresh a tree, set focus on a specific element, open a new tab, etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a very fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation, so that e.g. a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you are familiar with that, you should feel right at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions for the Console; much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the Console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localization strings for use with the Resources, users and permissions, URL formatters etc. are providers as well.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade (Get, Update, Delete, Add, etc.) are registered. Each provider informs the system which interfaces it can interact with, and C1 makes sure to route all requests for specific interfaces to their respective data providers.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console for the first time are configured here; no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menu items to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content, like versioning, extranet security, custom cut/paste, and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 Console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open, but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here
And then we have a few sections for pure configuration, like caching or what to parallelize, but they are not as interesting as the providers.
Defining and using sections
Sections in composite.config and the other related .config files are completely standard .NET configuration and obey its rules. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Library for handling most of these common things like configuration, logging and validation, and therefore all of Composite's sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically wire things up for us.
If you want to access the configuration for a particular section, you call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type, and you're good to go.
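A hedged sketch of both halves; MyProviderSettings, the section name and the connectionString attribute are all made up for illustration:

using System.Configuration;
using Composite.Core.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;

// The section type referenced from composite.config; it inherits the Enterprise Library base class described above.
public class MyProviderSettings : SerializableConfigurationSection
{
    [ConfigurationProperty("connectionString")]
    public string ConnectionString
    {
        get { return (string)base["connectionString"]; }
        set { base["connectionString"] = value; }
    }
}

public static class MyProviderSettingsReader
{
    public static MyProviderSettings Load()
    {
        // The string must match the section name declared in composite.config.
        return (MyProviderSettings)ConfigurationServices.ConfigurationSource.GetSection("MyModule.ProviderConfiguration");
    }
}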
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It is a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler implementation as an argument; that assembler is responsible for reading these custom attributes and values from the element in the .config file and emitting a usable object from them. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the Console, while others show how to make Function Providers and Trees and how to hook commands onto existing elements. There are even examples of how to manipulate the SOAP web-service communication going on between client and server so you can make it do things the way you want. And the list goes on.

Using a Reference Table Service in a WPF Application

For past projects (the last few have been web projects using ASP.NET MVC) we created a service that caches our reference tables (as required), to be used primarily for dropdown lists.
Now I'm working on a desktop application: an upgrade from VB6/Sybase to VB.NET/SQL Server.
I'm trying out WPF.
I started down the same path, building up my DAL: one entity for each reference table.
I'm at the stage now where I want to set up the business layer (some reference tables can be edited),
and I'm not sure if I should follow the same process, which is to use a ReferenceTableService to "manage" the reference tables (it interacts with the DAL and the controller).
This will be an application that sits on a share and is run by multiple users.
What's the best way to deal with the reference tables? Caching them doesn't seem to be an option. Should I simply load them as each user opens a new form in the application? Perhaps using a "ReferenceTableService"?
In this case, the reference table service is a thin layer in the application, not a process running elsewhere.
I haven't done much WPF (it would be interesting to see what the WPF gurus think), but I think your existing approach is sound and I don't see why you should deviate from it.
Loading up on app start sounds reasonable; you just have to think about the expected lifetime of a user session versus the expected frequency of changes to the reference data.
Caching: if the data comes from a central service, you could always introduce caching there.
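For what it's worth, here is a rough sketch (in C# rather than VB.NET, and with placeholder names) of such a thin service with a simple age-based refresh, so stale lookups get reloaded without every form hitting the database:

using System;
using System.Collections.Generic;

// Placeholder DTO and DAL contract.
public class LookupItem
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public interface IReferenceTableDal
{
    IList<LookupItem> Load(string tableName);
}

// Thin in-process service: caches each reference table and reloads it once it is older than maxAge.
public class ReferenceTableService
{
    private readonly IReferenceTableDal _dal;
    private readonly TimeSpan _maxAge;
    private readonly Dictionary<string, Tuple<DateTime, IList<LookupItem>>> _cache =
        new Dictionary<string, Tuple<DateTime, IList<LookupItem>>>();

    public ReferenceTableService(IReferenceTableDal dal, TimeSpan maxAge)
    {
        _dal = dal;
        _maxAge = maxAge;
    }

    public IList<LookupItem> Get(string tableName)
    {
        Tuple<DateTime, IList<LookupItem>> entry;
        bool stale = !_cache.TryGetValue(tableName, out entry) ||
                     DateTime.UtcNow - entry.Item1 > _maxAge;

        if (stale)
        {
            // Reload from the DAL when the table is missing from the cache or too old.
            entry = Tuple.Create(DateTime.UtcNow, _dal.Load(tableName));
            _cache[tableName] = entry;
        }

        return entry.Item2;
    }
}

How long maxAge should be comes straight back to that session-lifetime versus change-frequency trade-off.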
