How can I migrate a monolithic MEAN stack application to microservices? - angularjs

We developed a monolithic MEAN stack application to manage Employees and Inventory in the office. How can we split it so that Employees is one service and Inventory is another?

Your application should be decomposed into smaller, self-sufficient units to get the full benefit of a microservice architecture.
So you need to create two separate applications: one for Employee management and another for Inventory management.
Before decomposing your application, make sure you actually need to. Decomposing an application brings a number of challenges that were not there in the monolith, such as distributed routing between independent components, a centralized security mechanism, and inter-communication between the different microservices.
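For example, a minimal sketch of what the Employee service could look like on its own, assuming Node/Express; the port and route names are just placeholders, and the Inventory service would mirror it with its own routes and its own database:

    // employee-service.ts - one self-sufficient unit with its own routes and datastore
    import express from "express";

    const app = express();
    app.use(express.json());

    // The Employee service exposes only employee operations; Inventory lives in its own app.
    app.get("/employees", async (_req, res) => {
      // In a real service this would read from the Employee service's own database.
      res.json([{ id: "e1", name: "Alice" }]);
    });

    app.post("/employees", async (req, res) => {
      // ...validate and persist to this service's own datastore...
      res.status(201).json(req.body);
    });

    app.listen(3001, () => console.log("Employee service listening on :3001"));

A gateway (or the front end itself) would then call the two services over HTTP instead of one monolithic backend.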

Related

Architecture: one or multiple databases for sub-customers (web) app

I've built a WinForms application that I'm currently rebuilding into an ASP.NET MVC application using Web API etc. Maybe an app will be added later on.
Assume that I will provide these applications to a few customers.
My applications are made for customer accounting.
So all of my customers will manage their own customers within the applications I provide.
That brings me to my question: should I work with one big database for all my customers, or should I use a separate database for each of my customers? I'd like to ask the same for web app instances, APIs, etc.
Technically I think both options are possible. If it's just a matter of preference, all input is appreciated.
Some pros and cons I could think of:
One database:
Easy to set up/maintain
Install one update for all of my customers
No possibility to restore the DB for just one customer
Not flexible in terms of resource spreading
Performance: this DB can get really large
Multiple databases:
Performance: databases are smaller and can be spread across multiple servers
Easy to restore data if a customer made a 'huge mistake'
The ability to provide customer-specific needs (not needed atm)
Harder to set up/maintain; every instance needs to be updated separately
A kind of gateway/routing thing is needed to route users to the right database/app
I would like to know how the 'big companies' approach this.
You seem to be talking about database multi-tenancy, and you are right about the pros and cons.
The answer to this depends a lot on the kind of application you are building and the kind of customers it will have.
I would go with a multi-tenant (single DB, multiple tenants) database if:
Your application is a multi-tenant application.
Your users do not need to store their own data backups.
Your DB schema will not change for each customer (this is implied in multi-tenant applications anyway).
Your tenants/customers will not have a huge amount of individual data.
Your customers don't have government imposed data isolation laws they need to comply with (EU data in EU, US data in US etc.).
And for individual databases pretty much the inverse of all those points.
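As a rough sketch of the single-DB multi-tenant option, assuming an Express API and a made-up X-Tenant-Id header: every row carries a tenantId and every query is scoped by it, so one database serves all customers without them seeing each other's data.

    // Hypothetical multi-tenant scoping; the header, field and route names are assumptions.
    import express from "express";

    interface Invoice { tenantId: string; id: string; amount: number; }

    const invoices: Invoice[] = []; // stand-in for a real table that has a tenantId column

    const app = express();

    // Resolve the tenant once per request (e.g. from a header or subdomain).
    app.use((req, _res, next) => {
      (req as any).tenantId = req.header("X-Tenant-Id") ?? "unknown";
      next();
    });

    // Every read and write is filtered by tenantId.
    app.get("/invoices", (req, res) => {
      const tenantId = (req as any).tenantId as string;
      res.json(invoices.filter(i => i.tenantId === tenantId));
    });

    app.listen(3000);

The per-customer-database option would instead keep a map of tenant to connection string and pick the connection in that same middleware, which is where the gateway/routing overhead comes from.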

Where should I access my database?

I'm curious how you would handle the following database access scenario.
Suppose you have a computer that hosts your database as part of its server work, and multiple client PCs running client-side software that needs to get information from this database.
AFAIK there are two ways to do this:
each client application connects directly to the database
each client application connects to server-side software which connects to the database, acting as a data access layer
So what I'd like to know is:
What are the pros and cons of each solution?
And are there other solutions out there which might be "better" to do this work?
I would DEFINITELY go with suggestion number 2. No client application should talk to a datastore without a broker, i.e.:
ClientApp -> WebApi -> DatabaseBroker.class -> MySQL
This is the sound way to do it, as you separate concerns and define an organized path to the datastore.
Some benefits are:
decouple the client from the database
you can centralize all upgrades, additions and operability in one location (DatabaseBroker.class) for all clients
it's very scalable
it's safe in regards to business logic
Think of it like this, with a layman's example:
Marines are not allowed to bring their own weapons to battle (client apps talking directly to the DB). Instead, they check out a weapon from the armory (the API). The armory has control over all weapons, repairs and upgrades (data from the database) and determines who gets what.
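To make the broker idea concrete, here is a minimal sketch assuming Node/Express and the mysql2 driver; the class name, table and endpoint are illustrative only:

    // Option 2 sketched out: clients call the API; only the broker touches MySQL.
    import express from "express";
    import { createPool, Pool } from "mysql2/promise";

    class DatabaseBroker {
      constructor(private pool: Pool) {}

      // All datastore access is funnelled through methods like this one,
      // so validation, upgrades and logging live in a single place.
      async getCustomer(id: number) {
        const [rows] = await this.pool.query(
          "SELECT id, name FROM customers WHERE id = ?", [id]);
        return (rows as any[])[0] ?? null;
      }
    }

    const broker = new DatabaseBroker(createPool({ host: "localhost", database: "shop" }));
    const api = express();

    api.get("/customers/:id", async (req, res) => {
      const customer = await broker.getCustomer(Number(req.params.id));
      customer ? res.json(customer) : res.sendStatus(404);
    });

    api.listen(8080); // clients talk to :8080, never to MySQL directly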
What you have described sounds like two different kinds of multi-tier architecture.
The first point matches a two-tier architecture and the second one could be a three-tier.
AFAIK there are two ways to do this
You can divide your application into several physical tiers, so you will find more arrangements suited to this kind of architecture (n-tier) than the two described above.
What are the pros and cons of each solution?
Usually the motivation for splitting your application into tiers is to achieve some kind of non-functional requirement (maintainability, availability, security, etc.). The problem is that when you add extra tiers you also add complexity, e.g. your app components need to communicate with each other, and this is more difficult when they are distributed among several machines.
And are there other solutions out there which might be "better" to do this work?
I'm not sure what you mean by "work" here, but notice that you don't need to add extra tiers to access a database. If you have a desktop application installed on a few machines, a classical client/server (two-tier) model should be enough. However, a web-based application needs an extra tier for interacting with the browser. In this case the database access is not the motivation for adding that extra tier.

Web application vs. web services vs. classic application

Please, I need help.
I have a project in which I need an application that communicates with a local DB server and simultaneously with a central remote DB server to complete some tasks (read stock quotas from the local server, create an order, and then write the order to the central orders DB, ...).
So I don't know which architecture and technology to use for this.
A web application, .NET WinForms client applications on each computer, or a web-services-based central application with client applications?
What are the general differences between these approaches?
Thanks
If you don't want to expose your database directly to the clients, I'd recommend having a web service layer in between. Depending on the sensitivity of your data and the security level of your network, I'd recommend either a web service approach (where you can manage the encryption of data yourself, without the need for expensive SSL certificates) or a web interface (which might be easier to construct, but with limitations in terms of security).
I agree with Tomas that a web service layer might be good. However, when it comes to choosing between WebForms or WinForms, I don't think your question includes enough information to make the choice.
I'd say that if you want a powerful, feature-rich user interface and want to make development easy, WinForms is probably the way to go. But if you need it to be usable from a varied array of clients and want easier maintenance and deployment, a web app might be best.
First, focus on the exact relationship between these databases. What does "local" mean? Right there on the user's desktop? Shared between all the users in their office? Presumably the local quotes (you do mean stock quotes and not quotas?) could potentially be a little out of date relative to the central order server's view of the world. Does that matter? I place an order for 100 X at price 78.34; the real price may be different. What is the intended behaviour?
My guess is that there is at least some business logic, and so we need to decide where that runs. One (thick client) approach is to put that logic on the desktop; the desktop app then might write directly to the central DB. I don't tend to do this, for several reasons:
Every client desktop gets a database connection. Scaling is not good; eventually the database gets unhappy when the number of users gets very large.
If we need a slightly different app, perhaps exposed to a different set of users via the Web or whatever, we end up reproducing that business logic.
An alternative approach (thin or browser-based) keeps the UI on the desktop but puts the logic on the server. The client can then invoke some kind of service, as sketched below. There are lots of possible ways of doing that; a simple Web Service or REST service will do the job. I hope it's clear that this service-based approach addresses my two points above.
By symmetry I would treat the local databases in the same way: wrap them in services. However, it's possible that some more complex relationship between the databases exists, in which case you might need the local service layer to interact with the central service layer.
I'm touting the general principle of Don't Repeat Yourself: implement each piece of business logic once.
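As a small sketch of that thin-client idea, assuming a REST service in Node/Express with made-up route and field names: the desktop UI posts an order to one endpoint, and the business rules and the only database connections live on the server.

    // Order service: business logic runs here once, for every kind of client.
    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/orders", (req, res) => {
      const { symbol, quantity } = req.body ?? {};

      // Validate the order against current data; the real price check would read
      // the central DB here - clients never hold their own connections.
      if (!symbol || !(quantity > 0)) {
        return res.status(400).json({ error: "invalid order" });
      }

      // ...persist the order to the central orders DB...
      res.status(201).json({ symbol, quantity, status: "accepted" });
    });

    app.listen(8080);

If the desktop app, a web UI and anything else all call this same endpoint, the business logic exists exactly once.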

Any recommendations for an effective way to sync data from one database to other apps' databases?

Here's my problem. I built a web app, and naturally kept the data in a database which describes that app's domain. Afterwards, I built another web app for the same organization, and used a separate database to describe that app's domain and store its data... and naturally a couple more projects came up, and for each app I've isolated its data in a single database. Development-wise, I think it's OK, as I can maintain changes to the data structure and data in the app's own database.
Considering these apps belong to the same organization, there tends to be plenty of data replicated between them, like department names, job titles, shop names, etc. Most of these tables hold the same data, but are not exactly the same in each database, and are not always used by all of the apps. Changes to this data, though, need to be applied to all the apps (sometimes in different ways), creating a growing management "hassle".
So I've been thinking of a way to get some synchronization between the data. I want easier management - update in one app (or a central app) and update all the databases as needed by each app - and also a better way to share data between apps (like maybe mashing up data from different apps in a new app to allow specific analysis). Most of the data I'm referring to is used as constraints more than being a core domain concept, describing the organization rather than describing a particular domain.
I'm looking for opinions on some ways to get this done.
My first idea was to grab common data structures, like the department names table I mentioned, and stick them in a core database. Any updates to the data would be done in this database, through a dedicated web app, and I'd apply some sort of Observer or Publisher/Subscriber pattern for these changes - on a change, the app would notify observing apps (through their dedicated web services) that the change occurred and allow each app to grab the new data and use it as it needs. GUIDs could be used as a reference to identify the same data throughout the apps. Also, I could build web services for read and search operations that don't need to be in a specific app's database, but could be useful to it.
A second idea would be that each app manages its own data, and the apps could observe one another. A change in one could notify the others that share the same data structure that the change occurred. I could still use GUIDs and even build services on any of the apps. I think this would also be less excessive in terms of duplication of data, but might be harder to manage, as each app would eventually be coupled to other apps, and I would somehow have to distribute responsibilities as to which app controls what information.
I'm really curious whether something of this kind of data distribution and syncing would work and even be recommended. Opinions and other ideas are more than welcome!
What you describe here is a typical case for a "Master Data Management" system. EAI vendors (Oracle, TIBCO, IBM) offer such products. They resemble your first solution, being centralised databases with synchronization processes, detecting changes in external data sources, grabbing the changes and synchronizing data out to other external databases. They also provide a user interface to change master data directly.
MDM software is expensive, but you can implement a custom solution which will be - at least initially - cheaper than purchasing one. Both of your solutions make technical sense, but there is a difference in their manageability.
The first one is better, if you can dedicate a responsible person/organization to take care of it and the business owners of your services can agree on making changes via this new centralised system.
The second solution shares the responsibility between the service owners. The hard task here is to identify the owner of each type of information (business object).
I cannot advise a solution without a deeper knowledge of your systems and organizations, but I hope I could give some ideas.
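As an illustration of the first idea (the centralised master-data approach), here is a rough sketch assuming Node 18+ and made-up webhook URLs: the core app persists the change, then notifies every subscribed app, and the GUID identifies the record across all the databases.

    // Central master-data app notifying observer apps of a change (hypothetical names).
    import { randomUUID } from "node:crypto";

    interface Department { guid: string; name: string; }

    // Each app registers the webhook it wants called when shared data changes.
    const subscribers: string[] = [
      "https://app-one.example/hooks/master-data",
      "https://app-two.example/hooks/master-data",
    ];

    async function updateDepartment(dept: Department, newName: string) {
      dept.name = newName;
      // ...persist the change to the core database here...

      // Notify every observing app; each one re-reads or applies the change as it needs.
      await Promise.allSettled(
        subscribers.map(url =>
          fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ type: "department.updated", guid: dept.guid, name: newName }),
          })
        )
      );
    }

    // The GUID follows the record through every app's own database.
    updateDepartment({ guid: randomUUID(), name: "Sales" }, "Sales & Marketing");

The second idea looks much the same, except each app keeps its own subscriber list for the data it owns.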

Physical middle-tier separation for Windows Forms apps

I've been designing quite a few Windows Forms applications lately (data-entry apps, office integration, etc), and although I've always designed with logical tiers in mind, the "middle tier" business logic layer has always been hosted on the desktop PC (i.e. physical client-server).
As I approach more complex applications, I find myself yearning for a physical middle tier instead, with the desktop client passing requests back to an application server to process business logic (which may change regularly) and interfaces. It feels like I'm missing out on factors such as scalability and maintainability that are more native to Web apps.
I'm curious to know how far other WinForms developers have taken their middle-tier separation:
How much processing (if any) do you perform on a middle-tier server?
What communication method are you using - WCF, remoting, web services, etc?
How much is performance a factor, and how often do you roundtrip to the server?
Is there a benefit in moving business logic onto a separate tier, or is it more practical to host components locally on the PC (and just make sure you have a good deployment model to push out regular releases as business rules change)?
Alternatively, should I be guiding customers away from WinForms completely if these factors are involved? With alternatives such as Silverlight and even ASP.NET w/ AJAX, the reasons to choose a WinForms client seem to be shrinking.
What is important to keep in mind is that there is a trade-off between ease of development and all of the scalability benefits etc. of a separate middle tier. What I mean by this is that you have to refresh interface mappings etc. in your code, you have to deploy a middle tier somewhere for your testers to use, which needs to be refreshed, and so on. Furthermore, if you are lazy like me and pass your Entity Framework objects around directly, you cannot serialise them across a middle tier, so you then need to create DTOs for all of your operations.
Some of this overhead can be handled by a decent build system, but that also needs effort to set up and maintain.
My preferred tactic is to keep physical separation in terms of assemblies (i.e. I have a separate business logic / data access assembly) and to route all of the calls to the business layer through an interface layer, which is a bunch of facade classes. So all of these assemblies reside within my Windows app. I also create interfaces for all of these facades, and access them through a factory.
That way, should I ever need the separation of a middle tier, and the trade-off in terms of productivity is worth it, I can separate my business layer out, put it behind a WCF service (as that is my preferred service platform) and perform some refactoring in terms of what my facades hand out, and what they do with what they accept.
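The facade-behind-an-interface idea, sketched here in TypeScript for brevity (the answer above describes a .NET solution, so all names are purely illustrative): the UI only knows the interface and the factory, so a remote tier can be slotted in later without touching the callers.

    // One facade interface, two interchangeable implementations.
    interface OrderFacade {
      placeOrder(customerId: string, amount: number): Promise<string>;
    }

    // Today: runs in-process, calling the local business/data assemblies directly.
    class LocalOrderFacade implements OrderFacade {
      async placeOrder(customerId: string, amount: number) {
        // business rules + data access inside the client process
        return `local-order-${customerId}-${amount}`;
      }
    }

    // Later: a drop-in implementation that calls a remote service instead.
    class RemoteOrderFacade implements OrderFacade {
      async placeOrder(customerId: string, amount: number) {
        const res = await fetch("https://app-server.example/orders", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ customerId, amount }),
        });
        return (await res.json()).orderId as string;
      }
    }

    // The UI only ever asks the factory, so moving to a middle tier is a one-line change.
    function createOrderFacade(): OrderFacade {
      return process.env.USE_REMOTE_TIER ? new RemoteOrderFacade() : new LocalOrderFacade();
    }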
This is one example of why I tend to always do work in a web environment. If the network is available to your client application for service calls, it's certainly available to send and retrieve data from a web server.
Certainly, client restrictions can alter your final path, but when given the chance to influence the direction, I steer towards web-based solutions. There are deployment technologies available that give you an easier upgrade path than a traditional desktop package, but there's no real substitute for updating a server-based application.
Depending on your application, there are several performance issues to keep in mind.
If your work is very similar between various clients (shared data), then doing the processing on a middle tier is better, because you can make use of caching to cut down on the overall processing.
If your work is different between clients (private data), then you won't gain much by doing the processing on a middle tier.
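A tiny sketch of the shared-data case, with made-up names: if the result is cached on the middle tier, many clients asking for the same thing cause only one database hit.

    // Middle-tier cache: identical requests from different clients share one DB round-trip.
    const cache = new Map<string, { value: unknown; expires: number }>();

    async function getSharedReport(
      reportId: string,
      loadFromDb: (id: string) => Promise<unknown>,
    ) {
      const hit = cache.get(reportId);
      if (hit && hit.expires > Date.now()) {
        return hit.value; // served from the middle tier, no database work
      }
      const value = await loadFromDb(reportId);
      cache.set(reportId, { value, expires: Date.now() + 60_000 }); // 60s TTL, tune as needed
      return value;
    }

With per-client (private) data the cache would rarely be hit, which is why the middle tier buys you less there.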
