I am developing a monolithic app using ABP.IO. However, I want to apply the principles of microservices as much as I can, so that migrating to that architecture later will be easy.
I have respected almost all of the constraints, including aggregates, DDD, and so on.
For synchronous communication between services, I have used the C# client proxy as documented, and it worked perfectly except for permissions. I want a configurable permission setup that I can use to distinguish internal requests made between modules (future microservices) from public requests coming from outside.
Any suggestions?
ABP provides both dynamic proxies and static proxies (from v5.0).
Synchronous communication between microservices is done using the Client Credentials flow. If you want to grant permissions to a microservice, you can either:
Add them in a seeder method (like in this code sample).
Or, if you are using the commercial version, you can use the IdentityServer Management UI to manage permissions.
You can also check the synchronous communication between microservices guide for more information.
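Since the question asks how to tell internal calls apart from public ones: with the Client Credentials flow, the calling module authenticates as its own client, so the permissions you grant to that client are separate from any end-user permissions. Below is a rough TypeScript sketch of the raw flow (not ABP-specific); the token endpoint, client id/secret, scope and internal URL are all placeholders.

```typescript
// Minimal sketch of the OAuth2 Client Credentials flow between two modules.
// All names/URLs below are placeholders, not real ABP configuration.
async function callInternalService(): Promise<unknown> {
  // 1. The calling module obtains a token for its own client identity (no user involved).
  const tokenRes = await fetch("https://auth.example.com/connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "OrderingService",                          // placeholder client id
      client_secret: process.env.ORDERING_CLIENT_SECRET ?? "",
      scope: "CatalogService",                               // placeholder scope/audience
    }),
  });
  const { access_token } = await tokenRes.json();

  // 2. It calls the other module with that token; the receiving side can grant
  //    this client its own permissions, distinct from those of public end users.
  const res = await fetch("https://catalog.internal.example.com/api/catalog/items", {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return res.json();
}
```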
What's the best practice for connecting to an SQL database from Vercel/Next.js serverless functions? I've seen a few options commonly mentioned:
Create a new direct database connection in the serverless function. This has important drawbacks:
Connection pooling: every new invocation of a serverless function would create its own database connection, which could quickly overwhelm the database.
Security: the database has to be publicly exposed, since Vercel doesn't support static IPs or VPC peering. This unfortunately is a deal-breaker for any security-sensitive application (fintech, healthcare, education, etc.) and for SOC 2 compliance.
Add an intermediary service that receives HTTP requests and proxies them to the database
My understanding is that this is a common thing people do? How does this work?
Use a vendor-specific solution, like the Prisma Data Proxy product (requires using the Prisma ORM) or AWS Aurora Data API (essentially an out-of-the-box version of the second option, now deprecated)
Trying to understand what the "best practice" solution to this problem is — have others deployed solutions they're satisfied with?
For a personal blog application it should be okay to connect your edge function directly to the database, unless this small website becomes super popular.
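If you do go with a direct connection, one common mitigation for the pooling concern (a sketch only, assuming the Node.js serverless runtime on Vercel rather than the Edge runtime, Postgres, and the pg driver) is to create the connection pool once at module scope, so warm invocations reuse it instead of opening a new connection per request:

```typescript
import type { VercelRequest, VercelResponse } from "@vercel/node";
import { Pool } from "pg";

// Created once per warm function instance, not once per request.
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 1 });

export default async function handler(req: VercelRequest, res: VercelResponse) {
  // Table and query are placeholders for whatever the blog actually reads.
  const { rows } = await pool.query(
    "SELECT id, title FROM posts ORDER BY created_at DESC LIMIT 10"
  );
  res.status(200).json(rows);
}
```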
Eventually, for an application hosted on the edge, another API endpoint is needed. Whether it is a REST API or GraphQL does not matter. What matters is that, at the front, this endpoint will accept loads of requests from your edge functions (Node.js applications on Vercel/Netlify) and, at the back, it will communicate with the database, pool the connections, handle caching, and so on.
You can add an nginx load balancer in front of that API endpoint to make it scale.
There are tons of options for engineering it.
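As one illustration of that intermediary endpoint, here is a minimal sketch (assuming Express, Postgres and the pg driver; any stack works): the edge functions call this HTTP API over the network, while it holds a single connection pool and a naive cache in front of the database.

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 20 }); // shared pool
const cache = new Map<string, { value: unknown; expires: number }>();           // naive in-memory cache

// Edge functions call e.g. GET https://api.example.com/posts/:slug (placeholder route).
app.get("/posts/:slug", async (req, res) => {
  const key = req.params.slug;
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return res.json(hit.value); // serve from cache

  const { rows } = await pool.query("SELECT * FROM posts WHERE slug = $1", [key]);
  cache.set(key, { value: rows, expires: Date.now() + 30_000 });   // cache for 30s
  res.json(rows);
});

app.listen(3000);
```

An nginx instance (or any other load balancer) can then sit in front of several copies of this service.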
I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similarly to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for the SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, etc., and then the proxy would route the request to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, since we could theoretically hit a port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it is likely that you will consume all server resources (CPU + memory) before you consume even half of those ports.
Doing what you need is possible, but it will require you to create a script or an application that is responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it would just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik enables you to route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, this will require dynamic configuration that is not provided out of the box, which is why I suggested creating a separate application to deploy these instances. The very high-level steps would be (see the sketch after the list):
Define your service with the Traefik default rules, as shown here.
From your application manager, deploy a new named instance of this service for the new tenant.
After the instance is deployed, configure it to listen on a specific domain by setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name.
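As a rough illustration of what that application manager could do, here is a TypeScript sketch that creates a named service instance per tenant through the Service Fabric client REST API. The endpoint shape and payload field names follow my reading of the SF REST reference and should be verified against your cluster version; the cluster address and application/service type names are placeholders.

```typescript
const cluster = "http://localhost:19080"; // placeholder cluster management endpoint

async function deployTenantService(tenant: string): Promise<void> {
  // Create a named stateless service instance, e.g. fabric:/myapp/tenant1.
  const body = {
    ServiceKind: "Stateless",
    ApplicationName: "fabric:/myapp",
    ServiceName: `fabric:/myapp/${tenant}`,
    ServiceTypeName: "MyAppType",                      // placeholder service type
    PartitionDescription: { PartitionScheme: "Singleton" },
    InstanceCount: 1,
  };
  const res = await fetch(
    `${cluster}/Applications/myapp/$/GetServices/$/Create?api-version=6.0`,
    { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(body) }
  );
  if (!res.ok) throw new Error(`Create service failed: ${res.status} ${await res.text()}`);

  // Next step (not shown): apply traefik.frontend.rule=Host:<tenant>.myapp.com for this
  // instance via the label mechanism the Traefik Service Fabric provider supports -- see
  // the provider docs linked below.
}
```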
You might have to add some extra configuration, but this will lead you down the right path.
Regarding the cluster architecture, you could do it in many ways. To start, I would recommend you keep it simple: one front-end node type containing the Traefik services and another back-end node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see more info on the following links:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
Have a look at Azure API Management and Traefik, which both have some SF integration options. These work a lot more nicely than the limited built-in reverse proxy; for example, they offer routing rules.
I am new to web development and have seen posts such as these. If one is using AWS and is connecting to an AWS RDS instance through Node, is that still considered a direct connection, as opposed to a web service?
You're probably going to get a bunch of conflicting opinions on this. My personal opinion is that a web service in front of your database makes sense in some scenarios. Multiple applications connecting to the web service instead of directly to the db gives several advantages: security, caching, etc.
That being said, if this is just a single app then most of those advantages disappear and in fact just make things more complex for you. You're going to have to set up your web service for the db as well as your actual code.
If one is using AWS and is connecting to an AWS rds instance through Node, is that still considered a direct connection as opposed to a web service?
No, if Node.js is running on a server or in "serverless" containers (e.g. AWS Lambda), that is not a direct connection. That is a web service, and that's what you want.
A direct connection means the app connects to the database itself... but that requires embedding credentials in the app.
You do not want to embed anything in your app that you would not willingly hand over to an arbitrary user -- such as database credentials and API keys -- because you cannot trust that the app won't be reverse-engineered.
You should design the app in such a way that you would have no security concerns if the entire source code of the app were exposed, because knowing everything about the app's internals would give a malicious actor no valuable information. How? The code on the server side (e.g. in Node.js) should treat every request from the app as potentially suspicious, untrustworthy, etc., and validate every request to do anything.
This layer of separation is one of the strongest reasons why you never give the app direct access to the database. Code running in a trusted place -- your web server/API layer -- needs to vet every database interaction. This topology also decouples the app user from tying up resources on the database server when not actually interacting with the database, which is far less practical with a direct connection.
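To make that concrete, here is a minimal sketch of such a trusted layer (Express and pg are just example choices; the session check is a placeholder): the service holds the database credentials, and it authenticates and validates every request from the app before any query runs.

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL }); // credentials stay server-side
app.use(express.json());

app.get("/api/orders/:id", async (req, res) => {
  // Treat the request as untrustworthy: authenticate it first.
  const userId = await verifySession(req.headers.authorization); // hypothetical session check
  if (!userId) return res.status(401).end();

  // Validate input before it gets anywhere near the database.
  const orderId = Number(req.params.id);
  if (!Number.isInteger(orderId)) return res.status(400).end();

  // Parameterised query, scoped to the authenticated user; the app never touches the DB.
  const { rows } = await pool.query(
    "SELECT * FROM orders WHERE id = $1 AND user_id = $2",
    [orderId, userId]
  );
  res.json(rows);
});

// Placeholder: replace with real token/session verification.
async function verifySession(header?: string): Promise<string | null> {
  return header ? "user-123" : null;
}

app.listen(3000);
```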
I am a newbie to the Microsoft Azure platform. I want to create multiple databases dynamically (we are developing a multi-tenant model, so each organization should have its own database; whenever an organization registers with our system, we need to create a new database dynamically). Please provide some insights on this.
By using Azure Resource Manager Templates you can reliably deploy the whole infrastructure required by each organisation. So if they need a webserver, database and middleware servers, you can define the whole thing in a template and reliably deploy that for every client.
(from the above link)
You can deploy, manage, and monitor all of the resources for your solution as a group, rather than handling these resources individually.
You can repeatedly deploy your solution throughout the development lifecycle and have confidence your resources are deployed in a consistent state.
You can use declarative templates to define your deployment.
You can define the dependencies between resources so they are deployed in the correct order.
You can apply access control to all services in your resource group because Role-Based Access Control (RBAC) is natively integrated into the management platform.
You can apply tags to resources to logically organize all of the resources in your subscription.
You can clarify billing for your organization by viewing the rolled-up costs for the entire group or for a group of resources sharing the same tag.
The link above has a lot of resources for learning how to use templates as well as the syntax and usage.
There are a large number of templates at the Azure ARM Template GitHub page, and even some pre-existing templates to get you started deploying SQL Server to Azure (there's also MySQL and PostgreSQL if you prefer).
Plus many others that you can work through to get accustomed to how they work.
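As a rough sketch of what "deploy the template whenever a new organisation registers" could look like from code, the deployment can be kicked off with a single call to the Azure Resource Manager REST API (an SDK works just as well). The subscription id, resource group naming, template URI and api-version below are placeholders; it also assumes the resource group already exists and that you have an Azure AD token for https://management.azure.com/.

```typescript
async function deployTenantInfrastructure(tenant: string, token: string): Promise<void> {
  // PUT a deployment of your ARM template into the tenant's resource group.
  const url =
    "https://management.azure.com/subscriptions/<subscription-id>" +
    `/resourcegroups/rg-${tenant}/providers/Microsoft.Resources/deployments/${tenant}-initial` +
    "?api-version=2021-04-01";
  const res = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      properties: {
        mode: "Incremental",
        templateLink: { uri: "https://example.com/templates/tenant-stack.json" }, // your ARM template
        parameters: { tenantName: { value: tenant } },                             // placeholder parameter
      },
    }),
  });
  if (!res.ok) throw new Error(`Deployment failed: ${res.status} ${await res.text()}`);
}
```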
You can use the Azure SQL Database REST API to do so; it's as simple as sending a PUT request to a URL: https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/microsoft.sql/servers/{server-name}/databases/{database-name}?api-version={api-version}
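For illustration, here is what that PUT request might look like from Node.js. The bearer token must be an Azure AD token for the management endpoint, and the body fields shown (location/edition) are assumptions for an older api-version; check the API version you actually target.

```typescript
async function createTenantDatabase(opts: {
  subscriptionId: string; resourceGroup: string; server: string;
  database: string; token: string;
}): Promise<void> {
  const url =
    `https://management.azure.com/subscriptions/${opts.subscriptionId}` +
    `/resourceGroups/${opts.resourceGroup}/providers/microsoft.sql` +
    `/servers/${opts.server}/databases/${opts.database}?api-version=2014-04-01`;
  const res = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${opts.token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ location: "westeurope", properties: { edition: "Basic" } }),
  });
  if (!res.ok) throw new Error(`Create database failed: ${res.status} ${await res.text()}`);
}
```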
Check out these links for more details
https://msdn.microsoft.com/en-us/library/azure/mt163571.aspx
https://msdn.microsoft.com/en-us/library/azure/mt163685.aspx
I am creating a CMS from scratch and decided to use CouchDB as my database solution. For my CMS I need various accounts and, of course, different user roles (admin, author, unregistered user, etc.).
At first I thought I would program authorization within my CMS myself, but CouchDB has functionality like this built in, so I want to ask:
What is the best practice for creating a multi-user app with CouchDB?
Create only one admin for CouchDB and manage restrictions, roles and accounts by yourself?
Use the built-in functionality of CouchDB for all of this? (Say, create a CouchDB admin user for every admin of the CMS?)
What if I want to add other 3rd-party authorization later? Say I want users to login via Twitter/Facebook/Google?
Greetings,
Pipo
The critical question is whether you want to expose CouchDB to the public or not.
If you want to build your CMS as a classical 3-tier architecture where CouchDB is exclusively accessed from a privileged scripting layer, e.g. PHP, then I would recommend rolling your own authorization system. This will give you better control over the authorization logic. In particular, you can realize document-based read access control (not available in the CouchDB security system).
If instead you want to expose CouchDB to the public, things are different. You cannot actually write server-side logic (except for separate asynchronous listeners via the changes feed), so you will have to use CouchDB's built-in authentication/authorization system. That limits you to read access controlled at the database level (not the document level!). Write access can be controlled with validation functions. CouchDB admins should not be equivalent to application admins, as a CouchDB admin is rather comparable to a server admin in a traditional setting. A database admin in CouchDB would be a better fit (they can change design documents and therefore make modifications to the CMS installation, like adding plugins). All other users with write access can be realized as database members.
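As a small sketch of that second approach over CouchDB's HTTP API (host, credentials, database and role names are placeholders): make regular CMS users database members, reserve the database admin role for people who may change design documents, and enforce per-document write rules with a validate_doc_update function.

```typescript
const couch = "http://localhost:5984"; // placeholder CouchDB host
const auth = "Basic " + Buffer.from("admin:secret").toString("base64"); // placeholder server admin creds

async function secureCmsDatabase(db: string): Promise<void> {
  // Read access is per database: only the members (or admins) listed here can read it at all.
  await fetch(`${couch}/${db}/_security`, {
    method: "PUT",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify({
      admins: { names: [], roles: ["cms-admin"] },                 // may change design docs (plugins, views)
      members: { names: [], roles: ["cms-user", "cms-author"] },   // regular CMS users
    }),
  });

  // Write access is enforced per document by a validation function in a design doc
  // (initial creation shown; updating it later requires the current _rev).
  await fetch(`${couch}/${db}/_design/auth`, {
    method: "PUT",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify({
      validate_doc_update:
        "function (newDoc, oldDoc, userCtx) {" +
        "  if (newDoc.type === 'page' && userCtx.roles.indexOf('cms-author') === -1) {" +
        "    throw({ forbidden: 'Only authors may edit pages' });" +
        "  }" +
        "}",
    }),
  });
}
```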
I would prefer the second approach, because it gives you the possibility to leverage all the nice features of CouchDB, like replication and the changes feed. However, you will have to do some filtered replication between databases with different members if you need fine-grained read access control.
If you want to use authentication mechanisms other than those offered by CouchDB, you will eventually have to modify the installation (which can be an issue if you want to use a hosted CouchDB). For a Facebook plugin see e.g. https://github.com/ocastalabs/CouchDB-Facebook-Authentication.