How does one achieve loose coupling / late binding with Azure integration offerings such as Logic Apps?
When I produce a Logic App, it tightly couples itself to a trigger type, and if I chain onto Function Apps, it's also tightly bound to the instance of the app in an Azure tenancy. I don't seem to be able to produce code that can be deployed into any subscription (we have multiple environments and offshore teams), nor can I find a way of disconnecting things like the specific Service Bus queue it works with and configuring that at deployment.
This is all standard functionality in BizTalk, which we're now looking for alternatives to because Microsoft seems to be pushing to the cloud.
Related
Functions and Logic Apps are two distinct offerings from Microsoft Azure. I wonder in which use cases one should favor the new Functions offering over Logic Apps.
Azure Functions is code being triggered by an event.
Logic Apps is a workflow triggered by an event.
That means that they are also, in fact, complementary. You can, as of sometime yesterday, add a Function as part of a workflow inside a Logic App via the Logic Apps UX.
TL;DR - It's Logic Apps + Functions, not Logic Apps OR Functions.
"Here are a few use cases where you can decide between Azure Functions and Azure Logic Apps.
Azure Functions:
Azure Functions is code being triggered by an event
Azure Functions can be developed and debugged on a local workstation, which is a big plus for developer productivity
For synchronous request/response calls that execute more complex logic, an Azure Function is the preferred option
Logic Apps:
Logic Apps is a workflow triggered by an event
Logic Apps run only in the cloud, as they depend on Microsoft-managed connectors; you cannot debug, test, or run Logic Apps locally
Logic Apps are better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing
Azure Functions has sufficient logging and troubleshooting capabilities, and you can even build your own custom monitoring tools. Functions does not depend on the cloud; it can run locally too."
Logic Apps are used for automating your business processes. They make integration with cloud and on-premises systems easy with several out-of-the-box connectors. Azure Functions, on the other hand, do something in response to an event, for instance when a message is added to a queue or a blob is added. You can even expose an Azure Function as an HTTP API endpoint and integrate it into your business process using Logic Apps.
The other obvious difference in my mind is the pricing: Azure Functions are charged based on the compute used for the function to execute and the memory associated with the function (https://azure.microsoft.com/en-us/pricing/details/functions/).
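As a rough illustration of that consumption-based billing, here is a sketch of the arithmetic. The rates and free grants used below are assumptions taken from the published pay-as-you-go price list (about $0.20 per million executions and $0.000016 per GB-s, with 1,000,000 executions and 400,000 GB-s free each month); they vary by region and change over time.

```python
# Rough Azure Functions consumption-plan cost estimate.
# Rates and free grants are assumptions based on the published
# pay-as-you-go price list; they vary by region and over time.
FREE_EXECUTIONS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_EXECUTIONS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.000016        # USD

def monthly_cost(executions, avg_seconds, memory_gb):
    """Estimate the monthly bill for one function."""
    gb_seconds = executions * avg_seconds * memory_gb
    billable_exec = max(0, executions - FREE_EXECUTIONS)
    billable_gbs = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_exec / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
            + billable_gbs * PRICE_PER_GB_SECOND)

# 3,000,000 runs a month, 0.5 s each, at 0.5 GB of memory:
print(round(monthly_cost(3_000_000, 0.5, 0.5), 2))  # → 6.0
```

The point of the sketch is that even millions of short executions land in the single-digit dollars, which is why high-frequency work favors Functions.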
Just wanted to add some of my thoughts
Azure Function Apps should be used for
High-frequency tasks - 1,000,000 executions and 400,000 GB-s of memory are free, and beyond that the price is very low. Once you know any of the languages Functions supports, you can run millions and millions of executions at very low cost.
Very easy to bind with multiple Azure services - while Logic Apps also bind easily to external services, doing so from a Logic App at high frequency will cost you a buck or two. Functions also allow easy input and output bindings to external Azure services.
Stateful executions - with the durable task framework you can run multiple functions, perform fan-in and fan-out, and write stateful executions with ease.
Programming and scripting languages - if you already know a programming language, Functions might be an easy way to migrate some of your applications to the cloud with minimal changes.
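The fan-out/fan-in pattern mentioned above can be sketched in plain Python, with a thread pool standing in for the durable task framework's activity functions; a real Durable Function would schedule activities through the orchestration context and aggregate their results instead.

```python
# Plain-Python sketch of the fan-out/fan-in pattern that the
# durable task framework provides in Azure Durable Functions.
# A thread pool stands in here for the activity workers.
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    """Stand-in for an activity function: square the input."""
    return item * item

def fan_out_fan_in(items):
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_item, items))  # fan-out
    return sum(results)                                # fan-in

print(fan_out_fan_in([1, 2, 3, 4]))  # → 30
```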
Azure Logic Apps should be used for
Low frequency - the biggest reason for this is the pricing model: every single action in a Logic App is billed as a separate execution.
For example, if you have one Logic App with 3 steps and you run it every 10 seconds, that's 18 actions per minute, so 1,080 per hour and 25,920 per day. If those 3 actions connect to anything external (blobs, HTTP, etc.), they are connectors, and a simple Logic App with ~26,000 connector runs per day will net you about $100 a month, compared to most likely under $1 for Functions.
Combining lots of external services/APIs - thanks to 200+ connectors you can easily combine multiple services without needing to learn their APIs. This is a simple TCO calculation: is it better to write X API integrations at the price of a developer, or just use out-of-the-box connectors?
Extremely well-designed logging - with visual logging it is very easy to check every execution step's inputs, outputs, timing, etc. It's as if you had logged every single line in an Azure Function.
Nicely extends other services like Data Factory - some services are extremely well designed for certain tasks but not as good at others. For instance, Data Factory can't send emails out of the box, but in 10 minutes you can call a Logic App's HTTP webhook from Data Factory and start sending emails with ease.
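The low-frequency pricing example above can be worked through numerically. The per-action connector rate here is an assumption (roughly $0.000125 per standard connector action on the consumption plan); actual rates vary by region and connector tier.

```python
# The "3 steps every 10 seconds" pricing example, worked through.
# The per-action connector rate is an assumption taken from the
# consumption price list and varies by region and connector tier.
STEPS = 3
RUNS_PER_MINUTE = 6          # one run every 10 seconds
CONNECTOR_PRICE = 0.000125   # assumed USD per connector action

actions_per_day = STEPS * RUNS_PER_MINUTE * 60 * 24
monthly_actions = actions_per_day * 30
monthly_cost = monthly_actions * CONNECTOR_PRICE

print(actions_per_day)          # → 25920
print(round(monthly_cost, 2))   # → 97.2
```

That is where the "roughly $100 a month" figure comes from, versus pennies for the same workload on Functions.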
In short, as others said, they play different roles and should be used as such.
In general, Logic apps ❤️ functions.
If you want to check out some info, I encourage you to check
Function Apps intro video https://youtu.be/Vxf-rOEO1q4
Logic Apps intro video https://youtu.be/UzTtastcBsk
Logic Apps security with API management https://marczak.io/posts/2019/08/secure-logic-app-with-api-management/
The answer to this question might have changed after the release of Azure Durable Functions.
Now the overlap between the two platforms is greater. Both service offerings allow you to build serverless workflows; while Azure Durable Functions are code-based workflows, Logic Apps are visually designed workflows.
Logic Apps are better suited when building integration solutions due to the very extensive list of connectors that should reduce the time-to-market, and when rich visual tools to build and manage are preferred.
Durable Functions are a better fit if you require or prefer to have all the power and flexibility of a robust programming language, or you need more portability, and the available bindings and logging capabilities are sufficient.
A detailed comparison between the two platforms is in this post.
Logic Apps is the iPaaS offering from Microsoft. It can be used to create easy-to-implement integration solutions in the cloud. It comes with an array of out-of-the-box connectors that can be used to integrate on-premises and cloud-based applications.
Azure Functions, on the other hand, can be used to quickly run small pieces of code, or functions, in the cloud. Azure Functions can be integrated with Logic Apps to run snippets of code from within a Logic App.
I use both extensively. I prefer Logic Apps over Azure Functions for simple apps/APIs. Knowledge transfer of a Logic App is pretty easy, as the next guy just has to look at the picture, and logging/tracing is already built in. However, Logic Apps (and Flow) become messy and hard to read once you have more than a few if-else or case conditions, or several nested workflows. Error handling in Logic Apps also leaves a lot to be desired.
Azure Function
An Azure Function is a piece of code that gets triggered by some event or timer
It can be debugged, there are several languages you can code in,
and several options for writing code: Visual Studio Code, Visual Studio, or the in-portal editor
Logic app
It is a workflow orchestration tool; it gets triggered in a similar way to Azure Functions, but it's a drag-and-drop tool and you can't code in it
It provides a bunch of actions to perform the functionality; it is mainly used for integrating systems
Both are based on serverless architecture; a Logic App is easier to develop and debug but limited in scope
If you require a lot of customized logic, an Azure Function is for you
I am building a desktop application that will be used on local network, with SQL Server as database.
This application would have around 50 simultaneous users at most. In what particular scenario would I need to use a WCF service? Is it recommended to create a WCF service on the server computer where the database resides, so we connect to this server through the WCF service instead of connecting to the database directly? What is the recommended way to connect to SQL Server data, and why?
Edit: Let me explain in more detail. I have used WCF RIA Services before, so I know how they work. Let's assume that WCF services work the same way. The question was directed toward why we would use WCF instead of connecting directly to the database. I didn't want to specify my current application's requirements, since I would get a specific answer for a specific requirement. My goal was to understand in general why and when you would use one instead of the other, and I have received satisfying answers so far.
It appears to me that the general consensus is to use WCF only if there will be demand from another type of application that needs web access to the data through a service. Also, if I understood correctly, from a security point of view there is no difference between the two.
There will be a statistical app in the future that uses the web to provide read-only statistics to users, and naturally some service will be required for that task (the application has no specific client in mind; it will be offered to lots of clients). Since I need a demo application done very rapidly for particular clients, I am thinking of neglecting the service part for now and making proper layering (WPF -> VM -> Model -> EF), so later I would just insert a service between the model and EF. I guess it should not take too much time to get the WPF app running with the inserted layer. I am also postponing the service for another reason: since HTML5 is (going to be) the main technology for the web, and there is a possibility that Silverlight (which I have been using) will be abandoned as a technology, the logical decision would be to choose HTML5 over Silverlight. But since I am totally unfamiliar with HTML5 and its requirements, I am not sure whether a WCF service is the best choice for it, which is another reason to postpone the decision about the service type (along with the requirement to make the desktop demo app as fast as possible).
I think a better way to frame the question is whether you should abstract your database and data access layer from the application behind a service interface. You could use WCF and SOAP, or you could use a REST-based HTTP service; the choice of technology is secondary to whether the current or future requirements of your application indicate that an additional layer of abstraction is warranted.
Reasons you might consider using a service interface instead of directly connecting to the SQL database include but are not limited to:
Ease of supporting multiple operating systems/client UIs
Ability to evolve the data/service interface separately from your database schema
Isolate application from changes to database schema or location (you don't have to redeploy change to application, only change internals of the services it is calling)
If data could be used by other systems, you have a standard means of allowing these systems to interface with the data your application is managing
Reduced SQL database connection security concerns (only service identity connects to database, allowing you to use a variety of authentication/authorization strategies on the client side)
The trade-off you are looking at is the time/cost/complexity of implementing a service interface versus the flexibility and maintainability benefits you will gain. Evaluate the needs of your application and your customer before deciding whether to connect directly to your data store using ADO.NET or to use a service layer.
You should take a look at the Microsoft Service Layer Guidelines as they cover a lot of the considerations to take into account.
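To make the abstraction concrete, here is a minimal sketch of an application coding against a service interface rather than the database. All names are illustrative, not a real API; the point is that the storage implementation can change without touching (or redeploying) the client.

```python
# Sketch of a service interface between the application and its
# data store. The app depends only on OrderService; the SQL (or
# any other) implementation can change behind it. All names are
# illustrative, not a real API.
from abc import ABC, abstractmethod

class OrderService(ABC):
    @abstractmethod
    def get_order(self, order_id): ...

class InMemoryOrderService(OrderService):
    """Test double; a production class would call the database."""
    def __init__(self, orders):
        self._orders = orders
    def get_order(self, order_id):
        return self._orders.get(order_id)

svc: OrderService = InMemoryOrderService({1: {"id": 1, "total": 42.0}})
print(svc.get_order(1)["total"])  # → 42.0
```

Swapping `InMemoryOrderService` for a database-backed class changes nothing in code that holds an `OrderService`, which is exactly the evolve-the-schema-independently benefit listed above.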
Unless you need to create a reusable service, I can't think of a reason to add a WCF layer, unless you are just looking for a reason to do it. I think you can just go with some sort of ORM like EF or nHibernate and be happy.
The main reason for WCF is security. If the client connects directly to the DB, the client must be given rights on tables, and the client can hack into the connection and use T-SQL directly. You must expose port 1433 to the network in a single-tier application. With WCF there is no direct access from the client to SQL. It is not just more secure in general; you can also have more granular security: .NET service code can enforce row-level security, whereas a table only has column-level security. If this is business on a private network and you don't expect anyone to try to hack into your DB, then having the client connect directly to SQL Server is easier to build. With a server-side service, the other factor is that a change to server-side code happens in one spot, so you don't have to update 50 devices.
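The row-level security point can be sketched as follows; the data and names are illustrative. The idea is that the service, not a table grant, decides which rows a caller may see.

```python
# Sketch of row-level security enforced in service code. The
# client never holds a database connection; the service filters
# rows by the authenticated caller. Data and names are illustrative.
ORDERS = [
    {"id": 1, "owner": "alice", "total": 10.0},
    {"id": 2, "owner": "bob",   "total": 25.0},
    {"id": 3, "owner": "alice", "total": 7.5},
]

def get_orders_for(user):
    """Only return rows the caller owns; a grant on the table
    alone cannot express this per-row restriction."""
    return [o for o in ORDERS if o["owner"] == user]

print([o["id"] for o in get_orders_for("alice")])  # → [1, 3]
```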
I am thinking about replacing the business layer with BizTalk orchestrations exposed as WCF services as a standard architecture for many of our apps. Essentially ASP.NET and WinForms apps will call these services to retrieve and update data in many of our LOB databases among other things. Some of the services will also be exposed to partners.
As for the data access, I can certainly use the SQL Adapter, but I think it's not the cleanest way to do it, and the fact that it's tightly coupled to SQL Server also makes it a bad idea for me. I would like to use Entity Framework based custom DAL's or perhaps generated from tools like SubSonic, etc.
Is this a good idea? From my Google searches, I can't find many people doing this kind of thing or any comments on how it might have worked out for them.
What's your take on this? Any ideas on where to cache data, concurrency issues, etc.?
You will probably have an easier time using straight WCF and something like NHibernate or EF, with the services consuming and returning DTOs rather than raw entities. If you do have heavy business logic or mapping that needs to be done, BizTalk can sit in front, or with ESB even expose mapping services to your data service on the side.
Also check out the new WebApi stuff. http://wcf.codeplex.com/
And BizTalk is not great at low latency; you will get some overhead on all the service calls.
If you have to do a lot of service aggregation there MIGHT be a case for doing it this way, but beware the latency and overhead you get from an integration platform that is aimed at giving you all kinds of services around messaging integrity etc.
Please I need help.
I have project in which I need application which communicates with local DB server and simultaneously with central remote DB server to complete some task(read stock quotas from local server create order and then write order to central orders DB,...).
So I don't know which architecture and technology to use for this.
A web application, .NET WinForms client applications on each computer, or a web-services-based central application with client applications?
What are the general differences between these approaches?
Thanks
If you don't want to expose your database directly to the clients, I'd recommend having a web service layer in between. Depending on the sensitivity of your data and the security level of your network, I'd recommend either a web service approach (where you can manage the encryption of data yourself, without the need for expensive SSL certificates) or a web interface (which might be easier to construct, but with limitations in security).
I agree with Tomas that a web service layer might be good. However, when it comes to choosing between webforms or winforms I don't think your question includes enough information to make the choice.
I'd say that if you want a powerful, feature-rich user interface and want to make development easy, WinForms is probably the way to go. But if you need it to be usable from a varied array of clients and want easier maintenance and deployment, a web app might be best.
First, focus on the exact relationship between these databases. What does "local" mean? Right there on the user's desktop? Shared between all the users in their office? Presumably the local quotes (you do mean stock quotes and not quotas?) could potentially be a little out of date relative to the central order server's view of the world. Does that matter? I place an order for 100 X at price 78.34; the real price may be different. What is the intended behaviour?
My guess is that there is at least some business logic and so we need to decide where that runs. One (thick client) approach is to put that logic on the desktop, the desktop app then might write directly to the central DB. I don't tend to do this for several reasons:
Every client desktop gets a database connection. Scaling is not good, eventually the database gets unhappy when the number of users gets very large.
If we need a slightly different app, perhaps exposed to a different set of users via the Web or whatever, we end up reproducing that business logic.
An alternative approach (thin or browser-based) keeps the UI on the desktop but puts the logic on the server. The client then invokes some kind of service. There are lots of possible ways of doing that; a simple web service or REST service will do the job. I hope it's clear that this service-based approach addresses my two points above.
By symmetry I would treat the local databases in the same way: wrap them in services. However, it's possible that some more complex relationship exists between the databases, in which case you might need the local service layer to interact with the central service layer.
I'm touting the general principle of Do Not Repeat Yourself: implement each piece of business logic once.
There are some rumors floating around that the team at my company will soon be using web services for all future application development. The architecture is supposed to be something like this:
Application --> Web Service --> Database
The stated reasoning behind it is security. This sounds like a huge waste of time for little if any benefit. My question is: in what ways does a web service make your data more secure than a database? I would think that if an attacker wanted all your data and had already gotten onto the app server, it would be fairly trivial to figure out how the application is getting its data.
Please keep in mind that these web services would be purely for data, and would have little if any business/validation logic, and would also be outside the application developers control (at least that's the way it's worked with all previous applications that have used web services).
If it's true that there will be no business logic or validation on the web services, then there is only a limited security benefit to adding the additional layer of abstraction. I say limited because the interface between your application and the database is still more limited than if they were directly talking to each other.
If you add validation and business logic to the equation, there is a significant security benefit, as anyone who has access to the application account can only do to the database what the application is able to do. Additionally, this is a better design because it reduces coupling between your application and the implementation details of how the data is stored in the database. If you wanted to change the database schema, you would only need to update the web services, not entire applications.
One important thing about web services is interoperability, so that different applications from different platforms can later utilize the services and data. Your company will benefit a lot by doing so. And you are right about the security: it is definitely one of the good reasons to use a web service rather than exposing a public endpoint of the database, which is dangerous!
Web services also make your data more accessible. For example, your data can be accessed from a browser with JavaScript; there is no way for JavaScript to access the database on the server directly.
All in all, go for it, that is the right approach.
The security argument is questionable; authenticating to a web service is no different than authenticating to the database.
There are legitimate reasons for moving DB operations to web services and SOA in general, but security isn't one of them.
If you use a web service, hopefully you will also be using some kind of queue when sending the data to the database. With a web service and queue combo, security comes into play through a lower chance of lost data. Without that combo, if you send data to the database and it never gets there, it has nowhere else to go; it just disappears.
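A minimal sketch of that service + queue combo, with Python's in-process `queue.Queue` standing in for a durable queue such as Service Bus or MSMQ, and a simulated transient database outage:

```python
# Sketch of the service + queue combo: the service enqueues
# writes, and a worker drains the queue, retrying on failure so
# a transient database outage does not lose data. queue.Queue
# stands in for a durable queue like Service Bus or MSMQ.
from queue import Queue

outbox = Queue()
database = []
failures = {"remaining": 2}  # simulate two transient DB errors

def write_to_db(record):
    if failures["remaining"] > 0:
        failures["remaining"] -= 1
        raise ConnectionError("db unavailable")
    database.append(record)

def enqueue_write(record):
    outbox.put(record)  # the service returns to the caller immediately

def drain(max_attempts=5):
    while not outbox.empty():
        record = outbox.get()
        for _ in range(max_attempts):
            try:
                write_to_db(record)
                break
            except ConnectionError:
                continue  # a durable queue would keep the message

enqueue_write({"order": 1})
drain()
print(database)  # → [{'order': 1}]
```

The write survives the two simulated outages because the message sits in the queue until the database accepts it, which is the "no lost data" property the answer describes.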
You are correct, though: if someone wants to break into your system, a web service isn't going to help. If anything it might make things worse: if you make the web service public and they find its name, they can just query your DB through the web service, and any security features on your servers will think it is your applications requesting the information.