Web application vs. web services vs. classic application - winforms

Please, I need some help.
I have a project in which I need an application that communicates with a local DB server and simultaneously with a central remote DB server to complete some tasks (read stock quotes from the local server, create an order, and then write the order to the central orders DB, ...).
So I don't know which architecture and technology to use for this.
A web application, .NET WinForms client applications on each computer, or a web-service-based central application with client applications?
What are the general differences between these approaches?
Thanks

If you don't want to expose your database directly to the clients, I'd recommend having a web service layer in between. Depending on the sensitivity of your data and the security level of your network, I'd recommend either a web service approach (where you can manage the encryption of data yourself, without the need for expensive SSL certificates) or a web interface (which might be easier to construct, but with limitations in security).

I agree with Tomas that a web service layer might be good. However, when it comes to choosing between WebForms and WinForms, I don't think your question includes enough information to make the choice.
I'd say that if you want a powerful and feature-rich user interface and want to make development easy, WinForms is probably the way to go. But if you need it to be usable from a varied array of clients and want easier maintenance and deployment, a web app might be best.

First, focus on the exact relationship between these databases. What does "local" mean? Right there on the user's desktop? Shared between all the users in their office? Presumably the local quotes (you do mean stock quotes and not quotas?) could potentially be a little out of date relative to the central order server's view of the world. Does that matter? I place an order for 100 X at price 78.34, but the real price may be different. What is the intended behaviour?
My guess is that there is at least some business logic, and so we need to decide where that runs. One (thick client) approach is to put that logic on the desktop; the desktop app then might write directly to the central DB. I don't tend to do this, for several reasons:
Every client desktop gets a database connection. Scaling is not good; eventually the database gets unhappy when the number of users gets very large.
If we need a slightly different app, perhaps exposed to a different set of users via the Web or whatever, we end up reproducing that business logic.
An alternative approach (thin or browser-based) keeps the UI on the desktop but puts the logic on the server. The client can then invoke some kind of service. There are lots of possible ways of doing that; a simple Web Service or REST service will do the job. I hope it's clear that this service-based approach addresses my two points above.
By symmetry I would treat the local databases in the same way and wrap them in services. However, it's possible that some more complex relationship between the databases exists, in which case you might need the local service layer to interact with the central service layer.
I'm touting the general principle of Don't Repeat Yourself: implement each piece of business logic once.
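For illustration, a minimal C# sketch of such a server-side service method, with invented names (OrderService, Quotes, Orders) and plain ADO.NET for brevity; the real business rules, schema and connection handling would differ:

// A minimal sketch: the business logic lives once, behind a service the desktop
// UI calls, instead of inside every client. All names here are assumptions.
using System;
using System.Data.SqlClient;

public class OrderService
{
    private readonly string localQuotesConnStr;   // local stock-quote database
    private readonly string centralOrdersConnStr; // central orders database

    public OrderService(string localQuotesConnStr, string centralOrdersConnStr)
    {
        this.localQuotesConnStr = localQuotesConnStr;
        this.centralOrdersConnStr = centralOrdersConnStr;
    }

    // Read the latest quote from the local DB, apply the business rules once,
    // then write the order to the central DB. Clients never open a DB connection.
    public int PlaceOrder(string symbol, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive.");

        decimal price;
        using (var local = new SqlConnection(localQuotesConnStr))
        using (var cmd = new SqlCommand(
            "SELECT TOP 1 Price FROM Quotes WHERE Symbol = @s ORDER BY QuotedAt DESC", local))
        {
            cmd.Parameters.AddWithValue("@s", symbol);
            local.Open();
            price = (decimal)cmd.ExecuteScalar();
        }

        using (var central = new SqlConnection(centralOrdersConnStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO Orders (Symbol, Quantity, Price) VALUES (@s, @q, @p); " +
            "SELECT CAST(SCOPE_IDENTITY() AS int)", central))
        {
            cmd.Parameters.AddWithValue("@s", symbol);
            cmd.Parameters.AddWithValue("@q", quantity);
            cmd.Parameters.AddWithValue("@p", price);
            central.Open();
            return (int)cmd.ExecuteScalar(); // new order id
        }
    }
}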

Related

Where should I access my database

I'm curious how you would handle the following database access scenario.
Let's say you have a computer which hosts your database as part of its server work, and multiple client PCs with some client-side software on them that needs to get information from this database.
AFAIK there are two ways to do this:
each client-side application connects directly to the database;
each client-side application connects to a server-side application which connects to the database, as some sort of data access layer.
So what I'd like to know is:
What are the pros and cons of each solution?
And are there other solutions out there which might be "better" for this work?
I would DEFINITELY go with suggestion number 2. No client application should talk to a datastore without a broker, i.e.:
ClientApp -> WebApi -> DatabaseBroker.class -> MySQL
This is the sound way to do it, as you separate concerns and define an organized throughput to the datastore (a sketch of this chain follows below).
Some benefits are:
decouple the client from the database
you can centralize all upgrades, additions and operability in one location (DatabaseBroker.class) for all clients
it's very scalable
it's safe with regard to business logic
Think of it like this, with a layman's example:
Marines are not allowed to bring their own weapons to battle (client apps talking directly to the DB). Instead, they check out a weapon from the armory (API). The armory has control over all weapons, repairs and upgrades (the data in the database) and determines who gets what.
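For illustration, a rough C# sketch of the ClientApp -> WebApi -> DatabaseBroker.class -> MySQL chain mentioned above, assuming ASP.NET Web API and the MySql.Data client; the class names, table and connection-string values are placeholders:

using System.Collections.Generic;
using System.Web.Http;
using MySql.Data.MySqlClient;

// The broker is the only class that knows about MySQL.
public class DatabaseBroker
{
    private readonly string connStr;
    public DatabaseBroker(string connStr) { this.connStr = connStr; }

    public IList<string> GetProductNames()
    {
        var names = new List<string>();
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand("SELECT Name FROM Products", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        return names;
    }
}

// Clients call this controller over HTTP; they never see a connection string.
public class ProductsController : ApiController
{
    // Placeholder connection string; in practice this comes from server config.
    private readonly DatabaseBroker broker =
        new DatabaseBroker("server=dbhost;database=shop;uid=api;pwd=placeholder");

    [HttpGet]
    public IHttpActionResult Get()
    {
        return Ok(broker.GetProductNames());
    }
}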
What you have described sounds like two different kinds of multitier architecture.
The first point matches a two-tier architecture, and the second one could be a three-tier.
AFAIK there are two ways to do this
You can divide your application into several physical tiers; therefore, you will find more cases suited to this kind of architecture (n-tier) than the two described above.
What are the pros and cons of each solution?
Usually the motivation for splitting your application into tiers is to achieve some kind of non-functional requirement (maintainability, availability, security, etc.). The problem is that when you add extra tiers you also add complexity, e.g. your app components need to communicate with each other, and this is more difficult when they are distributed among several machines.
And are there other solutions out there which might be "better" for this work?
I'm not sure what you mean by "work" here, but note that you don't need to add extra tiers to access a database. If you have a desktop application installed on a few machines, a classical client/server (two-tier) model should be enough. However, a web-based application needs an extra tier for interacting with the browser; in that case the database access is not the motivation for adding the extra tier.

Is WCF recommended for use with WPF and MVVM to retrieve data from SQL Server?

I am building a desktop application that will be used on a local network, with SQL Server as the database.
This application would have around 50 users at the same time, at most. In what particular scenario would I need to use a WCF service? Is it recommended to create a WCF service on the server computer where the database resides, so that we connect to this server through the WCF service instead of connecting to the database directly? What is the recommended way to connect to SQL Server data, and why?
Edit: Let me explain in more detail. I have used WCF RIA Services before, so I know how they work. Let's assume that WCF services work in the same way. The question was directed toward why we would use WCF instead of connecting directly to the database. I didn't want to specify my current application's requirements, since I would get a specific answer for a specific requirement. My goal was to understand, in general, why and when you would use one instead of the other. And I have received satisfying answers so far.
It appears to me that the general consensus is to use WCF only if there would be demand for another type of application, which would use web access to get data from the service. Also, if I understood correctly, from a security point of view there is no difference between the two.
There would be a statistical app in the future that uses the web to provide read-only statistics to users, and naturally some service will be required for that task (the application has no specific client in mind; it will be offered to lots of clients). Since I need a demo application done very rapidly for particular clients, I am thinking of neglecting the service part and making proper layering (WPF -> VM -> Model -> EF), so that later I could just insert a service between the model and EF. I guess it should not take too much time to get the WPF app running with the inserted layer. I am also postponing the service for the following reason: since HTML5 is (going to be) the main technology for the web, and there is a possibility that Silverlight, which I have been using, will be abandoned as a technology, the logical decision would be to choose HTML5 over Silverlight. But since I am totally unfamiliar with HTML5 and its requirements, I am not sure if a WCF service is the best choice for it, and this is also one of the reasons to postpone the decision on the type of service (along with the requirement to make the desktop demo app as fast as possible).
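For illustration, a minimal sketch of that kind of layering, assuming Entity Framework 6 and hypothetical Order, ShopContext and IOrderRepository types (none of these come from the original question); the point is that the ViewModel depends only on the abstraction, so a service-backed implementation can be slotted in later:

using System.Collections.Generic;
using System.Data.Entity;   // Entity Framework 6 (assumed)
using System.Linq;

// Hypothetical entity and context used by the Model layer.
public class Order { public int Id { get; set; } public decimal Total { get; set; } }
public class ShopContext : DbContext { public DbSet<Order> Orders { get; set; } }

// The ViewModel depends only on this abstraction.
public interface IOrderRepository
{
    IList<Order> GetRecentOrders(int count);
}

// Today: EF talks to SQL Server directly.
public class EfOrderRepository : IOrderRepository
{
    public IList<Order> GetRecentOrders(int count)
    {
        using (var ctx = new ShopContext())
            return ctx.Orders.OrderByDescending(o => o.Id).Take(count).ToList();
    }
}

// Later: a second implementation can wrap a WCF or REST client behind the same
// interface, so the WPF -> VM -> Model layers do not change.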
I think a better way to consider the question is whether you should abstract your database and data access layer from the application behind a service interface. You could use WCF and SOAP, or you could use a REST-based HTTP service; the choice of technology is secondary to whether the current or future requirements of your application indicate that an additional layer of abstraction is warranted.
Reasons you might consider using a service interface instead of connecting directly to the SQL database include, but are not limited to:
Ease of supporting multiple operating systems/client UIs
Ability to evolve the data/service interface separately from your database schema (see the sketch below)
Isolating the application from changes to the database schema or location (you don't have to redeploy a change to the application, only change the internals of the services it is calling)
If the data could be used by other systems, you have a standard means of allowing those systems to interface with the data your application is managing
Reduced SQL database connection security concerns (only the service identity connects to the database, allowing you to use a variety of authentication/authorization strategies on the client side)
The trade-off you are looking at is the time/cost/complexity of implementing a service interface versus the flexibility and maintainability benefits you will gain. You should evaluate the needs of your application and your customer before you decide whether to connect directly to your data store using ADO.NET or to use a service layer.
You should take a look at the Microsoft Service Layer Guidelines, as they cover a lot of the considerations to take into account.
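As an illustration of the "evolve the interface separately from the schema" point, here is a minimal WCF contract sketch; the DTO, operation names and shape are invented, not taken from the question:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// The DTO the clients see; it can stay stable even if the table layout changes.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// The service contract is the only thing clients are coupled to.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    IList<CustomerDto> FindCustomers(string nameFilter);

    [OperationContract]
    void UpdateCustomer(CustomerDto customer);
}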
Unless you need to create a reusable service, I can't think of a reason to add a WCF layer, unless you are just looking for a reason to do it. I think you can just go with some sort of ORM like EF or NHibernate and be happy.
The main reason for WCF is security. If the client connects directly to the DB, then the client must be given rights on tables; the client can hack into the connection and use T-SQL directly, and you must expose port 1433 to the network in a single-tier application. With WCF there is no direct access from the client to SQL Server. It is not just more secure in general; you can also have more granular security: .NET service code can enforce row-level security, whereas a table only has column-level security. If this is a business on a private network and you don't expect anyone to try to hack into your DB, then having the client connect directly to the SQL Server is easier to build. With a server-side service, the other factor is that a change to server-side code happens in one spot, so you don't have to update 50 devices.
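As a rough illustration of the row-level security point, a small sketch assuming the method runs inside a WCF operation with Windows authentication; the class, table and connection-string values are invented:

using System.Data.SqlClient;
using System.ServiceModel;

public class OrderHistoryService
{
    // Placeholder connection string; only the service identity ever reaches SQL Server.
    private readonly string connStr = "Server=dbserver;Database=Sales;Integrated Security=true";

    // Row-level security enforced in service code: each caller only ever sees
    // rows tied to their own identity, something a plain table GRANT cannot express.
    public int CountMyOrders()
    {
        string caller = ServiceSecurityContext.Current.PrimaryIdentity.Name;

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE OwnerLogin = @owner", conn))
        {
            cmd.Parameters.AddWithValue("@owner", caller);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}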

Best way to access a remote database: via web service or direct DB access?

I'm looking to develop an application for Mac and iOS devices. The application will rely on information stored in a remote database. It needs both read (select) and write (insert, update, delete) access to the database. It will be a multi-user application.
Now I'm looking at two different approaches to access the database:
- via a web service: the application accesses the web service (REST, JSON), which accesses the database. Authentication will be done via HTTP authentication over SSL (HTTPS).
- direct access to the remote database over a VPN.
The app will be used by a maximum of, let's say, 100 people and is aimed at small groups/organizations/businesses.
So my question is: what would be the best approach to access the database? What about security and performance? What would a typical implementation for a small business look like?
Any advice will be appreciated.
Thanks
Using web services adds a level of indirection between the clients and the database. This has several advantages, all due to the fact that the clients need no knowledge of the database, only of your web service interface. Since client applications are more complicated to control and update than your server-side code, it pays to add a level of business logic on the server that lets you tweak your system without pushing updates to the clients. Main advantages:
Flexibility - you can change the database configuration or replace the data layer altogether and change nothing in the client apps, as long as you keep the same web service interface.
Security - implement some authentication mechanism for your web services, and avoid giving clients access credentials to your database engine.
There are some disadvantages too: you pay for that flexibility by adding a level of complexity; it would probably be faster to just code the database access into the clients and be done with it. Consider the web services layer an investment that might pay dividends down the road. Whether it's worth it really depends on your business requirements and outlook.
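For illustration, a minimal sketch of such an authenticated server-side endpoint, assuming ASP.NET Web API (the question does not name a server technology); the controller and route names are made up:

using System.Web.Http;

// Only authenticated callers reach these actions; the database connection string
// lives solely in the server's configuration, never on the Mac/iOS clients.
[Authorize]
public class ArticlesController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get(int id)
    {
        // Fetch from the database here via a server-side data access class.
        return Ok(new { Id = id, Title = "example" });
    }

    [HttpPost]
    public IHttpActionResult Post([FromBody] string title)
    {
        // Validate and insert on the server; the client only ever speaks JSON over HTTPS.
        if (string.IsNullOrWhiteSpace(title)) return BadRequest("Title is required.");
        return Ok();
    }
}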
Given the information you have provided, the answer is almost certainly web services, unless the VPN is fast.
If the VPN is fast enough to handle the traffic, you will save a lot of time, effort and expense by accessing the database directly from your application.
You can also provide remote access to virtual PC sessions, if that's your thing.
So it's all going to depend on what your requirements are. There are a lot of ways to do this, and each has its advantages and disadvantages. Making the right decision will require a fair amount of systems analysis, probably beyond the scope of a question posted on Stack Overflow.

Does using VCL for the Web (IntraWeb) as a trick for adding a web interface to a legacy non-tiered (2-tier) Delphi Win32 application make sense?

My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server.
The business logic is often "trapped" in components' event handlers; anyway, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... maintaining legacy applications someone else wrote is very frustrating, but it is a very common job).
Now there is a request for a web interface. I have several options, of course; in this question I want to focus on the VCL for the Web (IntraWeb) option.
The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people who moved legacy apps from Delphi to IntraWeb, but here I am trying to keep the thick client too.
The idea is to use common code, maybe with some compiler directives for the specific parts:
{$IFDEF CLIENTSERVER}
{here goes the thick client specific code}
{$ELSE}
{here goes the Intraweb specific code}
{$ENDIF}
Then another problem is the "migration plan". Let's say I have 300 features and on the first release only 50 of them will be available in the web application. How do I keep track of that? I was thinking of (ab)using Delphi interfaces to handle this. For example, for user authentication I could move all the related code into a procedure and declare an interface like:
type
  IUserAuthentication = interface
    ['{0D57624C-CDDE-458B-A36C-436AE465B477}']
    procedure UserAuthentication;
  end;
In this way, as I implement the IUserAuthentication interface in both applications (thick client and IntraWeb), I know that the feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process; it works for a "Hello world" application, but I wonder if it makes sense for a large application, or if this interface idea is only counter-productive and could backfire.
My question is: does this approach make sense? (The interface idea is just an extra; it is not as important as the common-code part described above.) Is it a viable option?
I understand it depends a lot on the kind of application; anyway, to be generic, mine is in the CRM/accounting domain, and the number of concurrent users on a single installation is typically fewer than 20, with peaks of 50.
EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see IntraWeb as the only option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, maybe I could use DLLs, but my code is not suitable for that.
The most important thing that you have to remember is this:
Your thick-client .EXE process is used by one person at a time (multiple persons will have multiple instances of that .EXE).
Your IntraWeb .EXE process will be used by many persons at a time. They all share the same instance of the process.
That means your business logic must not only be refactored out into common units; the instances of that business logic must be able to reside in memory multiple times without interfering with each other.
This starts with the business logic that talks to the database: you must be able to have multiple database connections at the same time (in practice, a pool of database connections works best).
In my experience, when you can refactor your business logic into data modules, you have a good starting point for supporting both an IntraWeb and a thick-client version of your app.
You should not forget the user interface:
Thick clients support modal forms and have a much richer UI.
Web browsers support only message dialogs (and even those are very limited); all the fancy UI stuff costs a lot of development time (though, for instance, TMS has some nice components for IntraWeb).
Then, to top it off, you have to cope with the stateless nature of the HTTP protocol. To overcome this, you need sessions. IntraWeb will take care of most of the session part.
But you need to ask yourself questions like these:
what should happen if a user is idle for XX minutes?
how much session state can I store in memory? and what if it doesn't fit?
what do I do with the session state that does not fit into memory?
This is just a start, so let us know when you need more info.
If it gets very specific to your app, you can always contact me directly: just google me.
--jeroen
I think that if you move your application to n-tier it will be a better solution; after that it will be easier for it to be used by both the desktop and the web applications.
You have already made the first step by decoupling the business logic from the presentation; you can use RemObjects SDK or the DataSnap that is bundled with Delphi.
After that you will have a working desktop application, and you can use IntraWeb, ASP.NET or whatever for the web part, and that way you will not have to duplicate the business logic again for the web part.
Usually, converting a desktop application to the web is not as easy as you might think, because they work in different environments, and you need to build each one according to its nature.

Using a web service to secure a database

There are some rumors floating around that the team at my company will soon be using web services for all future application development. The architecture is supposed to be something like this:
Application --> Web Service --> Database
The stated reasoning behind it is security. This sounds like a huge waste of time for little if any benefit. My question is: in what ways does a web service make your data more secure than a database? I would think that if an attacker wanted to get all your data and had already gotten onto the app server, it would be fairly trivial to figure out how the application is getting its data.
Please keep in mind that these web services would be purely for data, would have little if any business/validation logic, and would also be outside the application developers' control (at least that's the way it has worked with all previous applications that have used web services).
If it's true that there will be no business logic or validation in the web services, then there is only a limited security benefit to adding the additional layer of abstraction. I say limited because the interface between your application and the database is still more constrained than if they were talking to each other directly.
If you add validation and business logic to the equation, there is a significant security benefit, as anyone who has access to the application account can only do to the database what the application itself is able to do. Additionally, this is a better design because it reduces coupling between your application and the implementation details of how the data is stored in the database. If you wanted to change the database schema, you would only need to update the web services, not entire applications.
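For illustration, a small sketch of server-side validation of the kind described above; the PayrollService name, table and limits are invented:

using System;
using System.Data.SqlClient;

// Even a caller holding the application's credentials can only do what this
// method allows: the validation runs on the service, not on the client.
public class PayrollService
{
    private readonly string connStr;
    public PayrollService(string connStr) { this.connStr = connStr; }

    public void RecordBonus(int employeeId, decimal amount)
    {
        // Business rules enforced server-side, before any SQL runs.
        if (amount <= 0 || amount > 10000m)
            throw new ArgumentOutOfRangeException("amount", "Bonus must be between 0 and 10,000.");

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO Bonuses (EmployeeId, Amount, GrantedAt) VALUES (@e, @a, @t)", conn))
        {
            cmd.Parameters.AddWithValue("@e", employeeId);
            cmd.Parameters.AddWithValue("@a", amount);
            cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}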
One important thing about web services is interoperability, so that different applications on different platforms can later make use of the services and data. Your company will benefit a lot by doing so. And you are right about the security: it is definitely one of the good reasons to use a web service rather than expose a public endpoint of the database, which is dangerous!
Web services also make your data more accessible. For example, your data can be accessed in the browser by JavaScript; there is no way to access the database on the server directly from JavaScript.
All in all, go for it, that is the right approach.
The security argument is questionable; authenticating to a web service is no different from authenticating to the database.
There are legitimate reasons for moving DB operations to web services and SOA in general, but security isn't one of them.
If you use a web service, hopefully you will also be using some kind of queue when sending the data to the database. If you use a web service and queue combo, the safety benefit comes into play as a lower chance of lost data: without it, if you send data to the database and it never gets there, it has nowhere to go and just disappears (a rough sketch of this pattern follows below).
You are correct, though: if someone wants to break into your system, a web service isn't going to help. If anything it might make it worse: if you make the web service public and they find its name, they can just query your DB through the web service, and any security features on your servers will just think it is your applications getting the information.
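For illustration, a rough in-process sketch of the web-service-plus-queue shape described above, using System.Threading.Channels; a production setup would use a durable queue (MSMQ, RabbitMQ, a service broker, etc.) so pending writes survive a crash, and all type names here are made up:

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class OrderMessage
{
    public string Symbol;
    public int Quantity;
}

public class QueuedOrderWriter
{
    // In-process stand-in for a durable queue; a real deployment would use a
    // broker so that accepted messages are not lost if the process dies.
    private readonly Channel<OrderMessage> queue = Channel.CreateUnbounded<OrderMessage>();

    // The web service method only validates and enqueues, then returns quickly.
    public async Task AcceptAsync(OrderMessage msg)
    {
        await queue.Writer.WriteAsync(msg);
    }

    // A background worker drains the queue and writes to the database,
    // retrying on failure instead of silently losing the data.
    public async Task DrainAsync()
    {
        await foreach (var msg in queue.Reader.ReadAllAsync())
        {
            // INSERT the order into the database here; on failure, re-enqueue or log for retry.
            Console.WriteLine($"writing order: {msg.Symbol} x {msg.Quantity}");
        }
    }
}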
