I was reading about the difference between the tiers of architecture (2-tier and 3-tier), and I learned that the latter is considered safer than the former. One website said that the 2-tier architecture poses security risks, but I am unable to understand what security risks it could pose.
I took the example of ticketing software that used to have a 2-tier design. If multiple clients are sending queries, can one client access another client's information? Can the responses get mixed up, sending the wrong information to each client?
I can't think of what security issues could exist. It would be great if anybody could drop in an answer.
In a two-tier system, clients access a database directly. An improperly secured database could grant a client too much access, and securing a database for public access takes quite a bit of work. Databases are general execution systems and are not usually designed with fine-grained security controls (exceptions do exist).
Three-tier systems do not expose a general execution system to clients. They expose specific methods instead, and securing the middle tier is usually much more straightforward.
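To make "specific methods" concrete, here is a minimal sketch of such a middle-tier method for the ticketing example (the class name, table, and credentials are hypothetical, and it assumes a MySQL JDBC driver on the classpath). The client can only ask for a ticket's status by ID; it never holds database credentials or sends raw SQL, so it has no way to read another client's data:

    import java.sql.*;

    public class TicketService {
        // Hypothetical connection details; only the middle tier knows them.
        private static final String DB_URL = "jdbc:mysql://localhost:3306/ticketing";

        // The only operation exposed to clients for this table.
        public String getTicketStatus(int ticketId) throws SQLException {
            // Parameterized query: client input can never alter the SQL itself.
            String sql = "SELECT status FROM tickets WHERE id = ?";
            try (Connection conn = DriverManager.getConnection(DB_URL, "app_user", "app_pass");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setInt(1, ticketId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("status") : null;
                }
            }
        }
    }

In a two-tier setup, by contrast, each client would hold those credentials itself, and anything the credentials allow, the client can do.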
I'm curious how you would handle the following database access scenario.
Suppose you have a computer that hosts your database as part of its server duties, and multiple client PCs running client-side software that needs to get information from this database.
AFAIK there are two ways to do this:
1. Each client connects directly to the database.
2. Each client connects to a server-side application, which connects to the database as some sort of data access layer.
So what I'd like to know is:
What are the pros and cons of each solution?
And are there other solutions out there which might be "better" for this work?
I would DEFINITELY go with suggestion number 2. No client application should talk to a datastore without a broker, i.e.:
ClientApp -> WebApi -> DatabaseBroker.class -> MySQL
This is the sound way to do it, as you separate concerns and define one organized path to the datastore.
Some benefits are:
decouple the client from the database
you can centralize all upgrades, additions and operability in one location (DatabaseBroker.class) for all clients
it's very scalable
it keeps business logic safely on the server
Think of it with this layman's example:
Marines are not allowed to bring their own weapons to battle (client apps talking directly to the DB). Instead, they check out a weapon from the armory (the API). The armory has control over all weapons, repairs and upgrades (the data in the database) and determines who gets what.
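Here is a minimal, self-contained sketch of that chain in Java (the broker is stubbed with canned data where the JDBC code and credentials would live; the endpoint, port, and names are made up for illustration):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class WebApi {

        // Hypothetical broker: the single place that would talk to MySQL.
        static class DatabaseBroker {
            List<String> findOrdersForCustomer(int customerId) {
                return List.of("order-1001", "order-1002"); // stand-in for a JDBC query
            }
        }

        public static void main(String[] args) throws Exception {
            DatabaseBroker broker = new DatabaseBroker();
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", exchange -> {
                // Every client request funnels through this one path to the datastore.
                byte[] body = broker.findOrdersForCustomer(42).toString()
                                    .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }

The payoff shows in the benefits list above: to upgrade the schema or add caching, you change DatabaseBroker once, and every client keeps calling the same /orders endpoint.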
What you have described sounds like two different kinds of multi-tier architectures.
The first option matches a two-tier architecture, and the second could be a three-tier one.
AFAIK there are two ways to do this
You can divide your application into any number of physical tiers, so you will find more arrangements that fit this architecture (n-tier) than the two described above.
What are the pros and cons of each solution?
Usually the motivation for splitting your application into tiers is to meet some kind of non-functional requirement (maintainability, availability, security, etc.). The problem is that when you add extra tiers you also add complexity; e.g., your application components need to communicate with each other, and this is more difficult when they are distributed across several machines.
And are there other solutions out there which might be "better" for this work?
I'm not sure what you mean by "work" here, but notice that you don't need to add extra tiers just to access a database. If you have a desktop application installed on a few machines, a classical client/server (two-tier) model should be enough. A web-based application, however, needs an extra tier for interacting with the browser; in that case, the database access is not the motivation for adding the extra tier.
Should client applications be coded so that they connect to and retrieve data from the remote SQL database?
Based on my knowledge I would say that is extremely bad practice, and that you should instead have a server application which handles all clients and acts as a central unit for retrieving data. Is this right?
Are business information systems ever built without a server application to handle clients?
Depends what's meant by 'client applications'. Internal client applications within a business can often work well by interacting directly with a central database. Certainly make them use read-only credentials unless they explicitly need to write.
An external client application is perhaps another question. If you're distributing, say, an iPhone app I would definitely write an API server to wrap common requests.
The extra layer of abstraction is usually helpful for more than security; consider scalability. What if you suddenly had orders of magnitude more client requests? It's much easier to add caching or other performance enhancements to an API service than to update each client. Much better to build an architecture that can be changed than to tie yourself down to a direct implementation.
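As a rough sketch of what that looks like inside the API service (the class and method names are invented, and the database call is stubbed): an in-memory cache can be dropped in front of the lookup without any client noticing, because clients only ever see the same endpoint.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ProductApi {
        private final Map<Integer, String> cache = new ConcurrentHashMap<>();

        // Same operation the clients have always called.
        public String getProduct(int id) {
            // computeIfAbsent hits the database only on a cache miss.
            return cache.computeIfAbsent(id, this::loadFromDatabase);
        }

        private String loadFromDatabase(int id) {
            return "product-" + id; // hypothetical stand-in for the real query
        }
    }

Had every client connected to the database directly, the same optimization would mean shipping an update to every installed copy.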
I'm looking to develop an application for Mac and iOS devices. The application will rely on information stored in a remote database, and it needs both read (select) and write (insert, update, delete) access. It will be a multi-user application.
Now I'm looking at two different approaches to access the database:
- via web service: the application accesses the web service (REST, JSON) which accesses the database. Authentication will be done via HTTP authentication over SSL (https).
- access the remote database directly over a VPN.
The app will be used by a maximum of, let's say, 100 people, and is aimed at small groups/organizations/businesses.
So my question is: what would be the best approach to access the database? What about security and performance? What would a typical implementation for a small business look like?
Any advice will be appreciated.
Thanks
Using web services adds a level of indirection between the clients and the database. This has several advantages, all stemming from the fact that the clients need no knowledge of the database, only of your web service interface. Since client applications are more complicated to control and update than your server-side code, it pays to add a level of business logic on the server that lets you tweak your system without pushing updates to the clients. Main advantages:
Flexibility - you can change the database configuration / replace the data layer altogether and change nothing on the client apps as long as you keep the same web service interface.
Security - implement some authentication mechanism for your web services, and avoid giving clients access credentials to your database engine.
There are some disadvantages too: you pay for that flexibility by adding a level of complexity; it'd probably be faster to just code the database access into the clients and be done with it. Consider the web services layer an investment that might pay dividends down the road. Whether it's worth it really depends on your business requirements and outlook.
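The flexibility point can be made concrete with a small sketch (all names hypothetical): code the service against an interface, and the data layer can be swapped without touching the web service contract the apps depend on.

    import java.util.List;

    interface UserStore {
        List<String> findUserNames();
    }

    // Today's implementation: a relational database (query stubbed here).
    class SqlUserStore implements UserStore {
        public List<String> findUserNames() {
            return List.of("alice", "bob"); // stand-in for a JDBC query
        }
    }

    // Tomorrow's implementation: a different engine, same interface.
    class DocumentUserStore implements UserStore {
        public List<String> findUserNames() {
            return List.of("alice", "bob"); // stand-in for a document-store query
        }
    }

    public class UserService {
        private final UserStore store;
        UserService(UserStore store) { this.store = store; }

        // The response the iOS/Mac clients receive is identical whichever
        // store is plugged in, so they never notice the swap.
        public String handleGetUsers() {
            return "[\"" + String.join("\",\"", store.findUserNames()) + "\"]";
        }
    }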
Given the information you have provided, the answer is almost certainly web services, unless the VPN is fast.
If the VPN is fast enough to handle the traffic, you will save a lot of time, effort and expense by accessing the database directly from your application.
You can also provide remote access to virtual PC sessions, if that's your thing.
So it's all going to depend on what your requirements are. There are a lot of ways to do this, and each has its advantages and disadvantages. Making the right decision will require a fair amount of systems analysis, probably beyond the scope of a question posted on StackOverflow.
I am just getting started breaking a .NET application and its SQL Server database into two systems - an intranet and a public website.
The various database tables will need to be synchronised between the two databases in different ways, for example:
Moving from web to intranet, with the intranet data becoming read-only
Moving from intranet to web, with the web data becoming read-only
Tables that need to be synchronised and are read/write on both the intranet and web databases.
Some of the synchronisation needs to occur relatively quickly with minimal lag, possibly with some type of transaction locking to ensure repeatable reads etc. Other times it doesn't matter if there is a delay between synchronisation.
I am not quite sure where to start with all this, as there seems to be many different ways of achieving this. Which technologies and strategies should I be looking at?
Any tips?
A system like that looks like the components are fairly tightly coupled. An upgrade across several systems all at once can turn into quite the nightmare.
It looks like this is less of a replication problem and more a problem of how to maintain a constant connection to a remote database without much I/O lag. While it can be done, it probably isn't going to work out very well in terms of scalability and the ability to troubleshoot problems.
You might look at using some message queueing and asynchronous data processing from the remote site to the intranet. You'll probably have to adjust some expectations on the business side so that they don't assume everything is accessible in real time all the time.
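To illustrate the shape of that queueing approach: your stack is .NET, where MSMQ or a Service Broker queue would play this role, and in production the queue would be a durable broker spanning the two sites. The Java sketch below uses an in-memory queue and made-up names purely to show the pattern:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ChangeForwarder {
        // A change record captured on the web side.
        record Change(String table, int rowId, String payload) {}

        private final BlockingQueue<Change> queue = new LinkedBlockingQueue<>();

        // Called on the web side: cheap and non-blocking from the user's view.
        public void enqueue(Change c) throws InterruptedException {
            queue.put(c);
        }

        // Runs on the intranet side, applying changes at its own pace.
        public void startWorker() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Change c = queue.take(); // blocks until work arrives
                        applyToIntranetDb(c);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        private void applyToIntranetDb(Change c) {
            System.out.println("applied " + c); // stand-in for the real UPDATE/INSERT
        }
    }

The point is the decoupling: an outage between the sites delays the worker, but the web side keeps accepting writes.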
Of course, it's hard to give specifics without more details. It might be a good idea to look into the principles of SOA and messaging systems for what you're trying to do.
Out of the box you have SQL Server Replication. It sounds like a pair of filtered transactional replication publications can do the job. Transactional replication has a low overhead on the publisher and can ensure transactional consistency of the published changes.
Nathan raises some very valid points about the need for a more loosely coupled solution. Service Broker can fit that shoe quite well with its loosely coupled, asynchronous nature, and it provides a headache-free upgrade path, since SSB is compatible across SQL Server versions and editions. But this freedom comes at the cost of leaving the heavy lifting of actually detecting the changes and applying them to the tables to you, as application code, which is not a trivial feat.
I'm facing the following challenge:
I have a bunch of databases in different geographical locations where the network may fail a lot (I'm using a cellular network). I need to keep all the databases synchronized, but it doesn't need to be in real time. I'm using Java, but I have the freedom to choose any free database.
How can I achieve this?
This is a problem with a quite established corpus of research (of which people are apparently unaware). I suggest not reinventing a poor, defective wheel unless absolutely necessary (for example, unless your requirements are so unusual that they allow a trivial solution).
Some keywords: replication, mobile DBMSs, distributed disconnected DBMSs.
These research papers are also relevant (as a sample of this research field):
Distributed disconnected databases
The dangers of replication and a solution
Improving Data Consistency in Mobile Computing Using Isolation-Only Transactions
Dealing with Server Corruption in Weakly Consistent, Replicated Data Systems
Rumor: Mobile Data Access Through Optimistic Peer-to-Peer Replication
The Case for Non-transparent Replication: Examples from Bayou
Bayou: replicated database services for world-wide applications
Managing update conflicts in Bayou, a weakly connected replicated storage system
Two-level client caching and disconnected operation of notebook computers in distributed systems
Replicated document management in a group communication system
... and so on.
I am not aware of any database that will give you this functionality out of the box; there is a lot of complexity here due to the need for eventual consistency and conflict resolution (e.g., what happens if the network gets split into two halves, and you update something to the value 123 while I update it on the other half to 321, and then the networks reconnect?).
You may have to roll your own.
For some ideas on how to do this, check out the design of Yahoo's PNUTS system: http://research.yahoo.com/node/2304 and Amazon's Dynamo: http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html
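To make the 123-vs-321 conflict concrete, here is a toy merge rule: a deliberately naive last-writer-wins policy with a deterministic tie-break, not what PNUTS or Dynamo actually do (Dynamo uses vector clocks, since simple counters or wall clocks can silently drop a concurrent update).

    public class ReplicatedValue {
        private int value;
        private long version; // logical version, incremented on each local write

        public synchronized void localWrite(int newValue) {
            value = newValue;
            version++;
        }

        // Called when the network heals and a remote replica's state arrives.
        public synchronized void merge(int remoteValue, long remoteVersion) {
            if (remoteVersion > version) {
                // The remote replica saw more writes: take its value.
                value = remoteValue;
                version = remoteVersion;
            } else if (remoteVersion == version && remoteValue != value) {
                // True conflict (123 vs 321 at the same version): some policy
                // must decide; here we keep the deterministic maximum.
                value = Math.max(value, remoteValue);
            }
            // Otherwise the local value is newer; keep it.
        }

        public synchronized int get() { return value; }
    }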
Check out SymmetricDS. SymmetricDS is web-enabled, database-independent data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale to a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
I don't know your requirements or your apps, but this isn't a quick-answer type of question. I'm very interested to see what others have to say. However, I have a suggestion that may or may not work for you, depending on your requirements and situation. In particular, this will not help if your users need to use the app even when the network is unavailable (offline access).
Keeping a bunch of small databases synchronized is a fairly complex task to do correctly. Is there any possibility of just having one centralized database, and either having the client applications connect directly to it or (my preferred solution) writing some web services to handle accessing and updating data, rather than having a bunch of client databases?
I realize this limits offline access, but there are various caching strategies you can use. (Which, of course, leads you back to your original question.)