Advice on moving to a multi-tier Delphi architecture - database

We have a relatively large application that is strongly tied into Firebird (stored procedures, views etc). We are now getting a lot of requests to support additional databases and we would also like to move a lot of the functionality from the client to the server.
Now seems like a good time to move to a 3(4) tier architecture. We have already looked at DataSnap 2009 and RemObjects SDK/DataAbstract. Both seem like they would do the job, but are there any advantages/disadvantages we should look out for? Are there any other frameworks that you could recommend?
Cheers,
Paul

I can recommend the kbmMW middleware components from Components4Developers. There is a bit of a learning curve, but they are very flexible and hold up well under real-world conditions.
Comment from a user (http://www.components4programmers.com/usercomments/commentfromapowerusertoaquestion.htm)

Changing your application to multi-tier with a new framework (RemObjects, DataSnap, kbmMW, or whatever) will mean a lot of changes to your application architecture. I recommend going that way eventually, but you can achieve multi-database support today with other products, such as:
UniDAC from Devart (excellent database components with direct connections).
AnyDAC (from the same company that offers RemObjects).
SqlDirect (supports 9 major databases and also ODBC).
ZeosDBO (open source).
Using one of the component sets above will give you support for most major databases, and it will not force you into a lot of changes; in some cases you just replace the old database components with the new ones and perhaps change some properties.
However, moving to multi-tier will not just let you support more databases; it will also separate your business logic from the presentation layer, so you can add more presentation layers to your application, such as a web interface or smart devices.
Most important of all, with a multi-tier architecture you get a scalable system that can grow beyond the number of connections your database can handle, besides other benefits such as being able to write client applications in other languages.

In the process of moving to a multi-tier application, consider using a transport protocol between the layers that is language/technology independent, such as web services (I think RemObjects supports that).
This could make it simpler to reimplement a layer later (for example, if you later have to build another version of the client application in a browser/Java/Silverlight).
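To make that concrete: if the middle tier speaks something neutral like HTTP plus JSON, any client stack can consume it or be swapped in later. A minimal sketch in Java (the host, port and resource path are made up for illustration):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CustomerClient {
        public static void main(String[] args) throws Exception {
            // Any client technology can talk to the middle tier because the
            // wire format is neutral: plain HTTP carrying JSON, no Delphi types.
            URL url = new URL("http://appserver:8080/api/customers/42");
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestProperty("Accept", "application/json");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // e.g. {"id":42,"name":"..."}
                }
            }
        }
    }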

You can also investigate MidWare: http://www.overbyte.be/frame_index.html

For multi-tier architecture I also recommend checking out message-oriented middleware.
With message-oriented middleware, cross-language and cross-platform application integration can be implemented using the peer-to-peer or the publish/subscribe communication model. Messaging systems are loosely coupled, asynchronous and reliable. For example, they are core components in Java(tm) application servers such as JBoss.
For Firebird, I recently wrote a blog article on replacing Firebird database events, their limitations and ways to replace them with message-broker based solutions (which are available as open source):
Firebird Database Events and Message-oriented Middleware (part 1)
Firebird Database Events and Message-oriented Middleware (part 2)
(disclaimer: I am a developer of Delphi and Free Pascal client libraries for open source message brokers).
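As a taste of the programming model, here is a minimal publish/subscribe sketch using the standard JMS API against an ActiveMQ broker (the broker URL and topic name are illustrative; in the Firebird scenario the publisher would fire whenever a record changes):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class RecordChangedPublisher {
        public static void main(String[] args) throws JMSException {
            // Connect to a local ActiveMQ broker (URL is illustrative)
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Publish/subscribe: every subscriber of the topic receives the
            // event - like a database event, but asynchronous, reliable and
            // usable from any language with a client library
            Topic topic = session.createTopic("firebird.orders.changed");
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage("ORDER_ID=42 updated"));

            connection.close();
        }
    }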

Related

How to call SQL Server stored procedure from Android in Xamarin

We have a mobile application made in VB.NET for Windows CE/Mobile smart devices for shipping/reception operations in a warehouse. This application connects to a SQL Server and extensively uses stored procedures. Since Windows Mobile devices are discontinued and replaced by Android devices, we have to convert our solution to Android, using Visual Studio's Xamarin and C#.
I'm a newbie at Android programming. Is there a way we can connect directly to a SQL Server instance and call a stored procedure from Android? I did some searching, and people say it's better to call web services as an intermediary between Android and SQL Server. Is that the best practice?
Thanks for your insight and help
Upgrading legacy applications can be incredibly complicated depending on the technology and frameworks used.
The first thing I would suggest is taking a look at the architecture documentation Microsoft produced for Xamarin and cross-platform frameworks, which you can see here.
Typical Application Layers
Data Layer – Non-volatile data persistence, likely to be a SQLite database but could be implemented with XML files or any other suitable mechanism.
Data Access Layer – Wrapper around the Data Layer that provides Create, Read, Update, Delete (CRUD) access to the data without exposing implementation details to the caller. For example, the DAL may contain SQL statements to query or update the data, but the referencing code would not need to know this (see the sketch after this list).
Business Layer – (sometimes called the Business Logic Layer or BLL) contains business entity definitions (the Model) and business logic. Candidate for the Business Façade pattern.
Service Access Layer – Used to access services in the cloud: from complex web services (REST, JSON, WCF) to simple retrieval of data and images from remote servers. Encapsulates the networking behavior and provides a simple API to be consumed by the Application and UI layers.
Application Layer – Code that's typically platform-specific (not generally shared across platforms) or specific to the application (not generally reusable). A good test of whether to place code in the Application Layer versus the UI Layer is (a) to determine whether the class has any actual display controls or (b) whether it could be shared between multiple screens or devices (e.g. iPhone and iPad).
User Interface (UI) Layer – The user-facing layer; contains screens, widgets and the controllers that manage them.
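The guide's samples are C#, but the Data Access Layer idea is language-agnostic. Here is a minimal, hedged sketch in Java (the items table and JDBC URL are invented, and a SQLite JDBC driver is assumed on the classpath): callers get CRUD methods and never see the SQL.

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    // Callers see only CRUD methods; the SQL stays hidden inside the DAL.
    interface ItemDao {
        void create(String name) throws SQLException;
        List<String> readAll() throws SQLException;
    }

    class SqliteItemDao implements ItemDao {
        // Swapping this for XML files or another store would not affect callers
        private static final String URL = "jdbc:sqlite:app.db";

        public void create(String name) throws SQLException {
            try (Connection c = DriverManager.getConnection(URL);
                 PreparedStatement ps =
                     c.prepareStatement("INSERT INTO items(name) VALUES (?)")) {
                ps.setString(1, name);
                ps.executeUpdate();
            }
        }

        public List<String> readAll() throws SQLException {
            List<String> names = new ArrayList<>();
            try (Connection c = DriverManager.getConnection(URL);
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM items")) {
                while (rs.next()) names.add(rs.getString(1));
            }
            return names;
        }
    }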
Now, you could simply use the System.Data.SqlClient assembly and fire off stored procedure calls against your database. However, a more common approach would be to create a REST API that sits in between your client and your back-end services.
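The question's context is C#/Xamarin, but the intermediary itself can be any server stack. As a hedged illustration in Java (the connection string, credentials and the dbo.GetOrders procedure are all invented; a SQL Server JDBC driver is assumed on the classpath), the essential move is that the client only ever calls HTTP, and only the service touches SQL Server:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.sql.*;

    public class OrdersService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", exchange -> {
                StringBuilder json = new StringBuilder("[");
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://dbhost;databaseName=Warehouse",
                         "svc_user", "secret");
                     // The stored procedure stays behind the API; the Android
                     // client never sees SQL, only HTTP + JSON.
                     CallableStatement cs = con.prepareCall("{call dbo.GetOrders(?)}")) {
                    cs.setInt(1, 1); // hypothetical warehouse id parameter
                    try (ResultSet rs = cs.executeQuery()) {
                        while (rs.next()) {
                            if (json.length() > 1) json.append(',');
                            json.append("{\"id\":").append(rs.getInt("id")).append('}');
                        }
                    }
                } catch (SQLException e) {
                    // a real service would return a 500 response here
                }
                json.append(']');
                byte[] body = json.toString().getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }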
You can download a handy e-book created by the devs over at Microsoft that shows some common enterprise patterns for cross-platform technologies like Xamarin, which can be found here.
Here's one such example of the kind of patterns those links refer to.
You can also find an overview of various web services that you can use at this link
The options it gives you an overview of are:
ASMX
WCF
REST
So there's plenty of choice, but it depends on your current approach.

How to design/develop an integration layer or bus for different external services/apps

We are currently looking into replacing one of our apps with an ESB or some similar tool, and I was looking for some insights into how best to approach this.
We currently have a stand-alone service that consumes/interacts with different external services and data sources, some delivered through SOAP web services and others through a plain DB connection. This service is exposed through SOAP, and we have other apps that consume it but are very tightly coupled to it. Now we also have other apps that need to consume some of the external services, and we would like to replace all of this with an ESB or some sort of SOA platform.
What would be the best way to replace this 'external' services integration layer with an ESB? We were thinking of having a 'global' contract/API in which all of the services we consume are exposed as one single contract, with all the possible operations and data structures we use exposed under one single namespace. Would this be the best way of approaching it? If so, are there any tools that could help us automate this process, or do we basically have to handcraft the contract/API? This would also mean that for any changes to the underlying services/APIs we would have to update this new API as well.
If not, then the other option I see is to use the ESB as a 'proxy' layer in which all of our sources are exposed as they are, so we would end up with several different contracts/API endpoints, but I don't really see the value in that.
Also, given the above, what would be the best tool for the job? Is a full-blown ESB overkill, or are we much better off rolling our own using something like Apache Camel or Spring Integration?
A few more details:
We are currently integrating over 5 different external services, with more to come in the future.
Only a couple of apps consume our current app at the moment, but several other apps/systems will need to consume some of these external services in the future.
We are currently using a single method of communication (SOAP) between these services, but some apps might use pub/sub messaging in the future, although SOAP will still be the main protocol used.
I am new to ESB integration so I apologize in advance if I'm misunderstanding a lot of these technologies and the problems they are meant to solve.
Any help/tips/pointers will be greatly appreciated.
Thanks.
You need to put some design thought into what you want to achieve over time.
There are multiple benefits and potential pitfalls with an ESB introduction.
Here are some typical benefits/use cases:
When your applications are hard to change or have very different release cycles, it's convenient to have an ESB in the middle that can adapt to changes quickly. This is very much the case when your organization buys a lot of COTS products and cloud services that might come with an update the next day that breaks the current API.
When you need to adapt data from one master data system to several other systems that don't support the same interfaces: the CRM system might want data imported via web services as soon as it's available, the ERP wants data through DB/staging tables, and the production system wants data every weekend in a flat file delivered via FTP. To keep the master data system clean and easy to maintain, implement one single integration service in the master data system, and adapt that interface to the various other applications within the ESB platform instead.
Aggregation or splitting of data from various sources to protect your sensitive systems can be another use case. Say you have an old system that can only take small updates of information at a time and it's not worth upgrading; an integration solution that can do aggregation, splitting or throttling can be a good fit.
Other benefits and use cases include the ability to track and wire-tap every message passing between systems, which can even be used together with business intelligence tools to gather KPIs.
A conceptual ESB can also introduce a canonical message format that is used by all services that need to communicate. If a lot of applications share the same data with several other applications (not only point to point), the benefits of a canonical message format can outweigh the cost (which is, or can be, high). An ESB server can be useful for dealing with canonical data, as it is usually very good at mapping from one format to another.
However, introducing an ESB without a plan for the benefits you are trying to achieve is not a good thing, since it introduces overhead: you need another server to keep alive, and perhaps another team to understand all the data flows. You need particular knowledge of your integration product. Finally, you need some governance around it so that your ESB initiative does not drift away from the goals/benefits you have foreseen.
You should choose a technology that you are comfortable with, or think you can become comfortable with. Apache Camel is indeed very powerful and my favorite integration engine, but it's not an ESB on its own, as it does not come with a runtime you can use to deploy/manage/monitor your integration services. You can use it together with most Java EE application servers, or even better, Apache ServiceMix (= Karaf + Camel + ActiveMQ + CXF), which is built for this task.
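For a flavour of what Camel routes look like, here is a minimal, hedged sketch (the endpoint URIs are illustrative, the JMS component must be configured against your broker, and each component needs its dependency on the classpath) fanning one master-data feed out to several consumers over different transports:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class MasterDataRoutes {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // One inbound feed from the master data system, adapted to
                    // three consumers that each want a different transport
                    from("jms:queue:masterdata.updates")
                        .multicast()
                        .to("http://crm.local/import",   // CRM: web-service push
                            "file:/exports/production",  // production: flat files
                            "log:erpStaging");           // stand-in for an ERP staging-table writer
                }
            });
            context.start();
            Thread.sleep(60_000); // keep the JVM alive for the demo
            context.stop();
        }
    }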
The same goes for Spring Integration: you need to run it somewhere, in an app server or whatever.
There is a large set of different products, both open source and commercial, that do these things.

Building a REAL database application using Datasnap

I have built an extensive 2-tier application in D2010, using ADO and DevExpress. I want to upgrade this to DataSnap, mainly to provide HTTPS communication instead of plain TCP/IP to the vulnerable SQL Server. I have followed all the DataSnap tutorials I could find, and I have Cary Jensen's Delphi in Depth: ClientDataSets. All well and good, but the examples are pretty useless, because in a REAL database application grids are populated by joining multiple tables together and almost never from a single table. This rules out the "autoresolve" capability of ClientDataSets right off the bat. Even the proposed BeforeUpdateRecord event handlers won't work in a DataSnap application, because the DB is only accessible to the DataSnap server. So it seems to me I have to create a method on the DataSnap server for EACH insert/update I am going to need, then expose those methods to the client and call them from the client as required to ask the DataSnap server to perform the required updates/inserts. This seems like a lot of work!
Is there an easier way to implement https comms to a SQL Server?
Oh, in case you're wondering, the application is already pseudo-3-tier in that grids are wired to TdxMemData and never directly to TADOQueries. I handle all inserts/updates myself in the same way I would have needed to if I had used TClientDataSets.
If you think your database is vulnerable, think twice about using D2010 DataSnap. It is very, very vulnerable. Don't be fooled by HTTPS; there are still a lot of pieces missing to fully protect the channel. For example, once you use DataSnap, SQL Server Windows integrated authentication (Kerberos-based) is gone.
For a full explanation see: Why Datasnap 2010 is a toy library. It's of course my personal opinion, but it is based on my experience using Midas/DataSnap since Delphi 3 and my current work in IT security.
Anyway, you're wrong about inserts/updates/deletes. You have to use the providers' events to control them on the DataSnap server side. It's a bit more complex than handling them in a two-tier application, but you don't need ad-hoc methods for each operation.
[2016 Update: DataSnap in 2016 is even more woefully behind in terms of security and features now than it was when this question was written. I do not recommend its use in any new designs at all, ever.]
DataSnap is a solution to the problem of building multi-tier (three or more) applications. Directly connecting to SQL over the internet from a thick client that contains all the business logic has many well-understood problems, including the fact that business logic changes then require you to update ALL your clients at once. A middle-tier improvement (business logic change) that lives inside your DataSnap (or other) middle-tier logic does not have to be distributed to each client. The clients are thinner and contain less of the business logic. Secondly, a well-designed DataSnap "API" that you build yourself only exposes you to the risks you create yourself, rather than to the entire set of MS SQL vulnerabilities.
Frankly, losing Kerberos authentication from your thick client is not a reason to abandon the idea of a middle tier. I don't understand ldsandon's point at all here. Is he advocating a two-tier application architecture that connects to internet or LAN clients, and that contains all the business logic, as "more secure" than a multi-tier application?
The implicit question suggested by your title is unanswerable and undefined. What does "real" mean? Many industries deploy two-tier thick clients inside their own corporate LANs. Many have found it beneficial to use a middle tier inside their own LAN, and many have found that external applications that run over the internet should definitely NOT surface the SQL connectivity to thick clients, so they provide some kind of "web method" (SOAP, REST+JSON, etc.) architecture. It has been carefully pointed out that DataSnap is not a purely "RESTful" architecture, but it does use JSON and is in many ways REST-ful in design, although not fully.
If you don't understand the problem that DataSnap was created to solve, it is easy to think DataSnap is worthless, or (alternatively, and equally wrongly) some kind of silver bullet. It exists for a particular purpose, one that many people find useful for their development needs. If you intend to take on the work of building a middle tier, DataSnap makes it easier than rolling your own middle tier 100% yourself, but it is more work than not having a middle tier at all.

Simplest way to develop an app that can use multiple types of databases?

I have a project for a class which requires that if a database is used, options exist for the user to pick a database to use which could be of a different type. So while I can use e.g. MySQL for development, in the final version of the project, the user must be able to choose a database (Oracle, MySQL, SQLite, etc.) upon installation. What's the easiest way to go about this, if there is an easy way?
The language used is up to me as long as it's supported by the department's Linux machines, so it could be Java, PHP, Perl, etc. I've been researching and found info on ODBC, JDBC, and SQLJ (such as this), but I'm quite new to databases, so I'm having a hard time figuring out what would be best for my needs. It's also possible there may not be a simple enough way to do this; the professor admitted he's not a database guy either, and he seemed to think it would be easy without having a clear idea of what it would take.
This is for a web app, but it ought to be fairly straightforward, using for example HTML and JavaScript on the client side and Java with a MySQL database on the server side. No mention has been made of frameworks, so I think they're beyond the scope. I have the option of using Tomcat or Apache if necessary, but the overall idea is to keep things simple, and everything used should be installable/changeable/configurable with just user-level access. So something like having to recompile PHP to use ODBC would be out, I think.
Within these limitations, what would be the best way (if any) to be able to interact with an arbitrary database?
The issue I think you will have here is that SQL is not truly standard. What I mean is that vendors (Oracle, MySQL, etc.) have included types and features that are not SQL standard in order to "tie you in" to their DB, such as Oracle's VARCHAR2 and so on.
When I was at university, my final year project was to create an application that allowed users to create relational databases using JDBC with a Java front-end.
The use of JDBC was very simple, but the issue was finding enough SQL features/types that all the vendors have in common, so that users could switch between them without any issues. A way around this is to implement modules to deal with vendor-specific issues and write ways to translate between them. For example, you may develop a database for MySQL with lots of MySQL-specific code in it, but then you may want to use Oracle, and then there are issues you would need to resolve.
I would spend some time looking at what core SQL standard all the vendors implement and then code to those features. I think the technology you use won't be the issue, but rather the SQL you write.
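To illustrate, here is a minimal JDBC sketch (the table, URL and credentials are invented; the matching driver jar must be on the classpath) where the JDBC URL is the only vendor-specific part and the SQL sticks to entry-level standard features:

    import java.sql.*;

    public class PortableQuery {
        public static void main(String[] args) throws SQLException {
            // The JDBC URL (and driver) is the only vendor-specific part;
            // the SQL below avoids vendor extensions, so the same code runs
            // against MySQL, Oracle, SQLite, and so on.
            String url = args.length > 0 ? args[0] : "jdbc:mysql://localhost/school";
            try (Connection con = DriverManager.getConnection(url, "user", "pass");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM students WHERE year_enrolled >= ?")) {
                ps.setInt(1, 2008);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }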
Hope this helps; apologies if it's not helpful!
Well, you can go two ways (in Java):
You can develop your own classes to work with different databases and load their drivers via JDBC. This way you create a data access layer for yourself, which takes some time.
You can use Hibernate (or another ORM). This way Hibernate takes care of things for you, and you only have to know how to use Hibernate. Learning Hibernate may take some time, but once you get used to it, it can be very useful for your future projects.
If you want to stick with Java, there's Hibernate (which wouldn't require a framework). Hibernate is fairly easy to use. You write HQL, which gets translated into the SQL needed for the database you're using.
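A minimal, hedged HQL sketch (the entity and its properties are invented, and a hibernate.cfg.xml with the right dialect and connection settings is assumed); the HQL is written once and stays the same whichever database the configuration points at:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    @Entity
    class Student {
        @Id Long id;
        String name;
        int yearEnrolled;
    }

    public class HqlDemo {
        public static void main(String[] args) {
            // hibernate.cfg.xml selects the dialect (MySQL, Oracle, ...);
            // only the configuration changes when you switch databases.
            SessionFactory factory = new Configuration()
                .addAnnotatedClass(Student.class)
                .configure()
                .buildSessionFactory();
            try (Session session = factory.openSession()) {
                session.createQuery(
                        "from Student s where s.yearEnrolled >= :year", Student.class)
                    .setParameter("year", 2008)
                    .getResultList()
                    .forEach(s -> System.out.println(s.name));
            }
            factory.close();
        }
    }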
Maybe use an object relational mapper (ORM) or database abstraction layer (DAL). They are designed to provide a standard API to multiple database backends, making it possible to switch between different backends with minimal or no changes to your code. In Python, for example, a popular ORM is SQLAlchemy, and an excellent DAL is the web2py DAL (it's part of the web2py framework but can be used as a standalone DAL outside the framework as well). There are many other options in other languages as well.
Use a framework with a database abstraction layer and ORM. Try Symfony or Rails.
There are a lot of object-relational mapping frameworks, unless you prefer JDBC. For simple/small applications this should work fine.

AIR: Connect to database over network?

Is there a way to make an AIR app connect to a database over a network? I know it has built-in SQLite support, but I need to connect to a database over a network. Is there any way to do this? If not directly, then maybe through the help of something else, like Java.
Thanks!
The main challenge will not be the "over a network" requirement, but rather that you need an ActionScript driver for your DBMS. There are some third-party libraries, e.g. asSql or Asql (both for MySQL), but I have no experience with either of them.
However, depending on your application, you might really want to consider introducing a back-end that encapsulates business logic and persistence, rather than having the AIR app talk to the remote DBMS directly; especially for multi-user apps I would definitely discourage you from doing that. If you want to introduce a back-end, the Java platform is certainly a good choice, since there are two very good AMF3 implementations (BlazeDS and GraniteDS). I would also recommend taking a look at the Grails framework, and especially the Grails Flex Plugin. There is a nice and informative article on InfoQ about Grails and Flex.
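To give an idea of the BlazeDS route: the back end can be an ordinary Java class exposed as an AMF3 remoting destination. A hedged sketch (the class, package and destination names are invented):

    package com.example;

    import java.util.Arrays;
    import java.util.List;

    // Exposed to the AIR/Flex client as an AMF3 remoting destination via
    // WEB-INF/flex/remoting-config.xml:
    //   <destination id="productService">
    //       <properties><source>com.example.ProductService</source></properties>
    //   </destination>
    public class ProductService {
        // Invoked from ActionScript, roughly:
        //   var ro:RemoteObject = new RemoteObject("productService");
        //   ro.getProducts();
        public List<String> getProducts() {
            // A real implementation would query the DBMS here (JDBC, Hibernate, ...)
            return Arrays.asList("Widget", "Gadget");
        }
    }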
