Exchange data between two apps across PCs on LAN - database

I have a need of implementing two apps that will exchange data with each other. Both apps will be running on separate PCs which are part of a LAN.
How can we do this in Delphi?
Is there any free component which will make it easy to exchange data between apps across PCs?

If I'm writing it myself, I (almost) always use sockets to exchange data between apps.
It's lightweight, it works well on the same machine, across the local network, or over the Internet with no changes, and it lets you communicate between apps with different permissions, such as services (Windows messages cause problems here).
It might not be a requirement for you, but I'm also a fan of platform-independent transports, like TCP/IP.
There are lots of free choices for Delphi. Here are a few that I know of. If you like blocking libraries, look at Indy or Synapse. If you prefer non-blocking, check out ICS.
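The pattern is the same in any of those libraries. Here is a minimal sketch in Python (chosen only to keep the listing short; the address handling and the message format are invented for the demo, and Indy's TIdTCPServer/TIdTCPClient follow the same shape):

```python
import socket
import threading

server_addr = []            # filled in once the listener is up
ready = threading.Event()

def serve_once():
    """One app listens, reads a message, and acknowledges it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))   # real apps agree on a fixed port
        srv.listen(1)
        server_addr.append(srv.getsockname())
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"ACK:" + conn.recv(1024))

threading.Thread(target=serve_once, daemon=True).start()
ready.wait()

# The other app connects the same way whether it runs on this machine
# or on another PC in the LAN: only the address changes.
with socket.create_connection(server_addr[0]) as cli:
    cli.sendall(b"status=42")
    reply = cli.recv(1024)
print(reply.decode())  # ACK:status=42
```

The same code serves localhost testing and LAN deployment, which is exactly the "no changes" point above.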

Before you choose a technique, you should characterize the communication according to its throughput, granularity, latency, and criticality.
Throughput -- how much data per unit time will you need to move? The range of possible values is so wide that the lowest-rate and highest-rate applications have almost nothing in common.
Granularity -- how big are the messages? How much data does the receiving application need before it can use the message?
Latency -- when one application sends a message, how soon must the other application see it? How quickly do you want the receiving application to react to the sending application?
Criticality -- how long can a received message be left unattended before it is overrun by a later message? (This is usually not important unless the throughput is high and the message storage is limited.)
Once you have these questions answered, you can begin to ask about the best technology for your particular situation.
-Al.

I used to use Mailslots if I needed to communicate with more than one PC at a time ("broadcast") over a network, although there is the caveat that mailslots are not guaranteed.
For 1-to-1, Named Pipes are the Windows way of doing this sort of thing: you basically open a communication channel between 2 PCs and then write messages into the pipe. Not straightforward to start with, but very reliable, and the recommended way for things like Windows Services.
MS offer Named Pipes as an alternative way of communicating with an SQL Server (other than TCP/IP).
But as Bruce said, TCP/IP is standard and platform independent, and very reliable.

DCOM used to be a good method of interprocess communication. This was also one of Delphi's strong points. Today I would strongly advise against using it.
Depending on the nature of your project I'd choose either
using a SQL server
socket communication

Look at solutions that use "Remote Procedure Call"-style interfaces. I use RemObjects SDK for this sort of thing, but there are open source versions, like RealThinClient, which would do just as well.
Both of these allow you to create a connection that for most of your code is "transparent", and you just call an interface which sends the data over the wire and gets results back. You can then program how you usually do, and forget the details of sockets etc.

This is one of those cases where there really isn't a "best" answer, as just about any of the technologies already discussed can be used to communicate between two applications. The choice of which method to use will really come down to the critical nature of your communication, as well as how much data must be transferred from one workstation to another.
If your communication is not time sensitive or critical, then a simple poll of a database or file at regular intervals might be sufficient. If your communication is critical and time sensitive, then placing a TCP/IP server in each client might be worth pursuing. If just time sensitive, then mailslots make a good choice; if critical but not time sensitive, then named pipes.

I've used the Indy library's Multicast components (IdIPMCastClient/Server) for this type of thing many times. The apps just send XML to each other. Quick and easy with minimal connection requirements.
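A rough equivalent in Python, for illustration (the group address, port, and XML payload are all made up for the demo, and the join is done on loopback so the snippet is self-contained; on a real LAN you would join on the actual NIC or INADDR_ANY; IdIPMCastClient/Server wrap the same socket options):

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 5007   # invented group/port for the demo

# Receiver: join the multicast group (on loopback here, see note above).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("127.0.0.1"))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv.settimeout(5)

# Sender: any app on the LAN fires its XML at the same group address.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton("127.0.0.1"))
send.sendto(b"<status machine='pc1'>ok</status>", (GROUP, PORT))

data, sender = recv.recvfrom(1024)
print(data.decode())
```

Every subscribed machine gets a copy, so one send reaches all peers without tracking connections.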

Probably the easiest way is to read and write a file (or possibly one file per direction). It also has the advantage that it is easy to simulate and trace. It's not the fastest option, though (and it definitely sounds lame ;-) ).
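A minimal sketch of the idea in Python (the paths and message format are invented; on a LAN the file would sit on a share both PCs can reach):

```python
import json
import os
import tempfile
from pathlib import Path

# Stand-in for a folder both apps can see; on a LAN this would be a
# network share. A temp dir keeps the demo self-contained.
exchange = Path(tempfile.mkdtemp())
outbox = exchange / "app_a_to_app_b.json"

def send(msg):
    # Write to a temp name, then rename: os.replace is atomic, so the
    # reader never observes a half-written file.
    tmp = outbox.with_suffix(".tmp")
    tmp.write_text(json.dumps(msg))
    os.replace(tmp, outbox)

def poll():
    # The receiving app checks at some interval and consumes the file.
    if not outbox.exists():
        return None
    msg = json.loads(outbox.read_text())
    outbox.unlink()
    return msg

send({"cmd": "refresh", "id": 7})
print(poll())  # {'cmd': 'refresh', 'id': 7}
```

Tracing is indeed trivial: just open the file in an editor while the apps run.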

A possibility could be to "share" objects across the network.
It is possible with a Client-Server ORM like our little mORMot.
This Open Source library works from Delphi 6 up to XE2 and uses JSON for transmission. Some security features are included (involving a RESTful authentication mechanism), and it can use any database - or no database at all.
See in particular the first four samples provided, and the associated documentation.

For Delphi application integration, a message oriented middleware might be an option. Message brokers offer guaranteed delivery, load balancing, different communication models, and they work cross-platform and cross-language. Open source message brokers include:
Apache ActiveMQ and ActiveMQ Apollo
Open Message Queue (OpenMQ)
HornetQ
RabbitMQ
(Disclaimer - I am the author of Delphi / Free Pascal client libraries for these servers)

Related

What APIs to use for program working across computers

I want to try programming something which can do things across multiple end points, so that when things occur on one computer, events can occur on others. Obviously the problem here is sending commands to the other end points.
I'm just not sure what I would use to do this; I'm guessing it would have to use an API with some kind of client-server model. I expect there are things that people use to do this, but I don't know what they are called.
How would I go about doing this? Are there common APIs which allow people to do this?
There are (at least) two types to distinguish between: RPC APIs and Message Queues (MQ).
An RPC-style API can be imagined as a remotely callable interface; it typically gives you one response per request. Apache Thrift1) is one of the frameworks designed for this purpose: an easy-to-use cross-platform, cross-language RPC framework. (And oh yes, it also supports Erlang, just in case ...). There are a few others around, like Google's Protocol Buffers, Apache Avro, and a few more.
Message Queuing systems are more suitable in cases where looser coupling is desired or acceptable. In contrast to an RPC-style framework and API, a message queue decouples request and response a bit more. For example, an MQ system is more suitable for distributing work to multiple handlers, or distributing one event to multiple recipients, via producer/consumer or publish/subscribe patterns. Typical candidates would be MSMQ, Apache ActiveMQ or RabbitMQ.
Although this can be achieved with RPC as well, it is much more complicated and involves more work, as you are operating at a somewhat lower abstraction level. RPCs shine when you need the request/response style and value performance higher than the comfort of an MQ.
On top of MQ systems there are more sophisticated Service Bus systems, like for example NServiceBus. A service bus operates at an even higher level of abstraction. They also have their pros and cons, but can be helpful too. In the end, it depends on your use case.
1) Disclaimer: I am actively involved in that project.
Without more information, I can just suggest you look at Erlang. It is probably the easiest language for learning distributed systems, since sending messages is built into the language, and it makes no difference to the language and the command itself whether the message is sent within the same PC, across the LAN, or through the internet to a different machine.

connecting to a database server using fsockopen or socket functions and issuing commands

How do we connect to a database server in any programming language using socket functions?
I am thinking of protocols like smtp, http, ftp, imap.
We connect to these ports and we issue commands (execute commands).
Like these, is it possible to connect to a database server (the port is 3306) and issue commands which might execute various functionality like DDL, DML, TCL?
Since people say "database server", I thought there should be some possibility to do what I'm thinking, instead of using language-specific SQL functions like mysql_connect, mysql_select or mysql_query...
I would like to have suggestions, answers and references. Maybe I am not using the relevant search string in Google to find information on this.
"You don't". Unless a particular service documents its protocol as a public API, this is risky, difficult, and prone to break at any minute. The protocol might even include elements specifically intended to make this hard. You can, of course, wireshark it and reverse engineer the protocol, but you never know for sure that the definition does not include 'On September 22nd, change all the Q's to R.'
You will need to communicate with the servers using their (custom) protocols.
Why would you want to reimplement these protocols though? Unless this is just academic curiosity, you're much better off using any libraries the DB vendors supply.
The functions you talk about (mysql_connect, mysql_query, ...) are essentially acting on a raw socket, but they know the protocol. They take an SQL query, and they take a socket and they process the query into the right data to send.
Protocols on things like DB servers are going to be non-human-readable. HTTP is pretty and clear compared to a protocol designed for small size and strict parsing. Unless you absolutely need to, I would avoid reinventing the wheel.
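To see the contrast, here is a sketch that speaks HTTP by hand over a raw socket, against a throwaway local server so it is self-contained; try the same against MySQL's port 3306 and you get a binary handshake instead of readable text:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A throwaway local HTTP server stands in for "a service whose protocol
# you can read": HTTP is plain text, unlike a DB wire protocol.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"hello")
    def log_message(self, *args):   # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Speaking the protocol by hand over a raw socket, telnet-style:
with socket.create_connection(("127.0.0.1", srv.server_port)) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk
srv.shutdown()

status_line = response.split(b"\r\n", 1)[0].decode()
print(status_line)  # e.g. HTTP/1.0 200 OK
```

Functions like mysql_connect do exactly this kind of send/receive on a raw socket, just in the server's binary dialect rather than readable text.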

What are the pros and cons of using database for IPC to share data instead of message passing?

To be more specific for my application: the shared data are mostly persistent data such as monitoring status and configurations -- not more than a few hundred items -- which are updated and read frequently, but at no more than 1 or 2 Hz. The processes are local to each other on the same machine.
EDIT 1: more info - Processes are expected to poll on a set of data they are interested in (i.e. monitoring). Most of the data are persistent during the lifetime of the program, but some (e.g. configs) are required to be restored after a software restart. Data are updated by the owner only (assume one owner for each datum). The number of processes is small too (not more than 10).
Although using a database is notably more reliable and scalable, it has always seemed to me like overkill, or too heavyweight, when all I am doing is sharing data within an application. Message passing with e.g. JMS also has a middleware part, but it is more lightweight and has a more natural and flexible communication API. Implementing event notification and the command pattern is also easier with messaging, I think.
It would be of great help if anyone could give me an example of the circumstances in which one would be preferable to the other.
For example, I know we can more readily share persistent data between processes using a database, although it is also doable with messaging, by distributing across processes and/or storing in some XML files.
And see here, http://en.wikipedia.org/wiki/Database-as-IPC, and here, http://tripatlas.com/Database_as_an_IPC. They say it'd be an anti-pattern when used in place of message passing, but they do not elaborate on e.g. how bad the performance hit can be using a database compared to message passing.
I have gone through several previous posts that asked a similar question, but I am hoping to find an answer that'd focus on design justification. From the questions I have read so far, I can see that a lot of people did use a database for IPC (or implemented a message queue with a database).
Thanks!
I once wrote a data acquisition system that ran on about 20 lab computers. All the communication between the machines took place through small tables on a MySQL database. This scheme worked great, and as the years went by and the system expanded everything scaled well and remained very responsive. It was easy to add features and fix bugs. You could debug it easily because all the message passing was easy to tap into.
What the database does for you is provide a fast, well-debugged way of maintaining concurrency while the network traffic gets very busy. Someone at MySQL spent a lot of time making the network stuff work well under high load, and if you write your own system using TCP/IP sockets you're going to be re-inventing those wheels at great expense of time and effort.
Another advantage is that if you're using the database anyway for actual data storage then you don't have to add anything else to your system. You keep the possible points of failure to a minimum.
So all these people who say IPC through databases is bad aren't always right.
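For illustration, the messages-in-small-tables idea can be sketched like this (Python with sqlite3 standing in for the MySQL server; the schema is a guess at the pattern, not the author's actual layout):

```python
import sqlite3

# sqlite3 stands in for the MySQL server in the answer above; the
# schema (sender, payload, consumed flag) is invented for the demo.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    sender TEXT, payload TEXT, consumed INTEGER DEFAULT 0)""")

def send(sender, payload):
    db.execute("INSERT INTO messages (sender, payload) VALUES (?, ?)",
               (sender, payload))
    db.commit()

def poll():
    # Each machine polls for unconsumed rows and marks them read;
    # the database takes care of the concurrency.
    rows = db.execute(
        "SELECT id, sender, payload FROM messages WHERE consumed = 0"
    ).fetchall()
    db.executemany("UPDATE messages SET consumed = 1 WHERE id = ?",
                   [(r[0],) for r in rows])
    db.commit()
    return [(sender, payload) for _, sender, payload in rows]

send("pc03", "temperature=21.4")
print(poll())  # [('pc03', 'temperature=21.4')]
```

The "easy to tap into" point holds here too: any SQL client can watch the messages table while the system runs.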
Taking into account that a DBMS is a way to store information and messages are a way to transport information, your decision should be based on the answer to the question: "do I need the data to persist over time, or is the data consumed by the recipient?"

Messaging Middleware Vs RPC and Distributed Databases

I would like to know your opinions on advantages and disadvantages of using
Messaging Middleware vs. RPC and Distributed Databases in a distributed application?
These three are completely different things:
Message Oriented Middleware (MOM): A subsystem providing (arbitrary) message delivery services between interested systems. Usually providing the ability to change messages' content, route them, log them, guarantee the delivery, etc.
Remote Procedure Call (RPC): A rather generic term denoting a method of invoking a procedure / method / service residing in a remote process.
Distributed database: seems quite self-explanatory to me, refer to wikipedia.
Hence it's hard to name specific (dis)advantages without knowing the actual distributed application better. You could be comparing RPC and MOM. In that case, MOM is usually a complete message delivery solution, while RPC is just a technical means of inter-process communication.

Server Programming - Simple Multiplayer Game - Which protocol and technologies?

I have a year's experience writing client code but none with server stuff. I want to restrain the question a bit so I'll simplify what I'm trying to achieve.
I want to write server code such that two clients (browser or iPhone / Android) can connect; when two players have connected, they see a timer count down to zero. The clock would be synchronized on the server and the clients would be uniquely identifiable.
The trouble here is with the word connect, what do people use for multiplayer games? Open a TCP socket for two way communications? You can tell I'm not really sure what I'm talking about. I was hoping to use AppEngine but I'm not sure if it's suitable as it's request based.
I have some experience with Java and although Erlang sounds like the best choice this is something I just want to play with and push out fast so Java would be easier. I just need to know the best way to connect players etc.
Thanks,
Gav
I suggest we regard desktop and mobile systems as equal clients. What options are there, then?
You write a socket server which will accept connections from clients. But then you need to write a socket client as well, even 2x - for a desktop OS and for a mobile OS. And the user will have to install this client.
You launch a web server (whatever technology you like). It will expose some web services which will be equally accessible to both desktop and mobile OSes. But you still need to write a client application (again 2x).
You run a web server and make all functionality accessible via standard HTTP protocol. This way you won't even need a client - almost every desktop or a mobile OS has at least some web browser installed. JavaScript will provide for dynamic updates of your ticker.
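A sketch of that third option in Python (the round length and JSON shape are made up): the server owns the countdown, and any HTTP client polls it:

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DEADLINE = time.time() + 60   # invented round length: one minute

class Countdown(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server owns the clock, so every client sees the same count.
        left = max(0, int(DEADLINE - time.time()))
        body = json.dumps({"seconds_left": left}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Countdown)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Any HTTP-capable client (browser JavaScript, iPhone, Android) polls:
with urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/") as r:
    payload = json.load(r)
srv.shutdown()
print(payload)
```

Because the server computes the remaining time, clients never need their clocks synchronized; they only need to render the number.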
There is a good series of articles about Networking for game programmers by someone who does that stuff for a living.
I'm by no means an expert on network communication, but if you don't mind losing a few packets (or error checking in software) and you want fast, lean communication, you could use UDP. I think most time-sensitive programs and streaming media use this protocol to avoid delays and keep packet size down.
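A minimal UDP exchange in Python, to show how little ceremony is involved (the addresses and message format are invented for the demo):

```python
import socket

# No connection, no delivery guarantee, just small datagrams; that is
# why fast-paced games favour UDP over TCP.
game_server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
game_server.bind(("127.0.0.1", 0))      # OS picks a free port

player = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
player.sendto(b"pos:12,34", game_server.getsockname())  # fire and forget

data, who = game_server.recvfrom(1024)  # on a real net this may never arrive
print(data.decode())  # pos:12,34
```

For state updates that are superseded every frame anyway, a lost datagram costs nothing; the next one corrects it.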
I wrote a client/server app a few years ago in Java with ServerSocket (http://java.sun.com/j2se/1.4.2/docs/api/java/net/ServerSocket.html). There is also an SSL version.
So you create a ServerSocket and wait for a connection. When a client connects, you create a thread that will communicate with this client using a protocol that you made.
http://www.developer.com/java/ent/article.php/1356891/A-PatternFramework-for-ClientServer-Programming-in-Java.htm
I found this little framework:
http://www.bablokb.de/jcs/jcs.html
One of the hardest things is to create your protocol; a good way to learn how to create one would be to understand how FTP (or HTTP ...) works.
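The thread-per-client shape described here, sketched in Python rather than Java (socketserver's ThreadingTCPServer plays the role of the ServerSocket accept loop; the one-command PING/PONG protocol is invented for the demo):

```python
import socket
import socketserver
import threading

# Thread-per-connection: ThreadingTCPServer spawns the per-client
# thread for us, the analogue of accept() plus new Thread() in Java.
class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:          # one thread per client runs this
            if line.strip() == b"PING":  # invented protocol command
                self.wfile.write(b"PONG\n")
            else:
                self.wfile.write(b"ERR unknown command\n")

srv = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

with socket.create_connection(srv.server_address) as c:
    c.sendall(b"PING\n")
    reply = c.makefile().readline().strip()
srv.shutdown()
print(reply)  # PONG
```

Line-delimited text commands like this are the easiest protocol to start with, and exactly the style FTP uses.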
You are correct that the J2EE model breaks down under near-realtime or multi-player demands. You might want to consider the RedDwarf game server project. It does for games what Servlets do for business logic, and is open source.
http://www.reddwarfserver.org
"I suggest we regard desktop and mobile systems as equal clients. What options are there, then?"
RedDwarf has a pluggable transport layer and can support any sort of client you wish.
Web servers are great for web-type apps. If your game acts like a web page -- is not multi-user, is turn based, and evolves very slowly -- then a web server is a fine choice.
If it doesn't however, you need something a bit beefier in technology.
Oh, and for what it's worth, whatever you do, if you want to write a server from scratch then DON'T use "ServerSocket." That requires a thread per connection and will never scale. Use NIO, or use an NIO framework like Netty.
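For comparison, the selector-based style that NIO (and Netty) use can be sketched in Python with the selectors module; one thread multiplexes every connection instead of one thread each (the echo protocol and the pump helper are invented for the demo):

```python
import selectors
import socket

# One thread serving many connections: the selector reports which
# sockets are ready, the same idea as Java NIO's Selector.
sel = selectors.DefaultSelector()
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

def pump(n_messages):
    """Run the event loop until n_messages have been echoed."""
    served = 0
    while served < n_messages:
        for key, _ in sel.select(timeout=5):
            if key.fileobj is srv:                 # new connection
                conn, _ = srv.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                                  # data (or EOF) on a client
                data = key.fileobj.recv(1024)
                if data:
                    key.fileobj.sendall(b"echo:" + data)
                    served += 1
                else:
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
    return served

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hi")
pump(1)
msg = cli.recv(1024)
print(msg.decode())  # echo:hi
```

No thread is parked per connection, which is why this shape scales to thousands of clients where thread-per-connection cannot.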
