Database synchronization

Recently my clients have asked me if they can use their application remotely, disconnected from the local network and the company server.
One solution is to place the database in the cloud, but then a connection to the cloud database, and therefore an internet connection, must always be available.
That's not always the case.
So my question is: is there any database sync system, or a synchronization library, that would let me work disconnected against a local database and, when I reconnect, synchronize the changes I have made and receive the changes others have made?
Update:
The application runs on Windows (7/XP) (for now)
It's written in Delphi 2007 (Win32)
All clients need read/write access
All clients have an internet connection, but it is not always on
Security is not critical, but the sync service should encrypt the communication
When on the company's network, the system should sync and then use the server database, not the local one.

There are a host of issues to think about with such a solution. First, there are lots of possible approaches, such as:
Using the database's own replication to mirror every update (like a "hot" backup)
Building an application to copy the database periodically (every night)
Using a third-party tool (which is what you are asking, I think)
With replication services, the connection does not have to always be up. Changes to the database are logged when the connection is not available and then applied when they can be sent.
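If you end up building something like this yourself, here is a minimal sketch of the store-and-forward idea, using ADO against the local database (the PendingChanges table and all other names are hypothetical):

    uses SysUtils, ADODB;

    // Log every local edit into a queue table while offline; the queue
    // is pushed to the server whenever a connection becomes available.
    procedure LogLocalChange(Conn: TADOConnection;
      const TableName, KeyValue, Operation, Payload: string);
    var
      Q: TADOQuery;
    begin
      Q := TADOQuery.Create(nil);
      try
        Q.Connection := Conn;
        Q.SQL.Text :=
          'INSERT INTO PendingChanges (TableName, KeyValue, Op, Payload, ChangedAt) ' +
          'VALUES (:t, :k, :o, :p, :c)';
        Q.Parameters.ParamByName('t').Value := TableName;
        Q.Parameters.ParamByName('k').Value := KeyValue;
        Q.Parameters.ParamByName('o').Value := Operation; // 'I', 'U' or 'D'
        Q.Parameters.ParamByName('p').Value := Payload;   // e.g. the changed row, serialized
        Q.Parameters.ParamByName('c').Value := Now;
        Q.ExecSQL;
      finally
        Q.Free;
      end;
    end;

When online, you would read PendingChanges in order, apply each entry to the server inside a transaction, and delete it on success; rows changed on both sides still need a conflict rule (last-writer-wins, or asking the user).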
However, there are lots of other issues when you leave a corporate network. What about security of the data and access rights? Do you have other options, such as making it easier to access the database from within the network? Do the users need only read access to the database, or read-write access? Would both versions need to be accessed at the same time? Would there be updates to both at the same time?
You may have other options that are more secure than just moving a database to the cloud.

I believe RemObjects DataAbstract allows offline mode and synchronization by using what they call Briefcases. All your other requirements (security, encrypted connections, etc.) are also covered.
This is not a drop-in replacement, though, and may need extensive rewriting/refactoring of your application. There are lots of upsides, though: business rules can/should be enforced on the server (real security), scriptable business rules, a multiplatform architecture, etc.

There are some products available in the Java world, such as SymmetricDS (LGPL license). Apart from actually being a working system, it documents how it achieves synchronization, and it connects to any DB with JDBC support. There is a pro version, but the user guide (downloadable PDF) gives you the DB schema plus the rules for push/pull syncing, which is useful if you want to build your own.
By the way, there is a data-replication tag that would help.

One possibility that is free is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
It may be possible for you to use it, but you would need to provide some more detail about your application and operating environment to be sure.

Is it possible to share a database like an .mdb file and have it work fine? I tried, but sometimes the file where the database is stored changes from DB to DB1. I use Delphi XE4 and Google Drive.
Thanks

Related

Move from a local single-user database to an online multi-user database

I have a calendar-type WPF program that is used to assign the workload to a team. The events are stored in an Access database, and the program is used by one person at a time via a remote connection to a single computer. The team has grown, and multiple people now need to access the program simultaneously. I can install the program on several computers, but where should I move the database? Onto a service like Dropbox/OneDrive, or an online SQL host? Thanks.
You can use SQL Server on many cloud platforms (though I am not sure Dropbox can host SQL Server natively). Azure (Microsoft's cloud) is a very mature solution. Now that multiple users will be managing data, you should still verify that the database is backed up on a regular basis and that any updates to data are done within transactions that your code is aware of. 'Aware of' means that if there is a conflict, your code should either resubmit or notify the user that the insert/update/delete failed.
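The question is framed around WPF, but the "aware of" pattern looks the same in any client stack; here is a rough sketch of it with ADO-style transactions (all names hypothetical):

    uses SysUtils, ADODB;

    // Run an INSERT/UPDATE/DELETE inside a transaction; on failure,
    // retry a few times, then surface the error instead of losing it.
    function TryUpdateWithRetry(Conn: TADOConnection; Q: TADOQuery;
      MaxAttempts: Integer): Boolean;
    var
      Attempt: Integer;
    begin
      Result := False;
      for Attempt := 1 to MaxAttempts do
      begin
        Conn.BeginTrans;
        try
          Q.ExecSQL;            // the statement being guarded
          Conn.CommitTrans;
          Result := True;
          Exit;
        except
          Conn.RollbackTrans;
          if Attempt = MaxAttempts then
            raise;              // let the UI notify the user of the failure
        end;
      end;
    end;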

Which server platform to choose: SQL Azure or Hosted SQL Server for new project

We're getting ready to build a new platform for our current system. Currently we install SQL Server Express locally for all our clients, and all their data is stored there. While the process works pretty well, it's still a pain to add columns/tables etc. We also want to have our data available outside of the local install. So we're moving to a central web-based SQL database and creating a web-based application. Our new application will be a Silverlight 5, WCF RIA Services, MVVM, Entity Framework application.
We've decided that either a web-hosted SQL Server database or a SQL Azure database is the way to go. However, I have no idea why I would choose one over the other. The limitations of Azure don't seem to apply to us, but our application will run on our current shared web host. Is it better to host the application on the same server as the database? With shared web hosting, do we even know whether the server is in the same location as the app? There's also the marketing advantage of being 'in the cloud', which our clients love when we drop that word (they have no idea about anything technical; it's just a buzzword for them). I'm not too worried about the cost, as I think both will ultimately be about equivalent.
I feel like I may be completely overthinking this and either will work, however I'd like to try and get the best solution for us and don't want to choose without getting some feedback.
In case it helps, our application is mostly dashboard/informational data. Mostly financial and trending data. It's almost entirely read only. Sometimes the data can get fairly large and we would be sending upwards of 50,000 rows of data to the application.
Thanks for any help/insight you can provide for me!
The main concerns I would have with using a SQL Azure DB from an application on your current shared web host would be:
The effect of network latency: depending on location, every time you do a DB round trip from your application to the SQL Azure DB you will incur a 50-100 ms delay. If your application does lots of round trips, this will mount up. Often, if an application has been designed to work with a DB on the LAN (your use of local client DBs suggests this), then it tends to be "chatty", since network delays are very small on the LAN. You may find your application slows down significantly (see the batching sketch below).
Security: you will have to open up the SQL Azure firewall to the IP address(es) that your application presents when querying. Depending on your host, that IP address may be shared between several tenants, which would be a vulnerability.
If neither of these is a problem, then SQL Azure will provide a much lower management overhead (e.g. no need to patch etc.) and will give you very high reliability, especially in terms of the risk of data loss.
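To illustrate the "chatty" point above: the usual fix is to replace per-row round trips with one set-based query. A hedged sketch with ADO (table and column names hypothetical):

    uses SysUtils, ADODB;

    // One round trip instead of N: fetch all the rows in a single
    // set-based query rather than one query per ID.
    procedure LoadBalances(Q: TADOQuery; const Ids: array of Integer);
    var
      I: Integer;
      IdList: string;
    begin
      IdList := '';
      for I := 0 to High(Ids) do
      begin
        if I > 0 then
          IdList := IdList + ',';
        IdList := IdList + IntToStr(Ids[I]); // integers only, so safe to inline
      end;
      Q.SQL.Text := 'SELECT CustomerId, Balance FROM Accounts ' +
        'WHERE CustomerId IN (' + IdList + ')';
      Q.Open; // a single 50-100 ms round trip instead of N of them
    end;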

How to secure database traffic the other way around, that is to say, from client to server

My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will be in no way under my control, as they will belong to different companies. They won't be known beforehand, and multiple vendors are to be supported: Oracle, MS SQL Server, MySQL, PostgreSQL. Also, OLE DB and ODBC connections will be supported.
The problem: security of the database credentials and of the overall traffic is a big concern, but the configuration effort should be kept to a minimum. Ideally, all the security issues would be addressed programmatically in the service implementation and require no configuration effort from the database owner other than providing a valid connection string.
Usually, database SSL support is done through server certificates, which I want to avoid as it is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully this might be done with OpenSSL, SSPI, a client SSL certificate, or some form of tunneling; or maybe it is just not possible. Some advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner even before you try to secure the traffic with the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port so they would need to know the IP addresses of your servers and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that you'd be better served, depending on the business problem you're trying to solve, by creating a product that your customers could install on their own servers inside their network. It would connect to a database and have an interface that would either send data to your central server via something like HTTPS POST calls, or listen for HTTPS requests that it could pass to the database, returning the results over HTTP.
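A rough sketch of what such an on-premises agent could look like, assuming Delphi with Indy 10 and the OpenSSL libraries deployed alongside it (the URL and payload are hypothetical):

    uses Classes, IdHTTP, IdSSLOpenSSL;

    // Runs inside the customer's network: queries the local database,
    // then pushes the results out through the firewall over HTTPS.
    // Only an outbound connection is needed; no inbound hole.
    procedure PushResults(const JsonPayload: string);
    var
      Http: TIdHTTP;
      Ssl: TIdSSLIOHandlerSocketOpenSSL;
      Body: TStringStream;
    begin
      Http := TIdHTTP.Create(nil);
      Ssl := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
      Body := TStringStream.Create(JsonPayload);
      try
        Http.IOHandler := Ssl;
        Http.Request.ContentType := 'application/json';
        Http.Post('https://central.example.com/ingest', Body);
      finally
        Body.Free;
        Ssl.Free;
        Http.Free;
      end;
    end;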
SSL is very important in order to keep a client's database safe. But there is more to it than that. You have to make sure that each database account is locked down: each client must only have access to their own database. Furthermore, every database has other privileges which are nasty. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL has xp_cmdshell, which allows the user to access cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in any language, and from there you can call all sorts of nasty functions.
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.

How do I implement Internet accessible system with Delphi?

I am about to start working on a new system which will need to support multiple users and potentially allow the database to be accessed over the Internet.
The system will be Win32, not web based; the database will just be in an office yet accessible from anywhere. I am not sure if this is a dangerous approach security-wise, and I am open to suggestions.
The database will be SQL Server and the system will be implemented in Delphi 6
Does anyone know how I go about starting this? I will need to take into account record locking as well.
If anyone could provide links to good articles that would be appreciated.
Cheers
Paul
IMHO, the easiest way for you is to create a VPN, exposing your database securely over the Internet.
Security will be very good, because access to the database will be available only through a trusted VPN connection.
And your database will be available from anywhere, using the Internet just as a tunnel to transport your database packets safely.
So your Delphi code will connect to the database just as usual, using TCP/IP connection, via the VPN secure tunnel.
No need to add additional Delphi-only artifacts, like Indy components and such.
And you will be able to connect to your database from non-Delphi clients, which could be handy for using a database browsing tool.
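To make "connect just as usual" concrete, a minimal sketch assuming SQL Server over ADO, with a hypothetical VPN-side address and credentials:

    uses ADODB;

    // Once the VPN is up, the office server is reachable on a private
    // address, and the Delphi side is an ordinary ADO connection.
    procedure ConnectOverVpn(Conn: TADOConnection);
    begin
      Conn.ConnectionString :=
        'Provider=SQLOLEDB;Data Source=10.8.0.1;' + // VPN-side address
        'Initial Catalog=CompanyDb;User ID=appuser;Password=secret;';
      Conn.LoginPrompt := False;
      Conn.Connected := True;
    end;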
Exposing the database on the Internet is a security risk. Security flaws could be easily exploitable remotely.
Solutions are:
VPN, as said in other answers. Simple and secure, but it requires some setup on both end-points (clients and VPN server), and may require proper software on the server (or a VPN router/appliance) and on the client as well if you're not using standard VPN protocols.
An n-tier application, where only the application server is exposed to the Internet. You still have to protect the application server properly, as well as the transmission channel, but it may require less setup on the client side. Delphi 6 offers DataSnap as an n-tier library (it also still supports CORBA, although that was dropped in D7). DCOM is not very firewall-friendly (though it can be configured to work across firewalls) but can secure the channel on its own; the other two options (socket and HTTP) are easier to set up but a little less secure (they work through DCOM proxies, whereby the client identity is lost, and they require custom code or certificates to secure the channel). A client-side sketch follows below.
A third solution could be to let users connect remotely via remote desktop, but it requires licenses and a machine able to sustain the load of the remote sessions.
Record locking is handled by the database itself - read the documentation about SQL Server locking modes carefully to avoid bad surprises later. If the connection is not fast enough, you may choose to cache some data on the client side (TClientDataset works well for that); this can also reduce locking issues, but it can introduce update conflicts.
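A hedged sketch of the client side of the n-tier option, using the DataSnap/MIDAS components that ship with Delphi 6 (host, server name and provider name are hypothetical):

    uses SConnect, DBClient;

    // Only the application server is exposed; the client reaches it
    // through a socket connection and caches rows in a TClientDataSet.
    procedure OpenOrders(Sock: TSocketConnection; Cds: TClientDataSet);
    begin
      Sock.Address := 'appserver.example.com'; // the exposed middle tier
      Sock.Port := 211;                        // default socket-server port
      Sock.ServerName := 'OrderServer';        // server-side application object
      Sock.Connected := True;

      Cds.RemoteServer := Sock;
      Cds.ProviderName := 'OrdersProvider';    // TDataSetProvider on the server
      Cds.Open;
    end;

    // Send cached edits back later; conflicting rows trigger the
    // dataset's OnReconcileError event, where you resolve them.
    procedure SaveOrders(Cds: TClientDataSet);
    begin
      if Cds.ChangeCount > 0 then
        Cds.ApplyUpdates(0); // 0 = stop at the first conflicting row
    end;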
You probably mean a client-server system that communicates through TCP/IP.
You can create this using the Indy components. Be sure to check the examples, because they are not easy to use, but you can create almost anything network-related with them.
Actually, there are dozens of techniques possible, depending on your experience, preferences and the tools that you have available. I would advise you to use ADO to connect to the database and not the BDE, though. To do this, you can use the ADO components that are part of Delphi, or import the msado15.dll type library into your project to use raw ADO API calls. The latter will require a lot more experience!
SQL Server is able to expose itself to the Internet directly, although this creates a security risk. Still, someone who wants to access it will need a username and password to get a connection, and you would need to open the ports that SQL Server uses. Technically speaking, to use ADO over the Internet, all you need to know is the IP address of a working server, plus login information. It's a security risk, though, and for that reason most developers will not expose SQL Server to the Internet, but will instead write web services to wrap around the specific database functions that they want to expose.
Record locking is something SQL Server will do for you, and if you use transactions you can make it even a bit more secure.
In the end, the things you need to learn and read about depend heavily on the things you want to do in your application. So before you even start to write code, start writing a functional design to get an overview of what you want and what you would need for it. From this document, start writing technical documents to describe more precisely what your code needs to do. Once you have that, you can ask more direct questions about the things you need but don't know yet.

Database security / scaling question

Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building my first site that will have a separate physical database server (it will later this year, anyway). I'm wondering three things:
(security) What things should I look into for starters pertaining to security of accessing a separate machine's database?
(scalability) Are there scalability issues that I should think about pertaining to this (technology agnostic)?
(more ServerFaultish but related) If starting the DB out on the same physical server (using a separate VMWare VM) and later moving to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost?
If these questions are completely ludicrous, I apologize to you DB experts.
Easy, I'll grant you. Secure... well, security has very little to do with the physical location of the database server.
To get to your three questions though:
First, look at how you can limit access to database tables using the database server's security model. Namely, if your application does not need to drop tables, make sure the user it connects with does not have that ability. Second, look into how to encrypt the connection between the database server and your application. In Windows this is pretty transparent through Kerberos, and can even be enforced by Group Policy settings; I'm not sure about other platforms. Third, look into what features the database has for encrypting the data "at rest". Meaning, does it natively support encryption of the actual data files themselves?
The point here is that your application is only one possible entry point to the database server itself. Ask yourself: what would happen if someone could connect directly, without going through your application, using your app's credentials? Next ask: what could happen if they find a SQL injection issue? Also ask yourself: what information could be gleaned if someone were able to monitor the IP traffic going between your app and the server - could they discern any data? Finally, ask yourself: what if they get a copy of the database itself?
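On the SQL injection point specifically, the standard mitigation is to bind user input as parameters rather than concatenating it into SQL text; a minimal sketch with ADO (table and column names hypothetical):

    uses ADODB;

    // User input travels as a bound parameter, never as SQL text,
    // so it cannot terminate the statement and inject its own.
    procedure FindUser(Q: TADOQuery; const NameFromUi: string);
    begin
      Q.SQL.Text := 'SELECT Id, Name FROM Users WHERE Name = :name';
      Q.Parameters.ParamByName('name').Value := NameFromUi;
      Q.Open;
    end;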
The lengths you go to for #1 will depend on several factors, such as: how valuable is the data (e.g. what would happen to you, your company, or your clients if it were lost), and how much time do you have to come up with an ideal solution?
scalability: This is purely a function of load. Unfortunately, the only way to scale most database applications is to scale up, meaning that you acquire a larger database server as the need arises. Stack Overflow went through this not too long ago. Some database types (NoSQL, MongoDB, etc.) support a concept known as sharding; MySQL, PostgreSQL, etc. don't. Instead you'll have to specifically design the app to handle it, which means not using things like auto-incrementing keys. This can be a royal PITA... which is why scaling up is a much easier prospect, depending on your application.
Another VM is not accessed via "localhost". localhost refers to your current server; whether that server is a VM or not is immaterial. You'll have to reference your database server by name. Given that, transitioning the database VM to another physical server should have zero impact, as you are referencing it by name. Beyond that, there aren't any other considerations.
In addition to Chris's valid response,
Security
Use a security mechanism on the network in addition to whatever security features the database or app framework provides. Perhaps this is as simple as firewalling the network, running IPsec, or using an SSL tunnel. The point is that you shouldn't assume the DB authors are network security experts, or that the DB authentication mechanism has addressed network security at all.
Scalability
One scalability issue comes to mind when moving from local to remote DBs: remote TCP/IP communication is much slower than local pipe communication. Your app may have hidden scalability issues due to frequent round trips to the DB, since between queries your app waits for each DB response in succession. On a local system the latency is so small you may never have noticed it.
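A quick way to make that hidden cost visible is to time the round trips directly; a small sketch (the query is a deliberate no-op):

    uses Windows, ADODB;

    // Time N tiny queries: on a LAN this loop finishes in a few
    // milliseconds; against a remote DB, each iteration pays the
    // full network latency.
    function MeasureRoundTrips(Q: TADOQuery; N: Integer): Cardinal;
    var
      I: Integer;
      Start: Cardinal;
    begin
      Q.SQL.Text := 'SELECT 1'; // minimal query; the cost is the trip itself
      Start := GetTickCount;
      for I := 1 to N do
      begin
        Q.Open;  // one full client-server round trip
        Q.Close;
      end;
      Result := GetTickCount - Start; // elapsed milliseconds for N trips
    end;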
