The new security abilities of CouchDB mean you can dispense with your middleware and access your data directly from your client if your data fits into a key-value store. What if your data needs a relational database? Is there a relational DB with similar abilities? Should I just tell my DB server to listen on port 80?
Oracle 10g and Oracle 11g come with an embedded HTTP server.
Tim Hall has a succinct overview of the embedded PL/SQL Gateway, which is part of the XML DB implementation in 10g, on his Oracle-Base site. In another article he discusses native PL/SQL web services in 11g.
Your question is confusing, but I will try anyway:
A relational database (an RDBMS, not an embedded database) generally has very granular security features, including login and authentication mechanisms -- the details are beyond the scope of an SO answer.
Telling a DB to listen on a certain port doesn't have much to do with security (unless the port is mapped and accepting internet traffic, in which case removing the mapping would stop it from receiving that traffic).
In an RDBMS, the execution environment is your middleman, and the RDBMS will have a back-end storage structure. You generally cannot directly access the underlying engine, because the execution environment does a lot of complex things which you cannot hope to coordinate with through direct access. The architecture of CouchDB is very simple compared to an RDBMS and places a lot of low-level power in the hands of the developer.
-- edit: after first comment by author --
A relational database is meant to be directly accessed -- and layers in the middle are application specific architectural decisions and additions to the RDBMS.
-- edit: after second comment by author --
If you want to access their RDBMS directly via the internet, they need to make the database port reachable; once that is done, you use the native drivers/API of the database vendor.
They may:
Open up the database's port to the internet by externally mapping it (bad bad bad).
Provide you with an SSH gateway which you could use to tunnel in (see the sketch after this list).
Provide you with a VPN endpoint to which you can establish a VPN connection from your network.
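As a minimal sketch of the SSH-gateway option, a Java client could forward a local port through the gateway with the JSch library and then point the vendor's JDBC driver at the forwarded port. The host names, credentials, ports, and the choice of a PostgreSQL driver are all hypothetical placeholders, not details from the question:

```java
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.sql.Connection;
import java.sql.DriverManager;

public class TunneledDbAccess {
    public static void main(String[] args) throws Exception {
        // Open an SSH session to the owner's gateway (hypothetical host and user).
        JSch jsch = new JSch();
        Session session = jsch.getSession("dbuser", "ssh-gateway.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // acceptable in a sketch, not in production
        session.connect();

        // Forward local port 15432 to the database host as the gateway sees it.
        int localPort = session.setPortForwardingL(15432, "db.internal", 5432);

        // The driver connects to localhost; all traffic flows through the encrypted tunnel.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:" + localPort + "/mydb", "dbuser", "dbpass")) {
            System.out.println("Connected via tunnel to " + conn.getMetaData().getDatabaseProductName());
        } finally {
            session.disconnect();
        }
    }
}
```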
Related
Recently my clients have asked me if they can use their application remotely, disconnected from the local network and the company server.
One solution is to place the database in the cloud, but then a connection to the cloud database -- and hence an internet connection -- must always be available.
That's not always the case.
So my question is - is there any database sync system, or a synchronization library, so that I can work disconnected with a local database and, when I connect, synchronize the changes I have made and receive the changes others have made?
Update:
The application runs under Windows (7/XP) (for now)
It's in Delphi 2007 Win32
All clients need to have read/write access
All clients have an internet connection, but it is not always on
Security is not critical, but the sync service should encrypt the communication
When on the company network, the system should sync with and use the server database, not the local one.
You have a host of issues to think about with such a solution. First, there are lots of possible solutions, such as:
Using database replication within a database, to mimic every update (like a "hot" backup)
Building an application to copy the database periodically (every night)
Using a third-party tool (which is what you are asking, I think)
With replication services, the connection does not have to always be up. Changes to the database are logged when the connection is not available and then applied when they can be sent.
However, there are lots of other issues when you leave a corporate network. What about security of the data and access rights? Do you have other options, such as making it easier to access the database from within the network? Do the users need only read access to the database, or read-write access? Would both versions need to be accessed at the same time? Would there be updates to both at the same time?
You may have other options that are more secure than just moving a database to the cloud.
I believe RemObjects DataAbstract allows offline mode and synchronization by using what they call Briefcases. All your other requirements (security, encrypted connections, etc.) are also covered.
This is not a drop-in replacement, though, and may need extensive rewriting/refactoring of your application. There are lots of upsides, though: business rules can/should be enforced on the server (real security), scriptable business rules, a multiplatform architecture, etc.
There are some products available in the Java world, such as SymmetricDS (LGPL license) - apart from actually being a working system, it documents how it achieves synchronization. It connects to any DB with JDBC support. There is a pro version, but the user guide (downloadable PDF) gives you the DB schema plus the rules on push/pull syncing. Useful if you want to build your own.
Btw, there is a data-replication SO tag that would help.
One possibility that is free is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
It may be possible for you to use it, but you would need to provide some more detail about your application and operating environment to be sure.
Is it possible to share a database file like an .mdb this way and have it work fine? I tried, but sometimes the file where the database lives changes from DB to DB1. I use Delphi XE4 and Google Drive.
Thanks
My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will be in no way under my control, as they will belong to different companies. They won't be known beforehand, and multiple vendors are to be supported: Oracle, MS (SQL Server), MySQL, PostgreSQL. Also, OLE DB and ODBC connections will be supported.
The problem: security of database credentials and overall traffic is a big concern but the configuration effort should be reduced at a minimum. Ideally, all the security issues should be addressed programmatically in the service implementation and require no configuration effort for the database owner other than provide a valid connection string.
Usually, database SSL support is done through server certificates, which I want to avoid as it is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully this might be done with OpenSSL, SSPI, a client SSL certificate, or some form of tunneling; or maybe it is just not possible. Some advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner even before you try to secure the traffic with the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port so they would need to know the IP addresses of your servers and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that you'd be better served, depending on the business problem you're trying to solve, by creating a product that your customers could install on their own servers inside their network. It would connect to the database and expose an interface that either sends data to your central server via something like HTTPS POST calls, or listens for HTTPS requests, forwards them to the database, and returns the results over HTTPS.
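As a rough sketch of the first variant, a small Java agent installed inside the customer's network could read from the local database over JDBC and push the rows to the central server with an HTTPS POST. The connection string, table, and central URL are invented for illustration:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PushAgent {
    public static void main(String[] args) throws Exception {
        StringBuilder payload = new StringBuilder("[");

        // Read from the customer's local database (placeholder connection string).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@localhost:1521/ORCL", "agent", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, amount FROM orders")) {
            while (rs.next()) {
                if (payload.length() > 1) payload.append(',');
                payload.append("{\"id\":").append(rs.getLong("id"))
                       .append(",\"amount\":").append(rs.getBigDecimal("amount")).append('}');
            }
        }
        payload.append(']');

        // Push the rows to the central service over HTTPS (hypothetical URL).
        HttpURLConnection http = (HttpURLConnection)
                new URL("https://central.example.com/ingest").openConnection();
        http.setRequestMethod("POST");
        http.setDoOutput(true);
        http.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = http.getOutputStream()) {
            out.write(payload.toString().getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Central server answered HTTP " + http.getResponseCode());
    }
}
```

The appeal of this design is that only an outbound HTTPS connection leaves the customer's network, so no database port has to be opened to the internet at all.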
SSL is very important in order to keep a client's database safe, but there is more to it than that. You have to make sure that each database account is locked down: each client must only have access to their own database. Furthermore, every database has other privileges which are nasty. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL has xp_cmdshell, which allows the user to access cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in any language, and from there you can call all sorts of nasty functions.
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.
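On the query side, the standard first defense against hostile input is to bind parameters rather than concatenate strings, so user input can never be parsed as SQL. A minimal JDBC illustration; the accounts table and its columns are made up:

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SafeQuery {
    // Vulnerable pattern (do not use):
    //   "SELECT balance FROM accounts WHERE owner = '" + input + "'"
    // Safe pattern: the driver ships the value separately from the SQL text.
    static BigDecimal balanceFor(Connection conn, String owner) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT balance FROM accounts WHERE owner = ?")) {
            ps.setString(1, owner); // bound as data, never interpreted as SQL
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getBigDecimal(1) : null;
            }
        }
    }
}
```

Parameter binding stops injection; it does not, of course, protect you from a genuine server-side 0-day, which is why the account lockdown above still matters.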
I am about to start working on a new system which will need to support multiple users and potentially allow the database to be accessed over the Internet.
The system will be Win32, not web-based; the database will just be in an office, accessible from anywhere. I am not sure if this is a dangerous approach security-wise, and I am open to suggestions.
The database will be SQL Server and the system will be implemented in Delphi 6
Does anyone know how I go about starting this? I will need to take into account record locking as well.
If anyone could provide links to good articles that would be appreciated.
Cheers
Paul
IMHO, the easiest way for you is to create a VPN, exposing your database securely over the Internet.
Security will be very good, because access to the database will be available only through a trusted VPN connection.
And your database will be available from anywhere, using the Internet just as a tunnel to transport your database packets safely.
So your Delphi code will connect to the database just as usual, using TCP/IP connection, via the VPN secure tunnel.
No need to add additional Delphi-only artifacts, like Indy components and such.
And you will be able to connect to your database from non-Delphi clients, which could be handy if you want to use some database browsing tool.
Exposing the database on the Internet is a security risk. Security flaws could be easily exploitable remotely.
Solutions are:
VPN, as said in other answers. Simple and secure, but it requires some setup on both endpoints (clients and VPN server), and may require proper software on the server - or a VPN router/appliance - and on the client as well, if you're not using standard VPN protocols.
An n-tier application, where only the application server is exposed to the internet. You still have to protect the application server properly and secure the transmission channel, but this may require less setup on the client side. Delphi 6 offers DataSnap as an n-tier library (it also still supports CORBA, but that was dropped after D7). DCOM is not very firewall-friendly (though it can be configured to work across firewalls) but can secure the channel on its own; the other two options (socket and HTTP) are easier to set up but a little less secure (they work through DCOM proxies, so the client identity is lost, and they require custom code or certificates to secure the channel).
A third solution could be to let users connect remotely via remote desktop, but that requires licenses and a machine able to sustain the load of the remote sessions.
Record locking is handled by the database itself - read the documentation about SQL Server locking modes carefully to avoid bad surprises later. If the connection is not fast enough, you may choose to cache some data on the client side (TClientDataset works well for that), which can also reduce locking issues, but it can introduce update conflicts.
You probably mean a client/server system that communicates through TCP/IP.
You can create this using the Indy components. Be sure to check the examples because they are not easy to use, but you can create almost anything network related with them.
Actually, there are dozens of techniques possible, depending on your experience, preferences and the tools that you have available. I would advise you to use ADO to connect to the database and not the BDE, though. To do this, you can use the ADO components that are part of Delphi or import the msado15.dll type library into your project to use raw ADO API calls. The latter will require a lot more experience!

SQL Server is able to just expose itself to the Internet, although this creates a security risk. Still, someone who wants to access it will need a username and password to get a connection, and you would need to open the ports that SQL Server uses. Technically speaking, to use ADO over the Internet, all you need to know is the IP address of a working server, plus login information. It's a security risk, though, and for that reason most developers will not expose SQL Server to the Internet but will instead write web services to wrap around the specific database functions that you want to expose.

Record locking is something SQL Server will do for you, and if you use transactions you can make it even a bit more secure.

In the end, the things you need to learn and read about depend heavily on the things you want to do in your application. So before you even start to write some code, start writing a functional design to get an overview of what you want and what you would need for this. From this document, start writing technical documents to describe more precisely what your code needs to do. Once you have this, you can ask more direct questions about the things you need, yet don't know at the moment.
Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (later this year it will). I'm wondering 3 things:
(security) What things should I look into for starters pertaining to security of accessing a separate machine's database?
(scalability) Are there scalability issues that I should think about pertaining to this (technology agnostic)?
(more ServerFaultish but related) If starting the DB out on the same physical server (using a separate VMWare VM) and later moving to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost?
If these questions are completely ludicrous, I apologize to you DB experts.
Easy, I'll grant you. Secure... well, security has very little to do with the physical location of the database server.
To get to your three questions though:
First, look at how you can limit access to database tables using the database server's security model. Namely, if your application does not need to drop tables, make sure the user it uses to connect does not have that ability. Second, look into how to encrypt the connection between the database server and your application. In Windows this is pretty transparent through Kerberos and can even be enforced by group policy settings; I'm not sure about other platforms. Third, look into what features the database has for encrypting the data "at rest". Meaning, does it natively support encryption of the actual data files themselves?
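For the second point, many JDBC drivers let the client insist on an encrypted connection straight from the connection properties. A minimal sketch with the PostgreSQL driver; the host, database, and account are placeholders, and the exact property names vary by driver and version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EncryptedConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_readonly");   // least-privilege account: no DROP/DDL rights
        props.setProperty("password", "secret");
        props.setProperty("ssl", "true");
        props.setProperty("sslmode", "verify-full"); // reject unencrypted or spoofed servers

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db.example.com:5432/appdb", props)) {
            System.out.println("TLS-protected connection established");
        }
    }
}
```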
The point here is that your application is only one possible entry point to the database server itself. Ask yourself: what would happen if someone could connect directly, without going through your application, using your app's credentials? Next ask: what could happen if they find a SQL injection issue? Also ask yourself: what information could be gleaned if someone were able to monitor the IP traffic going between your app and the server; can they discern any data? Finally, ask yourself: what if they get a copy of the database itself?
The lengths you go to for #1 will depend on several factors, such as how valuable the data is (e.g. what would happen to you, your company, or your clients if it was lost), and how much time you have to come up with an ideal solution.
scalability: This is purely a function of load. Unfortunately, the only way to scale most database applications is to scale up, meaning that you acquire a larger database server as the need arises. Stack Overflow went through this not too long ago. Some database types (NoSQL, MongoDB, etc.) support a concept known as sharding. MySQL, PostgreSQL, etc. don't; instead you'll have to specifically design the app to handle it, which means not using things like auto-incrementing keys. This can be a royal PITA... which is why scaling up is a much easier prospect, depending on your application.
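One common way out of the auto-increment dependency is to generate globally unique keys in the application, for instance with UUIDs. A tiny sketch (the orders table is hypothetical):

```java
import java.util.UUID;

public class ShardFriendlyKeys {
    // A UUID is generated client-side, so inserts landing on different shards
    // can never collide the way two independent auto-increment counters would.
    public static String newOrderId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        // e.g. INSERT INTO orders (id, ...) VALUES ('<uuid>', ...)
        System.out.println("New key: " + newOrderId());
    }
}
```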
Another VM is not accessible via "localhost". localhost refers to your current server only; whether that server is a VM or not is immaterial. You'll have to reference your database server by name. Now, transitioning the database VM to another physical server should have zero impact, as you are referencing it by name. Beyond that there aren't any other considerations.
In addition to Chris's valid response,
Security
Use a security mechanism on the network in addition to whatever security features the database or app framework provides. Perhaps this is as simple as firewalling the network, running IPsec, or using an SSL tunnel. The point is that you shouldn't assume the DB authors are network security experts, or that the DB authentication mechanism has even addressed network security at all.
Scalability
One scalability issue comes to mind when moving from local to remote DBs: remote TCP/IP communication is much slower than local pipe communication. Your app may have hidden scalability issues due to frequent round trips to the DB, since between queries your app waits for each DB response in succession. On a local system the latency is so small you may never have noticed it.
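A standard mitigation is to batch statements so that many operations share one round trip. A hedged JDBC sketch; the table and connection details are invented:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInserts {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db.example.com/appdb", "app", "secret")) {
            conn.setAutoCommit(false); // one commit instead of one per row

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO audit_log (event) VALUES (?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "event-" + i);
                    ps.addBatch(); // queued locally, nothing sent yet
                }
                ps.executeBatch(); // all 1000 rows cross the network together
            }
            conn.commit();
        }
    }
}
```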
Situation: a bank has an old legacy ABS (automated banking system).
Bank wants to:
notify the old legacy CRM system about clients' account changes (Publish operation).
check the PIN codes of client cards (Request/Response operation) - in synchronous mode.
The ABS is implemented in very old proprietary technologies accessed through stored procedure calls, so I can connect to this system via the database only.
Which ways of integrating a Java/.Net (ESB) application with an old/legacy database system do you know?
Write/Publish operation
Any vendor's database server:
Scan tables for new entries - too slow.
A trigger (where triggers are supported) which handles SQL updates and inserts and writes event information to some table; an application listener should then check this table for events (see the sketch after this list).
Oracle server: PL/SQL triggers + Oracle AQ, with a JMS listener.
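For the vendor-neutral trigger-plus-event-table option, the application listener can be a plain JDBC poller. A minimal sketch, assuming a hypothetical abs_events table that the trigger fills and a processed flag column:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class EventTablePoller {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@abs-host:1521/ABS", "integration", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement mark = conn.prepareStatement(
                    "UPDATE abs_events SET processed = 1 WHERE event_id = ?")) {
                while (true) {
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery(
                             "SELECT event_id, payload FROM abs_events WHERE processed = 0")) {
                        while (rs.next()) {
                            publishToCrm(rs.getString("payload")); // hand off to the CRM/ESB side
                            mark.setLong(1, rs.getLong("event_id"));
                            mark.executeUpdate();
                        }
                    }
                    conn.commit();
                    Thread.sleep(5000); // polling interval: a latency-vs-load tradeoff
                }
            }
        }
    }

    private static void publishToCrm(String payload) {
        System.out.println("Publishing to CRM: " + payload); // stub for the real publish call
    }
}
```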
Reading operation
Just write the results into ABS tables - dangerous.
...
How do I notify the legacy database system about responses in synchronous mode? How do I implement Write/Read in synchronous mode?
Again, which ways of integrating a Java/.Net (ESB) application with an old/legacy database system do you know?
Lots of vendors hype DataServices. I think the main value of these products is in integrating different data sources.
I would consider making a simple "application" that exposes this data as a service.
It depends on many factors, particularly the read/write throughput and the performance sensitivity of the database.
Databases tend to be kinda sensitive things and are often very fragile under general-purpose access from arbitrary other systems when they are finely tuned for production use in a specific system; so folks often replicate the database to a read-only slave database that can then be used for integration work, querying and so forth.
You can then use triggers/polling/JMS based on whatever you need without impacting the original database.
Depending on the database replication technology used, you can then often install triggers in the replica database (which can afford to fall a little behind the master from time to time) to minimise the impact on the production database.
I can propose that you use Mule as the ESB in your bank (see also http://www.mulesource.org/display/MULE/Home).
It allows you to communicate with the database directly (at the JDBC level, which should be fine with stored procedures as well as tables/views). I have positive experience with it integrating a core banking system (database level, Oracle) with a standalone application (web services level).
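For the synchronous Request/Response part (the PIN check), a JDBC CallableStatement can invoke an existing stored procedure and block until the answer comes back, and that call can sit behind a Mule (or any ESB) endpoint. A sketch where the CHECK_PIN function and its signature are invented for illustration:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class PinCheckService {
    // Synchronously asks the legacy ABS to validate a PIN via a stored function.
    public static boolean checkPin(String cardNumber, String pin) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@abs-host:1521/ABS", "integration", "secret");
             CallableStatement cs = conn.prepareCall("{? = call CHECK_PIN(?, ?)}")) {
            cs.registerOutParameter(1, Types.INTEGER); // assumed: 1 = valid, 0 = invalid
            cs.setString(2, cardNumber);
            cs.setString(3, pin);
            cs.execute(); // blocks until the ABS procedure returns: synchronous by nature
            return cs.getInt(1) == 1;
        }
    }
}
```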
Frankly, I didn't get all your questions (you can ask me in Russian directly if you prefer), but IMO Mule is your way - it can consume JMS, JDBC, file-level events and many others, and can process synchronous as well as asynchronous events (see also http://www.mulesource.org/display/MULE2USER/Available+Transports).
Regards.
P.S. To be clearer for an English-speaking audience, I propose using the more standard term core banking system instead of ABS (which means the same thing in xUSSR countries).