SQLite database remote access

I have a SQLite database on my local machine and my web services running on the same machine access it using SQLAlchemy like this:
engine = create_engine('sqlite:///{}'.format('mydatabase.db'), echo=True)
We are planning to host our web services on a separate machine from the one where the database is hosted. How can we make 'mydatabase.db' accessible to the web services remotely? Thanks.

From the SQLite when-to-use docs:
Situations Where A Client/Server RDBMS May Work Better
Client/Server Applications
If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is to avoid using SQLite in situations where the same database will be accessed directly (without an intervening application server) and simultaneously from many computers over a network.
SQLite works well for embedded systems, or at least when you use it on the same computer. IMHO you'll have to migrate to one of the larger SQL solutions like PostgreSQL, MariaDB or MySQL. If you've generated all your queries through the ORM (SQLAlchemy), then there will be no problem migrating to another RDBMS. Even if you wrote raw SQL queries too, there should not be many problems, because all these RDBMSes use very similar dialects (unlike Microsoft's T-SQL), and since SQLite is "lite" it supports only a subset of what the other RDBMSes support.
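Migrating the engine is mostly a one-line change on the SQLAlchemy side. A minimal sketch, assuming a PostgreSQL server at a hypothetical host db.example.com with made-up credentials:

from sqlalchemy import create_engine, text

# Before: a local SQLite file, only reachable from this machine.
engine = create_engine('sqlite:///mydatabase.db', echo=True)

# After: a client/server RDBMS; host, port, database and credentials
# below are placeholders for illustration.
engine = create_engine(
    'postgresql+psycopg2://appuser:secret@db.example.com:5432/mydatabase',
    echo=True,
)

# ORM-generated queries keep working; most hand-written SQL does too.
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())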

Related

Does sharing database structure make my application significantly more vulnerable?

I have software that comes in two versions. One version is designed as a desktop application with its own database (MS SQL CE) that resides on the user's machine. The other version is a client-server application where the database resides on a server. The database structure is nearly identical in both cases. The reason we have this setup is that we work in developing countries where the internet connection is unreliable and client-server or web-based applications are not always possible. Some uses of the software also don't require data sharing, and it's easier for the database to reside on the machine.
Someone with the desktop version can simply open up the SQL CE database and look at all of the tables and fields. Does this knowledge make my client-server application significantly more vulnerable to being hacked? If yes, what steps can I take to decrease the risk?
Database structure by itself will not make your web application vulnerable. However, if there is a SQL injection or another vulnerability somewhere in the client-server application, it will be much easier (it will take less time) for the attacker to work with your database.
In other words, your application is not less secure with an exposed database structure, especially if it benefits performance for some users.
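The usual way to make schema knowledge useless on its own is to close the injection hole with parameterized queries. A minimal sketch with SQLAlchemy (the users table and its columns are invented for illustration):

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///mydatabase.db')

def find_user(name):
    # The driver sends 'name' separately from the SQL text, so a
    # crafted value cannot change the structure of the query.
    with engine.connect() as conn:
        stmt = text('SELECT id, name FROM users WHERE name = :name')
        return conn.execute(stmt, {'name': name}).fetchall()

# The vulnerable version builds SQL by string concatenation:
#   "SELECT id, name FROM users WHERE name = '" + name + "'"
# and that, not the visible schema, is what an attacker needs.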
Knowledge of the database structure can make your application less secure.
You can develop a frontend for your database structure so it is not directly exposed to the users.

Is MS Access 2007 database more reliable than SQLite

Not long ago I developed a Windows Forms application that uses SQLite. Now there is a need to have the database shared on a network. We don't want to use a client/server database (MS SQL or MySQL) because we want to simplify installation of the application as much as possible.
The challenge here is SQLite's performance when shared over a network: the file-locking system is not reliable on a network filesystem, so we can get data conflicts.
I am ruling out SQLite for this purpose and considering an MS Access database file (.accdb) instead. I am wondering if it can handle concurrent transactions, say, users updating a record at the same time. Is it much better than SQLite?
Maximum number of users should be between 5 and 10.
5-10 users should be no problem. You should split it into a front-end and a back-end if you are not using the legacy application as the front-end.

Database synchronization

Recently my clients have asked me if they can use their application remotely, disconnected from the local network and the company server.
One solution is to place the database in the cloud, but then a connection to the database in the cloud, and an internet connection, must always be available.
That's not always the case.
So my question is: is there any database sync system, or a synchronization library, so that I can work disconnected with a local database and, when I connect, synchronize the changes I have made and receive changes others have made?
Update:
The application is under Windows (7/XP) (for now)
It's in Delphi 2007 Win32
All clients need read/write access
All clients have an internet connection, but it is not always on
Security is not critical, but the sync service should encrypt the communication
When on the company's network, the system should sync with and use the server database, not the local one
There are a host of issues to think about with such a solution. First, there are lots of possible solutions, such as:
Using database replication within a database, to mimic every update (like a "hot" backup)
Building an application to copy the database periodically (every night)
Using a third-party tool (which is what you are asking, I think)
With replication services, the connection does not have to always be up. Changes to the database are logged when the connection is not available and then applied when they can be sent.
However, there are lots of other issues when you leave a corporate network. What about security of the data and access rights? Do you have other options, such as making it easier to access the database from within the network? Do the users need only read access to the database, or read-write access? Would both versions need to be accessed at the same time? Would there be updates to both at the same time?
You may have other options that are more secure than just moving a database to the cloud.
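To make the change-logging idea above concrete, here is a minimal sketch (not any particular product's protocol) in Python with SQLite: offline writes go into a journal table, and a sync step replays them against the server with a newest-wins rule. The change_log schema and the updated_at column are assumptions for illustration:

import json
import time

def log_change(local_db, table, row_id, column, value):
    # While offline, record every local write in a journal table.
    local_db.execute(
        'INSERT INTO change_log (ts, tbl, row_id, col, val) '
        'VALUES (?, ?, ?, ?, ?)',
        (time.time(), table, row_id, column, json.dumps(value)))

def sync(local_db, server_db):
    # When a connection is available, replay the journal in order;
    # the updated_at check makes the newest write win on conflict.
    rows = local_db.execute(
        'SELECT ts, tbl, row_id, col, val FROM change_log ORDER BY ts')
    for ts, tbl, row_id, col, val in rows:
        # tbl and col come from our own journal, not from user input.
        server_db.execute(
            f'UPDATE {tbl} SET {col} = ?, updated_at = ? '
            f'WHERE id = ? AND updated_at < ?',
            (json.loads(val), ts, row_id, ts))
    local_db.execute('DELETE FROM change_log')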
I believe RemObjects DataAbstract allows offline mode and synchronization by using what they call Briefcases. All your other requirements (security, encrypted connections, etc.) are also covered.
This is not a drop-in replacement, though, and may need extensive rewriting/refactoring of your application. There are lots of upsides, though: business rules can/should be enforced on the server (real security), scriptable business rules, a multiplatform architecture, etc.
There are some products available in the Java world (SymmetricDS, LGPL license). Apart from actually being a working system, it documents how it achieves synchronization, and it connects to any DB with JDBC support. There is a pro version, but the user guide (downloadable PDF) gives you the DB schema plus rules on push/pull syncing. Useful if you want to build your own.
BTW, there is a data-replication SO tag that would help.
One possibility that is free is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
It may be possible for you to use it, but you would need to provide some more detail about your application and operating environment to be sure.
Is it possible to share a database like an .mdb file and have it work fine? I tried, but sometimes the file where the database lives changes from DB to DB1. I use Delphi XE4 and Google Drive.
Thanks

Database security / scaling question

Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (later this year it will be). I'm wondering 3 things:
(security) What things should I look into for starters pertaining to security of accessing a separate machine's database?
(scalability) Are there scalability issues that I should think about pertaining to this (technology-agnostic)?
(more ServerFaultish but related) If I start the DB on the same physical server (using a separate VMware VM) and later move it to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost?
If these questions are completely ludicrous, I apologize to you DB experts.
Easy, I'll grant you. Secure.. well, security has very little to do with the physical location of the database server.
To get to your three questions though:
First, look at how you can limit access to database tables using the database server's security model. Namely, if your application does not need to drop tables, make sure the user it uses to connect does not have that ability. Second, look into how to encrypt the connection between the database server and your application. In Windows this is pretty transparent through Kerberos and can even be enforced by Group Policy settings; not sure about other platforms. Third, look into what features the database has for encrypting the data "at rest", meaning: does it natively support encryption of the actual data files themselves?
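For the first two points, a hedged sketch of what this can look like from the application side, with SQLAlchemy and PostgreSQL (host, account and table names are placeholders):

from sqlalchemy import create_engine, text

# 'appuser' is a deliberately limited account; sslmode=require makes
# the psycopg2/libpq driver refuse unencrypted connections.
engine = create_engine(
    'postgresql+psycopg2://appuser:secret@db.example.com/mydb',
    connect_args={'sslmode': 'require'})

# Run once as an administrator, so a compromised app account cannot
# drop or alter tables:
#   REVOKE ALL ON ALL TABLES IN SCHEMA public FROM appuser;
#   GRANT SELECT, INSERT, UPDATE ON orders, customers TO appuser;

with engine.connect() as conn:
    conn.execute(text('SELECT 1'))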
The point here is that your application is only one possible entry point to the database server itself. Ask yourself: what would happen if someone could connect directly, without going through your application, using your app's credentials? Next: what could happen if they found a SQL injection issue? Also: what information could be gleaned if someone were able to monitor the IP traffic going between your app and the server? Could they discern any data? Finally: what if they got a copy of the database itself?
The lengths you go to for #1 will depend on several factors, such as how valuable the data is (e.g., what would happen to you, your company, or your clients if it were lost), and how much time you have to come up with an ideal solution.
Scalability: this is purely a function of load. Unfortunately, the only way to scale most database applications is to scale up, meaning that you acquire a larger database server as the need arises. Stack Overflow went through this not too long ago. Some database types (NoSQL, MongoDB, etc.) support a concept known as sharding; MySQL, PostgreSQL, etc. don't do it for you. Instead you'll have to specifically design the app to handle it, which means not using things like auto-incrementing keys. This can be a royal PITA, which is why scaling up is a much easier prospect depending on your application.
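On the auto-incrementing-keys point: a sequential key needs one central counter, which is exactly what a sharded design cannot have, so such apps often generate globally unique keys on any node instead. A minimal sketch with the SQLAlchemy ORM (the model is invented for illustration):

import uuid
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = 'orders'
    # A UUID key can be generated on any shard without asking a
    # central sequence for the next number.
    id = Column(String(36), primary_key=True,
                default=lambda: str(uuid.uuid4()))
    customer = Column(String(100))

engine = create_engine('sqlite:///orders.db')
Base.metadata.create_all(engine)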
Another VM is not accessed via "localhost". localhost refers to the current server; whether that server is a VM or not is immaterial. You'll have to reference your database server by name. Given that, transitioning the database VM to another physical server should have zero impact, as you are referencing it by name. Beyond that there aren't any other considerations.
In addition to Chris's valid response,
Security
Use a security mechanism on the network in addition to whatever security features the database or app framework provides. Perhaps this is as simple as firewalling the network, running IPsec, or tunnelling over SSL. The point is that you shouldn't assume the DB authors are network-security experts, or that the DB authentication mechanism has addressed network security at all.
Scalability
One scalability issue comes to mind when moving from a local to a remote DB: remote TCP/IP communication is much slower than local pipe communication, so your app may have hidden scalability issues due to frequent round trips to the DB. Between queries, your app waits for each DB response in succession; on a local system the latency is so small that you may never have noticed it.
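The classic symptom is the N+1 query pattern: one query per row means one network round trip per row. A hedged sketch of collapsing it into a single round trip with SQLAlchemy (the orders table and connection details are invented):

from sqlalchemy import bindparam, create_engine, text

engine = create_engine(
    'postgresql+psycopg2://appuser:secret@db.example.com/mydb')
ids = [1, 2, 3, 4, 5]

with engine.connect() as conn:
    # Slow over a network: one round trip per id.
    #   for i in ids:
    #       conn.execute(text('SELECT * FROM orders WHERE id = :i'),
    #                    {'i': i})

    # One round trip total: let the database do the looping.
    stmt = text('SELECT * FROM orders WHERE id IN :ids').bindparams(
        bindparam('ids', expanding=True))
    rows = conn.execute(stmt, {'ids': ids}).fetchall()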

Virtualize the database server or the web server?

In a web application architecture with 1 app server (IIS) and 1 database server (MSSQL), if you had to pick one server to virtualize in a VM, which would it be: web or db?
Generally speaking of course.
Web, of course.
Databases require too much I/O bandwidth, and it's easier to add instances or databases to a single database server, whereas isolated web servers benefit more from virtualization.
Similar question: "Run SharePoint 2003 on VMware?" SharePoint is just an ASP.NET application with a SQL Server back end.
The shortcoming of most virtual environments is I/O, especially disk I/O, and SQL Server is very I/O intensive. I am using a shared virtual host and the slow I/O is killing me.
That said, Microsoft is pushing SQL Server on Hyper-V. It's a hypervisor, which means it's a thinner layer between the VM and the hardware, and the drivers are quasi-native. Here's their whitepaper: http://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/SQL2008inHyperV2008.docx
It looks like for SQL Server you will lose ~10% performance overall. The upside is that you can move the whole instance to another box quickly, bump up the RAM, etc.
Another thing to consider is Intel's enterprise SSD drives (X25-E). I imagine those would help a lot in a virtual environment. Pricey, of course.
I would probably decide based on the amount of computation required by the app server versus the amount of computation/IO done by the database.
That said, I would think that most of the time the DB should NOT be virtualized. Virtualization isn't great for DBs that have to ensure that data remains nice and safe on a disk, and adding another abstraction layer can't help with that.
If you have two physical servers there is no need to virtualise - use one server for IIS and one for the database.
If you have one physical server there is also no need to virtualise.
If I had to choose, it would be the web server. The database would benefit in terms of performance by running on a physical server. If the web server is virtualised, it makes it quick and easy to clone it to create a cluster of web servers.
With today's hypervisors and best practices you can virtualise both infrastructures. When you virtualise your DB infrastructure, it is best to ensure that the DB is installed onto a SAN-based system so that IO performance is not a bottleneck.
As with everything, there are right and wrong ways of doing things, but following vendor best practices and testing will enable you to squeeze the best performance out of your VM instances.
There are plenty of whitepapers and performance testing from the various vendors should you want to virtualise your entire infrastructure.
Even though virtualisation is again an industry hot topic, with various vendors now giving away hypervisors for free, this does not mean that virtualisation is the way forward in every case. Server consolidation, yes; performance enhancement, maybe - YMMV.
