Which single-file embedded DB for a network project?

I am looking for an embedded database for a VB 2010 application that works over the network. The database file is in a shared folder on a NAS server (NTFS), so I cannot use a server database like MySQL, SQL Server, etc.
There are nearly 20 PCs accessing the shared folder on the network.
Each PC can open up to 3 connections to the database, so we could have up to 60 connections in total. Mostly they just read the database; a write happens every 5-6 minutes and rarely at the same time, but it can happen.
In the past I have successfully used Access+Jet for such applications and never had problems, although with fewer network users.
I could still use Access+Jet (so I would not need to convert the whole database and code), but I would like to use something newer.
I have seen that SQLite is not quite right for a network/shared environment.
SQL Server Compact is also not right for a shared folder.
VistaDB is too expensive.
Firebird could be an option, but I have no experience with it: it would be used in a production system and I do not know whether I can trust it.
Any suggestions? Or shall I stay with Access?
Thanks for replying.

Go with Firebird. It is stable, lightweight, free, and very fast both as a network and as an embedded database. I am using it everywhere.
However, the database cannot reside on a shared network folder; it must reside on a hard drive that is physically connected to the host machine.
VistaDB is good as an embedded database, but has awful performance as a network database because it is not true client-server.

Related

Move from a local single-user database to an online multi-user database

I have a calendar-type WPF program that is used to assign the workload to a team. The events are stored in an Access database, and the program is accessed by one person at a time by remotely connecting to a computer. The team has grown, and multiple people now need to access the program simultaneously. I can install the program on several computers, but where should I move the database? To something like Dropbox/OneDrive, or to an online SQL host? Thanks.
You can use SQL Server on many cloud platforms (though I am not sure Dropbox can host SQL Server natively). Azure (Microsoft's cloud) is a very mature solution. Now that multiple users will be managing data, you should also verify that the database is backed up on a regular basis and that any updates to data are done within transactions that your code is aware of. 'Aware of' means that if there is a conflict, your code should either resubmit or notify the user that the insert/update/delete failed.
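The "resubmit or notify the user" pattern is language-agnostic. Here is a minimal sketch in Python, using the built-in sqlite3 module as a stand-in for whatever RDBMS you end up on; the helper name and retry counts are illustrative, not from any library:

```python
import os
import sqlite3
import tempfile
import time

def update_with_retry(db_path, sql, params, retries=3, delay=0.05):
    """Run one write inside a transaction; resubmit on a lock conflict,
    return False so the UI can notify the user if every attempt fails."""
    for attempt in range(retries):
        try:
            con = sqlite3.connect(db_path, timeout=1.0)
            try:
                with con:  # implicit transaction: commit on success, rollback on error
                    con.execute(sql, params)
                return True
            finally:
                con.close()
        except sqlite3.OperationalError:  # e.g. "database is locked"
            time.sleep(delay * (attempt + 1))  # back off, then resubmit
    return False

# usage with a throwaway database
path = os.path.join(tempfile.mkdtemp(), "events.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO events (title) VALUES ('kickoff')")
con.commit()
con.close()

ok = update_with_retry(path, "UPDATE events SET title = ? WHERE id = ?",
                       ("kickoff (moved)", 1))
print(ok)  # True
```

The same shape works with any driver that raises a distinguishable exception on a conflict; with SQL Server you would catch the deadlock/timeout error codes instead of sqlite3.OperationalError.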

SQLite database remote access

I have a SQLite database on my local machine and my web services running on the same machine access it using SQLAlchemy like this:
engine = create_engine('sqlite:///{}'.format('mydatabase.db'), echo=True)
We are planning to host our web services on a machine separate from the one where the database is hosted. How can we make 'mydatabase.db' remotely accessible to our web services? Thanks.
From SQLite's When To Use documentation:
Situations Where A Client/Server RDBMS May Work Better
Client/Server Applications
If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is to avoid using SQLite in situations where the same database will be accessed directly (without an intervening application server) and simultaneously from many computers over a network.
SQLite works well for embedded systems, or at least when you use it on the same computer. IMHO you'll have to migrate to one of the larger SQL solutions like PostgreSQL, MariaDB or MySQL. If you've generated all your queries through the ORM (SQLAlchemy), there will be no problem migrating to another RDBMS. Even if you wrote raw SQL queries too, there should not be many problems, because all these RDBMSes use very similar dialects (unlike Microsoft's T-SQL). And since SQLite is "lite", it supports only a subset of what the other RDBMSes support, so there should not be a problem.
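To illustrate how little the application code changes when everything goes through SQLAlchemy, here is a minimal sketch; the PostgreSQL URL, host, and credentials are placeholders, and an in-memory SQLite database is used so the snippet is self-contained:

```python
from sqlalchemy import create_engine, text

# today: a single SQLite file (in-memory here so the snippet is self-contained)
engine = create_engine("sqlite://")

# after migrating, only the URL changes (placeholder host and credentials):
# engine = create_engine("postgresql+psycopg2://app:secret@db-host/mydb")

# everything below is identical regardless of the backend
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
print(value)
```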

Microsoft Access database - queries run on server or client?

I have a Microsoft Access .accdb database on a company server. If someone opens the database over the network, and runs a query, where does the query run? Does it:
run on the server (as it should, and as I thought it did), and only the results are passed over to the client through the slow network connection
or run on the client, which means the full 1.5 GB database is loaded over the network to the client's machine, where the query runs, and produces the result
If it is the latter (which would be truly horrible and baffling), is there a way around this? The weak link is always the network; can I have queries run on the server somehow?
(The reason for asking is that the database is unbelievably slow when used over the network.)
The query is processed on the client, but that does not mean that the entire 1.5 GB database needs to be pulled over the network before a particular query can be processed. Even a given table will not necessarily be retrieved in its entirety if the query can use indexes to determine the relevant rows in that table.
For more information, see the answers to the related questions:
ODBC access over network to *.mdb
C# program querying an Access database in a network folder takes longer than querying a local copy
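The effect of an index on how much of the file must be read can be seen in any file-based engine. A sketch using Python's sqlite3 (as a stand-in for Jet/ACE, which does not expose query plans this directly) shows an indexed lookup seeking straight to the matching rows, while an unindexed filter scans the whole table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.execute("CREATE INDEX idx_customer ON orders (customer)")
con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [("cust%d" % (i % 100), i * 1.5) for i in range(1000)])

# indexed predicate: the engine seeks straight to the matching rows
indexed = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchone()[-1]
print(indexed)    # e.g. SEARCH orders USING INDEX idx_customer (customer=?)

# unindexed predicate: every row (hence every page of the file) must be read
unindexed = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE total > 100").fetchone()[-1]
print(unindexed)  # e.g. SCAN orders
```

Over a file share, that difference translates directly into how many pages of the database file cross the network.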
It is the latter, the 1.5 GB database is loaded over the network
The "server" in your case is a server only in the sense that it serves the file, it is not a database engine.
You're in a bad spot:
The good thing about Access is that it's easy for people who are not developers to create forms, reports and the like. The bad is everything else about it. In particular, two things:
People wind up using it for small projects that grow and grow, and end up in your shoes.
It sucks for multiple users, and it really sucks over a network once it gets big.
I always convert them to a web-based app with SQL Server or something, but I'm a developer. That costs money to do, but that's what happens when you use a tool that does not scale.

Creating SQL Server database with data and log file on synology NAS Server

I need to create a new database for an application. I planned to put the data files on a NAS server (Synology 812). I have tried to create the database using different paths for 2 days, but nothing worked. Below you can see an example using the path N'\\10.1.1.5\fileserver\...
I also tried N'\\10.1.1.5\volume1\fileserver\payroll.ldf', because the properties dialog in the Synology admin interface shows this path for the fileserver shared directory.
fileserver is a shared folder. I can reach that folder from File Explorer:
\\10.1.1.5\fileserver\
and I can create new files or folders inside it using Windows Explorer. But unluckily the CREATE statement does not work.
CREATE DATABASE Payroll
ON
( NAME = Payroll_dat,
  FILENAME = N'\\10.1.1.5\fileserver\payrolldat.mdf',
  SIZE = 20MB,
  MAXSIZE = 70MB,
  FILEGROWTH = 5MB )
LOG ON
( NAME = 'Payroll_log',
  FILENAME = N'\\10.1.1.5\fileserver\payroll.ldf',
  SIZE = 10MB,
  MAXSIZE = 40MB,
  FILEGROWTH = 5MB )
GO
I will be very happy if someone has a solution for my problem.
Thank you for your time.
Ferda
SQL Server doesn't support UNC paths by default. See the KB at http://support.microsoft.com/kb/304261 - Description of support for network database files in SQL Server.
Extracts:
Microsoft generally recommends that you use a Storage Area Network
(SAN) or locally attached disk for the storage of your Microsoft SQL
Server database files because this configuration optimizes SQL Server
performance and reliability. By default, use of network database files
(stored on a networked server or Network Attached Storage [NAS]) is
not enabled for SQL Server.
It can be enabled, but you must ensure your hardware meets some strict conditions:
However, you can configure SQL Server to store a database on a
networked server or NAS storage server. Servers used for this purpose
must meet SQL Server requirements for data write ordering and
write-through guarantees, which are detailed in the "More Information"
section.
[...]
Any failure by any software or hardware component to honor this protocol can result
in a partial or total data loss or corruption in the event of a system failure.
[...]
Microsoft does not support SQL Server networked database files on NAS or networked
storage servers that do not meet these write-through and write-order requirements.
Performance can also be heavily compromised:
In its simplest form, an NAS solution uses a standard network
redirector software stack, standard network interface card (NIC), and
standard Ethernet components. The drawback of this configuration is
that all file I/O is processed through the network stack and is
subject to the bandwidth limitations of the network itself. This can
create performance and data reliability problems, especially in
programs that require extremely high levels of file I/O, such as SQL
Server. In some NAS configurations tested by Microsoft, the I/O
throughput was approximately one-third (1/3) that of a direct attached
storage solution on the same server. In this same configuration, the
CPU cost to complete an I/O through the NAS device was approximately
twice that of a local I/O.
So in summary, if you can't guarantee that your hardware meets these requirements, you're playing with fire. It might work for a small test environment, but I would not host a live database like this, lest data get corrupted or performance severely suffer.
To enable it, use trace flag 1807 as described in the KB.
You need SQL Server 2008 R2 or later. Starting with 2008 R2, UNC names are supported, although not encouraged (for all the reasons Chris mentions). An SMB 3.0-capable environment goes a long way toward solving most UNC storage problems. Prior to 2008 R2 the trace flag 1807 would work, but it was not a supported deployment (CSS could refuse to help you if you asked about any issue). See SQL Server Can Run Databases from Network Shares & NAS.
I think the best thing you can do is configure an iSCSI volume on your NAS device, creating a RAID 5 if you have 3 or more available disks. After that, connect your NAS and your server on an independent VLAN, attached to a separate network device or even a separate physical network, to avoid impact on your LAN from storage traffic. This way you will effectively be using your NAS as a SAN.

how can I simulate network latency on my developer machine?

I am upsizing an MS Access 2003 app to a SQL Server backend. On my dev machine, SQL Server is local, so the performance is quite good. I want to test the performance with a remote SQL Server so I can account for the effects of network latency when I am redesigning the app. I am expecting that some of the queries that seem fast now will run quite slowly once deployed to production.
How can I slow down (or simulate the speed of a remote) SQL Server without using a virtual machine, or relocating SQL to another computer? Is there some kind of proxy or Windows utility that would do this for me?
I have not used it myself, but here's another SO question:
Network tools that simulate slow network connection
In one of the comments SQL Server has been mentioned explicitly.
You may be operating under a misconception. MS-Access supports so-called "heterogeneous joins" (i.e. tables from a variety of back-ends may be included in the same query, e.g. combining data from Oracle and SQLServer and Access and an Excel spreadsheet). To support this feature, Access applies the WHERE clause filter at the client except in situations where there's a "pass-through" query against an intelligent back-end. In SQL Server, the filtering occurs in the engine running on the server, so SQL Server typically sends much smaller datasets to the client.
The answer to your question also depends on what you mean by "remote". If you pit Access and SQL Server against each other on the same network, SQL Server running on the server will consume only a small fraction of the bandwidth that Access does, if the Access MDB file resides on a file server. (Of course if the MDB resides on the local PC, no network bandwidth is consumed.) If you're comparing Access on a LAN versus SQL Server over broadband via the cloud, then you're comparing a nominal 100 mbit/sec pipe against DSL or cable bandwidth, i.e. against perhaps 20 mbit/sec nominal for high-speed cable, a fifth of the bandwidth at best, probably much less.
So you have to be more specific about what you're trying to compare.
Are you comparing Access clients on the local PC consuming an Access MDB residing on the file server against some other kind of client consuming data from a SQL Server residing on another server on the same network? Are you going to continue to use Access as the client? Will your queries be pass-through?
There is a software application for Windows that does that (it simulates low bandwidth, latency and losses if necessary). It is not free, though, and the trial version has a 30-second emulation limit. Here is the product's home page: http://softperfect.com/products/connectionemulator/
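If a commercial emulator is overkill, the core idea (a proxy that delays traffic) is small enough to sketch yourself. This is a rough Python sketch, not production code: it relays TCP through a local port and sleeps before forwarding each chunk, so you point your connection string at the proxy's port instead of the real server's. The demo below uses a throwaway echo server in place of a real SQL Server:

```python
import socket
import threading
import time

def run_latency_proxy(target_host, target_port, delay=0.1):
    """Listen on an ephemeral local port and relay connections to the target,
    sleeping `delay` seconds before forwarding each chunk in each direction."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # ephemeral port; returned below
    server.listen(5)

    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            time.sleep(delay)  # the simulated one-way network latency
            dst.sendall(data)
        dst.close()

    def accept_loop():
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection((target_host, target_port))
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server.getsockname()[1]  # the port to connect through

# demo: a throwaway echo server stands in for the remote SQL Server
echo = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
echo.bind(("127.0.0.1", 0))
echo.listen(1)

def echo_once():
    conn, _ = echo.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

proxy_port = run_latency_proxy("127.0.0.1", echo.getsockname()[1], delay=0.1)

start = time.time()
client = socket.create_connection(("127.0.0.1", proxy_port))
client.sendall(b"ping")
reply = client.recv(4096)
elapsed = time.time() - start
client.close()
print(reply, round(elapsed, 2))  # each round trip gains roughly 2 * delay
```

This only delays at the TCP level, so it won't model packet loss or reordering, but it is often enough to surface chatty query patterns before deployment.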
@RedFilter: You should indicate which version of Access you are using. This document from 2006 shows that the story of what Access brings down to the client across the wire is more complicated than whether the query contains "Access-specific keywords".
http://msdn.microsoft.com/en-us/library/bb188204(SQL.90).aspx
But Access may be getting more and more sophisticated about using server resources with each newer version.
I'll stand by my simple advice: if you want to minimize bandwidth consumption, while still using Access as the GUI, pass-through queries do best, because then it is you, not Access, who will control the amount of data that comes down the wire.
I still think your initial question/approach is misguided: if your Access MDB file was located on the LAN in the first place (was it?) you don't need to simulate the effects of network latency. You need to sniff the SQL statements Access generates, rather than introducing some arbitrary and constant "network latency" factor. To compare an Access GUI using an MDB located on a LAN server against an upsized Access GUI going against a SQL Server back-end, you need to assess what data Access brings down across the wire to the client from the back-end server. Even "upsized" Access can be a hog at the bandwidth trough unless you use pass-through queries. But a properly written client for a SQL-Server back-end will always be far more parsimonious with network bandwidth than Access going against an MDB located on a LAN server, ceteris paribus.
