Oracle performance when doing bulk inserts from Delphi - sql-server

We have a Delphi application which can connect to either Oracle or SQL Server. We use Devart components to connect to the databases, and everything is very generic when it comes to database access, i.e. we use the lowest common denominator. Ultimately we use the databases as data stores and do not use any of the more "advanced" features which may be specific to the database.
However we have a serious performance issue with Oracle. It is to do with inserting data. I know that inserting data by running off a load of insert statements is not great for performance, but due to some business logic that needs to be done on the raw data before it gets uploaded to the database, we are somewhat restricted to multiple inserts. To give an idea of the performance difference, a recent test inserts 1000 items into our database and takes 5 minutes in SQL Server (acceptable) but 44 minutes in Oracle.
Is there anything we could do to improve performance? The inserting of data needs to be done by the user and NOT an Oracle DBA, so requiring absolutely no Oracle skills is a prerequisite for any solution. Basically, the users need to press a button and everything is done.
Edit: the business logic happens before the insert (although there is a little going on during the actual insert), so more realistic numbers would be 2 minutes for SQL Server and 40 or so minutes for Oracle. Bear in mind we are inserting a few large blobs per record, so perhaps that explains the slowish performance, but not why there is such a difference. The 1000 items are part of a single transaction.

Oracle supports array DML, which can speed up performance. Also, if BLOBs are involved, performance may depend on caching settings and on how the BLOBs are set up in the destination table. Some tuning of database client parameters may also help increase network throughput.
Anyway, without knowing which version of Oracle you're using, how it is configured, your table definitions (and their tablespaces), how large the BLOBs are, and the SQL actually used (did you trace it?), it's very difficult to diagnose the real problem.
Oracle has some powerful diagnostic tools to identify bottlenecks, but they may not be easy to use and require knowing enough about how Oracle works. From the Enterprise Manager Console you can access some of them in a more readable format - did you check it?
Update: because I can't comment on other answers, Oracle supports different types of LOB storage:
LOBs stored inside the database (under transaction management)
BFILEs, external file-system LOBs still managed by Oracle (the LOB data is not under transaction control)
SecureFiles (11g onwards), a reimplementation of in-database LOB storage with transaction support and additional features such as compression and deduplication
Oracle is designed for and can manage large LOBs - it just needs to be configured properly. Parameters that will affect LOB performance:
ENABLE/DISABLE STORAGE IN ROW
CACHE/NOCACHE/CACHE READS
LOGGING/NOLOGGING
CHUNK
PCTVERSION/RETENTION (especially for updates and deletes)
TABLESPACE (usually, a dedicated tablespace for lobs is advisable)
These parameters need to be set taking into account the average LOB size, how the LOBs are accessed, and how often they are modified. There's no "one size fits all".
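As a hedged illustration only (the table, column, and tablespace names below are invented, and the right values depend entirely on your LOB sizes and access patterns), the storage options above end up in a LOB storage clause along these lines:

    -- Hypothetical table; names and parameter values are illustrative only.
    -- A dedicated LOB tablespace (here LOB_TS) is assumed to already exist.
    CREATE TABLE documents (
      doc_id   NUMBER PRIMARY KEY,
      doc_name VARCHAR2(200),
      doc_body BLOB
    )
    LOB (doc_body) STORE AS (
      TABLESPACE lob_ts
      DISABLE STORAGE IN ROW   -- large LOBs: keep them out of the row blocks
      CACHE                    -- or NOCACHE / CACHE READS, depending on access pattern
      CHUNK 32768              -- align with the typical LOB write size
      PCTVERSION 10            -- space kept for read consistency during updates/deletes
    );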
But there is also the client side: OCI can buffer LOBs on the client, so small read/write operations are cached, minimizing the number of network round trips and LOB versioning - whether that happens is up to the OCI wrapper you're using.

Array DML (only available with FireDAC, ODAC, DOA and our SynDBOracle unit, AFAIK) won't change much if your problem is about blob transfer.
A first idea is to compress the data before transmission.
Try several access libraries. Our open source SynDBOracle accesses the oci.dll client directly and may be slightly faster.
But perhaps the problem is on the server side. Oracle does not like transactions involving huge amounts of data, since it tends to overflow its redo logs (its equivalent of write-ahead log files). Try to tune the redo log configuration for this load.
IMHO an RDBMS is not the best option for storing huge blobs. Plain files, indexed via an RDBMS for the metadata, are usually better. Or switch to NoSQL storage, like a key/value store or MongoDB's blob API.
Remember that both Oracle and MS SQL Server ask for money roughly in proportion to the data size...

Related

How can I link a SQL image column to an external database?

I need a bit of help with the following.
Note: in the following scenario, I do not have access to the application's source code, therefore I can only make changes at the database level.
Our database uses dbo.[BLOB] to store all kinds of files and documents. The table uses the IMAGE (yeah, obsolete) data type. Since this particular table is growing quite fast, I was thinking of implementing some archiving feature.
My idea is to move all files older than X months to a second database, and then somehow link from the dbo.[BLOB] table to the external/archiving database.
Is this even possible? The goal is to reduce the database size, in order to improve backup and query performance.
Any ideas and hints much appreciated.
Thanks.
Fabian
There are 2 features to help you with backup speed and database size in this case:
FILESTREAM will allow you to store BLOBs as files on the file system instead of inside the database file. It complicates the backup/restore scenario a little, but you get a smaller database data file along with faster access times to the documents. It is much faster to read a file from the file system than from a blob column. Additionally, FILESTREAM allows for files bigger than 2 GB.
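A rough sketch of a FILESTREAM-enabled table (this assumes FILESTREAM is already enabled on the instance and the database already has a FILESTREAM filegroup; all names below are invented for the example):

    -- Assumes a FILESTREAM filegroup (here FsGroup) exists with a file added to it.
    CREATE TABLE dbo.BlobArchive
    (
        BlobId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        FileName NVARCHAR(260)    NOT NULL,
        Content  VARBINARY(MAX)   FILESTREAM NULL  -- stored in the file system
    )
    FILESTREAM_ON FsGroup;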
Partitioning will split the table into smaller chunks at the physical level. This way you do not need to touch application code to change where particular rows are stored physically: you can decide which data needs to be accessed fast and put it on an SSD, and which can land on slower archive storage. You can also take more frequent backups of the current partition and less frequent ones of the archive.
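And a minimal sketch of the partitioning side, splitting rows by a date column so that older data lands on a separate filegroup (the function, scheme, filegroup, and boundary values are all hypothetical):

    -- Hypothetical partition function/scheme: old rows go to an ARCHIVE filegroup,
    -- recent rows stay on PRIMARY. Boundary dates are only examples.
    CREATE PARTITION FUNCTION pfBlobByDate (datetime2)
        AS RANGE RIGHT FOR VALUES ('2022-01-01', '2023-01-01');

    CREATE PARTITION SCHEME psBlobByDate
        AS PARTITION pfBlobByDate TO (ARCHIVE, ARCHIVE, [PRIMARY]);

    -- The table (or its clustered index) is then created ON psBlobByDate(CreatedOn).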
Prior to SQL Server 2016 SP1, this feature was available in the Enterprise edition only; from SQL Server 2016 SP1 it is available in all editions.
In your case you should most likely go with FILESTREAM first.
Without modifying the application you can do, basically, nothing. You may try to see if changing the column type will be tolerated by the application (very unlikely, 99.99% it will break the app) and try to use FILESTREAM, but even if you succeed it does not give much benefit (the backup size will be the same, for example).
A second thing you can try is to replace the table with a view, using INSTEAD OF triggers for updates. It is still very likely to break the application (let's say 99.98%). The goal would be to have a distributed partitioned view (or a cross-database partitioned view) which presents to the application a unified view of the 'cold' and 'hot' data. It is complex and error prone, but it will reduce the size of the backups (as long as data is moved from hot to cold and the cold data is immutable, requiring few backups).
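For illustration only, a minimal sketch of such a cross-database view (the hot/cold table names and the archive database are invented, and the INSTEAD OF triggers needed for writes are omitted):

    -- Hypothetical hot/cold split: current rows stay in the main database,
    -- archived rows live in an archive database, and the app sees one view.
    CREATE VIEW dbo.[BLOB]
    AS
        SELECT BlobId, FileName, Content, CreatedOn
        FROM   dbo.BLOB_Hot
        UNION ALL
        SELECT BlobId, FileName, Content, CreatedOn
        FROM   ArchiveDb.dbo.BLOB_Cold;
    -- INSTEAD OF INSERT/UPDATE/DELETE triggers would route writes to the hot table.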
The goal is to reduce the database size, in order to improve backup and query performance.
To reduce the backup size, as I explained above, you can do, basically, nothing. But the performance you need to investigate and address appropriately, based on your findings. Saying that the database is slow 'because of BLOBs' is hand-waving.

SQL Server 2014 In-Memory OLTP vs Redis

Is SQL Server 2014's In-Memory OLTP (Hekaton) the same or similar concept with Redis?
I use Redis for in-memory storage (storage in RAM) and caching, while having a separate SQL Server database (like StackExchange does). Can Hekaton do the same thing?
They're similar in that both are primarily in-memory, but that's about it.
Redis is an in-memory key-value database. It can persist data to disk if you configure it to, but it keeps the entire dataset in memory, so you need enough RAM for that. The key-value architecture allows various data types, so you can store a value as a simple string, or as lists, sets, hashes, etc. Basically all the data structures you can use inside a programming language are available in Redis natively.
SQL Server Hekaton (In-Memory OLTP) is a new engine designed to run relational tables in memory. All the data for these tables is kept in RAM but is also stored to disk, so they are fully durable.
Hekaton can take individual tables in a SQL Server database and run them in a different process using MVCC (instead of pages and locks) and other optimizations so operations are thousands of times faster than the traditional disk-based engine. There is a lot of research that went into this and the primary use-case would be to take a table that is under heavy load and switch it to run in-memory to increase performance and scalability.
Hekaton was not meant to run an entire database in memory (although you can do that if you really want to) but rather as a new engine designed to handle specific cases while keeping the interface the same. Everything to the end-user is identical to the rest of SQL Server: you can use SQL, stored procedures, triggers, indexes, atomic operations with ACID properties and you can work seamlessly with data in both regular and in-memory tables.
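As a small illustration (the table and column names are made up, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup), declaring a memory-optimized table is mostly a matter of an extra WITH clause:

    -- Hypothetical table; requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   NVARCHAR(2000),
        ExpiresAt DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);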
Because of the performance potential of Hekaton, you can use it to replace Redis if you need the speed and want to model your data within traditional relational tables. If you need the other key-value and data structure features of Redis, you're better off staying with that.
With SQL Server 2016 SP1 and newer, all editions of SQL Server have access to the same features, and the only difference is pricing for support and capacity.
Firstly, you need the Enterprise edition (very expensive) of SQL Server to use Hekaton (In-Memory OLTP). Note that you pay for SQL Server per CPU, so adding more workload to SQL Server may require more CPUs and therefore a lot more licence cost.
But unlike Redis, you can have a trigger or stored proc update your “in memory cache” as part of the database transaction. You may also find that Hekaton is fast enough that you don’t need a separate set of caches from your main tables.
So yes, Hekaton can do the same as Redis, but it is unlikely to be sensible to use it in that way unless its usage does not cost you much.
Hekaton comes into its own when it allows you to process a lot more data without having to invest in the programming cost of re-designing your system to make use of caching with Redis or otherwise.

Has open source ever created a single file database that auto handles transactions?

Has open source ever created a single-file database that has better performance when handling large sets of SQL queries that aren't delivered in formal SQL transaction sets? I work with a .NET server that does some heavy replication of thousands of rows of data from another server, and it does so in a 1-by-1 fashion without formal SQL transactions. Therefore I cannot use SQLite, Firebird, or JavaDB, because none of them automatically batch the transactions, so the performance is dismal. Each insert waits for the success of the previous one, etc. So I am forced to use a heavier database like SQL Server, MySQL, Postgres, or Oracle.
Does anyone know of a flat-file database (with a JDBC driver) that would support auto-batching transactions and solve my problem?
The main thing I don't like about the heavier databases is the lack of the ability to see inside the database with a one-mouse-click operation, like you can with SQLite.
I tried creating a SQLite database and then set PRAGMA read_uncommitted=TRUE; and it didn't result in any performance improvement.
I think that Firebird can work for this.
Firebird has a good .NET provider and many solutions for replication.
Maybe you can read this article about Firebird transactions.
Try Hypersonic DB (HSQLDB) - http://hsqldb.org/doc/guide/ch02.html#N104FC
If you want your transactions to be durable (i.e. survive a power failure), then the database will HAVE to write to the disk after each transaction (usually to a log of some sort).
If your transactions are very small, this will result in a huge number of writes and very poor performance, even on a battery-backed RAID controller or SSD, and worse still on consumer-grade hardware.
The only way of avoiding this is to somehow disable the flush at transaction commit (which of course breaks durability). I have no idea which databases support this, but it should be easy to find out.
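For SQLite in particular, a hedged example of what relaxing the per-commit flush and batching look like (these PRAGMAs are standard SQLite; how much durability you can afford to give up is your call):

    -- SQLite: trade durability for insert speed.
    PRAGMA journal_mode = WAL;    -- write-ahead log, fewer fsyncs than the rollback journal
    PRAGMA synchronous = NORMAL;  -- or OFF to skip fsync entirely (unsafe on power loss)

    -- Alternatively, wrap the one-by-one inserts in explicit transactions:
    BEGIN;
    -- ... many INSERT statements ...
    COMMIT;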

realtime system database use

Given a .NET environment with Windows CE, can you persist thousands of records per second in a local database (SQL Server 2008, Standard or CE)?
What are the performance issues with persisting realtime instrument data in a database versus a log file?
SQL Server 2008 Standard is more than capable of those insertion rates, PROVIDED you have hardware capable of supporting it.
The question you really need to be asking is do I require the ability to search the captured data quickly?
This SO answer might be of interest: What does database query and insert speed depend on?
The number (and width) of indexes on a table will obviously have an impact on insertion rate.
If you are considering open-source, then MySQL is often cited as being able to handle high volumes.

What is a good SQL Server 2008 solution for handling massive writes so that they don't slow down reads for users of the database?

We have large SQL Server 2008 databases. Very often we'll have to run massive data imports into the databases that take a couple hours. During that time everyone else's read and small write speeds slow down a ton.
I'm looking for a solution where maybe we set up one database server that is used for bulk writing, and then two other database servers that are set up for reads and maybe small writes. The goal is to maintain fast small reads and writes while the bulk changes are running.
Does anyone have an idea of a good way to accomplish this using SQL Server 2008?
Paul, there are two parts to your question.
First, why are writes slow?
When you say you have large databases, you may want to clarify that with some numbers. The Microsoft teams have demonstrated multi-terabyte loads in less than an hour, but of course they're using high-end gear and specialized data warehousing techniques. I've been involved with data warehousing teams that regularly loaded so much data overnight that the transaction log drives had to be over a terabyte just to handle the quick bursts, but not a terabyte per hour.
To find out why writes are slow, you'll want to compare your load methods to data warehousing techniques. For example, have you tried using staging tables? Table partitioning? Data and log files on different arrays? If you're not sure where to start, check out my Perfmon tutorial to measure your system looking for bottlenecks:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
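As a hedged sketch of the staging-table idea mentioned above (the object names and the file path are invented): bulk load into a lightly indexed staging table, then move the rows into the live table, or switch them in as a partition:

    -- Bulk load into a staging table first (few or no indexes), then move the
    -- rows into the live table in a controlled step.
    BULK INSERT dbo.Sales_Staging
    FROM 'C:\loads\sales_20240101.dat'
    WITH (TABLOCK, BATCHSIZE = 50000);

    INSERT INTO dbo.Sales WITH (TABLOCK)
    SELECT * FROM dbo.Sales_Staging;

    -- With table partitioning, a staging table that matches the target partition
    -- can instead be switched in almost instantly:
    -- ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales PARTITION 42;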
Second, how do you scale out?
You asked how to set up multiple database servers so that one handles the bulk load while others handle reads and some writes. I would heavily, heavily caution against taking the multiple-servers-for-writes approach because it gets a lot more complicated quickly, but using multiple servers for reads is not uncommon.
The easiest way to do it is with log shipping: every X minutes, the primary server takes a transaction log backup, and that log backup is then applied to the read-only reporting server. There are some catches with this: the data is a little behind, and the restore process has to kick all connections out of the database to apply the restore. This can be a perfectly acceptable solution for things like data warehouses, where the end users want to keep running their own reports while the new day's data loads. You can simply not do transaction log restores while the data warehouse is loading, and the users can maintain connections the whole time.
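Under the hood, log shipping is just scheduled log backups and restores. As a rough sketch of what the two servers run (the database name, share, and file paths are invented, and in practice the built-in log shipping jobs handle this for you):

    -- On the primary server, every X minutes:
    BACKUP LOG SalesDW
    TO DISK = N'\\backupshare\SalesDW_log.trn';

    -- On the reporting server, the same file is restored; STANDBY leaves the
    -- database readable between restores so report users can query it:
    RESTORE LOG SalesDW
    FROM DISK = N'\\backupshare\SalesDW_log.trn'
    WITH STANDBY = N'C:\sqldata\SalesDW_undo.dat';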
To help find out what solution is right for you, consider adding the following to your question:
The size of your database (GB/TB in size, # of millions of rows in the largest table that's having the writes)
The size of your server & storage (a box with 10 drives has different solutions available than a box hooked up to a SAN)
The method of loading data (is it single-record inserts, are you using bulk load, are you using table partitioning, etc)
Why not use memcached to eliminate the reads? I've got the same situation where I work and we've been using memcached on Windows with great results. I was surprised how trivial it was to get my code running with it too. There are open-source wrapper libraries for virtually every mainstream language, and using it could result in 99% of your reads not even touching the database (because you set the memcached values on the write operation of the database).
Memcached is really just a giant hash table store (and can even be clustered or run on any machine you like, since it uses sockets to read and store the hashes).
When reading, simply check whether the memcached value is null (return it if it's not), otherwise do your usual database read and return that. It can store just about everything, as long as each memcached key/value pair is less than 1 MB.
The easiest way would be to slow down the rate at which the writes occur and feed them in one record at a time. They'll be slower, but it would make things faster for users. If the batches take "a couple hours", you can perhaps spread them out more.
This is just an idea. Create a view over your "active" tables. Then BCP the data into a "staging" table. When that is done, update the view to include the "staging" tables.
I'm not sure what you mean when you say everyone else's reads and writes slow down. Do they slow down when reading & writing to the same database where the data is currently being imported, or to different databases on the same server?
If it is the same database, you could always use the "with (nolock)" hint to do the reads even when the table is locked for writes/inserts. However, please be aware that these can be dirty reads. I am not sure how you can make quick writes faster when the table is locked because a write is already in progress; you can keep the transactions small to make the writes faster and release the locks sooner. The other option is to have a separate database for bulk inserts and another database for reading.
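For reference, the hint goes on each table in the query; a hedged example with an invented table:

    -- Dirty read: returns rows even while the import holds locks,
    -- at the cost of possibly seeing uncommitted or inconsistent data.
    SELECT OrderId, Status
    FROM dbo.Orders WITH (NOLOCK)
    WHERE CustomerId = 42;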
