Size of a Lotus Notes database

I need advice regarding the maximum size for a highly volatile Lotus Notes database, i.e. an application based on a database of 8+ GB accessed by 20 users on average, who insert attachments and run scripts.
Thanks!

There are limits to the size of a Notes Database (sorry Ken). See the Notes Help "Table of Notes and Domino known limits" and Technote #1308379.
The most important ones are:
Database size: The maximum OS file size limit -- (up to 64GB)
Fields in a database: ~ 3000 (limited to ~ 64K total length for all field names). You can enable the database property "Allow more fields in database" to get up to 22,893 uniquely-named fields in the database.
Views in a database: No limit; however, as the number of views increases, the length of time to display other views also increases
Documents in a view: Up to the maximum size of the database
Usually the "limiting" factors for an application are view rebuild and full-text index times, as Ken suggested.
You may want to check out Andre Guirard's postings on the topic of performance, as well as his white paper Performance basics for IBM Lotus Notes developers and the Domino Wiki.

I'm not sure if this answers your question, but there's theoretically no limit to the size of a Notes database. Years ago I remember hearing at Lotusphere that they had tested a database at 64 GB and it worked.
That said, there will likely be some issues with view indexes growing large, and long waits for refreshing views.

In the link below you can find the limitations that concern Lotus Notes databases, as published by IBM and also noted by leyrer.
Limits of Lotus Notes
However, in our company we work heavily with Lotus Notes databases, and our databases are growing very fast, mostly due to attachments that can be documents, spreadsheets, images, etc. The solution we implemented in order to avoid reaching the size limits was to keep the main application database and attach to it separate databases for the attachments. In this way the attachments are not stored in the main application but in the attached dbs. You can create as many as you need. For example you can have the following:
MyApp.nsf - Main application
MyAppAttach1.nsf - Attached
MyAppAttach2.nsf - Attached
In this way we can also control how much the dbs will grow. I hope that this can help you in your implementation.

For large databases, it may be important to think about a strategy for archiving documents once they are no longer being actively processed. You don't mention how many documents are created/edited/deleted every day, or how large the average document is; but if it is 8GB now, how large will it be next month, or next year? Depending on the factors martin listed in his answer, this could become a concern long before you reach the 64 GB supported limit, and it is better to be prepared in advance.

The first suggestion is to create an archive database and reduce the size of the main database. As the next step you should write a process that archives the documents weekly or monthly, according to your needs.

Related

archiving in small database tables

At what scale of database growth does archiving become a necessity, and are there guidelines to show when it is required?
I manage an intranet which provides short news articles via about 40 targeted news groups. I have been asked to remove browsing access to articles older than 2 years, but to maintain access to these by an existing search interface.
One proposal is to hide records by using scheduled overnight tasks to move old news items out to parallel archive tables. Given that the entire database is only about 5 GB, the entire set of 13,000 news articles takes up 17 MB, and there are indexes on the publication dates, is this approach advisable, or will WHERE clauses based on dates suffice? Is there a rule of thumb here?
The db in question is SQL 2008, we add maybe 2000 news items per year, and there are no reported performance issues at present - this is purely 'future proofing'.
This definitely is a candidate for doing the simplest possible thing, because the data involved is quite manageable. A WHERE clause should be enough. You should have an index on the date column you want to use in the WHERE clause, as that can probably be done online.
I don't know about the setup you have, but 5 GB is small enough to load in memory and still have room to spare. So you're well within what the system can handle.
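As a rough illustration of the index-plus-WHERE-clause approach (the table and column names here are hypothetical, not taken from the question):

```sql
-- Hypothetical schema: NewsArticles(ArticleId, Title, Body, PublishedOn, ...)
-- An index on the publication date lets both queries seek instead of scanning.
CREATE INDEX IX_NewsArticles_PublishedOn
    ON dbo.NewsArticles (PublishedOn);

-- "Current" articles for browsing: anything from the last two years.
SELECT ArticleId, Title, PublishedOn
FROM dbo.NewsArticles
WHERE PublishedOn >= DATEADD(YEAR, -2, GETDATE());

-- Older articles stay in the same table and remain reachable through the
-- existing search interface with a simple date predicate.
SELECT ArticleId, Title, PublishedOn
FROM dbo.NewsArticles
WHERE PublishedOn < DATEADD(YEAR, -2, GETDATE())
  AND Title LIKE '%budget%';
```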

Saving files in Lotus Notes and full-text search

Saving files in Lotus Notes has two problems:
the full-text search is slow and inefficient
a single database cannot be very big (limited by the OS file system).
How can I resolve these problems, and are there any alternatives?
In Lotus Notes/Domino (version 8.x) the maximum file size for a single Notes database is 64 GB. I have worked with Lotus Notes/Domino for 17 years, and I have never hit the database size limit in my projects. Just split your project into different databases according to their functional purpose.
If you want to store very big files, it seems you should choose a different platform. Lotus Notes is not a suitable tool in this case.
Regarding full-text search, if you set up immediate indexing, the Domino server will index all new information added to the database as soon as possible. In my experience, small amounts of data were indexed immediately; big files took one to two minutes.
The search itself works fast, and I did not notice any slow behaviour on a hardware server that meets the Lotus Domino software requirements.

Best Database selection for Client/Server Application (Multiuser) with Delphi?

I want to write a program with Delphi XE that will be able to connect to a server, and users should be able to read/write the database.
All records will be string (Unicode enabled); maybe a small amount can be BLOBs.
My needs are;
Multiple users enabled
More than one user should be able to add new records at one time
Capable of storing huge amount of data
Users can be able to edit their own records
Unicode enabled
As low-cost a solution as possible
Thanks in advance...
I vote for Firebird. It fits all your needs and it is free.
I would go with postgres - it's also free and is very fast.
Sandeep
Most of your requirements are handled by most modern database engines (although concurrency management is not exactly the same among all databases). But to choose the database(s) that would suit you best, you should give more precise information:
"Multiple users". How many concurrent connections? 10? 100? 1000? 10000? 100000? More?
"More than one user should be able add new records at one time". How many inserts per hour? Is this an OLTP database, or a DW one?
"Capable of storing huge amount of data". How many tables? How many rows? How many fields? What's the average row size? Do you need LOB support? How many indexes?
"Users can be able to edit their own records". How often? How many? How long? Some databases have better locking mechanism than others.
"Unicode enabled". Which flavour? UTF-8? UTF-16?
"All records will be string". Which is the maximum string length you need? Hope they are "natural" string fields - storing non-natural string data in string fields usually lower performance.
I'm sure you'll get others, but ElevateDB fits your needs.
It's the follow-on to DBISAM, which does NOT have Unicode support. But ElevateDB does.
May I suggest to take a look at NexusDB. It also fits all your needs. Bill Todd has just reviewed the product.
"Users can be able to edit their own records" What does it mean for you? A database in which records are not editable, that is a Read/Only database, is not very common.
You'll have to think about the general architecture of your software. You just don't select a database like a new car. I'd suggest that you won't be focused on the database choice, but take a look at the whole picture.
Here are some advices:
Separate your database storage, the user interface, and your software logic. This is called 3-Tier, and is definitely a good idea if you're starting a new project in 2010. It will save you a lot of time in the future. We use such an architecture in our http://blog.synopse.info/category/Open-Source-Projects/SQLite3-Framework
Use a database connection which is not fixed to one database engine. Delphi comes with DBX, and there are free or not so expensive alternatives around. See http://edn.embarcadero.com/article/39500 for dbx and http://www.torry.net/pages.php?id=552 for alternatives
Think about the future: try to guess what will be the features of your application after some time, and try to be ready to implement them in your today's architecture choices.
In all cases, you're right to ask for advice and feedback. The time you spend now, before coding, will save you time during future maintenance.
For example, if one of your requirements is that "All records will be string", with some BLOBs, your database size won't ever be bigger than a few GB. SQLite3 could be enough for you, and there is no size limitation on TEXT fields in this database.
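For what it's worth, a minimal SQLite sketch (hypothetical table) showing that TEXT and BLOB columns take no declared length:

```sql
-- SQLite: TEXT and BLOB columns are not declared with a length; a single value
-- can grow up to the engine's per-value limit (about 1 GB by default).
CREATE TABLE records (
    id      INTEGER PRIMARY KEY,
    owner   TEXT NOT NULL,   -- Unicode text of any length
    payload TEXT,            -- long string data
    extra   BLOB             -- optional small binary data
);
```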
Nobody's mentioned SQL Server Express so I guess I'll do it...
Microsoft SQL Server Express is jolly good and is also free.
Yes, it does have limits but they're pretty big and it's not possible to know if they're sufficient without further info from the OP.
Multiple users enabled - yep
More than one user should be able add new records at one time - yep
Capable of storing huge amount of data - depends on definition of huge. But "probably"
Users can be able to edit their own records - umm, yes
Unicode enabled - yep
As low-cost a solution as possible - it's free. But the data access components will depend on your choice of access method

Best data store for billions of rows

I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year).
The only requirement is fast inserts and fast lookups for all records with the same GUID and the ability to access the data store from .net.
I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, and the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself).
I've looked into many nosql data stores available, but can't find one that meets my needs as a robust production-ready platform.
If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?
Storing ~3.5 TB of data and inserting about 1K rows/sec 24x7, while also querying at an unspecified rate, is possible with SQL Server, but there are more questions:
What availability requirement do you have? 99.999% uptime, or is 95% enough?
What reliability requirement do you have? Does missing an insert cost you $1M?
What recoverability requirement do you have? If you lose one day of data, does it matter?
What consistency requirement do you have? Does a write need to be guaranteed to be visible on the next read?
If you need all these requirements I highlighted, the load you propose is going to cost millions in hardware and licensing on a relational system, any system, no matter what gimmicks you try (sharding, partitioning etc). A NoSQL system would, by its very definition, not meet all these requirements.
So obviously you have already relaxed some of these requirements. There is a nice visual guide comparing the nosql offerings based on the 'pick 2 out of 3' paradigm at Visual Guide to NoSQL Systems:
After OP comment update
With SQL Server this would be a straightforward implementation:
a single table with a clustered (GUID, time) key (a DDL sketch follows this list). Yes, it is going to get fragmented, but fragmentation only affects read-aheads, and read-aheads are needed only for significant range scans. Since you only query for a specific GUID and date range, fragmentation won't matter much. Yes, it is a wide key, so non-leaf pages will have poor key density. Yes, it will lead to a poor fill factor. And yes, page splits may occur. Despite these problems, given the requirements, it is still the best clustered key choice.
partition the table by time so you can implement efficient deletion of the expired records, via an automatic sliding window. Augment this with an online index partition rebuild of the last month to eliminate the poor fill factor and fragmentation introduced by the GUID clustering.
enable page compression. Since the clustered key groups by GUID first, all records of a GUID will be next to each other, giving page compression a good chance to deploy dictionary compression.
you'll need a fast I/O path for the log file. You're interested in high throughput, not low latency, for the log to keep up with 1K inserts/sec, so striping is a must.
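A rough DDL sketch of the design described in this list; the object names, column types, and boundary dates are made up for illustration, and a real partition function would carry one boundary per retained month:

```sql
-- Monthly partition function and scheme for the sliding window
-- (only a few boundaries shown).
CREATE PARTITION FUNCTION pfByMonth (datetime2(0))
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');
CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

-- Single table, clustered on (GUID, time), partitioned by time,
-- with page compression enabled.
CREATE TABLE dbo.Records
(
    RecordGuid  uniqueidentifier NOT NULL,
    RecordedAt  datetime2(0)     NOT NULL,
    Payload     varbinary(75)    NOT NULL,
    CONSTRAINT PK_Records PRIMARY KEY CLUSTERED (RecordGuid, RecordedAt)
)
ON psByMonth (RecordedAt)
WITH (DATA_COMPRESSION = PAGE);
```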
Partitioning and page compression each require Enterprise Edition SQL Server; they will not work on Standard Edition, and both are quite important to meet the requirements.
As a side note, if the records come from a front-end Web servers farm, I would put Express on each web server and instead of INSERT on the back end, I would SEND the info to the back end, using a local connection/transaction on the Express co-located with the web server. This gives a much much better availability story to the solution.
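The capitalized SEND hints at Service Broker-style reliable messaging. A heavily simplified sketch of what the sending side on the co-located Express instance could look like is below; all names are hypothetical, and routing, security, and the receiving service on the back end are omitted:

```sql
-- One-time setup on the local SQL Server Express instance.
CREATE MESSAGE TYPE [//Ingest/Record] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Ingest/Contract] ([//Ingest/Record] SENT BY INITIATOR);
CREATE QUEUE dbo.IngestSendQueue;
CREATE SERVICE [//Ingest/Sender] ON QUEUE dbo.IngestSendQueue ([//Ingest/Contract]);
GO

-- Per record (or per batch): enqueue locally inside the web server's own
-- transaction; Service Broker delivers it to the back end asynchronously.
DECLARE @dialog uniqueidentifier;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Ingest/Sender]
    TO SERVICE '//Ingest/Collector'
    ON CONTRACT [//Ingest/Contract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Ingest/Record]
    (N'<record guid="..." ts="..." payload="..."/>');
```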
So this is how I would do it in SQL Server. The good news is that the problems you'll face are well understood and the solutions are known. That doesn't necessarily mean this is better than what you could achieve with Cassandra, BigTable or Dynamo. I'll let someone more knowledgeable in NoSQL matters argue their case.
Note that I never mentioned the programming model, .NET support and such. I honestly think they're irrelevant in large deployments. They make a huge difference in the development process, but once deployed it doesn't matter how fast the development was, if the ORM overhead kills performance :)
Contrary to popular belief, NoSQL is not about performance, or even scalability. It's mainly about minimizing the so-called object-relational impedance mismatch, but it is also about horizontal scalability vs. the more typical vertical scalability of an RDBMS.
For the simple requirement of fast inserts and fast lookups, almost any database product will do. If you want to add relational data, or joins, or have any complex transactional logic or constraints you need to enforce, then you want a relational database. No NoSQL product can compare.
If you need schemaless data, you'd want to go with a document-oriented database such as MongoDB or CouchDB. The loose schema is the main draw of these; I personally like MongoDB and use it in a few custom reporting systems. I find it very useful when the data requirements are constantly changing.
The other main NoSQL option is distributed key-value stores such as BigTable or Cassandra. These are especially useful if you want to scale your database across many machines running commodity hardware. They work fine on servers too, obviously, but don't take advantage of high-end hardware as well as SQL Server or Oracle or other databases designed for vertical scaling, and obviously, they aren't relational and are no good for enforcing normalization or constraints. Also, as you've noticed, .NET support tends to be spotty at best.
All relational database products support partitioning of a limited sort. They are not as flexible as BigTable or other DKVS systems, they don't partition easily across hundreds of servers, but it really doesn't sound like that's what you're looking for. They are quite good at handling record counts in the billions, as long as you index and normalize the data properly, run the database on powerful hardware (especially SSDs if you can afford them), and partition across 2 or 3 or 5 physical disks if necessary.
If you meet the above criteria, if you're working in a corporate environment and have money to spend on decent hardware and database optimization, I'd stick with SQL Server for now. If you're pinching pennies and need to run this on low-end Amazon EC2 cloud computing hardware, you'd probably want to opt for Cassandra or Voldemort instead (assuming you can get either to work with .NET).
Very few people work at the multi-billion row set size, and most times that I see a request like this on Stack Overflow, the data is nowhere near the size it is being reported as.
36 billion, 3 billion per month: that's roughly 100 million per day, 4.16 million an hour, ~70k rows per minute, 1.1k rows a second coming into the system, in a sustained manner for 12 months, assuming no downtime.
Those figures are not impossible by a long margin; I've done larger systems, but you want to double-check that these really are the quantities you mean - very few apps really have this volume.
In terms of storing / retrieving, quite a critical aspect you have not mentioned is aging out the older data - deletion is not free.
The normal technology to look at is partitioning; however, the lookup / retrieval being GUID-based would result in poor performance, assuming you have to get every matching value across the whole 12-month period. You could place a clustered index on the GUID column to get your associated data clustered for read / write, but at those quantities and that insertion speed the fragmentation will be far too high to support, and it will fall on the floor.
I would also suggest that you are going to need a very decent hardware budget if this is a serious application with OLTP-type response speeds; by some approximate guesses, assuming very little indexing overhead, that is about 2.7 TB of data.
In the SQL Server camp, the only thing that you might want to look at is the new Parallel Data Warehouse edition (Madison), which is designed more for sharding out data and running parallel queries against it to provide high speed against large data marts.
"I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year).
The only requirement is fast inserts and fast lookups for all records with the same GUID and the ability to access the data store from .net."
I can tell you from experience that this is possible in SQL Server, because I did it in early 2009... and it's still in operation to this day, and quite fast.
The table was partitioned into 256 partitions; keep in mind this was SQL Server 2005... and we did exactly what you're saying, which is to store bits of info by GUID and retrieve them by GUID quickly.
When I left we had around 2-3 billion records, and data retrieval was still quite good (1-2 seconds through the UI, or less directly on the RDBMS), even though the data retention policy was only just about to be put in place.
So, long story short, I took the 8th char (i.e. somewhere in the middle-ish) of the GUID string, SHA1-hashed it, cast it to a tinyint (0-255), stored the row in the corresponding partition, and used the same function call when getting the data back.
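A hedged sketch of such a bucketing function (names are hypothetical; the 256-way partition function and scheme are omitted):

```sql
-- Take one character of the GUID string, SHA1-hash it, and keep a single
-- byte (0-255). The same expression is used when writing and when reading,
-- so a lookup always lands in the right partition.
CREATE FUNCTION dbo.GuidBucket (@guid uniqueidentifier)
RETURNS tinyint
WITH SCHEMABINDING
AS
BEGIN
    RETURN CAST(
        SUBSTRING(
            HASHBYTES('SHA1', SUBSTRING(CONVERT(char(36), @guid), 8, 1)),
            1, 1) AS tinyint);
END;
GO

-- A persisted computed column on this bucket could then feed a 256-way
-- partition scheme, for example:
-- ALTER TABLE dbo.Records ADD Bucket AS dbo.GuidBucket(RecordGuid) PERSISTED;
```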
ping me if you need more info...
The following article discusses the import and use of a 16 billion row table in Microsoft SQL.
https://www.itprotoday.com/big-data/adventures-big-data-how-import-16-billion-rows-single-table
From the article:
Here are some distilled tips from my experience:
The more data you have in a table with a defined clustered index, the slower it becomes to import unsorted records into it. At some point, it becomes too slow to be practical.
If you want to export your table to the smallest possible file, make it native format. This works best with tables containing mostly numeric columns, because they're more compactly represented in binary fields than character data. If all your data is alphanumeric, you won't gain much by exporting it in native format.
Not allowing nulls in the numeric fields can further compact the data. If you allow a field to be nullable, the field's binary representation will contain a 1-byte prefix indicating how many bytes of data will follow.
You can't use BCP for more than 2,147,483,647 records because the BCP counter variable is a 4-byte integer. I wasn't able to find any reference to this on MSDN or the Internet. If your table consists of more than 2,147,483,647 records, you'll have to export it in chunks or write your own export routine.
Defining a clustered index on a prepopulated table takes a lot of disk space. In my test, my log exploded to 10 times the original table size before completion.
When importing a large number of records using the BULK INSERT statement, include the BATCHSIZE parameter and specify how many records to commit at a time. If you don't include this parameter, your entire file is imported as a single transaction, which requires a lot of log space.
The fastest way of getting data into a table with a clustered index is to presort the data first. You can then import it using the BULK INSERT statement with the ORDER parameter.
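For illustration, a hedged BULK INSERT sketch combining the BATCHSIZE and ORDER tips; the file path, table, and column names are made up:

```sql
-- Import presorted native-format data into a table clustered on
-- (RecordGuid, RecordedAt). ORDER tells SQL Server the file is already sorted
-- on the clustered key; BATCHSIZE commits every 100,000 rows instead of
-- loading the whole file as one giant transaction.
BULK INSERT dbo.Records
FROM 'D:\staging\records_sorted.dat'
WITH (
    DATAFILETYPE = 'native',
    ORDER (RecordGuid, RecordedAt),
    BATCHSIZE = 100000,
    TABLOCK
);
```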
There is an unusual fact that seems to be overlooked.
"Basically after inserting 30Mil rows in a day, I need to fetch all the rows with the same GUID (maybe 20 rows) and be reasonably sure I'd get them all back"
Needing only about 20 rows per lookup, a non-clustered index on the GUID will work just fine. You could cluster on another column for data dispersion across partitions.
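A minimal sketch of that suggestion, with hypothetical names (cluster on the time column for insert locality, and let a covering non-clustered index on the GUID serve the small lookups):

```sql
CREATE TABLE dbo.Records
(
    RecordedAt  datetime2(0)     NOT NULL,
    RecordGuid  uniqueidentifier NOT NULL,
    Payload     varbinary(75)    NOT NULL
);

-- Cluster on arrival time so inserts append at the end of the index.
CREATE CLUSTERED INDEX CIX_Records_RecordedAt
    ON dbo.Records (RecordedAt);

-- Non-clustered index on the GUID serves the ~20-row lookups; INCLUDE makes
-- it covering, so no key lookups back into the clustered index are needed.
CREATE NONCLUSTERED INDEX IX_Records_RecordGuid
    ON dbo.Records (RecordGuid)
    INCLUDE (Payload);
```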
I have a question regarding the data insertion: How is it being inserted?
Is this a bulk insert on a certain schedule (per min, per hour, etc)?
What source is this data being pulled from (flat files, OLTP, etc)?
I think these need to be answered to help understand one side of the equation.
Amazon Redshift is a great service. It was not available when the question was originally posted in 2010, but it is now a major player in 2017. It is a column-based database forked from Postgres, so standard SQL and Postgres connector libraries will work with it.
It is best used for reporting purposes, especially aggregation. The data from a single table is stored on different servers in Amazon's cloud, distributed by the defined table distkeys, so you rely on distributed CPU power.
So SELECTs, and especially aggregated SELECTs, are lightning fast. Loading large data should preferably be done with the COPY command from Amazon S3 CSV files. The drawbacks are that DELETEs and UPDATEs are slower than usual, which is why Redshift is not primarily a transactional database, but more of a data warehouse platform.
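A hedged Redshift sketch of that layout; the table, columns, S3 path, and IAM role are made up:

```sql
-- Distribute rows across the cluster by GUID so all rows for one GUID live on
-- the same slice, and sort by time within each slice.
CREATE TABLE records (
    record_guid CHAR(36)   NOT NULL,
    recorded_at TIMESTAMP  NOT NULL,
    payload     VARCHAR(200)
)
DISTKEY (record_guid)
SORTKEY (recorded_at);

-- Bulk load from CSV files staged in S3 (preferred over row-by-row INSERTs).
COPY records
FROM 's3://my-bucket/exports/records_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV;
```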
You can try using Cassandra or HBase, though you would need to read up on how to design the column families as per your use case.
Cassandra provides its own query language but you need to use Java APIs of HBase to access the data directly.
If you need to use HBase then I recommend querying the data with Apache Drill from MapR, which is an open source project. Drill's query language is SQL-compliant (keywords in Drill have the same meaning they would have in SQL).
With that many records per year you're eventually going to run out of space.
Why not use filesystem storage, like XFS, which supports 2^64 files, and run it on smaller boxes?
Regardless of how fancy people want to get, or how much money one would end up spending on a system with whatever database (SQL, NoSQL, whichever), records in these quantities are usually produced by electric companies and weather stations/providers, like a ministry of environment that controls smaller stations throughout the country.
If you're storing something like pressure, temperature, wind speed, humidity, etc., and the GUID is the location, you can still divide the data by year/month/day/hour.
Assuming you store 4 years of data per hard drive, you can then run it on a smaller NAS with mirroring, which would also provide better read speeds and multiple mount points based on the year the data was created.
You can simply make a web interface for searches.
So dumping location1/2001/06/01//temperature and location1/2002/06/01//temperature would only read the hourly temperatures for the 1st day of summer in those 2 years (24h x 2 = 48 small files), versus searching a database with billions of records and possibly millions spent.
A simple way of looking at things: there are 1.5 billion websites in the world, with God knows how many pages each.
If a company like Google had to spend millions per 3 billion searches to pay for supercomputers for this, they'd be broke.
Instead they have the power bill for a couple million cheap computers.
And Caffeine indexing: future-proof, keep adding more.
And yeah, where indexing running off SQL makes sense, then great.
Building supercomputers for crappy tasks with fixed data like weather, statistics, and so on, just so techs can brag that their systems crunch x TB in x seconds, is a waste of money that could be spent somewhere else - maybe on a power bill that won't run into the millions anytime soon by running something like 10 NAS servers.
Store records in plain binary files, one file per GUID; it wouldn't get any faster than that.
You can use MongoDB with the GUID as the sharding key; this means that you can distribute your data over multiple machines, yet the data you want to select is only on one machine because you select by the sharding key.
Sharding in MongoDB is not yet production-ready.

Practical limits of a SQL Server database

I am setting up a database that I anticipate will be quite large, used for calculations and data storage. It will be one table with maybe 10 fields, containing one primary key and two foreign keys to itself. I anticipate there will be about a billion records added daily.
Each record should be quite small, and I will primarily be doing inserts. With each insert I will need to do a simple update on one or two fields of a connected record. All queries should be relatively simple.
At what size will I start running into performance problems with SQL Server? I've seen mention of VLDB systems, but have also heard they may be a real pain. Is there a threshold where I should start looking at that? Is there a better DB than SQL Server that is designed for this sort of thing?
When talking about transaction rates of over 10k/sec you shouldn't be asking for advice on forums... This is close to TPC-C benchmark performance on 32- and 64-way systems, which cost millions to tune.
At what size will you be running into problems?
With a good data model and schema design, a properly tuned server with correct capacity planning will not run into problems at 1 bil. records per day. The latest published SQL Server benchmarks are at about 1.2 mil tran/min. That is roughly 16k transactions per second, on a system priced at USD ~6 mil in 2005 (64-way Superdome). To achieve 10k tran/sec for your planned load you're not going to need a Superdome, but you are going to need a quite beefy system (at least 16-way probably) and especially a very, very good I/O subsystem. When doing back-of-the-envelope capacity planning one usually considers about 1K tran/sec per HBA and 4 CPU cores to feed the HBA. And you're going to need quite a few database clients (application mid-tiers) just to feed 1 bil. records per day into the database. I'm not claiming that I did your capacity planning here, but I just wanted to give you a ballpark of what we are talking about. This is a multi-million dollar project, and something like this is not designed by asking advice on forums.
Unless you're talking large as in Google's index type of large, the Enterprise databases like SQL Server or Oracle will do just fine.
James Devlin over at Coding the Wheel summed it up nicely (though this is more of a comparison between free DBs like MySQL and Oracle/SQL Server):
Nowadays I like to think of SQL Server and Oracle as the Death Stars of the relational database universe. Extremely powerful. Monolithic. Brilliant. Complex almost beyond the ability of a single human mind to understand. And a monumental waste of money except in those rare situations when you actually need to destroy a planet.
As far as performance goes, it all really depends on your indexing strategy. Inserts are really the bottleneck here: the records need to be indexed as they come in, and the more indexes you have, the longer inserts will take.
In the case of something like Google's index, read up on "BigTable"; it's quite interesting how Google set it up to use clusters of servers to handle searches across enormous amounts of data in mere milliseconds.
It can be done, but given your hardware costs and plans, get Microsoft in to spec things out for you. It will be a fraction of your hardware costs.
That said, Paul Nielsen blogged about 35k TPS (3 billion rows per day) 2 years ago. The comments are worth reading too, and reflect some of what Remus said.
The size of the database itself does not create performance problems. Practical problems with database size come from operational/maintenance issues.
For example:
De-fragmenting and re-building indexes take too long.
Backups take too long or take up too much space.
Database restores cannot be performed quickly enough in case of an outage.
Future changes to the database tables take too long to apply.
I would recommend designing/building in some sort of partitioning from the start. It can be SQL Server partitioning, application partitioning (e.g. one table per month), archiving (e.g. to a different database).
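As one concrete flavor of designing partitioning in from the start, here is a hedged sketch of monthly partitioning with a switch-out archive step; all names and dates are made up:

```sql
-- Monthly partitions; archiving a month then becomes a metadata-only operation.
CREATE PARTITION FUNCTION pfMonthly (date)
    AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-02-01', '2011-03-01');
CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Measurements
(
    MeasuredOn date   NOT NULL,
    ItemId     bigint NOT NULL,
    Value      float  NOT NULL
) ON psMonthly (MeasuredOn);

-- Identically structured staging table on the same filegroup.
CREATE TABLE dbo.Measurements_Staging
(
    MeasuredOn date   NOT NULL,
    ItemId     bigint NOT NULL,
    Value      float  NOT NULL
) ON [PRIMARY];

-- Move January 2011 (partition 2) out of the main table almost instantly,
-- then copy or back up the staging table elsewhere at leisure.
ALTER TABLE dbo.Measurements
    SWITCH PARTITION 2 TO dbo.Measurements_Staging;
```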
I believe that these problems occur in any database product.
In addition, be sure to make allowances for transaction log file sizes.
