It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
Creating a large (10 GB) database for Informix
As long as you ensure you have enough disk space allocated in the chunks for the dbspaces associated with the instance, there is no particular problem with creating a medium size database such as a 10 GB one. These days, a 'large database' really doesn't start until you reach 100 GB; arguably, not until you reach 1 TB. 10 GB is not small, but it isn't all that large.
Where will your data come from? There are a large number of possible loading strategies, depending on data sources and version of IDS. Note that the very latest versions of IDS (11.50.xC6 or later) include 'external tables' as an extra (and extremely fast) loading mechanism, and the MERGE statement combined with external tables provides an 'UPSERT' - update or insert (or delete) - mechanism too.
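For illustration only, here is a rough Python sketch of that external-table-plus-MERGE load path. It assumes IDS 11.50.xC6 or later and some DB-API 2.0 Informix driver (shown as a placeholder module name); the table, column, and file names are invented for the example, not taken from the question.

```python
# Hedged sketch: bulk load via an external table, then MERGE (upsert) into the target.
# "informix_driver" is a placeholder for whatever DB-API 2.0 Informix driver you use;
# the orders table, order_num/ship_date columns and /data/orders.unl file are illustrative.
import informix_driver  # placeholder module name; connect() details vary by driver

conn = informix_driver.connect("stores@ids_server")
cur = conn.cursor()

# Describe the flat file as an external table with the same shape as the target table.
cur.execute("""
    CREATE EXTERNAL TABLE ext_orders SAMEAS orders
    USING (DATAFILES ("DISK:/data/orders.unl"), FORMAT "DELIMITED")
""")

# UPSERT straight from the external table: update matching rows, insert the rest.
cur.execute("""
    MERGE INTO orders t
    USING ext_orders s ON t.order_num = s.order_num
    WHEN MATCHED THEN UPDATE SET t.ship_date = s.ship_date
    WHEN NOT MATCHED THEN INSERT (order_num, ship_date)
                          VALUES (s.order_num, s.ship_date)
""")
conn.commit()

cur.execute("DROP TABLE ext_orders")  # the external table is only a wrapper around the file
conn.commit()
conn.close()
```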
Closed 9 years ago.
I just want to know whether it is possible to view the content of tables in .mdf files with a NoSQL database, for example MongoDB. I don't want to change the .mdf file, just view what is inside it.
Sorry to say this, but you seem to completely lack any understanding of relational databases, NoSQL databases, and the difference between them.
SQL Server and MongoDB are not only two completely different database products, but two completely different kinds of databases as well (relational vs. non-relational).
Asking whether you can read SQL Server database files with MongoDB is like asking if you can edit AutoCAD files with Microsoft Excel (or the other way round).
Both are complex tools with their own file formats, but made by different vendors and for completely different purposes.
Closed 10 years ago.
I need to take a backup of Apache Solr's saved data. I enabled replication in solrconfig.xml and am doing the backup through the HTTP API. Is there any way to take an incremental backup? That is, I have 5 GB of data and the first backup has been taken. When the data size increases to 5.5 GB, only the additional 0.5 GB of data should be backed up the second time. Is there any way of doing this?
You can take a snapshot of the complete index (not an incremental one) through the HTTP API.
You can also configure the number of backups to maintain, as well as the condition on which to back up the index.
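For example, the full-index snapshot can be requested like this. This is a hedged sketch: the host, port, core name and backup location are assumptions for illustration, and numberToKeep only prunes old snapshots, it does not make them incremental.

```python
# Hedged sketch: ask Solr's replication handler for a full-index snapshot.
# Host, port, core name ("mycore") and the backup location are illustrative.
from urllib.request import urlopen
from urllib.parse import urlencode

params = urlencode({
    "command": "backup",          # take a snapshot of the whole index
    "location": "/backups/solr",  # directory where Solr writes the snapshot
    "numberToKeep": 3,            # prune old snapshots, keep the three newest
    "wt": "json",
})

with urlopen("http://localhost:8983/solr/mycore/replication?" + params) as resp:
    print(resp.read().decode())   # the handler acknowledges; use command=details to check progress
```

The condition-based backups mentioned above are configured on the replication handler in solrconfig.xml (for example via a backupAfter setting), so a snapshot can also be triggered automatically after a commit or optimize.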
Closed 10 years ago.
For a few years now I have been working on a system that currently stores its data in a database. It has quite a high demand on it with millions of transactions.
There is no need to do this, but purely for fun I have long been wondering just how fast I could make it if I wrote the whole thing in C, reading from and writing to disk directly. I know this is a little crazy.
All of the data fits in memory, so the biggest issue is going to be somehow storing a transaction log that can be replayed if the system crashes.
I am wondering what people with more experience in C than I think about this.
If I understand the question correctly, I can see two options:
You could look at something like SQLite, which gives both the "written in C" and fast execution parts, in addition to handling your storage to disk. It is a file-based database and is very fast and resilient against system/program crashes.
You could log all your data to disk while keeping the live copy in memory, but if you store it as SQL transactions it is going to be larger than the equivalent raw data. In this case there is a trade-off: something like SQLite will likely have more processing overhead than your hand-coded RAM storage method, but may have less to write to disk thanks to its raw (non-SQL) storage.
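As a concrete illustration of the second option, here is a minimal log-and-replay sketch, written in Python rather than C purely to keep it short; the record format and file name are made up for the example.

```python
# Hedged sketch: keep the live data in RAM, append every transaction to a log,
# and replay the log after a crash. File name and record format are illustrative.
import json
import os

LOG_PATH = "tx.log"   # append-only transaction log
state = {}            # the live, in-memory copy of the data

def apply(tx, data):
    """Apply one transaction to the in-memory state."""
    data[tx["key"]] = tx["value"]

def commit(tx, log):
    """Durably record the transaction, then apply it to RAM."""
    log.write(json.dumps(tx) + "\n")
    log.flush()
    os.fsync(log.fileno())   # do not report success until it is really on disk
    apply(tx, state)

def replay():
    """Rebuild the in-memory state from the log after a restart or crash."""
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as log:
            for line in log:
                try:
                    apply(json.loads(line), state)
                except ValueError:
                    break     # torn/partial last record from a crash: stop replaying

replay()
with open(LOG_PATH, "a") as log:
    commit({"key": "acct:42", "value": 1000}, log)
print(state)
```

The real cost in this scheme is the fsync per transaction; batching several transactions per sync is the usual way to trade a little durability for a lot of throughput.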
Closed 11 years ago.
How do you make a standalone database application like MetaStock, TradeStation, etc.?
They can handle a fairly large amount of data, and those database files can be carried from one computer to another.
It is possible to read, write, or delete data in those databases from the application.
Does anybody have any idea how those applications work? What type of database do they use? How do you develop a database system like this? If you know anything about it, please share it. Thanks in advance.
Well, if you're just looking for a standalone DB, you could do it in Access, but I don't know how big you are going to want it to get, so Access may be limiting.
MySQL is also an option (mostly because it's free for the common folk).
Look into SQLite. It's very portable. However, it does not handle concurrent writers well (writes are serialized).
http://www.sqlite.org/
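To show why SQLite fits the "carry the file to another computer" part of the question, here is a small sketch using Python's built-in sqlite3 module; the table and file names are illustrative only.

```python
# Hedged sketch: the entire database lives in one file (quotes.db here),
# so moving the application's data is just copying that file.
import sqlite3

conn = sqlite3.connect("quotes.db")   # creates the file if it does not exist
conn.execute("""
    CREATE TABLE IF NOT EXISTS quotes (
        symbol TEXT NOT NULL,
        day    TEXT NOT NULL,          -- ISO date, e.g. 2011-03-01
        close  REAL NOT NULL,
        PRIMARY KEY (symbol, day)
    )
""")
conn.execute("INSERT OR REPLACE INTO quotes VALUES (?, ?, ?)",
             ("IBM", "2011-03-01", 161.2))
conn.commit()

for row in conn.execute("SELECT * FROM quotes WHERE symbol = ?", ("IBM",)):
    print(row)   # read, write and delete all work through plain SQL

conn.close()
```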
Closed 9 years ago.
I want to know: does applying a transaction in a stored procedure slow the execution of the query?
If yes, then why?
I want to know what SQL Server actually does internally when we apply a transaction to a query.
Consider that there are different types of transactions within SQL Server, and that the default setting for the Database Engine is "autocommit transactions", that is to say each individual Transact-SQL statement is committed when it completes. You do not have to specify statements to control transactions unless you wish to manage them explicitly with more refined control.
See: Controlling Transactions (Database Engine)
Are you perhaps therefore asking if there is any additional overhead when explicitly controlling transactions?
The short answer is yes. As to what exactly that overhead is, well, it depends. It depends on multiple factors, such as the method used (i.e. transactions managed through an API or directly via T-SQL), as well as the performance of your specific hardware.
On the performance front, I would guess there is a slight performance degradation when using explicit transactions, but it should be negligible.
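As a rough illustration (not a benchmark) of autocommit versus an explicit transaction, here is a sketch assuming a SQL Server connection through the pyodbc driver; the connection string and the accounts table are invented for the example.

```python
# Hedged sketch: autocommit vs. explicit transaction against SQL Server via pyodbc.
# The connection string and the accounts table are illustrative assumptions.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

# Autocommit (SQL Server's default): every statement is its own transaction
# and is committed as soon as it completes.
auto = pyodbc.connect(CONN_STR, autocommit=True)
auto.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
auto.close()

# Explicit transaction: both statements commit (or roll back) together.
# The extra cost is mostly the commit itself plus holding locks for longer.
conn = pyodbc.connect(CONN_STR, autocommit=False)
cur = conn.cursor()
try:
    cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
    conn.commit()
except pyodbc.Error:
    conn.rollback()
    raise
finally:
    conn.close()
```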
For transaction processing, here are two links below. Hope they help:
http://www.informit.com/articles/article.aspx?p=26657
http://sqlserverpedia.com/wiki/Database-Transaction