It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I need to take a backup of Apache Solr's indexed data. I have enabled replication in solrconfig.xml and am taking backups through the HTTP API. Is there any way to take an incremental backup? For example, I have 5 GB of data and the first backup has been taken. When the data grows to 5.5 GB, only the additional 0.5 GB should be backed up the second time. Is there any way to do this?
You can take a snapshot of the complete index (not incremental) through the HTTP API.
You can also configure the number of backups to keep, as well as the conditions under which the index is backed up.
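As a sketch, the ReplicationHandler in solrconfig.xml supports `backupAfter` and `maxNumberOfBackups`; the host, port, and trigger event below are assumptions for illustration:

```xml
<!-- solrconfig.xml: enable backups via the ReplicationHandler -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- take a backup automatically after each optimize (could also be "commit") -->
    <str name="backupAfter">optimize</str>
  </lst>
  <!-- keep only the two most recent backup snapshots -->
  <str name="maxNumberOfBackups">2</str>
</requestHandler>
```

You can also trigger a backup on demand over HTTP, e.g. `http://localhost:8983/solr/replication?command=backup&numberToKeep=2` (adjust host, port, and core path for your setup). Each call still copies the full index, not a delta.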
I was doing some analysis in preparation for building a real estate site.
I need to fetch data from www.mls.ca
Now I am not sure about two things:
1. Is it legal to fetch data from that site for commercial purposes?
2. How do I pull the data from the site? (i. Do I need to fetch the data every time and keep updating my DB? ii. Do I need to get any DB credentials from the MLS site?) I am a Java developer, so any hints along those lines would be helpful.
Please help me with the process of fetching data from www.mls.ca.
I'm not entirely clear on what you intend to do; however, Spark looks promising. Please provide a more specific question so I can help you further.
I'm trying to create a repro script for an MS Connect item and need to be able to cause a stack dump at will.
How could I do that?
DBCC DUMPTRIGGER can be used to trigger a dump on a specific error.
You can also take a dump at will; see "How to use the Sqldumper.exe utility to generate a dump file in SQL Server", but those dumps are less useful.
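A hedged sketch of the DBCC DUMPTRIGGER approach: the command is undocumented, so the exact syntax may vary by SQL Server version, and error number 802 below is just an example:

```sql
-- Request a stack dump the next time error 802 is raised
DBCC DUMPTRIGGER ('SET', 802);

-- List the currently registered dump triggers
DBCC DUMPTRIGGER ('DISPLAY');

-- Remove the trigger once the repro is captured
DBCC DUMPTRIGGER ('CLEAR', 802);
```

After setting the trigger, raise the target error from your repro script and SQL Server should write a dump to its LOG directory.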
On my website there are names of authors with their images. How can I add their biographies to the database?
Each biography is a long piece of text.
You can use the VARCHAR(MAX) data type for the biography field to store large text data.
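A minimal sketch, assuming hypothetical table and column names (use NVARCHAR(MAX) instead if the biographies contain Unicode text):

```sql
-- Authors table with a large-text biography column (up to ~2 GB)
CREATE TABLE Authors (
    AuthorId  INT IDENTITY(1,1) PRIMARY KEY,
    Name      NVARCHAR(200) NOT NULL,
    ImagePath NVARCHAR(400) NULL,
    Biography VARCHAR(MAX)  NULL
);

INSERT INTO Authors (Name, Biography)
VALUES ('Jane Doe', 'A long biography can go here...');
```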
VARCHAR(MAX) is good for storing large data, but if you are using SQL Server 2008 or above, I would also explore a feature called FILESTREAM, which lets you store the actual files outside of SQL Server while they can still be easily maintained and accessed within SQL Server.
This MSDN article should give you a head start on this great feature.
I get this error when I try to restore my database.
Additional Information:
System.Data.SqlClient.SqlError: The media set has 2 media families but only 1 are provided. All members must be provided.
Any suggestions? Please.
Sounds like the backup was striped across a second media set member when it was taken. Because of this, only the combination of both files constitutes the full backup. Therefore, just as the message says, you need both backup files to restore.
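A sketch of restoring from both stripes, assuming hypothetical file and database names; list every member of the media set in the FROM clause:

```sql
-- Restore a backup that was striped across two files
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase_1.bak',
     DISK = N'C:\Backups\MyDatabase_2.bak'
WITH RECOVERY;
```

If you are unsure how many families a backup set has, `RESTORE HEADERONLY` against one of the files reports the family count.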
How do I create a large (10 GB) database for Informix?
As long as you ensure you have enough disk space allocated in the chunks for the dbspaces associated with the instance, there is no particular problem with creating a medium size database such as a 10 GB one. These days, a 'large database' really doesn't start until you reach 100 GB; arguably, not until you reach 1 TB. 10 GB is not small, but it isn't all that large.
Where will your data come from? There are a large number of possible loading strategies, depending on data sources and version of IDS. Note that the very latest versions of IDS (11.50.xC6 or later) include 'external tables' as an extra (and extremely fast) loading mechanism, and the MERGE statement combined with external tables provides an 'UPSERT' - update or insert (or delete) - mechanism too.
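The external-table plus MERGE combination mentioned above can be sketched as follows; the table names, columns, and data file path are assumptions, and the exact syntax should be checked against your IDS version (11.50.xC6 or later):

```sql
-- Define an external table over a delimited flat file
CREATE EXTERNAL TABLE ext_customers
    SAMEAS customers
    USING (DATAFILES ('DISK:/data/load/customers.unl'), FORMAT 'DELIMITED');

-- UPSERT: update rows that already exist, insert the rest
MERGE INTO customers AS c
USING ext_customers AS e
    ON c.customer_id = e.customer_id
WHEN MATCHED THEN
    UPDATE SET c.name = e.name, c.city = e.city
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, city)
    VALUES (e.customer_id, e.name, e.city);
```

Because the external table reads straight from the flat file, this avoids a separate staging load step and is typically much faster than row-by-row inserts.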