How to tell if an SQLite database file is valid or not

In the code below, pathToNonDatabase is the path to a simple text file, not a real SQLite database. I was hoping sqlite3_open would detect that, but it doesn't (db is not NULL, and result is SQLITE_OK). So how can I detect that a file is not a valid SQLite database?
sqlite3 *db = NULL;
int result = sqlite3_open(pathToNonDatabase, &db);
if((NULL==db) || (result!=SQLITE_OK)) {
// invalid database
}

SQLite opens databases lazily. Just do something immediately after opening that requires the file to be a database.
The best is probably PRAGMA schema_version;.
This will report 0 if the database hasn't been created yet (for instance, an empty file). In that case it's safe to work with (and run CREATE TABLE, etc.).
If the database has been created, it will return how many revisions the schema has gone through. The value itself might not be interesting, but the fact that it's not zero is.
If the file exists and is neither empty nor a database, you'll get an error.
If you want a somewhat more thorough check, you can use PRAGMA quick_check;. This is a lighter-weight integrity check, which skips verifying that the contents of the tables line up with the indexes. It can still be very slow.
Avoid PRAGMA integrity_check;. It not only checks every page, but also verifies the contents of the tables against the indexes. That is positively glacial on a large database.
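For illustration, here is a minimal sketch of the schema_version check from C# with System.Data.SQLite (the same idea works against the C API by preparing and stepping the PRAGMA statement); the class name and connection-string details here are assumptions, not part of the original question:

using System;
using System.Data.SQLite;

static class SQLiteProbe
{
    // Returns true if the file answers PRAGMA schema_version, i.e. it is
    // either a valid database or a new/empty one (which reports 0).
    public static bool LooksLikeSQLiteDatabase(string path)
    {
        try
        {
            using (var connection = new SQLiteConnection("Data Source=" + path + ";FailIfMissing=True;"))
            {
                connection.Open();
                using (var command = new SQLiteCommand("PRAGMA schema_version;", connection))
                {
                    long schemaVersion = Convert.ToInt64(command.ExecuteScalar());
                    return schemaVersion >= 0;
                }
            }
        }
        catch (SQLiteException)
        {
            // Thrown for files that exist but are not SQLite databases.
            return false;
        }
    }
}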

For anyone needing to do this in C# with System.Data.SQLite, you can start a transaction and then immediately roll it back, as follows:
private bool DatabaseIsValid(string filename)
{
    using (SQLiteConnection db = new SQLiteConnection(@"Data Source=" + filename + ";FailIfMissing=True;"))
    {
        try
        {
            db.Open();
            using (var transaction = db.BeginTransaction())
            {
                transaction.Rollback();
            }
        }
        catch (Exception ex)
        {
            log.Debug(ex.Message, ex);
            return false;
        }
    }
    return true;
}
If the file is not a valid database, the following SQLiteException is thrown: "file is encrypted or is not a database" (System.Data.SQLite.SQLiteErrorCode.NotADb). If you aren't using encrypted databases, this solution should be sufficient.
(Only the db.Open() call was required for version 1.0.81.0 of System.Data.SQLite, but when I upgraded to version 1.0.91.0 I had to add the inner using block to get it to work.)
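If you want to treat only the "not a database" case as invalid and let other failures bubble up, you can narrow the catch block. This is a sketch that assumes the ResultCode property exposed by recent System.Data.SQLite releases:

private bool DatabaseIsValid(string filename)
{
    using (SQLiteConnection db = new SQLiteConnection(@"Data Source=" + filename + ";FailIfMissing=True;"))
    {
        try
        {
            db.Open();
            using (var transaction = db.BeginTransaction())
            {
                transaction.Rollback();
            }
        }
        catch (SQLiteException ex)
        {
            // NotADb is reported when the file exists but is not an SQLite
            // database (or is encrypted). ResultCode is assumed to exist in
            // your System.Data.SQLite version; older releases only expose a
            // numeric ErrorCode.
            if (ex.ResultCode == SQLiteErrorCode.NotADb)
                return false;
            throw;
        }
    }
    return true;
}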

I think a PRAGMA integrity_check; could do it.

If you only want to check whether the file is a valid SQLite database, you can check it with this function:
private bool CheckIfValidSQLiteDatabase(string databaseFilePath)
{
    // A valid SQLite database starts with the 16-byte header "SQLite format 3\0".
    byte[] header = new byte[16];
    using (FileStream fileStream = new FileStream(databaseFilePath, FileMode.Open, FileAccess.Read))
    {
        if (fileStream.Read(header, 0, 16) < 16)
        {
            return false; // too short to contain the header
        }
    }
    string headerText = System.Text.Encoding.ASCII.GetString(header);
    return headerText.Contains("SQLite format");
}
as stated in the documentation on the SQLite file format: the first 16 bytes of every database file are the header string "SQLite format 3\000".

Related

Suspending Azure function until Entity Framework makes changes

I have an Azure function (IoT Hub trigger) that:
selects a top 1 record ordered by time in descending order
compares with a new record that comes
writes the coming record only if it differs from the selected one (some fields are different)
The issue pops up when records arrive at the Azure function very rapidly: I end up with duplicates in the database. I guess this is because SQL Server doesn't have enough time to commit the change before the next record arrives, so when the Azure function selects the latest record it actually receives an outdated one.
I use EF Core.
I believe the issue is not with the function itself but with the transactional nature of the operation you described. As a straightforward fix, you can try wrapping the read and the write in a transaction with the highest isolation level:
using (var transaction = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions
    {
        // With this isolation level all data modifications are sequential
        IsolationLevel = IsolationLevel.Serializable
    }))
{
    using (var connection = new SqlConnection("YOUR CONNECTION"))
    {
        connection.Open();
        try
        {
            // Run a raw ADO.NET command in the transaction
            var command = connection.CreateCommand();
            // Your reading query (just for example's sake)
            command.CommandText = "SELECT TOP (1) * FROM dbo.Whatever";
            var result = command.ExecuteScalar();
            // Run an EF Core command in the same transaction
            var options = new DbContextOptionsBuilder<TestContext>()
                .UseSqlServer(connection)
                .Options;
            using (var context = new TestContext(options))
            {
                context.Items.Add(result);
                context.SaveChanges();
            }
            // Commit the transaction if all commands succeed; it will
            // auto-rollback when disposed if either command fails
            transaction.Complete();
        }
        catch (System.Exception)
        {
            // TODO: Handle failure
        }
    }
}
You will need to adjust the code to your needs, but it gives you the idea; a sketch of the alternative follows below.
That said, I would rather avoid the problem entirely: don't compare and conditionally skip records on write, but insert every incoming record and select the latest one afterwards. Transactions are tricky in application code; applied in the wrong place or in the wrong way, they can cause performance degradation and deadlocks.
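To make that alternative concrete, here is a minimal sketch under assumed names (TestContext, Items, incomingItem and a Timestamp column are illustrative, as is the options variable from the snippet above); duplicates are tolerated on write and resolved when reading:

using System.Linq;
using Microsoft.EntityFrameworkCore;

// Write path: always insert, never compare first.
using (var context = new TestContext(options))
{
    context.Items.Add(incomingItem);
    context.SaveChanges();
}

// Read path: pick the newest record only when you actually need it.
using (var context = new TestContext(options))
{
    var latest = context.Items
        .OrderByDescending(i => i.Timestamp)
        .FirstOrDefault();
}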

Large Data streaming from sql server through WCF to client

What are possible ways of storing a large data file (CSV files around 1 GB) in a SQL database and streaming that data from the database through WCF to the client (without fetching the complete data into memory)?
I think there are a few issues to take into account here:
The size of the data you actually want to return
The structure of that data (or lack thereof)
The place to store that data behind your NLB
Returning that data to the consumer.
From your question, it sounds like you want to store 1 GB of structured (CSV) data and stream it to the client. If you really are generating and then serving a 1 GB file (and don't have much metadata around it), I'd go for an FTP/SFTP server (or perhaps a network file share, which can certainly be secured in a variety of ways).
If you need to store metadata about the file that goes beyond its file name/create time/location, then SQL might be a good option, assuming you could do one of the following:
store the CSV data in tabular format in the database
Use FILESTREAM and store the file itself
Here is a decent primer on FILESTREAM from SimpleTalk. You could then use the SqlFileStream to help stream the data from the file itself (and SQL Server will help maintain transactional consistency for you, which you may or may not want), an example of which is present in the documentation. The relevant section is here:
private static void ReadFilestream(SqlConnectionStringBuilder connStringBuilder)
{
    using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
    {
        connection.Open();
        SqlCommand command = new SqlCommand(
            "SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees",
            connection);
        SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
        command.Transaction = tran;
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // Get the pointer for the file
                string path = reader.GetString(0);
                byte[] transactionContext = reader.GetSqlBytes(1).Buffer;
                // Create the SqlFileStream
                using (Stream fileStream = new SqlFileStream(path, transactionContext, FileAccess.Read, FileOptions.SequentialScan, allocationSize: 0))
                {
                    // Read the contents as bytes and write them to the console
                    for (long index = 0; index < fileStream.Length; index++)
                    {
                        Console.WriteLine(fileStream.ReadByte());
                    }
                }
            }
        }
        tran.Commit();
    }
}
Alternatively, if you do choose to store it in tabular format you can use typical SqlDataReader methods, or perhaps some combination of bcp and .NET helpers.
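As a rough illustration of the tabular route, the reader can be opened with CommandBehavior.SequentialAccess so each row's payload column is pulled in chunks rather than materialised at once (the table and varbinary column names below are made up):

using System;
using System.Data;
using System.Data.SqlClient;

private static void StreamCsvRows(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand("SELECT CsvChunk FROM dbo.CsvRows", connection))
    {
        connection.Open();
        // SequentialAccess streams column data instead of buffering whole rows.
        using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            byte[] buffer = new byte[8192];
            while (reader.Read())
            {
                long offset = 0;
                long read;
                while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    // Hand each chunk to the WCF response stream, a file, etc.
                    offset += read;
                }
            }
        }
    }
}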
You should be able to combine that last link with Microsoft's remarks on streaming large data over WCF to get the desired result.
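Since the question is specifically about not holding the whole file in memory on the WCF side, here is a minimal sketch of a streamed operation; the contract and path below are assumptions, and the binding on both ends needs transferMode="Streamed" plus a large enough maxReceivedMessageSize:

using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface ICsvService
{
    // Returning a Stream lets WCF pull the data in chunks instead of
    // buffering the whole response in memory.
    [OperationContract]
    Stream GetCsvFile(string fileName);
}

public class CsvService : ICsvService
{
    public Stream GetCsvFile(string fileName)
    {
        // This could just as well wrap a SqlFileStream opened as in the
        // snippet above, provided its transaction stays open until WCF
        // closes the stream.
        return File.OpenRead(Path.Combine(@"C:\csv-store", fileName));
    }
}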

db.SubmitChanges() not updating the database in Windows phone 8

My question might be similar to many questions found in a Google search, but I have a specific query. I have written my code like this, where dB is the database and Items is the table with FileName as one property.
var query = from fs in dB.Items
            where fs.FilePath.Trim() == strOldpath.ToString()
            select fs;
foreach (var fs in query)
{
    fs.FileName = txtrename.Text.ToString();
}
try
{
    dB.SubmitChanges();
}
catch (Exception e)
{
}
This code runs fine, but after debugging I stop the emulator and run the following in the command prompt:
ISETool.exe ts xd 19xxxx-b6f2-474b-a747-6axxxxxxx E:\Practise\WinPhone\PhoneApp3\
It creates the *.sdf in the specified folder and I can open it in Server Explorer. But I can see that it shows the old file name instead of the updated one. The code runs without errors. Any idea why the file name is not updated? I have set the primary key for the table as well.
You appear to have hit a known issue with trying to update the results of a read-only query:
Workaround for LINQ to SQL Entity Identity Caching and Compiled Query Bug?

Large result sets with Hibernate and MaxDB

I need to synchronize a large XML file (containing 6 million records sorted by ID) against a SAP MaxDB database table.
This is my present strategy:
read XML object from file and convert to Java bean
load object with the same primary key from database (see below)
if object does not exist, insert object
if object exists and is changed, update object
at the end of the process, scan the database table for (deleted) objects that are not contained in the file
For efficiency reasons, the "load database object" method keeps a cache of the next 1000 records and issues statements to load the next bunch of objects. It works like this:
List<?> objects = (List<?>) transactionTemplate.execute(new TransactionCallback() {
    public Object doInTransaction(TransactionStatus status) {
        ht.setMaxResults(BUNCH_SIZE);
        ht.setFetchSize(BUNCH_SIZE);
        List<?> objects = ht.find("FROM " + simpleName + " WHERE " +
            primaryKeyName + " >= ? ORDER BY " + primaryKeyName, primaryValue);
        return objects;
    }
});
Unfortunately, with certain values of BUNCH_SIZE (10,000) I get an SQL exception "result table space exhausted".
How can I better optimize the process?
How can I avoid the SQL exception / the bunch size problem?
The code that saves the changed objects is the following:
if (prevObject == null) {
    ht.execute(new HibernateCallback() {
        public Object doInHibernate(Session session)
                throws HibernateException, SQLException {
            session.save(saveObject);
            session.flush();
            session.evict(saveObject);
            return null;
        }
    });
    newObjects++;
} else {
    List<String> changes = ObjectUtil.getChangedProperties(prevObject, currentObject);
    if (hasImportantChanges(changes)) {
        changedObjects++;
        ht.merge(currentObject);
    } else {
        unchangedObjects++;
    }
}
While this code works in principle, it produces masses of database log entries (we are talking about more than 50 GB of log backups) if there are a lot of new or changed objects in the source file.
Can I improve this code by using a lower transaction isolation level?
Can I reduce the amount of database log data written?
Maybe there is a problem with the database configuration?
I am very grateful for any help. Thanks a lot, Matthias

How to write an ODBC DataSet object into a database table using C#?

I am unit/auto-testing a large application which uses MSFT SQL Server, Oracle, as well as Sybase as its back end. Maybe there are better ways to interface with a db, but the ODBC library is what I have to use. Given these constraints, there is something that I need to figure out, and I would love your help with this. My tests do change the state of the database, and I seek an inexpensive, 99.99% robust way to restore things after I am done (I feel like a full db restore after each test is too much of a penalty). So, I seek a complement to the function below - I need a way to populate a table from a DataSet.
private DataSet ReadFromTable(OdbcConnection connection, string tableName)
{
    string selectQueryString = String.Format("select * from {0};", tableName);
    DataSet dataSet = new DataSet();
    using (OdbcCommand command = new OdbcCommand(selectQueryString, connection))
    using (OdbcDataAdapter odbcAdapter = new OdbcDataAdapter(command))
    {
        odbcAdapter.Fill(dataSet);
    }
    return dataSet;
}
// The method that I seek.
private void WriteToTable(OdbcConnection connection, string tableName, DataSet data)
{
    ...
}
I realize that things can be more complicated - that there are triggers, that some tables depend on others. However, we barely use any constraints for the sake of efficiency of the application under test. I am giving you this information, so that perhaps you have a suggestion on how to do things better/differently. I am open to different approaches, as long as they work well.
The non-negotiables are: MsTest library, VS2010, C#, ODBC Library, support for all 3 vendors.
Is this what you mean? I might be overlooking something
In ReadFromTable:
dataSet.WriteXmlSchema(memorySchemaStream);
dataSet.WriteXml(memoryDataStream);
In WriteToTable:
/* empty the table first */
DataSet template = new DataSet();
template.ReadXmlSchema(memorySchemaStream);
template.ReadXml(memoryDataStream);
DataSet actual = new DataSet();
actual.ReadXmlSchema(memorySchemaStream);
actual.Merge(template, false);
// push the merged rows back to the table through an OdbcDataAdapter;
// a plain DataSet has no Update method of its own (see the sketch below)
Another variant might be: read the current data, compare it with the template, and add whatever is missing to the actual DataSet. The only thing to remember is that you cannot copy the actual DataRow objects from one DataSet to another; you have to recreate the DataRows.
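Here is a hedged sketch of the missing WriteToTable, assuming the table can be cleared and re-filled and that "select * from <table>" returns enough key information for an OdbcCommandBuilder to generate INSERT statements (i.e. the table has a primary key visible through ODBC):

private void WriteToTable(OdbcConnection connection, string tableName, DataSet data)
{
    // Empty the table first so the restored snapshot is authoritative.
    using (OdbcCommand delete = new OdbcCommand(
        String.Format("delete from {0};", tableName), connection))
    {
        delete.ExecuteNonQuery();
    }

    string selectQueryString = String.Format("select * from {0};", tableName);
    using (OdbcCommand command = new OdbcCommand(selectQueryString, connection))
    using (OdbcDataAdapter adapter = new OdbcDataAdapter(command))
    using (OdbcCommandBuilder builder = new OdbcCommandBuilder(adapter))
    {
        // Rows coming out of ReadFromTable are in the Unchanged state; mark
        // the copy's rows as Added so the adapter generates INSERTs for them.
        DataSet snapshot = data.Copy();
        foreach (DataTable table in snapshot.Tables)
        {
            foreach (DataRow row in table.Rows)
            {
                row.SetAdded();
            }
        }
        adapter.Update(snapshot, snapshot.Tables[0].TableName);
    }
}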
