I have an in-memory SQLite database which I want to serialize and send to another computer. Is this possible without writing the database out to disk and reading the file from there?
You could use the online backup API to copy the in-memory database to a file-based database created in shared memory (on Linux, in /dev/shm for instance), avoiding disk operations. This pseudo-file is then transferred to the remote host (again placed in /dev/shm), and the same backup API is used in the other direction to load it into your target in-memory database.
See:
http://www.sqlite.org/backup.html
http://www.sqlite.org/c3ref/backup_finish.html
AFAIK, there is no API to perform the backup/load directly without an intermediate database.
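A minimal sketch of the sending side, assuming the placeholder path /dev/shm/xfer.db (any tmpfs location works):

```c
#include <sqlite3.h>

/* Copy an in-memory database into a file-backed database living in
   tmpfs, using the online backup API. */
int dump_to_shm(sqlite3 *mem_db, const char *path /* e.g. "/dev/shm/xfer.db" */)
{
    sqlite3 *file_db;
    int rc = sqlite3_open(path, &file_db);
    if (rc != SQLITE_OK) return rc;

    sqlite3_backup *bk = sqlite3_backup_init(file_db, "main", mem_db, "main");
    if (bk) {
        sqlite3_backup_step(bk, -1);    /* -1 = copy all pages in one step */
        sqlite3_backup_finish(bk);
    }
    rc = sqlite3_errcode(file_db);
    sqlite3_close(file_db);
    return rc;
}
```

On the receiving host the same code works with the roles reversed: open the transferred file and back up from that connection into the target in-memory connection.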
The sqlite3 shell program contains a .dump command that "dumps the database in an SQL text format." You can use the source code for .dump (it is public domain) to create your own serializer.
I've heard of the H2 database, but how does it work? I'm scraping data from many websites and don't want to revisit URLs I've already scraped. Can I use the H2 database for that?
H2 is an open-source Java SQL database. It can run in both embedded and server mode, and it is widely used as an in-memory database.
An in-memory database relies on system memory rather than disk space to store data, and memory access is faster than disk access. An in-memory database is used when the data does not need to be persisted: it is volatile by default, and all stored data is lost when the application restarts.
If you want to persist the data in the H2 database, you should store it in a file instead. To achieve that, change the datasource URL property in your application properties file, for example:
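(This assumes a Spring Boot-style application.properties; the property name is framework-specific, but the JDBC URLs are standard H2 syntax, and the database names here are placeholders.)

```properties
# In-memory: fast, but all data is lost when the application stops
spring.datasource.url=jdbc:h2:mem:appdb

# File-based: data persists on disk (H2 creates ./data/appdb.mv.db)
spring.datasource.url=jdbc:h2:file:./data/appdb
```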
Learn more about H2 here
I store large files (50-500 MB) in a database. Once loaded by the application, it doesn't need the whole file in memory. How do I fetch a table row (or, specifically, the installer stored in the row) without loading the entire file into RAM (a sort of buffered download into a file)?
So far I haven't found a solution that avoids loading the whole file. Instead, I forward requests to a Flask server that loads the entire file and then lets the application instance download it into a file, but this doesn't seem like a very good solution.
You are probably looking for FILESTREAM (SQL Server):
FILESTREAM enables SQL Server-based applications to store unstructured data, such as documents and images, on the file system. Applications can leverage the rich streaming APIs and performance of the file system and at the same time maintain transactional consistency between the unstructured data and corresponding structured data.
It is interesting because SQL Server (on Windows) can stream file data to Windows clients without having to load the files in their entirety into the SQL Server's memory:
The Win32 streaming support works in the context of a SQL Server transaction. Within a transaction, you can use FILESTREAM functions to obtain a logical UNC file system path of a file. You then use the OpenSqlFilestream API to obtain a file handle. This handle can then be used by Win32 file streaming interfaces, such as ReadFile() and WriteFile(), to access and update the file by way of the file system.
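A rough sketch of that flow in C, assuming the UNC path and transaction context have already been fetched via T-SQL (SELECT file_col.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() inside a transaction); all names here are placeholders:

```c
#include <windows.h>
#include <sqlncli.h>   /* OpenSqlFilestream, from SQL Server Native Client */
#include <stdio.h>

/* Stream a FILESTREAM BLOB into a local file in fixed-size chunks,
   so the whole BLOB never sits in memory at once. */
void stream_blob_to_file(LPCWSTR filestreamPath,
                         BYTE *txContext, SSIZE_T txContextLen,
                         FILE *out)
{
    HANDLE h = OpenSqlFilestream(filestreamPath,
                                 SQL_FILESTREAM_READ,  /* read-only access */
                                 0,                    /* no open options */
                                 txContext, txContextLen,
                                 NULL);                /* no allocation hint */
    if (h == INVALID_HANDLE_VALUE) return;

    BYTE buf[64 * 1024];
    DWORD got = 0;
    while (ReadFile(h, buf, sizeof buf, &got, NULL) && got > 0)
        fwrite(buf, 1, got, out);

    CloseHandle(h);
}
```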
Do note that at this time it is not supported on SQL Server 2017 for Linux.
I am currently working on a C project that contains an SQLite3 database with WAL enabled. We have an HTTP web interface over which you should be able to get an online backup of the database. Currently, the database file itself is reachable over HTTP, which is bad in many ways. My task now is to implement a new backup algorithm.
There is the SQLite Online Backup API, which seems pretty nice: you open two database connections and copy one database to the other. However, in my setup I can't be sure there is enough space to copy the entire database, since it may contain a lot of statistics and multimedia files. For me, the best solution would be to open a SQLite connection that writes directly to stdout, so that I could stream the backup out through CGI.
However, I didn't find a way in the SQLite3 API to open a database connection on a special file like stdout. What would be best practice for backing up the database? How do you perform online backups of your SQLite3 databases?
Thanks in advance!
If you need a special target interface for the backup, you can implement a custom VFS that does what you need. See the parameters of sqlite3_open_v2(), which let you pass in the name of a VFS.
(See https://www.sqlite.org/c3ref/vfs.html for details about VFS and the OS interface used by SQLite.)
Basically, every sqlite3_backup_step() call will write some pages of data, and your VFS would need to transfer those to your target database in some way.
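A minimal sketch of the driving loop, assuming a hypothetical VFS named "streamvfs" that has been registered elsewhere via sqlite3_vfs_register() and whose xWrite method forwards each page to stdout:

```c
#include <sqlite3.h>

/* Run an incremental online backup through a custom VFS. Every
   sqlite3_backup_step() call triggers xWrite calls on "streamvfs". */
int backup_to_stream(sqlite3 *src)
{
    sqlite3 *dst;
    int rc = sqlite3_open_v2("backup", &dst,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                             "streamvfs");   /* name of the registered VFS */
    if (rc != SQLITE_OK) return rc;

    sqlite3_backup *bk = sqlite3_backup_init(dst, "main", src, "main");
    if (bk) {
        do {
            rc = sqlite3_backup_step(bk, 64);   /* copy 64 pages per step */
        } while (rc == SQLITE_OK || rc == SQLITE_BUSY || rc == SQLITE_LOCKED);
        sqlite3_backup_finish(bk);
    }
    rc = sqlite3_errcode(dst);
    sqlite3_close(dst);
    return rc;
}
```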
I have a standard WinForms application that connects to a SQL Server. The application allows users to upload documents which are currently stored in the database, in a table using an image column.
I need to change this approach so the documents are stored as files and a link to the file is stored in the database table.
With the current approach, when users upload a document they are shielded from how it is stored: since they have a connection to the database, they do not need to know anything about where the files are kept, and no special directory permissions are required. If I set up a network share for the documents, I want to avoid any IT issues, such as users needing access to that directory to upload or read existing documents.
What are the options available to do this? I thought of having a temporary database that documents are uploaded to in the same way as the current approach, with a process running on the server to move them to the file store. This database could then be deleted and recreated to reclaim space. Are there any better approaches?
ADDITIONAL INFO: There is no web server element to my application, so I do not think a WCF service is possible.
Is there a reason why you want to get the files out of the database in the first place?
How about still saving them in SQL Server, but using a FILESTREAM column instead of IMAGE?
Quote from the link:
FILESTREAM enables SQL Server-based applications to store unstructured data, such as documents and images, on the file system. Applications can leverage the rich streaming APIs and performance of the file system and at the same time maintain transactional consistency between the unstructured data and corresponding structured data.
FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.
FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.
So you would get the best out of both worlds:
The files would be stored as files on the hard disk (probably faster than storing them in the database), but you don't have to care about file shares, permissions, etc.
Note that you need at least SQL Server 2008 to use FILESTREAM.
I can tell you how I implemented this task. I wrote a WCF service that is used to send archived files. So, if I were you, I would create such a service, able both to save files and to send them back. This is easy, but you must also make sure that the account under whose context the WCF service runs has permission to read and write the files.
You could just have your application pass the object to a procedure (CLR, maybe) in the database, which then writes the data out to the location of your choosing without storing the file contents in a table. That way you still have a layer of abstraction between the file store and the application, but you don't need a process that cleans up after you.
Alternatively, a WCF/web service could be created that the application connects to. A web method could accept the file contents, write them to the correct place, and return the path to the file or some file identifier.
The application I'm currently working on (call it X) archives data kept in another application (say Y). Both are very old applications, developed about eight years ago. From the documentation I have read so far, the transfer process works like this: a snapshot of the SQL Server tables is exported to flat files using the bcp utility, the flat files are FTP'd to the appropriate Unix box, and there a ctl file generates the insert statements that load the data into the Oracle database. I wanted to know if there is a better and faster way to accomplish this. There should be a way to transfer data directly; the whole process of dumping to files, transferring, and inserting must be slow and painstaking. Any insights?
Create a DB link from your Oracle database to the SQL Server database, and you can transfer the data via selects/inserts.
Schedule the process using DBMS_SCHEDULER if this needs to be done on a periodic basis.
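A sketch of both pieces, with placeholder names throughout (the link itself requires heterogeneous connectivity to SQL Server, as described in the next answer):

```sql
-- Hypothetical link, user, and TNS alias names
CREATE DATABASE LINK mssql_link
  CONNECT TO "sqlserver_user" IDENTIFIED BY "password"
  USING 'mssql_tns_alias';

-- Pull rows straight across the link
INSERT INTO archive_table
  SELECT * FROM source_table@mssql_link;

-- Schedule the transfer nightly
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'nightly_archive',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN INSERT INTO archive_table SELECT * FROM source_table@mssql_link; COMMIT; END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```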
You can read data from many different database vendors using heterogeneous services. To use this, you create a service on the Unix box that uses, in this case, ODBC to connect to the SQL Server database.
You define this service in listener.ora and create a TNS alias that points to it. The alias looks pretty normal, except for the extra line (HS = OK). In your database, you create a database link that uses this TNS alias as its connect string.
UnixODBC in combination with the FreeTDS driver works fine.
The exact details vary between releases: for 10g look for hs4odbc, for 11g dg4odbc.
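A rough sketch of the two configuration pieces, assuming 11g's dg4odbc; hosts, SIDs, and paths are placeholders (dg4odbc additionally reads an init file, initdg4odbc.ora, that names the ODBC DSN):

```
# listener.ora on the Unix box: expose the gateway as its own SID
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = dg4odbc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
      (PROGRAM = dg4odbc)
    )
  )

# tnsnames.ora on the Oracle side: a normal-looking alias plus (HS = OK)
MSSQL_TNS_ALIAS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = unixbox)(PORT = 1521))
    (CONNECT_DATA = (SID = dg4odbc))
    (HS = OK)
  )
```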