Truly virtual FTP - database

I am looking to set up an FTP server without connecting it to a file system.
I want to use a database to store which of the many large files on my site each user will have access to. Because of the number and size of the files involved, the files cannot all be stored on a single server, so a link-based setup is not useful.
I am imagining an FTP server that acts as a pass-through for a backend CDN that stores all the files, checking a remote database for which files to present to each user.
Does a system like this exist? If it doesn't, which open source FTP server would be easiest to modify to suit my needs?

Have you looked at JScape?
It costs money, but it has this capability.

Since you are on StackOverflow, I assume that you are ready for coding. In that case you can use the FTP/FTPS server component included in the FTPSBlackbox package of our SecureBlackbox product. It lets you handle all operations yourself, so you are not bound to the file system in any way.
We also have an SFTP server (SSH File Transfer Protocol) with a similar design.
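
If you would rather start from open source, one candidate worth naming explicitly (it is not mentioned elsewhere in this thread, so treat it as a suggestion) is pyftpdlib, a Python FTP server library built around pluggable authorizers and filesystems. Below is a rough sketch of the pass-through idea; the ACL table schema and CDN URL pattern are hypothetical placeholders, and a complete implementation would also need to override the stat-related methods that pyftpdlib calls when formatting LIST output:

    import sqlite3
    import urllib.request
    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.filesystems import AbstractedFS
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    CDN_BASE = "https://cdn.example.com/files/"   # hypothetical CDN endpoint
    DB_PATH = "acl.db"                            # hypothetical ACL database

    class DatabaseFS(AbstractedFS):
        """Listings come from a database; file bodies stream from the CDN."""

        def listdir(self, path):
            # Which files may the logged-in user see? (made-up schema)
            user = self.cmd_channel.username
            with sqlite3.connect(DB_PATH) as db:
                rows = db.execute("SELECT name FROM files WHERE user = ?",
                                  (user,))
                return [name for (name,) in rows]

        def open(self, filename, mode):
            # RETR: return a file-like object (read/close) that streams
            # the bytes straight from the CDN instead of local disk.
            name = self.fs2ftp(filename).lstrip("/")
            return urllib.request.urlopen(CDN_BASE + name)

    def main():
        authorizer = DummyAuthorizer()
        authorizer.add_user("demo", "secret", homedir="/", perm="elr")
        handler = FTPHandler
        handler.authorizer = authorizer
        handler.abstracted_fs = DatabaseFS
        FTPServer(("0.0.0.0", 2121), handler).serve_forever()

    if __name__ == "__main__":
        main()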

How do I determine the Oracle database name of a data source

I've been searching around and haven't found anything on my scenario that I understand:
I have a list of all of the Oracle databases and corresponding servers that my company owns (about 80 servers, 150 databases). I am trying to figure out which one a specific file is being downloaded from (via a webpage).
I am a mechanical engineer, not in software, so if you could ELI5 that would be very helpful.
Specifically I need the SID name, but figuring out the server name would also be helpful.
Your question is kind of tricky. If you are downloading the file from a web application (I assume it is a Java webapp), the Oracle database could act either as the data store or as a report server that generates Oracle reports directly.
In the first case, you need to find out what kind of file you are downloading. Is it a PDF? An Excel file? Plain text, or something else? The best idea is to check the file link and work out what software generates the file. It could be any backend software, such as POI (for generating Excel files), or even a direct file link, and not Oracle at all.
Also, in this case the file is usually generated on the backend by a servlet. You need to ask the developers which report or file-generating engine they are using; if an Oracle database is also involved, it is usually providing the data for that engine.
In the second case, you can simply take the URL and give it to the webmaster, asking them which Oracle server it is using; this is usually configured in the web server.
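To make the "ask for the configuration" step concrete: if it is a Java webapp, its datasource configuration usually contains a JDBC URL that embeds both the server name and the SID you are after. The host and SID below are placeholders, not values from your environment:

    jdbc:oracle:thin:@dbserver.example.com:1521:ORCL

Here dbserver.example.com is the database server and ORCL is the SID. And if someone can run a query for you on a suspected database, SELECT instance_name FROM v$instance; returns the instance name directly.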

Backing up an SQLite3 database on an embedded system

I am currently working on a C project that contains an SQLite3 database with WAL enabled. We have an HTTP web interface through which it should be possible to get an online backup of the database. Currently, the database file itself is reachable over HTTP, which is bad in many ways. My task is to implement a proper backup mechanism.
There is the SQLite Online Backup API, which seems to be pretty nice: you open two database connections and copy one database to the other. However, in my setup I can't be sure that there is enough space to copy the entire database, since we may have a lot of statistics and multimedia files in it. For me, the best solution would be to open an SQLite connection that writes directly to stdout, so that I could back up the database through CGI.
However, I didn't find a way in the SQLite3 API to open a database connection on special files like stdout. What would be best practice to back up the database? How do you perform online backups of your SQLite3 databases?
Thanks in advance!
If you need a special target interface for the backup, you can implement a custom VFS that does what you need. See the parameters of sqlite3_open_v2(), where you can pass in the name of a VFS.
(See https://www.sqlite.org/c3ref/vfs.html for details about VFS and the OS interface used by SQLite.)
Basically, every sqlite3_backup_step() call writes some blocks of data, and you would need to transfer those to your target in some way.
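For reference, the backup flow itself (sqlite3_backup_init() / sqlite3_backup_step() / sqlite3_backup_finish() in the C API) looks roughly like the sketch below, shown through Python's built-in sqlite3 bindings for brevity. Note that this variant stages the copy in a temporary file, so it avoids the VFS work but not the disk-space constraint; the file names are placeholders:

    import shutil, sqlite3, sys, tempfile

    src = sqlite3.connect("live.db")        # the WAL-enabled database
    with tempfile.NamedTemporaryFile(suffix=".db") as tmp:
        dst = sqlite3.connect(tmp.name)
        # Copy 64 pages per backup step; the API yields between steps,
        # so concurrent writers are only blocked briefly.
        src.backup(dst, pages=64)
        dst.close()
        tmp.seek(0)
        shutil.copyfileobj(tmp, sys.stdout.buffer)   # e.g. CGI response body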

Stopping SQL Server DBs from being copied

I have some SQL Server DBs attached to my instance. The problem is that they can easily be copied from the physical folder, and anyone can attach them to their own instance and view the data.
How can I make sure that while the files are attached they cannot be copied from the physical location, and can be copied only when detached from the instance?
Thanks
So it seems someone has access to the file system. Your database .mdf files can only be as secure as the file system itself, but here are a few things that will help.
You can encrypt the data before it goes into the database (see the sketch below). Use a long encryption key, too difficult to brute-force to be worthwhile.
You can also consider changing the file extensions. There's no law that says you have to use MDF and LDF.
IMO, you may also put the database files in an obscure directory; don't use the default MSSQL\DATA.
Hope these tips help :)
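To make "encrypt the data before it goes into the database" concrete, here is a minimal sketch using Python with the third-party cryptography package; the library choice and sample data are illustrative assumptions, not part of the answer above. The database only ever sees ciphertext, so a stolen .mdf yields nothing readable without the key:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this outside the database!
    f = Fernet(key)

    # Only the ciphertext is ever written to the table.
    plaintext = b"account 42, balance 1000"
    ciphertext = f.encrypt(plaintext)    # opaque token, safe to store

    # Reading it back requires the key, which an attacker who copies
    # the .mdf file from disk does not have.
    assert f.decrypt(ciphertext) == plaintext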
Considering it's SQL Server, I assume you are using a Windows environment. The ideal situation would be to host it on a separate server where only a few people have access.
If it's a smaller setup, restrict unwanted access by applying security on the folders and only allowing yourself and trusted users access to them.
Have you checked out Transparent Data Encryption (TDE)? It's specifically intended to protect the data if someone gets hold of the physical files.
Transparent data encryption (TDE) performs real-time I/O encryption
and decryption of the data and log files.
http://technet.microsoft.com/en-us/library/bb934049.aspx
I don't think you will be able to copy the .mdf file while the SQL Server service is running, since SQL Server keeps the file open with an exclusive lock.
You may also disable the BUILTIN\Administrators login to restrict Windows Authentication.

WinForms application design - moving documents from SQL Server to file storage

I have a standard WinForms application that connects to a SQL Server. The application allows users to upload documents which are currently stored in the database, in a table using an image column.
I need to change this approach so the documents are stored as files and a link to the file is stored in the database table.
With the current approach, when users upload a document they are shielded from how it is stored: since they have a connection to the database, they do not need to know anything about where the files are kept, and no special directory permissions etc. are required. If I set up a network share for the documents, I want to avoid any IT issues, such as users needing access to this directory to upload or view existing documents.
What are the options available to do this? I thought of having a temporary database where documents are uploaded in the same way as the current approach, with a process running on the server to move them to the file store. This database could then be deleted and recreated to reclaim space. Are there any better approaches?
ADDITIONAL INFO: There is no web server element to my application, so I do not think a WCF service is possible.
Is there a reason why you want to get the files out of the database in the first place?
How about still saving them in SQL Server, but using a FILESTREAM column instead of IMAGE?
Quote from the link:
FILESTREAM enables SQL Server-based applications to store unstructured
data, such as documents and images, on the file system. Applications
can leverage the rich streaming APIs and performance of the file
system and at the same time maintain transactional consistency between
the unstructured data and corresponding structured data.
FILESTREAM integrates the SQL Server Database Engine with an NTFS file
system by storing varbinary(max) binary large object (BLOB) data as
files on the file system. Transact-SQL statements can insert, update,
query, search, and back up FILESTREAM data. Win32 file system
interfaces provide streaming access to the data.
FILESTREAM uses the NT system cache for caching file data. This helps
reduce any effect that FILESTREAM data might have on Database Engine
performance. The SQL Server buffer pool is not used; therefore, this
memory is available for query processing.
So you would get the best out of both worlds:
The files would be stored as files on the hard disk (probably faster than storing them in the database), but you don't have to care about file shares, permissions, etc.
Note that you need at least SQL Server 2008 to use FILESTREAM.
I can tell you how I implemented this task. I wrote a WCF service which is used to send archived files. So, if I were you, I would create such a service, able to both save files and send them back. This is easy, but you must also make sure that the user under whose context the WCF service runs has permission to read and write files.
You could just have your application pass the object to a procedure (CLR, maybe) in the database, which then writes the data out to the location of your choosing without storing the file contents. That way you still have a layer of abstraction between the file store and the application, but you don't need a process which cleans up after you.
Alternatively, a WCF/web service could be created which the application connects to. A web method could accept the file contents, write them to the correct place, and return the path to the file or some file identifier (see the sketch below).
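The thread is .NET-centric, but the shape of such a file service is language-agnostic. Here is a minimal sketch in Python with the third-party Flask package, where the route, store path, and id scheme are invented for illustration; the application would save the returned id in the database in place of the old image column:

    import pathlib, uuid
    from flask import Flask, request

    app = Flask(__name__)
    STORE = pathlib.Path("/srv/docstore")      # hypothetical file store root

    @app.post("/documents")
    def upload():
        # Accept raw file bytes, write them under a generated id,
        # and return that id for the caller to store in the database.
        doc_id = uuid.uuid4().hex
        (STORE / doc_id).write_bytes(request.get_data())
        return {"id": doc_id}, 201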

Is it possible to access the FILESTREAM share?

What I mean is being able to access it through Windows Explorer or other programs. I believe the answer is that it isn't possible. But I really want to know why it's not allowed. It seems that the files could be made available read-only through the network share.
You can't access the FILESTREAM share directly and explore around. Any open of a FILESTREAM file needs to be done using the path retrieved from SQL Server, using NtCreateFile (or a wrapper) with the appropriate transaction context passed in through the EaBuffer parameter.
It is possible to create a new share and point it at the physical location of the files, but this is pretty pointless: there is no supported way to resolve a SQL FILESTREAM row to a physical file location (the RsFx filter driver handles these conversions internally), the file location may change at any time due to concurrent updates or partition changes, and you would need to relax security on the folder to an unacceptable level. It can also cause corruption in the database if you move or delete files without the knowledge of SQL Server. Any locks held on the physical files will interfere with deletes, as mentioned in dportas' comment.
I agree it would be great to be able to browse a namespace of the Filestream files through explorer and open files directly through applications without requiring an application rewrite.
Yes, it is possible. The point of FILESTREAM, however, is that you get that access via the FILESTREAM API rather than directly through the filesystem. Bear in mind that the file name could change without warning; for example, updates may cause a new FILESTREAM file to be created. And if you hold file system locks (even shared locks) on a file that is needed by SQL Server, that may cause contention problems. So if you access the data directly through the file system, the results will be unsupported and may be unreliable - but then again, it might work :-)
Yes, it is possible if you are also using FileTables (I am using SQL Express 2017). In SQL Server Configuration Manager, right-click your server instance, select Properties, and go to the FILESTREAM tab. Check "Allow remote clients access to FILESTREAM data". You may have to stop/start your instance. Now you can browse to the share, which is named according to your instance (in my case SQLEXPRESS). In my database (SimioPortal) I had created a FileTable (BlobStore) where I stored my files.
So, at the command prompt I can now type dir \\localhost\sqlexpress\SimioPortal\blobstore and see a list of my files. You can do a similar thing in File Explorer.
