Oracle RDS (AWS): How to retrieve generated files?

Here's the situation: we have a stored procedure that generates .txt files in a directory on the database server. I recently migrated from Oracle DB SE on Linux to an RDS Oracle database instance.
I know I'm not able to access the files remotely (no scp or sftp) to copy what it generates, so instead I decided to use DBMS_FILE_TRANSFER.PUT_FILE over a DB link to another database.
The link is working fine, but whenever I try to copy the file I get the following error:
ERROR at line 1:
ORA-19505: failed to identify file "/u01/test.txt"
ORA-27046: file size is not a multiple of logical block size
Additional information: 1
Are there any alternatives to do this without using an EC2 instance?

The file size must be a multiple of 512 bytes (or whatever you've set the logical block size to).
Ensure that when the file is created its size is an exact multiple of 512; pad it with zeros or similar to guarantee this (a sketch follows below).
See https://asktom.oracle.com/pls/apex/asktom.search?tag=file-transfer
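For illustration, a rough PL/SQL sketch of the padding plus the copy; DATA_PUMP_DIR is assumed as the source directory object (substitute your own), and REMOTE_DIR / REMOTE_DB are invented names for the destination directory object and DB link:

-- Rough sketch: pad the file to a multiple of 512 bytes, then copy it over the DB link.
DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_exists BOOLEAN;
  l_size   NUMBER;
  l_block  BINARY_INTEGER;
  l_pad    NUMBER;
BEGIN
  -- check the current size of the file in the source directory object
  UTL_FILE.FGETATTR('DATA_PUMP_DIR', 'test.txt', l_exists, l_size, l_block);

  l_pad := 512 - MOD(l_size, 512);
  IF l_pad < 512 THEN
    -- append blanks so the size becomes an exact multiple of 512
    l_file := UTL_FILE.FOPEN('DATA_PUMP_DIR', 'test.txt', 'a', 32767);
    UTL_FILE.PUT(l_file, RPAD(' ', l_pad, ' '));
    UTL_FILE.FCLOSE(l_file);
  END IF;

  -- copy to the remote database over the existing DB link
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'test.txt',
    destination_directory_object => 'REMOTE_DIR',
    destination_file_name        => 'test.txt',
    destination_database         => 'REMOTE_DB');
END;
/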

Related

Azure Data Factory - File System to Oracle Cloud Storage

Is it possible to copy files from an on-prem file system to Oracle Cloud Storage? Note that we are not concerned with the data inside the files.
In simple terms it's as if copying files from one folder to another.
Here is what I have tried:
Created Self-Hosted Runtime for the file system (testing on my local machine)
Created Linked Service for File System
Linked Service for Oracle Cloud Storage (OCS)
Data Set of File System
Data set of Oracle (OCS)
However, in step 2 I get an error saying that my C:\ path cannot be resolved when the connection is tested,
and
in step 5 it says the dataset cannot be used as a sink because that is not supported for OCS. At this point it seems it is not possible to copy files into OCS?
I tried different configurations to see if OCS can be used as a drop container for files.

Can anyone suggest how I can specify the path for the output file that my stored procedure writes in my Oracle DB?

Suppose the IP address of my FTP server is xx.xxx.xx.xx and I need the output file to be stored in D:/example. I need to ensure that the path I give is on my FTP server. How can I include that in my FOPEN call, i.e. a path which points to the example directory on my FTP server?
Generally speaking, this is how it goes:
there's a database server
there is a directory on one of its disks
that directory will be used in the CREATE DIRECTORY command, which creates a directory (an Oracle object)
it will be used as a target for your file-related operations. For example:
it'll contain CSV files which are the source of external tables
.dmp files, the result of a Data Pump export, will be stored there (the same goes for import)
UTL_FILE will create files in that directory
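A minimal sketch of that setup (the path, directory name, user, and file name are just examples):

-- run as a privileged user: create the directory object and grant access
CREATE DIRECTORY my_dir AS 'D:\app\output';
GRANT READ, WRITE ON DIRECTORY my_dir TO my_user;

-- inside the stored procedure: write the file through UTL_FILE
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('MY_DIR', 'output.txt', 'w');
  UTL_FILE.PUT_LINE(l_file, 'some output line');
  UTL_FILE.FCLOSE(l_file);
END;
/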
All that means that your idea of creating a file on an FTP server might not work quite so easily.
However, there is a way: if you create the directory (the Oracle object) using a UNC (Universal Naming Convention) path which points to a directory on the FTP server, the file might be created there. Do some research about it; I know I once did that (put files onto an application server), but that was a long time ago and I don't remember everything I did.
Another option you might consider is the DBMS_SCHEDULER package. Suppose you create the file on the database server (which is the simplest option; if you do it right, it is more or less trivial). Once the procedure which creates the file is done, call DBMS_SCHEDULER.CREATE_JOB using the EXECUTABLE job type and run an operating system batch file that copies the file from the database server to the FTP server.
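Something along these lines; the job name and batch file path are invented, and the EXECUTABLE job type needs the CREATE EXTERNAL JOB privilege plus a correctly configured OS credential:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'copy_file_to_ftp',            -- invented name
    job_type   => 'EXECUTABLE',
    job_action => 'c:\scripts\copy_to_ftp.bat',  -- batch file that copies the file to the FTP server
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/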
That's all I can say about it; at least, you have something to research & think about.

Temporarily attach/connect to SQL Server LocalDB file

I am trying to programmatically create and connect to an application specific LocalDB database. I would like to do this by specifying the file name of a .MDF file only, ideally without specifying an instance name or a name for the database that gets registered anywhere.
The database is to be accessed from some unit tests so it will only be used for a brief time before being deleted. My current approach creates the .MDF file correctly but also registers the name with the default instance which I would like to avoid given the temporary and 'non-singleton' nature of the database instances.
Is it possible to do what I am trying to do, or have I misunderstood how LocalDB works?
LocalDB automatic instance with specific data file
Server=(localdb)\v11.0;Integrated Security=true;
AttachDbFileName=C:\MyFolder\MyData.mdf;
Update
This can be used with the Deployment area in your .testsettings file. You just need to check 'Enable deployment' and add both the .mdf and .ldf files to 'Additional files and directories to deploy'.
You can then simply use the connection string above, and the test runner will take care of moving your data files to an appropriate temp folder for you.
Chrisb's answer got me on the right lines to solve this, but I noticed that the database remained attached to the default instance in LocalDB even after the connection had been closed. I read that this might eventually be purged after a few minutes but in my case this was too long as the file was located in a temporary directory used by MSTest and had to be closed in time for the cleanup at the end of the test run.
The solution was to use a connection string similar to https://stackoverflow.com/a/26712648 and a detach process similar to https://stackoverflow.com/a/6646319 immediately after I had finished using the connection.
Creating the MDF file in the first place could be accomplished by connecting to the automatic LocalDB instance, executing CREATE DATABASE and then using the same detach method. By using the file name for the database name, which is allowed in LocalDB due to the much longer names permitted, I ensured beyond reasonable doubt that the database name will not clash with anything else on the computer even during the short period it stays attached.
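Roughly, the create-then-detach step looks like this (the path and the path-as-database-name are only illustrative; run it against the automatic LocalDB instance, e.g. (localdb)\v11.0):

-- Illustrative only: the full file path doubles as the database name.
CREATE DATABASE [C:\MyFolder\MyData.mdf]
ON (NAME = 'MyData', FILENAME = 'C:\MyFolder\MyData.mdf')
LOG ON (NAME = 'MyData_log', FILENAME = 'C:\MyFolder\MyData_log.ldf');

-- Detach straight away so the files are free to be moved or deleted by the test cleanup.
EXEC sp_detach_db @dbname = N'C:\MyFolder\MyData.mdf';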

Insert a client file into a column on a server database

I wish to write a query that inserts a file that resides on the client (C# web server) into a column in the database server (SQL Server), something like INSERT … SELECT * FROM OPENROWSET(BULK…), but without having to save the file on the server machine first.
Is this even possible in SQL?
Although your context is unstated, I'm assuming that you're intending to run this from SSMS rather than from OSQL, a PowerShell script, or through some other means.
The file doesn't need to reside on the physical box running SQL Server, but SQL Server does need access to it. The typical approach, I believe, would be for an application server to copy the file to a shared repository and then pass it off to SQL Server through a UNC reference. The syntax to do so is relatively trivial and can be found in Importing Bulk Data by Using BULK INSERT or OPENROWSET(BULK...).
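For example, assuming the application server has already copied the file to a share the SQL Server service account can read (the share, file, and table names here are invented):

-- Invented names: \\appserver\uploads is the shared repository, dbo.MyFiles the target table.
INSERT INTO dbo.MyFiles (Filename, Data)
SELECT 'report.pdf', BulkColumn
FROM OPENROWSET(BULK '\\appserver\uploads\report.pdf', SINGLE_BLOB) AS f;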
If instead you're interested in providing a mechanism for the SQL Server to save a file from some type of stream operation where the client is directly transmitting a file and there is no shared repository, I'm not aware of a way to do that. Even if you use an SQL FILESTREAM object you still need an accessible NTFS location to stream from. See Saving and Retrieving File Using FileStream SQL Server 2008.
At some point, the server will have to have a hand on the file. That does not mean that the server has to keep the file, but the file has to get to the server in order to be read and inserted into the db. Typically, this is achieved with a form and a file-type input. On the server, you can use the uploaded file to create your query, then delete it.
That said, storing files in a database is a debatable practice. Depending on the type and size of files you're storing, your database can quickly balloon in size. For starters, this makes backups slower and more prone to failure, along with a laundry list of other potential pitfalls. Check out this question on SO: Storing Images in DB - Yea or Nay? As you can see from the answers, there are a number of considerations to be made, but a good rule of thumb is to not do this unless you have compelling reasons to do so.
In SQL Server BLOB Data in .NET: Tutorial, Mohammad Elsheimy explains how it can be done:
using (SqlConnection con = new SqlConnection(conStr))
using (SqlCommand command = new SqlCommand("INSERT INTO MyFiles VALUES (@Filename, @Data)", con))
{
    // Read the file on the client and send its name and contents as parameters
    command.Parameters.AddWithValue("@Filename", Path.GetFileName(filename));
    command.Parameters.AddWithValue("@Data", File.ReadAllBytes(filename));
    con.Open();
    command.ExecuteNonQuery();
}
Basically, this way the file is read on the client and sent to the database server without a need for a temporary file on the server machine.

SQL Server Bulk Insert fails (Network related)

I'm fairly new to SQL Server.
I'm trying to bulk insert into a table, using the command in SQL Server Management Studio (2005):
BULK INSERT Table1 FROM 'c:\text.txt' WITH (FIELDTERMINATOR = '|')
I get the error:
Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file "c:\text.txt" does not exist.
I'm positive the file actually exists.
I get the feeling that it is looking for the file on the local hard drive of wherever the server is. Is that the case? If so, how do you generally solve this problem? (To note, I've tried specifying the network address of my PC when entering the location of the text file, but I get a permission error. Also, I know in advance that my company doesn't allow files to be placed on a server.)
SQL Server does not have an SQL statement that reads data from the client end (as the other posters have pointed out). Other RDBMS products do implement this (e.g. the Postgres COPY statement lets you specify a file on the server, or a file on the client that is read by the db connectivity library on the client side).
You can achieve moving data from a file on the client to a table on SQL Server using the bcp command line program.
bcp lets you copy data from a local file to a table on the server, or from a table (or select query) on the server into a local file. For example:
bcp dbname.dbo.tablename in c:\temp.txt -S servername -T -c
will copy a tab-delimited file (temp.txt) into the specified table on the given server (assuming the file contains the right number of columns).
I am not sure if this helps, but it is the only way to move data from a client file to a server table without giving the server some sort of access across the network to the data file on client.
I'd agree that it's a problem with the file being on your C drive, and not the server's drive.
If it's a permissions issue, have you tried creating a file share on your workstation that the server does have permissions to read from? Maybe something like \\YourWorkstation\SQLFile, and then granting everyone (or Guest, depending on how your network permissions are set up) read access on it?
If you can't create the share on your laptop, or you can't grant rights to it for some reason, is there a file share somewhere in the office that you do have rights to, and that SQL can also read from? Maybe a NAS or a "Common" network folder?
Have you created a share drive on your machine that the server can see? If so then you just need to refer to the path including your machine name instead of C:
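For instance (the share name is made up), the original command would then become:

-- The SQL Server service account needs read access to the share.
BULK INSERT Table1
FROM '\\YourWorkstation\SQLFile\text.txt'
WITH (FIELDTERMINATOR = '|')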
Yes, it will look for the file on the SQL server itself.
If you can map a network drive to the C drive of your SQL server, then you can just copy the file over before running the bulk insert.
If you absolutely can't get any access to the server's file system, then you can look at doing something like this:
write a program that reads your text file and inserts the contents into a single record in a temporary table that has a Text field, perhaps using a stored procedure
have the program execute the bcp command to export the data from the temporary table into a text file on the SQL Server's local file system, to a folder on which the account under which the SQL Server service runs has write permission
have the program run a bulk insert command specifying the path to the text file on the server
delete the text file and the temporary table
