I'm trying to code an MSSQL job that does something using the files in a specific directory. But I don't know the names of the file or files; they will vary over time.
I've found the xp_cmdshell command, but I cannot use it for security reasons.
Is there any other way in T-SQL to check whether a directory contains .txt files (and, if so, get their names)?
Thanks in advance,
Without access to the xp_ stored procedures, no. The other way would be to use sp_OACreate to instantiate a COM Scripting.FileSystemObject, but access to this may well be restricted too, as it's the same kind of security issue.
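For illustration, a minimal sketch of the sp_OACreate route, assuming OLE Automation Procedures are enabled; the path is hypothetical, and this only checks one known file name, since enumerating a folder's contents through sp_OAMethod is considerably more awkward:

DECLARE @fso INT, @hr INT, @exists INT;
-- create the Scripting.FileSystemObject COM object
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @fso OUTPUT;
-- hypothetical path; FileExists returns 1 if the file exists, 0 otherwise
EXEC @hr = sp_OAMethod @fso, 'FileExists', @exists OUTPUT, 'C:\MyFolder\somefile.txt';
SELECT @exists AS file_exists;
EXEC sp_OADestroy @fso;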
As you're describing this as an MSSQL job, I'm assuming this is going to be a scheduled task of some description? If so, your best option is probably going to be creating a standard Windows batch file (.BAT), scheduled via SQL Server Agent, that does the existence checking and passes whatever files are found into your SQL script via sqlcmd/osql.
Suppose the IP address of my FTP server is xx.xxx.xx.xx and I need the output file to be stored in D:/example. I need to ensure that the path I give is on my FTP server. How can I include that in my FOPEN call, i.e. a path which points to the example directory on my FTP server?
Generally speaking, this is how it goes:
there's a database server
there is a directory on one of its disks
that directory is then used in a CREATE DIRECTORY command, which creates an Oracle directory object (see the sketch after this list)
it will be used as a target for your file-related operations. For example:
it'll contain CSV files which are the source for external tables
.dmp files, result of data pump export, will be stored there (the same goes for import)
UTL_FILE will create files in that directory
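A minimal sketch of that setup (names and paths are hypothetical, and the OS directory must already exist on the database server):

-- run as a privileged user (e.g. a DBA)
CREATE OR REPLACE DIRECTORY ext_dir AS 'D:\example';
GRANT READ, WRITE ON DIRECTORY ext_dir TO my_user;

-- then, as my_user, write a file into it with UTL_FILE
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXT_DIR', 'output.txt', 'W');
  UTL_FILE.PUT_LINE(l_file, 'hello');
  UTL_FILE.FCLOSE(l_file);
END;
/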
All that means that your idea of creating a file on an FTP server might not work quite that easily.
However, there is a way: if you create the directory (the Oracle object) using a UNC (Universal Naming Convention) path which points to a directory on the FTP server, the file might be created there. Do some research about it; I know I once did that (put files onto an application server), but that was a long time ago and I don't remember everything I did.
Another option you might consider is the DBMS_SCHEDULER package. Suppose you create a file on the database server (which is the simplest option; if you do it right, it is more or less trivial). Once the procedure which creates the file is done, call DBMS_SCHEDULER.CREATE_JOB with the EXECUTABLE job type to run an operating system batch file that copies the file from the database server to the FTP server.
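Something along these lines (the job name and the batch file path are hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'COPY_FILE_TO_FTP',
    job_type   => 'EXECUTABLE',
    job_action => 'C:\scripts\copy_to_ftp.bat',  -- hypothetical batch file doing the copy
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/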
That's all I can say about it; at least, you have something to research & think about.
My SSIS projects tend to run queries that require changes as they move between environments; for example, the table schema or a value in the WHERE clause might change. I've always either put my SQL into a Project Parameter, which is hard to edit since formatting is lost, or put it directly into the Execute SQL Task/Data Flow Source and then manually edited it between migrations, which is also not ideal.
I was wondering, though: if I added my SQL scripts as files within the project, could they be read back in? For example, if I put in a query like this:
select id, name from %schema%.tablename
I'd like to read this into a variable; then it's easy to use an expression, as I do with Project Parameters, to replace %schema% with the appropriate value. Then the .sql files within the project can be edited with little effort, or even tested through an Execute SQL Task that's disabled/removed before the project goes into the deployment flow. But I've been unable to find how to read in a file using a relative path within the project. Also, I'm not even sure these get deployed to the SSIS server.
Thanks for any insight.
I've added a text file query.sql to an SSIS (SQL 2017) Project in Visual Studio, but I've found no way to pull the contents of query.sql into a variable.
Native tooling approach
For an Execute SQL Task, there's an option to source your query directly from a file.
Set your SQLSourceType to File Connection and then specify a file connection manager in the FileConnection section.
Do be aware that while this is handy, it's also ripe for someone escalating their permissions. If I had access to the file the SSIS package is looking for, I could add a DROP DATABASE statement, create a new user and give them sysadmin rights, etc. - anything the account that runs the SSIS package can do, a nefarious person could exploit.
Roll your own approach
If you're adamant about reading the file yourself, add two Variables to your SSIS package and supply values like the following
User::QueryPath -> String -> C:\path\to\file.sql
User::QueryActual -> String -> SELECT 1;
Add a Script Task to the package. Specify User::QueryPath as a ReadOnly variable and User::QueryActual as a ReadWrite variable.
Within Main(), you'd need code like the following:
// read the path to the .sql file from the ReadOnly variable
string filePath = this.Dts.Variables["User::QueryPath"].Value.ToString();
// load the file's contents into the ReadWrite variable
this.Dts.Variables["User::QueryActual"].Value = System.IO.File.ReadAllText(filePath);
The meat of the matter is System.IO.File.ReadAllText. Note that this doesn't handle checking whether the file exists, whether you have permission to access it, etc. It's just a bare-bones read of a file (and it's open to the same injection challenges as the above method - just this way you own maintaining it versus the fine engineers at Microsoft).
You can build your query by using both a Variable and a Parameter.
For example:
Parameter A: dbo
Build your Variable A (String type) with an expression such as: "SELECT * FROM Server.DB." + @[$Project::ParameterA] + ".Table" (assuming ParameterA is a project-scoped parameter)
So if you need to change the schema, just changing Parameter A will give you the corresponding query in Variable A.
I have to say that I hate myself for such a general question as "What am I doing wrong?", but I simply have no idea what the problem could be:
I've created an SSIS package that takes the data from flat files (CSV), calculates the average of one of the columns, groups by date, writes the result to the database, and deletes the original file. Everything works fine when executed within SSIS, but when I schedule it with SQL Server Agent it simply doesn't work - the log reports success, but there is no new data in the database and the .csv file still exists in its original location.
I know about the problem with the protection level setting in SSIS, so I've changed it to "EncryptAllWithPassword" and I use the same password in the Server Agent job.
Here is a link to the Server Agent Job script (created as "script job as DROP and CREATE")
Edit: Just to make things weirder, using
dtexec /f {filepath} /de {password}
executes the package without a problem. I know I can schedule such a command in Windows itself, but I'd like to keep all scheduled jobs in one place - in SQL Server Agent.
EDIT: Solved by changing the path to UNC
There are two important things to remember when setting up packages to run via a SQL Server Agent job.
Use UNC paths for all file locations, no matter how simple. There is a high probability that the server will have a different view of the file structure than your development machine, so UNC paths ensure that both machines are referencing the same paths.
Use a proxy account to execute the package, as described here: http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/ (a minimal setup sketch follows below).
The proxy account must have access to the physical paths and the server objects.
This also allows for security stratification on your various packages (not all packages need access to everything).
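A minimal sketch of that proxy setup, assuming a hypothetical Windows account and proxy name (verify the SSIS subsystem id on your instance via msdb.dbo.syssubsystems):

-- create a credential that maps to a Windows account with access to the UNC paths
CREATE CREDENTIAL SSISFileCredential
    WITH IDENTITY = 'DOMAIN\ssis_runner',   -- hypothetical account
    SECRET = 'StrongPasswordHere';

-- create a proxy on that credential and allow it to run SSIS packages
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SSISFileProxy',
    @credential_name = N'SSISFileCredential',
    @enabled = 1;

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISFileProxy',
    @subsystem_id = 11;   -- SSIS package execution subsystem on a typical install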
I'm really bad at SQL and couldn't find anything close to what I need. I'm trying to create a stored procedure that runs each night to check whether records in my database have an equivalent file on a server holding all our data.
Example: a record for an mp4 has [Spotnumber] -> 0000001. My procedure should then check (not locally) whether a file with this number exists on the other server.
Also, the place where it should look could be, for example (not locally), C:/Spots. In this directory there will be subdirectories like 2013, 2012, 2011. It should check in each subdirectory whether the file exists.
For this I was thinking of making something like this: Single check. But that one searches locally and already has the URL in a table field. That won't be possible in my case.
So my question is: is it even possible to do this with just a SQL procedure? If yes, how should I make it check the files on another server (what path should I use?), and how can I make it check each subdirectory for each record?
I would suggest another approach.
Instead of using SQL Server to check whether the file exists and then update the database, why don't you use a PowerShell script that checks whether the file exists and then updates the database from within the script? With a little searching on Google you can find functions on Microsoft blogs explaining how to check whether a file exists and how to update the database.
Another solution: you could create an assembly in your database with a .NET language and work with that.
A last possibility: I think it can also be done with SSRS.
If you really need to do that with T-SQL, you'd have to enable xp_cmdshell on your server and then use it... but that means everybody could use xp_cmdshell. It's not designed for that and not recommended :)
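If you do go that route anyway, a bare-bones sketch might look like this (the share name and file pattern are hypothetical, and enabling xp_cmdshell requires sysadmin rights and has real security implications):

-- enable xp_cmdshell (consider the security impact first)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

-- search all year subdirectories on the remote share for one spot number
DECLARE @output TABLE (line NVARCHAR(4000));
INSERT INTO @output
EXEC xp_cmdshell 'dir /b /s "\\otherserver\Spots\0000001*.mp4"';

-- if nothing came back, the file does not exist anywhere under \Spots
SELECT CASE WHEN EXISTS (SELECT 1 FROM @output WHERE line LIKE '%.mp4')
            THEN 1 ELSE 0 END AS file_found;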
I'm working on a web interface for modifying Oracle database backup settings. One of the options I want to give users is where to set the flash recovery area. As far as I know, the only way to change this is by executing something like:
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='C:\file\path' SCOPE=BOTH SID='*';
The problem is that if the file path is a path that doesn't already exist on the system, it isn't created automatically, and this statement fails. Does anyone know if there is a way to instruct Oracle to make that directory for me, or if there is a PL/SQL script I can use to create a directory on the physical disk (i.e. not a CREATE DIRECTORY call)?
If you really want to do this, write a Java stored procedure (stored as an Oracle object) that calls the mkdir method on a java.io.File object. You'll need to use dbms_java.grant_permission to grant java.io.FilePermission privileges.
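A rough sketch of what that could look like (the schema name, directory path, and object names are all hypothetical, and the database JVM must be installed):

-- Java source that wraps File.mkdirs()
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "DirUtil" AS
import java.io.File;
public class DirUtil {
    public static int mkdir(String path) {
        return new File(path).mkdirs() ? 1 : 0;
    }
}
/

-- PL/SQL wrapper around the Java method
CREATE OR REPLACE FUNCTION make_dir(p_path IN VARCHAR2) RETURN NUMBER
AS LANGUAGE JAVA NAME 'DirUtil.mkdir(java.lang.String) return int';
/

-- run as a privileged user: let the owning schema create files under that path
BEGIN
  DBMS_JAVA.GRANT_PERMISSION('MYSCHEMA', 'SYS:java.io.FilePermission',
                             'C:\file\path\*', 'read,write');
END;
/

-- usage: returns 1 if the directory was created
SELECT make_dir('C:\file\path\flash_recovery') FROM dual;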