Does DatabaseIntegrityCheck.sql log to text files on disk? - sql-server

I want to set up CHECKDB using DatabaseIntegrityCheck.sql from Ola Hallengren. I have passed LogToTable = 'Y', but will it also log to text files on disk? I did not find any parameter for that.
P.S. I know that the jobs created by MaintenanceSolution.sql do log to files on disk.
Script reference: DatabaseIntegrityCheck.sql

The procedure does not, by itself, log to disk. There isn't really any clean way to write to disk from inside T-SQL, hence the use of an output file in the job step (which is what the job-creation section of MaintenanceSolution.sql does).
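As a minimal sketch of that approach (the job name, step command, and log path below are placeholders, not something the scripts mandate): you can point an Agent job step's output at a text file via the @output_file_name parameter of msdb.dbo.sp_add_jobstep:

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'DatabaseIntegrityCheck - USER_DATABASES',  -- hypothetical; assumes the job was created with sp_add_job
    @step_name = N'DatabaseIntegrityCheck',
    @subsystem = N'TSQL',
    @command = N'EXECUTE dbo.DatabaseIntegrityCheck @Databases = ''USER_DATABASES'', @LogToTable = ''Y''',
    @output_file_name = N'C:\Logs\DatabaseIntegrityCheck.txt';  -- SQL Server Agent writes the step output here

This is the same mechanism MaintenanceSolution.sql uses when it creates its jobs; the Agent, not the procedure, does the writing to disk.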

Related

Date in NLog file name and limit the number of log files

I'd like to achieve the following behaviour with NLog for rolling files:
1. prevent renaming or moving the file when starting a new file, and
2. limit the total number or size of old log files to avoid capacity issues over time
The first requirement can be achieved e.g. by adding a timestamp like ${shortdate} to the file name. Example:
logs\trace2017-10-27.log <-- today's log file to write
logs\trace2017-10-26.log
logs\trace2017-10-25.log
logs\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
According to other posts, however, it is not possible to combine a date in the file name with archive parameters like maxArchiveFiles. If I use maxArchiveFiles, I have to keep the log file name constant:
logs\trace.log <-- today's log file to write
logs\archive\trace2017-10-26.log
logs\archive\trace2017-10-25.log
logs\archive\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
But in this case, on the first write of each day, yesterday's trace is moved to the archive and a new file is started.
The reason I'd like to prevent moving the trace file is that we use a Splunk log monitor that watches the files in the log folder for updates, reads the new lines and feeds them to Splunk.
My concern is that if an event is written at 23:59:59.567, the next event at 00:00:00.002 clears the previous content before the log monitor is able to read it in that fraction of a second.
To be honest I haven't tested this scenario, as it would be complicated to set up (my team doesn't own Splunk, etc.), so please correct me if this cannot happen.
Note also that I know it is possible to feed Splunk directly in other ways, e.g. via a network connection, but the current Splunk setup at our company reads from log files, so it would be easier that way.
Any idea how to solve this with NLog?
When using NLog 4.4 (or older) you have to go into Halloween mode and apply some trickery.
This example makes hourly log files in the same folder and ensures archive cleanup is performed after 840 hours (35 days):
fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
archiveFileName="${logDirectory}/Log.{#}.log"
archiveDateFormat="yyyy-MM-dd-HH"
archiveNumbering="Date"
archiveEvery="Year"
maxArchiveFiles="840"
archiveFileName - Using {#} allows the archive cleanup to generate a proper file wildcard.
archiveDateFormat - Must match the ${date:format=} of the fileName (so remember to adjust both date formats if a change is needed).
archiveNumbering=Date - Configures the archive cleanup to support parsing of file names as dates.
archiveEvery=Year - Activates the archive cleanup, but also the archive file operation. Because the configured fileName already performs the rollover on its own, we don't want any additional archive operations (e.g. to avoid generating extra empty files at midnight).
maxArchiveFiles - How many archive files to keep around.
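Assembled into a complete File target, the above might look like this (the target name and the logDirectory variable are assumptions, defined elsewhere inside your NLog.config's targets section):

<target xsi:type="File" name="logfile"
        fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
        archiveFileName="${logDirectory}/Log.{#}.log"
        archiveDateFormat="yyyy-MM-dd-HH"
        archiveNumbering="Date"
        archiveEvery="Year"
        maxArchiveFiles="840" />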
With NLog 4.5 (still in beta at the time of writing) it will be a lot easier, as one just has to specify maxArchiveFiles. See also https://github.com/NLog/NLog/pull/1993

Check if a file exists in UNIX from SQLplus without UTL_FILE

My current scenario is like this:
I need to log in to sqlplus from a shell script to call a stored procedure.
After that I need to create a CSV file by SPOOLing data from a table.
Then I need to check whether the CSV file has been created in a particular directory, and depending on the result an update query needs to be run.
I know that this can be checked within sqlplus with the help of the UTL_FILE package, but unfortunately, due to client policies, access to this package is restricted in the current system.
Another way is to exit sqlplus, perform the file check in UNIX, and then log in to sqlplus again to perform the remaining actions. But I believe this would result in slower execution, and performance is an important factor in this implementation as the tables contain huge volumes of data (millions of rows).
So is there any other way to check this from sqlplus without exiting from the current session?
System Info:
OS - Red Hat Enterprise Linux
Database - Oracle 11g
If the file is on the same machine that you're running SQL*Plus on, you could potentially use the host command.
If the file you're checking is the same one you're spooling to, it must exist anyway, or you would have got a SQL*Plus error of some kind; but if you do want to check the same file for some reason, and assuming you have a substitution variable holding the file name:
define csv_file=/path/to/spool.csv
-- call procedure
spool &csv_file
-- do query
spool off
host ls &csv_file
update your_table
set foo=bar
where &_rc = 0;
If the file exists when the host command is run, the _rc substitution variable will be set to zero. If the file doesn't exist or isn't readable for any reason, it will be something else - e.g. 2 if the file just doesn't exist. Adding the check &_rc = 0 to your update means no rows are updated if there was an error. (You can of course still have whatever other conditions you need for the update.)
You could suppress the display of the file name by adding 1>/dev/null to the host command string, and could also suppress any error messages by adding 2>/dev/null as well, though you might want to see those.
The documentation warns against using &_rc as it isn't portable; but it works on RHEL, so as long as you don't need your script to be portable to other operating systems this may be good enough for you. What you can't do, though, is do anything with the contents of the file, or interpret anything about it; all you have available is the return code from the command you run. If you need anything more sophisticated you could call a script that generates specific return codes, but that's getting a bit messy.
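If you do go down that custom-script route, a minimal sketch might look like this (check_csv.sh is a hypothetical helper; it returns distinct codes that the &_rc check can branch on):

#!/bin/sh
# check_csv.sh - exit 0 if the file exists and is non-empty,
# 2 if it is missing, 3 if it exists but is empty
[ -e "$1" ] || exit 2
[ -s "$1" ] || exit 3
exit 0

You would then run it from SQL*Plus with host /path/to/check_csv.sh &csv_file and test &_rc against the specific code you care about.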

Autorun script that creates a text file

For safety and as a precaution, I created a file with information about me, so that if I lose my pendrive, the person who finds it knows how to contact me. But my friends always alter the contents of the file or its name.
Is it possible to create an autorun.inf file that always generates a text file (.txt) on the pendrive with some information about the owner of the pendrive?
Thanks.
Sounds like you want a script that checks the file on load, or when some other process occurs. Is it a real drive? Maybe you can run the equivalent of an autoexec command.
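As a hedged sketch of that idea: autorun.inf can point at a batch file that recreates the owner file on each run. Note the big caveat that Windows 7 and later ignore the open= directive in autorun.inf for USB drives, so on modern systems the finder would have to run the batch file manually. The file names below are made up for illustration.

autorun.inf:

[autorun]
open=restore_owner.bat
label=Please return - see OWNER.txt

restore_owner.bat:

@echo off
rem Recreate the owner-info file at the drive root on every run
(
  echo This pendrive belongs to ^<your name^>.
  echo If found, please contact ^<your email^>.
) > "%~d0\OWNER.txt"

%~d0 expands to the drive letter of the batch file, so the text file is always written back to the pendrive's root even if someone deletes or renames it.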

UNIX Shell script: file reading issue

I have to read a file in my shell script. I was using PL/SQL's UTL_FILE to open the file.
But now a change has been made that appends a timestamp to the file name,
e.g. import.data becomes import_20152005101200.data.
The timestamp is the time at which the file arrives at the server.
Since the file name changes, I can't access the file the old way.
I came up with the solution below:
UTL_FILE.FOPEN ('path','import_${file_date}.data','r');
To achieve this I have to get the file name, trim it using SUBSTR to extract the timestamp, and pass that into the file_date variable.
However, I am not able to find out how to get the file name in a particular path. I could use basename, but my file name keeps changing because of the timestamp.
Any help or alternative ideas are welcome.
PL/SQL isn't a good tool to solve this problem; UTL_FILE doesn't have any facility to list the files in a folder.
A better solution is to define a stored procedure which uses UTL_FILE and takes the file name to process as an argument. That way, you can use the shell (which has many powerful commands and tools for examining folders and files) or a scripting language like Python to determine which file to process.
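A minimal sketch of that division of labour, assuming the files land in /path and a hypothetical procedure process_import(p_filename) wraps the UTL_FILE logic:

#!/bin/sh
# Pick the newest import file and hand its name to the stored procedure
file=$(ls -t /path/import_*.data | head -1)
sqlplus -s user/password <<EOF
EXEC process_import('$(basename "$file")');
EXIT
EOF

The shell does the directory listing that UTL_FILE cannot, and the PL/SQL side only ever sees a concrete file name.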

Create flat file in SSIS package

I'm working on creating a CSV export from a SQL Server database, and I'm familiar with a process for doing so that, admittedly, I've never completely understood. The process involves creating a "template" file, which defines the columns and structure of the export. Once the "template" file exists, you can use a Data Flow task to fill it and a File System Task to copy it to its final destination with whatever file name you'd like (frequently a date/time stamp).
Is there a reason you can't simply create a file directly, without the intermediate "template" file? I've looked around for a bit and it seems like all the proposed solutions involve connecting to an existing file. I see that there is a "Create File" usage type for a "File" connection manager, but no File System Task operation accepts it; the only file operations available are "Copy", "Delete", "Move", "Rename", and "Set Attributes".
Is there a way to create a file at package run time and fill it?
The whole point of SSIS is to create a data flow with metadata so that the data can be manipulated; if you just want to go straight from database to CSV, you are probably better off using bcp (bulk copy program) from the command line. If you want to include it as part of an SSIS package, just add an Execute Process Task and put the command line in that. You can dynamically change the included columns or the output file by adding an expression to the task. You could also call bcp through T-SQL using an Execute SQL Task.
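For instance, a minimal bcp invocation of the kind described above (the server name, query, and output path are placeholders):

bcp "SELECT FirstName, LastName FROM MyDb.dbo.People" queryout "C:\export\people.csv" -c -t, -S MYSERVER -T

Here -c writes character data, -t, sets a comma as the field terminator, -S names the server, and -T uses a trusted (Windows) connection.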
One other option is to concatenate all your columns in your query, interspersed with comma literals, and output the result to a text file as just one very wide column.
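A sketch of that concatenation approach (the table and column names are invented for illustration; note that non-string columns must be cast and embedded commas handled somehow):

SELECT CONCAT(CAST(OrderId AS varchar(20)), ',',
              REPLACE(CustomerName, ',', ' '), ',',      -- crude guard against embedded commas
              CONVERT(varchar(10), OrderDate, 120)) AS csv_row
FROM dbo.Orders;

The single csv_row column can then feed a one-column flat file destination.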
For documentation on bcp look here
