In SSIS, I need to write whatever items get displayed on the Progress tab into a log. Is there an inbuilt feature to do this? I tried the logging option, but it logs far too many details and the log is huge for a single package run.
I need only what gets logged on the Progress tab to be written to a text file when the package undergoes a scheduled run through SQL Server Agent.
Here is a possible solution.
DTEXEC gives you an option for this at the command line: the /Reporting switch (/Rep for short).
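For example, if the SQL Server Agent job step is changed to a CmdExec (operating system) step, something along these lines should work; the package and log paths here are placeholders:

rem Report only errors, warnings and progress events, and redirect
rem the console output to a text file for the scheduled run.
dtexec /F "C:\Packages\MyPackage.dtsx" /Rep EWP > "C:\Logs\MyPackage.log"

The letters after /Rep select what gets reported (E = errors, W = warnings, P = progress, V = verbose, N = none), so EWP is roughly what the Progress tab shows.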
I found it documented here, and this link says the same.
This is your question on MSDN. It's pretty useless, since they suggest you turn on logging, which you've already done, but I thought I should add it anyway for completeness.
I have an SSIS job that is scheduled to run every 5 minutes via SQL Agent. The job imports the contents of an Excel file into a SQL table. That all works great, but the files get placed there sporadically, and often when the job runs there is no file there at all. The issue is that this causes the job to fail and send a notification email, but I only want to be notified if the job failed while processing a file, not because there was no file there in the first place.
From what I have gathered, I could fix this with a Script Task that checks whether the file is there before the job continues, but I haven't been able to get that to work. Can someone break down how the Script Task works and what sort of script I need to check if a file exists? Or if there is some better way to accomplish what I am trying to do, I am open to that as well!
The errors I get when I tried the Foreach Loop approach are shown in the screenshot attached to the question (validation errors on the Excel Source).
This can be done easily with a Foreach Loop Container in SSIS.
Put simply, the container will check the directory you point it at and perform the tasks within the container for each file found. If no files are found, the contents of the container are never executed, and your job will not fail; it will complete, reporting success.
Check out this great intro blog post for more info.
In the image attached to the question, the specific errors are related to the Excel Source failing validation. When SSIS opens a package for editing or running, the first thing it does is validate that all of the artifacts needed for a successful run are available and conform to the expected shape/API. Since the expected file may not be present, right-click on the Excel Connection Manager and, in the Properties window, change the DelayValidation setting to True. This ensures the connection manager only validates that the resource is available if the package is actually going to use it, i.e. if execution passes into the Foreach Loop Container. You will also need to set DelayValidation to True on your Data Flow Task.
You did not mention what scripting approach you're applying to search for your file. While C# and VB.NET are the typical languages used in a Script Task of this nature, you can also use T-SQL that simply returns a boolean value saved to a user variable (some environments limit the use of C# and VB.NET). You then use that user variable in the control flow to determine whether to import (boolean = 1) or not (boolean = 0).
Take a look at the following link, which shows in detail how to set up the T-SQL script that checks whether or not a file exists.
Check for file exists or not in sql server?
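For reference, a minimal sketch of that T-SQL check (the file path is a placeholder) uses the undocumented but long-standing xp_fileexist procedure:

-- Returns 1 in @FileExists when the file is present, 0 otherwise.
DECLARE @FileExists INT;
EXEC master.dbo.xp_fileexist N'\\fileserver\drop\ClientData.xlsx', @FileExists OUTPUT;
SELECT @FileExists AS FileExists;

An Execute SQL Task can map that single-row result to a user variable, which the precedence constraints then test.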
Take a look at the following link, which shows how to apply a conditional check based on a boolean user variable. This example also shows how to use VB.NET in a Script Task to determine whether the file exists (as an alternative to the aforementioned T-SQL approach).
http://sql-articles.com/articles/bi/file-exists-check-in-ssis/
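For completeness, here is a minimal C# sketch of the same check inside a Script Task. The variable names User::FilePath and User::FileExists are illustrative, and they must be listed in the task's ReadOnlyVariables and ReadWriteVariables respectively:

public void Main()
{
    // Read the path from a package variable and record whether the file exists.
    string path = Dts.Variables["User::FilePath"].Value.ToString();
    Dts.Variables["User::FileExists"].Value = System.IO.File.Exists(path);

    // The task itself always succeeds; the precedence constraint
    // downstream decides what to do with User::FileExists.
    Dts.TaskResult = (int)ScriptResults.Success;
}

A precedence constraint set to "Expression and Constraint" with an expression like @[User::FileExists] == true then gates the import.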
Hope this helps.
With help from this Microsoft link, I am aware of many tools related to SSIS diagnostics:
Event Handlers (in particular, "OnError")
Error Outputs
Operations Reports
SSISDB Views
Logging
Debug Dump Files
I just want to know what is the basic, "go to" approach for (non-production) diagnostics setup with SSIS. I am a developer who WILL have access to the QA and UAT servers where I will be performing diagnostics.
In my first attempt to find the source of an error, I used SSMS to view the operations reports. All I saw was a message directing me elsewhere for details.
I followed those instructions, but all they did was lead me in a circle. The overview lets me open the details, but the details show the same message and send me back to the overview. In short, there is zero error information beyond telling me which task failed within the SSIS package.
I simply want to get to a point where I can at last see SOMETHING about the error(s).
If the answer is that I first need to configure an OnError event in my package, then my next question is: what would the basic, "go to" designer-flow look like for that OnError event?
FYI, this question is similar to "best way of logging in SSIS"
I also noticed an overall strategy for success with SSIS in this answer. The author says:
Instrument your code - have it make log entries, possibly recording diagnostics such as check totals or counts. Without this, troubleshooting is next to impossible. Also, assertion checking is a good way to think of error handling for this (does row count in a equal row count in b, is A:B relationship really 1:1).
Sounds good! But I'd like a more concrete example, particularly one that surfaces the specifics of what errors were generated.
I'm trying to avoid learning ALL the SSIS diagnostic approaches, just for the purpose of picking one good "all around" approach.
Update
Per Nick.McDermaid's suggestion, I ran this against the SSISDB database:
SELECT * FROM [SSISDB].[catalog].[executions] ORDER BY start_time DESC
This shows me the packages that I manually executed, and the timestamps correctly reflect when I ran them. If anything is unusual, it is that the reference_id, reference_type and environment_name columns are NULL. All the other columns have values.
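As a small refinement of that query (assuming the documented shape of the catalog views), filtering on status narrows the list to failed runs:

-- status 4 = failed, 7 = succeeded in catalog.executions.
SELECT execution_id, package_name, status, start_time
FROM SSISDB.catalog.executions
WHERE status = 4
ORDER BY start_time DESC;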
Update #2
I discovered half the answer I'm looking for. The reason no error information is available is that, by default, the SSIS package execution logging level was None; I had to change the logging level.
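Once the logging level is Basic or higher, the errors become queryable from the catalog views. A sketch (the execution_id is a placeholder taken from catalog.executions; message_type 120 is the error message type):

DECLARE @execution_id BIGINT = 12345;  -- placeholder from catalog.executions
SELECT message_time, message_source_name, message
FROM SSISDB.catalog.event_messages
WHERE operation_id = @execution_id
  AND message_type = 120  -- 120 = Error
ORDER BY message_time;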
Nick.McDermaid gave me the rest of the answer by explaining that I don't need to dive into OnError tooling or the SSIS logging provider tooling.
I'm not sure what the issue with your reports is, but in answer to the question "Which SSIS diagnostics should I learn?", I suggest the vanilla ones out of the box.
In other words, use built-in SSIS logging (which does not require any additional code) to log failures. Then use the built-in reports (once you get them working) to check those logs.
Vanilla functionality requires no maintenance. Custom functionality (i.e. filling your packages up with OnError events) requires a lot more maintenance.
You may find situations where you need to learn some of the SSISDB tricks to troubleshoot but in the first instance, try to get everything you can out of the vanilla reports.
If you need to maintain an existing SQL 2012 (or later) system, all of this logging is built in; manual OnError additions are not guaranteed to be there.
The only other thing to be aware of is that Script Tasks never yield informative errors. I actually suggest you avoid the use of Script Tasks in SSIS; I feel that if you have to use one, you might be using the wrong tool.
Adding to the excellent answer of @Nick.McDermaid:
I use SSIS Catalog error reporting. In most cases it is sufficient and has the following functionality for error analysis:
Usually the first or second error message contains meaningful information about the error; the later ones are generic, along the lines of "some error occurred in the data flow".
If you look at the first/second error message in the All Messages report, in the Error Messages section, you will see an Error Context hyperlink. Invoking it shows you the environment, the connection managers and some variables at the moment of the package crash.
Good error analysis is more an approach and practice than a mere tool selection. Here are my recommendations:
SSIS likes to report error codes instead of meaningful explanations, so the Integration Services Error and Message Reference is your friend.
SSIS includes in the error context (see above) dumps of those variables whose IncludeInDebugDump property is set to True.
I build the SQL commands for an Execute SQL Task or a Data Flow source in variables. This lets the error context display the exact SQL command that was executing when the error occurred, provided you set IncludeInDebugDump on those variables; see the sketch after this list.
Structure your variables well. If a variable is used only by one task, declare it at that task's scope; otherwise, a mess of dumped variables will hurt you more than do any good.
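As an illustration of the SQL-command-in-a-variable idea (all names here are made up): a String variable, say User::SqlSelectOrders, with EvaluateAsExpression = True and an expression such as

"SELECT OrderID, Amount FROM dbo.Orders WHERE OrderDate >= '" + (DT_WSTR, 30) @[User::StartDate] + "'"

The Execute SQL Task (SQLSourceType = Variable) or OLE DB Source (data access mode "SQL command from variable") then reads the statement from the variable, and the exact statement shows up in the error-context dump.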
We have a script file called CreateClientDatabase.SQL; double-clicking it opens it in SSMS, where we can change a few parameters and execute it.
Problem:
A few hours or days later we may need to do the same again for another client, but if the original tab in SSMS has not been closed, double-clicking the file will simply bring that tab to the fore and not actually open the file.
So it's easy to assume the script you are now looking at matches the file on disk when it does not, and this can lead to all sorts of issues.
Is there a way round this?
Can SSMS open a second copy of the file, or warn the user that it hasn't actually opened it, much like Excel does?
What you need, I think, is something similar to Excel or Word template files: whenever you open such a file by double-clicking, a new document with the contents of the template is created.
The SSMSBoost add-in (which I develop) has an "Autoreplacements" feature: you can define a "magic token" that is replaced by your script whenever that token is typed. For example, we ship a pre-defined token "sel", which is replaced by "select * from" whenever you type "sel" and press space.
You could associate your script with any word, like "doit", so whenever you visit the next customer, you just open a new query window, type that word plus space, and your script is in the window immediately.
Just to mention: SSMSBoost also allows you to define "favorite" connections, so you can save all your customers' servers in one list and quickly switch between them.
Alternative:
Have a look at SSMS templates (View -> Template Explorer). SSMS allows creating your own templates and opening them by double-clicking their name in Template Explorer. In combination with the SSMSBoost "Preferred Connections" list, you have a good setup to start your work quickly.
Hope this helps.
If the file is opened from Windows Explorer, it opens in another instance of SSMS.
I think what you need is to detect when the SQL script file is changed outside of the SSMS environment:
Make sure that Tools -> Options -> Environment -> Documents -> "Detect when file is changed outside the environment" is checked.
More details can be found here.
I have a set of SSRS reports developed in Visual Studio Data Tools that are used for several clients.
For one particular site, each report works perfectly in 'Preview' mode in Visual Studio; however, when I deploy them and attempt to run them in IE, all of the parameters are disabled/greyed out. I also tried putting in a set of default parameters to see what would happen, and when I click 'Run Report' I do not even get the 'Loading' image; the report is just blank.
I have checked the IE security settings and reset them, and also added the report web page location to trusted sites, which unfortunately hasn't made a difference. One thing I have noticed is that when I try to view Developer Tools in IE, it also just shows a blank white screen.
I've searched online and found a couple of old posts on various websites with a similar issue but none that were ever resolved, and now I am not sure how to proceed.
More info that might be useful:
The client is using IE 11, in a terminal-server-style setup with around 8 or 9 users logged on.
The report I am testing with has two date parameters first, then two multi-valued parameters (I have tried other reports, with the same issue).
The client recently moved servers and the reports worked fine before, although the reports were freshly installed on the new server, so I don't believe any old settings could be causing it.
Any help greatly appreciated :)
In SSRS, report parameters can be inactive when the report is generated from a snapshot. To remove this option:
Navigate to the report in SSRS
"Manage" the report from the drop down.
Select "Processing Options" from the left menu
Is "Render this report from a report snapshot" selected? If so, switch
to "Always run this report with the most recent data" option.
Run the report to determine if parameters are now active.
If this fixes your issue, you may need to switch from the snapshot option to the scheduled cache option, available under the left-menu link "Cache Refresh Options".
For one particular site, each report will work perfectly in 'Preview' mode in Visual Studio, however when I deploy them and attempt to run them in IE all of the parameters are disabled/greyed out.
If that is the case, and a previous version of the report was already on the SSRS report server, what you can do is delete the report and upload it again. There seems to be some kind of cache for the parameters: if you modify their order, some get disabled, and the only way I have found to get rid of this is to delete the old report from the server.
For me this worked:
Open IE
Go to Settings -> Compatibility View Settings
Add the URL of your report server.
You could be experiencing an old bug which was reported a long time ago in a Connect item:
SSRS Weird Parameters Order Bug (parameter grayed out)
The official Microsoft answer on this subject is:
Currently this behavior is per design. However, we will consider this behavior improvement in a future release of Reporting Services. The only workaround at this point is to change the order of the parameters.
Are the parameters in the RDL file set to visible, like below?
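(The original screenshot is gone; as a rough sketch, a visible parameter in the .rdl XML looks something like this, where the parameter name is just an example.)

<ReportParameter Name="StartDate">
  <DataType>DateTime</DataType>
  <Prompt>Start Date</Prompt>
  <!-- Hidden must be false (or absent) for the parameter to be prompted -->
  <Hidden>false</Hidden>
</ReportParameter>

In the Report Data pane of Visual Studio, the same setting is the visibility option ("Visible") under Parameter Properties.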
Just going to add this as an answer, because it solved the issue for me:
Delete the report on the server & deploy again.
What happens is that you completely remove the report and thus also its parameters. When you redeploy a report, the parameter defaults already on the server do not get overwritten, so whatever was set (or missing) at the first deployment sticks around.
In the Release Management for Visual Studio Deployment log, which can be found under the Details button found when viewing the details of a release, there are a number of columns.
The last three of these are Tool Log, Command Output and Status.
I have never seen anything in Tool Log, but I am now in a situation where I have a component running tests using vstest.console.exe.
Under "Command Output" I see the output from this program as it appears in the console window. However, vstest.console.exe can also be instructed to create a test log (a .trx file), which I would like to view under Tool Log, but I have no idea how to make this happen.
How do I get a log to appear under the Tool Log column?
The answer, it seems, is to set the "Log File Name" in the Tool definition to whatever log file you wish to view (in my case, the .trx file).
This obviously requires that you are using a tool that defines this, or a custom one. Fortunately, I'm using a custom tool.
Your tool has to copy or create the .trx file in the tool's working directory (%LOCALAPPDATA%\Temp\RM\T\RM\\).
Don't forget to add the .trx extension to Administration -> Settings -> Approved File Extension List.