shellcheck currently yields a lot of errors when analyzing one of my scripts. To get an overview I'd like to go through each error one by one. Is there a way I can disable all rules but one when using shellcheck?
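For reference, recent ShellCheck versions have an --include option that reports only the listed checks and silences every other rule; the check code and file name below are just examples:

# Report only SC2086 and ignore every other ShellCheck rule
shellcheck --include=SC2086 myscript.sh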
With help from this Microsoft link, I am aware of many tools related to SSIS diagnostics:
Event Handlers (in particular, "OnError")
Error Outputs
Operations Reports
SSISDB Views
Logging
Debug Dump Files
I just want to know the basic, "go-to" approach for (non-production) diagnostics setup with SSIS. I am a developer who WILL have access to the QA and UAT servers where I will be performing diagnostics.
In my first attempt to find the source of an error, I used SSMS to view operational reports. All I saw was this:
I followed the instructions shown above, but all it did was lead me in a circle. The overview allows me to see the details, but the details show the above message and ask me to go back to the overview. In short, there is zero error information beyond telling me which task failed within the SSIS package.
I simply want to get to a point where I can at last see SOMETHING about the error(s).
If the answer is that I first need to configure an OnError event handler in my package, then my next question is: what would the basic, "go-to" designer flow look like for that OnError event?
FYI, this question is similar to "best way of logging in SSIS"
I also noticed an overall strategy for success with SSIS in this answer. The author says:
Instrument your code - have it make log entries, possibly recording diagnostics such as check totals or counts. Without this, troubleshooting is next to impossible. Also, assertion checking is a good way to think of error handling for this (does row count in a equal row count in b, is A:B relationship really 1:1).
Sounds good! But I'd like to have a more concrete example...particularly for feeding me the basics of what specific errors were generated.
I'm trying to avoid learning ALL the SSIS diagnostic approaches, just for the purpose of picking one good "all around" approach.
Update
Per Nick.McDermaid's suggestion, I ran this against the SSISDB database:
SELECT * FROM [SSISDB].[catalog].[executions] ORDER BY start_time DESC
This shows me the packages that I manually executed, and the timestamps correctly reflect when I ran them. If anything is unusual, it is that the reference_id, reference_type and environment_name columns are null. All the other columns have values.
Update #2
I discovered half of the answer I'm looking for. The reason no error information is available is that, by default, the SSIS package execution logging level is "None". I had to change the logging level.
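For reference, one way to set the level per execution from T-SQL looks roughly like this (the folder, project and package names are placeholders; 0 = None, 1 = Basic, 3 = Verbose). The server-wide default can also be changed in the catalog properties, or per run in the Execute Package dialog.

DECLARE @execution_id BIGINT;

-- Create an execution for the package (names below are placeholders)
EXEC [SSISDB].[catalog].[create_execution]
    @folder_name = N'MyFolder',
    @project_name = N'MyProject',
    @package_name = N'MyPackage.dtsx',
    @execution_id = @execution_id OUTPUT;

-- Ask for Basic logging so OnError events end up in the catalog views and reports
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
    @execution_id = @execution_id,
    @object_type = 50,                  -- system parameter
    @parameter_name = N'LOGGING_LEVEL',
    @parameter_value = 1;               -- Basic

EXEC [SSISDB].[catalog].[start_execution] @execution_id = @execution_id;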
Nick.McDermaid gave me the rest of the answer by explaining that I don't need to dive into OnError tooling or SSIS logging provider tooling.
I'm not sure what the issue with your reports is, but in answer to the question "Which SSIS diagnostics should I learn?", I suggest the vanilla ones out of the box.
In other words, use built-in SSIS logging (which does not require any additional code) to log failures. Then use the built-in reports (once you get them working) to check those logs.
Vanilla functionality requires no maintenance. Custom functionality (i.e. filling your packages up with OnError event handlers) requires a lot more maintenance.
You may find situations where you need to learn some of the SSISDB tricks to troubleshoot, but in the first instance try to get everything you can out of the vanilla reports.
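If the reports themselves stay unhelpful, the same log data can be read straight out of the catalog views. For example, something like this pulls the error messages for the most recent execution (rows only appear when the logging level was not None):

-- Error messages for the most recent execution in the SSISDB catalog
SELECT TOP (50)
    em.message_time,
    em.message_source_name,
    em.message
FROM [SSISDB].[catalog].[event_messages] AS em
WHERE em.operation_id = (SELECT MAX(execution_id)
                         FROM [SSISDB].[catalog].[executions])
  AND em.event_name = N'OnError'
ORDER BY em.message_time;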
If you need to maintain an existing SQL Server 2012 or later system, then all of this logging is built in. Manual OnError additions are not guaranteed to be there.
The only other thing to be aware of is that script tasks never yield informative errors. I actually suggest you avoid the use of script tasks in SSIS; I feel that if you have to use a script task, you might be using the wrong tool.
Adding to the excellent answer of @Nick.McDermaid.
I use SSIS Catalog error reporting. In most cases it is sufficient and has the following functionality for error analysis. Emphasis is on the following:
Usually the first or second error message contains meaningful information about the error. The later messages are along the lines of "some error occurred in the dataflow...".
If you look at the first/second error message in the All Messages report, under the Error Messages section, you will see an Error Context hyperlink. Invoking it will show you the environment, connection managers and some variables at the moment of the package crash.
Good error analysis is more an approach and practice than a mere tool selection. Here are my recommendations:
SSIS likes to report an error code instead of a meaningful explanation, so the Integration Services Error and Message Reference is your friend.
SSIS includes in the error context (see above) a dump of those variables which have the Include in ErrorDump property set to true.
I build the SQL commands for an Execute SQL Task or a Data Flow source in variables. This allows the SQL command being executed at the time of the error to be displayed in the error context, provided you set the Include in Dump property on these variables.
Structure your variables well. If some variable is used only in a particular task, declare it on that task. Otherwise a mess of dumped variables will hurt you more than do any good.
I have a T4 template that generates a lot of SQL code, for which I have lots of SQL71502 and SQL71562 warnings.
These warnings are expected and I want them ignored for that specific file.
I tried using the generated file's properties to turn them off. It works, but the "Suppress TSql Warnings" property value gets cleared each time the template runs, so it's pointless.
I don't want to disable these warnings for the whole project, and the pragma instruction isn't supported AFAIK.
So far my only option seems to be the EnvDTE API, which I'd very much like to avoid.
Can anyone help?
How about putting them all in a separate project, disabling the warnings there, and using "same database" references from the main project? (This would be hard if the main project references the generated objects and the generated project references back to the main project, i.e. a circular reference.)
Otherwise there is the DTE API; it isn't that hard to enumerate all project items and set the properties. I can point to a sample if you need one.
Ed
You can ignore the TSql warnings on a per-file basis in the properties dialog of the specific file:
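For reference, that dialog just writes metadata on the file's Build item in the .sqlproj, so the suppression ends up looking roughly like this in the project file (the file path here is only an example, and the delimiter may be a comma or a semicolon depending on the SSDT version):

<ItemGroup>
  <Build Include="Generated\GeneratedTables.sql">
    <!-- Suppress only these warning numbers, and only for this file -->
    <SuppressTSqlWarnings>71502,71562</SuppressTSqlWarnings>
  </Build>
</ItemGroup>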
I just started with the Play framework (2.1), copied the sample project (Zentasks) and am customising it. I removed all of the previous view, controller and model classes. When I run the app, my browser shows an evolution script and tells me I must apply it. But I do not want to create and execute this script, because my database and tables already existed before this app.
In addition, there are still DDL statements in the script that create tables I have already deleted.
I removed the evolutions directory again and again, but the files are auto-generated each time, so that did not work.
I want to understand how this works and how to avoid this annoyance.
Thanks.
There is a commented-out option, evolutionplugin=disabled, in application.conf for this; just uncomment it:
# Evolutions
# ~~~~~
# You can disable evolutions if needed
evolutionplugin=disabled
To make evolutions work again, just comment that line out or set its value to enabled.
I have this kind of rule:
"foo" *> \out do
need something
create "foo" somehow
The target is built correctly, and running the build a second time does not rebuild it.
Then I add a system' call to this rule:
"foo" *> \out do
...
system' something
Running Shake now doesn't rebuild the "foo" target, because no dependencies changed. However, the rule itself did change, so I expected the newly added system' action to change the history of the rule and in turn force a rebuild of "foo", but that wasn't the case.
Usually in autoconf/automake systems, or even in non-trivial Makefiles, the rules depend on the Makefile itself, so that whenever it changes the project is rebuilt.
In Shake I would expect this to work, and to be fine-grained.
In the source code of system' I can't see anything that adds an implicit dependency on the command being run.
Am I doing something wrong? Is it intentional not to support this kind of dependency, or is it simply not implemented?
The behaviour you are seeing is expected, and §6.2 of the ICFP 2012 paper outlines some strategies for coping with it. It would be ideal if each output depended on the rule used to build it, but that requires some equality over rules (which are just functions whose source code is unavailable during execution). One day I hope Shake will work as you describe, but I don't think it's possible at the moment in GHC.
To minimize the number of build system changes, it is best to keep anything you think will change on a regular basis out of the build system. Any list of files (for example which files are required to link a C executable) can be put in configuration files or depended upon properly through the use of the oracle mechanism.
Of course, the rules will still change from time to time, and there are two approaches you can take:
Rebuild everything if anything changes
A simple conservative approach, as adopted by many Makefiles, is to rebuild everything if anything changes. An easy way to implement this strategy is to hash the Shake scripts, and use that as the shakeVersion field. For projects that rebuild reasonably quickly this can work well.
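As a sketch of that approach, assuming the rules live in a single Shakefile.hs and the hashable package is available:

import Development.Shake
import Data.Hashable (hash)

main :: IO ()
main = do
    -- Hash the build script itself; any edit to it changes shakeVersion
    -- and therefore forces a complete rebuild.
    src <- readFile "Shakefile.hs"
    let ver = show (hash src)
    shakeArgs shakeOptions{shakeVersion = ver} $ do
        want ["foo"]
        "foo" *> \out -> do
            -- ... the rule body as before ...
            writeFile' out "built"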
Rebuild with manual control
For larger projects, the build system is often in daily flux, but most parts of the build system have no impact on most rules. For some rules I explicitly need the relevant source files in the rule; for example, if there is a large generator in a single file, I would need that file in the rule producing the generated output. If you also use writeFileChanged to write the generated file, then many modifications to the generator will cause only minimal rebuilding.
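For example, a rule for a generated file might look roughly like this (the paths and the generator invocation are only placeholders, and command needs a reasonably recent Shake):

"_build/Tables.hs" *> \out -> do
    -- Depend on the generator's own sources, so editing the generator
    -- rebuilds its output but nothing else in the project.
    need ["gen/Generator.hs", "gen/tables.spec"]
    Stdout generated <- command [] "runghc" ["gen/Generator.hs", "gen/tables.spec"]
    -- writeFileChanged only rewrites the output when the contents differ,
    -- so downstream rules rebuild only when the generated code changes.
    writeFileChanged out generated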
If the changes are more invasive, I manually increment the shakeVersion field, and force a complete rebuild. In a mature build system with proper dependency tracking, this event may happen only a few times a year.
I am fixing some problems with a legacy system and have run into a snag that I am surprised was not caught sooner. I am running Django 1.3 with Postgres 9.1.3 for this application. The system is a validation system that users must pass before they can use the rest of the system. It uses part of the Django users interface, but mostly it has its own 'Users'.
My problem comes up when I try to give a user their account questions (similar to the security questions you answer if you forget a password on a website). When I try to do that it throws this error:
Database Error at admin/password/user
relation "password_user_answered_questions_id_s" does not exist
LINE 1: SELECT CURRVAL('"password_user_quest...
^
Does anyone know what might cause this error? I have tried resetting the DB (I didn't think it would do anything, but I just wanted to be sure) and have also poked around in the DB using phpPgAdmin, and found that everything else is getting stored correctly except this one relation. It uses a ManyToMany field so that a user can have multiple questions and a question can be used by multiple users.
The reason is most likely that the
relation "password_user_answered_questions_id_s" does not exist
Just like the error message informs us. Are you aware of how PostgreSQL handles identifiers?
Also, sequences are usually named *_seq. Letters missing from the end?
About maximum length of identifiers - I quote the manual from the link above:
The system uses no more than NAMEDATALEN-1 bytes of an identifier;
longer names can be written in commands, but they will be truncated.
By default, NAMEDATALEN is 64 so the maximum identifier length is 63
bytes. If this limit is problematic, it can be raised by changing the
NAMEDATALEN constant in src/include/pg_config_manual.h.
Bold emphasis mine. Seems like you should shorten your identifiers a bit.
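If the offending name comes from Django's auto-generated join table for the ManyToManyField, one way to keep the derived sequence and index names comfortably under the 63-byte limit is to give the join table a shorter explicit name. The model and field names below are guesses based on the error message:

from django.db import models

class User(models.Model):
    # ... other fields ...
    answered_questions = models.ManyToManyField(
        'Question',
        # Short explicit join-table name; the names PostgreSQL derives from
        # it (primary key sequence, indexes, ...) then stay well under the
        # 63-byte identifier limit.
        db_table='password_user_answered_q',
    )

Changing db_table after the fact of course means renaming or recreating the existing join table.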
The problem is you haven't synced your DB, I guess. Please execute these commands (they exist only on Django 1.7+; on Django 1.3 the closest equivalent is python manage.py syncdb):
python manage.py makemigrations myappname
python manage.py migrate myappname