I have a requirement to turn off logging in a MuleSoft flow. I need to do this at the logger level and, if possible, at the HTTP connector level as well. I tried changing INFO to OFF in the log4j2.xml file, but no luck. Which parameters in the log4j2.xml file do I need to update to make this work? Right now I have done it for the asynchronous logger.
Thanks in advance
Please don't do that. I strongly advise against disabling logging. Logging is the number one tool for troubleshooting issues. To reduce logging, it is preferable to raise the level to something like WARN or ERROR. I can't think of a scenario that requires turning it off entirely. If you absolutely need to turn it off, and clearly understand the trade-offs, you need to set each package (or category) to a higher level: WARN, ERROR, or OFF. The top-level packages for Mule and MuleSoft are a good start.
Example:
<!-- DON'T DO THIS! -->
<AsyncLogger name="org.mule" level="ERROR"/>
<AsyncLogger name="com.mulesoft" level="ERROR"/>
You might still find log entries from packages in third-party libraries. You can iteratively raise the levels for those too until nothing else is logged.
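To make that concrete, a sketch of what the Loggers section of log4j2.xml might end up looking like. Note that the HTTP connector's package name varies by Mule version, so treat `org.mule.service.http` as an assumption to verify against your runtime, and the appender name `file` is a placeholder for whatever you defined in your Appenders section:

```xml
<Loggers>
    <!-- DON'T DO THIS! Shown only for completeness. -->
    <AsyncLogger name="org.mule" level="OFF"/>
    <AsyncLogger name="com.mulesoft" level="OFF"/>
    <!-- HTTP connector internals (package name depends on Mule version) -->
    <AsyncLogger name="org.mule.service.http" level="OFF"/>
    <!-- Catch-all for any category not matched above -->
    <AsyncRoot level="ERROR">
        <AppenderRef ref="file"/>
    </AsyncRoot>
</Loggers>
```

Any third-party package that still shows up in the log gets its own `<AsyncLogger>` entry at a higher level.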
Note that if you run into any issues, you will be on your own, with no clues as to what happened.
I am not able to add data to the database even though the database is up.
I have checked it using
vespa-get-cluster-state
The error message that I got is in the image below.
Please let me know what to do to resolve this issue.
You need to ensure that all your nodes agree on the current time, by running NTP.
You'll probably be better off deploying on https://cloud.vespa.ai so you don't need to deal with this yourself.
I have an issue: I need to be able to see the PHP error when debug mode is set to false in the production environment.
I currently see the internal error message, but I would like to see the PHP error in case things break.
How can I do this? I don't want to have DebugKit activated either.
/**
* Debug Level:
*
* Production Mode:
* false: No error messages, errors, or warnings shown.
*
* Development Mode:
* true: Errors and warnings shown.
*/
'debug' => false,
In the Logs dir you can always view the error.log.
I usually output the error.log on a page that has restricted access, because I can't always get on the filesystem, and it's just faster and easier for others as well.
Restricted DebugKit access
Depending on the security requirements of your application, you can implement some form of check to turn on debug mode while in production. For instance, a custom key + IP restriction.
So you can check whether `$_GET['key']` is equal to your key AND the IP matches your machine. If so, turn debug on; otherwise, leave it off. This will allow you to debug your live application much more easily.
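A minimal sketch of that check as a helper function. The query parameter name, the secret, and the trusted IP below are all placeholders you would replace with your own values:

```php
<?php
// Hypothetical helper: enable debug only for a trusted key + IP pair.
// hash_equals() is used instead of == to avoid timing side channels.
function shouldEnableDebug(array $query, string $remoteIp, string $secretKey, string $trustedIp): bool
{
    return isset($query['key'])
        && hash_equals($secretKey, (string) $query['key'])
        && $remoteIp === $trustedIp;
}

// In your bootstrap you might then do something like:
// Configure::write('debug', shouldEnableDebug($_GET, $_SERVER['REMOTE_ADDR'], 'my-secret', '203.0.113.7'));
```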
You are opening yourself up a bit to potential security concerns (though I'm not a good enough hacker to know any particular ones). If you're writing banking-level software, or storing any type of data subject to PCI compliance, you should probably not do this. Otherwise, it's a good solution.
Or, you could simply turn on debug kit if logged in as user with a specific role.
CakePHP Logs
As others have mentioned, you can use the internal Cake Logs via accessing them directly or, as Alex points out in another answer, display the error log on a page with restricted access.
Third Party Logs
Companies like PaperTrailApp make sorting through your logs very nice and easy.
If you want, there is a CakePHP plugin called error email cakephp that I use in my projects. You install it and define what kinds of errors should trigger an email to you. It's very good and works on CakePHP 3. You can find the project on GitHub; it's well documented.
Given help from this Microsoft link, I am aware of many tools related to SSIS diagnostics:
Event Handlers (in particular, "OnError")
Error Outputs
Operations Reports
SSISDB Views
Logging
Debug Dump Files
I just want to know what is the basic, "go to" approach for (non-production) diagnostics setup with SSIS. I am a developer who WILL have access to the QA and UAT servers where I will be performing diagnostics.
In my first attempt to find the source of an error, I used SSMS to view operational reports. All I saw was this:
I followed the instructions shown above, but all it did was lead me in a circle. The overview allows me to see the details, but the details show the above message and ask me to go back to the overview. In short, there is zero error information beyond telling me which task failed within the SSIS package.
I simply want to get to a point where I can at last see SOMETHING about the error(s).
If the answer is that I first need to configure an OnError event in my package, then my next question is: what would the basic, "go to" designer-flow look like for that OnError event?
FYI, this question is similar to "best way of logging in SSIS"
I also noticed an overall strategy for success with SSIS in this answer. The author says:
Instrument your code - have it make log entries, possibly recording diagnostics such as check totals or counts. Without this, troubleshooting is next to impossible. Also, assertion checking is a good way to think of error handling for this (does row count in a equal row count in b, is A:B relationship really 1:1).
Sounds good! But I'd like to have a more concrete example...particularly for feeding me the basics of what specific errors were generated.
I'm trying to avoid learning ALL the SSIS diagnostic approaches, just for the purpose of picking one good "all around" approach.
Update
Per Nick.McDermaid suggestion, in the SSISDB DB I run this:
SELECT * FROM [SSISDB].[catalog].[executions] ORDER BY start_time DESC
This shows me the packages that I manually executed. The timestamps correctly reflect when I ran them. If anything is unusual, it is that the reference_id, reference_type, and environment_name columns are NULL. All the other columns have values.
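To pull the actual error text for one of those runs, a query along these lines against the catalog views should work. The execution_id filter value is whatever the query above returned for your run, and message_type 120 is the documented code for error messages:

```sql
SELECT em.message_time, em.message_source_name, em.message
FROM [SSISDB].[catalog].[event_messages] AS em
WHERE em.operation_id = 12345  -- substitute your execution_id
  AND em.message_type = 120    -- 120 = Error
ORDER BY em.message_time;
```

Note this only returns rows if the execution ran at a logging level that captures errors (Basic or higher).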
Update #2
I discovered half the answer I'm looking for. The reason no error information is available is that, by default, the SSIS package execution logging level is "None". I had to change the logging level.
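For reference, the level can be raised either catalog-wide or per execution; a T-SQL sketch using the SSISDB catalog procedures (logging level values: 0 = None, 1 = Basic, 2 = Performance, 3 = Verbose):

```sql
-- Catalog-wide default:
EXEC [SSISDB].[catalog].[configure_catalog]
     @property_name = N'SERVER_LOGGING_LEVEL', @property_value = 1;

-- Or per execution, after create_execution and before start_execution:
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
     @execution_id, @object_type = 50,
     @parameter_name = N'LOGGING_LEVEL', @parameter_value = 1;
```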
Nick.McDermaid gave me the rest of the answer by explaining that I don't need to dive into OnError tooling or SSIS logging provider tooling.
I'm not sure what the issue with your reports is, but in answer to the question "Which SSIS diagnostics should I learn?", I suggest the vanilla ones out of the box.
In other words use built in SSIS logging (which does not require any additional code) to log failures. Then use the built in reports (once you get them working) to check those logs.
Vanilla functionality requires no maintenance. Custom functionality (i.e., filling your packages up with OnError events) requires a lot more maintenance.
You may find situations where you need to learn some of the SSISDB tricks to troubleshoot but in the first instance, try to get everything you can out of the vanilla reports.
If you need to maintain an existing SQL Server 2012 or later system, then all of this logging is built in. Manual OnError additions are not guaranteed to be there.
The only other thing to be aware of is that script tasks never yield informative errors. I actually suggest you avoid the use of script tasks in SSIS. I feel that if you have to use a script task, you might be using the wrong tool.
Adding to the excellent answer of @Nick.McDermaid.
I use SSIS Catalog error reporting. In most cases it is sufficient, and it has the following functionality for error analysis. Emphasis is on the following:
Usually the first or second error message contains meaningful information about the error; the later ones are generic, like "some error occurred in the data flow...".
If you look at the first or second error message in the Error Messages section of the All Messages report, you will see an Error Context hyperlink. Invoking it will show you the environment, connection managers, and some variables at the moment of the package crash.
Good error analysis is more an approach and practice than a mere tool selection. Here are my recommendations:
SSIS likes to report an error code instead of a meaningful explanation. So, the Integration Services Error and Message Reference is your friend.
SSIS includes in the error context (see above) a dump of those variables which have the Include in ErrorDump property set to true.
I build the SQL commands for SQL Tasks or Data Flow sources in variables. This allows the SQL command being executed at the time of the error to be displayed in the error context, when you set the Include in Dump property on those variables.
Structure your variables well. If some variable is used only in one task, declare it on that task. Otherwise, a mess of dumped variables will hurt you more than it helps.
I'm building a vCloud client application via the REST APIs; however, the documentation is inconsistent and in some cases just wrong and misleading.
All I really need is a solid debug tool or even a log file. Any recommendations?
You already mentioned you have access to the message stream, which is one of the first steps. Typically, if I'm using Apache HttpClient/HttpComponents, I'll increase the log level so it logs the full HTTP requests.
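For example, with HttpClient 4.x routed through a Log4j 1.x backend, wire-level logging of full requests and responses can be enabled with a properties fragment like the following (the logger names come from HttpClient's logging documentation; adapt the syntax to whatever logging backend you actually use):

```
# HTTP protocol headers
log4j.logger.org.apache.http=DEBUG
# Full wire traffic, including bodies (very verbose)
log4j.logger.org.apache.http.wire=DEBUG
```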
My next step is usually to cheat and to log into vCD as a system administrator and see what's going on. When vCD was designed there was a very deliberate decision to not reveal infrastructure level problems to tenants of the cloud (normal org users or org admins), as that would break the cloud abstraction. Sadly, that means as an org-level user you're often going to get "contact your cloud admin" error responses. We are aware that this isn't ideal and try to find ways to make it better when we can (IIRC the new 5.5 release that was announced last month does have some improvements in that area).
The last step is usually to cheat even more and to look at the server side logs (vcloud-container-debug.log, specifically). That usually gives me a better clue as to what went wrong. Of course, you may be unlucky and not have access to the vCD cell machine.
My workaround in the latter two cases is to try the operations via the vCD UI and see (1) if they work as expected and (2) if they do, to check the system state via the API and see if I'm sending the wrong request payloads, etc. because the doc or schema reference may not have been clear enough.
In regards to the documentation, please use the feedback links found on individual doc pages to let us know! Our technical writer reviews all the feedback and tries to address it.
My final suggestion is that you might want to post API questions to the vCloud API community forum VMware has. There are a number of experts (both users and VMware employees) that monitor it and respond to questions.
I'm currently working on a web application which generates daily error (and non error) logs.
The current system outputs a log per task to a text file, and outputs critical errors as well as "start" and "finish" type messages to an email account.
The current workflow is as follows: scour the email inbox for errors, then go and find the .txt file to look at the associated errors and track down the cause.
There are around 30 txt files split across about 5 servers.
This system was set up before me, but I'm looking for any advice on how to deal with the situation.
I have control of the script forming the error logs, so I can do pretty much anything, but I'm lost as to where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS or something?
Are there any external or internal tools I should be using?
Of course you may use SQL Server Reporting Services, or review this comparison table; there are some packages which support SQL Server, but they may be overwhelming for your task.
It's not really clear what your problem is or what you want to do, but if I understand correctly, your biggest problem is that some messages are logged to a log file but others are sent by email. Therefore, there is no single location that has all error messages in it and that makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based then log4net and Enterprise Library are two common frameworks and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language then you can look for the equivalent package: log4j (Java), logging (Python) etc.
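As a sketch of the "errors to a file, critical ones also by email" idea using Python's standard logging module (the SMTP host and email addresses are placeholders, and the file name is arbitrary):

```python
import logging
from logging.handlers import SMTPHandler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Errors and worse all go to one log file.
file_handler = logging.FileHandler("app-errors.log")
file_handler.setLevel(logging.ERROR)
file_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(file_handler)

# Critical errors are additionally emailed (host and addresses are placeholders).
mail_handler = SMTPHandler(
    mailhost="smtp.example.com",
    fromaddr="alerts@example.com",
    toaddrs=["ops@example.com"],
    subject="Critical application error",
)
mail_handler.setLevel(logging.CRITICAL)
logger.addHandler(mail_handler)

logger.error("nightly import failed")  # goes to the file only
# logger.critical(...) would go to both the file and the email handler
```

The equivalent configuration exists in log4net (multiple appenders with level filters) and log4j (multiple appenders with threshold filters); the key point is that severity routing lives in configuration, not in the application code.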