getsentry.banno-tools.com showing on a blacklist? - blacklist

We are having a couple of customers see false positives in regard to https://getsentry.banno-tools.com/api/2/security when they log into Banno, so those clients may not get access without whitelisting the aforementioned host/URL. I suspect that the site/URL has made it onto a blacklist somewhere, but at this time I'm not sure where.

Related

DynamoDB ConditionalCheckFailedException thrown but succeeds

I think I have seen on many occasions that a DynamoDB conditional put throws ConditionalCheckFailedException but still succeeds. Usually in this scenario the request takes quite long (~10 s) to finish, but I can see that the item is updated despite the fact that a ConditionalCheckFailedException is thrown (and only after it took a few seconds).
By the way I don't force any timeout on the DDB request.
Is this a bug, or some DDB conditional put contract that I misunderstand? Has anyone experienced this issue?
Answering this late to inform others:
ConditionalCheckFailedException but the item is persisted:
This typically happens when you save an item to DynamoDB: DynamoDB acknowledges the write request, but the response gets lost on the return path, which can happen for multiple reasons, keeping in mind that DynamoDB is one of the largest distributed systems in the cloud.
This causes the SDK's timeout to be exceeded while awaiting a response, which then triggers an SDK retry. When the write request is retried, the condition now evaluates to false because the item already exists, which in turn throws a ConditionalCheckFailedException, and that can cause confusion.
When I receive a ConditionalCheckFailedException, I typically do a strongly consistent GetItem request for the item to ensure it exists with the values I expect, and move on.
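For what it's worth, a minimal sketch of that pattern, assuming Python and boto3 (the question doesn't say which SDK is in use) with made-up table and attribute names:

# Sketch of the "verify on ConditionalCheckFailedException" pattern described above.
# Python/boto3 is assumed; the table name "orders" and the attributes are illustrative.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def create_order(order_id, payload):
    try:
        dynamodb.put_item(
            TableName="orders",                       # hypothetical table
            Item={"pk": {"S": order_id}, "payload": {"S": payload}},
            ConditionExpression="attribute_not_exists(pk)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise
        # The first attempt may have succeeded even though a retried write reported
        # a condition failure, so read the item back with strong consistency and
        # check whether it already holds the values we tried to write.
        existing = dynamodb.get_item(
            TableName="orders",
            Key={"pk": {"S": order_id}},
            ConsistentRead=True,
        ).get("Item")
        if existing and existing.get("payload", {}).get("S") == payload:
            return  # our write landed; treat the exception as a duplicate ack
        raise       # a genuinely different item exists; the condition failure is real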

How to catch "NRPE unable to read output" when it occurs?

I'm trying to catch the "NRPE: Unable to read output" message from a plugin and send an email when it occurs, and I'm a little bit stuck :). The thing is, different plugins give different return codes when this error occurs:
Return code Service status
0 OK
1 WARNING
2 CRITICAL
3 UNKNOWN
Is there a way either to unify the return codes of all the plugins I use (so that it is always 2 [CRITICAL] when this problem occurs), or any other way to catch those alerts? I want to keep the return codes for other situations as they are (i.e. filesystem /home will be WARNING (return code 1) at 95% and CRITICAL (return code 2) at 98%).
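For illustration, one way to get that "always CRITICAL" behaviour would be a thin wrapper that the NRPE command definition calls instead of the plugin itself. This is only a rough sketch in Python; the plugin path and thresholds below are placeholders, not anything from the question:

#!/usr/bin/env python3
# Rough sketch of a wrapper that forces CRITICAL (exit 2) whenever the wrapped
# plugin produces no usable output, while passing real results through unchanged.
import subprocess
import sys

# Placeholder plugin and options; point this at whatever check you are wrapping.
PLUGIN = ["/usr/lib/nagios/plugins/check_disk", "-w", "5%", "-c", "2%", "-p", "/home"]

try:
    proc = subprocess.run(PLUGIN, capture_output=True, text=True, timeout=30)
except (OSError, subprocess.TimeoutExpired) as exc:
    print("CRITICAL - plugin failed to run: %s" % exc)
    sys.exit(2)

output = proc.stdout.strip()
if not output:
    # This is the case NRPE reports as "Unable to read output".
    print("CRITICAL - plugin returned no output (rc=%d)" % proc.returncode)
    sys.exit(2)

print(output)
sys.exit(proc.returncode)   # normal results keep their original return code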
Most folks would rather not have this error sending alert emails, because it does not represent an actual failed check. Basically it means nothing more than:
The command/plugin (local or remote) was run by NRPE, but
failed to return any usable status and/or text back to NRPE.
This most often means something went wrong with the command/plugin and it hasn't done the job it was expected to perform. You don't want alerts being thrown for checks when the check wasn't actually performed, as this would be very misleading. It's also important to note that the return code isn't even coming from the command/plugin.
In my experience, the number one cause of this error is a bad check. And as the docs for NRPE state, you should run the check (with all its options!) to make sure it runs correctly. Do yourself a favor and test both working AND not-working states. About 75% of the time, this has happened because the check only works correctly when it has OK results, and blows up when something not-OK must be reported.
Another issue that causes these is network glitches. NRPE connects and runs the check, but the connection is closed before any response is seen. Once again, not a true check result.
For a production Nagios monitoring system, these should be very rare errors. If they are happening frequently, then you likely have other issues that need to be fixed.
And as far as I can tell, all built-in Nagios plugins use the exact same set of return codes. Are you certain this isn't a 'custom' check?
OK, I think I've found the solution to my problem: I will try to check nagios.log on each node for those errors.
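A minimal sketch of that log-scanning approach, assuming Python, a typical /var/log/nagios/nagios.log location, and placeholder mail addresses:

# Scan nagios.log for the error and mail the matches.
# Log path, search string, and addresses are assumptions about a typical setup.
import smtplib
from email.message import EmailMessage

LOG = "/var/log/nagios/nagios.log"     # assumed location
NEEDLE = "unable to read output"

with open(LOG) as fh:
    hits = [line for line in fh if NEEDLE in line.lower()]

if hits:
    msg = EmailMessage()
    msg["Subject"] = "NRPE 'unable to read output' seen %d times" % len(hits)
    msg["From"] = "nagios@example.com"   # placeholder addresses
    msg["To"] = "oncall@example.com"
    msg.set_content("".join(hits[-50:]))  # include the most recent matches
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)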

Isolated storage exception in Silverlight

I'm using isolated storage in my Silverlight application to store some information for specific users.
On every login, I check storage space by using
IsolatedStorageFile.GetUserStoreForApplication()
After that I store some information in a local variable and then clear all the storage and get it again by using these lines:
IsolatedStorageFile.GetUserStoreForApplication().Remove();
IsolatedStorageFile.GetUserStoreForApplication();
Sometimes I get an error on IsolatedStorageFile.GetUserStoreForApplication(). The error detail is:
System.IO.IsolatedStorage.IsolatedStorageException was caught
Message=Initialization failed.
StackTrace:
at System.IO.IsolatedStorage.IsolatedStorageFile.FetchOrCreateStore(String groupName, String storeName, IsolatedStorageFile isf)
at System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStore(String group, String id)
at System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication()
The error occurs randomly, but when it happens I lose all my data in storage. I don't know the reason for this error and didn't find any helpful article. I also found many related questions, but my problem is still there.
Edit: I just got to know the reason for this behavior. According to this:
If any of the directories or files in the store are in use, the removal attempt for the store fails. Any subsequent attempts to modify the store throw an IsolatedStorageException exception. In this case, you must ensure that the files or directories are explicitly deleted.
But I didn't find any method to explicitly delete the whole store. Can anyone suggest a solution?

Problem with performance counters on Vista

I'm running into a strange issue on Vista with the performance monitoring API. I'm currently using code that worked fine on XP/2k, based around PdhGetFormattedCounterValue(). I start out using PdhExpandWildCardPath to expand the counters (I'm interested in overall network statistics); the counters I'm looking at are:
\\Network Interface(*)\\Bytes Received/sec
\\Network Interface(*)\\Bytes Sent/sec
\\Processor(_Total)\\% Processor Time
The problem is that on their first call they return PDH_INVALID_DATA. I don't think this is a problem, since if I query again I start getting data without the error. The real problem is this: while the processor time counter works exactly as expected, neither of the network interface counters is returning anything, just 0 all the time. I verified using Perfmon that they are reporting data normally, so I'm at a loss as to what might be the issue. I caught this at MS:
http://support.microsoft.com/?scid=kb%3Ben-us%3B287159&x=11&y=9
But I'm not interested in multi-language for my task, so I don't think this is relevant. I will see if I can come up with some basic code showing exactly what I'm doing, but nothing is returning anything strange, and it worked on XP/2k, so I suspect something changed under the hood. Thanks!
It turns out the issue was that the network interface counters are both wildcards, whereas the Processor one is already rolled up by the performance monitoring. What I didn't realize was that PdhExpandWildCardPath doesn't return something directly usable by PdhAddCounter. By this I mean that if ExpandWildCard returns 3 expanded matches, they come back as null-separated strings. I understood this, but I had assumed that AddCounter would effectively create a counter containing all three. Nope, in reality I needed to break up each path and request it individually from AddCounter, then roll up the results manually when I get them.
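Roughly, the working approach looks like this. The sketch below uses pywin32's win32pdh wrapper instead of the original C/C++ PDH calls, so treat it as an approximation of the idea rather than the poster's actual code:

# Sketch of "expand the wildcard, add each instance separately, sum manually",
# using the pywin32 win32pdh wrapper rather than the raw PDH C API.
import time
import win32pdh

wildcard = r"\Network Interface(*)\Bytes Received/sec"
paths = win32pdh.ExpandCounterPath(wildcard)      # one path per interface instance

query = win32pdh.OpenQuery()
counters = [win32pdh.AddCounter(query, p) for p in paths]   # add each path individually

# Rate counters need two samples; a formatted read after only one sample fails
# (PDH_INVALID_DATA), which matches the behaviour described in the question.
win32pdh.CollectQueryData(query)
time.sleep(1)
win32pdh.CollectQueryData(query)

total = 0.0
for c in counters:
    _type, value = win32pdh.GetFormattedCounterValue(c, win32pdh.PDH_FMT_DOUBLE)
    total += value                                # roll the per-instance values up by hand

print("Bytes Received/sec across all interfaces: %.0f" % total)
win32pdh.CloseQuery(query)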
Hopefully this helps someone else to avoid the same mistake I made with less frustration. ;)

How can I prevent database being written to again when the browser does a reload/back?

I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?
Do not use GET requests to make modifications! Be RESTful; use POST (or PUT) instead, and the browser should warn the user not to reload the request. Redirecting (using an HTTP redirect) to a receipt page with a normal GET request after a POST/PUT request will make it possible to refresh the page without getting warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and therefore you already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash, etc.) when displaying the form, storing it both as a hidden field (right beside the anti cross-site-request token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, you check that the timestamp is the same as the one in the session. If it is, you set the session timestamp to something new and hard to guess (the timestamp concatenated with some secret string, for instance), and then you can save the form data. If someone repeats the request now, you won't find the same value in the session variable and can deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit too harsh unless it's money you're updating. So if your problem is "stupid" users who refresh or click the back button and accidentally repost something, just using POST will remind them not to do that, and redirecting will make it less likely. If you have a problem with malicious users, you should use the timestamp too, although it will confuse users sometimes; and if users are deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if the malicious users just write a script to load the form and submit random garbage automatically. (But cross-site-request protection makes that a lot harder.)
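A compressed sketch of that hidden-token idea, shown with Python/Flask for brevity rather than the question's Perl CGI; the route, the form field, and the save_to_database helper are all illustrative:

# Hidden form token mirrored in the session; consuming it on POST blocks resubmission.
import secrets
from flask import Flask, request, session, render_template_string, abort

app = Flask(__name__)
app.secret_key = "change-me"   # Flask signs its session cookie with this

FORM = ('<form method="post">'
        '<input type="hidden" name="form_token" value="{{ token }}">'
        '<input name="comment"><button>Save</button></form>')

@app.get("/comment")
def show_form():
    token = secrets.token_hex(16)
    session["form_token"] = token                 # session-held copy of the token
    return render_template_string(FORM, token=token)

@app.post("/comment")
def save_comment():
    # The hidden field must match (and consume) the token stored in the session;
    # a reload/back-button resubmission arrives with a token that is no longer active.
    token = request.form.get("form_token")
    if not token or token != session.pop("form_token", None):
        abort(409)
    save_to_database(request.form["comment"])     # hypothetical persistence helper
    return "Saved."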
Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend using session-based transaction tracking of some kind so that if the user ignores the warnings from the browser and resubmits his query your application will prevent duplication of changes to the database. You could include a hidden input in the submission form with value set to a crypto hash and record that hash if the request is submitted and processed without error.
I find it handy to track the number of form submissions the user has performed in their session. Then when rendering the form I create a hidden field that contains that number. If the user then resubmits the form by pressing the back button, it'll submit the old number, and the server can tell that the user has already submitted the form by comparing what's in the session with what the form is saying.
Just my 2 cents.
If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution would be to include some sort of unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID to see if it has already been processed. And each time the form itself is rendered, you just have to make sure you have a unique ID.
First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to be one-time only, e.g. 'make a payment', send a unique token down that gets submitted back with the request. Throw the token out after it comes back, so you can now tell when something is not a valid submission (anything with a token that isn't 'active'). Expire active tokens after X amount of time, e.g. when a user session ends.
(Alternatively, track the tokens that have come back, and if you have received a token before, then the submission is invalid.)
Do a POST every time you alter data, but never return an HTML response from a POST... instead return a redirect to a GET that retrieves the updated data as a confirmation page. That way, there is no worry about them refreshing the page. If they refresh, all that will happen is another retrieve, never a data-altering action.
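For example (again sketched with Python/Flask rather than the question's Perl CGI; the routes and the insert_entry/fetch_entry/render_entry helpers are made up):

# POST -> redirect -> GET: the data-altering request never renders a page directly.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.post("/entries")
def create_entry():
    entry_id = insert_entry(request.form["text"])       # hypothetical DB write
    # Bounce the browser to a GET, so a refresh only repeats the read.
    return redirect(f"/entries/{entry_id}", code=303)    # 303 See Other forces a GET

@app.get("/entries/<int:entry_id>")
def show_entry(entry_id):
    return render_entry(fetch_entry(entry_id))           # hypothetical read + template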
