I need to find all the checks that are currently in a downtime state.
I suspect some checks have been in downtime for months, and we need to identify all of them.
You can use the JSON Query Generator:
http://mynagiosserver/nagios/cgi-bin/statusjson.cgi?query=downtimelist
That will give you a list of everything currently in downtime state.
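If you want to post-process that list (for example, to spot downtimes that have been in place for months), you can pull it with any HTTP client. Here is a minimal sketch in Python with the requests library, assuming basic auth against the CGIs; the hostname, credentials and exact field names are placeholders, and the JSON layout can vary slightly between Nagios versions:

import requests

resp = requests.get(
    "http://mynagiosserver/nagios/cgi-bin/statusjson.cgi",
    params={"query": "downtimelist"},
    auth=("nagiosadmin", "password"),  # placeholder credentials
)
resp.raise_for_status()
# Recent Nagios Core versions nest the result under data -> downtimelist,
# keyed by downtime id; adjust if your version differs.
downtimes = resp.json()["data"]["downtimelist"]
for downtime_id, downtime in downtimes.items():
    print(downtime_id, downtime.get("host_name"), downtime.get("comment"))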
I wish to create a generic component which can save the object name and field names, with their old and new values, in a BigObject.
The brute-force approach is: on every update of each object, get the field API names using describe and compare the old and new values of those fields; if a field has been modified, insert a record into the BigObject.
But that will consume a lot of CPU time, and I am looking for a more efficient way to handle this.
Any suggestions are appreciated.
Well, do you have any code written already? Maybe benchmark it and then see what you can optimise instead of overdesigning it from the start... Keep it simple, write a test harness, and then try to optimise (without breaking the unit tests).
Couple random ideas:
You'd be doing this in a trigger? Then your "describe" could happen only once. You don't need to describe every single field; you need only one describe call, outside the trigger's main loop:
// One describe call gives you every field API name for the object.
Set<String> fieldNames = Account.sObjectType.getDescribe().fields.getMap().keySet();
System.debug(fieldNames);
This will get you "only" field names but that's enough. You don't care whether they're picklists or dates or what. Use that with generic sObject.get('fieldNameHere') and it's a good start.
or maybe without describe at all: sObject's getPopulatedFieldsAsMap() will give you a handy Map which you can easily iterate over and compare (there's a rough sketch of that compare-and-collect idea at the end of this answer).
or JSON.serialize the old and new versions of the object, and if they aren't identical, you know what to do. No idea whether they'll always serialise with the same field order though, so checking whether the maps are identical might be better.
do you really need to hand-craft this field history tracking like that? You have 1M records of free storage, but it could explode really easily in a busier SF org, especially if you have workflows, processes, or other triggers that translate to multiple updates (= multiple trigger runs) in the same transaction. Perhaps normal field history tracking + Chatter feed tracking + even Salesforce Shield (it comes with 60 more tracked fields, I think) would be more sensible for your business needs.
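For what it's worth, the core of the getPopulatedFieldsAsMap / compare-the-maps ideas above is just "walk the fields, compare old vs new, keep the differences". A language-agnostic sketch of that compare-and-collect step, shown in Python only for brevity (in Apex the same loop would sit in a trigger, comparing the Trigger.oldMap and Trigger.newMap versions of each record; the field and record values below are made up):

# Two versions of the same record, as plain field-name -> value maps.
old_record = {"Name": "Acme", "Industry": "Energy", "Phone": "555-0100"}
new_record = {"Name": "Acme", "Industry": "Utilities", "Phone": "555-0100"}

# Collect one change entry per field whose value differs between the versions.
changes = [
    {"field": field, "old": old_record.get(field), "new": new_value}
    for field, new_value in new_record.items()
    if old_record.get(field) != new_value
]
print(changes)  # [{'field': 'Industry', 'old': 'Energy', 'new': 'Utilities'}]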
I've been reading around the forums and documentation, and I can't seem to find anything related to what I am looking for, which is a huge surprise to me as it would seem to be a common requirement, so I suspect that there is a better way of approaching this.
I have a database, which I want to run a SQL Consumer on, and I want to query only records that have been modified since the last time I queried.
It appears that you cannot parameterise a SQL Consumer query, which would seem to be the first hurdle, and secondly, even if I could parameterise the consumer query, I don't appear to be able to store the result between one query and the next.
My assumption is that I would want to store the highest dateModified value, and subsequently query records where the dateModified value is strictly greater than the stored value.
(I realise that this is not foolproof, as there could be millisecond issues, but I can't think of another way of achieving this without changing the application or database.)
The only way I can see of using a SQL Consumer is to store the highest dateModified in a custom table in the system database (which I would rather not change) and include some sort of
WHERE dateModified > interfaceDataTable.lastDateModified
in the SQL Query, and an
UPDATE interfaceDataTable SET lastDateModified = :#$latestDateModifiedValue
in the onConsume SQL.
However, I'd much rather not make any changes to the source database, as that will have further implications for testing etc.
I have the sense I'm barking up the wrong tree here. Is there a better way of approaching this?
Yes, this is currently not supported: camel-sql does not allow dynamic parameters, such as calling a Java bean method, etc.
I have logged a ticket to see if we can implement this: https://issues.apache.org/jira/browse/CAMEL-12734
I am using LDAP_MATCHING_RULE_IN_CHAIN to retrieve all the nested user groups that a user is part of. However, I am facing a performance issue, as the user is located in a deeply nested domain forest.
As a result, I am getting too many user-group entries. To improve performance, is it possible to restrict the number of entries based on the nesting depth? Say, I would like to fetch all user groups the user is part of up to a nesting depth of 3-4.
Server used is: Active Directory (2003/2008).
Please advise.
The search happens recursively, so the only way to make it faster is to limit where you are searching from, i.e. the search base: make it more specific if you can. There is no point searching places where you are sure there will be no results.
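A rough sketch of what that looks like with the Python ldap3 library (the server, bind account and DNs are made-up placeholders; 1.2.840.113556.1.4.1941 is the LDAP_MATCHING_RULE_IN_CHAIN OID). Note that the narrower search base is what actually reduces the work; the matching rule itself always walks the full nesting chain, so there is no per-depth limit to set:

from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc_ldap", password="secret", auto_bind=True)

conn.search(
    # Keep the base as specific as possible, e.g. the OU that actually holds the groups.
    search_base="OU=AppGroups,DC=example,DC=com",
    # All groups that transitively contain this user, via the in-chain matching rule.
    search_filter="(member:1.2.840.113556.1.4.1941:=CN=jdoe,OU=Users,DC=example,DC=com)",
    search_scope=SUBTREE,
    attributes=["cn"],
)
for entry in conn.entries:
    print(entry.cn)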
I'm looking for a way to manually adjust TFS task start dates so that my burndown appears correct.
Essentially the iteration has fixed start/end dates and some user stories did not get filled out until half way through the iteration.
This made the burndown have a bump in the road so it looks like we are below target.
I have full access to the TFS database and am wondering what queries I would need to write to get my tasks backdated to the start of the iteration.
I have read somewhere that it is System.AuthorizedDate that controls the burndown chart.
Any help appreciated.
J
You are correct on System.AuthorizedDate being used.
You won't be able to change System.AuthorizedDate by means of the public API; it won't let you. And you cannot change System.AuthorizedDate by means of SQL update commands and remain in a supported state. Officially, Microsoft does not allow this while still maintaining the ability to support you, unless the SQL changes were made under their guidance, such as through a support incident.
I doubt a support incident with Microsoft will yield the update query, as this is not a defect and, as I explain below, it could put you in a very bad place. Could you create a series of updates on the appropriate tables to backdate System.AuthorizedDate? Without doubt. It might even work, but I am not certain that it would if you dared to try. The reason is that work items receive System.Id numbers sequentially as they are created. I do know that in version control the system expects a higher changeset number to have a later commit date (I can't recall the exact field name) than any lower changeset number, and it would not surprise me if there are similar expectations for work items. You might find that changing the field via SQL produces errors or unexpected outcomes in various places; I can imagine a future upgrade, or even an update, simply bombing out and failing to complete. That's all hypothetical, though, because unless you want your environment in an unsupported state, you would not change it via SQL.
Outside of creating your own burndown that evaluates things differently, I am not aware of a way to meet your goal under those conditions.
I'm learning Django, trying to build an accounting app for keeping track of my expenses, etc.
I created the database with two models, one for accounts and one for operations.
But I have no idea how to keep my balance updated with each operation.
I was thinking that maybe, each time I save a new operation, I could update the balance by overriding the save method of the Operation model? But in that case, if I make a mistake and have to delete an operation, my balance won't be updated, right?
I thought I could also create a BalanceHistory model, with a history of all balances, but then I have the same problem when deleting an operation.
The best option I see would be to update my balance dynamically each time I want to display it, by adding up all the operations at that point, but I don't see how I could do that.
So if anyone has some insight about that, that would be great.
I know it is not a purely Django-related issue, but as I'm working with Django, it would be better if I could find a solution using Django features!
Thanks in advance for your time and your help !
You're right that this is tricky: there isn't a way to guarantee an accurate balance except by dynamically calculating it on every read, based on the entire history of the account.
If you choose to go that way, you can use the Django ORM's aggregation features to calculate a sum of the Operations. For example, if your Operation model had a field called amount (a positive or negative number indicating the change in balance for that operation), you could calculate the balance like:
from django.db.models import Sum
Operation.objects.filter(account=my_account).aggregate(balance=Sum('amount'))
If you didn't want to do that for whatever reason, you could also maintain a running balance using a custom save method as you've described. You could override the delete method, like save, to handle individual item deletions.
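Here's a rough sketch of that save/delete approach, assuming an Account model with a balance field and an Operation model with an amount field and a ForeignKey to Account (the model names come from your description; the exact field names are assumptions):

from django.db import models, transaction
from django.db.models import F

class Account(models.Model):
    name = models.CharField(max_length=100)
    balance = models.DecimalField(max_digits=12, decimal_places=2, default=0)

class Operation(models.Model):
    account = models.ForeignKey(Account, on_delete=models.CASCADE)
    amount = models.DecimalField(max_digits=12, decimal_places=2)

    def save(self, *args, **kwargs):
        # Only adjust the balance for brand-new operations; editing an existing
        # one would also need to back out the previously saved amount.
        is_new = self._state.adding
        with transaction.atomic():
            super().save(*args, **kwargs)
            if is_new:
                Account.objects.filter(pk=self.account_id).update(
                    balance=F("balance") + self.amount
                )

    def delete(self, *args, **kwargs):
        with transaction.atomic():
            Account.objects.filter(pk=self.account_id).update(
                balance=F("balance") - self.amount
            )
            super().delete(*args, **kwargs)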
However, that method will cause problems if you ever use bulk inserts, updates, or deletes, or raw SQL operations, since those will not invoke the save and delete methods. If you do need those features, you might be able to use a hybrid approach: use the save and delete overrides for one-off transactions, and trigger a full recalculation with aggregation periodically to fix errors, or immediately after doing an operation you know will throw the balance off.
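For that periodic fix-up, a small helper along these lines would work (using the same assumed models as above; Sum over an empty queryset returns None, hence the "or 0"):

from django.db.models import Sum

def recalculate_balance(account):
    # Recompute the balance from the full operation history and store it.
    total = account.operation_set.aggregate(total=Sum("amount"))["total"] or 0
    Account.objects.filter(pk=account.pk).update(balance=total)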