build mode - activate logs with a certain level - qooxdoo

I'm using QX 5.0.2.
In build mode, is there a way to (re)activate logging but only show warnings and errors (i.e. set a log level)?
Setting the environment key 'qx.debug' to true seems to reactivate logging, but I can't find the environment key for the log level (to set it, for example, to warning).
Thanks in advance.

I think what you're looking for is the static addFilter method, documented at http://www.qooxdoo.org/devel/api/#qx.log.Logger. It lets you filter messages by the class from which they originate, by log level, and on an all-appenders basis or per appender.
Derrell
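A minimal sketch of what that could look like in the application's main() method (assuming the static qx.log.Logger.setLevel method of the 5.x API; addFilter gives finer per-class/per-appender control):
qx.Class.define("myapp.Application", {
  extend: qx.application.Standalone,
  members: {
    main: function() {
      this.base(arguments);
      // Only meaningful if logging code is kept in the build (qx.debug = true)
      qx.log.appender.Native;          // register a console appender
      qx.log.Logger.setLevel("warn");  // let only "warn" and "error" through
      // ...rest of the application...
    }
  }
});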

Related

Using LDAP template to find certificate

Our organization stores signing certificates in Active Directory. We are using anonymous bind to search for them at a base DN (e.g. OU=MY ORG,dc=mydc,dc=org). I have been trying to use the Spring LdapTemplate to look them up, but no matter what method I use, I get the cryptic InterruptedNamingException.
Assuming a cert subject of cn=mycert.myorg.com
My code looks like this:
LdapContextSource contextSource = new LdapContextSource();
contextSource.setUrl(String.format(LDAP_URL_FORMAT, ldapCertStoreParameters.getServerName(),
        ldapCertStoreParameters.getPort()));
contextSource.setBase(ldapCertStoreParameters.getBaseDn());
contextSource.setAnonymousReadOnly(true);
contextSource.afterPropertiesSet();
LdapTemplate ldapTemplate = new LdapTemplate(contextSource);
ldapTemplate.setIgnorePartialResultException(true);
ldapTemplate.afterPropertiesSet();
X500Principal principal = x509CertSelector.getSubject();
Object obj = ldapTemplate.lookup(new LdapName(principal.getName()));
The X500 principal's name is the whole DN: cn=mycert.myorg.com,OU=MY ORG,dc=mydc,dc=org
I have also tried the search using just the cn.
We have verified that the DN exists on the server using Apache Directory Studio.
• I would suggest removing that call altogether, or setting the 'userSearchBase' to an empty String (""), as in the example in the community thread below:
Configure Spring security for Ldap connection
As in 'AbstractContextSource', set the base suffix from which all operations should originate. If a base suffix is set, you will not have to (and, indeed, must not) specify the full distinguished names in any operations performed. Since you specified the full DN for the userDN/filter, you must not specify the base.
AD servers are apparently unable to handle referrals automatically, which causes a ‘PartialResultException’ to be thrown whenever a referral is encountered in a search. To avoid this, set the ‘ignorePartialResultException’ property to true. There is currently no way of manually handling these referrals in the form of ‘ReferralException’, i.e., either you get the exception (and your results are lost) or all referrals are ignored (if the server is unable to handle them properly). Neither is there any simple way to get notified that a ‘PartialResultException’ has been ignored.
For more details regarding the LdapTemplate search for certificates stored in Active Directory, refer to the link below:
https://docs.spring.io/spring-ldap/docs/current/apidocs/org/springframework/ldap/core/LdapTemplate.html
• Also, see the documentation below for configuring a Spring Boot LdapTemplate with certificates stored in Active Directory:
https://www.baeldung.com/x-509-authentication-in-spring-security
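A minimal sketch of one consistent combination following the advice above: keep the base suffix and therefore look up a DN relative to it (the URL and DNs are placeholders taken from the question):
import javax.naming.ldap.LdapName;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class CertLookup {
    public static Object lookupCert() throws Exception {
        LdapContextSource contextSource = new LdapContextSource();
        contextSource.setUrl("ldap://mydc.myorg.com:389");   // placeholder
        contextSource.setBase("OU=MY ORG,dc=mydc,dc=org");   // base suffix
        contextSource.setAnonymousReadOnly(true);
        contextSource.afterPropertiesSet();

        LdapTemplate ldapTemplate = new LdapTemplate(contextSource);
        ldapTemplate.setIgnorePartialResultException(true);  // AD referrals
        ldapTemplate.afterPropertiesSet();

        // Relative to the configured base, NOT the full DN:
        return ldapTemplate.lookup(new LdapName("cn=mycert.myorg.com"));
    }
}
The alternative is to leave setBase() out entirely and pass the full DN to lookup(); mixing the two is what makes the full-DN lookup fail.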

Can I send an alert when a message is published to a pubsub topic?

We are using pubsub & a cloud function to process a stream of incoming data. I am setting up a dead letter topic to handle cases where a message cannot be processed, as described at Cloud Pub/Sub > Guides > Handling message failures.
I've configured a subscription on the dead-letter topic to retain messages for 7 days; we're doing this using Terraform:
resource "google_pubsub_subscription" "dead_letter_monitoring" {
project = var.project_id
name = "var.dead_letter_sub_name
topic = google_pubsub_topic.dead_letter.name
expiration_policy { ttl = "" }
message_retention_duration = "604800s" # 7 days
retain_acked_messages = true
ack_deadline_seconds = 600
}
We've tested our cloud function thoroughly, so our expectation is that messages will appear on this dead-letter topic very rarely, perhaps never. Nevertheless, we're putting it in place just to make sure that we catch any anomalies.
Given how rarely we expect messages to appear on the dead-letter topic, we need to set up an alert that sends an email when such a message appears. Is it possible to do this? I've looked through the alerts one can create at https://console.cloud.google.com/monitoring/alerting/policies/create, but I didn't see anything that could accomplish this.
I know that I could write a cloud function to consume a message from the subscription and act upon it accordingly; however, I'd rather not have to do that, as a monitoring alert feels like a much more elegant way of achieving this.
Is this possible?
Yes, you can use Cloud Monitoring for that. Create a new alerting policy with the following configuration:
Select the Pub/Sub Topic resource type and the Published message metric. Observe the value every minute and count it (the aligner, in the advanced options). Then configure the condition so that the alert is raised when the most recent value is above 0.
To restrict the alert to your topic, add a filter on topic_id with your topic's name.
Then configure the alert to send an email. It should work!
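Since the dead-letter subscription is already managed in Terraform, the same policy can be sketched there too. This is only a sketch: the metric name (pubsub.googleapis.com/topic/send_message_operation_count) and the notification e-mail address are assumptions to verify against your project.
resource "google_monitoring_notification_channel" "dead_letter_email" {
  display_name = "Dead-letter alerts"
  type         = "email"
  labels = {
    email_address = "ops@example.com" # placeholder
  }
}

resource "google_monitoring_alert_policy" "dead_letter_message" {
  display_name = "Message published to dead-letter topic"
  combiner     = "OR"

  conditions {
    display_name = "Dead-letter publish count > 0"
    condition_threshold {
      # Count publish operations on the dead-letter topic, aligned per minute
      filter          = "metric.type=\"pubsub.googleapis.com/topic/send_message_operation_count\" AND resource.type=\"pubsub_topic\" AND resource.label.topic_id=\"${google_pubsub_topic.dead_letter.name}\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0
      duration        = "0s"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_COUNT"
      }
    }
  }

  notification_channels = [google_monitoring_notification_channel.dead_letter_email.id]
}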

Experience Analytics stopped working with error related to Sitecore.Analytics.OmniChannel

Experience Analytics stopped working and is now not showing any interactions. Upon checking the logs I found the exception below, but I have not been able to find a working solution. If anyone has faced a similar issue, please help.
ERROR [Experience Analytics]: SegmentProcessor failed to process interaction '00d87833-db45-0000-0000-05d25540e158' segment '2db07a51-fad7-4ede-b727-bd49ebb9d6f2' - DescriptorId : 2f421912-f1b3-49d8-82b8-50a64c80e4e3
Sitecore.XConnect.Segmentation.ExpressionBuilder.PredicateDescriptorException: No known predicate type could be determined from 'Sitecore.Analytics.OmniChannel.Conditions.Channel.CurrentInteractionIsOnChannelCondition,Sitecore.Analytics.OmniChannel' specified in the definition item (Id = '2f421912-f1b3-49d8-82b8-50a64c80e4e3', db = 'master') : Could not load type 'Sitecore.Analytics.OmniChannel.Conditions.Channel.CurrentInteractionIsOnChannelCondition' from assembly 'Sitecore.Analytics.OmniChannel'
Here the DescriptorId is a Sitecore rule located at the path /sitecore/system/Settings/Rules/Definitions/Elements/Channel/Current Interaction is on channel condition
I am using Sitecore 9.1.
I haven't made any changes to the above system item, and no customization has been made.
Any suggestions, please?

Salesforce Tooling API - Deactivate Trigger

I am attempting to deactivate triggers using the Tooling API. I have done this successfully in a developer org, but was unable to do it in a real org. Is this a Salesforce Tooling API bug?
Here is the basis of the algorithm:
Create a MetadataContainer with a unique Name.
Save the MetadataContainer.
Create an ApexTriggerMember, setting the Body, MetadataContainerId, ContentEntityId, and Metadata [apiVersion=33.0 packageVersions=[] status="Inactive" urls=nil].
Modify Metadata["status"] = "Inactive".
Save the ApexTriggerMember.
Create/save a ContainerAsyncRequest.
Monitor the container until completed.
Display errors if appropriate.
In the sandbox, I have confirmed, after requerying the ApexTriggerMember, that the read-only field "Content" looks appropriate. I also confirmed that the MetadataContainerId now points to a ContainerAsyncRequest that has a State of "Completed".
Here are my results; it appears to be a success, but the ApexTrigger is never deactivated:
ContentEntityId = 01q.............[The ApexTrigger I want deactivated]
Content="<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<ApexTrigger xmlns=\"urn:metadata.tooling.soap.sforce.com\">
<apiVersion>33.0</apiVersion>
<status>Inactive</status>
</ApexTrigger>"
Metadata={apiVersion=33.0 packageVersions=nil status="Inactive" urls=nil> attributes= {type="ApexTriggerMember"
url="/services/data/v33.0/tooling/sobjects/ ApexTriggerMember/401L0000000DCI8IAO"
}
}
I think you need to deploy the inactive Trigger from Sandbox to Production. You can't simply deactivate the Trigger in Production. This is true even in the UI.
There are other options, such as using a Custom Setting or Metadata Type to store a Run/Don't Run value. You would query that value in the Trigger to decide whether or not to run it.
https://developer.salesforce.com/forums/?id=906F0000000MJM9IAO
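A minimal sketch of the Custom Setting approach described above (the setting name Trigger_Settings__c and the checkbox field Run_Triggers__c are hypothetical; use whatever hierarchy Custom Setting you create):
trigger AccountTrigger on Account (before insert, before update) {
    // Hypothetical hierarchy Custom Setting acting as a kill switch
    Trigger_Settings__c settings = Trigger_Settings__c.getOrgDefaults();
    if (settings != null && settings.Run_Triggers__c == false) {
        return; // trigger is "deactivated" via configuration, no deployment needed
    }
    // ...normal trigger logic...
}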

PyroCMS / Codeigniter : too many session entries in db

I'm using the PyroCMS / CodeIgniter combo for a small website.
After adding some content, I checked the DB and saw the following:
Is this normal behaviour? Multiple session_ids for one user with the same IP?
I can't imagine that this is correct.
My session config looks like this:
$config['sess_cookie_name'] = 'pyrocms' . (ENVIRONMENT !== 'production' ? '_' . ENVIRONMENT : '');
$config['sess_expiration'] = 14400;
$config['sess_expire_on_close'] = true;
$config['sess_encrypt_cookie'] = true;
$config['sess_use_database'] = true;
// don't change anything but the 'ci_sessions' part of this. The MSM depends on the 'default_' prefix
$config['sess_table_name'] = 'default_ci_sessions';
$config['sess_match_ip'] = true;
$config['sess_match_useragent'] = true;
$config['sess_time_to_update'] = 300;
I did not change one line of code affecting the session class or anything like that.
The red rows belong to a 15-minute cron job; this is fine, I think.
Every time I refresh the page, two or three new session entries are added...
Yes, this is normal. The CI session class automatically generates a new ID periodically. (Every 5 minutes, by default.) This is part of the security inherent in using CI sessions instead of native PHP sessions. Garbage collection will take care of this, you do not need to do anything.
You can read more about the session id behavior in the CI manual. This is an excerpt copied from that page.
The user's unique Session ID (this is a statistically random string
with very strong entropy, hashed with MD5 for portability, and
regenerated (by default) every five minutes)
This behavior is by design. There is nothing to fix. The session class has built-in garbage collection that deletes old entries as needed. I have had many projects using CodeIgniter for several years; this is what it does.
If it really bothers you, you can alter the timeout in the main CI config file. Change the line
$config['sess_time_to_update'] = 300 (the 5 minute refresh period)
to a number greater than
$config['sess_expiration'] (default 7200)
This will cause the session to time out before it is regenerated. This is inherently less secure in theory, but unless you are transacting sensitive data, it is probably irrelevant in practice.
But again, this is by design as part of the many layers of CI sessions. These and other features are what make it better than PHP native sessions. You can turn on profiling and see that the overhead for these queries is negligible, especially in light of all the other optimizations the framework provides.
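A minimal sketch of that change, using the asker's values (the numbers are only illustrative; any sess_time_to_update larger than sess_expiration has the same effect):
// Keep the session lifetime as-is...
$config['sess_expiration'] = 14400;
// ...and make the regeneration interval longer than the lifetime,
// so the session ID is never rotated while the session is still valid.
$config['sess_time_to_update'] = 14401;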
