GOOGAPPUID cookie algorithm - google-app-engine

From Google's documentation (https://cloud.google.com/appengine/docs/developers-console/#traffic-splitting), the GOOGAPPUID value should be in the range 0-999.
In practice, however, its value is a string, such as: xCgkQ3wMg28WhrgU.
I want to set the cookie so that my requests are served by a specific version of my GAE app.
Does anyone know how the GOOGAPPUID value is generated?
Update
Just found a trick.
The cookie's value is not random; I suspect it is some kind of hash, and the cookie value for each version is fixed.
So, by making several requests and saving the cookie value observed for each version, I can force access to a particular version by setting the cookie to the value corresponding to that version.
For example, suppose that I have 2 versions A and B. After trying, I see that GOOGAPPUID is xAAAA for version A, and xBBBB for version B.
Now, if I want to access version A, just change GOOGAPPUID to xAAAA.
Any idea?
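The trick above can be sketched as a small lookup: record the cookie value observed for each deployed version, then build the Cookie header that pins traffic to the version you want. All version names and cookie values below are hypothetical placeholders, not real GOOGAPPUID values.

```java
import java.util.Map;

public class GoogAppUidPinning {
    // Hypothetical placeholders: record the real GOOGAPPUID you observe
    // for each deployed version and store it here.
    static final Map<String, String> COOKIE_BY_VERSION = Map.of(
            "version-a", "xAAAA",
            "version-b", "xBBBB");

    // Build the Cookie request header that pins traffic to the given version.
    static String cookieHeaderFor(String version) {
        String value = COOKIE_BY_VERSION.get(version);
        if (value == null) {
            throw new IllegalArgumentException("no recorded cookie for " + version);
        }
        return "GOOGAPPUID=" + value;
    }

    public static void main(String[] args) {
        System.out.println(cookieHeaderFor("version-a")); // GOOGAPPUID=xAAAA
    }
}
```

Note this only works as long as the app's deployed versions (and whatever hashing scheme produces the cookie values) stay stable; a redeploy may invalidate the recorded values.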

Related

Istio: How to do canary deployment

Currently there are three possible values we can receive in the header: A1, A2, and A3, and we have a separate service running for each value: Service-A1, Service-A2, and Service-A3.
Right now routing is done based on the header value: if the value in the header is A1, the request goes to Service-A1; if it is A2, it goes to Service-A2; and likewise for A3.
Now we have a new service, Service-MultiSupport, which can handle all three values. We want to deploy this new service as a canary, so that 20% of traffic goes to Service-MultiSupport and the remaining 80% continues to be routed by header as it is today.
Can anyone guide me on how to achieve this canary deployment using Istio? I am new to Istio; I tried searching online but couldn't find a proper answer.
Please help if you can.
Thanks in advance.
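One way to express the setup described above is an Istio VirtualService that keeps the existing header match per variant, but splits each matched route 80/20 between the old service and Service-MultiSupport. Everything below is a hedged sketch: the host name, the header key (`x-variant`), and the service names are assumptions to adapt to your mesh (only A1 and A2 are shown; A3 follows the same pattern).

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: multi-support-canary   # hypothetical name
spec:
  hosts:
    - myapp.example.com        # assumed host
  http:
    - match:
        - headers:
            x-variant:         # assumed header key
              exact: A1
      route:
        - destination:
            host: service-a1
          weight: 80
        - destination:
            host: service-multisupport
          weight: 20
    - match:
        - headers:
            x-variant:
              exact: A2
      route:
        - destination:
            host: service-a2
          weight: 80
        - destination:
            host: service-multisupport
          weight: 20
```

The key point is that `weight` splits apply per matched route, so the header-based dispatch stays intact while each variant sends 20% of its traffic to the canary.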

AssertionConsumerServiceURL from AuthnRequest not from Config

Noticed that the Saml2AuthnResponse Destination is set based on relyingParty.SingleSignOnDestination, which is retrieved from a "configuration" (a hardcoded relyingParties array).
I think the Destination should be based on what is set in the AuthnRequest (samlp:AuthnRequest -> AssertionConsumerServiceURL), and use the relyingParty destination perhaps as a fallback if it's missing from the AuthnRequest; from what I see, though, every AuthnRequest contains the ACS URL.
Or is there a reason why it is implemented this way?
Thanks
Part of the security model is to reply only to known URLs/domains. Therefore it is important to configure relyingParty.SingleSignOnDestination for each relying party.
To have a dynamic response URL you can extend the code to verify that the authnRequest.AssertionConsumerServiceUrl starts with the value in relyingParty.SingleSignOnDestination.
E.g. the value in relyingParty.SingleSignOnDestination could be "https://somedomain.com", thereby accepting different authnRequest.AssertionConsumerServiceUrl values like "https://somedomain.com/auth/AssertionConsumerService" or "https://somedomain.com/acs".
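The prefix check described above can be sketched as follows. The ITfoxtec library itself is .NET; this is a language-agnostic illustration in Java, and the method name is hypothetical. One subtlety worth encoding: a naive startsWith check would also accept "https://somedomain.com.evil.com/acs", so the configured base should end at a path boundary.

```java
public class AcsUrlCheck {
    // Returns true when the requested ACS URL sits under the configured
    // SSO destination. Hypothetical helper, not the library's actual API.
    static boolean isAllowedAcsUrl(String acsUrl, String configuredDestination) {
        // Normalize so the prefix ends at a path boundary; otherwise
        // "https://somedomain.com.evil.com/acs" would pass a naive
        // startsWith("https://somedomain.com") check.
        String base = configuredDestination.endsWith("/")
                ? configuredDestination
                : configuredDestination + "/";
        return acsUrl.equals(configuredDestination) || acsUrl.startsWith(base);
    }

    public static void main(String[] args) {
        System.out.println(isAllowedAcsUrl(
                "https://somedomain.com/auth/AssertionConsumerService",
                "https://somedomain.com")); // true
    }
}
```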

max-pool-size is invalid in combination with derive-size

For the last couple of days I've been battling an issue which I believe stems from a change in the Thorntail source code; unfortunately, this code doesn't appear to be publicly available.
The error I’ve been receiving is this:
"WFLYCTL0105: max-pool-size is invalid in combination with derive-size".
Previously you could simply leave "derive-size" out of the configuration without any issue. Now, however, whenever I include "max-pool-size", it fails with the above error no matter how I combine it with "derive-size".
From the latest Thorntail documentation:
Specifies if and what the max pool size should be derived from. An
undefined value (or the deprecated value 'none' which is converted to
undefined) indicates that the explicit value of max-pool-size should
be used.
This is what I had previously in WildFly project-defaults.yml which worked perfectly fine:
ejb3:
  default-resource-adapter-name: activemq-rar.rar
  default-mdb-instance-pool: mdb-strict-max-pool
  strict-max-bean-instance-pools:
    mdb-strict-max-pool:
      max-pool-size: 1
Any ideas or examples would be greatly appreciated.
More information added in response to questions:
The project was updated from using WildFly Swarm 2018.4.1 to use Thorntail 2.2.0.Final.
The code that appears to have changed in Thorntail is below:
OLD code:
https://github.com/stuartwdouglas/wildfly-swarm-core/blob/master/ejb/api/src/main/java/org/wildfly/swarm/ejb/EJBFraction.java
.strictMaxBeanInstancePool(new StrictMaxBeanInstancePool("mdb-strict-max-pool").maxPoolSize(20).timeout(5L).timeoutUnit(StrictMaxBeanInstancePool.TimeoutUnit.MINUTES))
New Code:
https://github.com/thorntail/thorntail/blob/802e785fdd515ecc1b52b22a64a6ff9338dace29/fractions/javaee/ejb/src/main/java/org/wildfly/swarm/ejb/EJBFraction.java
.strictMaxBeanInstancePool(new StrictMaxBeanInstancePool("mdb-strict-max-pool").deriveSize(StrictMaxBeanInstancePool.DeriveSize.FROM_CPU_COUNT).timeout(5L).timeoutUnit(StrictMaxBeanInstancePool.TimeoutUnit.MINUTES))
If anyone has a link to the above source code, that would be great. The only links I can find appear to be from JBoss, so it looks like the code was ported across and not made publicly available.
After the question update: the default configuration of a couple of fractions was changed to better align with default configuration in WildFly 11. You can configure derive-size: null and then the max-pool-size should take effect.
Something like:
ejb3:
  default-resource-adapter-name: activemq-rar.rar
  default-mdb-instance-pool: mdb-strict-max-pool
  strict-max-bean-instance-pools:
    mdb-strict-max-pool:
      derive-size: null
      max-pool-size: 1
(Note: previously, this answer recommended setting derive-size: none, but that doesn't work. After the discussion in comments, I changed the answer to recommend derive-size: null, which does work.)

How do I find the list of bug fixes in each platform release of Salesforce.com?

Recently we have been struggling after a change made in the SFDC platform, which seems to have been done as a bug fix.
Is there any place where I can find the list of bugs, and their fixes, that were deployed into my environment?
Specifically, we are having a problem with point 13 described on this site (http://salesforceapexcodecorner.blogspot.com/2011/10/new-release-winter-12-in-apex.html):
String Conversion of Number Fields
Previously, when String.valueOf was called with a field of type Number
of an sObject, it incorrectly treated the number field as a Decimal
when converting it to a String and used the String.valueOf(Decimal d)
method to perform the conversion to a String. Apex now correctly
converts a number field to a Double before performing the conversion
and uses the corresponding String.valueOf(Double d) method to convert
the Double value to a String. One side effect of this change is that
converted String values of number fields that have no decimal fraction
now have a decimal point (.0) in them where they didn't before.
Unfortunately I can't find any official info about this...
Thanks,
Łukasz
The most extensive documentation of changes to the salesforce platform are in the release notes. Here are the release notes for Winter 12 (v24), but you can just search the web for "salesforce release notes" and you'll find what you're looking for.
If you are having trouble with a particular change that they've made, you should consider changing the version settings of the class or page in question back to the previous version that was working for you while you sort out the issue. You can change the version number from Setup > Develop > Classes > Version Settings.
If you would like help with the specific issue that you referenced, post more info about it and we'll see what we can do.
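The side effect the release note describes can be reproduced outside Apex. Apex is not Java, but Java's String.valueOf draws the same distinction between a floating-point Double (which keeps a ".0" fraction) and a decimal type with scale 0 (which does not); this sketch just illustrates that difference.

```java
import java.math.BigDecimal;

public class NumberToStringDemo {
    public static void main(String[] args) {
        // Converting via Double keeps the trailing fraction, as the
        // release note describes for number fields after Winter '12.
        String viaDouble = String.valueOf(Double.valueOf("1"));  // "1.0"
        // A decimal value with scale 0 has no fraction in string form,
        // matching the pre-change behavior.
        String viaDecimal = new BigDecimal("1").toString();      // "1"
        System.out.println(viaDouble + " vs " + viaDecimal);
    }
}
```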

How does features manage content type field changes?

I'm having fun with a change to a content type field (from a node reference to a text field) which results in an error when a recreated feature is merged. The error is
FieldException: Cannot change an existing field's type. in field_update_field() (line 234 of /var/www/htdocs/modules/field/field.crud.inc)
At the moment, this only affects a merge back into a developer's workspace, and the staging environment is a clean build from Git, so it is unaffected. But it raises an early flag in terms of defining an update process for when it goes to production.
When in production, I assume that it will be a matter of managing an export of each instance of that content type, remove the content type, install the recreated feature, migrate the exported data into the refactored content type and then apply any tests that may be defined for that change.
What is the recommended best practice process, i.e. the standard to follow to get it right at the outset?
Many thanks in advance
The best way is to tag your feature with a version.
First version: your old field and its data.
Second version: your old field definition plus the new one. In this version you can migrate the data contained in the old field within a hook_update_N().
Third version: simply remove the old field definition.
I hope this answers your question as you expected.
