Mule Salesforce Connector TLS 1.1 upgrade Issue - salesforce

I am working on a POC that connects to a Salesforce account. The Mule version is 6.3.2 and the Salesforce version is 6.3.2. Until two days ago it was working fine.
I learned that Salesforce upgraded TLS from 1.0 to 1.1 last weekend. When I test my flow I get the exception below:
Root Exception stack trace:
[UnexpectedErrorFault [ApiFault exceptionCode='UNSUPPORTED_CLIENT'
exceptionMessage='TLS 1.0 has been disabled in this organization. Please use TLS 1.1 or higher when connecting to Salesforce using https.'
]
]
The Mule documentation says that Salesforce connector 7.1.2 addresses this issue, so I updated my connector in Studio and retried the scenario, but it still does not work.
Can someone help me out with this?
Regards
Vikram

I previously had to set the following property in the application settings:
https.protocols=TLSv1.1,TLSv1.2
And -Dhttps.protocols=TLSv1.1,TLSv1.2 in my wrapper.conf for Mule standalone.
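In wrapper.conf that system property goes in as an additional JVM argument, roughly like this (the index 4 is just an example; use the next free wrapper.java.additional.<n> slot in your file):
wrapper.java.additional.4=-Dhttps.protocols=TLSv1.1,TLSv1.2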

You can put your configuration in the tls-default.conf file in the MULE_ESB/conf/ folder,
and then set the values inside like below:
enabledProtocols=TLSv1.1, TLSv1.2
enabledCipherSuites=TLS_KRB5_WITH_3DES_EDE_CBC_MD5, TLS_KRB5_WITH_RC4_128_SHA, SSL_DH_anon_WITH_DES_CBC_SHA
Or, if you want to test from Anypoint Studio, just create tls-default.conf and put it under your resources folder.
One more thing I can add: run your destination URL through https://www.ssllabs.com/ssltest/ to make sure TLSv1.1 (and the cipher suites you need) are enabled by your endpoint.
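You can also check which protocol versions the JVM on your side supports and offers by default; a small diagnostic sketch using only the standard JSSE API (the class name is just for the example, nothing Mule-specific):
import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // Every protocol version the default context could enable at all
        System.out.println("Supported: " + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        // The protocol versions enabled by default on sockets created from this context
        System.out.println("Enabled by default: " + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
    }
}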

A similar question is answered at https://forums.mulesoft.com/questions/41012/getting-error-when-hitting-a-rest-api-via-https.html#answer-43960
Below is the answer I posted there.
I resolved it on my system.
If it is not working in the runtime that is bundled with Anypoint Studio, follow the steps below.
Navigate to the Anypoint Studio installation directory.
Search for "tls-default.conf" in that folder. This will show you the file for each of the runtimes you have installed.
There will be a property "enabledProtocols"; make sure it contains the required TLS versions, as below:
enabledProtocols=TLSv1,TLSv1.1,TLSv1.2
The above should also apply to CloudHub (most of the time it is already enabled there) and to on-premise systems.

Salesforce is now disabling TLS 1.0, forcing TLS 1.1 or higher.
For Java versions >= 1.8 this is not a problem, but for earlier releases you will want to set the SSLContext. This solution worked for me:
// Requires: import javax.net.ssl.SSLContext; and import java.security.KeyManagementException;
// SSLContext.getInstance() also throws the checked java.security.NoSuchAlgorithmException.
private static final String SSL_VERSION_TO_USE_FOR_SALESFORCE_LOGIN = "TLSv1.2";

// Java versions > 1.7 are compatible with TLS 1.1 or higher by default - we want TLSv1.2 for our needs
if (Double.parseDouble(Runtime.class.getPackage().getSpecificationVersion()) <= 1.7) {
    setSSLContext(SSLContext.getInstance(SSL_VERSION_TO_USE_FOR_SALESFORCE_LOGIN));
}

private static void setSSLContext(SSLContext context) {
    try {
        /* The key manager and trust manager parameters may be null, in which case the installed
           security providers are searched for the highest-priority implementation of the
           appropriate factory. Likewise, a null SecureRandom means the default implementation is used. */
        context.init(null, null, null);
    } catch (KeyManagementException e) {
        // handle the exception (log it and rethrow, or fail fast)
    }
    // Only make the fully initialized context the process-wide default
    SSLContext.setDefault(context);
}

Navigate to Setup
In the Quick Find bar, type in Critical Updates
Select Critical Updates
Locate the "Require TLS 1.1 or higher for HTTPS connections" update under the Update Name column
Click on Deactivate.

Related

HCW - hybrid configuration wizard modern - InternalUrl_Duplicate

Unable to get through the Hybrid Configuration Wizard in Modern mode. This is necessary because we want to migrate mailboxes. Classic mode works.
It knows that there is a Hybrid Agent, but I can't successfully install with either path, using the existing agent or adding a new one. In Azure there is an App Proxy registration which appears to have the incorrect IP for the route to on-prem. This was due to a misconfiguration of our outgoing firewall. However, after the firewall configuration was fixed, the App Proxy still has the old return IP, and there is no way in Azure to remove this record.
I've removed the App Proxy components on the server and let the HCW install them again, but the record is not updated or removed. I have also gone through the 'Classic' path, which according to community posts is supposed to remove the App Proxy record, but it doesn't.
According to what I've read, if the record is inactive for 10 days, it will be removed, but I'd rather resolve this without waiting for 10 days.
I've tried patching the record using Graph but it doesn't work.
2022.01.31 22:09:59.707 10333 [Client=UX, fn=SendAsync, Thread=15] FINISH Time=2170.2ms Results=BadRequest {"error":{"code":"InternalUrl_Duplicate","message":"Internal url 'https://LOCALFQDNSERVER/' is invalid since it is already in use","innerError":{"date":"2022-01-31T22:09:58","request-id":"d5c4dfe0-096d-4382-9da0-9559f45e0217","client-request-id":"d5c4dfe0-096d-4382-9da0-9559f45e0217"}}}

Jackrabbit Oak: Getting started and connect to a standalone repository via RMI

I am totally new to Jackrabbit and Jackrabbit Oak. I have worked a lot with Alfresco, though, another JCR-compliant open-source content repository.
I want to start a standalone Jackrabbit Oak repo, then connect to it via Java code. Unfortunately the Oak documentation is quite scarce.
I checked out the Oak repo, built it with mvn clean install and then ran the standalone server (memory repository is fine for me at the moment for testing) via:
$ java -jar oak-run-1.6-SNAPSHOT.jar server
Apache Jackrabbit Oak 1.6-SNAPSHOT
Starting Oak-Memory repository -> http://localhost:8080/
13:14:38.317 [main] WARN o.a.j.s.r.d.ProtectedRemoveManager - protectedhandlers-config is missing -> DIFF processing can fail for the Remove operation if the content toremove is protected!
When I open http://localhost:8080/ I see a blank page; the HTML/XHTML output is only visible when I view the page source.
I try to connect via Java code:
JcrUtils.getRepository("http://localhost:8080");
// or
JcrUtils.getRepository("http://localhost:8080/rmi");
but I get:
Connecting to http://localhost:8080
Exception in thread "main" javax.jcr.RepositoryException: Unable to access a repository with the following settings:
org.apache.jackrabbit.repository.uri: http://localhost:8080
The following RepositoryFactory classes were consulted:
org.apache.jackrabbit.oak.jcr.OakRepositoryFactory: declined
org.apache.jackrabbit.commons.JndiRepositoryFactory: declined
Perhaps the repository you are trying to access is not available at the moment.
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:223)
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:263)
at Main.main(Main.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
(The Oak documentation is not as complete as the Jackrabbit documentation, but I am also not sure how much of Jackrabbit 2 is still valid for Oak, since it's a complete rewrite.)
I found the same question in the mailing list (Nabble), but the answer provided there does not use a remote, standalone repository; it uses a local one running in the same servlet container and even the same app (only the MongoDB node store is eventually configured as remote, which would mean the Mongo ports need to be open). So the app creates the repository itself, which is not my case (I already have that case working fine in Oak).
In Jackrabbit 2 (not Oak), I can simply connect via
Repository repo = new URLRemoteRepository("http://localhost:8080/rmi");
and it's working fine, but this method is not available for Oak, it seems.
Is RMI not enabled by default in Oak? Is there a different URI to use?
However, the documentation of Oak says "Oak comes with a runnable jar" and the runnable jar offers the server method to start the server, so I assume that my scenario above is a valid one.
The blank page is a result of your browser being unable to parse the <title/> tag.
Go into developer mode to see how the browser incorrectly interpreted that tag.
(Screenshot: incorrect interpretation of the title tag)
I have never seen an example of Jackrabbit Oak working like this. Are you sure it is possible to start Oak outside of your application?
How do you set up the persistent store? (Which one are you going to use?)
Here is the link how you normally set up jackrabbit oak: https://jackrabbit.apache.org/oak/docs/construct.html
For example, if you use MongoDB as the backend (which is the most powerful), you first connect to the database via
DB db = new MongoClient(ip, port).getDB("testDB");
where ip is the IP address of your MongoDB server and port is its port. This server does not need to be on the same machine your Java code runs on. You can even use a replica set instead of a single MongoDB instance.
The same is valid when using a relational database; only if you choose the tar-file system backend are you limited to your local machine.
Then, in a second step, you create a JCR repository based on the chosen backend (see the link), as sketched below.
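To make that concrete, here is a minimal sketch of that second step against the Oak 1.6 line, assuming the MongoDB backend from above (host, port, database name and the default admin credentials are placeholders; newer Oak versions replace DocumentMK.Builder with a different document-store builder):
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

import com.mongodb.DB;
import com.mongodb.MongoClient;

public class OakMongoExample {
    public static void main(String[] args) throws Exception {
        // Connect to the (possibly remote) MongoDB instance that backs the repository
        DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
        DocumentNodeStore ns = new DocumentMK.Builder().setMongoDB(db).getNodeStore();

        // Build a JCR repository on top of the node store, in this JVM
        Repository repo = new Jcr(new Oak(ns)).createRepository();
        Session session = repo.login(new SimpleCredentials("admin", "admin".toCharArray()));
        System.out.println("Connected, root node: " + session.getRootNode().getPath());

        session.logout();
        ns.dispose();
    }
}
Note that this builds the repository inside your own JVM on top of the (possibly remote) MongoDB node store; it does not attach to the oak-run server started separately, which is what the question asks about.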

version tag in appengine-web.xml & https

So far, I have been working on version 1 of my server application. I could reach the application at
https://myappid.appspot.com
using my browser. (Note: I am using https, not http)
Now I am changing the version id to 2 (version 1 is now production, version 2 is the next-gen release that I would like to test). I need version 1 to remain the default version, since users are on the stable, production version.
Now that I have 2 versions, I tried to reach version 2's static front page from my browser (Chrome) using
https://2.latest.myappid.appspot.com
as per the instructions I could find. Instead, Chrome gives me the following error:
You attempted to reach 2.latest.myappid.appspot.com, but instead you actually reached a server identifying itself as *.appspot.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of 2.latest.myappid.appspot.com.
You cannot proceed because the website operator has requested heightened security for this domain.
This problem goes away with http://2.latest.myappid.appspot.com.
I have requested a secure connection through web.xml's <transport-guarantee> element. So, what am I missing?
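For reference, the constraint in web.xml looks roughly like this (a sketch of the standard servlet security constraint; the exact url-pattern in my file may differ):
<security-constraint>
    <web-resource-collection>
        <web-resource-name>everything</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>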
Try 2.myappid.appspot.com.
Also, in the Admin Console, when you click on Versions you can see all versions of your app. The number of each version is a link; you can click it to access that running version.
If you try to access the new version using https, you should use the following instead (the *.appspot.com wildcard certificate only covers one subdomain level, so the extra dots are replaced with "-dot-"):
https://2-dot-myappid.appspot.com/
Do you need 2 live versions of the same app?
Although possible with some limitations, this is not what the system was designed to do. The https is apparently one of these limitations.
If you want version 2 to replace version 1, then all you need to do is "set default" on version 2, and then it will be the one mapped to https://myappid.appspot.com.

Cannot sign in with google app engine plugin

When trying to sign in using the button in the lower left corner of the screen, I am unable to do so because it needs a verification code. However, I am not offered the chance to receive one; it only brings me directly to the "allow this application" page. The exact error in the log is:
Could not sign in. Make sure that you entered the correct verification code.
Thanks for your help in advance.
I had the same issue, resolved by changing network settings as follows:
In Eclipse:
Preferences > General > Network Connections
Set Active Provider to Manual
Under Proxy entries, edit the HTTPS proxy, adding host and port info
Check "Requires authentication" and add your network ID and password
I had a similar problem on Mac OS X.
However, after upgrading my JDK from 1.6 to 1.7 my problems disappeared. (Note: the JDK, not the JRE.)
The default Java on Mac OS X 10.x is Java SE 6 and you can't uninstall it. You can add Java 1.7 or higher and your system should automatically pick up the later version; you can check from the terminal with
$ java -version

WSDL on SQL Server gives HTTP status 505 Version Not Supported

I am a DBA, not a developer, so forgive me if this is a silly question. But we are having issues with a SQL Server 2005 Web Service endpoint. On the local network I am able to add the reference in Visual Studio 2010 without any issues. It uses digest as the authentication scheme.
However, when anyone tries to add the web reference on another network, such as a developer in New Zealand (we are in Dayton, OH USA) he receives this error:
There was an error downloading
'http://server.domain.net:1280/release-single-address?wsdl'. The
request failed with HTTP status 505: HTTP Version not supported.
Metadata contains a reference that cannot be resolved:
'http://server.domain.net:1280/release-single-address?wsdl'. The
remote server returned an unexpected response: (505) HTTP Version not
supported. The remote server returned an error: (505) Http Version Not
Supported. If the service is defined in the current solution, try
building the solution and adding the service reference again.
Again, this works in Visual Studio (Right Click Add Reference -> Advanced -> Add Web Reference) when done on the same subnet as the server.
When done on any other network the service does not import. We have tried it without any proxy. There is a cross-domain trust involved, but that does not seem to be the issue, as the error occurs using accounts from either domain. When I download the raw XML to my hard drive, I can use it to create the web reference. I firmly believe this is some sort of transport-layer issue, such as a proxy, but captures taken with the proxy server settings disabled are not conclusive.
Today, years after I posted this question, we finally found the answer to this question. It was not a Squid proxy server as we had come to believe. We continued experiencing issues like this with various web services/sites. The last straw was when we finally needed to deploy an SVN server that was used by multinational software engineering teams. Every single member of the different Ops teams we spoke to swore to us there was nothing between the sites that could break our services.
By a stroke of luck the company's Chief Information Security Officer was visiting our site, and a colleague happened to run into him and asked about the issues we were having and what might be the cause. He said immediately that there were Riverbed appliances doing caching and layer 7 inspection on all WAN traffic. We finally managed to catch these devices in the act of attempting to "normalize" HTML and XML, and we were able to perform a capture of data coming from a machine in New Zealand. We performed a diff on HTML pages that were served, as well as XML coming from a web service, to compare how they looked on the local network vs. across the WAN. In the pages/XML served across the WAN, closing tags were inserted that were not needed, or that actually made the XML malformed. Some tags were even commented out entirely if the appliance didn't know what to do with them. And the smoking gun? A custom header...
X-RBT-Optimized-By: cch-riverbed-1 (RiOS 6.5.6a) SC
"Optimized" You keep using that word, but I do not think that it means what you think that it means.
I'm not a pro at SOAP with Visual Studio, but could it be that the SOAP version is incompatible with SQL Server 2005?
If I recall correctly, there are two versions of SOAP: 1.1 and 1.2.
Also check that the format of the HTTP GET request line is correct. For example:
GET http:// mydomain.com HTTP/1.1
Note there is a SPACE between 'http://' and 'mydomain.com'. The server cannot parse a request line in this format, so it cannot determine the HTTP version, and the result is a 505.
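If you want to see the raw status line the server (or whatever sits between you and it) actually returns, you can send a request by hand; a minimal sketch, with the host, port and path taken from the error message above:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RawHttpCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server.domain.net", 1280)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            // A well-formed HTTP/1.1 request line and Host header; a malformed request line
            // (e.g. a stray space in the URL) is the kind of thing that triggers a 505.
            out.print("GET /release-single-address?wsdl HTTP/1.1\r\n");
            out.print("Host: server.domain.net:1280\r\n");
            out.print("Connection: close\r\n\r\n");
            out.flush();

            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            // The first line is the status line - likely a 401 challenge here (the endpoint uses
            // digest auth), or a 505 if something on the path mangles the request.
            System.out.println(in.readLine());
        }
    }
}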
I am not sure, but I think you should check your firewall or your IIS configuration.
