We have implemented WSO2 API Manager (v1.10.0) in a distributed architecture as outlined in the online documentation here.
This consists of the following (on 5 separate servers):
Gateway (x2)
Publisher & Store (on a single server)
Key Manager (x2)
These are wired up to the three standard API Manager databases (Registry, User Manager, and API Manager), which sit on a SQL Server 2014 instance.
We are using the Key Managers for the authentication (login, forgotten password, etc.) of the website users, as well as for authenticating API calls.
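For reference, the website's login flow obtains a token from the Key Manager roughly like this (a simplified sketch of an OAuth2 password-grant call; the endpoint URL, consumer key and secret below are placeholders rather than our real values):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class KeyManagerLogin
{
    // Sketch only: placeholder endpoint and application credentials.
    static async Task<string> RequestTokenAsync(string username, string password)
    {
        using (var client = new HttpClient())
        {
            var tokenEndpoint = "https://keymanager.example.com:9443/oauth2/token";
            var credentials = Convert.ToBase64String(
                Encoding.UTF8.GetBytes("<consumer-key>:<consumer-secret>"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "password",
                ["username"] = username,
                ["password"] = password
            });

            // The UNIQUE KEY error shown below is raised on the Key Manager
            // while it persists the access token issued for this call.
            var response = await client.PostAsync(tokenEndpoint, form);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync(); // JSON with access_token, refresh_token, etc.
        }
    }
}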
However, when trying to log in to the site I'm seeing the following (Violation of UNIQUE KEY constraint) error on the Key Manager:
TID: [-1] [] [2016-10-06 00:36:47,842] ERROR {org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask} - Error occurred while persisting access token :c5a0a11e63388dCHANGEDea34b0533445 {org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask}
org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Error when storing the access token for consumer key : fpA6AhOfbVCHANGEDgH0WzBDOga
    at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:246)
    at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.persistAccessToken(TokenMgtDAO.java:284)
    at org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask.run(TokenPersistenceTask.java:52)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Violation of UNIQUE KEY constraint 'CON_APP_KEY'. Cannot insert duplicate key in object 'dbo.IDN_OAUTH2_ACCESS_TOKEN'. The duplicate key value is (15, williams.j2#CHANGED.org.uk, -1234, , APPLICATION_USER, 369db21a386ae4CHANGED0ff34d35708d, ACTIVE, NONE).
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.execute(SQLServerPreparedStatement.java:332)
    at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:224)
    ... 5 more
This then surfaces as a .NET error on the website.
I've tried Googling this, but cannot find an up-to-date answer.
I have not configured the Key Managers as master and worker nodes (as outlined here), since the documentation seems to suggest that this isn't needed.
Any help would be much appreciated please!
After some debugging, I found the issue! Without the following config in place,
<JDBCPersistenceManager>
    <SessionDataPersist>
        <PoolSize>0</PoolSize>
    </SessionDataPersist>
</JDBCPersistenceManager>
APIM can save more than one ACTIVE OAuth token in the IDN_OAUTH2_ACCESS_TOKEN table for a single token-issuing call.
When the token validation endpoint queries the tokens, only the latest one is returned (time-based sorting and a row limit are used). When that token expires, validation marks it as INACTIVE, but the earlier one is left as it is.
When a refresh happens, the flow checks whether the latest token is inactive. Since it is, a new token is issued; but when the token endpoint tries to persist it, another ACTIVE token already exists. That is what causes this exception.
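Before applying the fix, you can confirm this from the database. A quick check along these lines lists the application/user pairs that have more than one ACTIVE row (a sketch only; the table name and TOKEN_STATE come from the error above, but CONSUMER_KEY_ID and AUTHZ_USER are guesses based on the duplicate-key tuple and may be named differently in your schema version):

using System;
using System.Data.SqlClient;

class DuplicateActiveTokenCheck
{
    static void Main()
    {
        // Placeholder connection string for the API Manager database on SQL Server.
        var connectionString = "Server=<sql-server>;Database=<apim-db>;Integrated Security=true;";

        // Grouping columns are assumptions; adjust them to match your schema.
        const string sql = @"
            SELECT CONSUMER_KEY_ID, AUTHZ_USER, COUNT(*) AS ACTIVE_TOKENS
            FROM IDN_OAUTH2_ACCESS_TOKEN
            WHERE TOKEN_STATE = 'ACTIVE'
            GROUP BY CONSUMER_KEY_ID, AUTHZ_USER
            HAVING COUNT(*) > 1";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("app {0}, user {1}: {2} ACTIVE tokens",
                        reader[0], reader[1], reader[2]);
                }
            }
        }
    }
}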
To sort this out, we can run an update query on the IDN_OAUTH2_ACCESS_TOKEN table to mark all the ACTIVE tokens as INACTIVE (note the single quotes; SQL Server treats double-quoted strings as identifiers):
update IDN_OAUTH2_ACCESS_TOKEN set TOKEN_STATE='INACTIVE' where TOKEN_STATE='ACTIVE'
With the stale ACTIVE tokens out of the way, the server starts working fine again!
I use MySQL and was facing the same problem; changing the pool size alone was not a full solution either. Then I noticed something strange about the idn_oauth2_access_token table: there is a time_created column, but its contents were not the creation time, it was the last-update timestamp. I had read somewhere that the system orders tokens on several columns, among them time_created. I inspected my SQL script and saw that the database updated this column whenever an update was fired. I removed that rule and I have no more errors.
Can you change the configuration below (<PoolSize>0</PoolSize>) in <APIM_HOME>/repository/conf/identity.xml and see? By default, the PoolSize is set to 100.
<JDBCPersistenceManager>
    <SessionDataPersist>
        <PoolSize>0</PoolSize>
    </SessionDataPersist>
</JDBCPersistenceManager>
Hope this will resolve your issue.
Reference: http://sanjeewamalalgoda.blogspot.com/2015/08/how-to-avoid-getting-incorrect-access.html
Related
When configuring a new migration task, I am asked for the application ID and the key. I created the app ID and copied the key from the created account ID under the target details. When I save it, I get the following error:
Unhandled scenario exception. Scenario 'ConnectToTarget.AzureSqlDbMI.Sync.LRS', TaskId '1fa8b4eb-5a2f-4450-adb6-c1a96504f985'.
One or more errors occurred.
Failed to collect data for Azure Resource '/subscriptions/my subscription/resourceGroups/target resource group/providers/Microsoft.Sql/managedInstances/managed-instance' using application ID 'adminsql'.
Object reference not set to an instance of an object.
I suspect I configured the application ID incorrectly for the migration service, but I don't know where to start fixing it.
I don't know if this is the problem you're having, but I got the exact same error when I put the ID of the secret, not the ID of the app, into the migration tool... I realized I had put in the wrong ID, and it works now...
I am trying to understand the specific connection and error states of the Azure IoT C SDK.
I can register with IoTHubClient_SetConnectionStatusCallback for the callback in order to receive the IOTHUB_CLIENT_CONNECTION_STATUS and the IOTHUB_CLIENT_CONNECTION_STATUS_REASON.
The values for the first one are IOTHUB_CLIENT_CONNECTION_AUTHENTICATED and IOTHUB_CLIENT_CONNECTION_UNAUTHENTICATED, which I assume simply mean "connected" and "not connected". The reason is more interesting, however:
IOTHUB_CLIENT_CONNECTION_EXPIRED_SAS_TOKEN
IOTHUB_CLIENT_CONNECTION_DEVICE_DISABLED
IOTHUB_CLIENT_CONNECTION_BAD_CREDENTIAL
IOTHUB_CLIENT_CONNECTION_RETRY_EXPIRED
IOTHUB_CLIENT_CONNECTION_NO_NETWORK
IOTHUB_CLIENT_CONNECTION_COMMUNICATION_ERROR
IOTHUB_CLIENT_CONNECTION_OK
So my first question is: What are the semantics of the respective reasons? When do they occur? What does the communication error entail? That error is so generic it could simply mean "any error we didn't want to specify explicitly".
My second question goes beyond that. I am trying to use X.509 certificates. However, due to certain requirements, I may end up with certificates that are no longer valid or with device IDs that have been deleted. Can I somehow distinguish those cases using the available reasons? When I tried to connect with a non-existent ID, I simply got IOTHUB_CLIENT_CONNECTION_COMMUNICATION_ERROR. From my point of view, there is no longer any point in trying to connect to the IoT Hub, since my device doesn't exist. But a communication error could be anything at all. The same issue appeared when I tried to connect with an invalid certificate or private key.
Every time I try I simply get the errors:
Error: Time:Thu May 25 12:04:00 2017 File:~/azure-iot-sdk-c/iothub_client/src/iothubtransport_amqp_messenger.c Func:process_state_changes Line:1563 messagesender reported unexpected state 4 while messenger is starting
Error: Time:Thu May 25 12:04:00 2017 File:~/azure-iot-sdk-c/iothub_client/src/iothubtransport_amqp_device.c Func:device_do_work Line:848 Device 'MyDevice' messenger failed to be started (messenger got into error state)
From that information I cannot determine when to connect or reconnect.
Thanks for your questions.
The reasons you listed above are triggered by the following conditions:
IOTHUB_CLIENT_CONNECTION_EXPIRED_SAS_TOKEN
The SAS token (provided by the user) expired and can no longer be used to authenticate the device against the Azure IoT Hub. Solution: provide a new, valid SAS token.
IOTHUB_CLIENT_CONNECTION_DEVICE_DISABLED
The device could not be authenticated because it has been disabled by the user on the Azure IoT Hub (see the State field in Device Explorer).
IOTHUB_CLIENT_CONNECTION_BAD_CREDENTIAL
The device key provided by the user was considered invalid, based on the response from the Azure IoT Hub when the client attempted to authenticate.
IOTHUB_CLIENT_CONNECTION_RETRY_EXPIRED
The Azure IoT Hub client has a feature called RetryPolicy (which can be set using IotHubClient_SetRetryPolicy). It has a property that limits the maximum time the client will attempt to reconnect when failures occur. If that maximum time is reached, the connection status callback is invoked with status UNAUTHENTICATED and reason RETRY_EXPIRED.
IOTHUB_CLIENT_CONNECTION_NO_NETWORK
IOTHUB_CLIENT_CONNECTION_COMMUNICATION_ERROR
If retry policy is disabled, these error reasons might be provided to indicate there is a network connection issue.
IOTHUB_CLIENT_CONNECTION_OK
Provided with status AUTHENTICATED.
We currently have ADFS 2.0 with hotfix rollup 2 installed and working properly as an identity provider for several external relying parties using SAML authentication. This week we attempted to add a new relying party; however, when a client presents the authentication request from the new party, ADFS simply returns an error page with a reference number and does not prompt the client for credentials.
I checked the server ADFS 2.0 event log for the reference number, but it is not present (searching the correlation id column). I enabled the ADFS trace log, re-executed the authentication attempt and this message was presented:
Failed to process the Web request because the request is not valid. Cannot get protocol message from HTTP query. The following errors occurred when trying to parse incoming HTTP request:
Microsoft.IdentityServer.Protocols.Saml.HttpSamlMessageException: MSIS7015: This request does not contain the expected protocol message or incorrect protocol parameters were found according to the HTTP SAML protocol bindings.
at Microsoft.IdentityServer.Web.HttpSamlMessageFactory.CreateMessage(HttpContext httpContext)
at Microsoft.IdentityServer.Web.FederationPassiveContext.EnsureCurrent(HttpContext context)
As the message indicates that the request is not well formed, I went ahead and ran the request through xmlsectool and validated it against the SAML protocol XSD (http://docs.oasis-open.org/security/saml/v2.0/saml-schema-protocol-2.0.xsd) and it came back clean:
C:\Users\ebennett\Desktop\xmlsectool-1.2.0>xmlsectool.bat --validateSchema --inFile metaauth_kld_request.xml --schemaDirectory . --verbose
INFO XmlSecTool - Reading XML document from file 'metaauth_kld_request.xml'
DEBUG XmlSecTool - Building DOM parser
DEBUG XmlSecTool - Parsing XML input stream
INFO XmlSecTool - XML document parsed and is well-formed.
DEBUG XmlSecTool - Building W3 XML Schema from file/directory 'C:\Users\ebennett\Desktop\xmlsectool-1.2.0\.'
DEBUG XmlSecTool - Schema validating XML document
INFO XmlSecTool - XML document is schema valid
So I'm thinking that ADFS isn't fully compliant with the SAML specification. To verify, I manually examined the submitted AuthnRequest and discovered that our vendor is making use of the 'Extensions' element to embed their custom properties, which is valid according to the SAML specification. (Note: "ns33" below is correctly namespaced to "urn:oasis:names:tc:SAML:2.0:protocol" elsewhere in the request.)
<ns33:Extensions>
<vendor_ns:fedId xmlns:vendor_ns="urn:vendor.name.here" name="fedId" value="http://idmfederation.vendorname.org"/>
</ns33:Extensions>
If I remove the previous element from the AuthnRequest and resubmit it to ADFS, everything goes swimmingly. And, in fact, I can leave the 'Extensions' container and simply edit out the vendor namespaced element, and ADFS succeeds.
Now, I guess I have 3 questions:
Why was the reference number not logged to the ADFS log? That really would have helped my early debugging efforts
Is it a known issue that ADFS's SAML handler cannot handle custom elements defined within the Extensions element, and if so, is there a way to add support (or at least not crash while handling it)? My vendor has offered to change the SAML AuthnRequest generated to omit that tag, but said that it 'may take some time'-- and we all know what that means...
Does anyone think that installing ADFS hotfix rollup 3 will address this situation? I didn't see anything in the doc to indicate the affirmative.
Thanks for your feedback.
When facing an MSIS7015 ADFS error, the best place to start is enabling ADFS tracing. Log in to the ADFS server as admin and run the following commands. If you have a very busy ADFS server, it might be wise to do this when the server is not as busy.
C:\Windows\System32\> wevtutil sl "AD FS Tracing/Debug" /L:5
C:\Windows\System32\> eventvwr.msc
In Event Viewer select “Application and Services Logs”, right-click and select “View – Show Analytics and Debug Logs”
Go to AD FS Tracing – Debug, right-click and select “Enable Log” to start Trace Debugging.
Process your ADFS login/logout steps and, when finished, go back to the Event Viewer MMC, find the subtree AD FS Tracing – Debug, right-click and select “Disable Log” to stop trace debugging.
Look for Event ID 49 (the incoming AuthnRequest) and check the values being sent. For example, in my case I was receiving the following values: IsPassive='False', ForceAuthn='False'
In my case, all I needed to do to address the issue was create an incoming claim transformer rule for the distinct endpoints.
Once the CAPs were transformed to lowercase true and false, authentication started working.
Guys, I've got a very strange error when trying to connect two portals.
When I press either the 'connect portals' or 'test connection' button, a red error appears saying "An unexpected Error has occurred while validating your request". Yikes!
So, I ensured that the same workflow is running on both sites. Next, I did some debugging and discovered the malfunctioning method in
DotNetNuke.Enterprise.ContentStaging.StagingClientController.cs
public bool PingServer(string address, int portalId, Guid token)
{
    /*====somecode====*/
    client.PairService(request);
    /*====somecode====*/
    return true;
}
So, the pair service. After some more advanced debugging, I found the root of the evil:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_PortalSettings_Portals". The conflict occurred in database "MyDNNDatabase", table "dbo.Portals", column 'PortalID'.
The statement has been terminated. Gosh!
So, I removed the specified constraint and saw a strange thing in my database: DNN tried to add another LocalServerToken with the testing site's ID, although a TargetServerAddress and TargetServerToken with the production site's ID were expected to be added.
So, I deleted the site and created a new one using the template. No luck, as I expected.
The last thing I did was manually add a TargetServerAddress and TargetServerToken in my database. The sites seemed to be connected, but I couldn't authenticate as Host, and publishing content caused the same unexpected error.
Does anyone know this module deeply enough to help?
As mentioned on the DNN community exchange, please open a support ticket to get this addressed. That is what paid support is for with PE and EE ;)
I am using the AdoNetAppender of log4net for database logging. The logging level is set to ERROR. Database logging is configured for two applications, running on different servers and writing to the same table in an Oracle database. The table's columns include loginId and level. The problems I am facing are:
Even though the logging level is set to ERROR, some INFO-level statements also appear in the table, and the corresponding level column shows ERROR.
For some statements, the login ID shown is different from the login ID of the user actually running the application.
So, how do I configure log4net on different servers so that they behave autonomously?
EDIT: I am facing these issues only when running multiple instances of the application; otherwise log4net logging is fine.
Scenario: I browsed the published version of the application in two browsers with different login IDs and went through a different flow in each browser. The result was that the login IDs got jumbled. I read the login ID from the user session in my code and store it in log4net.GlobalContext.Properties.
After some research, I found that there are alternatives to log4net.GlobalContext.Properties, described at http://logging.apache.org/log4net/release/manual/contexts.html. I think ThreadContext.Properties should be used instead of the global one.
I think I am facing these issues because I store the value in log4net.GlobalContext.Properties.
Issue 1: I checked the code, and the statements were logger.info calls, but in the database table they were logged at the error level.
Issue 2: the code for the login ID:
user = (User)Session["User"];
log4net.GlobalContext.Properties["LOGINID"] = user.Login;
The corresponding log4net configuration is in web.config.
If you believe that ThreadContext.Properties can be used instead of GlobalContext.Properties, can you show me how to use it for the login ID?
I started to post this as a comment but I realized that while I don't have the details I need to give you a specific answer, I can point you in the right direction.
Issue 1: If you are getting statements in your database that are INFO statements but are marked as ERROR, this is a problem in your code. You have to tell log4net what level each log statement is; you could even log a "Hello World" statement as a FATAL error. It sounds like your program is sending messages you want marked as INFO, but they are being logged as ERROR. Look at where those statements are sent to the log and you should see a log.ERROR statement; change that to log.INFO and you should be good to go.
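To illustrate (the class and message below are made up, not taken from your code):

using log4net;

class OrderProcessor
{
    private static readonly ILog log = LogManager.GetLogger(typeof(OrderProcessor));

    void Process()
    {
        // The value written to the table's level column comes from the method you call,
        // not from the threshold configured on the appender.
        log.Error("Processed order");  // recorded as ERROR; passes an ERROR threshold
        log.Info("Processed order");   // recorded as INFO; filtered out when the threshold is ERROR
    }
}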
Issue 2: The login ID should show who executed the log statement. That means if you execute something under another account (for permissions) or if you use a service account, it will log that user instead of the person clicking the mouse. I can be much more specific about how to fix this if you show us how you are logging the user information.
Issue 3: I'm not sure what you mean here. Log4net does behave autonomously. You can even use the same configuration on multiple servers without issue, if that is what you are alluding to.
If you would like a more complete answer that is more specific to your issues, please post the log4net config file and the relevant code (where you are logging the INFO statements and the method by which you log the user ID would be a good start).
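And since you asked about ThreadContext specifically, here is a minimal sketch of how the login ID could be set per request instead of globally. It reuses the LOGINID property name and the User object from your post; treat it as a starting point rather than a drop-in fix:

using log4net;

public class AuditContext
{
    // Call this once per request (for example from a base page or an HTTP module)
    // after the user has been read from session state.
    public static void SetLoginId(User user)
    {
        // ThreadContext.Properties is scoped to the current thread, so concurrent
        // requests from different users no longer overwrite each other's LOGINID
        // the way the single shared GlobalContext.Properties dictionary does.
        ThreadContext.Properties["LOGINID"] = user.Login;
    }
}

The AdoNetAppender can keep reading the value the same way it does now (for example via a %property{LOGINID} pattern in the parameter layout), so the web.config changes should be minimal. If your pages do async work that hops threads, log4net also offers LogicalThreadContext.Properties, which flows with the logical call context instead of the physical thread.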