Authentication with sp-rest-proxy / node-sp-auth - reactjs

I am getting 403 errors with sp-rest-proxy. I was originally using the "User Credentials" strategy, which allowed me to GET data but not POST it, so now I am trying "Addin only permissions". My I.T. team was able to get the app registered for me, but I am still receiving the error below, now even with GET.
Error Details:
{
  "readyState": 4,
  "responseText": "{\"error\":{\"code\":\"-2147024891, System.UnauthorizedAccessException\",\"message\":{\"lang\":\"en-US\",\"value\":\"Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))\"}}}",
  "responseJSON": {
    "error": {
      "code": "-2147024891, System.UnauthorizedAccessException",
      "message": {
        "lang": "en-US",
        "value": "Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))"
      }
    }
  },
  "status": 403,
  "statusText": "Forbidden"
}
Things I suspect I messed up on:
I strongly suspect it's my server/private config. I have the following…
const RestProxy = require('sp-rest-proxy');
const settings = {
  configPath: './config/private.json',
  port: 8081,
};
const restProxy = new RestProxy(settings);
restProxy.serve();
and private.json (not the actual values I am using, except for "strategy"):
{
  "siteUrl": "https://ORGANIZATION.sharepoint.com",
  "strategy": "OnlineAddinOnly",
  "clientId": "0000000-000000-000000-0000-00000000",
  "clientSecret": "000000000000000000000000000000",
  "realm": "00000-0000-0000-0000-000000"
}
I couldn't find much on the "strategy" value in either the sp-rest-proxy or the node-sp-auth documentation. I assume it's OnlineAddinOnly, but I'm not able to find the specific syntax for what values this attribute expects. I also noticed that the "clientSecret" changes once I run the server; I assume this is intentional encryption.
During the app registration phase (step 5 of this https://github.com/s-KaiNet/node-sp-auth/wiki/SharePoint%20Online%20addin%20only%20authentication) I had the IT folks set the "Right" attribute in AppPermissionRequests to "Write" instead of "FullControl". I noticed that "FullControl" seems to be used in most examples, though I wasn't sure if it was required. Can anyone confirm that?
[Edit: confirmed this is not the issue by setting this to FullControl]
Intention:
I am trying to build an internal data management tool that only needs to work on localhost to get, manipulate, and replace JSON files in my team's SharePoint (in a nice way, so that non-coders can do this). The sp-rest-proxy library seems to be what I need to use the REST API effectively from React.
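For reference, the kind of call the tool makes through the local proxy looks like this (a minimal sketch; the list title "Documents" is a placeholder, not my actual list):

fetch("http://localhost:8081/_api/web/lists/getbytitle('Documents')/items", {
  // sp-rest-proxy forwards /_api/... requests to the configured siteUrl
  headers: { Accept: 'application/json;odata=verbose' },
})
  .then((res) => res.json())
  .then((json) => console.log(json.d.results));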

As far as I know, SharePoint app-only access is disabled by default. You need to ask your administrator to enable it by running the following command:
Set-SPOTenant -DisableCustomAppAuthentication $false
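A quick way to verify the current value (a sketch, assuming you are connected to the SharePoint Online Management Shell):

# Shows the tenant setting; app-only auth via ACS works when this is $false
Get-SPOTenant | Select-Object DisableCustomAppAuthentication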

The answer likely lies in the AppPermissionRequests XML. The creator of the library was able to point me to a better example, and I noticed some differences: we had a different Scope value and no AllowAppOnlyPolicy. Adding these seems to have fixed most of the issue; I can confirm that I can now do GET.
I am still having issues with GetFolderByServerRelativeUrl and using it to add/replace files (see the sketch after the XML below), but I am not sure that is related and will treat it as a separate issue, as it may not involve sp-rest-proxy or node-sp-auth.
The correct AppPermissionRequests XML should be the following, and as #Michael Han_MSFT mentioned, you should ensure that DisableCustomAppAuthentication is set to false:
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
</AppPermissionRequests>
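For the file add/replace part mentioned above, a minimal sketch of what the upload call could look like through the proxy (folder path, file name, and payload are placeholders, not from my actual setup):

// Upload (or overwrite) a JSON file via the folder's Files/add endpoint,
// proxied by sp-rest-proxy so authentication is handled locally.
const payload = JSON.stringify({ records: [] });
fetch(
  "http://localhost:8081/_api/web/GetFolderByServerRelativeUrl('/Shared Documents')" +
    "/Files/add(url='data.json',overwrite=true)",
  {
    method: 'POST',
    headers: { Accept: 'application/json;odata=verbose' },
    body: payload,
  }
)
  .then((res) => res.json())
  .then((json) => console.log('Uploaded:', json.d));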

Related

Content Script Warning, Unexpected Property Found (Firefox WebExtensions)

After typing out the tutorial code and adjusting the details/regex as follows:
{
  "manifest_version": 2,
  "name": "Paige",
  "version": "1.0",
  "description": "Gathers and displays a custom web feed",
  "content_scripts": [
    {
      "matches": ["*://*.google.com/*"],
      "js": ["paige.js"]
    }
  ],
  "permissions": [
    "*://*.google.com/*"
  ]
}
I proceed to about:debugging and click Load Temporary Add-on.... I then select this manifest from my computer (which does indeed share a folder with a paige.js). This results in the following message:
Reading manifest: Warning processing content_script: An unexpected property was found in the WebExtension manifest.
I have double-checked with the tutorial and with the documentation, but I can't seem to determine the problem. The only related help I could find on this subject addressed the following error:
Reading manifest: Error processing content_script: An unexpected property was found in the WebExtension manifest.
This is not the same issue (as far as I can tell) given that mine is a warning and theirs is an error. Regardless, I checked the listed solution and it was not relevant. If anyone can help me discover my mistake, I would very much appreciate it.
This answer is two-fold.
My issue was apparently a file-explorer issue: when I saved my file, it saved to a new location rather than applying the changes to the original file. This has now been resolved.
As far as the Warning vs. Error semantics go, they seem to be identical after all. For those wondering, the message can be resolved by changing "content_script" to "content_scripts", as shown below.
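To make the fix concrete, only the key name changes (a minimal before/after using the manifest values from the question):

// Triggers "An unexpected property was found in the WebExtension manifest":
"content_script": [{ "matches": ["*://*.google.com/*"], "js": ["paige.js"] }]
// Correct key, recognized by Firefox:
"content_scripts": [{ "matches": ["*://*.google.com/*"], "js": ["paige.js"] }]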

Logic App not able to deserialize Azure Data Factory pipeline error message

I am facing the below issue with Azure Data Factory and a Logic App.
I am using an Azure Data Factory pipeline for migration and a Logic App for sending "Success & Failure" notifications to the technical team.
Success is working fine, as the message is hardcoded, but failure is not, as the Logic App is not able to parse the Data Factory pipeline error.
Here is the input that is going to the Logic App web activity:
Input
{
  "url": "https://xxxxxxxxxxxxxxxxx",
  "method": "POST",
  "headers": {},
  "body": "{\n \"title\": \"PIPELINE RUN FAILED\",\n \"message\":\"Operation on target Migration Validation failed: Execution fail against sql server. Sql error number: 50000. Error Message: The DELETE statement conflicted with the REFERENCE constraint \"FK_cmclientapprovedproducts_cmlinkclientchannel\". The conflict occurred in database \"Core7\", table \"dbo.cmClientApprovedProducts\", column 'linkclientchannelid'.\",\n \"color\": \"Red\",\n \"dataFactoryName\": \"LFC-TO-MCP-ADF\",\n \"pipelineName\": \"LFC TO MCP MIGRATION\",\n \"pipelineRunId\": \"f4f84365-58f0-4da1-aa00-64c3a4daa9e1\",\n \"time\": \"2020-07-31T22:44:01.6477435Z\"\n}"
}
Here is the error the Logic App is throwing:
failures
{
  "errorCode": "2108",
  "message": "{\"error\":{\"code\":\"InvalidRequestContent\",\"message\":\"The request content is not valid and could not be deserialized: 'After parsing a value an unexpected character was encountered: F. Path 'message', line 3, position 202.'.\"}}",
  "failureType": "UserError",
  "target": "Send Failed Notification",
  "details": []
}
I have tried various options, like setting a variable and converting with various existing methods (string, json, replace, etc.), but no luck. For example:
@string(activity('LOS migration').Error.Message)
I've been struggling with this almost all day... please suggest if anyone has faced a similar issue.
Below is the data flow activity (screenshot not included).
Now it is working:
Paste the body content into the body text field box WITHOUT clicking on 'Add dynamic content' in the web activity calling the Logic App.
For the failure case, pass the error output using @{activity('LOS migration').error.message}.
For sending the email, the Logic App doesn't know whether it's going to send a failure or success email, so we have to adapt the body so the activity can use parameters, which we'll define later:
{
  "DataFactoryName": "@{pipeline().DataFactory}",
  "PipelineName": "@{pipeline().Pipeline}",
  "Subject": "@{pipeline().parameters.Subject}",
  "ErrorMessage": "@{pipeline().parameters.ErrorMessage}",
  "EmailTo": "@{pipeline().parameters.EmailTo}"
}
We can reference these parameters in the body using the following format: @{pipeline().parameters.parametername}. For more details, you could refer to this article.
If you want to use the direct error message of the Data Factory activity as an input to the Logic App email expression, you could try:
"ErrorMessage": "@{string(replace(activity('activity_name').Error.Message, '"',''''))}"
Replace 'activity_name' with the name of your failing activity.
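Putting it together, the web activity body might look like the following sketch ('LOS migration' is the failing activity from the question; the other values use standard ADF system variables). The replace() matters because the raw error message contains unescaped double quotes (around "FK_cmclientapprovedproducts_cmlinkclientchannel" in the input above), which is exactly what breaks the JSON deserialization:

{
  "title": "PIPELINE RUN FAILED",
  "message": "@{string(replace(activity('LOS migration').Error.Message, '"', ''''))}",
  "color": "Red",
  "dataFactoryName": "@{pipeline().DataFactory}",
  "pipelineName": "@{pipeline().Pipeline}",
  "pipelineRunId": "@{pipeline().RunId}",
  "time": "@{utcnow()}"
}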

Can't connect to Neptune anymore

I have created a Neptune instance in my AWS account and a Load Balancer to access it from my local machine to play around.
I'm basically redirecting all connections on port 80 of my LB to port 8182 on Neptune, so I can easily query it through the browser. In fact, this is the output for /status:
// 20191211170323
// http://my-lb/status
{
  "status": "healthy",
  "startTime": "Mon Dec 09 20:06:21 UTC 2019",
  "dbEngineVersion": "1.0.2.1.R2",
  "role": "writer",
  "gremlin": {
    "version": "tinkerpop-3.4.1"
  },
  "sparql": {
    "version": "sparql-1.1"
  },
  "labMode": {
    "ObjectIndex": "disabled",
    "Streams": "disabled",
    "ReadWriteConflictDetection": "enabled"
  }
}
The problem is that when I try to connect to it through the Gremlin Console or Java code, I get the following errors:
gremlin> :remote connect tinkerpop.server conf/remote-neptune.yaml
ERROR org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler - Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 403 Forbidden
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:226)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:276)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
And my remote-neptune.yaml is as simple as:
hosts: [my-lb]
port: 80
connectionPool: { enableSsl: false}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
I have updated my AWS credentials although I don't think that's related since I'm accessing it through the LB.
And the weirdest part is that this same scenario was working like a week ago :/
Any ideas?
Thanks!
Looks like the problem resolved itself, but I'm sharing a few things to watch out for in case this happens again in the future. If you see connection issues, your first step should be to check whether it's a network connectivity issue. (You mentioned that you were going to check whether something changed with regard to security groups, so do update if that was indeed the case.) To check whether it really is an SG issue, log into your client instance and make a simple telnet call to the DB endpoint:
telnet <endpoint> <port>
If it responds with "Connected", you can be sure that your SGs are correct, and you are now dealing with an application-layer problem.
As called out in the comments, some of the possible culprits could be:
You previously had a setup without IAM Auth in Neptune (not on the ALB) and have now enabled IAM Auth. (Emphasis: I'm referring to IAM Auth on the database, not on some other component in between.)
Gremlin client-server version mismatches.
Some explicit settings on the ALB that could hinder the requests.
And a few others. To summarize, try to classify whether it is an L2/L3 issue or an L7 issue and start investigating based on that.
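If the telnet check passes, one way to see the 403 at the HTTP layer without the Gremlin Console is to attempt the WebSocket upgrade by hand against the /gremlin endpoint (a sketch using the LB hostname from the question; the Sec-WebSocket-Key is just an arbitrary base64 value):

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: c3R1ZmYtYW5kLXRoaW5ncw==" \
  http://my-lb/gremlin

A 403 response here confirms the request is being rejected before the Gremlin protocol is even involved, which points at auth or ALB configuration rather than the client.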

Difference between "drive.metadata.readonly" and "drive.readonly.metadata"

I want to ask what the difference is between DriveScopes.DRIVE_METADATA_READONLY and https://www.googleapis.com/auth/drive.readonly.metadata. In other words, what is the difference between these two forms:
https://www.googleapis.com/auth/drive.metadata.readonly // DriveScopes.DRIVE_METADATA_READONLY
https://www.googleapis.com/auth/drive.readonly.metadata
When I was using a service account to work with the Drive API, it took me a long time to figure out why my app was throwing an unauthorized exception:
Uncaught exception from servlet
com.google.api.client.auth.oauth2.TokenResponseException: 403
{
  "error": "access_denied",
  "error_description": "Requested client not authorized."
}
The String constant DriveScopes.DRIVE_METADATA_READONLY was causing the exception. In which context should I use this constant?
That's clearly a mistake in the Java API client.
The API documentation states that the correct scope is:
https://www.googleapis.com/auth/drive.readonly.metadata
whereas when you look at the latest Javadoc (at the time of this answer), you get:
https://www.googleapis.com/auth/drive.metadata.readonly
You should ignore the DriveScopes constant and define your own constant until the Google Drive team fixes this.

Error when trying to get Google Drive File

I am getting an error when trying to get a Google Drive File using:
file = service.files().get(fileId=<googleDriveFileId>).execute()
The error is:
<HttpError 404 when requesting https://www.googleapis.com/drive/v2/files/0B6Cpn8NXwgGPQjB6ZlRjb21ZdXc?alt=json returned "File not found: 0B6Cpn8NXwgGPQjB6ZlRjb21ZdXc">
However, when I copy and paste that link directly in the browser like this:
https://www.googleapis.com/drive/v2/files/0B6Cpn8NXwgGPQjB6ZlRjb21ZdXc?alt=json
I get a different error:
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "dailyLimitExceededUnreg",
        "message": "Daily Limit Exceeded. Please sign up",
        "extendedHelp": "https://code.google.com/apis/console"
      }
    ],
    "code": 403,
    "message": "Daily Limit Exceeded. Please sign up"
  }
}
I am nowhere even close to exceeding the daily limit; the console shows 0% usage.
I know the fileId is correct; I am using Google Picker to get it.
Any ideas?
I have found elsewhere that this is a known issue with Google Drive that they are working to resolve. They offer the following workaround, which I have confirmed works.
Add the following when building the picker:
enableFeature(google.picker.Feature.MULTISELECT_ENABLED).
Complete code:
var picker = new google.picker.PickerBuilder().
    addView(view).
    addView(uploadView).
    setAppId("pressomatic").
    setCallback(pickerCallback).
    enableFeature(google.picker.Feature.MULTISELECT_ENABLED).
    build();
picker.setVisible(true);
This same workaround solves another problem I have posted about: trying to upload to a specific folder with Google Picker using setParent on a DocsUploadView. You still add the feature to the Picker, not the DocsUploadView, and everything works as it should.
