I'm writing my own implementation of IProfileService for IdentityServer4.
I'm trying to figure out how context.RequestedClaimTypes is intended to operate.
Does context.RequestedClaimTypes indicate additional claims to return (in addition to "sub"), or does it indicate that only the requested claims, and no more, should be returned?
RequestedClaimTypes is the flattened list of the claim types that are associated with the scopes the client has requested.
That's what the client expects to get back - what you actually return is up to you.
I posted the following feature request to the azure-sdk repo, but I'm not sure that was the correct place to get a response, so I'm reposting it here.
https://github.com/Azure/azure-sdk-for-net/issues/20764
When processing a document against a custom trained model, when a value is present but not able to be translated (such as a signature), would it be possible to include something in the response to identify it as having a value even though it wasn't able to be processed?
The specific use case is that our client needs to know that a document was signed by the parties involved. Without this feature, someone will be required to manually review thousands of document images per week to verify that they have been signed. In testing we have found that very few signatures are being translated anyway, so the string response is coming back as null.
Thank you,
Rich
For Form Recognizer, when a value is present but not detected, it will be extracted as null: Form Recognizer is not aware that a value exists if it did not detect it. In the case of signatures, this is usually due to the signature being unreadable, just a scribble.
I have an LTI Tool Consumer (LMS) that is using LTI1p0, which will send a request to a service that is currently not using LTI. Therefore I'm writing a NodeJS implementation of a wrapper which will:
1. receive the request from the LTI Tool Consumer,
2. map it to match the service's API,
3. send it to the service,
4. then parse the response from the service into an LTI Tool Provider format,
5. and finally send it back to the Tool Consumer.
The service has a required field called groups which expects an array of group objects like so:
groups: [{
    id: <string>,   // id of the group
    name: <string>, // name of the group
    role: <string>  // role of the user
}]
This parameter doesn't exactly exist in the LTI1p0 implementation guide. So I want to know how to best send array-type (groups in my case) information via LTI.
When looking through the docs, I've come across a few potential parameters I could use:
1. Context parameters
The guide mentions that one type of context would be "group", and there are parameters for context_id, context_type, and context_title. The issue is that this only allows one group per request/user.
2. Custom parameters
I could make a custom parameter and call it custom_groups, which seems simple, but I'm not sure how the value should look for arrays. Just a stringified JSON array?
custom_groups = '[{"id":123,"name":"Group Name","role":"Instructor"},{"id":124,"name":"Group Name 2","role":"Creator"}]'
For the roles parameter, one can send a list of comma-separated strings (i.e. roles=Instructor,Creator,...), but that wouldn't suffice in my case.
I'm still new to LTI, so my apologies if this is blatantly obvious.
Note: Both the LTI Tool Consumer (LMS) and the service are external, i.e. I can't change them and can only provide the wrapper. I can communicate with the Tool Consumer about possible custom parameters, but again I'm not sure which format to request.
Additionally, the service might implement LTI towards the end of the year, so ideally the wrapper could then be removed and the Tool Consumer wouldn't have to change much.
Any help much appreciated!
Groups are notably absent from the LTI spec. So any answer will be part opinion.
I would agree with you that using the context parameter fields, with one LTI launch per group, would be the most correct way, as far as the spec goes.
However, I have not seen an LMS that allows LTI launches from a group context. So you may not be able to use the service without a wrapper, even if it supported LTI natively.
Alternatively:
LTI 1.0 supports custom parameters, and since you are extending the information already sent (context and roles), you could use the ext_ prefix.
Reference: https://www.imsglobal.org/specs/ltiv1p0/implementation-guide
If a profile wants to extend these fields, they should prefix all fields not described herein with "ext_".
So you could send a custom parameter with that prefix, assuming your LMS lets you send a useful custom parameter. LTI is designed to use basic POST requests, not multidimensional JSON objects, but a stringified JSON object is perfectly valid with an appropriate key.
i.e.:
ext_custom_groups = '[{"id":123,"name":"Group Name","role":"Instructor"},{"id":124,"name":"Group Name 2","role":"Creator"}]'
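As a sketch of how the Node wrapper could build and consume such a value (the `ext_custom_groups` key and the group fields are assumptions from this discussion, not anything defined by LTI 1.0):

```javascript
// Hypothetical wrapper code: ext_custom_groups and the group fields
// are assumptions from this thread, not part of the LTI 1.0 spec.
const groups = [
  { id: "123", name: "Group Name", role: "Instructor" },
  { id: "124", name: "Group Name 2", role: "Creator" },
];

// Outgoing: flatten the array into a single POST parameter value.
const launchParams = {
  ext_custom_groups: JSON.stringify(groups),
};

// Incoming: parse the value back defensively, since the parameter
// is non-standard and may be missing or malformed.
function parseGroups(value) {
  try {
    const parsed = JSON.parse(value);
    return Array.isArray(parsed) ? parsed : [];
  } catch (e) {
    return [];
  }
}
```

Since the value travels as an ordinary form field, stringify/parse keeps both ends simple and avoids inventing a custom delimiter format.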
I'm trying to get AngularJS to work with Gorilla CSRF for my web application, but there isn't much documentation around that I can find, so I'm not sure where exactly to start. Should I set an X-CSRF-Token for every GET request, or should I just do it when the user visits the home page like I'm doing now? Also, how do I make AngularJS CSRF protection work with Gorilla CSRF? Do I need to do some sort of comparison? Any example code would be appreciated.
Here is my code:
package main

import (
	"net/http"

	"github.com/gorilla/csrf"
	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", Home).Methods("GET")
	// Other routes handling goes here
	http.ListenAndServe(":8000",
		csrf.Protect([]byte("32-byte-long-auth-key"))(r))
}

func Home(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("X-CSRF-Token", csrf.Token(r))
}

// More routes
Your question might be a bit broad, but overall you're misusing the tools, so I'm just going to try and explain the basic ideas. The application you're using uses a 'double submit' pattern for CSRF protection. This requires changes in both the client and server code bases. The server should not be setting the X-CSRF-Token header; that is the role of the client. I've actually implemented a couple of from-scratch anti-CSRF solutions recently and they're pretty simple (both double submit pattern). I also used a few packages from vendors like Microsoft and Apache (I had to implement CSRF across like 20 years of applications on all kinds of stacks).
In the double submit pattern the server should be setting a cookie with a random value (like a GUID), and the cookie must be marked as secure. You can make it HttpOnly as well, but that will require a lot more work in your front-end resources. On the client side, the simplest way to deal with this is to implement some JavaScript that reads the cookie value and adds it as a header before any POST request. You typically don't need to protect GETs. You could, but if your GETs are doing constructive/destructive things server side, then you're misusing the HTTP verb, and I would correct that by making those requests POSTs rather than trying to protect every single request.
On the server side, it's best to do the CSRF check up front, in a common place where all requests come in. When a POST comes in, the server should read the cookie value, check for the header value and compare them. If they're equal then the request should be allowed to pass through, if they're not then you should boot them out with a 403 or something. After doing so the server should rewrite the cookie value (best to make it one use only).
Your client side script can have something like the code below; just make sure the resource is on every page load. If you don't use form submits, this will cover everything; if you do submit forms, you'll need some other code like this to handle that. Some approaches prefer to write the value into the DOM server side. For example, in .NET the CSRF library makes the value HttpOnly and Secure and expects the devs to put a placeholder token in every single form in every single cshtml file in their project... I personally think that is very inefficient. No matter how you do this, you're probably going to have to do some custom work. Angular isn't going to implement the front end for gorilla's CSRF library, and gorilla probably isn't going to come with JavaScript for your client since it's an API library. Anyway, a basic JavaScript example:
// Three functions to enable CSRF protection in the client. Sets the nonce header
// with the value from the cookie prior to firing any HTTP POST.
function addXMLRequestCallback(callback) {
    var oldSend;
    if (!XMLHttpRequest.sendcallback) {
        XMLHttpRequest.sendcallback = callback;
        oldSend = XMLHttpRequest.prototype.send;
        // override the native send()
        XMLHttpRequest.prototype.send = function () {
            XMLHttpRequest.sendcallback(this);
            if (!Function.prototype.apply) {
                Function.prototype.apply = function (self, oArguments) {
                    if (!oArguments) {
                        oArguments = [];
                    }
                    self.__func = this;
                    self.__func(oArguments[0], oArguments[1], oArguments[2], oArguments[3], oArguments[4]);
                    delete self.__func;
                };
            }
            // call the native send()
            oldSend.apply(this, arguments);
        };
    }
}

addXMLRequestCallback(function (xhr) {
    xhr.setRequestHeader('X-CSRF-Token', getCookie('X-CSRF-Cookie'));
});

function getCookie(cname) {
    var name = cname + "=";
    var ca = document.cookie.split(';');
    for (var i = 0; i < ca.length; i++) {
        var c = ca[i];
        while (c.charAt(0) == ' ') c = c.substring(1);
        if (c.indexOf(name) == 0) return c.substring(name.length, c.length);
    }
    return "";
}
Now, if you can narrow your question a bit I can provide some more specific guidance, but this is just a guess (maybe I'll read their docs when I have a minute): Gorilla is automatically going to set your cookie and do your server side check for you if you use csrf.Protect. The code you have setting the header in Go is what you need the JavaScript above for instead. If you set the header on the server side, you've provided no security at all; that needs to happen in the browser. If you send the value along with all your requests, Gorilla will most likely cover the rest for you.
Some other random thoughts about the problem space. As a rule of thumb, if an attacker can't replay a request, they probably can't CSRF you. This is why this simple method is so effective: every incoming request requires exactly one random GUID value to pass through. You can store that value in the cookie so you don't have to worry about session state moving across servers etc. (that would require a shared data store server side if you're not using the double submit pattern, i.e. this cookie-header value compare business). There's no real chance of this value being brute forced with current hardware limitations. The same-origin policy in browsers prevents attackers from reading the cookie value you set (only scripts from your domain will be able to access it). The only way to exploit that is if the user has previously been exploited by XSS, which kind of defeats the purpose of doing CSRF, since the attacker would already have more control and ability to do malicious things with XSS.
I have used rest servlet binding to expose route as a service.
I have used employeeClientBean as a POJO , wrapping the actual call to employee REST service within it, basically doing the role of a service client.
So, based on the method name passed, I call the respective method in employee REST service, through the employeeClientBean.
I want to know how I can handle the scenarios added as comments in the block of code.
I am just new to Camel, but felt POJO binding is better as it does not couple us to Camel-specific APIs like Exchange and Processor, or even use any specific components.
But, I am not sure how I can handle the above scenarios and return appropriate JSON responses to the user of the route service.
Can someone help me with this?
public void configure() throws Exception {
    restConfiguration().component("servlet").bindingMode(RestBindingMode.json)
        .dataFormatProperty("prettyPrint", "true")
        .contextPath("camelroute/rest").port(8080);

    rest("/employee").description("Employee Rest Service")
        .consumes("application/json").produces("application/json")

        .get("/{id}").description("Find employee by id").outType(Employee.class)
        .to("bean:employeeClientBean?method=getEmployeeDetails(${header.id})")
        // How to handle and return a response to the user of the route service for the following scenarios for get/{id}?
        // 1. The passed id is not a valid one as per the system
        // 2. Failure to return details due to some issues

        .post().description("Create a new Employee").type(Employee.class)
        .to("bean:employeeClientBean?method=createEmployee");
        // How to handle and return the correct response to the user of the route service for the following scenarios?
        // 1. The employee being created already exists in the system
        // 2. Some of the fields of the employee passed are not as per the constraints on them
        // 3. Failure to create an employee due to some issues on the server side (e.g. DB failure)
}
I fear you are putting Camel to bad use: as per the Apache documentation, the REST module supports Consumer implementations, e.g. reading from a REST endpoint, but NOT writing back to a caller.
For your use case you might want to switch framework. Syntactically, Ratpack goes in that direction.
If a module requires a claim, and the user does not have the claim a 403 response is returned.
eg:
this.RequiresClaims(new[] { "SuperSecure" });
or
this.RequiresValidatedClaims(c => c.Contains("SuperSecure"));
but that just returns a blank page to the user.
How do I deal with a user not having the required claim?
Can I 'catch' the 403 and redirect?
The RequiresClaims method returns void, or uses the pre-request hook to throw back an HttpStatusCode.Forbidden. What should I do so the user knows what has happened?
Many Thanks,
Neil
You can catch it either by writing your own post-request hook (at either the app level or the module level) or by implementing your own IErrorHandler, probably wrapping the default one.
The error handler stuff is going to change so you will be able to register multiple ones (for different error codes); it's set up to do that (with the "can/do" interface), but for some reason my brain didn't add it as a collection :-)