Correct place to audit queries in Hot Chocolate GraphQL

I am trying to decide whether I should audit user queries in the HttpRequestInterceptor or the DiagnosticEventListener in Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/db, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed, so in theory I should use the HttpRequestInterceptor.
But how do I get an IRequestContext from IRequestExecutor or IQueryRequestBuilder? I tried googling, but the documentation is limited.

Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
        // do your work here, then invoke the rest of the pipeline
        await next(context);
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
The pipeline above is essentially the default pipeline, but with your middleware added right after document validation. At this point your GraphQL request is parsed and validated, so we know it is a valid GraphQL request that can be processed, and we can use the context.Document property, which contains the parsed GraphQL document.
To serialize the document to a formatted string, use context.Document.ToString(indented: true).
The good thing is that in the middleware we are in an async context, meaning you can easily access a database and so on. By contrast, the diagnostic events are sync and not meant to carry a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
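For illustration, here is a minimal sketch of such a class-based middleware, assuming v11's RequestDelegate and IRequestContext conventions; the IAuditLogger service and its WriteAsync method are hypothetical stand-ins for your own audit store:

public class AuditRequestMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IAuditLogger _audit; // hypothetical service that persists the audit entry

    public AuditRequestMiddleware(RequestDelegate next, IAuditLogger audit)
    {
        _next = next;
        _audit = audit;
    }

    public async ValueTask InvokeAsync(IRequestContext context)
    {
        try
        {
            // The document is parsed and validated at this point in the custom pipeline.
            await _audit.WriteAsync(context.Document?.ToString(indented: true));
        }
        catch
        {
            // If the audit cannot be written, block the operation with an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("The request could not be audited.")
                    .SetCode("AUDIT_FAILED")
                    .Build());
            return;
        }

        await _next(context);
    }
}

It would then be registered at the same position in the pipeline, for example with .UseRequest<AuditRequestMiddleware>() instead of the delegate overload.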
If you need more help, join us on Slack.
Click on community support to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

Related

react-paypal-button-v2 returning the wrong order id

I was trying to debug a problem related to refunding PayPal orders (in a sandbox environment) using order IDs that were stored previously. Every time I tried to perform a refund, the PayPal API would return an INVALID_RESOURCE_ID error, meaning that no such order existed. After much debugging, I traced the issue back to the initial process where I stored the order ID. The following is how I retrieve and store it:
const onApprove = (data, actions) => {
    // Redux method of saving the checkout in the backend with the order ID, using data.orderID
    dispatch(saveCheckout(data.orderID));
    return actions.order.capture();
}
<PayPalButton
    amount={totalPrice}
    currency="AUD"
    createOrder={(data, actions) => createOrder(data, actions)}
    onApprove={(data, actions) => onApprove(data, actions)}
    options={{
        clientId: "<placeholder>",
        currency: "AUD"
    }}
/>
I am using the recommended data.orderID from the docs and yet, upon inspecting the network tab, the following is shown:
{"id":"5RJ421191B663801G","intent":"CAPTURE","status":"COMPLETED","purchase_units":[{"reference_id":"default","amount":{"currency_code":"AUD","value":"24.00"},"payee":{"email_address":"sb-sg4zd7438633#business.example.com","merchant_id":"EJ7NSJGC6SRXQ"},"shipping":{"name":{"full_name":"John Doe"},"address":{"address_line_1":"1 Cheeseman Ave Brighton East","admin_area_2":"Melbourne","admin_area_1":"Victoria","postal_code":"3001","country_code":"AU"}},"payments":{"captures":[{"id":"7A2856455D561633D","status":"COMPLETED","amount":{"currency_code":"AUD","value":"24.00"},"final_capture":true,"seller_protection":{"status":"ELIGIBLE","dispute_categories":["ITEM_NOT_RECEIVED","UNAUTHORIZED_TRANSACTION"]},"create_time":"2021-10-11T00:40:58Z","update_time":"2021-10-11T00:40:58Z"}]}}],"payer":{"name":{"given_name":"John","surname":"Doe"},"email_address":"sb-432azn7439880#personal.example.com","payer_id":"KMEQSKCLCLUZ4","address":{"country_code":"AU"}},"create_time":"2021-10-11T00:40:48Z","update_time":"2021-10-11T00:40:58Z","links":[{"href":"https://api.sandbox.paypal.com/v2/checkout/orders/5RJ421191B663801G","rel":"self","method":"GET"}]}
The id saved by onApprove is 5RJ421191B663801G, but there is another ID under captures, 7A2856455D561633D. That is the ID I actually need to save in order to make the refund later on. However, I am struggling with how to retrieve this value, as it seems to be visible only via the network tab. The objects returned by the onApprove and actions.order.get() methods only contain the first, "false" id. Any advice would be greatly appreciated.
These are two separate types of IDs: the order ID (used only during buyer checkout approval) and the payment/transaction ID (which only exists after an order is captured, and is the one needed for any later refund or accounting purposes).
Since you are capturing on the client side with actions.order.capture(), this is where you would need to add a .then(function(data){ ... }) to do something with the capture data (particularly data.purchase_units[0].payments.captures[0].id). That is the id you would use for a refund.
In actual best practice, if anything important needs to be done with the capture id -- such as storing it in a database for reference -- you should not be creating and capturing orders on the client side, and instead calling a server-side integration where that database write will be performed.
Follow the Set up standard payments guide and make 2 routes on your server, one for 'Create Order' and one for 'Capture Order', documented here. Both routes should return only JSON data (no HTML or text). Inside the 2nd route, when the capture API is successful you should store its resulting payment details in your database (particularly the aforementioned purchase_units[0].payments.captures[0].id, which is the PayPal transaction ID) and perform any necessary business logic (such as sending confirmation emails or reserving product) immediately before forwarding your return JSON to the frontend caller.
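For illustration, here is a minimal sketch of the capture route's core logic, assuming a C# backend (any server-side language works the same way); GetPayPalAccessTokenAsync and SaveTransactionIdAsync are hypothetical helpers, not part of any PayPal SDK:

// Sketch of the server-side 'Capture Order' step.
// Requires System.Net.Http, System.Net.Http.Headers, System.Text and System.Text.Json.
async Task<string> CaptureOrderAsync(HttpClient http, string orderId)
{
    var accessToken = await GetPayPalAccessTokenAsync(http); // hypothetical OAuth client-credentials call

    using var request = new HttpRequestMessage(
        HttpMethod.Post,
        $"https://api-m.sandbox.paypal.com/v2/checkout/orders/{orderId}/capture");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    request.Content = new StringContent("{}", Encoding.UTF8, "application/json");

    var response = await http.SendAsync(request);
    var json = await response.Content.ReadAsStringAsync();

    using var doc = JsonDocument.Parse(json);

    // The PayPal transaction ID needed for refunds lives at purchase_units[0].payments.captures[0].id.
    var transactionId = doc.RootElement
        .GetProperty("purchase_units")[0]
        .GetProperty("payments")
        .GetProperty("captures")[0]
        .GetProperty("id")
        .GetString();

    await SaveTransactionIdAsync(orderId, transactionId); // hypothetical database write

    // Forward only the JSON back to the frontend caller.
    return json;
}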
Pair those 2 routes with the frontend approval flow: https://developer.paypal.com/demo/checkout/#/pattern/server
Or for react, use the official react-paypal-js

How to integrate custom authentication provider into IdentityServer4

Is it possible to somehow extend IdentityServer4 to run custom authentication logic? I have the requirement to validate credentials against a couple of existing custom identity systems and struggle to find an extension point to do so (they use custom protocols).
All of these existing systems have the concept of an API key, which the client side knows. IdentityServer's job would then be to validate this API key and also extract some existing claims from the system.
I imagine doing something like this:
POST /connect/token
custom_provider_name=my_custom_provider_1&
custom_provider_api_key=secret_api_key
Then I do my logic to call my_custom_provider_1, validate the API key, get the claims and pass them back to the IdentityServer flow to do the rest.
Is this possible?
I'm assuming you have control over the clients, and the requests they make, so you can make the appropriate calls to your Identity Server.
It is possible to use custom authentication logic; after all, that is what the ResourceOwnerPassword flow is all about: the client passes information to the connect/token endpoint, and you write code that decides what that information means and whether it is enough to authenticate that client. You'll definitely be going off the beaten track to do what you want, though, because convention says the information the client passes is a username and a password.
In your Startup.ConfigureServices you will need to add your own implementation of an IResourceOwnerPasswordValidator, kind of like this:
services.AddTransient<IResourceOwnerPasswordValidator, ResourceOwnerPasswordValidator>();
Then, in the ValidateAsync method of that class, you can do whatever logic you like to decide whether to set context.Result to a successful GrantValidationResult or a failed one. One thing that can help you in that method is that the ResourceOwnerPasswordValidationContext has access to the raw request, so any custom fields you add to the original call to the connect/token endpoint will be available to you. This is where you could add your custom fields (provider name, API key, etc.).
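For illustration, a minimal sketch of such a validator reading the custom fields from the raw request; IApiKeyProvider and its ValidateApiKeyAsync method are hypothetical stand-ins for your existing identity systems:

public class ApiKeyResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
{
    private readonly IApiKeyProvider _providers; // hypothetical gateway to your existing systems

    public ApiKeyResourceOwnerPasswordValidator(IApiKeyProvider providers)
    {
        _providers = providers;
    }

    public async Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
    {
        // Custom fields posted to connect/token are available on the raw request.
        var providerName = context.Request.Raw.Get("custom_provider_name");
        var apiKey = context.Request.Raw.Get("custom_provider_api_key");

        // Validate the API key against the named system and pull its claims (hypothetical call).
        var outcome = await _providers.ValidateApiKeyAsync(providerName, apiKey);

        context.Result = outcome.IsValid
            ? new GrantValidationResult(outcome.SubjectId, "custom_api_key", outcome.Claims)
            : new GrantValidationResult(TokenRequestErrors.InvalidGrant, "invalid api key");
    }
}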
Good luck!
EDIT: The above could work, but is really abusing a standard grant/flow. Much better is the approach found by the OP to use the IExtensionGrantValidator interface to roll your own grant type and authentication logic. For example:
Call from client to identity server:
POST /connect/token
grant_type=my_crap_grant&
scope=my_desired_scope&
rhubarb=true&
custard=true&
music=ska
Register your extension grant with DI:
services.AddTransient<IExtensionGrantValidator, MyCrapGrantValidator>();
And implement your grant validator:
public class MyCrapGrantValidator : IExtensionGrantValidator
{
    // your custom grant needs a name, used in the POST to /connect/token
    public string GrantType => "my_crap_grant";

    public Task ValidateAsync(ExtensionGrantValidationContext context)
    {
        // Get the values for the data you expect to be used for your custom grant type
        var rhubarb = context.Request.Raw.Get("rhubarb");
        var custard = context.Request.Raw.Get("custard");
        var music = context.Request.Raw.Get("music");

        if (string.IsNullOrWhiteSpace(rhubarb) || string.IsNullOrWhiteSpace(custard) || string.IsNullOrWhiteSpace(music))
        {
            // this request doesn't have the data we'd expect for our grant type
            context.Result = new GrantValidationResult(TokenRequestErrors.InvalidGrant);
            return Task.CompletedTask;
        }

        // Do your logic to work out, based on the data provided, whether
        // this request is valid or not
        if (bool.Parse(rhubarb) && bool.Parse(custard) && music == "ska")
        {
            // This grant gives access to any client that simply makes a
            // request with rhubarb and custard both true, and has music
            // equal to ska. You should do better and involve databases and
            // other technical things
            var sub = "ThisIsNotGoodSub";
            context.Result = new GrantValidationResult(sub, "my_crap_grant");
            return Task.CompletedTask;
        }

        // Otherwise they're unauthorised
        context.Result = new GrantValidationResult(TokenRequestErrors.UnauthorizedClient);
        return Task.CompletedTask;
    }
}

Azure Search RetryPolicy

We are using Azure Search and need to implement a retry strategy, as well as store the IDs of failed documents, as described.
Are there any documentation/samples on how to implement a RetryPolicy strategy in Azure Search?
Thanks
This is what I used:
private async Task<DocumentIndexResult> IndexWithExponentialBackoffAsync(IndexBatch<IndexModel> indexBatch)
{
    return await Policy
        .Handle<IndexBatchException>()
        .WaitAndRetryAsync(5, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)), (ex, span) =>
        {
            indexBatch = ((IndexBatchException)ex).FindFailedActionsToRetry(indexBatch, x => x.Id);
        })
        .ExecuteAsync(async () => await _searchClient.IndexAsync(indexBatch));
}
It uses the Polly library to handle the exponential backoff. In this case I use a model, IndexModel, that has a key field named Id.
If you would like to log or store the IDs of the failed attempts, you can do that in the WaitAndRetryAsync callback, like:
((IndexBatchException)ex).IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key).<Do something here>
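Putting the two together, here is a sketch of the same retry with the failed keys recorded inside the onRetry callback; the _logger field is an assumption and not part of the original snippet:

private async Task<DocumentIndexResult> IndexWithExponentialBackoffAsync(IndexBatch<IndexModel> indexBatch)
{
    return await Policy
        .Handle<IndexBatchException>()
        .WaitAndRetryAsync(
            5,
            retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
            (ex, span) =>
            {
                var batchException = (IndexBatchException)ex;

                // Record the keys of the documents that failed in this attempt (assumed ILogger field).
                var failedKeys = batchException.IndexingResults
                    .Where(r => !r.Succeeded)
                    .Select(r => r.Key);
                _logger.LogWarning("Retrying failed documents: {Keys}", string.Join(", ", failedKeys));

                // Narrow the batch down to only the failed actions before the next attempt.
                indexBatch = batchException.FindFailedActionsToRetry(indexBatch, x => x.Id);
            })
        .ExecuteAsync(async () => await _searchClient.IndexAsync(indexBatch));
}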
There is currently no sample showing how to properly retry on IndexBatchException. However, there is a method you can use to make it easier to implement: IndexBatchException.FindFailedActionsToRetry. This method extracts the IDs of failed documents from the IndexBatchException, correlates them with the actions in a given batch, and returns a new batch containing only the failed actions that need to be retried.
Regarding the rest of the retry logic, you might find this code in the ClientRuntime library useful. You will need to tweak the parameters based on the characteristics of your load. The important thing to remember is that you should use exponential backoff before retrying to help your service recover, since otherwise your requests may be throttled.

How to design a RESTful API with the right semantics?

For instance, when selling a subscription to a user, the system will:
create an organisation
create a user
create a subscription
create an authentication
send out an email
more operations based on business logic
And ALL of the above need to happen in the SAME DB transaction, as a unit of work.
In SOAP semantics, this can be abstracted as register(organisation, user, plan, authentication details, ...more parameters), which returns a subscription object.
But in the RESTful world we deal only with resources (only nouns in URLs) and HTTP verbs, and I find it very hard to describe such business-related logic, as opposed to simple CRUD.
There is no requirement that RESTful interfaces map 1:1 to the database behind the API.
The logic in your case could be:
client -- POST: SubscriptionRequests(request) --> Server
client <-- RESPONSE: Status|Error -- Server
Upon success, the Status response could contain properties with URIs to the resulting new entries, such as SubscriptionURI = "Subscriptions/ID49343" and UserURI = "Users/User4711".
And then someone could later on ask about active subscriptions with:
client -- GET: Subscriptions --> Server
client <-- RESPONSE: Subscriptions | Error -- Server
This scheme could be considered RESTful. There is no problem with the fact that the server has to manipulate a database (invisible to the client), nor with how it does that.
It is also not a problem that subsequent GET operations on the Subscriptions resource (and the Users resource, for that matter) yield different output than they did before the SubscriptionRequest operation was executed.
There is also no compelling reason to create a more chatty interface just because you happen to have a certain database model behind it.
In that sense, it would be worse if you created an API like:
client -- POST: Users(newUser) --> Server
client <-- RESPONSE: Status|Error -- Server
(if adding user worked bla bla ... )
client -- POST: Subscriptions(userId,other data..) --> Server
client <-- RESPONSE: Status|Error -- Server
Which would basically just mean you did not design your API but simply copied the structure of your database tables (and those will change next week).
In summary, it is not the business of API design to care about how the implementation handles the database. Whether you need transactions, or use some other way to make sure everything that needs to be done actually gets done, is up to the implementation of that SubscriptionRequests POST handler.
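To illustrate, here is a minimal sketch of such a handler, assuming an ASP.NET Core backend; SubscriptionRequest, RegistrationResult and IRegistrationService are hypothetical types that wrap the whole unit of work in one transaction:

[ApiController]
[Route("SubscriptionRequests")]
public class SubscriptionRequestsController : ControllerBase
{
    private readonly IRegistrationService _registrations; // hypothetical application service

    public SubscriptionRequestsController(IRegistrationService registrations)
    {
        _registrations = registrations;
    }

    [HttpPost]
    public async Task<IActionResult> Post(SubscriptionRequest request)
    {
        // The service creates the organisation, user, subscription, authentication and email
        // inside one unit of work; the API only exposes the single SubscriptionRequests resource.
        RegistrationResult result = await _registrations.RegisterAsync(request);

        if (!result.Succeeded)
        {
            return UnprocessableEntity(result.Error);
        }

        // Point the client at the resources that came into existence as a result.
        return Created($"Subscriptions/{result.SubscriptionId}", new
        {
            SubscriptionUri = $"Subscriptions/{result.SubscriptionId}",
            UserUri = $"Users/{result.UserId}"
        });
    }
}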
In fact, you are thinking in RPC mode ;-)
With REST, you must think in terms of resources and representations. What you want to do is add a subscription, so I would suggest having a list resource for subscriptions with a POST method that implements the registration. In the request payload you provide what you need for the subscription, and you get back hints regarding the created subscription.
Here is a sample of the request:
POST /subscriptions/

{
    "organization": {
        "id": "organizationId",
        "name": "organization name",
        (...)
    },
    "user": {
        "lastName": "",
        (...)
    }
}
Here is a sample of the response:
{
    "id": "subscriptionId",
    "credentials": {
        (...)
    },
    (...)
}
Note that these payloads are proposals and may not exactly match your subscription, user, ... structures, so feel free to adapt them.
Hope it helps you,
Thierry

Why does WebApp2 auth.get_user_by_session() change the token?

I am using WebApp2 with auth for user sessions. My client will occasionally make nearly simultaneous requests to the server. The first one will make a request with session data that looks like this:
{
    'cache_ts': 1408106895,
    'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408034938
}
Then after a call to auth.get_user_by_session(), the session comes back like this:
{
    'cache_ts': 1408124980,
    'token': u'0IVduczdGR5PkrMqNhBvzW',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408124980
}
As you can see, the token has been changed and the timestamps updated.
Nearly simultaneously, another request is made that contains the same initial session data.
{
    'cache_ts': 1408106895,
    'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408034938
}
However, that token is now invalid, so the session data is set to None. This wipes the user's session and causes lots of problems. Is there some setting I should be using to extend the life of the UserToken? Is there a more appropriate method than get_user_by_session()? I would imagine that nearly simultaneous requests with the same session data shouldn't cause enormous issues. The ideal situation would be that if auth received invalid or expired tokens it would just ignore them and throw an error.
Update 1
I hoped it was something simple like passing False to get_user_by_session(). That, of course, killed the session immediately.
Update 2
I've found that I only really need the user_id field, and that comes for free with the cookie data. Implementing that reduces the frequency of the issue. However, the problem isn't actually fixed, and I'd love some input from anyone familiar with this library.
This is due to the token_new_age parameter, which defaults to 1 day, so every 24h the token will change.
This is a security measure: if someone hijacks that session, it will only work for 24h.
The token_max_age parameter will also delete the token once that time is consumed.
