How to design a RESTful API with the right semantics?

For instance, when selling a subscription to a user, the system will:
create an organisation
create a user
create a subscription
create an authentication
send out an email
run more operations based on business logic
And ALL of the above needs to happen in the SAME DB transaction, as one unit of work.
In SOAP semantics, this can be abstracted as register(organisation, user, plan, authentication details, ...more parameters), which returns a subscription object.
But in the RESTful world we only deal with resources (nouns only in URLs) and HTTP verbs, and I find it very hard to describe such business-related logic, as opposed to simple CRUD.

There is no requirement that RESTful interfaces map 1:1 to the database behind the API.
The logic in your case could be:
client -- POST: SubscriptionRequests(request) --> Server
client <-- RESPONSE: Status|Error -- Server
Upon success, the Status response could contain properties with URIs to the resulting new entries, such as: SubscriptionURI = "Subscriptions/ID49343", UserURI = "Users/User4711".
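As an illustrative sketch (the status code, header, and property names just follow the example above; they are not a fixed contract), such a success response could look like:
HTTP/1.1 201 Created
Location: /Subscriptions/ID49343

{
  "SubscriptionURI": "Subscriptions/ID49343",
  "UserURI": "Users/User4711"
}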
And then someone could later on ask about active subscriptions with:
client -- GET: Subscriptions --> Server
client <-- RESPONSE: Subscriptions | Error -- Server
This scheme could be considered RESTful. There is no problem with the fact that the server has to manipulate a database (invisible to the client), nor with how it does that.
It is also not a problem that subsequent GET operations on the Subscriptions resource (and the Users resource, for that matter) yield different output than they did before the SubscriptionRequest operation was executed.
There is also no compelling reason to create a more chatty interface just because you happen to have a certain database model behind it.
In that sense, it would be worse if you created an API like:
client -- POST: Users(newUser) --> Server
client <-- RESPONSE: Status|Error -- Server
(if adding user worked bla bla ... )
client -- POST: Subscriptions(userId,other data..) --> Server
client <-- RESPONSE: Status|Error -- Server
That would basically just mean you did not design your API but simply copied the structure of the database tables behind it (and those will change next week).
In summary, it is not the business of API design to care about how the implementation handles the database. Whether you need transactions, or use some other way to make sure everything that needs to be done actually gets done, is up to the implementation of that SubscriptionRequests POST handler.
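As a minimal sketch of that point (assuming a Node/TypeScript server with Express and Knex, which the answer does not prescribe; all table and field names are made up), the SubscriptionRequests POST handler could wrap every step in a single transaction:

import express from "express";
import knex from "knex";

const db = knex({ client: "pg", connection: process.env.DATABASE_URL });
const app = express();
app.use(express.json());

app.post("/subscriptionRequests", async (req, res) => {
  try {
    // Everything inside this callback commits or rolls back as one unit of work.
    const uris = await db.transaction(async (trx) => {
      const [org] = await trx("organisations")
        .insert({ name: req.body.organisation.name })
        .returning("*");
      const [user] = await trx("users")
        .insert({ ...req.body.user, organisationId: org.id })
        .returning("*");
      const [sub] = await trx("subscriptions")
        .insert({ userId: user.id, plan: req.body.plan })
        .returning("*");
      // ...create the authentication entry, queue the email, and so on.
      return { SubscriptionURI: `Subscriptions/${sub.id}`, UserURI: `Users/${user.id}` };
    });
    res.status(201).json(uris);
  } catch (err) {
    // The transaction has rolled back; nothing was persisted.
    res.status(500).json({ error: "registration failed" });
  }
});

app.listen(3000);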

In fact, you are thinking in RPC mode ;-)
With REST, you must think in terms of resources and representations. What you want to do is add a subscription, so I would suggest a list resource for subscriptions whose POST method implements the registration. In the request payload you provide what is needed for the subscription, and you get back hints about the created subscription.
Here is a sample of the request:
POST /subscriptions/
{
  "organization": {
    "id": "organizationId",
    "name": "organization name",
    (...)
  },
  "user": {
    "lastName": "",
    (...)
  }
}
Here is a sample of the response:
{
  "id": "subscriptionId",
  "credentials": {
    (...)
  },
  (...)
}
Note that these payloads are proposals and perhaps don't exactly match your subscription, user, ... structures, so feel free to adapt them.
Hope it helps you,
Thierry

Related

Journey builder's custom activity: Fetch data extension data in bulk

I am new to Salesforce Marketing Cloud and Journey Builder.
https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/creating-activities.html
We are building a Journey Builder custom activity that uses a data extension as its source; when the journey is invoked, it fetches a row and sends that data to our company's internal endpoint. The team got that part working. We are using postmonger.js.
I have a couple of questions:
Is there a way to retrieve the data from the data extension in bulk so that we can call our company's internal bulk endpoint? Calling the endpoint for each record in the data extension would not be efficient enough for our use case and won't work.
When the journey is invoked and an entry in the data extension is retrieved and sent to our internal endpoint, is there a mechanism to mark this entry as already sent, so that the next time the journey runs it won't process an entry that has already been sent?
Here is a snippet of our customActivity.js, which populates one record. (I changed some variable names.) Is there a way to populate multiple records so that when "execute" is called, it passes a list of payloads as input to our internal endpoint?
function save() {
    try {
        var TemplateNameValue = $('#TemplateName').val();
        var TemplateIDValue = $('#TemplateID').val();
        let auth = "{{Contact.Attribute.Authorization.Value}}";
        payload['arguments'].execute.inArguments = [{
            "vendorTemplateId": TemplateIDValue,
            "field1": "{{Contact.Attribute.DD.field1}}",
            "eventType": TemplateNameValue,
            "field2": "{{Contact.Attribute.DD.field2}}",
            "field3": "{{Contact.Attribute.DD.field3}}",
            "field4": "{{Contact.Attribute.DD.field4}}",
            "field5": "{{Contact.Attribute.DD.field5}}",
            "field6": "{{Contact.Attribute.DD.field6}}",
            "field7": "{{Contact.Attribute.DD.field7}}",
            "messageMetadata": {}
        }];
        payload['arguments'].execute.headers = `{"Authorization":"${auth}"}`;
        payload['configurationArguments'].stop.headers = `{"Authorization":"default"}`;
        payload['configurationArguments'].validate.headers = `{"Authorization":"default"}`;
        payload['configurationArguments'].publish.headers = `{"Authorization":"default"}`;
        payload['configurationArguments'].save.headers = `{"Authorization":"default"}`;
        payload['metaData'].isConfigured = true;
        console.log(payload);
        connection.trigger('updateActivity', payload);
    } catch (err) {
        document.getElementById("error").style.display = "block";
        document.getElementById("error").innerHTML = err; // note: innerHTML, not innerHtml
    }
    console.log("Template Name: " + JSON.stringify(TemplateNameValue));
    console.log("Template ID: " + JSON.stringify(TemplateIDValue));
}
Any advice or ideas are highly appreciated!
Thank you.
Grace
Firstly, I implore you not to proceed with the design pattern of fetching data from Marketing Cloud for each subscriber that gets sent through the custom activity. For argument's sake, I'll list two big issues.
You have no way of limiting the configuration of data extension columns or column names in SFMC (Salesforce Marketing Cloud). If a malicious user, or plain human error, deletes a column or changes a column name, your service stops receiving that value.
Secondly, Marketing Cloud has two sets of API limitations: yearly and minute-by-minute. Depending on your licensing, you could run into the yearly limit.
The problem with the per-minute limitation (2,500 for REST and 2,000 for SOAP) is that each usage of the custom activity in Journey Builder multiplies the number of invocations per minute. Hitting this limit would cause issues for incremental data flows into SFMC.
I'd also suggest not retrieving any data from Marketing Cloud when a customer gets sent through a custom activity. Users should pick, in their segmentation, which rows/data should be sent to the custom activity.
The eventDefinitionKey can be picked up from postmonger after requestedTriggerEventDefinition, in the eventDefinitionModel function. The eventDefinitionKey can then be used to programmatically populate SFMC's POST call with data from the Journey Data model, thus allowing marketers to select what data is sent with the subscriber.
Following is some code to show how it would work in your customActivity.js
connection.on(
    'requestedTriggerEventDefinition',
    function (eventDefinitionModel) {
        var eventKey = eventDefinitionModel['eventDefinitionKey'];
        save(eventKey);
    }
);

function save(eventKey) {
    // subscriberKey fetched directly from the Contact model
    // columnName is populated from the Journey Data model
    var params = {
        subscriberKey: '{{Contact.key}}',
        columnName: '{{Event.' + eventKey + '.columnName}}',
    };
    payload['arguments'].execute.inArguments = [params];
}
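For what it's worth, the same pattern extends to several Journey Data columns in one set of inArguments; a sketch with hypothetical column names ("email", "status") that you would replace with your data extension's own:

function save(eventKey) {
    // Each {{Event...}} binding is resolved by SFMC from the Journey Data
    // model when a contact passes through the activity.
    var params = {
        subscriberKey: '{{Contact.key}}',
        email: '{{Event.' + eventKey + '.email}}',
        status: '{{Event.' + eventKey + '.status}}',
    };
    payload['arguments'].execute.inArguments = [params];
}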

Correct place to audit query in Hot Chocolate graphql

I am wondering whether I should audit user queries in an HttpRequestInterceptor or a DiagnosticEventListener in Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/DB, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed. Therefore, in theory, I should use the HttpRequestInterceptor.
But how do I get an IRequestContext from an IRequestExecutor or IQueryRequestBuilder? I tried googling, but the documentation is limited.
Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following:
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
        // your middleware logic; call next to continue the pipeline
        await next(context);
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
This pipeline is basically the default pipeline, but with your middleware added right after document validation. At that point the GraphQL request is parsed and validated, so we know it is a valid GraphQL request that can be processed. It also means we can use the context.Document property, which contains the parsed GraphQL request.
In order to serialize the document to a formatted string, use context.Document.ToString(indented: true).
The good thing is that in the middleware we are in an async context, meaning you can easily access a database and so on. In contrast, the DiagnosticEvents are sync and not meant to carry a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
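A minimal sketch of that class form under Hot Chocolate v11's request-middleware conventions (the AuditMiddleware name and the audit sink are hypothetical; verify the exact delegate and registration names against your version):

public class AuditMiddleware
{
    private readonly RequestDelegate _next;

    public AuditMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async ValueTask InvokeAsync(IRequestContext context)
    {
        // Placed after UseDocumentValidation, context.Document is available here.
        var document = context.Document?.ToString(indented: true);

        // Write the audit entry first; only continue when that succeeded.
        await WriteAuditAsync(document);
        await _next(context);
    }

    private static Task WriteAuditAsync(string document)
    {
        // Hypothetical sink: replace with your disk/DB audit write.
        return Task.CompletedTask;
    }
}

It would then be registered in place of the delegate, e.g. with .UseRequest<AuditMiddleware>().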
If you need more help, join us on Slack; click on "community support" to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

How to update a user's birthday

I want to update the birthday of a user using a PATCH request.
Updating other properties works as expected, but the moment the birthday property is included, the following error is returned:
The request is currently not supported on the targeted entity set
I already tried updating the user alone to be sure the permissions are fine.
Application permissions are used.
This PATCH request to /v1.0/users/{id} works:
{
  "givenName": "Fridas"
}
Passing this request body, however:
{
  "givenName": "Fridas",
  "birthday": "2014-01-01T00:00:00Z"
}
throws an error
{
  "error": {
    "code": "BadRequest",
    "message": "The request is currently not supported on the targeted entity set",
    "innerError": {
      "request-id": "5f0d36d1-0bff-437b-9dc8-5579a7ec6e72",
      "date": "2019-08-13T15:27:40"
    }
  }
}
When I update the birthday separately, I get a 500 error. Updating the user itself works fine; the birthday does not. The same user id is used in both requests.
I'm not sure why this happens, but a workaround, albeit an annoying one, is to update birthday separately from other attributes.
E.g.
PATCH https://graph.microsoft.com/v1.0/users/userid
{
"birthday" : "2014-01-01T00:00:00Z"
}
In fact, this is a limitation in the current system.
User is a composite type. Under the covers, some properties of user are mastered by different services, and we currently don't support updates across multiple services.
"birthday" is not mastered by Azure AD, so we can't update it together with other properties mastered by Azure AD in the same call.
It is strongly recommended that you update this property separately. I can update it that way from my side; you would need a backend engineer to track this request for you.
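So the workaround is two sequential calls (same user id in both), e.g.:
PATCH https://graph.microsoft.com/v1.0/users/{id}
{
  "givenName": "Fridas"
}

PATCH https://graph.microsoft.com/v1.0/users/{id}
{
  "birthday": "2014-01-01T00:00:00Z"
}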
This seems to affect more than birthday.
skills[] and responsibilities[] also return 500 Internal Server Error when using a PATCH request via the REST API with:
{"skills": ["TESTING", "ANOTHER SKILL"]}
The same happens via the GraphServiceClient - except the result is:
Failed to call the Web Api: InternalServerError
Content: {
  "error": {
    "code": "-1, Microsoft.Office.Server.Directory.DirectoryObjectUnauthorizedAccessException",
    "message": "Attempted to perform an unauthorized operation.",
    "innerError": {
      "request-id": "1c2ccc54-0a0c-468f-a18c-6bdfbad4077d",
      "date": "2019-08-28T13:23:55"
    }
  }
}
These requests work on the Graph Explorer page, but not via calls to the API.

How to integrate custom authentication provider into IdentityServer4

Is it possible to somehow extend IdentityServer4 to run custom authentication logic? I have a requirement to validate credentials against a couple of existing custom identity systems and am struggling to find an extension point to do so (they use custom protocols).
All of these existing systems have the concept of an API key, which the client side knows. IdentityServer's job should now be to validate this API key and also extract some existing claims from the system.
I imagine doing something like this:
POST /connect/token
custom_provider_name=my_custom_provider_1&
custom_provider_api_key=secret_api_key
Then I do my logic to call my_custom_provider_1, validate the API key, get the claims and pass them back to the IdentityServer flow to do the rest.
Is this possible?
I'm assuming you have control over the clients, and the requests they make, so you can make the appropriate calls to your Identity Server.
It is possible to use custom authentication logic; after all, that is what the ResourceOwnerPassword flow is all about: the client passes information to the connect/token endpoint, and you write code to decide what that information means and whether it is enough to authenticate that client. You'll definitely be going off the beaten track to do what you want, though, because convention says that the information the client passes is a username and a password.
In your Startup.ConfigureServices you will need to add your own implementation of an IResourceOwnerPasswordValidator, kind of like this:
services.AddTransient<IResourceOwnerPasswordValidator, ResourceOwnerPasswordValidator>();
Then, in the ValidateAsync method of that class, you can do whatever logic you like to decide whether to set the context.Result to a successful GrantValidationResult or a failed one. One thing that can help you in that method is that the ResourceOwnerPasswordValidationContext has access to the raw request, so any custom fields you add to the original call to the connect/token endpoint will be available to you. This is where you could add your custom fields (provider name, API key etc.).
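For illustration, a minimal sketch of such a validator (the custom_provider_* field names come from the question; IsValidApiKey and LookUpSubject are hypothetical stand-ins for your calls into the existing systems):

public class ResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
{
    public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
    {
        // Custom fields posted to connect/token are available on the raw request.
        var provider = context.Request.Raw.Get("custom_provider_name");
        var apiKey = context.Request.Raw.Get("custom_provider_api_key");

        if (IsValidApiKey(provider, apiKey))
        {
            // The subject id (and any claims) come from your existing identity system.
            context.Result = new GrantValidationResult(
                subject: LookUpSubject(provider, apiKey),
                authenticationMethod: "custom_api_key");
        }
        else
        {
            context.Result = new GrantValidationResult(TokenRequestErrors.InvalidGrant);
        }

        return Task.CompletedTask;
    }

    // Hypothetical helpers standing in for calls to my_custom_provider_1 etc.
    private static bool IsValidApiKey(string provider, string apiKey) => apiKey == "secret_api_key";
    private static string LookUpSubject(string provider, string apiKey) => "some-subject-id";
}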
Good luck!
EDIT: The above could work, but is really abusing a standard grant/flow. Much better is the approach found by the OP to use the IExtensionGrantValidator interface to roll your own grant type and authentication logic. For example:
Call from client to identity server:
POST /connect/token
grant_type=my_crap_grant&
scope=my_desired_scope&
rhubarb=true&
custard=true&
music=ska
Register your extension grant with DI:
services.AddTransient<IExtensionGrantValidator, MyCrapGrantValidator>();
And implement your grant validator:
public class MyCrapGrantValidator : IExtensionGrantValidator
{
    // your custom grant needs a name, used in the POST to /connect/token
    public string GrantType => "my_crap_grant";

    public Task ValidateAsync(ExtensionGrantValidationContext context)
    {
        // Get the values for the data you expect to be used for your custom grant type
        var rhubarb = context.Request.Raw.Get("rhubarb");
        var custard = context.Request.Raw.Get("custard");
        var music = context.Request.Raw.Get("music");

        if (string.IsNullOrWhiteSpace(rhubarb) || string.IsNullOrWhiteSpace(custard) || string.IsNullOrWhiteSpace(music))
        {
            // this request doesn't have the data we'd expect for our grant type
            context.Result = new GrantValidationResult(TokenRequestErrors.InvalidGrant);
            return Task.CompletedTask;
        }

        // Do your logic to work out, based on the data provided, whether
        // this request is valid or not
        if (bool.Parse(rhubarb) && bool.Parse(custard) && music == "ska")
        {
            // This grant gives access to any client that simply makes a
            // request with rhubarb and custard both true, and has music
            // equal to ska. You should do better and involve databases and
            // other technical things
            var sub = "ThisIsNotGoodSub";
            context.Result = new GrantValidationResult(sub, "my_crap_grant");
            return Task.CompletedTask;
        }

        // Otherwise they're unauthorised
        context.Result = new GrantValidationResult(TokenRequestErrors.UnauthorizedClient);
        return Task.CompletedTask;
    }
}
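For completeness, the client definition in IdentityServer also has to allow the custom grant type; a sketch (client id and secret are made up, the scope is the one from the example request):

new Client
{
    ClientId = "my_client",
    ClientSecrets = { new Secret("my_secret".Sha256()) },
    // must match the GrantType string returned by the validator
    AllowedGrantTypes = { "my_crap_grant" },
    AllowedScopes = { "my_desired_scope" }
}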

Meteor - how safe is it?

I'm creating my first app using Meteor, in particular using Angular 2. I have experience with Angular 1 and 2, so that's what this is based on. I have some points of concern...
Let's imagine this scenario... my data stored in MongoDB:
Collection: clients
{
  name: "Happy client",
  password: "Something non encrypted",
  fullCreditCardNumber: "0000 0000 0000 0000"
}
Now, in my Meteor project, I have this structure...
collection clients.ts (server folder)
export var Clients = new Mongo.Collection('clients');
component client.ts (not server folder)
import {Clients} from '../collections/clients.ts';
class MyClients {
    clients: Array<Object>;
    constructor(zone: NgZone) {
        this.clients = Clients.find().fetch();
    }
}
..and for last: the html page to render it, but just display the name of the clients:
<li *ngFor="#client of clients">
    {{client.name}}
</li>
OK so far, but my concern is: in Angular 1 & 2 applications, the component, controller, or directive runs on the client side, not the server side.
I set my HTML to show just the clients' names, but since it's HTML rendering, someone with a bit of skill could pretty easily inject some code into the Angular rendering to display all my fields.
Or it could be easy to go to the console and type some commands to display the entire object from the database collection.
So, my question is: how safe is Meteor in this sense? Are my concerns correct? Is Meteor capable of protecting my data and the names of my collections? I know that I can tell find() not to return the sensitive data, but since find() may not be running on the server side, it could be easy to modify it on the fly, no?
Anyway... I would appreciate explanations of how Meteor is safe (or not) in this sense.
Thank you!
You can protect data by simply not publishing any sensitive data on the server side.
Meteor.publish("my-clients", function () {
    return Clients.find({
        contractorId: this.userId // Publish only the current user's clients
    }, {
        fields: {
            name: 1, // Publish only the fields you want the browser to know of
            phoneNumber: 1
        }
    });
});
This example publishes only the name and phoneNumber fields of the currently logged-in user's clients, but not their password or fullCreditCardNumber.
Another good example is the Meteor.users collection. On the server it contains all user data, login credentials, profiles etc. for all users. But it's also accessible on the client side. Meteor does two important things to protect this very sensitive collection:
By default it only publishes one document: the user that's logged in. If you type Meteor.users.find().fetch() into the browser console, you'll only see the currently logged-in user's data, and there's no way on the client side to get the entire MongoDB users collection. The correct way to do this is to restrict the number of published documents in your Meteor.publish function; see my example above, or 10.9 in the Meteor publish-and-subscribe tutorial.
Not the entire user document gets published. For example, OAuth login credentials and password hashes aren't; you won't find them in the client-side collection. You can always choose which parts of a document get published; a simple way to do that is using MongoDB projections (the fields option), as in the example above.
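As a further sketch of that second point (the "profile.plan" field name is hypothetical), publishing one extra field of the logged-in user only would look like:

Meteor.publish("my-plan", function () {
    // Only the current user's document, and only one extra field of it.
    return Meteor.users.find(
        { _id: this.userId },
        { fields: { "profile.plan": 1 } }
    );
});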
