We are creating an Extension in AL to import orders.
For this question we have simplified our extension to explain the challenges we are facing.
What we would like to do is import header + lines in Dynamics 365 Business Central.
To achieve this we have:
- Created a table (Header)
- Created a table (Lines)
- Created a page of Type API (Doc)
- Created a listPart (SalesLine)
Scenario 1
We have published page Doc and are trying to do a POST request to the following OData URL.
POST https://api.businesscentral.dynamics.com/v1.0/{tenant}/Sandbox/ODataV4/Company('CRONUS%20NL')/Doc/ HTTP/1.1
Content-Type: application/json
Authorization: Basic {{username}} {{password}}
{
"name": "Description",
"SalesLines" : [{"lineno" : 1000}]
}
The response:
{
"error": {
"code": "BadRequest",
"message": "Does not support untyped value in non-open type."
}
}
Scenario 2
When we post:
POST https://api.businesscentral.dynamics.com/v1.0/{tenant}/Sandbox/ODataV4/Company('CRONUS%20NL')/Doc/ HTTP/1.1
Content-Type: application/json
Authorization: Basic {{username}} {{password}}
{
"name": "Description"
}
We get the following response:
{
"#odata.context": "https://api.businesscentral.dynamics.com/v1.0/3ddcca3d-d343-4a06-95f9-f32dbf645199/Sandbox/ODataV4/$metadata#Company('CRONUS%20NL')/Doc/$entity",
"#odata.etag": "W/\"JzQ0O3BKUzExSUMrQUl4UXFQc2R6V1J1ellvZEttRTJoa2xhanNtV0M0K3Ezajg9MTswMDsn\"",
"id": 4,
"name": "Description",
"DateTime": "2019-05-20T19:33:13.73Z"
}
I have published our extension on GitHub
Help would be appreciated.
I have worked on a similar solution and found that the following things were required for it to work:
The part containing your lines must be placed inside the repeater on your header page.
You must set the EntityName and EntitySetName on your part to the same values as on the actual page.
When calling the API you must append the parameter $expand=[EntitySetName of your lines part], e.g. $expand=orderLines.
In the JSON body, the property name of the array containing the lines must match the EntitySetName of the lines part.
I can provide some examples if the instructions above do not suffice.
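To illustrate, here is a minimal Python sketch of what the resulting call could look like, assuming a hypothetical EntitySetName of orderLines on the lines part (the name is an assumption for illustration, not taken from the question's code):

```python
import json

# Hypothetical names: "Doc" as the EntitySetName of the header page,
# "orderLines" as the EntitySetName of the lines part.
base = "https://api.businesscentral.dynamics.com/v1.0/{tenant}/Sandbox/ODataV4"

# Append $expand with the EntitySetName of the lines part.
url = f"{base}/Company('CRONUS%20NL')/Doc?$expand=orderLines"

# The array property name in the body must match that same EntitySetName.
body = json.dumps({
    "name": "Description",
    "orderLines": [{"lineno": 1000}],
})

print(url)
print(body)
```

With these two pieces aligned (the `$expand` parameter and the property name in the body), the deep insert of header plus lines should no longer fail with "Does not support untyped value in non-open type."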
Related
I'm running into an issue trying to execute a Token Request for OAuth in Snowflake.
I'm using Postman with a query param of grant_type=authorization_code but the oauth token-request endpoint continually sends back the following response.
{
"data": null,
"error": "unsupported_grant_type",
"code": null,
"message": "The provided grant type is not supported.",
"success": false,
"headers": null
}
Any ideas? Per the documentation this is one of the two supported grant types.
https://docs.snowflake.com/en/user-guide/oauth-custom.html
API URL :
https://example.com/oauth/token-request?grant_type=authorization_code&code=123&redirect_uri=https://localhost.com
The issue is that the Snowflake documentation is incorrect. I will be submitting a ticket to them to get it fixed.
The documentation indicates that you're supposed to include the items as query parameters; per the standard, however, they belong in the POST body.
The values for token generation should be passed in the x-www-form-urlencoded section, where the following values should be included:
redirect_uri
grant_type
code
Under the Headers section, the following should be passed:
Authorization
The value for this would be:
Basic <base64-encoded value of clientid:clientsecret>
The encoded value can be generated from: https://www.base64encode.org or you may generate it using code.
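As an illustration, a minimal Python sketch of assembling this request (the client id, secret, code, and redirect URI are placeholders):

```python
import base64
from urllib.parse import urlencode

# Placeholder credentials for illustration only.
client_id, client_secret = "my-client-id", "my-client-secret"

# Token-request values go in the POST body as
# application/x-www-form-urlencoded, not as query parameters.
body = urlencode({
    "grant_type": "authorization_code",
    "code": "123",
    "redirect_uri": "https://localhost.com",
})

# Authorization header: "Basic " + base64(clientid:clientsecret)
token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
headers = {
    "Authorization": f"Basic {token}",
    "Content-Type": "application/x-www-form-urlencoded",
}
```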
According to the Graph API documentation, making a GET request to get groups with extension data that includes a filtered response is acceptable. For example, according to the doc referenced the following request should be valid:
GET https://graph.microsoft.com/v1.0/users/${id}/memberOf?$filter=graphlearn_courses/courseId eq '123'&$select=displayName,id,description,graphlearn_courses
This works when making the request as a singleton but fails and returns no response when the same request is made as part of a batch request:
POST https://graph.microsoft.com/v1.0/$batch
Accept: application/json
Content-Type: application/json
{
"requests": [
{
"id": "1",
"method": "GET",
"url": "/users/${id}/memberOf?$filter=graphlearn_courses/courseId eq ‘123’&$select=displayName,id,description,graphlearn_courses"
}
...
]
}
Can this be looked into and the issue resolved by someone at MS support please? Thank you in advance.
Schema extensions (legacy) are not returned with a $select statement, but are returned without $select. So I would recommend you try that and see if it helps. Documentation is available under Microsoft Graph API limitations.
I would like to add some custom data to emails and to be able to filter them by using GraphAPI.
So far, I was able to create a Schema Extension and it gets returned successfully when I query https://graph.microsoft.com/v1.0/schemaExtensions/ourdomain_EmailCustomFields:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#schemaExtensions/$entity",
"id": "ourdomain_EmailCustomFields",
"description": "Custom data for emails",
"targetTypes": [
"Message"
],
"status": "InDevelopment",
"owner": "hiding",
"properties": [
{
"name": "MailID",
"type": "String"
},
{
"name": "ProcessedAt",
"type": "DateTime"
}
]
}
Then I patched a specific message https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/hidingmessageid:
PATCH Request
{"ourdomain_EmailCustomFields":{"MailID":"12","ProcessedAt":"2020-05-27T16:21:19.0204032-07:00"}}
The problem is that when I select the message, the added custom data doesn't appear by executing a GET request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$top=1&$select=id,subject,ourdomain_EmailCustomFields
Also, the following GET request gives me an error.
Request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$filter=ourdomain_EmailCustomFields/MailID eq '12'
Response:
{
"error": {
"code": "RequestBroker--ParseUri",
"message": "Could not find a property named 'e2_someguid_ourdomain_EmailCustomFields' on type 'Microsoft.OutlookServices.Message'.",
"innerError": {
"request-id": "someguid",
"date": "2020-05-29T01:04:53"
}
}
}
Do you have any ideas on how to resolve the issues?
Thank you!
I took your schema extension and copied and pasted it into my tenant, except with a random app registration I created as owner, then patched an email with your statement, and it does work correctly.
A couple of things here:
I would verify using Microsoft Graph Explorer that everything is correct, e.g. log into Graph Explorer with an admin account: https://developer.microsoft.com/en-us/graph/graph-explorer#
First, make sure the schema extension exists by running a GET request for
https://graph.microsoft.com/v1.0/schemaExtensions/DOMAIN_EmailCustomFields
It should return the schema extension you created.
Then run a GET request for the actual message you patched, not all the messages you filtered for:
https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/MESSAGEID?$select=DOMAIN_EmailCustomFields
Here the response should be the email you patched, and your EmailCustomField should be in the data somewhere; if it is not, that means your patch did not work.
Then you can run the PATCH again from Graph Explorer.
I did all this from Graph Explorer; it's the easiest way to confirm.
Two other things:
1) Maybe the ?$top=1 in your "get first message" isn't returning the same message that you patched?
2) As per the documentation, you cannot use $filter for schema extensions with the message entity (https://learn.microsoft.com/en-us/graph/known-issues#filtering-on-schema-extension-properties-not-supported-on-all-entity-types), so that second GET will never work.
Hopefully this helps you troubleshoot.
I have a search service running on Azure in the free tier. On this service I already have a datasource, an indexer and an index defined.
I'd like to add another datasource (and index + indexer). When I do this (using postman) I get 403 Forbidden without any other error message.
This is the POST I made to this URL - https://my-search-service-name.search.windows.net/datasources?api-version=2016-09-01:
Content-Type: application/json
api-key: API-KEY-HERE
{
"name": "datasource-prod",
"description": "Data source for search",
"type": "azuresql",
"credentials": { "connectionString" : "Server=tcp:xxxx.database.windows.net,1433;Initial Catalog=xxxxx_prod;Persist Security Info=False;User ID=xxxxxx;Password=xxxxxx!;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" },
"container": {"name": "DataAggregatedView"},
"dataChangeDetectionPolicy": {
"#odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
"highWaterMarkColumnName" : "ChangeIndicator"
},
"dataDeletionDetectionPolicy": {
"#odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
"softDeleteColumnName" : "isDeleted",
"softDeleteMarkerValue" : "0"
}
}
Using the same request, with a different name and database name, worked perfectly and generated the existing (first) datasource. This error (403) - without even an error message - happens only when I try to define a second datasource.
As I understand from the documentation, the free search tier allows 3 datasources. Has anyone had this issue? Any help/direction is appreciated!
Thank you.
Make sure you're using the admin API key. It looks like you may be using a query key.
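For illustration, a sketch of the request setup with an admin key (the service name and key value are placeholders; real keys are found in the Azure portal). Admin keys authorize write operations such as creating datasources, while query keys only authorize searching documents, which is why a query key on this endpoint yields 403:

```python
# Placeholder values; substitute your own service name and admin key.
service = "my-search-service-name"
url = f"https://{service}.search.windows.net/datasources?api-version=2016-09-01"

headers = {
    "Content-Type": "application/json",
    # Must be an *admin* key for write operations (POST/PUT/DELETE);
    # a query key here results in 403 Forbidden with no message body.
    "api-key": "ADMIN-API-KEY-HERE",
}
```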
This question has already been asked here, but that answer didn't help me, as I know that the URL is correct: it points to a US Standard URL as detailed in the Amazon guide here.
So I'm attempting to upload a file to my bucket via the Angular library ng-file-upload. I believe that I've set up the bucket correctly:
The buckets name is ac.testing
The CORS configuration for this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
I've understood this as: allow POSTs from any origin regardless of headers - is that correct?
And my policy which I generated:
{
"Version": "2012-10-17",
"Id": "Policy<numbers>",
"Statement": [
{
"Sid": "Stmt<numbers>",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::ac.testing/*"
}
]
}
Finally - in my actual code that attempts to upload:
var policy = {
"expiration": "2020-01-01T00:00:00Z",
"conditions": [
{"bucket": "ac.testing"},
["starts-with", "$key", ""],
{"acl": "private"},
["starts-with", "$Content-Type", ""],
["starts-with", "$filename", ""],
["content-length-range", 0, 524288000]
]
}
var uploadDetails = {
url: 'https://ac.testing.s3.amazonaws.com/',
method: 'POST',
data: {
key: file.name,
acl: 'public-read',
// application/octet-stream is the default format used by S3
'Content-Type': file.type != '' ? file.type : 'application/octet-stream',
AWSAccessKeyId: '<my-access-key-id>',
policy: policy,
signature: '<my-access-signature>',
filename: file.name,
},
file: file
};
if (file) {
// Upload to Amazon S3
file.upload = Upload.upload(uploadDetails).then(function(success) {
console.log(JSON.stringify(success) );
},function(error) {
console.log(error);
},function(progress) {
console.log(JSON.stringify(progress) );
});
}
So where am I going wrong here?
EDIT
A couple of things that I wasn't doing: Base64-encoding the policy and the signature.
That process involves Base64-encoding your policy and then using that encoded value, along with your AWS Secret Access Key, to generate an SHA-1 HMAC code. Even after this it still returns the same error.
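The encode-then-sign process described above can be sketched in Python as follows (the secret key is a placeholder, and the policy is abridged from the one in the question):

```python
import base64
import hashlib
import hmac
import json

# Placeholder for illustration; use your real AWS Secret Access Key.
secret_key = "MY-AWS-SECRET-ACCESS-KEY"

# Abridged version of the policy document from the question.
policy = {
    "expiration": "2020-01-01T00:00:00Z",
    "conditions": [
        {"bucket": "ac.testing"},
        ["starts-with", "$key", ""],
        {"acl": "private"},
    ],
}

# 1. Base64-encode the UTF-8 JSON policy document.
policy_b64 = base64.b64encode(json.dumps(policy).encode("utf-8")).decode()

# 2. Sign the *encoded* policy (not the raw JSON) with HMAC-SHA1,
#    then Base64-encode the 20-byte digest to get the signature.
signature = base64.b64encode(
    hmac.new(secret_key.encode("utf-8"), policy_b64.encode("utf-8"),
             hashlib.sha1).digest()
).decode()
```

The form fields `policy` and `signature` then carry `policy_b64` and `signature` respectively. Note also that the conditions in the signed policy (such as the `acl` value) have to match the corresponding fields sent in the form data, or S3 rejects the upload.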
I also added another Permission for the bucket. It matches the policy and CORS settings already defined here but is defined as everyone:
EDIT 2
I created a new bucket called ac-k24-uploads and updated the url to match this:
uploadDetails = {
url: 'https://ac-k24-uploads.s3.amazonaws.com/',
Now the error coming back is this:
EDIT 3
Looking around, it seems that headers that Amazon doesn't like are being added to the request. I tried it on ng-file-upload and it works, but on my own machine it doesn't. The fix for this seems to be removing these headers like so:
method: 'POST',
headers: {
'Authorization': undefined
},
but I still get the same error. Why are these headers being attached to this request?
EDIT 4
Even with the code above, the headers were still getting added in a different place, so I located it and removed them. The error which I receive now is this:
This is not related to CORS.
It is because of the dot in the name of your bucket.
If your bucket is indeed in US-Standard (example-bucket.s3.amazonaws.com is not necessarily a "US Standard" URL) then this is what you need:
var uploadDetails = {
url: 'https://s3.amazonaws.com/ac.testing/',
...
This is referred to as the path-based or path-style URL where the bucket is expressed as the first element of the path, rather than as including it in the hostname.
This is necessary if the bucket name has a dot, and s3. needs to be s3-region. for buckets not in US Standard.
This is a limitation of wildcard SSL/TLS certificates. S3 presents a certificate valid for *.s3[-region].amazonaws.com and the rules for such certificates state that * cannot match a dot... therefore the certificate is considered invalid and therefore it's an "insecure response."
Or, just create a bucket with a dash instead of a dot, and your code should work as-is.
Since you are using Signature V2, either hostname construct results in an identical signature for an otherwise-identical request. Not true for Sig V4.
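The two URL styles described above can be sketched side by side (bucket names here are taken from the question):

```python
bucket = "ac.testing"

# Virtual-hosted style: the bucket is part of the hostname. With a dot in
# the name, the host becomes ac.testing.s3.amazonaws.com, which the
# *.s3.amazonaws.com wildcard certificate cannot match, since a wildcard
# does not span a dot - hence the "insecure response" error.
virtual_hosted = f"https://{bucket}.s3.amazonaws.com/"

# Path style: the bucket is the first path element, so the hostname stays
# s3.amazonaws.com and the certificate matches regardless of the name.
path_style = f"https://s3.amazonaws.com/{bucket}/"
```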
The actual problem was twofold:
The headers were incorrect, as the "Authorization" header was still present even after deleting it from the Upload.upload portion of the code.
To fix this I had to delete the object from my api.injector code - this was where it was being added.
delete config.headers['Authorization']
Following on from EDIT 3, the signature that I generated was wrong - so I had to regenerate it.
Thanks to Michael for the help.