We tried Form Recognizer custom model training, following these steps (API v2.0):
https://pnagarjuna.wordpress.com/2020/01/07/azure-form-recognizer-service-custom-model-training-steps/
The Train Model request succeeds (201), but after Check Custom Model Status we get this error:
{ "modelInfo": { "modelId": "f17bd306-3c6a-4067-8ef1-5f2e6ced79e1", "status": "invalid", "createdDateTime": "2020-02-05T17:24:30Z", "lastUpdatedDateTime": "2020-02-05T17:24:31Z" }, "trainResult": { "trainingDocuments": [], "errors": [{ "code": "2014", "message": "No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements." }] }}
We also checked
https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model
and everything looks okay.
How can we go further?
Thank you!
Gabor
Could you check whether the prefix value in your train request is consistent with the path in your Azure blob container? If you put the sample files under the root path of your blob container, give an empty string as the prefix. Because the train and get-trained-model requests are asynchronous in Form Recognizer v2.0, some errors related to the train request arguments can only be fetched via the get-trained-model request.
@Nini,
Could you provide an example prefix value?
I'm facing the same issue as the author.
I'm using the 2.0 API version.
I generated a SAS for the whole container, then I used the following request to train a custom model:
{
    "source": "https://{resourcename}.blob.core.windows.net/{containername}?sp=rl&st=2020-02-13T11:19:53Z&se=2021-02-14T11:19:00Z&sv=2019-02-02&sr=c&sig={signature}",
    "sourceFilter": {
        "prefix": "/USMF/VendorInvoices/Vendor - 1001/",
        "includeSubFolders": false
    },
    "useLabelFile": false
}
Target folder URI:
https://{resourcename}.blob.core.windows.net/{container name}/USMF/VendorInvoices/Vendor - 1001/
Response body:
{
    "modelInfo": {
        "modelId": "4e23f488-d8db-4c98-8018-4cd337d9a655",
        "status": "invalid",
        "createdDateTime": "2020-02-13T12:07:52Z",
        "lastUpdatedDateTime": "2020-02-13T12:07:52Z"
    },
    "keys": {
        "clusters": {}
    },
    "trainResult": {
        "trainingDocuments": [],
        "errors": [{
            "code": "2014",
            "message": "No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements."
        }]
    }
}
If I keep the training data set under the root (and therefore the prefix value is an empty string), then everything is OK.
Thank you for reporting this.
Any chance you can switch from a policy-defined SAS token (one with sig={signature}) to a SAS token with explicit permissions (one with sp={permissionenum})?
Could you explain your thinking in more detail?
Here is what I did.
I generated the SAS token without applying any access policy. The SAS was generated for the whole container; I just chose the Read and List permissions from the list and an expiration date.
What puzzles me is that if I keep the training data set under the root folder, everything is OK, but when I put the files under a folder structure, the Form Recognizer service can't find them.
The question has been resolved.
It's definitely not a service issue.
First of all, my prefix shouldn't contain a '/' at the beginning.
Another important point is that the prefix is case sensitive.
In my case I uploaded the files under the "USMF/VendorInvoices/Vendor - 1001/" prefix but requested model training with "usmf/VendorInvoices/Vendor - 1001/". This led to the error message: No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements.
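For reference, here is a sketch of the corrected train request (same placeholders as above; note the prefix has no leading '/' and matches the blob path's casing exactly):
{
    "source": "https://{resourcename}.blob.core.windows.net/{containername}?sp=rl&st=2020-02-13T11:19:53Z&se=2021-02-14T11:19:00Z&sv=2019-02-02&sr=c&sig={signature}",
    "sourceFilter": {
        "prefix": "USMF/VendorInvoices/Vendor - 1001/",
        "includeSubFolders": false
    },
    "useLabelFile": false
}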
I have a Logic App that retrieves the responses submitted by users through Microsoft Forms.
When I look at the Logic App run, I can see the descriptor for each field (MuleSoft, IoT & Integration, Encuesta de tecnologías, ...), for example:
But in the "Show raw outputs" I can't see those fields; instead I get an identifier (rcb6ccf0fc9e44f74b44fa2715fec4f27, ...):
How can I retrieve those descriptors?
The solution is to add a 'Send an HTTP request to SharePoint' action to get the details of the form.
The Site Address is: https://forms.office.com
The Method is: GET
The Uri is: /formapi/api/forms('')?select=id,title,questions&$expand=questions($expand=choices)
This returns a JSON with all the questions, and for each question its ID, Title, and more info.
We can then loop through these questions and, with each ID, extract the response from Microsoft Forms (see the combined sketch after the snippets below):
foreach": "#body('Send_an_HTTP_request_to_SharePoint')['questions']"
And Compose the result:
"Compose": {
"inputs": {
"Id": "#{items('For_each')['id']}",
"Name": "#items('For_each')['title']",
"Value": "#{body('Get_response_details')[item()['id']]}"
},
"runAfter": {},
"type": "Compose"
}
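Putting it together, a minimal sketch of how the loop and the Compose fit into the workflow definition (assuming the HTTP action is named Send_an_HTTP_request_to_SharePoint and the response action Get_response_details, as above):
"For_each": {
    "type": "Foreach",
    "foreach": "@body('Send_an_HTTP_request_to_SharePoint')['questions']",
    "runAfter": {
        "Get_response_details": ["Succeeded"]
    },
    "actions": {
        "Compose": {
            "type": "Compose",
            "inputs": {
                "Id": "@{items('For_each')['id']}",
                "Name": "@items('For_each')['title']",
                "Value": "@{body('Get_response_details')[item()['id']]}"
            },
            "runAfter": {}
        }
    }
}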
These are field identifiers. You can retrieve them directly from the Dynamic content of Get response details.
Alternatively, you can build your own JSON body (in your case from Get response details) with the Compose connector.
I'm trying to update the "Skills" property of a user (note: NOT using the Me endpoint). This throws an exception. I'm using the .NET SDK GraphServiceClient to do so, on the latest 3.6.0 version.
To Reproduce
Steps to reproduce the behavior, use the following sample code:
// skills is the list of skill values to set, e.g.:
var skills = new List<string> { "TESTING", "ANOTHER SKILL" };
var user = new User()
{
    Skills = skills
};
await graphClient.Users[userId].Request().UpdateAsync(user);
Expected behavior
I'd expect this code to send a PATCH request to the Graph API endpoint, and that request should update the Skills property of the given user, provided that the authenticated service has been granted the User.ReadWrite.All permission in Azure AD (which it has).
Instead, the following error is being returned by the Graph endpoint:
"error": {
"code": "-1, Microsoft.SharePoint.Client.InvalidClientQueryException",
"message": "A type named 'Microsoft.SharePoint.user' could not be resolved by the model. When a model is available, each type name must resolve to a valid type.",
"innerError": {
"request-id": "1948a4ec-60fd-4212-873f-3d34f62f5601",
"date": "2020-05-25T09:35:33"
}
}
}
I'm not sure why this doesn't work. I followed the samples, and although those don't specifically mention updating the Skills property, you'd expect it to work the same way for all properties. When I omit the property from the updated User object, I don't get an error (but obviously nothing is updated in that case).
Until it is fixed in v1, as mentioned by baywet, you can use the following as a workaround for your scenario.
The Graph client by default adds odata.type as microsoft.graph.user, and SharePoint doesn't expect that value. You can change the ODataType to "https://graph.microsoft.com/v1.0/$metadata#users/$entity" or "Microsoft.SharePoint.user" or null, so that SharePoint can understand the type.
You can set the ODataType like this:
var user = new User()
{
    Skills = skills
};
// Override the OData type so SharePoint can resolve it;
// "Microsoft.SharePoint.user" or null also work.
user.ODataType = "https://graph.microsoft.com/v1.0/$metadata#users/$entity";
await graphClient.Users[userId].Request().UpdateAsync(user);
This is an ongoing issue that is being addressed in the service. The beta version should already be fixed, and the v1 fix is in progress.
As a workaround you can either use the beta version or craft the request manually, without an OData type.
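For the "craft the request manually" route, here is a minimal sketch that PATCHes the user with a raw JSON body carrying no @odata.type annotation (accessToken, userId, and the skill values are placeholders, not part of the original report):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

// Raw body: just the property to update, no @odata.type.
var json = "{\"skills\":[\"TESTING\",\"ANOTHER SKILL\"]}";
var request = new HttpRequestMessage(new HttpMethod("PATCH"),
    $"https://graph.microsoft.com/v1.0/users/{userId}")
{
    Content = new StringContent(json, Encoding.UTF8, "application/json")
};

var response = await http.SendAsync(request);
response.EnsureSuccessStatusCode(); // Graph returns 204 No Content on success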
I want to update the birthday of a user using a PATCH request.
Updating other properties works as expected, but the moment the birthday property is included, the following error is returned:
The request is currently not supported on the targeted entity set
I already tried updating the user to be sure the permissions are fine.
Application permissions are used.
This PATCH request to /v1.0/users/{id} works:
{
"givenName": "Fridas"
}
Passing this request body, however:
{
    "givenName": "Fridas",
    "birthday": "2014-01-01T00:00:00Z"
}
throws an error
{
    "error": {
        "code": "BadRequest",
        "message": "The request is currently not supported on the targeted entity set",
        "innerError": {
            "request-id": "5f0d36d1-0bff-437b-9dc8-5579a7ec6e72",
            "date": "2019-08-13T15:27:40"
        }
    }
}
When I update the birthday separately, I get a 500 error. Screenshots below. Updating the user works fine; the birthday does not.
The same user id is used in both requests.
I'm not sure why this happens, but a workaround, albeit an annoying one, is to update birthday separately from other attributes.
E.g.
PATCH https://graph.microsoft.com/v1.0/users/userid
{
"birthday" : "2014-01-01T00:00:00Z"
}
Here is a screenshot from MS Graph Explorer:
In fact, this is a limitation in the current system.
User is a composite type. Under the covers, some properties of user are mastered by different services, and we currently don't support updates across multiple services.
"birthday" is not mastered by Azure AD, so we can't update it together with other properties mastered by Azure AD in the same call.
It is strongly recommended that you update this property separately. I can update it from my side, so you would need a backend engineer to track this request for you.
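Given that explanation, a minimal sketch of the split-update workaround with the .NET SDK (two sequential calls, one per mastering service; userId and the values are illustrative):
// First call: properties mastered by Azure AD.
await graphClient.Users[userId].Request().UpdateAsync(new User
{
    GivenName = "Fridas"
});

// Second call: "birthday" alone, since it is mastered by a different service.
await graphClient.Users[userId].Request().UpdateAsync(new User
{
    Birthday = DateTimeOffset.Parse("2014-01-01T00:00:00Z")
});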
This seems to affect more than Birthday.
Skills[] and Responsibilities[] also return 500 Internal Server Error when PATCHed via the REST API with:
{"skills": ["TESTING", "ANOTHER SKILL"]}
The same happens via the GraphServiceClient, except the result is:
Failed to call the Web Api: InternalServerError
Content: {
    "error": {
        "code": "-1, Microsoft.Office.Server.Directory.DirectoryObjectUnauthorizedAccessException",
        "message": "Attempted to perform an unauthorized operation.",
        "innerError": {
            "request-id": "1c2ccc54-0a0c-468f-a18c-6bdfbad4077d",
            "date": "2019-08-28T13:23:55"
        }
    }
}
These requests work on the Graph Explorer page, but not via calls to the API.
Hello, I am trying to create a new Azure Search service with the S2 or S3 tier.
However, when I try to create it, I get a validation error:
Any idea how to resolve this?
Below are the raw error details:
{
"telemetryId": "127e12a7-2de5-420d-8ce3-181a5e663337",
"bladeInstanceId": "Blade_f10d818ac9854758a2308220f06e7274_5_0",
"galleryItemId": "Microsoft.Search",
"createBlade": "CreateBladeV3",
"code": "InvalidTemplateDeployment",
"message": "The template deployment 'Microsoft.Search' is not valid according to the validation procedure. The tracking id is 'd2a97835-fd96-4b15-95f9-416e19086f31'. See inner errors for details. Please see https://aka.ms/arm-deploy for usage details.",
"details": [
{
"code": "ServiceQuotaExceeded",
"message": "Operation would exceed standard3 service quota"
}
]
}
It looks to me like either the subscription you are using doesn't allow paid services, or you are trying to create an index in the service (via import) that is too large. However, S2 and S3 allow quite large indexes, so my guess is the former.
I have seen that it is possible to add a JSON schema when you are using the "HTTP Request" trigger, by adding the JSON schema in the "Request Body JSON Schema" box.
I have also looked at adding a schema in the "Integration Account", but that section of the documentation says it is "to confirm that XML documents you receive are valid", which is not what I am looking for.
I am using an Azure Service Bus queue.
In this case I have PeekLock as the trigger; the idea is that the input in the Service Bus will be of a certain format, all in JSON. I don't "care" or need to know what happens before the Service Bus; all I know is that each message will contain the same format. What my Logic App is supposed to do is receive the message from the Service Bus, mail it to whoever it's supposed to go to, and add anything there is to add from blob storage. I want to be able to access certain "tags" or "parameters", since Service Bus only has its own few tags.
I used jsonschema.net to get the schema, and here is the JSON showing what the format will look like:
{
    "items": [
        {
            "Key": "XXXXXX-XXXX-XXXX-XXXX-XXXXXXX",
            "type": "Email",
            "data": {
                "subject": "Who is the father?",
                "bodyBlobUID": "00000000-0000-0000-0000-000000000000",
                "to": [
                    "darth.vader@hotmail.com"
                ],
                "cc": [
                    "luke.skywalker@nomail.com"
                ],
                "bcc": [
                    "leia.skywalker@nomail.com"
                ],
                "encoding": "System.Text.UTF8Encoding",
                "isBodyHtml": false,
                "organisationUID": "00000000-0000-0000-0000-000000000000",
                "BlobUIDs": [
                    "luke.skywalker@nomail.com"
                ]
            }
        }
    ]
}
So my question has 2 parts:
1: Is it possible to add JSON schemas without using the HTTP Request trigger, so that it works with Service Bus?
2: If #1 is possible, or it can be done in another way, how do I access the tags or parameters of the JSON format? At the moment I am trying to do transformations using schemas and maps with the Integration Account, but that seems unnecessary.
UPDATE: Parse JSON is now available in Logic Apps.
We will be releasing an action called Parse JSON next week, in which you can specify the Service Bus output as the payload and define the schema of the payload; subsequent steps then get custom friendly tokens.
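For illustration, a minimal sketch of what that Parse JSON action can look like in the workflow definition, assuming a Service Bus trigger whose message content is Base64-encoded (the action name and the trimmed-down schema are assumptions based on the sample message above):
"Parse_JSON": {
    "type": "ParseJson",
    "inputs": {
        "content": "@base64ToString(triggerBody()?['ContentData'])",
        "schema": {
            "type": "object",
            "properties": {
                "items": {
                    "type": "array"
                }
            }
        }
    },
    "runAfter": {}
}
Later steps can then reference friendly tokens such as @body('Parse_JSON')['items'].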