SageMaker DeepAR forecasting file parsing error - amazon-sagemaker

I am trying out the DeepAR Forecasting training algorithm. My training job keeps failing with the following error while parsing the JSON Lines file:
Failure reason:
ClientError: Error when parsing json (source: /opt/ml/input/data/train/daily_call_vol_lines.json, row: 1)
I am attaching the file (in JSON Lines format) I tried with.
Any insight into why the parser fails on the SageMaker side would help!
Pasting the file content:
{
  "TimeStamp": "2017-07-01",
  "Number of Calls": 14
}
{
  "TimeStamp": "2017-07-02",
  "Number of Calls": 62
}
{
  "TimeStamp": "2017-07-03",
  "Number of Calls": 972
}

The format you are using is not supported; each line should be of this form for a JSON file: {"start": "2017-07-01", "target": [14, 62, 972, ...]}. The description of the supported formats can be found here, and you can also have a look at this end-to-end example that generates training data in JSON format.
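For reference, here is a minimal Python sketch of converting per-day records like the ones in the question into the single-series line format DeepAR expects (the field and file names are taken from the question):

import json

# Convert per-timestep records into one DeepAR series:
# {"start": <first timestamp>, "target": [<value>, ...]}
records = [
    {"TimeStamp": "2017-07-01", "Number of Calls": 14},
    {"TimeStamp": "2017-07-02", "Number of Calls": 62},
    {"TimeStamp": "2017-07-03", "Number of Calls": 972},
]
records.sort(key=lambda r: r["TimeStamp"])
series = {
    "start": records[0]["TimeStamp"],
    "target": [r["Number of Calls"] for r in records],
}

# JSON Lines: each line must be one complete JSON object.
with open("daily_call_vol_lines.json", "w") as f:
    f.write(json.dumps(series) + "\n")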

Related

GraphAPI Schema Extensions don't appear for Messages

I would like to add some custom data to emails and be able to filter them using the Graph API.
So far, I was able to create a Schema Extension and it gets returned successfully when I query https://graph.microsoft.com/v1.0/schemaExtensions/ourdomain_EmailCustomFields:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#schemaExtensions/$entity",
"id": "ourdomain_EmailCustomFields",
"description": "Custom data for emails",
"targetTypes": [
"Message"
],
"status": "InDevelopment",
"owner": "hiding",
"properties": [
{
"name": "MailID",
"type": "String"
},
{
"name": "ProcessedAt",
"type": "DateTime"
}
]
}
Then I patched a specific message https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/hidingmessageid:
PATCH Request
{"ourdomain_EmailCustomFields":{"MailID":"12","ProcessedAt":"2020-05-27T16:21:19.0204032-07:00"}}
The problem is that when I select the message by executing a GET request, the added custom data doesn't appear: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$top=1&$select=id,subject,ourdomain_EmailCustomFields
Also, the following GET request gives me an error.
Request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$filter=ourdomain_EmailCustomFields/MailID eq '12'
Response:
{
"error": {
"code": "RequestBroker--ParseUri",
"message": "Could not find a property named 'e2_someguid_ourdomain_EmailCustomFields' on type 'Microsoft.OutlookServices.Message'.",
"innerError": {
"request-id": "someguid",
"date": "2020-05-29T01:04:53"
}
}
}
Do you have any ideas on how to resolve the issues?
Thank you!
I took your schema extension and copied and pasted it into my tenant, except with a random app registration I created as the owner, then patched an email with your statement, and it does work correctly.
A couple of things here:
I would verify using Microsoft Graph Explorer that everything is correct, e.g. log into Graph Explorer with an admin account: https://developer.microsoft.com/en-us/graph/graph-explorer#
First, make sure the schema extension exists: run a GET request for
https://graph.microsoft.com/v1.0/schemaExtensions/DOMAIN_EmailCustomFields
It should return the schema extension you created.
Then run a GET request for the actual message you patched, not all the messages you filtered, for now:
https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/MESSAGEID?$select=DOMAIN_EmailCustomFields
Here the response should be the email you patched, and your EmailCustomField should be in the data somewhere. If it is not, that means your patch did not work.
Then you can run the PATCH again from Graph Explorer.
I did all this from Graph Explorer; it is the easiest way to confirm.
Two other things:
1) Maybe the ?$top=1 in your "get first message" request isn't returning the same message that you patched?
2) As per the documentation, you cannot use $filter on schema extension properties with the message entity (https://learn.microsoft.com/en-us/graph/known-issues#filtering-on-schema-extension-properties-not-supported-on-all-entity-types), so that second GET will never work.
Hopefully this helps you troubleshoot.
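If you would rather script the same checks, here is a rough Python sketch using the requests library; ACCESS_TOKEN and MESSAGE_ID are placeholders you would fill in:

import requests

# A sketch of the verification steps above against the Graph REST API
# directly instead of Graph Explorer. ACCESS_TOKEN is assumed to be a
# delegated token with Mail.Read (Mail.ReadWrite for PATCH); MESSAGE_ID
# is the id of the message you patched.
ACCESS_TOKEN = "..."
MESSAGE_ID = "..."
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

# 1) Confirm the schema extension exists.
r = requests.get(
    "https://graph.microsoft.com/v1.0/schemaExtensions/ourdomain_EmailCustomFields",
    headers=headers,
)
print(r.json())

# 2) Fetch the specific patched message, selecting the extension property.
r = requests.get(
    "https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/" + MESSAGE_ID,
    params={"$select": "ourdomain_EmailCustomFields"},
    headers=headers,
)
print(r.json())  # the extension values should appear here if the PATCH worked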

JSON-LD: 'Syntax error: value, object or array expected.'

I am getting the "value, object or array expected." syntax error when I test my JSON-LD code with Google's Structured Data Testing Tool. The error appears on line 143 of my code.
I am implementing Schema.org for a local business website with JSON-LD. I have tried replacing the [ brackets with { brackets for the image object, and even tried removing the comma on line 143. I either get the same error, or new errors appear. I have searched other questions related to this error, but they all had different code.
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "image": [
    "http://secureservercdn.net/166.62.110.232/kkk.bd6.myftpupload.com/wp-content/uploads/2019/05/360webclicks-logo2-4.fw_.png",
  ],
Highlighted error in the SDTT:
The last value must not be followed by a comma.
So, this
"image": [
"image.png",
],
should be this
"image": [
"image.png"
],
If you only have one value, you could omit the array ([…]):
"image": "image.png",

Deep insert in OData v4 in Business Central AL Extension

We are creating an extension in AL to import orders.
For this question we have simplified our extension to explain the challenges we are facing.
What we would like to do is import header + lines in Dynamics 365 Business Central.
To achieve this we have:
- Created a table (Header)
- Created a table (Lines)
- Created a page of type API (Doc)
- Created a ListPart (SalesLine)
Scenario 1
We have published page Doc and are trying to do a POST request to this OData URL.
POST https://api.businesscentral.dynamics.com/v1.0/{tenant}/Sandbox/ODataV4/Company('CRONUS%20NL')/Doc/ HTTP/1.1
Content-Type: application/json
Authorization: Basic {{username}} {{password}}
{
"name": "Description",
"SalesLines" : [{"lineno" : 1000}]
}
The response:
{
"error": {
"code": "BadRequest",
"message": "Does not support untyped value in non-open type."
}
}
Scenario 2
When we post:
POST https://api.businesscentral.dynamics.com/v1.0/{tenant}/Sandbox/ODataV4/Company('CRONUS%20NL')/Doc/ HTTP/1.1
Content-Type: application/json
Authorization: Basic {{username}} {{password}}
{
"name": "Description"
}
We get the following response:
{
"#odata.context": "https://api.businesscentral.dynamics.com/v1.0/3ddcca3d-d343-4a06-95f9-f32dbf645199/Sandbox/ODataV4/$metadata#Company('CRONUS%20NL')/Doc/$entity",
"#odata.etag": "W/\"JzQ0O3BKUzExSUMrQUl4UXFQc2R6V1J1ellvZEttRTJoa2xhanNtV0M0K3Ezajg9MTswMDsn\"",
"id": 4,
"name": "Description",
"DateTime": "2019-05-20T19:33:13.73Z"
}
I have published our extension on GitHub.
Help would be appreciated.
I have worked on a similar solution and found that the following things were required for it to work:
The part containing your lines must be placed inside the repeater on your header Page.
You must set the EntityName and EntitySetName on your part to the same values as on the actual page.
When calling the API you must append the parameter $expand=[EntitySetName of your lines part], e.g. $expand=orderLines.
In the JSON body the property name of the array containing the lines must match the EntitySetName of the lines part.
I can provide some examples if the instructions above do not suffice.
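For illustration, a rough Python sketch of the resulting deep-insert call, assuming the lines part's EntitySetName is SalesLines (adjust the names to your pages):

import requests

# Deep insert: header + lines in one POST. The array property name
# ("SalesLines" here) must match the EntitySetName of the lines part,
# and $expand with that same name must be appended to the URL.
tenant = "your-tenant-id"  # placeholder
url = ("https://api.businesscentral.dynamics.com/v1.0/" + tenant +
       "/Sandbox/ODataV4/Company('CRONUS%20NL')/Doc")
body = {
    "name": "Description",
    "SalesLines": [{"lineno": 1000}],
}
r = requests.post(
    url,
    json=body,
    params={"$expand": "SalesLines"},
    auth=("username", "password"),  # Basic auth, as in the question
)
print(r.status_code, r.json())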

Training job in SageMaker gives error locating file from S3 in the Docker image path

I am trying to use scikit_bring_your_own/container/decision_trees/train. Running it via the AWS CLI, I had no issues. Trying to replicate it by creating a SageMaker training job, I am facing an issue loading data from S3 into the Docker image path.
In the CLI command we used docker run -v $(pwd)/test_dir:/opt/ml --rm ${image} train to specify where the input is referred from.
In the training job, I specified the S3 bucket location and the output path for model artifacts.
The error raised in the exception in train ("container/decision_trees/train"):
raise ValueError(('There are no files in {}.\n' +
                  'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                  'the data specification in S3 was incorrectly specified or the role specified\n' +
                  'does not have permission to access the data.').format(training_path, channel_name))
Traceback (most recent call last):
  File "/opt/program/train", line 55, in train
    'does not have permission to access the data.').format(training_path, channel_name))
So I do not understand whether any tweaking is required or some access is missing.
Kindly help.
If you set the InputDataConfig in the CreateTrainingJob API like this
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/a.csv"
}
},
"InputMode": "File",
},
{
"ChannelName": "eval",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/b.csv"
}
},
"InputMode": "File",
}
]
SageMaker downloads the data specified above from S3 to the /opt/ml/input/data/channel_name directory in the Docker container. In this case, the algorithm container should be able to find the input data under
/opt/ml/input/data/train/a.csv
/opt/ml/input/data/eval/b.csv
You can find more details at https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html
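If it helps, here is a rough boto3 sketch of a CreateTrainingJob call wired up with that InputDataConfig; the job name, image URI, role ARN, and bucket are placeholders:

import boto3

sm = boto3.client("sagemaker")
sm.create_training_job(
    TrainingJobName="decision-trees-demo",
    AlgorithmSpecification={
        # ECR URI of your own training image (placeholder).
        "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/decision-trees:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account>:role/<sagemaker-execution-role>",
    InputDataConfig=[
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataDistributionType": "FullyReplicated",
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://<bucket>/a.csv",
                }
            },
            "InputMode": "File",
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://<bucket>/output/"},
    ResourceConfig={
        "InstanceType": "ml.m5.large",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)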

Translating JSON file with Angular Gettext

I am using gettext to translate my AngularJS site - it all works fine where I have HTML attributes that I can add 'translate' to.
However I also have quite a large and complex JSON file which needs translating, which includes arrays and objects.
Is there any way to include this in the translation that gettext does, into the PO file? Or would I need to rethink the whole idea of using a JSON file to segment the customer flow?
I have included an initial extract of the JSON file below
{
  "version": "1.1",
  "name": "MVP",
  "description": "Initial customer segmenting flow",
  "enabled": true,
  "funnel": [
    {
      "text": "I am...",
      "image": "",
      "help": "",
      "options": [
        {
          "text": "Placing an order",
          "image": "image1.png",
          "next": 2
        },
        {
          "text": "E-mailing customer service",
          "image": "image2.png",
          "next": 2
        },
Thanks
James
Process the JSON file yourself with a script at build time and dump all translatable messages into a dummy source file with the syntax expected by your string extractor, probably something like this:
<translate>I am ...</translate>
<translate>Placing an order</translate>
<translate>E-mailing customer service</translate>
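A minimal build-time sketch of that idea in Python, assuming the JSON above lives in funnel.json and the angular-gettext extractor scans HTML files for translate tags:

import json

# Walk the funnel JSON and collect every "text" value, then dump them
# into a dummy HTML file that the string extractor can scan.
def collect_texts(node, out):
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "text" and isinstance(value, str):
                out.append(value)
            else:
                collect_texts(value, out)
    elif isinstance(node, list):
        for item in node:
            collect_texts(item, out)

with open("funnel.json") as f:  # assumed filename for the JSON above
    data = json.load(f)

texts = []
collect_texts(data, texts)

with open("funnel-translations.html", "w") as f:
    for text in texts:
        f.write("<translate>%s</translate>\n" % text)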
