In one of my Logic Apps I'm using the Gmail connector trigger "When a new email arrives", but it doesn't seem to be working.
I'm sending an email that the trigger should detect when it runs, but the trigger history simply shows the trigger as skipped, and therefore the rest of the workflow doesn't fire.
How can I debug this issue?
The following code is the trigger section of the logic app:
"triggers": {
"When_a_new_email_arrives": {
"description": "",
"inputs": {
"host": {
"connection": {
"name": "#parameters('$connections')['gmail']['connectionId']"
}
},
"method": "get",
"path": "/Mail/OnNewEmail",
"queries": {
"fetchOnlyWithAttachments": false,
"from": "secret#secret.com",
"importance": "All",
"includeAttachments": false,
"label": "INBOX",
"starred": "All",
"subject": "something"
}
},
"recurrence": {
"frequency": "Day",
"interval": 1,
"startTime": "2019-08-17T08:40:00Z"
},
"type": "ApiConnection"
}
}
It is set up to run once every day, although for testing purposes I'm running the trigger manually.
Update 1
I've tried sending the mail from my own mail address after configuring the from parameter in the trigger, and that works as intended. So I think the issue might be due to something about the sender's original mail message. I did some digging and pulled the original raw mail out of Gmail. It contains some logging information from Gmail's servers. Apparently something called DMARC authentication has failed. I wonder if this has anything to do with the problem; maybe the Gmail connector will not accept the sender's identity.
Here's the part about DMARC in the raw mail message:
Authentication-Results: mx.google.com;
    spf=pass (google.com: domain of source-company@company-product.com designates 85.236.67.1 as permitted sender) smtp.mailfrom=source-company@company-product.com;
    dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=source-company.com
Could this be the reason the connector does not detect these mails?
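For reference, the DMARC verdict can be pulled out of a raw message programmatically. A minimal sketch in Python (standard library only; the header layout is assumed to match the excerpt above):

import email
import re

def dmarc_result(raw_message):
    # Parse the raw message and pull the dmarc= verdict out of the
    # Authentication-Results header (folded continuation lines included).
    msg = email.message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    match = re.search(r"dmarc=(\w+)", header)
    return match.group(1) if match else None

# Example with a redacted version of the header excerpt above:
raw = (
    "Authentication-Results: mx.google.com;\n"
    " spf=pass smtp.mailfrom=source-company@company-product.com;\n"
    " dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=source-company.com\n"
    "\n"
    "body\n"
)
print(dmarc_result(raw))  # -> fail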
I did some tests for this issue but couldn't reproduce it. In my logic app, I set the "How often do you want to check for items?" box to 10 minutes. I didn't run the trigger manually (I didn't click the "Run" button). Then I sent an email to my Gmail, and after about 10 minutes, when the trigger went to check my Gmail, the logic app ran the actions under the trigger successfully. Apart from this, if I sent two emails to my Gmail within those ten minutes, the trigger was not triggered twice; it was triggered just once.
I saw you mentioned setting it to once every day in your description. So, for example, if you completed the configuration of the logic app at 1:00 pm and your Gmail received an email at 2:00 pm, the actions under the trigger will not run at once. The trigger will check your Gmail at 1:00 pm tomorrow, so the actions under the trigger will also run at 1:00 pm tomorrow. But when you test this logic app, if you run it manually when your Gmail receives an email, it will be triggered at once.
I wonder if this explanation is helpful to your issue?
Related
I am facing the below issue with Azure Data Factory and a Logic App.
I am using an Azure Data Factory pipeline for migration and a Logic App for sending "Success & Failure" notifications to the technical team.
Success is working fine, as the message is hardcoded, but failure is not, because the Logic App web activity is not able to parse the Data Factory pipeline error.
Here is the input that goes to the Logic App web activity:
Input
{
    "url": "https://xxxxxxxxxxxxxxxxx",
    "method": "POST",
    "headers": {},
    "body": "{\n \"title\": \"PIPELINE RUN FAILED\",\n \"message\":\"Operation on target Migration Validation failed: Execution fail against sql server. Sql error number: 50000. Error Message: The DELETE statement conflicted with the REFERENCE constraint \"FK_cmclientapprovedproducts_cmlinkclientchannel\". The conflict occurred in database \"Core7\", table \"dbo.cmClientApprovedProducts\", column 'linkclientchannelid'.\",\n \"color\": \"Red\",\n \"dataFactoryName\": \"LFC-TO-MCP-ADF\",\n \"pipelineName\": \"LFC TO MCP MIGRATION\",\n \"pipelineRunId\": \"f4f84365-58f0-4da1-aa00-64c3a4daa9e1\",\n \"time\": \"2020-07-31T22:44:01.6477435Z\"\n}"
}
Here is the error the Logic App is throwing:
failures
{
    "errorCode": "2108",
    "message": "{\"error\":{\"code\":\"InvalidRequestContent\",\"message\":\"The request content is not valid and could not be deserialized: 'After parsing a value an unexpected character was encountered: F. Path 'message', line 3, position 202.'.\"}}",
    "failureType": "UserError",
    "target": "Send Failed Notification",
    "details": []
}
I have tried various options, like Set Variable and converting with various existing functions (string, json, replace, etc.), but no luck.
e.g. @string(activity('LOS migration').Error.Message)
I've been struggling with this almost all day... please suggest if anyone has faced a similar issue.
Below is the data flow activity
Now it is working...
I pasted the body content into the body text field box WITHOUT clicking on 'Add Dynamic Content' in the web activity calling the Logic App.
For the failure case, pass the error output using @{activity('LOS migration').error.message}.
For sending email, the Logic App doesn't know whether it's going to send a failure or success email. We have to adapt the body so the activity can use parameters, which we'll define later:
{
    "DataFactoryName": "@{pipeline().DataFactory}",
    "PipelineName": "@{pipeline().Pipeline}",
    "Subject": "@{pipeline().parameters.Subject}",
    "ErrorMessage": "@{pipeline().parameters.ErrorMessage}",
    "EmailTo": "@pipeline().parameters.EmailTo"
}
We can reference these variables in the body by using the following format: @pipeline().parameters.parametername. For more details, you could refer to this article.
If you want to use the direct error message of the Data Factory activity as an input to the Logic App email expression, you could try:
"ErrorMessage": "@{string(replace(activity('activity_name').Error.Message, '"',''''))}"
Replace 'activity_name' with your failing activity name.
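To see why the original payload failed, here is a small illustration of the root cause (plain Python, with a shortened stand-in for the real error message): the SQL error text contains unescaped double quotes, so the body that reaches the Logic App is no longer valid JSON, and swapping the quotes first keeps it well-formed.

import json

# Shortened stand-in for the Data Factory error message; like the real
# one, it contains double quotes around the constraint name.
error = 'The DELETE statement conflicted with the REFERENCE constraint "FK_example".'

# What the pipeline effectively sent: the inner quotes are not escaped,
# so the resulting body is not valid JSON.
broken_body = '{ "message": "%s" }' % error
try:
    json.loads(broken_body)
except json.JSONDecodeError as exc:
    print("Parse fails:", exc)  # mirrors the InvalidRequestContent error

# What the replace() expression above does: swap " for ' before the
# value is embedded, which keeps the body well-formed.
fixed_body = '{ "message": "%s" }' % error.replace('"', "'")
print(json.loads(fixed_body)["message"])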
I have two jobs and I want to execute the second one only when the first one has completed. Both are scheduled on Cloud Scheduler.
I am using the get API to check the status of the first job, but there is no data under the status field. Please note that I tried to get this data while my first job was running.
{
    "name": "projects/<project name>/locations/us-central1/jobs/<job name>",
    "description": "Sample",
    "appEngineHttpTarget": {
        "httpMethod": "GET",
        "appEngineRouting": {
            "version": "test-v1",
            "host": "test-v1.test.googleplex.com"
        },
        "relativeUri": "/api/v1/test",
        "headers": {
            "User-Agent": "AppEngine-Google; (+http://code.google.com/appengine)"
        }
    },
    "userUpdateTime": "2020-07-17T11:44:16Z",
    "state": "ENABLED",
    "status": {},
    "scheduleTime": "2020-07-18T11:00:00.834928Z",
    "lastAttemptTime": "2020-07-17T11:44:30.439092Z",
    "retryConfig": {
        "maxRetryDuration": "0s",
        "minBackoffDuration": "5s",
        "maxBackoffDuration": "3600s",
        "maxDoublings": 16
    },
    "schedule": "0 04 * * *",
    "timeZone": "America/Los_Angeles",
    "attemptDeadline": "18000s"
}
Where am I going wrong?
I was checking the documentation on this, and it looks like to find out whether a job is running you need to use state, as it will return the state of the job; however, I could not find a description in the documentation of when the job is done.
It looks like once the job has finished, the status field is populated; status gives you a description of the HTTP status from your job (so it will tell you whether it failed or succeeded, along with specific information about the failure or success).
So, in this case, you would need to check that the state is ENABLED (telling you the job is not paused or in some other state) and that status is populated (meaning the job finished, and at the same time telling you whether it succeeded or failed).
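A rough sketch of that check against the Cloud Scheduler REST API (Python with the requests library; the project, location, job name, and token are placeholders you would substitute):

import requests

# Placeholders - substitute your own values; the token could come from
# `gcloud auth print-access-token` or a service account.
PROJECT, LOCATION, JOB = "my-project", "us-central1", "my-first-job"
TOKEN = "..."

url = ("https://cloudscheduler.googleapis.com/v1/"
       f"projects/{PROJECT}/locations/{LOCATION}/jobs/{JOB}")
job = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()

# "state" says whether the job is ENABLED, PAUSED, etc.; "status" is
# only populated once an attempt has finished and describes its outcome.
if job.get("state") == "ENABLED" and job.get("status"):
    print("Last attempt finished with:", job["status"])
else:
    print("No finished attempt reported yet.")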
I have a Security System with the traits action.devices.traits.ArmDisarm and action.devices.traits.StatusReport, and some other sensors: a water leak sensor, a door sensor, ...
I report errors about the other devices with the StatusReport state.
For example, when the door sensor detects that the door is open, the security system must report a deviceOpen error.
When I say "Is my security system ok?", my server's response to the QUERY intent is the JSON below, but Google Assistant says it couldn't reach my action (an unexpected error happened).
Is there anything wrong with this response?
{
    "requestId": "10417064006786362499",
    "payload": {
        "devices": {
            "3rL3QL7Kq2HrQjs53Y7o": {
                "isArmed": true,
                "currentStatusReport": [
                    {
                        "blocking": true,
                        "deviceTarget": "4BCIpzBWpgLA24mMI7r2",
                        "priority": 0,
                        "statusCode": "deviceOpen"
                    },
                    {
                        "blocking": true,
                        "deviceTarget": "MxRCd6ERRSWzYzyNTE8S",
                        "priority": 0,
                        "statusCode": "waterLeakDetected"
                    }
                ],
                "status": "EXCEPTIONS",
                "online": true
            }
        }
    }
}
In Firebase Console there are no errors.
Logs in Firebase Console
Your response to the QUERY intent looks right, but there might be an error in other parts of the process. You can follow the Troubleshooting Guide to see how your failed intent is counted in the Smart Home metrics and what details show up in your logs. (Firebase logs only give info about your server; the logging mentioned in the guide, Google Cloud Logging, is different and more comprehensive for intent handling.)
I would like to add some custom data to emails and to be able to filter on it using the Graph API.
So far, I was able to create a schema extension, and it is returned successfully when I query https://graph.microsoft.com/v1.0/schemaExtensions/ourdomain_EmailCustomFields:
{
    "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#schemaExtensions/$entity",
    "id": "ourdomain_EmailCustomFields",
    "description": "Custom data for emails",
    "targetTypes": [
        "Message"
    ],
    "status": "InDevelopment",
    "owner": "hiding",
    "properties": [
        {
            "name": "MailID",
            "type": "String"
        },
        {
            "name": "ProcessedAt",
            "type": "DateTime"
        }
    ]
}
Then I patched a specific message at https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/hidingmessageid:
PATCH Request
{"ourdomain_EmailCustomFields":{"MailID":"12","ProcessedAt":"2020-05-27T16:21:19.0204032-07:00"}}
The problem is that when I select the message with a GET request, the added custom data doesn't appear: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$top=1&$select=id,subject,ourdomain_EmailCustomFields
Also, the following GET request gives me an error.
Request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$filter=ourdomain_EmailCustomFields/MailID eq '12'
Response:
{
    "error": {
        "code": "RequestBroker--ParseUri",
        "message": "Could not find a property named 'e2_someguid_ourdomain_EmailCustomFields' on type 'Microsoft.OutlookServices.Message'.",
        "innerError": {
            "request-id": "someguid",
            "date": "2020-05-29T01:04:53"
        }
    }
}
Do you have any ideas on how to resolve the issues?
Thank you!
I took your schema extension and copied and pasted it into my tenant, except with a random app registration I created as the owner, then patched an email with your statement, and it does work correctly.
A couple of things here.
I would verify using Microsoft Graph Explorer that everything is correct, e.g., log into Graph Explorer with an admin account: https://developer.microsoft.com/en-us/graph/graph-explorer#
First, make sure the schema extension exists: run a GET request for
https://graph.microsoft.com/v1.0/schemaExtensions/DOMAIN_EmailCustomFields
It should return the schema extension you created.
Then run a GET request for the actual message you patched (not all messages filtered, for now):
https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/MESSAGEID?$select=DOMAIN_EmailCustomFields
Here the response should be the email you patched, and your EmailCustomFields value should be in the data somewhere; if it is not, that means your patch did not work.
Then you can run the PATCH again from Graph Explorer.
I did all this from Graph Explorer; it's the easiest way to confirm.
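If you'd rather script the same checks, here is a rough equivalent with Python and the requests library (the token, message id, and DOMAIN_EmailCustomFields are placeholders for your own values):

import requests

TOKEN = "..."   # a Graph access token with Mail.ReadWrite
MSG_ID = "..."  # the id of the message you patched
EXT = "DOMAIN_EmailCustomFields"  # your schema extension id
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
BASE = "https://graph.microsoft.com/v1.0"

# 1) Confirm the schema extension exists.
r = requests.get(f"{BASE}/schemaExtensions/{EXT}", headers=HEADERS)
print(r.status_code, r.json().get("status"))

# 2) Fetch the exact message you patched, selecting the extension.
r = requests.get(f"{BASE}/me/mailFolders/Inbox/Messages/{MSG_ID}",
                 params={"$select": f"id,subject,{EXT}"},
                 headers=HEADERS)
print(r.json().get(EXT))  # None here means the patch did not stick

# 3) Re-apply the patch if needed.
payload = {EXT: {"MailID": "12", "ProcessedAt": "2020-05-27T16:21:19Z"}}
r = requests.patch(f"{BASE}/me/mailFolders/Inbox/Messages/{MSG_ID}",
                   json=payload, headers=HEADERS)
print(r.status_code)  # 200 with the updated message on success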
Two other things:
1) Maybe the ?$top=1 in your "get first message" request isn't returning the same message that you patched?
2) As per the documentation, you cannot use $filter on schema extension properties with the message entity (https://learn.microsoft.com/en-us/graph/known-issues#filtering-on-schema-extension-properties-not-supported-on-all-entity-types), so that second GET will never work.
Hopefully this helps you troubleshoot.
I have created a Neptune instance in my AWS account and a Load Balancer to access it from my local machine to play around.
I'm basically redirecting all connections on port 80 at my LB to port 8182 on my Neptune instance.
So I can easily query it through the browser. In fact, this is the output of /status:
// 20191211170323
// http://my-lb/status
{
    "status": "healthy",
    "startTime": "Mon Dec 09 20:06:21 UTC 2019",
    "dbEngineVersion": "1.0.2.1.R2",
    "role": "writer",
    "gremlin": {
        "version": "tinkerpop-3.4.1"
    },
    "sparql": {
        "version": "sparql-1.1"
    },
    "labMode": {
        "ObjectIndex": "disabled",
        "Streams": "disabled",
        "ReadWriteConflictDetection": "enabled"
    }
}
The problem is that when I try to connect to it through the Gremlin Console or Java code, I get the following error:
gremlin> :remote connect tinkerpop.server conf/remote-neptune.yaml
ERROR org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler - Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 403 Forbidden
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:226)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:276)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
And my remote-neptune.yaml is as simple as:
hosts: [my-lb]
port: 80
connectionPool: { enableSsl: false}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
I have updated my AWS credentials, although I don't think that's related, since I'm accessing it through the LB.
And the weirdest part is that this same scenario was working like a week ago :/
Any ideas?
Thanks!
Looks like the problem resolved itself, but I'm sharing a few things to watch out for in case this happens again in the future. If you see connection issues, your first line of operation should be to check whether it's a network connectivity issue. (You mentioned that you were going to check if something had changed with regard to security groups, so do update if that was indeed the case.) To check whether it really is an SG issue, log into your client instance and do a simple telnet call to the DB endpoint:
telnet <endpoint> <port>
If it responds with "Connected", then you can be sure that your SGs are correct, and you are now dealing with an application-layer problem.
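If telnet isn't available on the instance, a quick Python equivalent of the same probe:

import socket

def can_connect(endpoint, port, timeout=5.0):
    # TCP-level check: True means the security groups and routing allow
    # a connection, so any remaining failure is at the application layer.
    try:
        with socket.create_connection((endpoint, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_connect("my-lb", 80))  # substitute your endpoint and port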
As called out in the comments, some of the possible culprits could be:
You previously had a setup without IAM auth in Neptune (not on the ALB) and have now enabled IAM auth. (Emphasis: I'm referring to IAM auth on the database, not on some other component in between.)
Gremlin client-server version mismatches.
Some explicit settings on the ALB that could hinder the requests.
And a few others. To summarize, try to classify whether it is an L2/L3 issue or an L7 issue and start investigating based on that.
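For the L7 side, one way to reproduce the handshake outside the Gremlin Console is a minimal client script. A sketch assuming the gremlinpython driver; a 403 here, with the TCP probe above passing, points at auth on the path to the database rather than at the console configuration:

from gremlin_python.driver import client

# Same target as the console config: the ALB on port 80, no SSL.
c = client.Client("ws://my-lb:80/gremlin", "g")
try:
    # Any trivial traversal will do; a failed websocket handshake
    # (such as the 403 above) surfaces here as an exception.
    print(c.submit("g.V().limit(1).count()").all().result())
finally:
    c.close()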