Microsoft Graph API subscription triggers same email with different IDs

I have a weird, hard-to-replicate issue with the Graph API and Outlook subscription endpoints. A user is authorised in my app, and a subscription is created for me/messages with change type created.
Everything works fine in 99% of cases, but once in a while the endpoint is triggered several times with the same email. Nothing differs between the notifications except the ID; even the timestamp is identical. I have no idea how to replicate this consistently or how to fix it. Is there any scenario where the Graph API would send the same message twice with slightly different IDs? The IDs even look sequentially generated, as they differ only by 1-3 characters at the very end.
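A hedged workaround sketch (not a confirmed root cause or fix): if the duplicate notifications really point at the same underlying email, you can de-duplicate on the message's internetMessageId property, which identifies the email independently of the Graph resource id. The access-token plumbing and the in-memory set below are illustration-only assumptions.

// Sketch: de-duplicate "created" notifications by internetMessageId rather
// than by the Graph resource id. Requires Node 18+ (global fetch).
const seen = new Set<string>(); // swap for a persistent store in production

async function handleCreatedNotification(resource: string, accessToken: string): Promise<void> {
  // `resource` comes from the notification payload, e.g. "Users/{id}/Messages/{id}"
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/${resource}?$select=internetMessageId`,
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) return;
  const msg = (await res.json()) as { internetMessageId?: string };
  if (!msg.internetMessageId || seen.has(msg.internetMessageId)) {
    return; // same email already handled under a different Graph id
  }
  seen.add(msg.internetMessageId);
  // ...process the new email...
}

If the ID churn is caused by the item being moved (by rules, for example), Outlook's immutable ID support, requested with the Prefer: IdType="ImmutableId" header, may also be worth testing.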

Related

Power Automate: is there a way for a trigger to be fired by receiving mails for different mailboxes

I have a service account that should represent the flow for multiple mail accounts. The flow starts whenever a mail arrives in a mailbox, and a confirmation should be sent after that. Let's assume I have 100 different mail accounts in the company and the flow should be triggered for all of them; this should be bundled (dynamically) in the service account. How can this be done?
By "dynamically" I mean the mail accounts are not hardcoded in the trigger (because they change a lot).
I have never used Logic Apps, but are they a better solution for this?
I have two ways for this.
WAY-1
Use a distribution list that has all the members of your team as members, and let the flow trigger on mail sent to that list.
WAY-2
Create a list in SharePoint with a Person column, add the members to it, and send the mails using that column. Here is the flow that I used (screenshot not reproduced).

Webhook Subscription for Azure AD Groups and Users not working

I have created a webhook subscription for Users and Groups by making a POST call to https://graph.microsoft.com/v1.0/subscriptions with the following as payload:
{
  "changeType": "updated,deleted",
  "notificationUrl": "https://a0317384.ngrok.io",
  "resource": "groups",
  "expirationDateTime": "2019-06-25T19:23:45.9356913Z",
  "clientState": "<redacted>"
}
The subscription is created successfully, and I am returning the validation token from my endpoint. I can also see it in the list of subscriptions by making a GET call to the same URL.
When I make changes to groups, such as changing the displayName or adding members, I do not see notifications in real time. Sometimes the notifications arrive in bulk, and other times they do not arrive at all.
I have tried deleting and re-creating the subscription multiple times, but I still see the same behavior.
Can anyone tell me why this is happening?
Notifications can be batched for performance optimization, and the delivery delay can vary based on service load and other factors.
While debugging, you should also make sure there are no blocking conditions set by the IDE (like a breakpoint) that might block other incoming requests.
Lastly, it's pretty rare, but service outages can happen; in that case the best thing to do is to contact support.
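For completeness, here is a minimal sketch of a notification endpoint, assuming Express and TypeScript. It echoes the validation token during the subscription handshake and acknowledges notifications immediately, so a paused debugger or slow processing never delays the response; note that a single POST can carry several batched notifications in its value array.

import express from 'express';

const app = express();
app.use(express.json());

app.post('/notifications', (req, res) => {
  // Subscription handshake: Graph expects the token echoed back as text/plain.
  const token = req.query.validationToken as string | undefined;
  if (token) {
    res.status(200).type('text/plain').send(token);
    return;
  }
  // Acknowledge first, process afterwards; slow responses can delay delivery.
  res.sendStatus(202);
  for (const note of req.body.value ?? []) {
    // Check that note.clientState matches the value sent at subscription time,
    // then hand the notification to an async worker for processing.
  }
});

app.listen(3000);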

How to retrieve site URLs efficiently for all users in a tenant using Microsoft Graph API

Here is the problem:
I have a tenant with 50,000 users. Every day I need to pull that user list to see what has changed, for example which users were added or removed and what their mySite URLs are.
I can get some general information by calling /users, but I need each user's mySite, and the only way I have found to retrieve it is to call /users/{userId}?$select=mySite for each user.
That implies 50,000 calls, and I then run into throttling issues.
Is there a way through Microsoft Graph (or some other mechanism) to pull the user data, including mySite, efficiently?
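One commonly suggested mitigation, sketched here on the assumption that mySite really is only returned by per-user GETs: group those GETs with Graph's JSON batching endpoint, which accepts up to 20 requests per call. That cuts 50,000 round trips down to 2,500 batch calls, although each inner request still counts toward throttling, so back-off handling is still required.

// Sketch: fetch mySite for many users via the $batch endpoint (20 per call).
async function fetchMySites(
  userIds: string[],
  accessToken: string,
): Promise<Record<string, string | null>> {
  const results: Record<string, string | null> = {};
  for (let i = 0; i < userIds.length; i += 20) {
    const chunk = userIds.slice(i, i + 20);
    const res = await fetch('https://graph.microsoft.com/v1.0/$batch', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        requests: chunk.map((userId, n) => ({
          id: String(n), // ids correlate responses with requests
          method: 'GET',
          url: `/users/${userId}?$select=mySite`,
        })),
      }),
    });
    const { responses } = (await res.json()) as {
      responses: { id: string; status: number; body: { mySite?: string } }[];
    };
    for (const r of responses) {
      results[chunk[Number(r.id)]] = r.status === 200 ? r.body.mySite ?? null : null;
    }
  }
  return results;
}

For the "what changed since yesterday" half of the problem, the delta query (/users/delta) returns only added, removed, and changed users rather than the full list; whether mySite can be carried in its $select is worth verifying against the current documentation.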

Synchronizing Clients with Gmail

What is synchronizing a client with Gmail? Can anybody give a detailed explanation? I want to get a better understanding of this concept.
For example, suppose your client keeps a local cache of Gmail mailbox data, such as the Message.Id and labels, the headers, or the entire email. Then, in order to update your client, you synchronize it with Gmail, pulling new updates down to the client. For clients designed for offline use, synchronizing may also mean pushing local updates back up to the server (e.g. label changes made by the client while offline that get applied at some later point). That's the general definition of synchronizing.
For the Gmail API specifically, Gmail keeps a backend, mailbox-wide history ID. Any change that affects the account gets a history identifier, and most (but not all) history changes affect the state of email messages: adding a new message, changing the labels on a message, or deleting a message. Clients of the Gmail API can poll the history ID to find out what has changed since the last time they synchronized, and pull down just those updates to maintain their sync.
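A minimal polling sketch using the googleapis Node client (the OAuth plumbing and the persisted history id are assumed):

import { google } from 'googleapis';
import type { OAuth2Client } from 'google-auth-library';

// Sketch: pull every mailbox change recorded since the last sync.
async function syncSince(auth: OAuth2Client, savedHistoryId: string): Promise<string> {
  const gmail = google.gmail({ version: 'v1', auth });
  let latestId = savedHistoryId;
  let pageToken: string | undefined;
  do {
    const { data } = await gmail.users.history.list({
      userId: 'me',
      startHistoryId: savedHistoryId,
      pageToken,
    });
    for (const h of data.history ?? []) {
      // h.messagesAdded, h.messagesDeleted, h.labelsAdded and h.labelsRemoved
      // describe each change; apply them to the local cache here.
    }
    latestId = data.historyId ?? latestId;
    pageToken = data.nextPageToken ?? undefined;
  } while (pageToken);
  // Persist latestId for the next poll. A 404 from history.list means the
  // saved id is too old and a full resync is required.
  return latestId;
}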

Is it better to process auto-complete/suggestions on the client or server?

I am building a web app that will offer auto-complete/suggestions as the end user types their information in, specifically for entering Country, Province, and City. I see two options.
Option 1: do a wildcard search on the database on each keystroke:
SELECT CityName
FROM City
WHERE CityName LIKE '%#CityName%' -- leading wildcard: forces a scan, no index use
Option 2: return the list of all cities for a given Province to the client and have the client do the matching:
SELECT CityName
FROM City
WHERE ProvinceID = #ProvinceID -- one simple equality filter per province
Either result would be returned to the client as a JSON string via an AJAX call to a web service. My thinking is that JavaScript could filter a list of 100+ entries delivered as JSON faster than the database could run a wildcard search, but I'd like the community's input.
In the past, I have used both techniques. If you are talking about 100 or so entries, and assuming each entry is very small, it will likely be faster to do the autocomplete filtering on the client side. That will give you better response time (although probably negligible) and will reduce the load on your server.
Google actually does a live search while the user is typing, and it seems to be pretty responsive from the user's point of view. This is an example where the query must be executed server-side because the dataset is far too large to transfer to the client.
One thing you might do is wait until the user has typed two characters before fetching the list from the server, thus narrowing down the results initially. Of course, that adds complexity: you would then need to refresh the list if the user changes either of the first two characters.
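A minimal sketch of the client-side option in TypeScript (the endpoint name and JSON shape are assumptions): fetch the province's city list once, then filter locally on every keystroke.

let cities: string[] = [];

// Load once per selected province; ~100 short strings is a trivial payload.
async function loadCities(provinceId: number): Promise<void> {
  const res = await fetch(`/api/cities?provinceId=${provinceId}`); // hypothetical endpoint
  cities = (await res.json()) as string[];
}

// Called on each keystroke; a linear scan over ~100 entries is effectively free.
function suggest(typed: string): string[] {
  const needle = typed.toLowerCase();
  return cities
    .filter((city) => city.toLowerCase().includes(needle))
    .slice(0, 10); // cap the dropdown length
}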
We have implemented the same functionality using an AJAX autocomplete control. We wait until the user has typed three characters before fetching the list from the server. We did not write any client-side code; we just assigned a web-service method that returns the list to the AJAX control, and it started working.
In the end user's interest, it is always better to handle this client-side.
The Telerik AutoComplete control allows for both ways.
Of course, under load, client-side autocomplete is likely to make the application crawl.
