Azure Logic Apps: receive encrypted email - azure-logic-apps

I need to receive encrypted email from a shared mailbox. I want to decrypt the mail, check the signature, and then extract the attachments.
It sounds like this is not supported out of the box by Logic Apps (receiving email is fine, but not decryption). Am I right?

Correct, this is not possible out of the box with Azure Logic Apps, but there is a workaround for your requirement:
Use an Azure Function in your Logic App orchestration to decrypt the email.
An Azure Function can also be used to check the signature; this is likewise not possible out of the box (considering all the test cases). Refer to this blog and tweak it to your scenario: https://www.serverless360.com/blog/extract-email-attachment-with-microsoft-flow
P.S.: PGP encryption and decryption is at the top of the UserVoice list.
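To make the Azure Functions half of that workaround concrete, here is a minimal sketch of an HTTP-triggered function (using the Java Functions worker) that the Logic App could call with the raw message content. MailCrypto.decryptAndVerify is a hypothetical helper standing in for the actual S/MIME or PGP handling (for example via a library such as Bouncy Castle); the Logic App would then run its attachment-extraction steps on the decrypted MIME returned in the response.

import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class DecryptMailFunction {

    // HTTP-triggered function the Logic App calls with the encrypted MIME content.
    @FunctionName("DecryptMail")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.POST},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {

        String encryptedMime = request.getBody().orElse("");
        if (encryptedMime.isEmpty()) {
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                          .body("No message content supplied")
                          .build();
        }

        // Hypothetical helper: decrypt the payload, verify the signature, and
        // return the decrypted MIME so the Logic App can extract the attachments.
        String decryptedMime = MailCrypto.decryptAndVerify(encryptedMime);

        return request.createResponseBuilder(HttpStatus.OK)
                      .header("Content-Type", "message/rfc822")
                      .body(decryptedMime)
                      .build();
    }
}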

Related

Logic App: how to read secret info for use within a workflow from app settings or some other secure place?

Currently, I'm trying to access the Graph API from within a (Standard) Logic App to search for SharePoint documents. To do so, I'm trying the following flow (I need delegated permissions; application permissions cannot use the search endpoint):
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/calling-graph-api-from-azure-logic-apps-using-delegated/ba-p/1997666
As one can see in the blog post above, there is a step where the following string gets passed into the body of the first request to get an access token for a delegated user:
grant_type=password&resource=https://graph.microsoft.com&client_id=client_id&username=serviceaccountusername&password=serviceaccountpassword&client_secret=clientsecret
Now the client secret and service account password are two things which I absolutely don't want visible in the Logic App code and/or designer screen. Is there a way to securely read these from, for instance, the app settings (from which I could reference them in a Key Vault)? I really can't find a good way to achieve this, and I think it's a must that these secrets/passwords cannot be read from the designer/code view.
Definitely use a Key Vault and, for all steps involved, make sure to secure the inputs/outputs wherever that secret information may be visible.
For the action that retrieves the secret, only the outputs need to be secured; with your HTTP call, it's likely that you'll only want the inputs to be secured.
Be sure to use a managed identity on your Logic App and then assign that managed identity the Key Vault Secrets User role on the Key Vault itself.
There's plenty of documentation on this topic ...
https://learn.microsoft.com/en-us/azure/logic-apps/create-managed-service-identity?tabs=consumption
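If any part of the flow runs in code rather than in the designer (an Azure Function called from the workflow, for example), the same pattern applies there: authenticate with the managed identity and read the secret from Key Vault at runtime instead of embedding it. A minimal Java sketch, assuming the azure-identity and azure-security-keyvault-secrets client libraries and hypothetical vault/secret names:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

public class KeyVaultSecretReader {

    public static void main(String[] args) {
        // DefaultAzureCredential picks up the managed identity when running in Azure
        // (and falls back to developer credentials locally).
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://my-vault.vault.azure.net")   // hypothetical vault
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // Requires the identity to hold the Key Vault Secrets User role on the vault.
        String clientSecret = client.getSecret("graph-client-secret").getValue();
        System.out.println("Secret length: " + clientSecret.length());
    }
}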

Java: Google Mail blocks multiple access

I need to allow a user of my App to email themselves when an event occurs. I am not sure how to do this.
My first idea is to create a dummy Gmail account and have my App sign in and send from there via Java code. This means hardcoding the password, but since the account is not used for anything other than one-way emailing, it does not seem to be a problem.
However, I understand that Google is pretty proactive about security, and if my App (which is global) tries to log into the same account from several different countries during a 24-hour period, it will block the email.
I have seen the "delegate" functionality, but that would mean that each user needs their own Gmail account, which is not practical.
Is there a way to force gmail to allow the sign-ins to happen from wherever?
Or is there a better approach to this problem?
It's probably not a good idea to have your app mail from a private account, if I understand you correctly. It's best to use an email service like http://expresspigeon.com or http://sendgrid.com and simply send a transactional email from your app's account. In other words, use an ESP (email service provider).
The safest approach would be to ask the user for all the configuration information necessary to access their email server as themselves, then send the email from themselves to themselves. You can use JavaMail to send the message, but you'll need to ask for all the configuration information that any other email application would ask for in order to configure access to their mail server.
There may also be Android-specific ways to do this using the default email application.
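To make the JavaMail route concrete, here is a minimal sketch that sends a message from the user's own account back to the same address over SMTP, assuming the user has supplied their server settings; the class and method names are illustrative only:

import java.util.Properties;

import javax.mail.Authenticator;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SelfNotifier {

    // Sends a notification from the user's own account to the same address,
    // using the SMTP settings the user supplied (host, port, username, password).
    public static void sendToSelf(String host, int port,
                                  String username, String password,
                                  String subject, String body) throws MessagingException {
        Properties props = new Properties();
        props.put("mail.smtp.host", host);
        props.put("mail.smtp.port", String.valueOf(port));
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(username, password);
            }
        });

        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(username));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(username));
        message.setSubject(subject);
        message.setText(body);
        Transport.send(message);
    }
}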

Secure / Authenticated interaction from a WP7 app

I am working on a WP7 application. This WP7 application will interact with some web services that I have created. I do not want other applications interacting with these web services, because I do not want them stealing my data. With that in mind, here is what I'm currently doing:
Connecting to web services via HTTPS
Making my users initially log in to the application
Passing the user's username/password with each web service interaction
At this time, I don't see what is stopping a malicious developer from creating a username/password combo and using that account in their application to interact with my web services. How do I really lock this thing down?
Thanks!
As a start towards a more secure system, you should stop storing the password and sending it over the wire with each request (even if you're using SSL).
If you must pass it with each request, store a salted hash of the password and use that instead.
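For illustration, a minimal sketch of that salted-hash idea (shown in Java rather than WP7 client code, purely to keep the examples in this thread in one language; in practice a slow KDF such as PBKDF2 or bcrypt is preferable to a single SHA-256 pass):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltedHash {

    // Generate a random per-user salt once, at registration time.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hash the password together with the salt; store and transmit this
    // value instead of the raw password.
    public static String hash(String password, byte[] salt) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(salt);
        byte[] hashed = digest.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(hashed);
    }
}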
I'm using a multi-layered approach to this problem. I recommend thinking creatively and using a variety of methods to validate that requests are coming from devices you expect them to come from.
Alternatively, if there is any merit in your scenario, open up your API to third-party developers and make this work toward your objectives.
If you do decide to store a key in your app, don't store it as raw text; instead declare a byte array of the UTF-8 values, which won't be as easy to read.
You can then handshake with your service using a salted hash of the key the first time the app is run; the service then hands out another key for the device to actually use day-to-day.
The phone should have a reasonably accurate clock, so you can recalculate the key each day or hour. You can also revoke the key at the server end for just that device.
This API will be useful in ensuring you can blacklist a device permanently:
var deviceId = (byte[])DeviceExtendedProperties.GetValue("DeviceUniqueId");
I've not looked into symmetric encryption, but you might even be able to use the above unique ID as a private key.
I think the key to success is that first handshake, and ensuring that it is not snooped. If it's a really important system, then don't use any of these ideas, since rolling your own encryption is always flimsy against anyone with serious intent - use well-known methods and read up.
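One hedged way to realise the "recalculate the key each day" idea is to derive the per-device key from a server-held master secret, the device's unique ID, and the current date, so the server can recompute or revoke it at will. A server-side sketch (Java, names illustrative):

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.time.LocalDate;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DailyDeviceKey {

    // Derive a per-device key that changes every day: HMAC(masterSecret, deviceId + date).
    // The server can recompute it at any time and can blacklist a device by refusing its ID.
    public static String deriveKey(byte[] masterSecret, String deviceUniqueId, LocalDate day)
            throws GeneralSecurityException {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
        byte[] material = (deviceUniqueId + ":" + day).getBytes(StandardCharsets.UTF_8);
        return Base64.getEncoder().encodeToString(hmac.doFinal(material));
    }
}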
Luke
You could introduce an "Authorized Application ID" feature where the application sends its name or identifier within each HTTP request body. Then on the server side you can verify the application's identity (e.g. store the authorized app IDs in a table). The application ID would be encrypted within the HTTP(S) body.
This would also give you the option of pushing out new application IDs in updated versions of the WP7 application if you wanted to retire an older application ID. You'd also be able to support new applications on different devices or platforms in the future.
You may want to look at this
http://channel9.msdn.com/Blogs/Jafa/Windows-Phone-7-Trade-Me-Developer-Starter-Kit

Google apps applications talk to each other

I am looking for a way for two Google Apps applications to talk to each other and share data between each other. I have the following scenario:
Application A logs user in using Google Apps login
Application B logs user in using Google Apps login
then these applications need to communicate directly to each other (server-to-server) using some APIs
The question is: how do these applications verify that the other one is logged in to Google as the same user? I would imagine something like:
- Application A gets some 'token' from Google and sends it to Application B
- Application B verifies that this token is valid for the same Google account as it is logged in with
Is there a way to accomplish that via Google Federated Login? I am talking about the Hybrid protocol here.
Here's a simple way to do it:
You keep everything keyed to the user's Google userid on both applications.
You share the data using HTTP requests that contain the userid.
To prevent leaking the user IDs (which is forbidden by the accounts API) and to verify that the messages really come from the other application, you encrypt the requests with a symmetric cipher such as AES or Blowfish or whatever you like. Both applications have the same key embedded.
You could use public-key cryptography, but with just two applications it's not worth it in my opinion. If you start having more apps, public key makes sense.
The fine print: encryption does not guarantee integrity or origin without additional measures. You need to take precautions against replay, for example by incorporating a timestamp or sequence number. You need to take precautions against tampering, e.g. with a checksum or, better, a MAC. Make sure to use CBC with good initialization vectors. Keep the key secret.
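A hedged sketch of what one such sealed request could look like, following the fine print above: AES-CBC with a fresh random IV, a timestamp against replay, and an HMAC for integrity (key management and the receiving side's decrypt/verify step are omitted):

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class RequestCrypto {

    // Encrypts a "timestamp|userid|payload" message with AES/CBC and a fresh random IV,
    // then appends an HMAC over IV + ciphertext so the receiver can check integrity.
    public static String seal(byte[] aesKey, byte[] macKey, String userId, String payload)
            throws GeneralSecurityException {
        String message = System.currentTimeMillis() + "|" + userId + "|" + payload;

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        hmac.update(iv);
        byte[] tag = hmac.doFinal(ciphertext);

        Base64.Encoder b64 = Base64.getEncoder();
        // Wire format: iv.ciphertext.tag, each base64-encoded; the other app reverses the steps.
        return b64.encodeToString(iv) + "." + b64.encodeToString(ciphertext) + "." + b64.encodeToString(tag);
    }
}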
user.user_id() is always the same across all the apps for the same user. So you can simply compare values returned by user.user_id(). Is this what you are looking for?
Note: Every user has the same user ID for all App Engine applications. If your app uses the user ID in public data, such as by including it in a URL parameter, you should use a hash algorithm with a "salt" value added to obscure the ID. Exposing raw IDs could allow someone to associate a user's activity in one app with that in another, or get the user's email address by coercing the user to sign in to another app.
From the docs.
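If you do need to expose the ID in public data such as a URL, here is a minimal sketch of the salted-hash obscuring the note recommends (a keyed hash, with an app-specific secret playing the role of the salt); for the server-to-server comparison itself you would still compare the raw user_id values inside the encrypted channel described above:

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ObscuredUserId {

    // Turns the raw App Engine user ID into an opaque value that is safe to expose in URLs.
    // A secret, app-specific salt prevents outsiders from recovering or correlating the raw ID.
    public static String obscure(String rawUserId, byte[] appSpecificSalt)
            throws GeneralSecurityException {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(appSpecificSalt, "HmacSHA256"));
        byte[] digest = hmac.doFinal(rawUserId.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}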

Securely Storing OpenID identifiers and OAuth tokens

I am creating a web app that will use OpenID logins and OAuth tokens with Youtube. I am currently storing the OpenID identity and OAuth token/token secret in plain text in the database.
Is it inappropriate to store these values as plain text? I could use a one-way hash for the OpenID identifier, but I don't know if that is necessary. For the OAuth tokens, I would need two-way (reversible) encryption, as my app relies on retrieving the session token for some uses.
Is it necessary to encrypt the OpenID identity? Could someone use it to gain access to a user's account?
First, there is a registered application that has consumer_key and consumer_secret.
When users authenticate and "allow" your registered application, you get back an access_token that is considered the user's "password" and allows JUST YOUR application to act on the user's behalf.
So, getting just the user's access_token from your database won't help much if they don't also have the consumer_key and consumer_secret for complete access.
The service provider effectively checks all four parameters (consumer key/secret and token/token secret) on every request, since the request signature is computed from both secrets. It would be smart to encrypt these values before storage and decrypt them before use.
This is only needed when you have to update or make changes to the user's resources on their behalf. To keep a user logged in on your site, use sessions.
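To see why the token alone is not enough, here is a hedged, simplified sketch of OAuth 1.0a request signing: the consumer key and token travel with the request, but the HMAC-SHA1 signing key is built from both secrets (constructing the signature base string from the normalized request parameters is omitted):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class OAuth1Signer {

    // The signing key combines the consumer secret and the token secret, which is why a
    // leaked access_token alone is not enough to forge a valid request signature.
    public static String sign(String signatureBaseString,
                              String consumerSecret, String tokenSecret)
            throws GeneralSecurityException {
        String signingKey = percentEncode(consumerSecret) + "&" + percentEncode(tokenSecret);
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(signingKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] signature = hmac.doFinal(signatureBaseString.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature);
    }

    // RFC 3986 percent-encoding; URLEncoder needs a few fix-ups to match it.
    private static String percentEncode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8")
                    .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e);
        }
    }
}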
The OAuth token and secret should both obviously be kept safe in your database, but you can't store them using a one-way hash the way you would a password. The reason is that you need the original token and secret to be able to sign requests.
This would also be the case if you are running an OAuth server; you still need the original token/secret to verify the request.
If you want, you could still encrypt them using a two-way (symmetric) encryption algorithm such as AES to offer security in case your database or database backups get compromised.
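A minimal sketch of that two-way approach, using AES in an authenticated mode (GCM) with a random nonce per token; the encryption key must of course live somewhere other than the database it protects:

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class TokenVault {

    // Encrypts an OAuth token or secret for storage at rest; a fresh nonce is generated per value.
    public static String encrypt(byte[] key, String token) throws GeneralSecurityException {
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal(token.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(nonce) + "." + Base64.getEncoder().encodeToString(ciphertext);
    }

    // Decrypts a stored "nonce.ciphertext" value back to the original token for signing requests.
    public static String decrypt(byte[] key, String stored) throws GeneralSecurityException {
        String[] parts = stored.split("\\.");
        byte[] nonce = Base64.getDecoder().decode(parts[0]);
        byte[] ciphertext = Base64.getDecoder().decode(parts[1]);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, nonce));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}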
There are two schools of thought here.
The first argument is that you should treat OAuth tokens like passwords. If anyone were to access your database, obtain all the OpenID/OAuth pairs, and run a man-in-the-middle attack, they could impersonate any user on your site.
The second argument is this: by the time someone has access to your database and sufficient access to your network to run a man-in-the-middle attack, you're hosed anyway.
I'd personally err on the side of caution and just encrypt them; it's a standard practice for passwords, so you might as well give yourself just that little extra peace of mind.
Meanwhile, Google has this advice:
"Tokens should be treated as securely as any other sensitive information stored on the server."
source: http://code.google.com/apis/accounts/docs/OAuth.html
And some random guy on the web has specific implementation advice:
- If they're on a regular disk file, protect them using filesystem permissions, make sure that they're encrypted, and hide the password well.
- If they're in a database, encrypt the fields, store the key well, and protect access to the database itself carefully.
- If they're in LDAP, do the same.
archived post (original post URL, now a dead link)
The OpenID URL shouldn't be encrypted, because this is literally your "open" ID; everyone is allowed to know the value. Besides, the URL needs to be an index in the database, and it's always problematic to encrypt an indexed column.
The OAuth token/secret should be kept secret, and encryption may improve security if you have to store the token long-term. In our OAuth consumer application, the token/secret is only stored in the session for a short while, and we chose not to encrypt them. I think that's secure enough; if someone can peek into our session storage, they probably have our encryption key as well.
Yes, these should be symmetrically encrypted (say, AES-256 in CBC mode) at rest in a database. A simple way to encrypt these tokens is using SecureDB's Encryption as a Service RESTful APIs.
Disclosure: I work at SecureDB.
